ICISCA 2015
The 2nd International Conference on
Information, System and Convergence
Applications
24-26 June 2015
Everly Hotel Putrajaya, Malaysia
http://www.icisca-conference.org
Organizers
Society of Convergence and Integrated Research
Korean Convergence Society
Co-organizer
Inha University IIER (Korea)
XJTLU CeSGIC (China)
Malaysian Invention and Design Society, MINDS (Malaysia)
Chiang Mai University (Thailand)
India IIR (India)
Int. Consortium of Optimization and Modelling in Science and Industry (UK)
IEEE CIS Task Force on Business Intelligence and Knowledge Management (UK)
TNB Research (Malaysia)
Multimedia University (Malaysia)
UTAR University (Malaysia)
APU University (Malaysia)
Infrastructure University Kuala Lumpur (Malaysia)
UCSI University (Malaysia)
Sponsor
Everly Hotel Putrajaya (Malaysia)
International Conference on Information, System and Convergence Applications
Welcome Address
On behalf of the organizing committee members, it is a great pleasure to welcome you
to the international conference ICISCA 2015, held in Putrajaya, Malaysia on June
24-27, 2015.
ICISCA 2015 is an important international conference in the field of convergence
technology, organized jointly by the Society of Convergence and Integrated Research
and the Korea Convergence Society, together with 13 co-organizing universities and
institutions.
I hope this conference will serve as an invaluable venue where all members of the
world's convergence research communities come together to share their research
outcomes and to build networks with international experts. Additionally, because
ICISCA 2015 brings together many researchers from around the world, it is also our
hope that it will create a synergetic effect that fosters future collaborations.
I sincerely thank all of the participants, especially our distinguished invited speakers,
Emeritus Professor Tan Sri Augustine SH Ong, Professor Kang Li and Professor Mohd
Zaid Bin Abdullah, as well as the committee members of this conference.
Finally, please enjoy your time in historical and friendly Kuala Lumpur, with a
chance to taste the delicacies of Malaysian culture and food during your stay.
I sincerely wish you a productive and enjoyable meeting!
Sanghun Lee
Kwangwoon University, Korea
General Chair, ICISCA2015
Conference Committees
General Chair:
Prof. Sanghun LEE, Kwangwoon University, Korea
Vice-General Co-Chairs:
Sangmin LEE, Inha University, Korea
Nipon THEERA-UMPON, Chiang Mai University, Thailand
Advisory Committee Chairs:
Xin-She YANG, Middlesex University, UK
Gunhee HAN, Baeksok University, Korea
Keun Ho RYU, Chungbuk National University, Korea
Kang Li, Queen’s University Belfast, UK
Publicity Chairs:
Tanuja SRIVASTAVA, Indian Institute of Technology, India
Kaushal K. SRIVASTAVA, CESER PUBLICATIONS, India
Tenghwang TAN, UCSI University, Malaysia
Steering Committee Chair:
Sanghyuk LEE, XJTLU, China
Program Committee Chairs:
Vui Kien LIAU, UCSI University
Gyusoo CHAE, Baekseok University, Korea
Local Organizing Committee:
Yew Kee LIM, IET Manufacturing TPN, IET UK
Nai Shyan LAI, APU University
Ka Fei THANG, APU University
Lee Choo KUAN, Infrastructure University Kuala Lumpur
Mastaneh MOKAYEF, UCSI University
Chu Liang LEE, MMU University
Nadia Mei Lin TAN, UNITEN University
Registration Chairs:
A. CLEMENTKING, King Khalid University, Saudi Arabia
Seungsoo SHIN, Tongmyung University, Korea
Jia Yew PANG, Everly Hotel Group, Malaysia & APU University, Malaysia
Program Committee Members
• Kapseong RO, Western Michigan University, USA
• Prof. Usha, Assam Don Bosco University, India
• Bing YANG, Central South University, China
• Wei-li CHENG, Taiyuan University of Technology, China
• Kiseok CHOI, NTIS Division, KISTI, Korea
• Lvxiang DENG, Central South University, China
• Yin-feng DOU, Heilongjiang University, China
• Yunbin HE, Central South University, China
• Kyung-Won HWANG, Dong-A University, Korea
• Hai-Ning LIANG, The University of Western Ontario, Canada
• Zhong-ping QUE, Taiyuan University of Technology, China
• Cheng-jian RAN, Heilongjiang University, China
• Fa-guang WANG, China University of Mining and Technology, China
• Te XU, Northeastern University, China
• WoonSeung YEO, KAIST, Korea
• Hyeon-min SHIM, Inha University, Korea
• Ki-hwan HONG, Inha University, Korea
• P. RADHAKRISHNAN, King Khalid University, Kingdom of Saudi Arabia
• C. JOTHIVENKATESWARAN, Presidency College, India
• J. Satheesh KUMAR, Bharathiar University, India
• Thirumurugan SHANMUGAM, College of Applied Sciences, Oman
• V. JEYABALARAJA, Velammal Engineering College, India
• Yan Sun, XJTLU, China
• Scott UK-Jin LEE, Hanyang University, Korea
• Mohamed A. DARWISH, Eindhoven University of Technology, Netherlands
• EngGee LIM, XJTLU, China
• Mohamed NAYEL, Assiut University, Egypt
• Abhishek SHUKLA, R.D. Engineering College Technical Campus, India
• Binod KUMAR, Jayawant Technical Campus, University of Pune, India
• Vinesh THIRUCHELVAM, APU University, Malaysia
• Mahmood AL-IMAM, UCSI University, Malaysia
• Mohd Hairi HALMI, Multimedia University, Malaysia
• Mohamad Kamarol MOHD JAMIL, USM, Malaysia
• Hui Mun LOOE, TNB Research, Malaysia
• Ammar Ali AL TALIB, UCSI University, Malaysia
Conference Statistics

Country        Oral   Poster   Total
China            2       0        2
India           13       0       13
Korea           11      48       59
Malaysia        17       0       17
Saudi Arabia     1       0        1
Thailand         2       0        2
UK               1       0        1
Total           47      48       95
Keynote Speaker: Advances in Oil Palm
Research
Academician Tan Sri Prof Augustine Soon Hock Ong has been Chairman of the
International Society for Fat Research (ISF) since 1997 and is President of the
Malaysian Oil Scientists' and Technologists' Association, a Senior Fellow of the
Academy of Sciences Malaysia, a Fellow of the Royal Society of Chemistry London
and a Fellow of the Third World Academy of Sciences. Prof Ong was Director-General
of the Palm Oil Research Institute of Malaysia (PORIM) from 1987 to 1989 and
Director of Science and Technology at the Malaysian Palm Oil Promotion Council
(MPOPC) from 1990 to 1996.
He has acquired an extensive network in the oils and fats industry as well as in the academic world,
both locally and overseas. He was a Fulbright-Hays Fellow at the Massachusetts Institute of
Technology (MIT) from 1966 to 1967 and spent a sabbatical year at the University of Oxford as
Visiting Professor at the Dyson Perrins Laboratory from 1976 to 1977. He has been active in research
and development for more than 45 years, since 1959, and this experience includes the chemistry and
technology of palm oil, in which he has been involved for more than 25 years, since 1974. He has 14
patents in the technology of palm oil to his credit and has published more than 380 articles. He was
the founding editor-in-chief of Elaeis: International Journal of Oil Palm Research and Development
and is still a member of its Editorial Board.
He played a significant role in the programme to counter the Anti-Palm Oil Campaign from 1987 to
1989, which came to a favourable conclusion in 1989. He has been invited to serve as a member of
Research Advisory Panels on Cocoa, Forestry, Rubber and Petroleum, as well as a member of the
International Advisory Council, Universiti Tunku Abdul Rahman. He has been Founder President of
the Malaysian Invention and Design Society (MINDS) since 1986. He is also President of the
Confederation of Scientific and Technological Associations in Malaysia (COSTAM) and of the
Malaysian Oil Scientists' and Technologists' Association (MOSTA). He serves on the Boards of several
corporate organizations, including the University of Malaya, the Malaysian-American Commission on
Educational Exchange (MACEE) and Country Heights Holdings Berhad.
Keynote Speaker: Electromagnetic sensing
and industrial vision
MOHD ZAID ABDULLAH graduated from Universiti Sains Malaysia (USM)
with a B.App. Sc. degree in Electronics in 1986 before joining Hitachi
Semiconductor (Malaysia) as a Test Engineer. In 1989 he commenced
an M.Sc. in Instrument Design and Application at University of
Manchester Institute of Science and Technology (UMIST). He remained
in Manchester, carrying out research in Electrical Impedance
Tomography at the same university, and received his Ph.D. degree in
1993. He returned to Malaysia in the same year to start a career as a
lecturer with USM. His research interests include electromagnetic
sensing, digital signal and image processing, and industrial vision. He
has authored or co-authored more than 80 research articles in international journals and conference
proceedings.
While at USM, he has made significant contributions to 19 research projects as a principal
investigator or co-investigator. To date, the total value of the funds for which he is responsible is
more than RM 4 million: 72% continues his niche area of microwave imaging, 18% is in advanced
sensing and instrumentation, and the remaining 10% is in applied digital signal and image
processing. His research attracts support from government as well as private agencies; for instance,
his funding comes from government agencies such as the USM Research University scheme and the
Ministry of Science, Technology and Innovation, and from industry, i.e. Agilent Technologies,
Motorola, TT-Vision and TORAY. He is also very active in graduate teaching and one-to-one PhD
mentoring: to date he has supervised 9 doctoral students through completion, in addition to 24 MSc
dissertations. He is a founding editorial board member of the ASEAN Engineering Journal, and a
reviewer for numerous scholarly journals such as the Transactions of the Institute of Measurement
and Control, IEEE Transactions on Instrumentation and Measurement, Measurement, Medical &
Biological Engineering & Computing, and the Journal of Food Engineering.
He is also a recipient of many prestigious international fellowship awards, such as those of the
Association of Commonwealth Universities (1999), the Japan Society for the Promotion of Science
(2000), the Royal Society (1994) and the Engineering and Physical Sciences Research Council (2006).
He is also a Visiting Professor at Universiti Malaysia Pahang (UMP) and Universiti Teknikal Malaysia
Melaka (UTeM), and an Invited Scholar of the National University of Ireland, the University of
Manchester and Chulalongkorn University. Additionally, he has received numerous national and
international honours, such as the prestigious Senior Moulton Medal for the best article published
by the Institution of Chemical Engineers (2001) and IEEE best paper awards (2011, 2012), and he was
a Keynote Speaker at the International Conference on Emerging Trends in Science and Engineering
Technology (ICONSET 2012) and the IACSIT International Conference on System Engineering and
Modelling (2012). At present he is a Professor and the Dean of USM's School of Electrical and
Electronic Engineering. Professor Mohd Zaid Abdullah is a Chartered Engineer and a Fellow of the
Institution of Engineering and Technology (IET), UK.
Keynote Speaker: Intelligent systems and
control for decarbonizing the whole energy
system from head to tail
Prof Kang Li is a Chair Professor of Intelligent Systems and Control at the
School of Electronics, Electrical Engineering and Computer Science, Queen’s
University Belfast, U.K., and Chairs the School Internationalization
Committee.
Prof Li’s research interests include nonlinear system modeling, identification,
and control, data analytics, bio-inspired computational intelligence, and
fault-diagnosis and detection, with applications to power systems, renewable
energies, smart grid, electric vehicles, and polymer processing. He is
particularly interested in the development of control technologies for decarbonising the whole
energy system from head to tail, considering generation, transmission and distribution, and user
demands. He is currently leading a team to develop and commercialize a new generation of
low-cost, minimally invasive intelligent energy and condition monitoring systems, as well as an
intelligent control and optimization platform for energy saving, primarily for SMEs.
Prof Li is the author or co-author of more than 300 articles, with several award-winning
publications. He has been editor or co-editor of 14 conference proceedings (Springer) and guest
editor of 15 special issues of international journals. He has led and participated in a number of
research projects, funded by research councils, the EU and industry, mostly on sustainable energy
and intelligent manufacturing, totalling over 5 million pounds in the past 10 years.
Prof Li serves on the editorial boards of Neurocomputing, the Transactions of the Institute of
Measurement and Control, Cognitive Computation, and the International Journal of Modelling,
Identification and Control. He chairs the IEEE United Kingdom and Ireland Section Control and
Communication Ireland chapter, and was Secretary of the IEEE UK & Ireland Section. He also
serves on the Executive Committee of the UK Automatic Control Council, the IFAC Technical
Committee on Computational Intelligence in Control, the IEEE Computational Intelligence Society
Neural Network Technical Committee, and the Adaptive Dynamic Programming & Reinforcement
Learning Technical Committee.
Prof Li is a visiting professor at Harbin Institute of Technology, Shanghai University, and the Ningbo
Institute of Technology of Zhejiang University. He has also held visiting fellowships or visiting
professorships at the National University of Singapore, the University of Iowa, the New Jersey
Institute of Technology, Tsinghua University, and the Technical University of Bari, Taranto.
Prof Li was an organizer or co-organizer of the LSMS and ICSEE conference series, has chaired or
co-chaired various committees for over 10 international conferences, and has been an IPC member
of 80 international conferences. Prof Li is a senior member of the IEEE and a Fellow of the UK
Higher Education Academy.
Program
Day 1 (June 24, 2015)

18:00-19:00  Early Registration (Location: Counter Desk)
19:00-20:00  ICISCA Committee Meeting
Day 2 (June 25, 2015)

08:00-09:00  Registration (Location: Counter Desk)

09:00-09:15  Opening Address (Mesmera Ballroom 3)
             Gyusoo Chae, Committee Chair of ICISCA 2015, Baekseok University, Korea

09:15-10:15  Keynote Speech 1: Advances in Oil Palm Research
             Emeritus Prof. Tan Sri Augustine Ong
             Malaysian Oil Scientists and Technologists' Association, Malaysia
10:15-10:20  Break

10:20-11:20  Session 1: Artificial Intelligence (Venue: Irama 5)
             Chair: Scott Uk-Jin Lee, Hanyang University, Korea
             S1-1. Morphological image enhancement and analysis using directionality histogram
                   Radhakrishnan Palanikumar (King Khalid University, Saudi Arabia)
             S1-2. A Robust Sky Segmentation Method for Daytime Images
                   H. L. Wong, C. S. Woo (Multimedia University, Malaysia)
             S1-3. Prediction of Sediments Using Back Propagation Neural Network (BPNN) Model
                   A. Clementking, C. JothiVenkateswaran (King Khalid University, Saudi Arabia)
             S1-4. An Improved Least Mean Square Algorithm for Adaptive Filter in Active Noise Control Application
                   R. Mustafa, A. M. Muad (UCSI University, Malaysia)
             S1-5. Hard Exudates and Cotton Wool Spots Localization in Digital Fundus Images Using Multi-prototype Classifier
                   Methee T, Kittichai W, Sansanee A, Direk P, and Nipon T (Chiang Mai University, Thailand)

             Session 2: Biomedical Engineering and Application (Venue: Irama 6)
             Chair: Joseph Kim, Xi'an Jiaotong-Liverpool University, China
             S2-1. An Improved Hybrid Algorithm for Accurate Determination of Parameters of Lung Nodules with Dirichlet Boundaries in CT Images
                   G. Niranjana, M. Ponnavaikko (SRM University, India)
             S2-2. Determination of Similarity Measure on MRI Brain Clustered Image
                   S. Rani, D. Gladis, R. Palanikumar (Presidency College, India)
             S2-3. Driving Sequence Information from AAIndex for Protein Hot Spots Prediction
                   Peipei Li, Keun Ho Ryu (Chungbuk National University, Korea)
             S2-4. Biomedical Implants: Failure & Prevention Techniques – A Review
                   R. Praveen, V. JaiGanesh, S. Prabakar (Sathyabama University, India)
11:20-11:25  Break

11:25-12:25  Session 3: Smart Sensor and Application to Integrated System (Venue: Irama 5)
             Chair: Gyoosoo Chae, Baekseok University, Korea
             S3-1. Anti Hijack System with Eye Pressure Control System
                   M. Barathvikraman, H. Divya, R. Praveen (Thiru Seven Hills Polytechnic College, India)
             S3-2. Design and Development of Electrical Resistance Tomography to Detect Cracks in the Pipelines
                   O. F. Alashkar, V. Chitturi (Asia Pacific University, Malaysia)
             S3-3. Hot-Point Probe Measurements for Aluminium Doped ZnO Films
                   W. C. Au, K. Y. Chan, Y. K. Sin, Z. N. Ng, C. L. Lee (Multimedia University, Malaysia)
             S3-4. Relative Humidity Sensor Employing Optical Fibers Coated with ZnO Nanostructures
                   Y. I. Go, H. Zuraidah (INTI IU, Malaysia)
             S3-5. GPR Principle for Soil Moisture Measurement
                   C. W. Yap, R. Mardeni and N. N. Ahmad (Asia Pacific University, Malaysia)

             Session 4: Healthcare Technology and Application (Venue: Irama 6)
             Chair: Tenghwang Tan, UCSI University, Malaysia
             S4-1. Automatic White Blood Cell Detection in Low Resolution Bright Field Microscopic Images
                   A. Usanee, T. U. Nipon, T. Chatchai, and A. Sansanee (Chiang Mai University, Thailand)
             S4-2. Role of Classification Algorithms in Medical Domain: A Survey
                   E. Venkatesan, T. Velmurugan (D. G. Vaishnav College, India)
             S4-3. A Study on Feature Vectors of Heart Rate Variability and Image of Carotid for Cardiovascular Disease Diagnosis
                   H. Kim, S. H. Park, K. S. Ryu, M. Piao, K. H. Ryu (Chungbuk National University, Korea)
             S4-4. Image Segmentation in Medical Data: A Survey
                   S. Mahalakshmi, T. Velmurugan (D. G. Vaishnav College, India)
             S4-5. A Survey on Medical Images Extraction Using Parallel Algorithm in Data Mining
                   A. Naveen, T. Velmurugan (D. G. Vaishnav College, India)

12:25-13:30  Lunch Break
13:30-14:30  Keynote Speech 2: Intelligent systems and control for decarbonizing the whole energy system from head to tail (Mesmera Ballroom 3)
             Prof. Kang Li, Queen's University Belfast, UK

14:30-14:35  Break
14:35-15:50  Session 5: Smart Grid, Power and Energy System (Venue: Irama 5)
             Chair: Vinesh Thiruchelvam, Asia Pacific University, Malaysia
             S5-1. A Study of Arc Fault Temperature in Low Voltage Switchboard
                   L. C. Kuan, J. Y. Pang (Infrastructure University Kuala Lumpur, Malaysia)
             S5-2. Power Factor Improvement with SVC Based on the PI Controller under Load Fault
                   S. G. Farkoush, S. B. Rhee (Yeungnam University, Korea)
             S5-3. Unit Commitment Considering Vehicle to Grid and Wind Generations
                   Zhile Yang, Kang Li (Queen's University Belfast, UK)
             S5-4. Theoretical Analysis and Software Modeling of Composite Energy Storage Based on Battery and Supercapacitor in Microgrid Photovoltaic Power System
                   W. Jing, C. H. Lai, Wallace S. H. Wong, M. L. Dennis Wong (Swinburne University of Technology Sarawak Campus, Malaysia)
             S5-5. On Energy-Efficient Time Synchronization Based on Source Clock Frequency Recovery in Wireless Sensor Networks
                   K. S. Kim, S. Lee, and E. G. Lim (Xi'an Jiaotong-Liverpool University, China)
             S5-6. Improved Multi-Axes Solar Tracking System and Analysis of Power Generated and Power Consumed by the System
                   A. S. Balakrishnan, S. K. Selvaperumal, R. Lakshmanan, C. S. Tan (Asia Pacific University, Malaysia)

             Session 6: Process Engineering and Technology (Venue: Irama 6)
             Chair: Ling Wang, Northeast Dianli University, China
             S6-1. Effect of Injection Time on the Performance and Emissions of Lemon Grass Oil Biodiesel Operated Diesel Engine
                   G. Vijayan, S. Prabhakar, S. Prakash, M. Saravana Kumar, R. Praveen (AVIT, India)
             S6-2. The Development of Automated Fertigation System
                   C. W. Yap, T. Vinesh, G. Rajaram (Asia Pacific University, Malaysia)
             S6-3. Experimental Investigation on Ethanol Fuel in VCR-SI Engine
                   S. Prabhakar, K. Annamalai, R. Praveen, M. Saravana Kumar, S. Prakash (AVIT, India)
             S6-4. Active Cell Equalizer by a Forward Converter with Active Clamp
                   Thuc Minh Bui, Sungwoo Bae (Yeungnam University, Korea)
             S6-5. Optimization of Process Parameters of Dissimilar Alloys AA5083 and 5456 by Friction Stir Welding
                   V. Jaiganesh (S. A. Engineering College, India)
             S6-6. Use of Vegetable Oils as Alternate Fuels in Diesel Engines – A Review
                   B. Gokul, S. Prabhakar, S. Prakash, M. Saravana Kumar, R. Praveen (AVIT, India)

15:50-16:00  Break
16:00-17:40  Session 7: Embedded System and Information Technology (Venue: Irama 5)
             Chair: Yew Kee Lim, IET Manufacturing TPN, UK
             S7-1. A Telepresence and Autonomous Tour Guide Robot
                   Alpha Daye Diallo, Suresh Gobee, Vickneswari Durairajah (Asia Pacific University, Malaysia)
             S7-2. An Effective Approach for Parallel Processing with Multiple Microcontrollers
                   Abdul Rahim Mohamed Ariffin, Scott Uk-Jin Lee (Hanyang University, Korea)
             S7-3. Hand Gesture Recognition Using Ternary Content Addressable Memory Based on Pattern Matching Technique
                   T. Nagakarthik and Jun Rim Choi (Kyungpook National University, Korea)
             S7-4. Effects of Mobile Cloud Computing on Health Care Industry
                   M. Ahmadi, M. Baradaran Rohani, A. Hakemi, M. Vali, K. Madadipouya (Asia Pacific University, Malaysia)
             S7-5. An Associative Index Method for Pyramid Hierarchical Architecture of Social Graph
                   Ling Wang, Wei Ding, Tie Hua Zhou (Northeast Dianli University, China)
             S7-6. A Reliable User Authentication and Data Protection Model in Cloud Computing Environments
                   M. Ahmadi, M. Vali, F. Moghaddam, A. Hakemi, K. Madadipouya (Asia Pacific University, Malaysia)
             S7-7. Recommendations of IT Management in a Call Centre
                   I. B. Muhammed, K. Shanmugam and N. K. Appadurai (Asia Pacific University, Malaysia)
             S7-8. DARVENGER (Digitally Advanced Rescue Vehicle with Free Energy Generator)
                   S. Sivapriyan, R. D. Jaishankar, Tamilamuthan, B. Vigenesh, M. Kaviya and K. Rajalakshmi (Sree Sastha Institute of Engineering and Technology, India)

             Session 8: Communication and Computational Modelling (Venue: Irama 6)
             Chair: Sunghyuck Hong, Baekseok University, Korea
             S8-1. An Investigation Study of a Printed Array Antenna for 900 MHz Bands
                   Gyoo-Soo Chae (Baekseok University, Korea)
             S8-2. Economic Operation Scheme of a Green Base Station
                   Sungwoo Bae (Yeungnam University, Korea)
             S8-3. Design and Simulation of Microstrip Patch Antenna for Ultra Wide Band (UWB) Applications
                   S. K. Wong, T. H. Tan, M. Mokayef (UCSI University, Malaysia)
             S8-4. Comparison of Estimation Methods for State-of-Charge in Battery
                   Seonwoo Jeon, Sungwoo Bae (Yeungnam University, Korea)
             S8-5. Channel Estimation for MIMO-OFDM Systems
                   S. Manzoor, S. Govinda and A. Salem (UCSI University, Malaysia)
             S8-6. Smart Load Management of Electric Vehicles in Distribution and Residential Networks with Synchronous Reference Frame Controller
                   S. G. Farkoush, S. B. Rhee (Yeungnam University, Korea)
             S8-7. Optimising Maximum Power Demand Using Smart Sequential Algorithm
                   J. Y. Pang, L. C. Kuan, V. K. Liau, K. N. Chitewe, Dennis Tan (Asia Pacific University, Malaysia)
             S8-8. High Speed CNFET Digital Design Using Simple CNFET Circuit Structure
                   Kyung Ki Kim (Daegu University, Korea)
Day 3 (June 26, 2015)

08:30-09:00  Registration (Location: Counter Desk)

09:00-10:00  Keynote Speech 3: Electromagnetic sensing and industrial vision (Venue: Irama 5)
             Professor Dr Mohd Zaid Bin Abdullah, Universiti Sains Malaysia (USM)

10:00-10:10  Break

10:10-11:10  Poster Session (Venue: Irama 6, 8)
P-01. Genetic Algorithm based Pre-Training for Deep Neural Network
Hongsub An, Hyeon-min Shim, Sangmin Lee
Inha University, Korea
P-02. Improved Object Segmentation Using Modified GrowCut
GaOn Kim, GangSeong Lee, YoungSoo Park, YeongPyo Hong, SangHun Lee
Kwangwoon University, Korea
P-03. Depth Map Generation using HSV Color Transformation
JiHoon Kim, GangSeong Lee, YoungSoo Park, YeongPyo Hong, SangHun Lee
Kwangwoon University, Seoul, Republic of Korea
P-04. Find Sentiment And Target Word Pair Model
Wonhui Yu, Heuiseok Lim
Dept. of Computer Science Education, Seoul, Korea
P-05. Novel Operation Scheme of Static Transfer Switches for Peak Shedding
Chang-Hwan Kim, Sang-Bong Rhee
Yeungnam University, Korea
P-06. Detection of Incorrect Sitting Posture by IMU Built-in Neckband
Hyeon-min Shim, SangYong Ma, and Sangmin Lee
Inha University, Korea
P-07. Modeling of a Learner Profiling System based on Learner Characteristics
Hyesung Ji, HeuiSeok Lim
Korea University, Korea
P-08. Context Reasoning Approach for Context-aware Middleware
Yoosoo Oh
Daegu University, Korea
P-09. Role of NT-proBNP for Prognosis in Non ST-segment Elevation Myocardial Infarction
Patients from the KorMI Database
H S Shon, W Jang, S H Park, J W Bae, K A Kim, K H Ryu
Database and Bioinformatics Laboratory, Chungbuk National University, South Korea
P-10. A 65nm CMOS Current Mode Amplitude Modulator for Quad-band GSM/EDGE Polar
Transmitter
Hyunwon Moon
Daegu University, Korea
P-11. Applying Harmony Search Optimization Method to Economic Load Dispatch Problems
in Power Grids
Si-Na Park, Sang-Bong Rhee
Yeungnam University, Korea
P-12. Ventilation System Energy Consumption Simulator for a Metropolitan Subway Station
Sungwoo Bae, Jeongtae Kim
Yeungnam University, Korea
P-13. The effectiveness of international development cooperation (IDC) educational program
for nursing students
Sun Young Park, Heejeong Kim
Baekseok University, Korea
P-14. A Study on the Relationship between Nursing Professionalism, Internal Marketing and
Turnover Intention among Hospital Nurses
Eun Ja Yeun, Misoon Jeon
Konkuk University, Korea
P-15. The Level of Depression and Anxiety in Undergraduate Students
Eun Ja Yeun, Misoon Jeon
Konkuk University, Korea
P-16. Analysis of dental hygienists’ financial preparation for old age
Hee-Sun Woo, Seok-Hun Kim
Suwon Women's University, Korea
P-17. The motion graphic effect of the mobile AR user interface
YunSung Cho, SeokHun Kim
Suwon Women’s University, Korea
P-18. New Authentication Methods based on User's Behavior Big Data Analysis on Cloud
Sunghyuck Hong
Baekseok University, Korea
P-19. The Effect of Musical activities program on Parenting stress and Depression - Focused
on Housewives with Preschool Children
Shinhong Min
Baekseok University, Korea
P-20. Relationship between ego resiliency of girl students and smart phone addiction
Soonyoung Yun, Shinhong Min
Baekseok University, Korea
P-21. Analysis on resilience, self-care ability and self-care practices of middle & high school
students
Shinhong Min, Soonyoung Yun
Baekseok University, Korea
P-22. An Algorithm for Zero-One Concave Minimization Problems under a single linear
constraint
Se-Ho Oh
Cheongju University, Korea
P-23. An Analysis of Risk Sharing between the Manufacturer and the Supplier
Chan Jung Park
Cheongju University, Korea
P-24. Meme and Culture Contents in Korea
Kyung Sook Kim
Cheongju University, Korea
P-25. Unique Features of the Internet Technology and Their Impacts on
Industry Structure and Corporate Competitive Strategy
Lark Sang Kim
Cheongju University, Korea
P-26. Analysis of Torso Patterns by Somatotype -Focused on Development of Body Surface
Shell
Mi Hyang Na
Cheongju University, Korea
P-27. Value Relevance of the Fair Value Hierarchy and the Impact of Fair Value Disclosures
in Korea
HyunTaek Oh
Cheongju University, Korea
P-28. Development of a Water-Droplet-Shaped Bra Mold Cup Design
Heh Soon Jung, Mi Hyang Na
Cheongju University, Korea
P-29. An Analysis on the Minimum Efficiency Scale of Local autonomies in Korea
Sung Tai Kim, Young Jun Chun, Jin-Yeong Kim
Cheongju University, Korea
P-30. The Effect of HRD programs on Labor Productivity: The Moderating Role of Learning
Climate
Woo-Jae Choi
Cheongju University, Korea
P-31. CSR and Brand Performance
Jae Mee Yoo
Cheongju University, Korea
P-32. The Effect of Hedging with Property-Liability Insurance on the Probability of Financial
Distress
Young Mok Choi
Cheongju University, Korea
P-33. A Study on Justification for the Use of Chest CT Scan in Physical Examinations
You In-Gyu, Lim Chung Hwan
Hanseo University, Korea
P-34. A Study on Microstructure of Gardnerella Vaginalis
Mi-Soon Park, Zhehu Jin, Byung-Soo Chang
Dept. of Pathology, Korea Clinical Laboratory, Korea
P-35. A study on the DICOM file of Head CT and dose calculation in the human body using
the Geant4 code
Eun Hee Mo, Sang Ho Lee, Cheong-Hwan Lim
Wonkwang University hospital, Korea
P-36. Scientific Analysis of the Gilt-bronze Incense Burner of Baekje Period from the
Neungsalli Temple Site in Buyeo, South Korea
Hyung-tae Kang, Min-jeong Koh
Dept. of Conservation Science, Buyeo National Museum of Korea
P-37. A Study of 3D Pelvic Computed Tomography by Using the Assistance Shoes
Park Chang-Bok, Jung Hong-Ryang
Hanseo University, Korea
P-38. Study on the improvement of the health screening questionnaire of the Korean health
insurance service center
Wan-Young Yoon
Seowon University, Korea
P-39. Effect of the muscular strength exercise and massage on muscle injury marker and
IGF-1
Kim Do-Jin, Kim Jong-Hyuck
Daelim University College, Korea
P-40. A Study on the Low Intensity Aerobic Exercise and Postural Correction Exercise on
Fatigue Substance and Aging Hormone
Beak Soon-Gi, Kim Do-Jin
Jungwon University, Korea
P-41. Effect of Golf Swing Exercise on the Vascular Compliance and Metabolic Syndrome
Risk Factors in Elderly Women
Kim Do-Jin, Kim Sang-Yeob
Daelim University College, Korea
P-42. A Study on Exploration of the Growth Process & Learning Promotion Elements of a
Sports for All Instructor through Informal Learning
Kim Seung-Yong
Hanyang University, Korea
P-43. The Effects of An Aroma Back Massage on Electroencephalogram
Kang So-Hyung
Hanyang University, Korea
P-44. A Study on Supportive Policy for Domestic Winter Sports on the Occasion of 2018
PyeongChang Winter Olympics
Mi-Suk Kim, Ill-Gwang Kim
Korea Institute of Sport Science, South Korea
P-45. Difference in satisfaction with protein supplements, willingness to spread word-of-mouth
and willingness to repurchase supplements of university students majoring in physical education
Ill-Gwang Kim
Seowon University, Korea
P-46. Effect of muscle activity for stair walking and stepper training in young adults
Kyung Mi Kim, JaeHo Yu, JinSeop Kim, JiHeon Hong, DongYeop Lee
Sun Moon University, Korea
P-47. The effect of elastic and non-elastic tape on Flat foot
SungMin Lee, DongYeop Lee, JiHeon Hong, JaeHo Yu, JinSeop Kim
Sun Moon University, Korea
P-48. The Influence of induced fatigue on lower limb muscle activation at landing in adult
women
Hyun-A Lee, Dong Yeop Lee, JinSeop Kim, JiHeon Hong, JaeHo Yu
Sun Moon University, Korea
11:10-11:15  Break
11:15-11:45  Best Paper Awards
11:45-12:00  Closing Ceremony
12:00-13:00  Lunch Break
13:00-14:00  Break
14:00-17:00  Optional: Visit to UNITEN University
Contents

Session 1: Artificial Intelligence
S1-1. Morphological image enhancement and analysis using directionality histogram
      Radhakrishnan Palanikumar ..... 1
S1-2. A Robust Sky Segmentation Method for Daytime Images
      H. L. Wong, C. S. Woo ..... 5
S1-3. Prediction of Sediments Using Back Propagation Neural Network (BPNN) Model
      A. Clementking, C. JothiVenkateswaran ..... 9
S1-4. An Improved Least Mean Square Algorithm for Adaptive Filter in Active Noise Control Application
      R. Mustafa, A. M. Muad ..... 14
S1-5. Hard Exudates and Cotton Wool Spots Localization in Digital Fundus Images Using Multi-prototype Classifier
      Methee T, Kittichai W, Sansanee A, Direk P, and Nipon T U ..... 17

Session 2: Biomedical Engineering and Application
S2-1. An Improved Hybrid Algorithm for Accurate Determination of Parameters of Lung Nodules with Dirichlet Boundaries in CT Images
      G. Niranjana, M. Ponnavaikko ..... 21
S2-2. Determination of Similarity Measure on MRI Brain Clustered Image
      S. Rani, D. Gladis, R. Palanikumar ..... 28
S2-3. Driving Sequence Information from AAIndex for Protein Hot Spots Prediction
      Peipei Li, Keun Ho Ryu ..... 34
S2-4. Biomedical Implants: Failure & Prevention Techniques – A Review
      R. Praveen, V. JaiGanesh, S. Prabakar ..... 38

Session 3: Smart Sensor and Application to Integrated System
S3-1. Hand Gesture Recognition Using Ternary Content Addressable Memory Based on Pattern Matching Technique
      T. Nagakarthik and J. R. Choi ..... 41
S3-2. Design and Development of Electrical Resistance Tomography to Detect Cracks in the Pipelines
      O. F. Alashkar, V. Chitturi ..... 52
S3-3. Hot-Point Probe Measurements for Aluminium Doped ZnO Films
      Benedict W. C. Au, K. Y. Chan, Y. K. Sin, Z. N. Ng, C. L. Lee ..... 56
S3-4. Relative Humidity Sensor Employing Optical Fibers Coated with ZnO Nanostructures
      Z. Harith, N. Irawati, M. Batumalay, H. A. Rafaie, G. Yun II, S. W. Harun, R. M. Nor, H. Ahmad ..... 58
S3-5. GPR Principle for Soil Moisture Measurement
      Yap C. W., Mardeni R., and Ahmad N. N. ..... 62

Session 4: Healthcare Technology and Application
S4-1. Automatic White Blood Cell Detection in Low Resolution Bright Field Microscopic Images
      Usanee A, Nipon T U, Chatchai T, and Sansanee A ..... 67
S4-2. Role of Classification Algorithms in Medical Domain: A Survey
      E. Venkatesan, T. Velmurugan ..... 71
S4-3. A Study on Feature Vectors of Heart Rate Variability and Image of Carotid for Cardiovascular Disease Diagnosis
      H. Kim, S. H. Park, K. S. Ryu, M. Piao, K. H. Ryu ..... 77
S4-4. Image Segmentation in Medical Data: A Survey
      S. Mahalakshmi, T. Velmurugan ..... 81
S4-5. A Survey on Medical Images Extraction Using Parallel Algorithm in Data Mining
      A. Naveen, T. Velmurugan ..... 86

Session 5: Smart Grid, Power and Energy System
S5-1. A Study of Arc Fault Temperature in Low Voltage Switchboard
      Kuan Lee Choo, Pang Jia Yew ..... 92
S5-2. Power Factor Improvement with SVC Based on the PI Controller under Load Fault
      S. G. Farkoush, S. B. Rhee ..... 96
S5-3. Unit Commitment Considering Vehicle to Grid and Wind Generations
      Zhile Yang, Kang Li ..... 98
S5-4. Theoretical Analysis and Software Modeling of Composite Energy Storage Based on Battery and Supercapacitor in Microgrid Photovoltaic Power System
      W. Jing, C. H. Lai, W. S. H. Wong, M. L. D. Wong ..... 102
S5-5. Economic Operation Scheme of a Green Base Station
      Sungwoo Bae ..... 106
S5-6. Improved Multi-Axes Solar Tracking System and Analysis of Power Generated and Power Consumed by the System
      Arun S. B., Sathish K. S., Ravi L., Tan C. S. ..... 108

Session 6: Process Engineering and Technology
S6-1. Effect of Injection Time on the Performance and Emissions of Lemon Grass Oil Biodiesel Operated Diesel Engine
      G. Vijayan, S. Prabhakar, S. Prakash, M. S. Kumar, R. Praveen ..... 114
S6-2. The Development of Automated Fertigation System
      Yap C. W., Vinesh T., Rajaram G. ..... 119
S6-3. Experimental Investigation on Ethanol Fuel in VCR-SI Engine
      S. Prabhakar, K. Annamalai, R. Praveen, M. S. Kumar, S. Prakash ..... 123
S6-4. Active Cell Equalizer by a Forward Converter with Active Clamp
      Thuc Minh Bui, Sungwoo Bae ..... 127
S6-5. Optimization of Process Parameters of Dissimilar Alloys AA5083 and 5456 by Friction Stir Welding
      Jaiganesh V. ..... 129
S6-6. Use of Vegetable Oils as Alternate Fuels in Diesel Engines – A Review
      B. Gokul, S. Prabhakar, S. Prakash, M. S. Kumar, R. Praveen ..... 134

Session 7: Embedded System and Information Technology
S7-1. A Telepresence and Autonomous Tour Guide Robot
      Alpha D. D., Suresh G., Vickneswari D. ..... 138
S7-2. An Effective Approach for Parallel Processing with Multiple Microcontrollers
      A. R. M. Ariffin, Scott Uk-Jin Lee ..... 142
S7-3. Anti Hijack System with Eye Pressure Control System
      M. Barathvikraman, H. Divya, R. Praveen ..... 146
S7-4. Effects of Mobile Cloud Computing on Health Care Industry
      Mohammad A., Mahsa B. R., Aida H., Mostafa V., Kasra M. ..... 150
S7-5. An Associative Index Method for Pyramid Hierarchical Architecture of Social Graph
      L. Wang, W. Ding and T. H. Zhou ..... 153
S7-6. A Reliable User Authentication and Data Protection Model in Cloud Computing Environments
      Mohammad A., Mostafa V., Farez M., Aida H., Kasra M. ..... 157
S7-7. Recommendations of IT Management in a Call Centre
      Ibrahim B. M., Kamalanathan S. and Naresh K. A. ..... 161
S7-8. DARVENGER (Digitally Advanced Rescue Vehicle with Free Energy Generator)
      S. Sivapriyan, R. D. Jaishankar, Tamilamuthan, B. Vigenesh, M. Kaviya and K. Rajalakshmi ..... 166

Session 8: Communication and Computational Modelling
S8-1. An Investigation Study of a Printed Array Antenna for 900 MHz Bands
      Gyoo-Soo Chae ..... 168
S8-2. On Energy-Efficient Time Synchronization Based on Source Clock Frequency Recovery in Wireless Sensor Networks
      Kyeong Soo Kim, Sanghyuk Lee, and Eng Gee Lim ..... 171
S8-3. Design and Simulation of Microstrip Patch Antenna for Ultra Wide Band (UWB) Applications
      S. K. Wong, T. H. Tan, M. Mokayef ..... 173
S8-4. Comparison of Estimation Methods for State-of-Charge in Battery
      Seonwoo Jeon, Sungwoo Bae ..... 175
S8-5. Channel Estimation for MIMO-OFDM Systems
      Shahid M., Sunil G. and Adnan S. ..... 177
S8-6. Smart Load Management of Electric Vehicles in Distribution and Residential Networks with Synchronous Reference Frame Controller
      Saeid Gholami Farkoush, Sang-Bong Rhee ..... 181
S8-7. Optimising Maximum Power Demand Using Smart Sequential Algorithm
      Pang J. Y., Kuan L. C., Liau V. K., Kudzai N. C., Tan D. ..... 183
S8-8. High Speed CNFET Digital Design Using Simple CNFET Circuit Structure
      Kyung Ki Kim ..... 187

Poster Session
P-01. Genetic Algorithm based Pre-Training for Deep Neural Network
      H. S. An, H. M. Shim, S. Lee ..... 191
P-02. Improved Object Segmentation Using Modified GrowCut
      G. Kim, G. S. Lee, Y. S. Park, Y. P. Hong, S. H. Lee ..... 193
P-03. Depth Map Generation using HSV Color Transformation
      J. H. Kim, G. S. Lee, Y. S. Park, Y. P. Hong, S. H. Lee ..... 196
P-04. Find Sentiment And Target Word Pair Model
      W. Yu, H. Lim ..... 200
P-05. Novel Operation Scheme of Static Transfer Switches for Peak Shedding
      C. H. Kim, S. B. Rhee ..... 204
P-06. Detection of Incorrect Sitting Posture by IMU Built-in Neckband
      H. M. Shim, S. Y. Ma, and S. M. Lee ..... 206
P-07. Modeling of a Learner Profiling System based on Learner Characteristics
      H. Ji, H. S. Lim ..... 208
P-08. Context Reasoning Approach for Context-aware Middleware
      Yoosoo Oh ..... 211
P-09. Role of NT-proBNP for Prognosis in Non ST-segment Elevation Myocardial Infarction Patients from the KorMI Database
      H. S. Shon, W. Jang, S. H. Park, J. W. Bae, K. A. Kim, K. H. Ryu ..... 213
P-10. A 65nm CMOS Current Mode Amplitude Modulator for Quad-band GSM/EDGE Polar Transmitter
      Hyunwon Moon ..... 217
P-11. Applying Harmony Search Optimization Method to Economic Load Dispatch Problems in Power Grids
      Si-Na Park, Sang-Bong Rhee ..... 219
P-12. Ventilation System Energy Consumption Simulator for a Metropolitan Subway Station
      Sungwoo Bae, Jeongtae Kim ..... 222
P-13. The effectiveness of international development cooperation (IDC) educational program for nursing students
      Sun Young Park, Heejeong Kim ..... 225
P-14. A Study on the Relationship between Nursing Professionalism, Internal Marketing and Turnover Intention among Hospital Nurses
      Eun Ja Yeun, Misoon Jeon ..... 227
P-15. The Level of Depression and Anxiety in Undergraduate Students
      Eun Ja Yeun, Misoon Jeon ..... 229
P-16. Analysis of dental hygienists' financial preparation for old age
      Hee-Sun Woo, Seok-Hun Kim ..... 231
P-17. The motion graphic effect of the mobile AR user interface
      YunSung Cho, SeokHun Kim ..... 233
P-18. New Authentication Methods based on User's Behavior Big Data Analysis on Cloud
      Sunghyuck Hong ..... 235
P-19. The Effect of Musical activities program on Parenting stress and Depression - Focused on Housewives with Preschool Children
      Shinhong Min ..... 237
P-20. Relationship between ego resiliency of girl students and smart phone addiction
      Soonyoung Yun, Shinhong Min ..... 239
P-21. Analysis on resilience, self-care ability and self-care practices of middle & high school students
      Shinhong Min, Soonyoung Yun ..... 241
P-22. An Algorithm for Zero-One Concave Minimization Problems under a Single Linear Constraint
      Se-Ho Oh ..... 243
P-23. An Analysis of Risk Sharing between the Manufacturer and the Supplier
      Chan Jung Park ..... 245
P-24. Meme and Culture Contents in Korea
      Kyung Sook Kim ..... 247
P-25. Unique Features of the Internet Technology and Their Impacts on Industry Structure and Corporate Competitive Strategy
      Lark Sang Kim ..... 249
P-26. Analysis of Torso Patterns by Somatotype - Focused on Development of Body Surface Shell
      Mi Hyang Na ..... 251
P-27. Value Relevance of the Fair Value Hierarchy and the Impact of Fair Value Disclosures in Korea
      HyunTaek Oh ..... 253
P-28. Development of a Water-Droplet-Shaped Bra Mold Cup Design
      Heh Soon Jung, Mi Hyang Na ..... 255
P-29. An Analysis on the Minimum Efficiency Scale of Local Autonomies in Korea
      Sung Tai Kim, Young Jun Chun, Jin-Yeong Kim ..... 257
P-30. The Effect of HRD Programs on Labor Productivity: The Moderating Role of Learning Climate
      Woo-Jae Choi ..... 259
P-31. CSR and Brand Performance
      Jae Mee Yoo ..... 261
P-32. The Effect of Hedging with Property-Liability Insurance on the Probability of Financial Distress
      Young Mok Choi ..... 263
P-33. A Study on Justification for the Use of Chest CT Scan in Physical Examinations
      You In-Gyu, Lim Chung Hwan ..... 265
P-34. A Study on Microstructure of Gardnerella Vaginalis
      Mi-Soon Park, Zhehu Jin, Byung-Soo Chang ..... 267
P-35. A Study on the DICOM File of Head CT and Dose Calculation in the Human Body Using the Geant4 Code
      Eun Hee Mo, Sang Ho Lee, Cheong-Hwan Lim ..... 269
P-36. Scientific Analysis of the Gilt-bronze Incense Burner of Baekje Period from the Neungsalli Temple Site in Buyeo, South Korea
      Hyung-tae Kang, Min-jeong Koh ..... 271
P-37. A Study of 3D Pelvic Computed Tomography by Using the Assistance Shoes
      Park Chang-Bok, Jung Hong-Ryang ..... 273
P-38. Study on the Improvement of the Health Screening Questionnaire of the Korean Health Insurance Service Center
      Wan-Young Yoon ..... 275
P-39. Effect of the Muscular Strength Exercise and Massage on Muscle Injury Marker and IGF-1
      Kim Do-Jin, Kim Jong-Hyuck ..... 277
P-40. A Study on the Low Intensity Aerobic Exercise and Postural Correction Exercise on Fatigue Substance and Aging Hormone
      Beak Soon-Gi, Kim Do-Jin ..... 279
P-41. Effect of Golf Swing Exercise on the Vascular Compliance and Metabolic Syndrome Risk Factors in Elderly Women
      Kim Do-Jin, Kim Sang-Yeob ..... 281
P-42. A Study on Exploration of the Growth Process & Learning Promotion Elements of a Sports for All Instructor through Informal Learning
      Kim Seung-Yong ..... 283
P-43. The Effects of An Aroma Back Massage on Electroencephalogram
      Kang So-Hyung ..... 285
P-44. A Study on Supportive Policy for Domestic Winter Sports on the Occasion of 2018 PyeongChang Winter Olympics
      Mi-Suk Kim, Ill-Gwang Kim ..... 287
P-45. Difference in Satisfaction with Protein Supplements, Willingness to Spread Word-of-Mouth and Willingness to Repurchase Supplements of University Students Majoring in Physical Education
      Ill-Gwang Kim ..... 289
P-46. Effect of Muscle Activity for Stair Walking and Stepper Training in Young Adults
      K. M. Kim, J. H. Yu, J. S. Kim, J. H. Hong, D. Y. Lee ..... 291
P-47. The Effect of Elastic and Non-elastic Tape on Flat Foot
      S. M. Lee, D. Y. Lee, J. H. Hong, J. H. Yu, J. S. Kim ..... 293
P-48. The Influence of Induced Fatigue on Lower Limb Muscle Activation at Landing in Adult Women
      H. A. Lee, D. Y. Lee, J. S. Kim, J. H. Hong, J. H. Yu ..... 295
Morphological image enhancement and analysis using
directionality histogram
Radhakrishnan Palanikumar
Associate Professor, Department of Computer Science,
College of Computer Science, King Khalid University,
P.O.Box: 394, Abha, Kingdom of Saudi Arabia, 61411
e-mail: [email protected]
Abstract — This paper discusses morphological image enhancement and the analysis of images using the
directionality histogram. Morphological opening and closing, combined with the directionality histogram, produce
the feature extraction of images. Image enhancement is an important preprocessing step in digital image processing,
for which morphological transformations are useful. Various features of images are extracted by applying opening
and closing alternately and in a sequential manner. Analysis of the images is carried out through these approaches,
which produces various features such as local thickness, geometry to distance map, distance map to distance ridge,
distance ridge to local thickness, and the usual edges and ridges with valleys. The directionality histogram identifies
the direction and amount of edge travel, which helps us to compare an image and its enhanced versions.
Keywords: Morphological Transformation, Image enhancement, Image analysis, directionality histogram
I. INTRODUCTION
Quantitative and qualitative characterization of images is a
significant process in areas such as pattern recognition,
computer vision, digital geometry and signal processing.
There are different techniques to analyze or enhance a
digital image, each providing a specific solution. Analyzing
images is one way to extract the features of given digital
images. In the same manner, the enhancement of images is
an important preprocessing step, so that the expected results
are more accurate. The morphological opening and closing
processes help to enhance the given images; these processes
are applied alternately in a sequential manner. The
directionality histogram is one qualitative and quantitative
identification of the given or preprocessed images. It
considers the angle at which ridges and valleys run in the
images.
An in-depth presentation of the principles and applications
of morphological image analysis is given through a
step-by-step process, starting from the basic morphological
operators and extending to the most recent results [1].
Window size is evaluated with histogram features as the
main variable, and pixel-level thickness can also be
calculated, while the intensity features and directionality
operate over the selected region. A histogram labels the
features in larger windows, whereas a small window extracts
the statistical features [2]. With the development of
micron-scale imaging techniques, capillaries can be
conveniently visualized using methods such as two-photon
and whole-mount microscopy. However, the presence of
background staining, leaky vessels and the diffusion of small
fluorescent molecules can lead to significant complexity in
image analysis and loss of information necessary to
accurately quantify vascular metrics. One solution to this
problem is the development of accurate thresholding
algorithms that reliably distinguish blood vessels from
surrounding tissue. Although various thresholding
algorithms have been proposed, the results in [3] suggest
that, without appropriate pre- or post-processing, existing
approaches may fail to obtain satisfactory results for
capillary images that include areas of contamination. That
study proposes a novel local thresholding algorithm, called
directional histogram ratio at random probes (DHR-RP),
which explicitly considers the geometric features of
tube-like objects in conducting image binarization and
performs reliably in distinguishing small vessels from either
clean or contaminated background; experimental and
simulation studies suggest that the DHR-RP algorithm is
superior to existing thresholding methods [3]. Mathematical
Morphology in Geomorphology and GISci presents a
multitude of mathematical morphological approaches for
processing and analyzing digital images in quantitative
geomorphology and geographic information science (GISci);
covering many interdisciplinary applications, the book
explains how to use mathematical morphology not only to
perform quantitative morphologic and scaling analyses of
terrestrial phenomena and processes, but also to deal with
challenges encountered in quantitative spatial reasoning
studies [4]. The directional histogram characterizes the
directionality of a texture image. Compared to the commonly
used texture analysis methods, the co-occurrence matrix and
Gabor features, the directional histogram gave the best
retrieval results; in addition, its computational cost is
significantly lower than that of the other approaches. The
directional histogram thus proved to be effective in texture
image retrieval, especially in the case of non-homogenous
textures. Because most of the textures
occurring in nature are non-homogenous, this ability is
essential in image retrieval [5]. Many low-level features, as
well as varying methods of extraction and interpretation,
rely on directionality analysis (for example the Hough
transform, Gabor filters, SIFT descriptors and the structure
tensor). The theory of the gradient-based structure tensor
(a.k.a. the second moment matrix) is a well-suited
theoretical platform in which to analyze and explain the
similarities and connections (indeed, often equivalence) of
supposedly different methods and features that deal with
image directionality. Of special interest there are the SIFT
descriptors (histograms of oriented gradients, HOGs); the
analysis in [6] of the interrelationships of prominent
directionality analysis tools offers the possibility of
computing HOGs without binning, in an algorithm of
comparable time complexity [6].
Figure 1. Lena-std (original image)
Edge detection is of basic importance in image analysis:
characterizing object boundaries is helpful for object
segmentation, registration and identification in a scene.
There are many methods for edge detection, which can be
divided into two major classes, search-based and
zero-crossing based. The method proposed in [7] searches
for the edge points by applying graph cuts in place of a
morphological approach; the graph cut edge detection
algorithm is very effective at detecting edges with a
minimum of searches [7]. In content-based image analysis
and retrieval, texture is an essential feature due to its strong
discriminative power. Directionality is one of the most
significant texture features and is well perceived by the
human visual system. Both subjective and objective analyses
show that the directionality measure proposed in [8]
outperforms the conventional Tamura method and has
better retrieval performance than the conventional Tamura
directionality [8].
Figure 2. Histogram of figure 1

II. IMAGE ANALYSIS BY MORPHOLOGY
Image analysis is a most significant process for any feature
extraction, segmentation, and pattern recognition, and the
morphological process is one way to achieve it. Median
filters are applied to the Lena image (figure 1) after
converting it into binary form (figure 3). The process can be
defined on gray-scale and binary images with any number of
dimensions, using a Euclidean metric or a non-Euclidean
geodesic metric, which is also used in the reconstruction of
images. The Lena image details are described in table 1:
512 x 512 pixels at 32 bits per pixel. The histogram of the
Lena image is given in figure 2.
Figure 3. Binary form of Lena

Figure 4. Enhanced Lena binary image

Table 1. Image information
Title:              lena-std.tif
Width:              512 pixels
Height:             512 pixels
Pixel size:         1 x 1 pixel
Coordinate origin:  0,0
Bits per pixel:     32 (RGB)
The histogram of the Lena image (figure 2) is shown with
log values in figure 6. The histogram of the enhanced Lena
binary image (figure 4) is shown in figure 7, and with logs
in figure 8. Figures 6 and 8 show the comparative
enhancement of the given image; the window sizes used in
figure 8 are comparatively increased for the enhanced Lena
image. The log areas are a very clear indication of the
improvement of the given image, which helps the removal
of noise and other information not relevant to feature
extraction.

Figure 6. Histogram of figure 1 with logs

Figure 7. Histogram of the enhanced binary Lena image

Figure 8. Histogram with logs of figure 4

The mean and standard deviation of the given and enhanced
images show the differences and the improvement, which
helps to analyze the images and to extract the features for
further processing.

To enhance the binary form of the image we use the
template, or structuring element, shown in figure 5.
Morphological opening and closing are applied alternately
in a sequential manner, which helps to remove noise and
improve the quality of the binary image.
0 0 1 0 0
1 1 1 1 1
1 1 1 1 1
0 0 1 0 0
0 0 1 0 0

Figure 5. Structuring element for the morphological transformation
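As a rough illustration of this alternating sequential filtering, a minimal Python sketch is given below, assuming an OpenCV/NumPy environment; the cross-shaped element stands in for the template of figure 5 and the Otsu binarization is an assumption, not the paper's exact pipeline.

import cv2
import numpy as np

# Load the test image and binarize it (Otsu's threshold as a stand-in choice)
img = cv2.imread("lena-std.tif", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 5x5 cross-shaped structuring element, analogous to the template in figure 5
se = cv2.getStructuringElement(cv2.MORPH_CROSS, (5, 5))

# Alternating sequential filtering: opening then closing, applied repeatedly
enhanced = binary
for _ in range(3):
    enhanced = cv2.morphologyEx(enhanced, cv2.MORPH_OPEN, se)   # removes small bright noise
    enhanced = cv2.morphologyEx(enhanced, cv2.MORPH_CLOSE, se)  # fills small dark gaps

Opening suppresses small foreground specks while closing fills small holes; repeating the pair smooths the binary shapes at increasing scales.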
III. RESULTS AND DISCUSSION
The analysis of the images involves two different processes:
one is morphological analysis, and the second is the
directionality histogram. Various analyses are discussed in
this section, including edge detection (ridges), the local
thickness of the enhanced image, the geometry to distance
map, and the distance map to distance ridge of Lena. The
other approach we propose for analysis is the directionality
histogram. This histogram is calculated by identifying the
peaks of the layers that produce the valleys; through the
angle of the valleys the complete image can be analyzed.
The edges of the enhanced Lena image are shown in figure
9; significant changes are noticeable through morphological
enhancement.

Figure 9. Edges of the enhanced Lena binary image

The local thickness of the Lena image is extracted through
morphological transformations, as shown in figure 10. The
regions are shown through thickness with local minima. In
this image some yellow colors appear at thick values,
representing neighboring pixels that are well connected. We
also note that there is no loss of image information while
finding the local thickness; it is very simple to retrieve the
original image from the local thickness regions.

Figure 10. Local thickness of the enhanced image

The geometry to distance map of the Lena image is derived
in figure 11, which shows the regions with geometry and
distance map. The distance map is identified in relation to
the geometry of the given Lena image, so the results show
the originality of the image with the distance map. It is
relevant to apply the Voronoi diagram here.
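The distance map underlying the local-thickness and geometry analyses can be sketched as follows; this is an illustration assuming the enhanced binary image from the earlier sketch, and the helper names are illustrative rather than taken from the paper.

import cv2
import numpy as np

def distance_map(binary: np.ndarray) -> np.ndarray:
    # Euclidean distance of each foreground pixel to the nearest background pixel
    return cv2.distanceTransform(binary, cv2.DIST_L2, 5)

def local_thickness_estimate(binary: np.ndarray) -> float:
    # Ridge values of the distance map approximate half-widths of local structures
    dist = distance_map(binary)
    ridge = dist[dist >= cv2.dilate(dist, np.ones((3, 3), np.uint8))]
    return 2.0 * float(ridge.mean()) if ridge.size else 0.0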
Figure 11. Geometry to distance map of Lena

The ridges identified in the previous figures can be related
to the distance map. The relationship between the distance
map and the ridges of the Lena image is shown in figure 12.

Figure 12. Distance map to distance ridge of Lena

The directionality histogram of the enhanced image is
shown in figure 13. It is very important to show the angle at
which the valleys are transformed, so that we can analyze
the image in a very significant manner. Directionality is one
of the most important features of textures, and is simple to
locate through the human visual system. The statistical
properties of the histogram of the Lena image are used to
calculate the directionality, and the spatial relationships are
exactly notified in the directionality histogram.

Figure 13. Directionality histogram of the enhanced Lena image
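A minimal sketch of one way to compute such a directionality histogram (a magnitude-weighted histogram of gradient orientations) is shown below; it assumes the enhanced image from the earlier sketches and is an illustration, not the paper's exact formulation.

import cv2
import numpy as np

def directionality_histogram(image: np.ndarray, bins: int = 36) -> np.ndarray:
    # Gradient components via Sobel filters
    gx = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)  # edge direction in radians, range [-pi, pi]
    # Keep only pixels with a meaningful edge response
    mask = magnitude > magnitude.mean()
    hist, _ = np.histogram(angle[mask], bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude[mask])
    return hist / hist.sum()  # normalized so the bins sum to 1

Peaks in the returned histogram correspond to dominant edge directions (the valley angles discussed above), so an image and its enhanced version can be compared by comparing their histograms.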
Distance map to distance ridge of Lena
The directionality histogram of the enhanced image is shown in figure 13. It is very important to show the angle at which the valleys are transformed, so that we can analyze the image in a very significant manner. Directionality is one of the most important features of textures, and it is simple to locate through the human visual system. The statistical properties of the histogram of the Lena image are used to calculate the directionality. The spatial relationships are exactly captured in the directionality histogram.
Directionality Histogram of Enhanced Lena Image
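For readers who wish to experiment, a directionality histogram of the kind discussed above can be approximated with a short C++/OpenCV sketch that accumulates gradient orientations weighted by gradient magnitude. This is our own illustrative rendering, not the authors' code; the bin count and the magnitude threshold are assumptions.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Sketch: directionality histogram from gradient orientations.
// Peaks mark dominant texture directions; valleys separate them.
std::vector<double> directionalityHistogram(const cv::Mat& gray, int bins = 36) {
    cv::Mat gx, gy;
    cv::Sobel(gray, gx, CV_32F, 1, 0);            // horizontal gradient
    cv::Sobel(gray, gy, CV_32F, 0, 1);            // vertical gradient
    std::vector<double> hist(bins, 0.0);
    for (int r = 0; r < gray.rows; ++r) {
        for (int c = 0; c < gray.cols; ++c) {
            float dx = gx.at<float>(r, c), dy = gy.at<float>(r, c);
            float mag = std::sqrt(dx * dx + dy * dy);
            if (mag < 20.0f) continue;            // skip weak edges (assumed threshold)
            double ang = std::atan2(dy, dx) + CV_PI;              // 0 .. 2*pi
            int bin = static_cast<int>(ang / (2.0 * CV_PI) * bins) % bins;
            hist[bin] += mag;                     // magnitude-weighted vote
        }
    }
    return hist;
}

Calling directionalityHistogram on the enhanced Lena image and plotting the bins would reproduce a histogram of the kind analyzed above.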
REFERENCES
Soille, Pierre, Morphological Image Analysis: Principles and Applications, Springer-Verlag New York, Inc., 2003.
Peter Enser, Image and Video Retrieval: Third International Conference, CIVR 2004, Dublin, Ireland, July 2004, Proceedings, Springer Science & Business Media, 2004.
Na Lu, Jharon Silva, Yu Gu, Scott Gerber, Hulin Wu, Harris Gelbard, Stephen Dewhurst, Hongyu Miao, "Directional histogram ratio at random probes: A local thresholding criterion for capillary images," Pattern Recognition, July 2013, Vol. 46(7), pp. 1933-1948, doi:10.1016/j.patcog.2013.01.011.
Behara Seshadri Daya Sagar, Mathematical Morphology in Geomorphology and GISci, CRC Press, Taylor & Francis Group, 1st edition, 2013, ISBN-13: 9781439872000.
Leena Lepistö, Iivari Kunttu, and Ari Visa, "Retrieval of non-homogenous textures based on directionality," Proceedings of the 4th European Workshop on Image Analysis for Multimedia Interactive Services, London, UK, Apr. 9-11, 2003, pp. 107-110.
Josef Bigun and Stefan M. Karlsson, "Histogram of directions by the structure tensor," Proceedings of the 4th International Symposium on Applied Sciences in Biomedical and Communication Technologies, 2011.
Radhakrishnan, P. (2012), "An Alternative Graph Cut Algorithm for Morphological Edge Detection," American Journal of Applied Sciences, 9(7), 1107.
M. M. Islam, D. Zhang, G. Lu, "A geometric method to compute directionality features for texture images," Proceedings of the International Conference on Multimedia and Expo, Hannover, Germany, June 23-26, 2008, pp. 1521-1524.
A Robust Sky Segmentation Method for Daytime Images
H. L. Wong 1, C. S. Woo 2
1 Faculty of Engineering, Multimedia University, Malaysia, [email protected]
1, 2 Faculty of Computer Science and Information Technology, University of Malaya, Malaysia, [email protected]
Abstract—The Advanced Driver Assistance System (ADAS) aims at improving the safety of the occupants of moving vehicles. Sensors are mounted on the vehicles to gather information about a vehicle's surroundings. The information is then processed using a single- or multi-computing platform. Examples of ADAS subcomponents which require image understanding are road lane detection, traffic sign recognition and pedestrian detection. Understanding an image's local brightness can be an important feature before the next processing step. The sky tends to be the brightest part of an outdoor image taken during the day. So far, research on sky segmentation has focused only on the viewpoints of Unmanned Aerial Vehicles (UAV) and satellite images. This paper focuses on sky segmentation in daytime images from the driver's viewpoint. We hope that the segmented region can be used as a reference for local image brightness understanding. The algorithm was tested on sample images from the German Traffic Sign Detection Benchmark (GTSDB) database. The preliminary results show that the majority of the sky region can be segmented in the presence of lighting and scene variations.
Keywords-image, segmentation, sky, ADAS
INTRODUCTION
ADAS is an important aspect of modern vehicles and the future transportation system. The processed data can be used for automated driving, intervention in the vehicle's control, warning feedback or just pure information [1]. Ultimately, the aim of ADAS is to reduce the number of accidents on the road. Accidents are mainly caused by human error; with ADAS intervention, technologists believe that accidents can be reduced. Major vehicle manufacturers such as General Motors, Volkswagen and BMW are collaborating with tech companies like Google for continuous improvement in ADAS. Researchers at tech companies have clocked tens of thousands of kilometers of driverless navigation through multiple terrains using vehicles heavily loaded with various sensors [2]. However, a world-wide commercial realization of automated or driverless vehicles is still far-fetched because there are plenty of environmental, hardware and regulatory variations that have yet to be addressed.
Figure 1. (a) Bright; (b) Dark; (c) Shadowed in partial region.
Color, shape or a combination of both features are
usually used for traffic sign detection [4 – 7]. To improve
the outlook of an image which is too dark or too bright,
histogram equalization is commonly performed during
the early stage [8]. When the brightness of the neighboring region around the traffic sign is known, the histogram equalization step can highlight the color and the border of the traffic sign (see Fig. 2).
Figure 2. Effects of histogram equalization for (a) dark and (b) pale inputs when the brightness of the neighboring pixels is known.
If a road view image contains both a locally dark region and a very bright region, the brightness of the dark region might not be adjusted optimally due to the inclusion of the very bright region in the histogram equalization step. The scenario is illustrated in Fig. 3. As the sky tends to be the brightest region in a daytime road view, we hope that the segmented region can provide a better cue for brightness-based regional histogram equalization.
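As an illustration of such brightness-based regional equalization, the following C++/OpenCV sketch equalizes only the pixels selected by a mask (for example, the non-sky region), leaving the bright sky untouched. This is a minimal sketch of the idea, not the authors' implementation.

#include <opencv2/opencv.hpp>
#include <algorithm>

// Sketch: histogram equalization restricted to a masked region.
// Pixels outside the mask (e.g. the bright sky) are left unchanged.
cv::Mat equalizeRegion(const cv::Mat& gray, const cv::Mat& mask) {
    int hist[256] = {0};
    int total = 0;
    for (int r = 0; r < gray.rows; ++r)
        for (int c = 0; c < gray.cols; ++c)
            if (mask.at<uchar>(r, c)) { ++hist[gray.at<uchar>(r, c)]; ++total; }
    // Build the remapping table from the cumulative histogram of the region only
    uchar lut[256];
    int cum = 0;
    for (int v = 0; v < 256; ++v) {
        cum += hist[v];
        lut[v] = static_cast<uchar>(255.0 * cum / std::max(total, 1));
    }
    cv::Mat out = gray.clone();
    for (int r = 0; r < gray.rows; ++r)
        for (int c = 0; c < gray.cols; ++c)
            if (mask.at<uchar>(r, c)) out.at<uchar>(r, c) = lut[gray.at<uchar>(r, c)];
    return out;
}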
Among the five human senses, the visual cue is the most important for drivers. Thus, visual understanding of the environment is an important aspect for the computer to emulate the human's visual processing. Among the ADAS components that utilize visual information are pedestrian detection, traffic sign detection and recognition, and road lane departure detection. In a real daytime driving environment, the lighting can vary drastically due to the vehicle's position with respect to the sun's direction, the cloud coverage and variations in the camera. Due to the broad area of ADAS components, we shall henceforth use traffic sign detection to highlight the importance of understanding the environment brightness in pre-processing. Examples in Fig. 1 show the possible lighting variations. The images are samples from the GTSDB [3].
This paper is organized as follows: the motivation of
this research has been explained in Section I, the
algorithm for sky segmentation is given in Section II, the
experimental results are highlighted in Section III, and the
paper is concluded in Section IV.
We employ a two-level connected component analysis of a binary image to segment a probable sky region from the natural scene. The two-level approach is employed so that the connected components attained from the first level are separable, as illustrated in Fig. 5. After labeling the connected components at the second level, selected candidates from the connected components (Algorithm 2) are evaluated for their average brightness.
Figure 3. (a) Original image; (b) Histogram equalization on the original image → segment traffic sign; (c) Segment traffic sign → histogram equalization on the segmented region.
SKY SEGMENTATION FROM DRIVER’S VIEWPOINT
This algorithm is intended for sky segmentation from the driver's road viewpoint. Identification of the bright region in a road view image can be used as a pre-processing step for further image analysis such as road lane recognition and traffic sign detection. The daytime sky can be segmented using the brightest region and the location of that image region as the reference point. In the simplest case the sky is clear of objects; otherwise, there can be clouds or occasional objects such as a plane, a helicopter or even a hot air balloon. Those occasional objects can be discarded easily through image morphology, as they tend to appear as small specks in the sky.
On a sunny day, the brightness of the clouds tends to be close to the brightness of the sky. On a cloudy day, the clouds tend to cover the majority of the sky. When a thunderstorm is approaching, the clouds will be much darker than the sky. Our interest here is to be able to identify the very bright region. One purpose is to enable region-based histogram equalization according to a region's brightness category. Thus, we identify the sky and clouds together as the brightest region in an image on a sunny or overcast day. In contrast, we treat only the sky as the brightest region when a thunderstorm is approaching.
Figure 5. (a) First level connected components: the sky region is not a connected component; (b) Toggled bits of image (a); (c) Second level connected components: the sky region is labeled in red.
Besides the variations of the environmental lighting, the drastic variation of a driver's road scenery also increases the difficulty of distinguishing the sky from the other objects of the image. For instance, the sky region for a road with dense vegetation or man-made objects would be much smaller than the sky region taken at a broad highway, as shown in Fig. 4. We also hope to address sky segmentation of various scenes in this paper.
The pseudo code for the proposed algorithms is listed as follows:

Algorithm 1: SkySegmentation
Require: BGR image I(c,r), where c = columns and r = rows
Convert BGR to HSV color space, get V
Perform Canny edge detection → Binary image B(c,r)
Perform connected component labeling Level-1 on B → CC1(m), where m is the maximum number of connected components
For i = 0 to m:
    Draw white line: Line width = 2 pixels → B1(c,r)
End for
B2(c,r) = complement of B1(c,r)
Perform connected component labeling Level-2 on B2(c,r) → CC2(n), where n is the maximum number of connected components
For i = 0 to n:
    Find polygonal approximation
    Find the bounding rectangle for each polygon
End for

Figure 4. (a) Sky is clearly distinguishable; (b) Sky is not apparent.
Algorithm 2: FilterBoundRect
Require: CC2(n), bounding rectangle BR for each CC2(n), I(c,r)
For i = 0 to n:
    Compute BR.y, where y is the top left corner of BR
    Compute BR.y + BR.h, where h is the height of BR
    Compute BR.h * BR.w, where w is the width of BR
    If (BR.y < (I.r/2) + 50) && (BR.y + BR.h < I.r - 10) && (BR.h * BR.w > p)
        Log n identifier → k
        Cntr(w,h) = Fill the CC2 with whites, background black
        Call ComputeBrightness, return AvgBrt
        Log AvgBrt
        Clear Cntr(w,h)
    End if
End for
For i = 0 to k:
    Sort AvgBrt in descending order
End for
AvgBrtmax = AvgBrt(0)
For i = 0 to k:
    If |AvgBrt(i) - AvgBrtmax| < q
        Accept as sky → CC2(k) = Fill with whites → B3
    Else
        Reject as sky → CC2(k) = Fill with blacks → B3
    End if
End for
Return B3(c,r)
Exit

Algorithm 3: ComputeBrightness
Require: Cntr(w,h), k
Initialize: AvgBrt = 0, r = 0 (brightness reference)
For each pixel px in Cntr(w,h):
    If Cntr.px == white
        AvgBrt += V.px
        r++
    End if
End for
AvgBrt = AvgBrt / r
Return AvgBrt
RESULTS AND DISCUSSIONS
The algorithm was written using C++ with the OpenCV library. The software was tested on 100 samples of road view images obtained from the GTSDB database. The variations addressed are global illumination variations, the color of the sky and the density of vegetation and man-made objects. Examples of positive sky segmentation results are shown in Fig. 6. Positive sky segmentation is defined as the majority of the segmented region (labeled white) being the sky, or the brightest part of the image if the sky cannot be visually observed.

Figure 6. Positive sky segmentation for images with (a) blue sky, (b) cloudy sky, (c) cluttered scene due to man-made objects and (d) trees obstructing the sky.

At this moment, the algorithm is not able to distinguish a building from the sky if the wall is bright or reflecting light. If the environment is foggy and a natural object is far away, it may be perceived as the sky too. Examples of over-segmentation are shown in Fig. 7.

Figure 7. (a) Parts of a building and (b) trees segmented as the sky.

A quantitative result is unavailable at this point as it requires exhaustive hand-labeling of the sky region in each image. However, we provide the test images sampled from the GTSDB database and the full results at http://pesona.mmu.edu.my/~hlwong/Conf.html. By sharing this preliminary result, we hope that it can be useful for comparison purposes by others in the future.

CONCLUSIONS AND FUTURE WORK
The paper showcases a sky segmentation algorithm for the driver's road viewpoint during the day. We hypothesize that the sky region has the highest likelihood of being the brightest region of the image. In a shadowed environment, the shadowed region may be enhanced more appropriately if the bright regions are excluded from histogram equalization. In addition, identification of the sky region can also be used to reduce the search space whenever there is a need to find the road, pedestrians or cars. Real-world road view images from a standard database were used in the experiment. The majority of the sky region can be segmented, but brighter parts of the images, such as building walls and trees in a foggy environment, may cause over-segmentation. In our future work, we hope to improve the current algorithm and to apply brightness-based regional histogram equalization for visual-based ADAS applications.
ACKNOWLEDGMENT
The authors wish to thank University of Malaya for
supporting this work under grant PV075/2011A.
REFERENCES
O. M. J. Carsten and L. Nilsson, "Safety assessment of driver assistance systems," European Journal of Transport and Infrastructure Research, 1(3), pp. 225-243, 2001.
K. Kowalenko, "Crash-Free Commutes: IEEE members work to make vehicles smarter and safer," The Institute - The IEEE News Source, 6 January 2012.
S. Houben, J. Stallkamp, J. Salmen, M. Schlipsing and C. Igel, "Detection of traffic signs in real-world images: The German traffic sign detection benchmark," The 2013 International Joint Conference on Neural Networks (IJCNN), pp. 1-8, 4-9 August 2013.
J. F. Khan, S. M. A. Bhuiyan and R. R. Adhami, "Image Segmentation and Shape Analysis for Road-Sign Detection," IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 1, pp. 83-96, March 2011.
G. K. Siogkas and E. S. Dermatas, "Detection, Tracking and Classification of Road Signs in Adverse Conditions," MELECON 2006, IEEE Mediterranean Electrotechnical Conference, pp. 537-540, 16-19 May 2006.
T. T. Le, S. T. Tran, S. Mita and T. D. Nguyen, "Real time traffic sign detection using color and shape-based features," Lecture Notes in Computer Science, vol. 5991, pp. 268-278, 2010.
A. Martinović, G. Glavaš, M. Juribašić, D. Sutić and Z. Kalafatić, "Real-time detection and recognition of traffic signs," MIPRO 2010, Proceedings of the 33rd International Convention, pp. 760-765, 24-28 May 2010.
R. C. Gonzalez and R. E. Woods, Digital Image Processing (3rd Edition), Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 2006.
Prediction Of Sediments Using Back Propagation Neural
Network (BPNN) Model
A. Clementking 1, C. JothiVenkateswaran 2
1. Associate Professor, Department of Computer Science, King Khalid University, Kingdom of Saudi Arabia, [email protected]
2. Associate Professor & Head, PG and Research Department of Computer Science, Presidency College, Chennai
Abstract: Environmental data mining is used to analyse and predict environment-related data for social applications. Data mining techniques such as association, clustering, classification and prediction are used for the domain applications. Environmental mining is applied to water resources for planning, estimation, resource optimization, quality and the sediment process. In water quality analysis, the sediment formation prediction is analysed to determine the water quality, as well as the maintenance process, using a neural network model. Neural network models are used to determine the optimum values and to discover unknown and hidden knowledge. The water properties are tainted when one water resource merges with another. This work aims to predict the minimum level of sediment from selected physiochemical water quality attributes such as temperature, activity of the hydrogen ion H+ (pH), Dissolved Oxygen (DO), sodium, carbonate, bicarbonate, calcium, chloride, nitrate, Total Dissolved Solids (TDS), Total Suspended Solids (TSS), Biochemical Oxygen Demand (BOD) and Chemical Oxygen Demand (COD). This paper describes the development of a Back Propagation Neural Network model with the obtained sediment variations.
Key words: Environmental Mining, Neural Networks, Back Propagation NN model, Water Sediment
I. INTRODUCTION
Data mining techniques are applied to geographical, environmental and spatiotemporal data for analysis and prediction. Techniques such as classification and clustering are used for air pollution, water pollution and land utilization and their impact analysis. Air, water and soil analyses and their changes are predicted using data mining techniques and models. Environmental data mining focuses on air, water and soil related data analysis and the prediction of their impacts. Data mining techniques such as association, clustering, classification, prediction, sequential analysis and pattern generation are exercised to identify knowledge which can be applied in a decision making system. This paper focuses on the prediction of sediments by applying a neural network model. It will help with resource planning and distribution, which is a challenging task for decision makers in the establishment of a smart environment.
II. SCOPE
This paper aims to design an innovative model to predict sediment from the physiochemical properties of water using neural network techniques, for water distribution recommendation according to water quality and its sediment. The scope is to design, develop and train a distinctive neural network using the back propagation algorithm to obtain sediment weights from the physiochemical properties.

III. METHODOLOGY
The sustainable water quality and sediment attribute variation analysis is made for water resource distribution. Identification of multi-object variations on water resources is accomplished with the following steps:
i. Collection of the observed water quality attribute dataset
ii. Selection of physiochemical attributes to determine the WQI
iii. Design and development of a distinctive neural network model to determine the weight of sediment
iv. Training of the developed distinctive neural network using the back propagation algorithm to obtain sediment weights from the physiochemical properties

The dataset was fetched from four lentic systems of the Tirunelveli and Tuticorin districts. It has been taken from the doctoral thesis of Mohanraj Ebenezer of Manonmaniam Sundaranar University, Tirunelveli, Tamilnadu, which comprises four lentic systems of the Tirunelveli and Tuticorin districts. The data sources are classified as follows:
Station-I: Udayarpatty Brathy Station, which is situated in the heart of the Tirunelveli Municipal Corporation limit and subjected to a high degree of modification due to local conditions. The area of the station is about 1.5 acres.
Station-II: Marthandeswar Station, which is situated in a nearby village called Karungulam. It covers an area of 33 acres.
Station-III: A rocky pool which is prohibited for public use. It covers an area of 15 cents.
Station-IV: A large rocky pool covering an area of about 49 cents.
IV. PHYSIOCHEMICAL ATTRIBUTES OF WATER
The water quality attribute values are taken from
four lentic systems of Tirunelveli and Tuticorin districts.
The physical, chemical and biological attributes are taken as monthly reports over a two-year period. The missing values are assigned as zero in the dataset, since Stations I and II had no water during the month of June. Statistical correlation measures are computed for the water quality attributes to identify the relationships between water resources. The selected physiochemical attributes, namely temperature, pH, DO, sodium, carbonate, bicarbonate, calcium, chloride, nitrate, TDS, TSS, BOD and COD, were considered for the water quality computation.
Table 1a: Observed water quality attribute values

S.No  Temp   pH    DO    Sodium  Carbonate  Bicarbonate
1     27.50  8.20  5.20  4.63    1.40       6.00
2     29.00  8.10  5.80  3.84    0.00       7.00
3     30.00  8.20  6.20  3.96    0.00       3.50
4     31.50  8.00  6.00  4.01    0.00       3.00
5     34.00  8.40  5.70  4.00    0.00       2.40
6     0.00   0.00  0.00  0.00    0.00       0.00
7     32.50  8.20  5.40  3.99    0.00       4.30
8     31.50  8.30  5.00  3.96    0.80       6.20
9     31.00  8.20  6.20  3.72    0.00       4.80
10    29.50  8.20  7.20  3.80    0.00       5.60
11    28.50  8.10  6.20  2.98    0.00       6.00
12    28.00  8.00  6.40  3.00    0.00       6.40
13    28.00  8.10  5.40  3.41    0.00       7.20
14    28.50  8.00  5.60  3.21    0.00       6.80
15    29.00  8.20  5.80  3.07    1.20       6.60
16    30.00  8.20  6.00  3.09    0.00       6.70
17    34.50  8.20  6.00  2.98    0.00       7.10
18    0.00   0.00  0.00  0.00    0.00       0.00
19    32.50  8.30  5.70  3.07    0.00       6.90
20    32.00  8.20  6.00  2.87    1.40       6.60
21    31.50  8.00  6.20  2.94    0.80       6.60
22    30.00  8.00  6.80  3.01    0.00       6.40
23    28.50  8.10  7.00  2.88    0.00       6.50
24    29.00  8.20  6.90  2.65    0.00       6.30

Table 1b: Observed water quality attribute values

S.No  Calcium  Chloride  Nitrate  TDS     TSS    BOD   COD
1     14.00    25.60     0.88     151.20  45.00  4.20  21.40
2     13.80    24.40     0.94     149.40  44.00  5.10  19.80
3     13.80    26.20     0.99     153.20  40.10  6.80  20.20
4     13.90    38.60     1.14     209.70  40.40  6.80  29.40
5     13.20    32.15     1.21     214.40  46.30  7.95  33.20
6     0.00     0.00      0.00     0.00    0.00   0.00  0.00
7     13.20    20.60     1.10     165.70  51.60  5.60  32.10
8     13.00    20.40     1.00     158.40  56.00  5.50  34.60
9     12.80    21.80     0.96     156.60  60.40  4.85  29.70
10    12.80    17.20     0.89     93.40   58.80  4.65  28.00
11    12.90    17.40     0.74     96.80   68.00  5.20  25.20
12    13.00    18.70     0.72     91.20   74.60  5.10  26.10
13    12.40    28.40     0.67     156.60  68.70  4.30  23.40
14    13.10    28.20     0.74     154.20  64.20  4.80  21.80
15    13.80    27.90     0.78     155.10  66.60  5.60  22.90
16    13.60    36.20     0.70     204.90  68.20  6.40  28.20
17    14.00    37.10     0.71     200.10  70.20  6.80  29.60
18    0.00     0.00      0.00     0.00    0.00   0.00  0.00
19    13.80    26.90     0.73     152.70  66.40  6.40  34.20
20    13.00    26.70     0.77     154.00  62.80  6.60  31.00
21    12.80    25.80     0.84     155.60  64.00  5.80  30.20
22    13.20    21.00     0.79     93.80   63.80  5.66  26.80
23    12.90    20.70     0.80     91.60   66.40  6.00  26.40
24    13.00    20.40     0.81     90.40   68.90  6.80  27.20
V. DESIGN OF BACK PROPAGATION NEURAL NETWORK
The data mining neural network model is suited to mapping the physiochemical properties and their weighting process for the prediction of sediment. The back propagation neural network is a multilayered, feed-forward neural network. It is used for the supervised training of multilayered neural networks. Back propagation works by approximating the non-linear relationship between the input and the output by adjusting the weight values internally. Existing approaches show a maximum of six attributes as input and a single output layer. This work attempts a uniquely designed and developed model to predict the water quality attribute weight
values using nine inputs. Physiochemical parameters of the water, namely pH, dissolved oxygen, calcium, chloride, nitrate, total dissolved solids, total suspended solids, biological oxygen demand and chemical oxygen demand, are considered as the input neurons, and their interactions are assigned as hidden neurons. The hidden layer connects all input neurons and output neurons. The weights of water quality and sediment are assigned as the output layer of neurons. A single back propagation feed-forward model is trained to compute the weight of the water quality internal components and their interactions. The maximum hidden weight values are considered for water quality and the minimum weight values are adopted for the sediment computations.
Learning in a back propagation network proceeds in two steps. First, each pattern I_p is presented to the network and propagated forward to the output. Second, a method called gradient descent is used to minimize the total error on the patterns in the training set. In gradient descent, weights are changed in proportion to the negative of the error derivative with respect to each weight:

\Delta W_{ji} \propto -\frac{\partial E}{\partial W_{ji}}

where \Delta W_{ji} is the change of the network weight W_{ji} and \partial E / \partial W_{ji} is the gradient of the error with respect to that weight. Weights move in the direction of steepest descent on the error surface defined by the total error (summed across patterns):

E = \frac{1}{2} \sum_{p=1}^{n} \sum_{j=1}^{m} \left( t_{pj} - O_{pj} \right)^2

where O_{pj} is the activation of output unit u_j in response to pattern p and t_{pj} is the target output value for unit u_j.
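A self-contained C++ sketch of one such gradient-descent training step, for a small feed-forward network of the 9:9:2 shape used later in this paper, is given below. The sigmoid activation and the learning rate eta are our own illustrative assumptions; the paper does not specify them.

#include <cmath>

// Sketch: one gradient-descent step for a 9-9-2 feed-forward network.
// Sigmoid activations and the learning rate eta are assumed choices.
constexpr int NI = 9, NH = 9, NO = 2;
struct Net {
    double wih[NH][NI + 1];     // input-to-hidden weights (last entry = bias)
    double who[NO][NH + 1];     // hidden-to-output weights (last entry = bias)
};
static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Presents one pattern, backpropagates, and returns E = 1/2 * sum (t - O)^2.
double trainPattern(Net& net, const double in[NI], const double tgt[NO], double eta) {
    double hid[NH], out[NO];
    for (int h = 0; h < NH; ++h) {                       // forward pass: hidden layer
        double s = net.wih[h][NI];
        for (int i = 0; i < NI; ++i) s += net.wih[h][i] * in[i];
        hid[h] = sigmoid(s);
    }
    for (int o = 0; o < NO; ++o) {                       // forward pass: output layer
        double s = net.who[o][NH];
        for (int h = 0; h < NH; ++h) s += net.who[o][h] * hid[h];
        out[o] = sigmoid(s);
    }
    double dOut[NO], dHid[NH], E = 0.0;
    for (int o = 0; o < NO; ++o) {                       // output deltas from the error derivative
        double err = tgt[o] - out[o];
        E += 0.5 * err * err;
        dOut[o] = err * out[o] * (1.0 - out[o]);
    }
    for (int h = 0; h < NH; ++h) {                       // hidden deltas
        double s = 0.0;
        for (int o = 0; o < NO; ++o) s += dOut[o] * net.who[o][h];
        dHid[h] = s * hid[h] * (1.0 - hid[h]);
    }
    for (int o = 0; o < NO; ++o) {                       // weight changes: -eta * dE/dW
        for (int h = 0; h < NH; ++h) net.who[o][h] += eta * dOut[o] * hid[h];
        net.who[o][NH] += eta * dOut[o];
    }
    for (int h = 0; h < NH; ++h) {
        for (int i = 0; i < NI; ++i) net.wih[h][i] += eta * dHid[h] * in[i];
        net.wih[h][NI] += eta * dHid[h];
    }
    return E;
}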
VI. DESIGN OF NEURAL NETWORK BACK-PROPAGATION
MODEL FOR SEDIMENT PREDICTION
The nine-input, nine-hidden and two-output neural network is trained with the back propagation algorithm. The input values are the physiochemical parameters of the water: pH (I1), dissolved oxygen (I2), calcium (I3), chloride (I4), nitrate (I5), total dissolved solids (I6), total suspended solids (I7), biological oxygen demand (I8) and chemical oxygen demand (I9). The monthly and seasonal variations of these parameters were accounted for in this model. The nine properties stated above are considered as input neurons and their interactions are assigned as hidden neurons. The quality weight of the water and the sediment weight are assigned as outputs, and the corresponding network model is constructed from these parameters.

The research work designed and evaluated the internal weights of the water quality and sediment at various levels of quality for the four different stations. The total quality of the four stations and their weights are computed using the designed 9:9:2 neural network model. The computed water quality variation on the observed data, with its average quality index, is presented in table 2.
Table 2: Computed sediment weights (WS) for Stations I-IV

Month   I (WS)      II (WS)     III (WS)    IV (WS)
1       0.061412    0.007506    0.029273    -0.00172
2       0.055493    0.096611    0.203366    0.038317
3       0.043361    0.086978    0.050731    -0.01128
4       0.186097    0.001461    0.017537    0.022601
5       0.004175    0.035416    0.091399    0.013091
6       0.076895    0.017467    0.241146    0.145086
7       0.350745    0.073369    0.034843    0.032372
8       0.136624    0.018943    0.087778    0.193967
9       0.139334    0.004616    0.004443    0.152767
10      0.222989    0.161767    0.053209    0.059326
11      0.019776    0.335389    0.000934    0.177555
12      0.008195    0.029616    0.053567    0.156406
13      0.005551    0.039245    0.103892    -0.00054
14      0.093054    0.045477    0.218537    0.143541
15      0.011256    0.058008    0.033475    0.11088
16      0.050485    0.114029    0.104373    -0.00043
17      0.046004    0.023139    0.067668    0.041723
18      0.078826    0.013818    0.052619    0.018968
19      0.170612    0.229879    0.038737    0.078782
20      0.182267    0.162789    0.227118    0.147405
21      0.005571    0.146393    0.068311    0.13434
22      0.077338    0.156199    0.27980     0.16119
23      0.039451    0.155088    0.002386    0.026746
24      0.037815    0.253291    0.004445    0.028701
The sediment variations of the stations are presented below.

Fig 2. Variations of Water Quality and Sediment of Station I

The station I sediment variation is computed and presented in Fig 2. The sediment level increases to its maximum in the 7th month of the first year and the 7th and 8th months of the second year. The water quality and the sediment values are directly proportional to one another.

Fig 3. Variations of Water Quality and Sediment of Station II

The station II sediment variation is computed and presented in Fig 3. The sediment level increases to its maximum in the 11th month of the first year and the 7th month of the second year.

Fig 4. Variations of Water Quality and Sediment of Station III

The station III sediment variation is computed and presented in Fig 4. The sediment values fluctuate over the entire period.

Fig 5. Variations of Water Quality and Sediment of Station IV

The station IV sediment variation is computed and presented in Fig 5. All these variations occur due to the changes in the physiochemical attributes of the water.

From the fetched physiochemical attribute variations, the average sediment weights of the four stations are calculated and presented in table 3.

Table 3: Variations of physiochemical attribute average weights for sediment

Attribute   S_I     S_II    S_III   S_IV    Average
pH          0.1987  0.1975  0.2004  0.2142  0.2027
DO          0.2347  0.2044  0.1555  0.1687  0.1908
Calcium     0.2129  0.1959  0.1596  0.1601  0.1821
Chloride    0.2692  0.1803  0.2534  0.1268  0.2074
Nitrate     0.1780  0.1281  0.2209  0.2233  0.1876
TDS         0.2191  0.2193  0.1973  0.1612  0.1992
TSS         0.2147  0.1712  0.2564  0.1810  0.2058
BOD         0.2005  0.2088  0.2115  0.1797  0.2001
COD         0.1764  0.1979  0.2030  0.2182  0.1989

The average variations of the physiochemical attributes for the sediment are presented in Fig 6.

Fig 6: Physiochemical attribute variations for sediments

As per the observations of Fig 6, the chloride and total suspended solids have the largest variations at the maximum value. The calcium and nitrate have the smallest variations at the minimum level. The remaining attributes show marginal variation in line with the
water quality. The chloride has maximum variation in
both water quality and sediments.
The influence of the physiochemical attribute frequencies is computed for different sediments using back propagation prediction. The prediction aims to identify the attributes that influence sediment.
VII. CONCLUSION
The spatiotemporal variations of the physicochemical attributes were accounted for in designing the neural network back propagation model. The nine biochemical properties (temperature, pH, Dissolved Oxygen (DO), sodium, carbonate, bicarbonate, calcium, chloride, nitrate) are considered as input neurons and their interactions are assigned as hidden neurons. The weights of sediment are assigned as the output layer of neurons. A unique supervised neural network back propagation model, 9:9:2, is designed to train the dataset for water quality and predictions. The network is initialized with randomly chosen weights. The back propagation algorithm is used to find a local minimum of the error function. The gradient of the error function is computed and used to correct the initial weights. The trained neural network produced the sediment weights for the physicochemical parameters of the training dataset.
The current data is evaluated and the findings are used for the recommendations. The influences of the attributes lead to the identification of sediments. The implementation of the neural network back propagation model produced weights which show the level of ingredients in the water sources at a significant level. The experimental results are presented as a recommendation to authorities in the decision-making process for maintenance and water resource planning.
REFERENCES
1. Abrahart, R. J., Mount, N. J., Ab Ghani, N., Clifford, N. J., & Dawson, C. W. (2011). DAMP: A protocol for contextualising goodness-of-fit statistics in sediment-discharge data-driven modelling. Journal of Hydrology, 409(3), 596-611.
2. Aleksander, I., and H. Morton (1990). An Introduction to Neural Computing. Chapman and Hall, London.
3. Aleksander, I., and J. Taylor (eds.) (1992). Artificial Neural Networks 2. Elsevier Science Publishers, Amsterdam.
4. Alvarez-Guerra, M., González-Piñuela, C., Andrés, A., Galán, B., & Viguri, J. R. (2008). Assessment of Self-Organizing Map artificial neural networks for the classification of sediment quality. Environment International, 34(6), 782-790.
5. Anpalaki J. Ragavan (2008). Data mining application of non-linear mixed modeling in water quality analysis, 1-14.
6. Arockiam, L., Baskar, S. S., & Jeyasimman, L. (2012). Clustering Techniques in Data Mining. Asian Journal of Information Technology, 11(1), 40-44.
7. Aytek, A., & Kişi, Ö. (2008). A genetic programming approach to suspended sediment modelling. Journal of Hydrology, 351(3), 288-298.
8. Balasubramanian, T. and Umarani, R. (2012). Clustering: An Analysis Technique in Data Mining for Health Hazards of High Levels of Fluoride in Potable Water. International Journal of Computer Science & Engineering Technology, 2(4), 1113-1117.
9. Bhattacharya, B., Deibel, I. K., Karstens, S. A. M., & Solomatine, D. P. (2007). Neural networks in sedimentation modelling for the approach channel of the port area of Rotterdam. Proceedings in Marine Science, 8, 477-492.
10. Bianchi, M., Feliatra, F., Tréguer, P., Vincendeau, M. A., & Morvan, J. (1997). Nitrification rates, ammonium and nitrate distribution in upper layers of the water column and in sediments of the Indian sector of the Southern Ocean. Deep Sea Research Part II: Topical Studies in Oceanography, 44(5), 1017-1032.
11. Bieroza, M., Baker, A., & Bridgeman, J. (2012). New data mining and calibration approaches to the assessment of water treatment efficiency. Advances in Engineering Software, 44(1), 126-135.
12. Bogdan Skwarzec, Krzysztof Kabat, Aleksander Astel (2008). Seasonal and spatial variability of 210Po, 238U and 239-240Pu levels in the river catchment area assessed by application of neural-network based classification. Journal of Environmental Radioactivity, Elsevier Ltd.
13. Bose, N. K. and Liang, P. (1996). Neural Networks Fundamentals With Graphs, Algorithms and Applications. McGraw-Hill: New York, NY.
14. Chang-Shian Chen, Boris Po-Tsang Chen, Frederick Nai-Fang Chou, Chao-Chung Yang (2010). Development and application of a decision group Back-Propagation Neural Network for flood forecasting. Journal of Hydrology, Elsevier B.V.
15. Daniel P. Loucks (1998). Watershed Planning: Changing Issues, Processes and Expectations. Water Resources Update, 111, 38-45.
16. D. E. Walling, S. N. Wilkinson, A. J. Horowitz (2011). Catchment Erosion, Sediment Delivery, and Sediment Quality. Elsevier B.V., 305-338.
17. El-Shafie, A., Noureldin, A. E., M. R. Taha and H. Basri (2008). Neural Network Model for Nile River Inflow Forecasting Based on Correlation Analysis of Historical Inflow Data. Journal of Applied Sciences, 8: 4487-4499.
18. Garg, V., & Jothiprakash, V. (2013). Evaluation of reservoir sedimentation using data driven techniques. Applied Soft Computing, 13(8), 3567-3581.
19. Haas, T. C. (1998). Modeling waterbody eutrophication with a Bayesian belief network. School of Business Administration, University of Wisconsin, Milwaukee, WI.
20. Hamilton, S. J., Buhl, K. J., & Lamothe, P. J. (2004). Selenium and other trace elements in water, sediment, aquatic plants, aquatic invertebrates, and fish from streams in SE Idaho near phosphate mining. Handbook of Exploration and Environmental Geochemistry, 8, 483-525.
21. Izquierdo, J., Montalvo, I., Pérez-García, R., & Campbell, E. (2014). Mining solution spaces for decision making in water distribution systems. Procedia Engineering, 70, 864-871.
22. Kolli, K., & Seshadri, R. (2013). Ground Water Quality Assessment using Data Mining Techniques. International Journal of Computer Applications, 76, 39-45.
23. Md Azamathulla, H. (2013). A Review on Application of Soft Computing Methods in Water Resources Engineering. Metaheuristics in Water, Geotechnical and Transport Engineering, 27-41.
24. Sarangi, A., & Bhattacharya, A. K. (2005). Comparison of artificial neural network and regression models for sediment loss prediction from Banha watershed in India. Agricultural Water Management, 78(3), 195-208.
An Improved Least Mean Square Algorithm for Adaptive
Filter in Active Noise Control Application
R. Mustafa and A. M. Muad
R. Mustafa – Faculty of Engineering, Technology & Built Environment UCSI University, Kuala Lumpur, Malaysia
[email protected]
A. M. Muad – Faculty of Engineering and Built Environment Universiti Kebangsaan Malaysia, Bandar Baru Bangi,
Malaysia
Abstract—The method of least mean square (LMS) is used as an adaptive algorithm in active noise control (ANC) applications due to its simplicity and robustness in implementation. This paper presents an improved LMS algorithm to address the convergence performance of the error signal in a system identification process for an ANC headset, in which repeated updates of the filter weights are carried out within every audio sample. The proposed work uses field programmable gate arrays to realize a real-time hardware implementation of the LMS adaptive filter with repeated filter weight updates at a 48 kHz data sampling rate. Results from the simulations predicted error convergence for several selections of the learning constant μ, while the hardware implementation further verified the simulation results with a more stringent selection of the learning constant due to the time-varying environment.
Keywords - Least Mean Square Algorithm, System Identification, Error Convergence;
INTRODUCTION
The celebrated least mean square (LMS) algorithm is one of the most applied adaptive methods in active noise control (ANC) applications and requires real-time processing for a successful and efficient hardware implementation [1-3]. The use of a specialized digital signal processor (DSP) chip, with its capability of handling numerous floating point operations, manages to address the real-time processing issue in narrowband attenuation for ANC headsets [4-5] and in broadband attenuation for duct applications [6] based on the least mean square (LMS) algorithm.

The conventional DSP chip evaluates a signal in a sequential fashion, where each update of the filter weights might require several instruction cycles to complete. As a result, heavy processing for a complicated LMS-based algorithm will require more instruction cycles and will introduce additional time delay into the ANC system; in general, the error signal will therefore converge much more slowly. In our proposed work, we utilize the advantage of field programmable gate arrays (FPGA) to achieve real-time and faster convergence of the error signal for an LMS-based algorithm. A system identification process for an ANC headset in a broadband, time-varying and uncontrolled environment is carried out to validate the algorithm's effectiveness. In this work, we focused on the FPGA adaptive filter algorithm to create a new modified version of the LMS weight updates that could improve the convergence performance of the error signal.

The paper is organized as follows: Section 2 briefly describes the theory and underlying principle of the LMS-based adaptive filter used, followed by the design of our modified version. In section 3, the implementation of the LMS algorithm on FPGA is described, followed by the simulation results on the system identification process in section 4. Finally, the hardware implementation of the real-time system identification experiment for the ANC headset application is presented in section 5. Section 6 presents the conclusion of the findings.

LMS ADAPTIVE FILTER
The LMS algorithm is an approximation method of steepest gradient descent that relies on the value of the instantaneous squared error signal [7,8]. Due to this approximation, the calculation of the adaptive filter weights is simplified to:

w(n + 1) = w(n) + \mu \, x(n) e(n)    ...(1)

where w(n) = [w_0(n), w_1(n), ..., w_{L-1}(n)]^T are the weight updates or adaptive filter coefficient vectors for an L-length filter, x(n) = [x(n), x(n - 1), ..., x(n - L + 1)]^T are the reference signal vectors, e(n) is the error signal at sampling time n and μ is the learning constant.

Our implementation uses the structure of a finite impulse response (FIR) filter, a stable digital filter whose usual canonical form of tapped-delay input serves as the basic element to realize the designated adaptive filter. The LMS algorithm updates the FIR filter weights to produce the output signal y(n), which can be expressed as an arithmetic sum of products:

y(n) = \sum_{k=0}^{N-1} w_k(n) \, x(n - k)    ...(2)
where y(n) is the adaptive filter output at time n, x(n - k) is the tapped-delay input reference signal and w_k(n) for k = 0, 1, ..., N-1 are the N-tap FIR filter coefficients. In an ANC system, the residual error signal is acquired from a microphone and corresponds to the difference between the primary noise d(n) and the adaptive filter output y(n), which is expressed as:

e(n) = d(n) - y(n)    ...(3)

Repeated Weight Updates
In general, the numerical calculation for updating the filter weights on a conventional DSP chip is performed once in every sample of the audio signal, expressed as [1]:

w_k(n + 1) = w_k(n) + \mu \, x(n - k) e(n)    ...(4)

where the error signal e(n) is multiplied with each tapped-delay input of the reference signal x(n - k), along with the learning constant μ, before being summed with the previous weights.

However, if we allow the weight updates to be calculated more than once within each sample of the audio signal [9], then logically we can expect faster adaptation of the weight updates and hence faster convergence of the error signal. To achieve this, we have utilized the parallel-processing capability of the FPGA to modify equation (4) into:

w_k(n + 1) = w_k(n) + \mu \, x(n - k) [e(n - k) h_k(n)]    ...(5)

where e(n - k) is the tapped-delayed error signal and h_k(n) is the impulse response of an all-pass filter.
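The contrast between the conventional update of equation (4) and the repeated, delayed-error update of equation (5) can be sketched in plain C++ as follows. This is an illustrative software model only; in the actual design these updates run in parallel on the FPGA, and the filter length, μ and the all-pass response h_k(n) are placeholders.

#include <cstddef>
#include <vector>

// Sketch: standard LMS update (eq. 4) vs. repeated weight updates (eq. 5).
// Filter length, mu and the all-pass response h are placeholders.
struct LmsFilter {
    std::vector<double> w;      // adaptive FIR weights w_k(n)
    std::vector<double> x;      // tapped-delay line of the reference signal
    double mu;
    LmsFilter(std::size_t L, double mu_) : w(L, 0.0), x(L, 0.0), mu(mu_) {}

    // Push a reference sample and compute the FIR output y(n) of eq. (2).
    double output(double xn) {
        for (std::size_t k = x.size() - 1; k > 0; --k) x[k] = x[k - 1];
        x[0] = xn;
        double y = 0.0;
        for (std::size_t k = 0; k < w.size(); ++k) y += w[k] * x[k];
        return y;
    }
    // Conventional update, eq. (4): one pass per audio sample.
    void update(double e) {
        for (std::size_t k = 0; k < w.size(); ++k) w[k] += mu * x[k] * e;
    }
    // Repeated update, eq. (5): each tap uses a delayed error eHist[k] = e(n-k)
    // shaped by h_k(n); on the FPGA these per-tap updates run in parallel.
    void updateRepeated(const std::vector<double>& eHist, const std::vector<double>& h) {
        for (std::size_t k = 0; k < w.size(); ++k)
            w[k] += mu * x[k] * (eHist[k] * h[k]);
    }
};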
FPGA IMPLEMENTATION
The FIR-based LMS adaptive filter is designed using a conventional multiplier and accumulator and is implemented on an Altera Cyclone II FPGA chip embedded on a development platform. The development platform also features real-time audio coding and decoding (codec) that facilitates the conversion between analogue and digital audio signals at a 48 kHz sampling rate, along with a simple anti-aliasing and reconstruction filter. Therefore, an additional analogue audio interface circuit for the FPGA chip is avoided.

Learning Constant, μ
Due to the fixed-point data representation on the FPGA, a truncation process during calculation is unavoidable, especially to match the size of the data at the end terminal. The truncation process also has the implication of dividing the data by a factor of 2, besides introducing quantization and rounding errors. When this is implemented in the path of the filter weight updates, this division effect implicitly infers a multiplication with a learning constant μ. As a result, we obtained the practical bound of the learning constant in our design, that is:

0 < μ ≤ 2^-6    ...(6)

SIMULATIONS
The simulation result is obtained using the built-in simulator in the Altera Quartus II development platform. However, due to memory and software limitations, a complete MSE convergence, represented by the convergence of the instantaneous error signal output, could not be fully observed within the simulation time. Instead, a prediction of convergence is made based on the initial result observed from the output of the error signal. Based on the results, it is indicated that in general MSE convergence only occurred with the learning constant in the range 2^-24 ≤ μ ≤ 2^-19.

EXPERIMENTS
The schematic diagram in Figure 1 shows the real-time hardware implementation of system identification for a broadband ANC headset in a laboratory time-varying environment. A personal computer (PC) is used to synthesize the HDL code and program it onto the Altera DE2 development board. The PC sound card is used to acquire the resulting error signal from a microphone using Matlab Simulink. The headset used is the HD280 model from Sennheiser and has been modified to insert a small microphone through its earmuff. In this experiment, a mannequin wearing the headset is used to replicate a human being.

Figure 1. System identification for broadband ANC headset using FPGA.

Figure 2 shows the average power spectrum density (PSD) of the error signal at different learning constants μ, normalized to the primary signal d(n). The graph shows that the PSD of the error signal for learning constants in the ranges μ ≥ 2^-19 and μ ≤ 2^-25 is greater than or almost equal to the PSD of the primary signal; therefore, the MSE signal does not converge in these ranges. If we compare this with the simulation results, we can see that convergence of the MSE in the hardware implementation occurred with the learning constant in the range 2^-24 ≤ μ ≤ 2^-21 and is not achievable elsewhere. Based on this observation we can conclude that the practical bound for the learning constant in the hardware implementation is more stringent than in simulation due to the real-life time-varying environment.
Figure 2. Average PSD of 1000 samples of the error signal at different learning constants, μ.

CONCLUSION
In this study, a modified LMS-based algorithm has been successfully implemented on FPGA for use in an ANC application. This is achieved with the help of repeated weight updates accomplished within every sample of the audio signal, for which system identification has successfully been carried out to demonstrate the algorithm's effectiveness. The simulation and experimental results indicate the capability of the developed algorithm to perform well in a real-life time-varying environment.

ACKNOWLEDGMENT
The authors acknowledge the Faculty of Engineering and Built Environment of Universiti Kebangsaan Malaysia (UKM) for providing the laboratory facility to conduct the research.

REFERENCES
[1] Sen M. Kuo and Dennis R. Morgan, Active Noise Control Systems: Algorithms and DSP Implementations, John Wiley & Sons, Inc., New York, 1996.
[2] P. A. Nelson and S. J. Elliott, Active Control of Sound, Academic Press, London, 1992.
[3] S. J. Elliott, Signal Processing for Active Control, Academic Press, New York, 2001.
[4] W. S. Gan, S. Mitra and S. M. Kuo, "Adaptive Feedback Active Noise Control Headset: Implementation, Evaluation and Its Extensions," IEEE Transactions on Consumer Electronics, Vol. 51, No. 3, August 2005, pp. 975-982.
[5] Sen M. Kuo, S. Mitra and W. S. Gan, "Active Noise Control System for Headphone Applications," IEEE Transactions on Control Systems Technology, Vol. 14, No. 2, March 2006, pp. 331-335.
[6] C. Y. Chang, "Efficient Active Noise Controller using a Fixed-point DSP," Signal Processing, Vol. 89, 2008, pp. 843-850.
[7] B. Widrow, J. R. Glover, J. M. McCool, J. Kaunitz, C. S. Williams, R. H. Hearn, J. R. Zeidler, E. Dong and R. C. Goodlin, "Adaptive Noise Cancelling: Principles and Applications," Proceedings of the IEEE, Vol. 63, Dec 1975, pp. 1692-1716.
[8] S. Haykin, Adaptive Filter Theory, 4th ed., Prentice-Hall, New Jersey, 2002.
[9] R. Mustafa and M. A. Mohd Ali, "Fast and Efficient Least Mean Square Algorithm for Active Noise Control System Identification," Acoustical Letter, Acoust. Sci. & Tech., Vol. 33(2), 2012, pp. 111-112.
Hard Exudates and Cotton Wool Spots Localization in Digital
Fundus Images Using Multi-prototype Classifier
Methee Thepmongkhon 1,2, Kittichai Wantanajittikul 1,2, Sansanee Auephanwiriyakul 2,3, Senior Member, IEEE, Direk Patikulsila 4, and Nipon Theera-Umpon 2,5, Senior Member, IEEE
1 Biomedical Engineering Program, Faculty of Engineering, Chiang Mai University, Chiang Mai, Thailand
2 Biomedical Engineering Center, Chiang Mai University, Chiang Mai, Thailand
3 Department of Computer Engineering, Faculty of Engineering, Chiang Mai University, Chiang Mai, Thailand
4 Department of Ophthalmology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
5 Department of Electrical Engineering, Faculty of Engineering, Chiang Mai University, Chiang Mai, Thailand
[email protected]
Abstract—Diabetic retinopathy (DR) can lead to blindness in patients with diabetes. Early DR screening can help reduce the blindness rate. To help the ophthalmologist in DR screening, an automatic abnormality detection system is needed. Two of the important abnormalities are hard exudates and cotton wool spots. In this paper, an automatic hard exudates and cotton wool spots localization method is proposed. The proposed system is developed based on a multi-prototype scheme created by the possibilistic c-means (PCM) and the nearest neighbor classifier. The results show that the sensitivity and the positive predictive value (PPV) are 75.6% and 64.8%, respectively. These promising sensitivity and PPV values indicate that the proposed system can properly locate these two abnormalities.
Keywords-hard exudates; cotton wool spots; possibilistic c-means; image segmentation; multi-prototypes
INTRODUCTION
Patients who have had diabetes for five or more years can develop diabetic retinopathy (DR) and, eventually, blindness. There are approximately 50 to 65 blindness cases per 100,000 people [1]. Early DR screening of diabetic patients can help reduce the risk. Since DR screening is performed manually by a trained ophthalmologist, it takes a great deal of time because of the large number of retinal images that need to be reviewed. To help the ophthalmologists, automatic DR screening is required. To create an automatic detection system, we first need to localize the abnormalities of DR. Two of the common abnormalities of DR are cotton wool spots (small, whitish/grey, cloud-like, linear or serpentine, slightly elevated lesions with fimbriated edges that appear to float within the substance of the inner retina) and hard exudates (deeper, yellowish, well-defined, crystalline granules commonly associated with retinal exudative and inflammatory processes) [2]. Figure 1 shows an optic disk along with both abnormalities.
There are several research works on finding hard exudates [3-6]. Some of them considered the detection of both hard exudates and cotton wool spots [7-8]. Although these works yielded good detection results, they all use complicated methods. In this paper, we utilize a rather simple clustering method to locate hard exudates and cotton wool spots in fundus images. In particular, we utilize the possibilistic C-means (PCM) clustering algorithm [9] to create multi-prototypes and then utilize the nearest neighbor classifier to find these abnormalities after eliminating the optic disk.
Figure 1: Sample original fundus images of (a) a normal eye and (b) an eye with abnormalities: hard exudates (yellow circle) and cotton wool spot (green circle). The optic disk is indicated by a blue circle.
MATERIAL AND METHOD
Data preparation
The public data set DiaRetDB1 version 2.1 was downloaded from http://www2.it.lut.fi/project/imageret/diaretdb1_v2_1/. There are 89 fundus images in total, each with a size of 1500×1152 pixels. We selected 9 fundus images that have hard exudates or cotton wool spots as the training data set. We tested our system on the remaining 80 fundus images.
Proposed method
We utilized the possibilistic C-means (PCM) clustering algorithm [9] to create the multi-prototypes for the testing process. We briefly describe the PCM as follows. Let X = \{x_j \mid j = 1, \ldots, N\} be a set of N feature vectors in a p-dimensional feature space. Let B = (c_1, \ldots, c_C) represent a C-tuple of prototypes, each of which characterizes one of the C clusters.
The objective function is as follows:

J_m(B, U; X) = \sum_{i=1}^{C} \sum_{j=1}^{N} (u_{ji})^m d^2(x_j, c_i) + \sum_{i=1}^{C} \eta_i \sum_{j=1}^{N} (1 - u_{ji})^m    (1)

where m \in [1, \infty) is called the fuzzifier. In our experiment, we set m = 2. The \eta_i are suitable positive numbers that need to be estimated from the distance statistics. They are calculated as

\eta_i = \frac{\sum_{j=1}^{N} (u_{ji})^m d^2(x_j, c_i)}{\sum_{j=1}^{N} (u_{ji})^m}.    (2)

The update equation of the membership of x_j in each cluster i is:

u_{ji} = \frac{1}{1 + \left( \frac{d^2(x_j, c_i)}{\eta_i} \right)^{\frac{1}{m-1}}}    (3)

where d^2(x_j, c_i) is the squared distance between a vector x_j and the center c_i. Therefore, the membership u_{ji} is not relative; it depends only on the distance of x_j from the cluster center c_i rather than on the distances of x_j from all the other prototypes. The update equation of the cluster center c_i is

c_i = \frac{\sum_{j=1}^{N} (u_{ji})^m x_j}{\sum_{j=1}^{N} (u_{ji})^m}.    (4)
To create the multi-prototypes, we selected 5,000 black pixels from the areas surrounding the retinas, 5,000 red pixels from areas that are blood vessels and hemorrhages, 5,000 orange pixels from the retinas' background and 5,000 yellow pixels from the hard exudates and cotton wool spots; there were 20,000 pixels in total. We then utilized the Red, Green and Blue channels along with the Hue [11] as our features. The Hue is calculated as follows:

H = \begin{cases} \theta & \text{if } B \le G \\ 360 - \theta & \text{if } B > G \end{cases}    (5)

where

\theta = \cos^{-1} \left( \frac{\frac{1}{2}\left[ (R-G) + (R-B) \right]}{\sqrt{(R-G)^2 + (R-B)(G-B)}} \right).    (6)

We implemented the PCM with 250 clusters on each class separately. We then had 1,000 prototypes in total. Now we are ready to test the fundus images.

Figure 2: Optic disk elimination process, (a) original image, (b) green channel, (c) chosen area, (d) cropped image, (e) adaptive histogram equalization of (d), (f) thresholding of (e), (g) opening of (f), (h) closing of (g), (i) largest area, (j) shifted centroid, (k) mask for optic disk, (l) final image without optic disk.

Figure 3: Structuring element for the opening and closing operations (a 3×3 cross: 0 1 0 / 1 1 1 / 0 1 0).

To make the testing process easier, we eliminated the optic disk area first. The process was done on the Green channel of the fundus image, as shown in figure 2(b). Since the optic disk is always located around the middle left or middle right of the image, we then cropped the middle
part of the image. To do this, we created the area by including 25% of the number of rows above the middle line and 25% of those below that line, as shown in figure 2(c), and then we cropped this area as shown in figure 2(d). Next, to enhance the cropped image, we implemented the adaptive histogram equalization [10] as shown in figure 2(e). Then we created the binary image shown in figure 2(f) using global thresholding. We added 80 to the thresholding value calculated from Otsu's method [10] and used that as the global thresholding value in the process. To eliminate small areas in the image, we applied the morphological opening and closing [10] as shown in figures 2(g) and 2(h), with the structuring element shown in figure 3. The centroid of the largest white area (figure 2(i)) was then calculated. We selected the pixel to the left (or right, depending on the position of the area) of the centroid by 3% of the number of columns to be the center of the optic disk, as shown in figure 2(j). Then a circle with a diameter of 20% of the number of columns was drawn, as shown in figure 2(k). This area was superimposed on the fundus image, as shown in figure 2(l), to delete the optic disk before we implemented the hard exudates and cotton wool spots localization.
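The optic disk removal steps above translate almost directly into OpenCV. The following C++ sketch follows the description (middle band crop, adaptive equalization, Otsu threshold plus 80, opening and closing with the cross-shaped element, largest component), with our own variable names and with the direction of the 3% centroid shift left as an assumption.

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of the optic disk elimination described above. The shift
// direction of the centroid is an assumption; the paper shifts left or
// right depending on the position of the detected area.
cv::Mat removeOpticDisk(const cv::Mat& bgr) {
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch);
    cv::Mat green = ch[1];                                        // green channel, Fig. 2(b)
    int r0 = bgr.rows / 4, r1 = bgr.rows * 3 / 4;                 // middle band, Fig. 2(c)-(d)
    cv::Mat band = green.rowRange(r0, r1).clone();
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE();                 // adaptive equalization, Fig. 2(e)
    clahe->apply(band, band);
    cv::Mat tmp, bin;
    double otsu = cv::threshold(band, tmp, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    cv::threshold(band, bin, otsu + 80, 255, cv::THRESH_BINARY);  // Otsu value + 80, Fig. 2(f)
    cv::Mat cross = cv::getStructuringElement(cv::MORPH_CROSS, cv::Size(3, 3));
    cv::morphologyEx(bin, bin, cv::MORPH_OPEN, cross);            // Fig. 2(g)
    cv::morphologyEx(bin, bin, cv::MORPH_CLOSE, cross);           // Fig. 2(h)
    cv::Mat labels, stats, cents;
    int n = cv::connectedComponentsWithStats(bin, labels, stats, cents);
    int best = -1, bestArea = 0;
    for (int i = 1; i < n; ++i) {                                 // largest white area, Fig. 2(i)
        int a = stats.at<int>(i, cv::CC_STAT_AREA);
        if (a > bestArea) { bestArea = a; best = i; }
    }
    cv::Mat out = bgr.clone();
    if (best >= 0) {
        cv::Point c(static_cast<int>(cents.at<double>(best, 0)),
                    static_cast<int>(cents.at<double>(best, 1)) + r0);
        int shift = static_cast<int>(0.03 * bgr.cols);            // 3% of the columns, Fig. 2(j)
        c.x += (c.x < bgr.cols / 2) ? -shift : shift;             // assumed shift direction
        int radius = static_cast<int>(0.10 * bgr.cols);           // diameter = 20% of columns
        cv::circle(out, c, radius, cv::Scalar(0, 0, 0), cv::FILLED);  // mask the disk, Fig. 2(l)
    }
    return out;
}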
After we eliminated the optic disk area of the fundus
image, we implemented the nearest neighbor classifier
by finding the closest prototype. We then assigned that
pixel to the class of the closest prototype.
Figure 5: (a) Fundus image with hard exudates. (b) Result from the system. (c) Hard exudates from expert's opinion. (d) Cotton wool spots from expert's opinion.
An example of correct hard exudates (green circle) and cotton wool spots (blue circle) localization is shown in figure 5(b). However, in the figure, we can see that there are some missed cotton wool spot areas. This might be because the cotton wool spot areas have a similar color to the other parts of the fundus image. For quantitative evaluation, we use sensitivity and positive predictive value (PPV) to show the localization performance of the system. The sensitivity and the PPV are 75.6% and 64.8%, respectively.
CONCLUSION
Patients with diabetes can develop diabetic retinopathy (DR) and finally become blind. To help the ophthalmologists in DR screening, an automatic abnormality detection system is needed. In this paper, we developed a system to locate hard exudates and cotton wool spots, two of the important abnormalities in DR. The system was implemented with the multi-prototypes created by the possibilistic c-means (PCM) clustering algorithm. The nearest neighbor classifier was utilized to locate the hard exudates and the cotton wool spots. The results show that the sensitivity and the PPV of the system are 75.6% and 64.8%, respectively, indicating that we can use a simple algorithm to locate these abnormalities. It should be noted that this system does not need any preprocessing before performing the localization. In our future work, the k-nearest neighbor classifier will be used in the localization process to help increase the sensitivity and the PPV.
RESULTS AND DISCUSSION
The testing was implemented on the remaining 80
fundus images. There were only 140 areas of hard
exudates and cotton wool spots. The results were
compared to the ground truth provided by the Machine
Vision
and Pattern
Recognition
Laboratory,
Lappeenranta University of Technology. We counted an
area as one area if all pixels were connected as 8connected component. An example of correct hard
exudates localization is shown in figure 4(b) (green
circle).
REFERENCES
[1] J. A. Olson, F. M. Strachan, J. H. Hipwell, K. A. Goatman, K. C. McHardy, J. V. Forrester, and P. F. Sharp, "A comparative evaluation of digital imaging, retinal photography and optometrist examination in screening for diabetic retinopathy", Diabet. Med., vol. 20, pp. 528-534, 2003.
[2] J. G. Arroyo, "Cotton-Wool Spots May Challenge Diagnosis", Review of Ophthalmology, vol. 11, issue 4, p. 111, 2004.
[3] N. G. Ranamuka and R. G. N. Meegama, "Detection of hard exudates for diabetic retinopathy images using fuzzy logic", IET Image Processing, vol. 7, issue 2, pp. 121-130, 2013.
[4] M. G. F. Eadgahi and H. Pourreza, "Localization of Hard Exudates in Retinal Fundus Image by Mathematical Morphology Operation", 2012 2nd International
eConference on Computer and Knowledge Engineering
(ICCKE), pp. 185-189, 2012.
[5] X. Chen, W. Bu, X. Wu, B. Dai and Y. Teng, “A Novel
Method for Automatic Hard Exudates Detection in Color
Retinal Images” Proceedings of the 2012 International
Conference on Machine Learning and Cybernetics, pp.
1175-1181, 2012.
[6] G. Fang, N. Yang, H. Lu, K. Li, “Automatic Segmentation
of Hard Exudates in Fundus Images Based on Boosted Soft
Segmentation”, 2010 International Conference on
Intelligent Control and Information Processing (ICICIP),
pp. 633-638, 2010.
[7] A. W. Reza, C. Eswaran and K. Dimyati, "Diagnosis of Diabetic Retinopathy: Automatic Extraction of Optic Disc and Exudates from Retinal Images using Marker-controlled Watershed Transformation", Journal of Medical Systems, vol. 35, pp. 1491-1501, 2011.
[8] A. Reza, C. Eswaran and Subhas Hati, "Automatic Tracing of Optic Disc and Exudates from Color Fundus Images Using Fixed and Variable Thresholds", Journal of Medical Systems, vol. 33, pp. 73-80, 2009.
[9] R. Krishnapuram and J. M. Keller, "A possibilistic approach to clustering", IEEE Transactions on Fuzzy Systems, vol. 1, pp. 98-110, 1993.
[10] R. C. Gonzalez and R. E. Woods, Digital Image Processing
(third edition), Pearson Education, Inc., New Jersey, 2008.
[11] K. Jack, Video Demystified, 5th Edition., 2007.
An Improved Hybrid Algorithm for Accurate Determination of
Parameters of Lung Nodules with Dirichlet Boundaries in CT
Images
G. Niranjana1, Dr. M. Ponnavaikko2
1 Assistant Professor, SRM University, Chennai, Tamilnadu, India
2 Vice-Chancellor, Bharath University, Chennai, Tamilnadu, India
Abstract - Lung cancer causes the most deaths among all cancers, and CT is the best modality for imaging lung cancer. The separation of tumor regions with Dirichlet boundaries from normal tissue is a challenging task. A hybrid method for segmentation of cancerous tumors from CT scan images is presented. The input image is considered as a graph, with each pixel represented as a node. Two seed points, which are user-defined (pre-labeled) pixels, are given as labels, one for the foreground and the other for the background, and the gradients at the seed points are calculated. The probability of walking from each unlabeled pixel to each labeled pixel is calculated, and a vector of probabilities is defined for each unlabeled pixel. By combining this vector of probabilities, each unlabeled pixel can be assigned to one of the labels using the watershed algorithm to obtain the tumor segmentation. We used 23 images to validate our method, and our experiments compared the original random walker algorithm, the random walker with improved weights, and the random walker with improved weights combined with the watershed algorithm. The maximum DSC values obtained are 0.92 for the random walker, 0.94 for the random walker with improved weights, and 0.97 for the watershed-combined method.
Keywords: Lung Cancer, CT images, Random Walker algorithm, Watershed algorithm.
1. Introduction

Lung cancer is among the most common and deadliest diseases. According to the latest survey, reported in 2014 [1], a total of 159,260 people died of lung cancer in the US, and in India 63,000 new lung cancer cases are reported every year. According to a recent WHO survey, the mortality rate of lung cancer is higher than that of any other cancer [1]. It is very difficult to detect the cancer at its early stage. Various computer-aided diagnosis (CAD) systems, as reported in [1], have been designed for the early diagnosis of lung tumors. Early diagnosis of a lung tumor can increase the survival rate by 1 to 5 years. Hence a proper method for detection and classification of lung tumors is the need of the hour.
Most segmentation methods have an automatic implementation. However, automatic segmentation techniques do not always provide accurate results, since the tumor size and position can differ over different pixel ranges [2]. Since interactive segmentation methods use the user's guidance, their segmentation results tend to be more accurate. Hence interactive segmentation techniques are used in medical image processing [3,17].

An improved hybrid approach [17,18] for interactive image segmentation using the random walker algorithm with modified weights is proposed in this paper. In the random walker algorithm presented by Leo Grady [4], given K user-defined (pre-labeled) pixels as labels, the probability that a random walker starting at each unlabeled pixel reaches each of these K labels can be found. Unlike the original random walker algorithm, the proposed method obtains a K-tuple vector of probabilities for each unlabeled pixel and combines it with the watershed algorithm: the resulting image is segmented using the watershed transform, producing more accurate delineations between the objects and their boundaries [5]. The Dice similarity coefficient (DSC) is used as a statistical validation metric to evaluate both the reproducibility of the manual segmentations and the spatial overlap accuracy of the automated probabilistic fractional segmentation of the images.
Compared to the original random walker method, our method has the following advantages:
 Accurate segmentation of nodules with Dirichlet boundaries
 Use of a constant value instead of the free parameter β
 Increased accuracy with a minimum number of seed points

The rest of the paper is organized as follows: section 2 reviews related work, section 3 describes the proposed method, and sections 4 and 5 present the experimental results and the conclusion.

2. Related Work

Segmentation is a widely researched topic, and the numerous segmentation algorithms can be roughly classified into four categories: (1) thresholding-based methods, (2) region-based methods, (3) stochastic and learning-based methods and (4) boundary-based methods [6]. This paper addresses a graph-based segmentation approach based on the principle of random walks combined with watershed segmentation; we therefore limit our review to region-based random walk segmentation and boundary-based watershed segmentation algorithms.
In region based segmentation methods the homogeneity of
the image is the main consideration for determining object
boundaries. The region-based segmentation methods also
utilize the intensities of the image for detecting boundaries.
The region-based methods are mainly divided into two
subgroups: Region Growing and Graph based methods [6].
Region growing technique incorporates spatial information
in the image along with the intensity information.
algorithm starts at a user defined seed point and based on
the mean and standard deviation of the intensities within the
local seed region, connected pixels are either included or
excluded in the segmentation results. A second input, a
homogeneity metric, is used to decide how different a new
pixel can be from the statistics of the region already selected
and can still be included in the segmentation [8]. This
process is repeated until the entire region of interest has
been segmented or the segmented region does not change
further. Although region growing methods have been
shown to work well in homogeneous regions with
appropriately set intensity homogeneity parameters,
segmentation of heterogeneous structures has not been
satisfactory. Region growing may fail even for sufficiently
homogeneous uptake regions when the homogeneity
parameter of the region growing algorithm is not
appropriately set [10,11].
The two most common graph-based methods used for segmentation are the graph cut and random walk techniques [6]. The graph cuts [5] technique has been developed as a method for interactive, seeded segmentation. Graph cuts views the image as a graph, weighted to reflect intensity changes. The user marks some nodes as foreground and others as background, and the algorithm performs a max-flow/min-cut analysis to find the minimum-weight cut between the source and the sink. A feature of this algorithm is that an arbitrary segmentation may be obtained with enough user interaction [16]. However, although it performs well in many situations, there are a few concerns associated with this technique. Since the algorithm returns the smallest cut separating the seeds, it will often return the cut that minimally separates the seeds from the rest of the image if a small number of seeds are used; a user may therefore need to continue placing seeds in order to overcome this "small cut" problem. Additionally, the K-way graph cuts problem is NP-hard, requiring the use of a heuristic to obtain a solution. Finally, multiple "smallest cuts" that are quite different from each other may exist in the image, so a small amount of noise (adjusting even a single pixel) could cause the contour returned by the algorithm to change drastically.
Leo Grady [4] proposed a semi-supervised random walker approach to interactive image segmentation formulated on a weighted graph, where each unlabeled pixel is assigned the label of the seed to which it is most likely to send a random walker. This algorithm has been shown to perform well on different types of images, but it is strongly influenced by the placement of the labels within the image [6].
3. Proposed Method

The flowchart of the proposed algorithm is given in Fig. 1. In order to reduce the noise in the CT image, a median filtering technique is used initially. The preprocessed image is then segmented using a global thresholding technique to extract the lung region. Using the user-defined input as the labels, a vector of probabilities is defined for each unlabeled pixel using the random walker algorithm. Combining the vector of probabilities for each unlabeled pixel, a label is assigned using the watershed algorithm for tumor region extraction.

Graph-based approaches have a big advantage over other segmentation methods: they incorporate efficient recognition into the segmentation process by using foreground and background seeds, specified by the user (supervised) or automatically (unsupervised), to locate the objects in the image [9]. These seed points act as hard constraints and combine global information with local pairwise pixel similarities for optimal segmentation results.
3.1. Preprocessing & Lung Extraction
The CT images used are noisy with obscure edges. In order
to improve segmentation of the region of interest, we use
median filtering. The goal of median filtering is to filter out
noise that has corrupted the image. It is based on a statistical
approach. Median filtering is a nonlinear operation often
used in image processing to reduce “salt and pepper” noise
[7]. A median filter is more effective than convolution when
the goal is to simultaneously reduce noise and preserve
edges.
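A minimal MATLAB sketch of this preprocessing stage and of the lung extraction described next is given below; the file name and the filter size are assumptions, not details given by the authors.

ct  = imread('lung_ct.png');     % hypothetical CT slice (grayscale)
den = medfilt2(ct, [3 3]);       % median filter against salt-and-pepper noise
lvl = graythresh(den);           % Otsu's global threshold
bw  = imbinarize(den, lvl);      % binary lung mask by global thresholding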
The segmentation stage separates the objects and borders (lines, curves) in an image. A global thresholding technique is used to segment and extract the lung region. Thresholding is a non-linear operation that converts a gray-scale image into a binary image in which the two levels are assigned to pixels below or above the specified threshold value. Otsu's method is used to compute the global image threshold. Otsu's method is based on threshold selection by statistical criteria: Otsu suggested minimizing the weighted sum of the within-class variances of the object and background pixels to establish an optimum threshold.

Fig 1: Flowchart of the proposed algorithm (input CT image, preprocessing, lung extraction, nodule segmentation, segmented nodule output).
Fig 2: CT image of lungs.
Fig 3: Image of the extracted lung region.

3.2. Nodule Segmentation

In order to reduce variability for feature extraction, the first and essential step is to accurately delineate the lung nodules. Accurate delineation of lung tumors is also crucial for optimal radiation oncology. The following algorithm is used for segmenting the lung nodule.

Algorithm:
1. Read the user-defined seed points to mark the tumor and non-tumor regions.
2. Apply the random walker algorithm: given a set of user-defined (pre-labeled) pixels as labels, the probability of walking from each unlabeled pixel to each labeled pixel is calculated.
3. A vector of probabilities is defined for each unlabeled pixel.
4. Combine the vector of probabilities obtained for each unlabeled pixel and assign the pixel to one of the labels using the watershed algorithm to obtain the tumor segmentation.

3.2.1. Overview of Random Walker Algorithm:

In image segmentation, the relationship between random walks and the Dirichlet problem is established by clustering the respective sub-regions according to the user's inputs [1]. The algorithm calculates the probability that a constrained random walker starting from each unlabeled pixel will first reach the labeled pixels (seeds) [15]. A final segmentation is obtained by selecting, for each point, the most probable seed destination of the random walker.

A graph consists of a pair G = (V, E) with vertices (nodes) v ∈ V and edges e ∈ E ⊆ V × V. An edge e spanning two vertices v_i and v_j is denoted by e_ij. A weighted graph assigns a value, called a weight, to each edge; the weight of an edge e_ij is denoted by w(e_ij) or simply w_ij. The degree of a vertex is d_i = Σ w(e_ij) over all edges e_ij incident on v_i. In order to interpret w_ij as the bias affecting a random walker's choice, we require that w_ij > 0. We also assume that our graph is undirected and connected.

Given a set of foreground seeds F and background seeds B, where the set of seeded nodes is S = F ∪ B with F ∩ B = ∅, the probability x_i that a random walker starting at node v_i first reaches a seeded node v_S is equivalent to the solution of the Dirichlet problem of finding the harmonic function subject to its boundary values:

L_U X_U = −B^T X_M    (1)

where L_U, the block of the combinatorial Laplacian over the unseeded nodes, is one component of the decomposition of the Laplacian matrix, Eq. (2), B is the block coupling the seeded and unseeded nodes, and X_M holds the boundary conditions at the locations of the seeded points.
L_ij = d_i      if i = j,
       −w_ij    if v_i and v_j are adjacent nodes,
       0        otherwise    (2)

With a defined set of seeds, the belongingness of an unlabelled node x_i to the seed v_S with label S, where S ∈ {F, B}, is identified as the label whose first-arrival probability pr_i is the highest:

v_i = s, since pr_i(s) = max_S pr_i(S)    (3)
The weighting function is represented by the typical Gaussian weighting function

w_ij = exp{−β(g_i − g_j)^2}    (4)

where g_i is the image intensity at pixel i. The value of β is the only free parameter in this algorithm.
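For illustration, a MATLAB sketch of Eqs. (1)-(4) on a 4-connected pixel grid is given below. It assumes a grayscale image g (double, scaled to [0,1]), linear indices of the seeded pixels in seeds, and their labels in seedLab (1 = tumor, 0 = background); the variable names and the β value are ours, not the paper's, and this follows Grady's formulation [4].

[nr, nc] = size(g);
N   = nr*nc;
idx = reshape(1:N, nr, nc);
% edges of the 4-connected lattice (horizontal, then vertical)
e1 = [reshape(idx(:,1:end-1),[],1); reshape(idx(1:end-1,:),[],1)];
e2 = [reshape(idx(:,2:end),  [],1); reshape(idx(2:end,:),  [],1)];
beta = 90;                                      % free parameter of Eq. (4)
w  = exp(-beta*(g(e1) - g(e2)).^2);             % Gaussian weights, Eq. (4)
W  = sparse([e1; e2], [e2; e1], [w; w], N, N);  % symmetric adjacency matrix
L  = spdiags(full(sum(W,2)), 0, N, N) - W;      % combinatorial Laplacian, Eq. (2)
u  = setdiff((1:N)', seeds(:));                 % unseeded nodes
B  = L(seeds, u);                               % seeded/unseeded block of L
xU = L(u,u) \ (-B' * double(seedLab(:)));       % solve L_U x_U = -B' x_M, Eq. (1)
x  = zeros(N,1);  x(seeds) = seedLab;  x(u) = xU;
fg = reshape(x, nr, nc) >= 0.5;                 % label by the larger probability, Eq. (3)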
3.2.2. Improvements in weighting function

In the original random walker method, the parameter β in (4) is a free parameter defined by the user. We propose to use the distance between adjacent nodes in place of a constant parameter, to take into account the different distances between adjacent nodes [9]. Thus, the weight function in Eq. (4) becomes

w_ij = exp{−(g_i − g_j)^2 / h_ij}    (5)

where the added term h_ij represents the Euclidean distance between adjacent pixels i and j, and we set β = 1. In the initial method, the probability depends only on the gradient between pixels, but not directly on their intensity. To strengthen the grouping of pixels having similar intensity [20], the likelihood of belonging to each class (tumor and non-tumor) is added to Eq. (4). The improvements proposed for the algorithm use local information.
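Ahead of section 3.2.3 below, a short MATLAB sketch of its combination step is given here: the per-label probability maps are multiplied into the resultant image R of Eq. (6), R is inverted, and a marker-controlled watershed is applied with the seed regions as markers. probFG, probBG and the two seed masks are assumed to come from the random walker step; imimposemin and watershed are Image Processing Toolbox functions.

R     = probFG .* probBG;                  % Eq. (6): product of the K = 2 probability maps
Rinv  = imcomplement(mat2gray(R));         % invert so ridges separate the regions
mrk   = fgSeedMask | bgSeedMask;           % (grown) labeled pixel regions act as markers
Lws   = watershed(imimposemin(Rinv, mrk)); % marker-controlled watershed transform
tumor = Lws == Lws(find(fgSeedMask, 1));   % watershed region containing the tumor seeds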
3.2.3. Combining probabilities and watershed

Once we obtain the K-tuple vector of probabilities for each unlabeled pixel, we combine this vector of probabilities into one value by taking the product of all the probabilities in the vector, in order to obtain a resultant image R:

R = ∏_j x_u^j    (6)

The resultant image R will have maximum values in the areas where the probabilities x_u^j are equal for every 0 < j ≤ K, i.e., when an unlabeled pixel has equal probability of reaching any of the K labels. Since the probability of an unlabeled pixel first reaching any labeled pixel decreases as we move away from that labeled pixel, we see a local ridge formation around each labeled pixel in the resultant image generated by (6). We grow the labeled pixel regions in all directions until they reach their corresponding ridge locations in the image R, thereby extending the original labeled pixel regions fed by the user to a larger size. Finally, we invert this image R and perform a marker-controlled watershed transform on the inverted image, where the labeled pixel regions act as markers. This leads to improved image segmentation with more accurate delineations between object boundaries. Fig. 4 shows the segmented nodule region.

Fig 4: Detected nodule region.

4. Experimental Results

We tested our method on 23 CT images; a sample of 5 is shown in Fig. 5. Nodules are segmented using the random walker (RW), the random walker with improved weights (RWIW) and the random walker with improved weights combined with the watershed (RWIW-WS). Segmentation accuracy is calculated based on boundary descriptors such as area, major axis, minor axis, eccentricity and perimeter [22,23]. Results obtained for the tested CT images are given in Table I. The DSC is calculated as

DSC = 2|M ∩ A| / (|M| + |A|)    (7)

where M is the manual segmentation of the nodule and A is the segmentation of the nodule by the proposed method.

5. Conclusion

The varying image conditions and complexity of medical images make fully automated segmentation techniques unreliable, and segmenting regions with vague boundaries is a difficult task. Thus there arises a need for user-interactive segmentation techniques, in which radiologists and oncologists can participate in image segmentation. The proposed method is used to identify lung nodules with irregular boundaries from CT images using
user defined seed points. The Random Walker Watershed
algorithm proposed for tumor detection works on the principle of random walks, which combines the
probabilities of each unlabeled pixel and generates a
resulting image which is then segmented using the
watershed algorithm. The advantage is that the labeled pixel
regions given as inputs to the segmentation algorithm
could be placed anywhere within the object of interest in
order to accurately segment and delineate the region of
interest. Further, the random walker algorithm is enhanced, since the weight function depends not only on the intensity gradient but also on the normalized Euclidean distances between adjacent pixels. Results show that the proposed method improves the accuracy of segmenting nodules with Dirichlet boundaries.
Fig. 5: Sample of 5 CT images out of the 23 taken for testing in the study: (a) sample input images, (b) extraction of the lung region in the preprocessing stage, (c) outlined output of nodule segmentation with improved weights in the random walker algorithm, (d) outlined output of nodule segmentation with the random walker and watershed algorithms combined, (e) segmented nodule with improved weights in the random walker algorithm, (f) segmented nodule with the random walker and watershed algorithms combined.
References
[1] I. Sluimer, A. Schilham, M. Prokop, and B. V. Ginneken, "Computer analysis of computer tomography scans of the lung: a survey," IEEE Trans. Med. Imag., vol. 25, no. 4, pp. 385-405, Apr. 2006.
[2] Ning Wang, Lin-Lin Huang, Baochang Zhang, "A Fast Hybrid Method for Interactive Liver Segmentation", IEEE 2010 Chinese Conference on Pattern Recognition (CCPR).
[3] G. Qiu, P. C. Yuen, "Interactive imaging and vision: ideas, algorithms and applications", Pattern Recognition, 43: 431-433, 2010.
[4] Leo Grady, "Random Walks for Image Segmentation", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 11, Nov. 2006.
[5] Sundaresh Ram and Jeffrey J. Rodríguez, "Random Walker Watersheds: A New Image Segmentation Approach", 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.
[6] Brent Foster, Ulas Bagci, Awais Mansoor, Ziyue Xu, Daniel J. Mollura, "A Review on Segmentation of Positron Emission Tomography Images", Computers in Biology and Medicine, pp. 76-96, 2014.
[7] Ayman El-Baz, Garth M. Beache, Georgy Gimel'farb, Kenji Suzuki, Kazunori Okada, Ahmed Elnakib, Ahmed Soliman, and Behnoush Abdollahi, "Computer-Aided Diagnosis Systems for Lung Cancer: Challenges and Methodologies", International Journal of Biomedical Imaging, vol. 2013.
[8] E. Day, J. Betler, D. Parda, B. Reitz, A. Kirichenko, S. Mohammadi, M. Miften, "A Region Growing Method for Tumor Volume Segmentation on PET Images for Rectal and Anal Cancer Patients", Med. Phys., vol. 36, no. 10, pp. 4349-4358, 2009.
[9] U. Bagci, J. Yao, J. Caban, E. Turkbey, O. Aras, D. Mollura, "A graph-theoretic approach for segmentation of PET images", in 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2011, pp. 8479-8482.
[10] J. M. Kuhnigk, V. Dicken, L. Bornemann et al., "Morphological segmentation and partial volume analysis for volumetry of solid pulmonary lesions in thoracic CT scans," IEEE Transactions on Medical Imaging, vol. 25, no. 4, pp. 417-434, 2006.
[11] J. Dehmeshki, H. Amin, M. Valdivieso, and X. Ye, "Segmentation of pulmonary nodules in thoracic CT scans: a region growing approach," IEEE Transactions on Medical Imaging, vol. 27, no. 4, pp. 467-480, 2008.
[12] S. Diciotti, G. Picozzi, M. Falchini, M. Mascalchi, N. Villari, and G. Valli, "3-D segmentation algorithm of small lung nodules in spiral CT images," IEEE Transactions on Information Technology in Biomedicine, vol. 12, no. 1, pp. 7-19, 2008.
[13] L. R. Goodman, M. Gulsun, L. Washington, P. G. Nagy, and K. L. Piacsek, "Inherent variability of CT lung nodule measurements in vivo using semiautomated volumetric measurements," American Journal of Roentgenology, vol. 186, no. 4, pp. 989-994, 2006.
[14] R. Helen, N. Kamaraj, K. Selvi, V. Raja Raman, "Segmentation of pulmonary parenchyma in CT lung images based on 2D Otsu optimized by PSO", International Conference on .
[15] D. P. Onoma, S. Ruan, S. Thureau, L. Nkhali, R. Modzelewski, G. A. Monnehan, P. Vera, I. Gardin, "Segmentation of heterogeneous or small FDG PET positive tissue based on a 3D-locally adaptive random walk algorithm", Computerized Medical Imaging and Graphics, August 2014.
[16] Y. Y. Boykov and M.-P. Jolly, "Interactive graph cuts for optimal boundary and region segmentation of objects in N-D images," in Proc. IEEE Int. Conf. Computer Vision, 2001, pp. 105-112.
[17] Lim Khai Yin and Mandava Rajeswari, "Random walker with improved weighting function for interactive medical image segmentation", Bio-Medical Materials and Engineering, 2014.
[18] Ning Wang, Lin-Lin Huang, Baochang Zhang, "A Fast Hybrid Method for Interactive Liver Segmentation", 2013.
[19] R. Adams, L. Bischof, "Seeded region growing", IEEE Trans. Pattern Anal. Mach. Intell., vol. 16, no. 6, pp. 641-647, 1994.
[20] S. Hu, C. Xu, W. Guan, Y. Tang and Y. Liu, "Texture feature extraction based on wavelet transform and gray-level co-occurrence matrices applied to osteosarcoma diagnosis", Bio-Medical Materials and Engineering, 24 (2014), 129-143.
[21] D. A. Clausi, "An analysis of co-occurrence texture statistics as a function of grey level quantization", Canadian Journal of Remote Sensing, 28 (2002), 45-62.
[22] M. De Martinao, F. Causa and S. B. Serpico, "Classification of optical high resolution images in urban environment using spectral and textural information", Proceedings of 2003 IEEE International Geoscience and Remote Sensing Symposium, 1 (2003), 467-469.
[23] Medical Image Computing and Computer Assisted Intervention (MICCAI), Multimodal Brain Tumor Segmentation, http://www2.imm.dtu.dk/projects/BRATS2012/data.html, 2012.
Table I: Boundary descriptors of the segmented nodules for the 23 test images (RW = random walker; RWIW = random walker with improved weights; RWIW-WS = RWIW combined with watershed). Values are listed in order of image ID 1-23.

Area -
RW: 104, 61, 27, 34, 92, 68, 162, 69, 1115, 168, 233, 56, 36, 352, 97, 1521, 29, 1257, 175, 151, 185, 41, 551
RWIW: 84, 58, 29, 34, 92, 68, 166, 71, 1130, 168, 230, 59, 36, 352, 97, 1521, 29, 1257, 175, 151, 185, 41, 536
RWIW-WS: 84, 58, 29, 37, 92, 68, 171, 71, 1118, 168, 228, 56, 35, 353, 99, 1519, 29, 1257, 175, 151, 180, 41, 507

Major axis -
RW: 21.048, 9.551, 9.083, 9.01, 11.5, 12.245, 23.269, 12.805, 48.049, 17.793, 24.301, 9.349, 11.566, 28.532, 12.902, 65.897, 7.063, 46.125, 16.383, 17.377, 20.57, 12.035, 29.775
RWIW: 14.45, 9.432, 9.069, 9.01, 11.5, 12.245, 23.852, 12.803, 48.134, 17.793, 24.022, 10.078, 11.566, 28.532, 12.902, 65.897, 7.063, 46.125, 16.383, 17.377, 20.57, 12.035, 29.775
RWIW-WS: 14.45, 9.43, 9.069, 8.797, 11.50, 12.25, 24.039, 12.803, 47.911, 17.793, 23.836, 9.348, 11.006, 29.782, 13.932, 61.817, 7.0629, 46.125, 16.383, 17.377, 19.206, 12.035, 29.693

Minor axis -
RW: 7.633, 8.307, 4.02, 5.009, 10.317, 7.371, 9.414, 7.066, 30.058, 12.306, 12.872, 8.063, 5.005, 15.021, 8.275, 34.979, 5.406, 36.709, 13.938, 12.962, 14.41, 4.755, 24.431
RWIW: 7.714, 8.032, 4.307, 5.009, 10.317, 7.371, 9.472, 7.249, 30.351, 12.306, 12.848, 7.905, 5.005, 15.021, 8.275, 34.979, 5.406, 36.709, 13.938, 12.962, 14.41, 4.755, 24.431
RWIW-WS: 7.714, 8.032, 4.307, 5.545, 10.317, 7.371, 9.762, 7.249, 30.186, 12.306, 12.857, 8.063, 4.175, 16.069, 9.265, 33.779, 5.406, 36.709, 13.938, 12.962, 13.414, 4.755, 22.732

Perimeter -
RW: 52.63, 25.9, 18.83, 20.24, 32.73, 29.66, 54.63, 30.49, 140.43, 49.31, 62.63, 27.07, 23.83, 82.73, 33.53, 185.33, 17.66, 164.71, 50.04, 57.46, 59.28, 26.14, 106.67
RWIW: 35.8, 25.9, 19.41, 20.24, 32.73, 29.66, 57.21, 30.49, 138.43, 49.31, 62.04, 28.49, 23.83, 82.73, 33.53, 185.33, 17.66, 164.71, 50.04, 57.46, 59.28, 26.14, 106.67
RWIW-WS: 35.8, 25.9, 19.41, 20.83, 32.73, 29.66, 57.21, 30.49, 139.01, 49.31, 61.46, 27.07, 22.83, 83.94, 35.9, 182.33, 17.66, 164.71, 50.04, 57.46, 57.21, 26.14, 100.08

Eccentricity -
RW: 0.932, 0.494, 0.897, 0.831, 0.442, 0.79, 0.915, 0.834, 0.78, 0.722, 0.848, 0.506, 0.945, 0.892, 0.877, 0.871, 0.644, 0.605, 0.526, 0.666, 0.826, 0.919, 0.517
RWIW: 0.846, 0.524, 0.88, 0.831, 0.442, 0.79, 0.918, 0.824, 0.777, 0.722, 0.845, 0.62, 0.945, 0.892, 0.877, 0.871, 0.644, 0.605, 0.526, 0.666, 0.826, 0.919, 0.572
RWIW-WS: 0.846, 0.524, 0.88, 0.776, 0.442, 0.79, 0.914, 0.824, 0.777, 0.722, 0.842, 0.506, 0.925, 0.842, 0.747, 0.838, 0.644, 0.605, 0.526, 0.666, 0.716, 0.919, 0.643

Equivalent diameter -
RW: 11.507, 8.956, 6.412, 6.58, 10.823, 9.305, 14.184, 9.441, 37.678, 14.625, 17.224, 8.444, 6.936, 21.098, 10.367, 44.978, 6.077, 40.006, 14.927, 13.868, 17.024, 7.225, 26.487
RWIW: 10.403, 8.593, 5.754, 6.58, 10.823, 9.305, 14.184, 9.508, 37.931, 14.625, 17.113, 8.667, 6.936, 21.098, 10.367, 44.978, 6.077, 40.006, 14.927, 13.868, 17.024, 7.225, 26.124
RWIW-WS: 10.403, 8.593, 5.754, 6.956, 10.823, 9.305, 14.755, 9.508, 37.729, 14.625, 17.038, 8.444, 6.676, 21.2, 11.227, 43.978, 6.077, 40.006, 14.927, 13.868, 15.139, 7.225, 25.407
Determination of Similarity Measure on MRI Brain Clustered Images

S. Rani1, D. Gladis2, Radhakrishnan Palanikumar3
1. Research Scholar, PG and Research Department of Computer Science, Presidency College, Chennai, India; [email protected]
2. Associate Professor, PG and Research Department of Computer Science, Presidency College, Chennai
3. Associate Professor, Dept. of Computer Science, College of Computer Science, King Khalid University, Abha, Kingdom of Saudi Arabia
Abstract: Data mining techniques are applied in medical image processing to determine features of the obtained MRI images for analysis. The analysis process determines differences or similarities as required by the medical image process. Clustering processes are used to identify unique features or similar objects in the data or images. The medical MRI brain image analysis process is used to identify similar neurons from the pre-processed functional MRI; the similar objects represent the active and significant processing parts of the brain. The cluster process shows the significant areas, but the clustered objects contain noise. This research work attempts two iterative cluster processes: the image is clustered into 8 as well as 16 clusters. The cluster objects are evaluated based on their associative relationships, and their variations are determined.

Key words: Similarity Measure, Clustering, Neuron Image Analysis, Equal Interval Algorithm.
INTRODUCTION

Medical image processing has adopted data mining techniques such as association, clustering, classification and prediction for the identification of similarity measures. These are used to identify the functional processes of the human brain and their variations. MRI provides the signal variations of the human brain and its transmissions while the person is under observation. The variations in the obtained images show the activeness, variation and abnormality of the human brain's functionality. Clustering and classification techniques help to identify similar or unique objects according to their features. This paper aims to process the obtained MRI images in .nrrd format into frames and to cluster them. The clustered images are iterated and their relational variants are computed, giving the significant brain functional area clusters and their variations. High variations are identified as noise cluster objects through the similarity measure.

II. SCOPE AND OBJECTIVES

This paper attempts to segment the frames from the captured MRI file and to cluster them using the equal interval algorithm. The clustered object relationships are determined at different cluster levels, namely 8 clusters and 16 clusters. The similarities and variations are identified, and high-level variations are identified as noise on the pre-processed clustered MRI.

III. METHODOLOGY

The similarity measure on the clustered objects is computed using the following procedure:
Step 1: Fetch the MRI image.
Step 2: Convert the fetched image into the nrrd file format.
Step 3: Convert and represent the images as a cubical data set.
Step 4: Adopt the linear data set and compute the cluster index.
Step 5: Generate clusters using the equal interval algorithm.
Step 6: Determine the similarity between the different levels of clusters.

3.1 Procedure for Equal Interval Method
1. Collect the pre-processed MRI as a processable image.
2. Convert the multilayer integrated image into digital values.
3. Convert the cubical values into a two-dimensional array (number of pixels, 5).
4. Each row represents the (X, Y, R, G, B) values.
5. Collect the number of classifications (NC) to process.
6. Determine the minimum (Min) and maximum (Max) of the digital values.
7. Determine the difference dx = Max − Min.
8. The range R = dx / NC.
9. Fix the starting and ending pixel values for each classification based on the range values.
10. Process all row values and verify the range; according to the individual and combined range values, construct the classification data and sub-image.
11. Repeat from step 9 until all classifications have been processed.

IV. SOURCE OF THE DATA
An MRI-converted nrrd image is captured from the Slicer public database. The data set is presented in the nrrd format and is converted into the cubical data format using MATLAB. The converted cubical data set
is presented in a 512x512x139 representation. Each layer can be presented in a two-dimensional format. The three-dimensional axis points of the data are fetched, the changes across the 139 slices are computed, and a graph is generated.
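A small MATLAB sketch of this per-slice change computation is given below, assuming the cubical data set is held in a 512x512x139 array named cube (the variable name is ours).

dz      = diff(double(cube), 1, 3);            % changes between consecutive slices
profile = squeeze(mean(mean(abs(dz), 1), 2));  % mean absolute change per slice
plot(profile);                                 % graph of the changes across the volume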
V. MAGNETIC RESONANCE IMAGING (MRI) INTO CUBICAL DATABASE

Magnetic resonance imaging (MRI) is one of the most familiar modalities for three-dimensional viewing of the brain and its structures and their precise spatial relationships, although the image resolution is somewhat limited. Stained sections, on the other hand, offer excellent resolution and the ability to see individual nuclei (cell stain) or fiber tracts (myelin stain); however, there are often spatial distortions inherent in the staining process. For this work, an nrrd file was fetched from the Slicer 3D download database. Nrrd is a library and file format designed to support scientific visualization and image processing involving N-dimensional raster data; nrrd stands for "nearly raw raster data".

Network pathway analysis by Modha et al. identified the movement and the distance of neurons by analysing three-dimensionally coordinated MRI images. A similar approach is made here to analyse the signal communication and so study the speed of the neuron process. The images fetched at the time of MRI scanning are processed using MATLAB, converted to two-dimensional images and then into the corresponding digital values. The images are sequenced 1 to 112 based on the nature of the MRI file, as presented in fig 1. The corresponding digital values are converted and presented to compute the frequency of the changes.
Fig 1. Sample converted frames of MRI images.

The cluster ranges are divided into equal intervals and each frame is clustered into 8 equal clusters. Fig 2 represents the clustered images of frame 1 and fig 3 represents the clustered images of frame 2.
The converted images are clustered with the following clustering function (the frame index kk is used in the saved file names and is assumed to be passed in as an argument):

function Equ_cluster(img1, kk)
% Cluster one 256x256 MRI frame into nc equal-interval clusters
% and save each cluster as an image.
dataset1     = img1;
data_sum     = zeros(256,256);
data_avr     = zeros(256,256);
data_ind     = zeros(256,256);
data_avr_abs = zeros(256,256);
cluster_img  = zeros(256,256);
% sum and average of each pixel across the third dimension
for i = 1:size(dataset1,1)
    for j = 1:size(dataset1,2)
        data_sum(i,j) = sum(dataset1(i,j,:));
        data_avr(i,j) = round(mean(dataset1(i,j,:)));
    end
end
nc = 8;                               % number of clusters
% re-fix the range so that the minimum average value is zero
for i = 1:size(data_avr,1)
    for j = 1:size(data_avr,2)
        data_avr_abs(i,j) = data_avr(i,j) + abs(min(min(data_avr)));
    end
end
d     = max(max(data_avr_abs)) - min(min(data_avr_abs));
range = round(d/nc);                  % equal interval width
% calculation of the cluster index of each pixel
for i = 1:size(data_avr,1)
    for j = 1:size(data_avr,2)
        data_ind(i,j) = round(data_avr_abs(i,j)/range);
    end
end
% build, display and save one binary image per cluster
for k = 1:nc
    count = 0;
    for i = 1:size(data_avr,1)
        for j = 1:size(data_avr,2)
            if data_ind(i,j) == k
                cluster_img(i,j) = 150;
                count = count + 1;
            else
                cluster_img(i,j) = 0;
            end
        end
    end
    image(cluster_img);
    cluster_count(kk,k) = count;      % pixels in cluster k of frame kk
    disp(count);
    fname2 = strcat('H:\...\resultimg\cluster8_', num2str(kk), '_', num2str(k), '.tif');
    saveas(gcf, fname2);
end
end
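A driver of the following form is assumed, with the 112 converted frames held in a cell array frames (the array name is hypothetical); kk is the frame index used in the saved file names.

for kk = 1:numel(frames)
    Equ_cluster(frames{kk}, kk);   % cluster each frame into 8 equal intervals
end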
Fig 2. 8 clustered images of frames 1-6.

As per the cluster view, brain cluster 1 shows that the number of active neurons is moderate in the initial frames, then decreases, and increases again at the end of the observations. Similarly, all the clusters reflect the changes at each cluster level, as shown by the tabulated values below.

The cluster ranges are also divided into equal intervals with each frame clustered into 16 equal clusters; fig 3 represents the 16-cluster images of frames 1-6. The number of pixels in each cluster is computed, the percentages of pixels for the 8 and 16 clusters are computed, and their variations are tabulated below.

Fig 3. 16 clustered images of frames 1-6.

Table 1. Sample number of pixels in clustered objects.
[Table 1 data: pixel counts for clusters 1-10 across the sampled frames.]

[Second table: percentage of pixels in clusters c1-c8 for frames 1-111 and the variations between the 8-cluster and 16-cluster processes; the highlighted entries mark the high variations.]
The highlighted variations are identified as noise clusters at different threshold values. The threshold values are assigned as 5% and 10% (0.05 and 0.1), and the numbers of noise clusters are identified:

No. of clusters | No. of frames | Clustered objects | Noise clusters (TH 0.05) | Noise clusters (TH 0.1) | Noise % (TH 0.05) | Noise % (TH 0.1)
8  | 112 | 896  | 51 | 13 | 5.69 | 1.45
16 | 112 | 1792 | 54 | 8  | 3.01 | 0.45

As per the evaluations, the 112 frames are divided into 896 and 1792 cluster objects according to the equal interval values. In the 8-cluster process, 51 noise cluster values are identified at threshold 0.05 and 13 at threshold 0.1; in the 16-cluster process, 54 noise cluster values are identified at threshold 0.05 and 8 at threshold 0.1. As the number of clusters increases, the percentage of noise decreases. This aids in evaluating specific clusters for the medical analysis process.

VI. CONCLUSION

This paper aimed to identify noise-clustered neurons based on the variations in the brain MRI analysis. The similar parts of the neurons between the sliced images are identified and their variations are computed. For each cluster, the number of pixels and the percentage of variation are computed. This shows whether the brain neurons are active or inactive; the variation level of the neurons indicates the activity and impact level of the person. Further work will evaluate the variations of each cluster and its associated activities to support the medical analysis process.

VII. REFERENCES
1. Antonie, M. L., Zaiane, O. R., Coman, A. (2001) "Application of Data Mining Techniques for Medical Image Classification", Proceedings of the Second International Workshop on Multimedia Data Mining (MDM/KDD 2001) in conjunction with the ACM SIGKDD conference, San Francisco, August 26, 2001.
2. B. Andreopoulos, A. An, X. Wang and M. Schroeder, "A roadmap of clustering algorithms: finding a match for a biomedical application", Briefings in Bioinformatics, vol. 10, no. 3, pp. 297-314, 2009.
3. Cios KJ, Moore GW. "Medical data mining and knowledge discovery: an overview". In: Cios KJ, editor. Medical data mining and knowledge discovery. Heidelberg: Springer, 2000. p. 1-16 [chapter 1].
4. Dharmendra S. Modha, A scalable simulator for an architecture for Cognitive Computing: IBM and LBNL presented the next milestone towards fulfilling the vision of the DARPA SyNAPSE program at Supercomputing 2012.
5. Dharmendra S. Modha (2012), A scalable simulator for an architecture for Cognitive Computing: IBM and LBNL presented the next milestone towards fulfilling the vision of the DARPA SyNAPSE program at Supercomputing 2012.
6. Dunham, M. H., Sridhar, S. (2006) "Data Mining: Introductory and Advanced Topics", Pearson Education, New Delhi, ISBN: 81-7758-785-4, 1st Edition, 2006.
7. Fadi Thabtah, "A review of associative classification mining", The Knowledge Engineering Review, vol. 22, issue 1 (March 2007), pp. 37-65, 2007.
8. Gladis, D., Rani, S., "K-Means Clustering To Identify High Active Neuron Analysis For LSD", International Journal of Innovative Research in Science, Engineering and Technology, ISSN: 2319-8753, vol. 2, issue 9, September 2013.
9. Goertzel, B. and Pennachin, C. Artificial General Intelligence. Springer, Berlin, Heidelberg, 2009.
10. Gruber O, Tost H, Henseler I et al. "Pathological amygdala activation during working memory performance: evidence for a pathophysiological trait marker in bipolar affective disorder". Hum Brain Mapp 2010; 31: 115-125.
11. Harleen Kaur, Siri Krishan Wasan and Vasudha Bhatnagar, "The impact of data mining techniques on medical diagnostics", Data Science Journal, vol. 5, 19 October 2006, pp. 119-126.
12. J. Sherbondy, R. Ananthanrayanan, R. F. Dougherty, D. S. Modha, and B. A. Wandell (2009) "Think global, act local; projectome estimation with bluematter", in Proceedings of MICCAI 2009, Lecture Notes in Computer Science.
13. Jiawei Han and Micheline Kamber, "Data Mining: Concepts and Techniques", 2nd ed., Morgan Kaufmann Publishers, San Francisco, CA, 2007.
14. A. K. Jain, M. N. Murty, and P. J. Flynn. "Data clustering: a review". ACM Computing Surveys, 31(3): 264-323, 1999.
15. Kandel, E.R., Schwartz, J.H., and Jessell, T.M. Principles of Neural Science, Fourth Edition. McGraw-Hill Medical, New York, 2000.
16. M. C. Jobin Christ, R. M. S. Parvathi, "Segmentation of Medical Image using K-Means Clustering and Marker Controlled Watershed Algorithm", European Journal of Scientific Research, ISSN 1450-216X, vol. 71, no. 2 (2012), pp. 190-194.
17. M. C. Su and C. H. Chou, "A Modified Version of the K-Means Algorithm with a Distance Based on Cluster Symmetry", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 674-680, June 2001.
18. Modha, D.S. and Singh, R. (June 2010) "Network architecture of the long-distance pathways in the macaque brain". Proceedings of the National Academy of Sciences of the USA 107, 30, 13485-13490.
19. Moore GW, Berman JJ. "Anatomic pathology data mining". In: Cios KJ, editor. Medical data mining and knowledge discovery. Heidelberg: Springer, 2000. p. 61-108 [chapter 4].
20. Onkamo, P. and Toivonen, H., "A survey of data mining methods for linkage disequilibrium mapping", Human Genomics, vol. 2, no. 5, pp. 336-340, March 2006.
21. R. Xu and D. Wunsch II, "Survey of clustering algorithms", IEEE Trans. Neural Networks, vol. 16, no. 3, pp. 645-678, 2005.
22. Strakowski SM, Adler CM, Holland SK, Mills NP, DelBello MP, Eliassen JC. "Abnormal fMRI brain activation in euthymic bipolar disorder patients during a counting Stroop interference task". Am J Psychiatry 2005; 162: 1697-1705.
23. Sunita Soni, O. P. Vyas, "Using Associative Classifiers for Predictive Analysis in Health Care Data Mining", International Journal of Computer Applications (IJCA, 0975-8887), vol. 4, no. 5, July 2010, pp. 33-34.
24. Wenger DA, Coppola S, Liu SL. "Insights into the diagnosis and treatment of lysosomal storage diseases". Arch Neurol 2003; 60(3): 322-8.
Driving Sequence Information from AAIndex for Protein Hot
Spots Prediction
Peipei Li, Keun Ho Ryu
Database & Bioinformatics Laboratory, Chungbuk National University, Korea
{lipeipei, khryu}@dblab.chungbuk.ac.kr
Abstract— Protein hot spots are a small fraction of the residues on an interaction interface, and finding them is important for examining the actions and properties that arise while a protein performs its function. However, computational approaches still have limitations in feature interpretation. In this paper, we investigate salient physicochemical properties of hot spots from AAIndex to obtain sequence information for hot spot prediction. The value of each feature for each hot spot residue is calculated as the average over the values of its neighbors within a defined cutoff. Feature selection by information gain is carried out to obtain features for cutoffs from 4Å to 15Å. A support vector machine is used for prediction, with ASEdb as the training set and BID as the independent test set. In the experimental results, the best F-scores obtained are 0.6 under 10-fold cross validation on the training set when the cutoff is 15Å, and 0.29 on the test set when the cutoff is 14Å.
Keywords- Protein hot spots; sequence information; feature selection
INTRODUCTION

When two or more proteins bind together, the major part of the binding free energy is contributed by a small fraction of the interface residues, which are usually called protein hot spots [1]. Identifying hot spots is important for examining the actions and properties occurring around the binding sites, and therefore provides important clues to the function of a protein.

Alanine scanning is a main laboratory approach used to examine the energetic importance of a residue in the binding of two proteins. Two databases, the Alanine Scanning Energetics database (ASEdb) [2] and the Binding Interface Database (BID) [3], are constructed based on laboratory experiments; they are highly accurate but time consuming and expensive, and contain few hot spot data.

In recent years, several studies have focused on the differing characteristics of hot spot and non-hot spot residues. It has been shown that hot spots are clustered at the core of the protein interface, surrounded by O-ring residues at the rim [4]. Another study finds that hot spots are statistically correlated with structurally conserved residues [5, 6]. Based on these studies, computational methods have been developed to predict hot spot residues from interface residues. Feature-based methods in particular achieve relatively good predictive results. In [7], an efficient approach named APIS is developed that uses a support vector machine (SVM) to predict hot spots using a wide variety of 62 features drawn from a combination of protein sequence and structure information. The F-score method is used for feature selection to remove redundant and irrelevant features and improve the prediction performance; a predictor based on nine individual features is finally developed that identifies hot spots with an F1 score of 0.64. HotPoint [8] is a server providing hot spot predictions based on the criteria that hot spots are buried, more conserved, packed, and known to be mostly of specific residue types. On the benchmark dataset it achieves an accuracy of 0.70.

Although computational methods have been well developed and achieve relative success with good performance, they still have limitations. First, the features used in the prediction methods are not comprehensive. Second, the features previously identified as being correlated with hot spots are still insufficient.

In this paper, we present an investigation of salient physicochemical properties of hot spot neighbor residues using AAIndex [9]. The value of each physicochemical property for each residue is calculated as the average over its neighbors within a defined cutoff, ranging from 4Å to 15Å. Feature selection is carried out with information gain to obtain the best features for hot spot prediction. Finally, a support vector machine is used for hot spot prediction, with ASEdb as the training set and BID as the independent test set.

MATERIALS AND METHODS

Datasets

A training set of 196 protein interface residues from 20 protein complexes was downloaded from ASEdb [2]. 77 residues with binding energy changes, resulting from mutations of protein side-chains to alanine, higher than 2.0 kcal/mol are treated as hot spots, and 119 residues with binding energy changes lower than 0.4 kcal/mol are considered non-hot spots.

A test set of 125 protein interface residues derived from BID [3] is used as an independent test set. 38 residues labeled strong are classified as hot spots, and 87 residues labeled intermediate, weak, or insignificant are considered non-hot spots. The residues are from 18 protein complexes.

Salient physicochemical properties

The 544 salient physicochemical properties are from AAindex [9], a database of numerical indices representing various physicochemical and biochemical properties of amino acids. After removing 13 properties with NaN values, 531 properties remain for further work.

One physicochemical value of one hot spot is defined as the average value of its neighbor residues within a defined cutoff. A large cutoff value may include many irrelevant
neighbors, and a small cutoff may miss the effect of the neighbors. So we set this parameter from 4Å to 15Å and run experiments to choose an appropriate cutoff value.
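A minimal MATLAB sketch of this neighbor-averaged feature computation is given below, assuming residue coordinates in C (n x 3, in Å) and one AAindex property value per residue in the column vector p; the names are ours, and pdist2 is from the Statistics and Machine Learning Toolbox.

D = pdist2(C, C);                 % pairwise residue distances in angstroms
for cutoff = 4:15                 % cutoffs from 4 A to 15 A
    nb = D <= cutoff;             % neighbors within the cutoff (self included)
    f  = (nb * p) ./ sum(nb, 2);  % average property value over the neighbors
    % f is the feature vector for this property at this cutoff
end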
EXPERIMENTS AND RESULTS
Feature selection
In Fig. 1, we list numbers of selected features after
feature selection step using information gain with cutoff
from 4Å to 15Å. From the table, we can see that when
cutoff is quite small as from 4Å to 8Å, only one or two
physicochemical properties are selected. And when cutoff
is set to be quite big as 15Å, nearly 50 features are
selected. We need an appropriate feature number to
supply useful information for hot spots prediction.
Feature selection
We know that feature selection is a necessary step in
data mining to remove redundant features and irrelevant
features to improve classification accuracy. Here feature
selection is processed by evaluating the worth of a salient
physicochemical feature by measuring the information
gain with respect to the class.
Classification
A support vector machine (SVM) is widely used for classification and regression analysis. The standard SVM takes a set of input data and predicts, for each given input, which of two possible classes the input belongs to, making the SVM a non-probabilistic binary linear classifier. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible.
Weka [10] implements John C. Platt's sequential minimal optimization (SMO) algorithm for training a support vector classifier using polynomial or RBF kernels; here we use it to construct the hot spot prediction model.
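The paper trains the classifier with Weka's SMO; the sketch below is an equivalent illustration (not the paper's code) using scikit-learn's SVC with an RBF kernel and 10-fold cross-validation, with random placeholder data standing in for the real feature matrix.

```python
# Equivalent SVM training sketch: RBF-kernel SVC with 10-fold
# cross-validated F1, on placeholder data shaped like the training set
# (196 residues, 9 selected features).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(196, 9))      # placeholder feature matrix
y = rng.integers(0, 2, size=196)   # placeholder hot spot / non-hot spot labels

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=10, scoring="f1")
print("mean F1 over 10 folds:", scores.mean())
```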
Fig. 1. Feature selection results with cutoff from 4 Å to 15 Å.
Prediction results

Hot spot prediction results using the SVM are shown in Fig. 2 and Fig. 3. Fig. 2 shows results on ASEdb with 10-fold cross-validation, and Fig. 3 shows results using ASEdb as the training set and BID as an independent test set. The best F1 scores, 0.6 and 0.29 respectively, are clearly obtained when the cutoff is set to 14 Å.
Evaluation measure
Precision, recall and accuracy are three widely used classification metrics. In addition, the F1 measure, the harmonic mean of precision and recall, is also used for assessing protein-protein interface hot spot prediction methods.
Let TP, FP, TN, and FN denote the numbers of true positives (hot spot residues correctly predicted as hot spots), false positives (non-hot spot residues wrongly predicted as hot spots), true negatives (non-hot spot residues correctly predicted as non-hot spots) and false negatives (hot spot residues missed by the prediction method), respectively. A formal definition of these metrics is given below.
P = TP / (TP + FP) (1)

R = TP / (TP + FN) (2)

A = (TP + TN) / (TP + FP + TN + FN) (3)

F1 = 2PR / (P + R) (4)
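The sketch below is a direct transcription of Eqs. (1)-(4) into Python, with one made-up set of counts as a usage example.

```python
# Precision, recall, accuracy and F1 from the four confusion-matrix counts,
# exactly as defined in Eqs. (1)-(4).
def metrics(tp, fp, tn, fn):
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    a = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * p * r / (p + r)
    return p, r, a, f1

# Example with made-up counts: P = 0.60, R ~= 0.67, A = 0.72, F1 ~= 0.63.
print(metrics(tp=30, fp=20, tn=60, fn=15))
```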
Fig. 2. Hot spot prediction performance using SVM on ASEdb with 10-fold cross-validation.
Feature selection

Table 1 lists the detailed descriptions of the 42 selected features when the cutoff is set to 14 Å. Relative preference values, entropy of formation, linker propensities, weights, interior compositions, slopes in regression analysis, distributions of amino acid residues, transfer free energy, amino acid composition, length, size and relative population are all included. These sequence characteristics prove to be important for hot spot prediction.
TABLE 1. FEATURES SELECTED WHEN THE CUTOFF IS 14 Å

Feature | Description
DAWD720101 | Size
PALJ810111 | Normalized frequency of beta-sheet in alpha+beta class
FUKS010111 | Entire chain composition of amino acids in extracellular proteins of mesophiles
CEDJ970101 | Composition of amino acids in extracellular proteins
CHOP780208 | Normalized frequency of N-terminal beta-sheet
KUMS000102 | Distribution of amino acid residues in the 18 non-redundant families of mesophilic proteins
RICJ880111 | Relative preference value at C4
HUTJ700103 | Entropy of formation
GEOR030107 | Linker propensity from long dataset
QIAN880124 | Weights for beta-sheet at the window position of 4
FASG760102 | Melting point
GEOR030102 | Linker propensity from 1-linker dataset
RICJ880108 | Relative preference value at N5
FUKS010107 | Interior composition of amino acids in extracellular proteins of mesophiles
ISOY800106 | Normalized relative frequency of helix end
RICJ880113 | Relative preference value at C2
PRAM820102 | Slope in regression analysis x 1.0E1
QIAN880129 | Weights for coil at the window position of -4
RICJ880112 | Relative preference value at C3
QIAN880137 | Weights for coil at the window position of 4
PALJ810109 | Normalized frequency of alpha-helix in alpha/beta class
QIAN880117 | Weights for beta-sheet at the window position of -3
KUMS000101 | Distribution of amino acid residues in the 18 non-redundant families of thermophilic proteins
QIAN880131 | Weights for coil at the window position of -2
RADA880102 | Transfer free energy from oct to wat
FUKS010106 | Interior composition of amino acids in intracellular proteins of mesophiles
NAKH900109 | AA composition of membrane proteins
AURR980103 | Normalized positional residue frequency at helix termini N"
QIAN880118 | Weights for beta-sheet at the window position of -2
QIAN880128 | Weights for coil at the window position of -5
FAUJ880104 | STERIMOL length of the side chain
VASM830102 | Relative population of conformational state C
CONCLUSION
Identifying protein hot spots is necessary for investigating the biological functions involved when important molecular processes, such as signal transmission, occur in the cell. In this paper, in order to investigate sequence characteristics of protein hot spots, we use salient physicochemical properties calculated from AAindex. The value of each property for each hot spot residue is calculated by averaging the values of its neighbors within a defined cutoff. Feature selection by information gain is carried out for cutoffs from 4 Å to 15 Å. A support vector machine is used for prediction, with ASEdb as the training set and BID as the independent test set. The best F1 scores obtained are 0.6 with 10-fold cross-validation on the training set at a 15 Å cutoff and 0.29 on the test set at a 14 Å cutoff. We expect the selected features to be useful for future hot spot prediction.
Fig. 3. Hot spot prediction performance using SVM on BID as an independent test set.
ACKNOWLEDGMENT

This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the ITRC (Information Technology Research Center) support program (2014-H0301-14-1022) supervised by the NIPA (National IT Industry Promotion Agency), and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (No. 2013R1A2A2A01068923).
REFERENCES
[1] P. Li, G. Pok, K.S. Jung, H.S. Shon, and K.H. Ryu, QSE: A new solvent exposure measure for the analysis of protein structure, Proteomics, 2011, Vol. 11, No. 19, pp: 3793-3801.
[2] K.S. Thorn and A.A. Bogan, ASEdb: a database of alanine mutations and their effects on the free energy of binding in protein interactions, Bioinformatics, 2001, Vol. 17, No. 3, pp: 284-285.
[3] T.B. Fischer, K.V. Arunachalam, D. Bailey, V. Mangual, S. Bakhru, et al., The binding interface database (BID): a compilation of amino acid hot spots in protein interfaces, Bioinformatics, 2003, Vol. 19, No. 11, pp: 1453-1454.
[4] A.A. Bogan and K.S. Thorn, Anatomy of hot spots in protein interfaces, J. Mol. Biol., 1998, Vol. 280, No. 1, pp: 1-9.
[5] B. Ma, T. Elkayam, H. Wolfson, and R. Nussinov, Protein-protein interactions: structurally conserved residues distinguish between binding sites and exposed protein surfaces, Proc. Natl. Acad. Sci., 2003, Vol. 100, No. 10, pp: 5772-5777.
[6] O. Keskin, B. Ma, and R. Nussinov, Hot regions in protein-protein interactions: the organization and contribution of structurally conserved hot spot residues, J. Mol. Biol., 2005, Vol. 345, No. 5, pp: 1281-1294.
[7] J.F. Xia, X.M. Zhao, J. Song, and D.S. Huang, APIS: accurate prediction of hot spots in protein interfaces by combining protrusion index with solvent accessibility, BMC Bioinformatics, 2010, Vol. 11, p: 174.
[8] N. Tuncbag, O. Keskin, and A. Gursoy, HotPoint: hot spot prediction server for protein interfaces, Nucleic Acids Res., 2010, Vol. 38, pp: W402-6.
[9] S. Kawashima, P. Pokarowski, M. Pokarowska, A. Kolinski, T. Katayama, and M. Kanehisa, AAindex: amino acid index database, progress report 2008, Nucleic Acids Res., 2008, Vol. 36, pp: D202-D205.
[10] I.H. Witten, E. Frank, and M.A. Hall, Data Mining: Practical Machine Learning Tools and Techniques, 3rd Edition, 2011.
Biomedical Implants: Failure & Prevention techniques – A review
Praveen. R1*, V. JaiGanesh2, S. Prabakar3
1
Research Scholar, Sathyabama University, Chennai, India
*Assistant Professor, AVIT, Chennai, India
1*[email protected]
2
Professor, Deptartment of Mechanical Engg. SA Engg. College, Chennai, India
3
Assistant Professor, AVIT, Chennai, India
Abstract— A material used as a biomaterial should not cause any adverse effect in the body, such as allergy or toxicity, after insertion. A material used for implants should possess good mechanical strength to bear different loading conditions, and very high corrosion and wear resistance, since it has to serve in a highly corrosive and stressed environment. It should also have a long life span of at least 10 to 20 years. The challenges faced by biocompatible materials and the corresponding prevention methods are discussed in this article.
Keywords- Biomaterial, Surface modification, Corrosion prevention, Corrosion resistance
INTRODUCTION
Various classes of materials such as metals, alloys, polymers, ceramics and composites have been widely used to fabricate bioimplants. These implants encounter biological environments of very different physico-chemical nature, and their interaction with tissues and bones is a complex problem. Corrosion is a major challenge for implant materials. The reasons for implant failure include mechanical, chemical, tribological, surgical, manufacturing and biocompatibility issues; among these, failure due to corrosion has remained one of the challenging clinical problems. This important field of research has been discussed at length over the years by several authors in books [1-10] and comprehensive review articles [11-15].
The materials used as implants vary widely from metals to non-metals: stainless steel, cobalt-chromium, titanium and its alloys, bioceramics, composites and polymers are all widely used. Materials in constant contact with the aggressive body fluid often fail and finally fracture due to corrosion [1]. The corrosion behavior of various implants and the role of the surface oxide film and corrosion products in implant failure are discussed, along with surface modification of implants, which is considered the best solution to combat corrosion and enhance implant life span.
DIFFERENT TYPES OF BIOMATERIAL IMPLANTS
Fig. 2. Failure of implants [26]
WHY CORROSION OCCURS IN HUMAN BODY?
The implants face a severe corrosive environment, which includes blood and the other constituents of body fluid: water, sodium, chlorine, proteins, plasma and amino acids, along with saliva in the case of dental implants [16]. The human body fluid contains various anions such as chloride, phosphate and bicarbonate, cations such as Na+, K+, Ca2+ and Mg2+, organic substances ranging from low-molecular-weight species to relatively high-molecular-weight polymeric components, and dissolved oxygen [17, 18].
Fig. 1. Various biomaterial implants in the human body.
The biological molecules upset the equilibrium of the corrosion reactions of the implant by consuming the products of the anodic or cathodic reactions. Changes in pH also influence corrosion: although the pH of the human body is normally maintained at 7.0, it can range from 3 to 9 due to causes such as accidents, disease- or infection-induced imbalance of the biological system and other factors, and after surgery the pH near the implant typically varies from 5.3 to 5.6.
The main cause of failure of orthopedic implants is wear, which in turn is found to accelerate corrosion. Hence, highly wear-resistant materials such as ceramics and Co-Cr are often preferred for orthopedic implants. In hip implants, Ti-based alloys are used only for the femoral component, and the ball is made either of Co-Cr or of other hard ceramics.
The most common forms of corrosion are uniform corrosion, intergranular corrosion, galvanic corrosion, stress corrosion cracking, pitting and fatigue corrosion. Even though new materials are continuously being developed to replace the implant materials used in the past, clinical studies show that these materials are also prone to corrosion to a certain extent [26]. The two physical characteristics which determine implant corrosion are the thermodynamic driving forces, which cause corrosion by oxidation or reduction reactions, and kinetic barriers such as the surface oxide layer, which physically prevent corrosion reactions [26].
There has been a constant attempt by engineers and scientists to improve the surface-related properties of biomaterials in order to reduce implant failure due to poor cell adhesion and to the leaching of ions through wear and corrosion. The various surface modification techniques used for bioimplants have been reviewed [19]. Preventing corrosion using inhibitors is not possible in an extremely sensitive and complex biological system, and hence several coating methods have been adopted, among them chemical treatment, plasma ion implantation, plasma source ion implantation (PSII), laser surface melting (LSM), laser surface alloying (LSA), laser nitriding, ion implantation, physical vapor deposition (PVD) and surface texturing [19]. These methods are advantageous over conventional techniques as they lead to better interfacial bonding, non-equilibrium phases, faster processing speed and reduced pollution. However, each method also has limitations, so some of the widely applied methods are described in the following subsections.
CORROSION PREVENTION OF BIOMATERIALS
Table 1. Effects of Corrosion in the Human Body Due to Various Biomaterials

Biomaterial (metal) | Effect of Corrosion
Nickel | Affects skin, e.g. dermatitis
Cobalt | Anemia, by inhibiting iron from being absorbed into the blood stream
Chromium | Ulcers and central nervous system disturbances
Aluminum | Alzheimer's disease
Vanadium | Toxic in the elementary state
In the case of Ni-Ti stents, the release of nickel ions from Ni-Ti has been reported in a few cases, and the released ions are found to be responsible for endothelial cell damage. Various coating methods such as passivation, plasma immersion ion implantation and electropolishing are used. Recently, carbon-based coatings, namely diamond-like carbon (DLC), have been found more promising, and the corrosion resistance of Ni-Ti alloys with this coating has shown tremendous improvement [20].
CORROSION OF ORTHOPEDIC IMPLANTS
ASTM Standard | Specification
ASTM G61-86 and ASTM G5-94 | Corrosion performance of metallic biomaterials
ASTM G71-81 | Galvanic corrosion in electrolytes
ASTM F746-87 | Pitting or crevice corrosion of metallic surgical implant materials
ASTM F2129-01 | Cyclic potentiodynamic polarization measurements
Ti dental implants are generally surface modified to
reduce corrosion, improve osseointegration and increase
the biocompatibility. To achieve this, surface treatments,
such as surface machining, sandblasting, acid etching,
electropolishing, anodic oxidation, plasma-spraying and
biocompatible/biodegradable coatings are performed to
improve the quality and quantity of the bone-implant
interface of titanium-based implants [21].
Unlike the above treatments, the laser-etching technique was originally introduced in materials engineering, where it produced unique microstructures with greatly enhanced hardness, corrosion resistance and other useful surface properties [22]. Laser processing is now also being used in implant applications to produce a high degree of purity with enough roughness for good osseointegration [20].
Orthopedic implants include both temporary implants, such as plates and screws, and permanent implants used to replace hip, knee, spinal, shoulder, toe or finger joints. The corrosion mechanisms that occur in temporary implants are crevice corrosion, at shielded sites in the screw/plate interface and beneath the heads of fixing screws, and pitting corrosion of implants made of stainless steel.
The excimer laser has been used to modify the surface of the Ti-6Al-4V alloy to improve its corrosion resistance, and a seven-fold increase in corrosion resistance was reported [21]. With regard to orthopedic implants as well, different surface modification methods have been adopted to improve corrosion resistance [23].
Further, lasers are highly advantageous when one requires the processing of functionally integrated and structured materials that mimic bone. The unique properties of nano-ceramic materials have stimulated intense research toward orthopedic and dental implants with properties far superior to those of the conventional coatings made hitherto with micron-sized particles. Studies on the corrosion behavior of nanocrystalline diamond films coated on Ti-6Al-4V showed that this coating provides significant protection against electrochemical corrosion in a biological environment [23].
CURRENT AND FUTURE DEVELOPMENT

Nanostructured graded metallo-ceramic coatings have also been tried to achieve better adhesion between the metal and the ceramic coating, and nano-ceramic coatings are thus gradually receiving greater attention. Ceramics are another class of materials with high biocompatibility and enhanced corrosion resistance. They are widely used today for total hip replacements, heart valves, dental implants and restorations, bone fillers and scaffolds for tissue engineering, but ceramics are brittle, have a high elastic modulus, and can fracture because they possess low plasticity. In addition, when oxidized they release ions into the body, which may lead to degradation of the implant [24]. Alumina and zirconia are considered alternatives to metallic materials for load-bearing applications, as they show no corrosion in the body and also possess high wear resistance.

Surface modifications are often performed on biomedical implants to improve corrosion resistance, wear resistance, surface texture and biocompatibility [25]. Besides improving the desired properties, every modified surface should invariably be tested for its corrosion behavior.
REFERENCES

[1] Williams DF. Current perspectives on implantable devices. India: Jai Press 1990; 2: 47-70.
[2] Ratner BD, Hoffman AS, Schoen FJ, Lemon JE. Biomaterials science: an introduction to materials in medicine. Academic Press 1996; chapter 6: 243-60.
[3] Dee KC, Puleo DA, Bizios R. An introduction to tissue-biomaterial interactions. New York: Wiley-Liss 2002; pp. 53-88.
[4] Park JB. Biomaterials science and engineering. New York: Plenum Press 1984; pp. 193-233.
[5] Ducheyne P, Hastings GW. Functional behavior of orthopedic biomaterials applications. UK: CRC Press 1984; vol. 2: pp. 3-45.
[6] Kamachi MU, Baldev R. Corrosion science and technology: mechanism, mitigation and monitoring. UK: Taylor & Francis 2008; pp. 283-356.
[7] Héctor AV. Manual of biocorrosion. 1st ed. UK: CRC Press 1997; pp. 1-8.
[8] Fontana MG. Corrosion engineering. 3rd ed. McGraw-Hill 1985; pp. 1-20.
[9] Yoshiki O. Bioscience and bioengineering of titanium materials. 1st ed. USA: Elsevier 2007; pp. 26-97.
[10] Mellor BG. Surface coatings for protection against wear. UK: CRC Press 2006; pp. 79-98.
[11] Hanawa T. Reconstruction and regeneration of surface oxide film on metallic materials in biological environments. Corrosion Rev 2003; 21: 161-81.
[12] Manivasagam G, Mudali UK, Asokamani R, Raj B. Corrosion and microstructural aspects of titanium and its alloys. Corrosion Rev 2003; 21: 125-59.
[13] Chaturvedi TP. An overview of the corrosion aspect of dental implants (titanium and its alloys). Ind J Dent Res 2009; 20: 91-8.
[14] Geetha M, Singh AK, Asokamani R, Gogia AK. Ti based biomaterials, the ultimate choice for orthopaedic implants - a review. Prog Mater Sci 2009; 54: 397-425.
[15] Gonzalez EG, Mirza-Rosca JC. Study of the corrosion behavior of titanium and some of its alloys for biomedical and dental implant applications. J Electroanal Chem 1999; 471: 109-12.
[16] Lawrence SK, Shults GM. Studies on the relationship of the chemical constituents of blood and cerebrospinal fluid. J Exp Med 1925; 42(4): 565-91.
[17] Scales JT, Winter GD, Shirley HT. Corrosion of orthopaedic implants, screws, plates, and femoral nail-plates. J Bone Joint Surg 1959; 41B: 810-20.
[18] Williams DF. Review: tissue-biomaterial interactions. J Mater Sci 1987; 22: 3421-45.
[19] Kurella A, Dahotre NB. Surface modification for bioimplants: the role of laser surface engineering. J Biomater Appl 2005; 20: 5-50.
[20] Nakamura S, Degawa T, Nishida T, et al. Preliminary experience of Act-One coronary stent implantation. J Am Coll Cardiol 1996; 27: 53-65.
[21] Glass JR, Dickerson KT, Stecker K, Polarek JW. Characterization of a hyaluronic acid-Arg-Gly-Asp peptide cell attachment matrix. Biomaterials 1996; 17: 1101-8.
[22] Picraux ST, Pope LE. Tailored surface modification by ion implantation and laser treatment. Science 1984; 226: 615.
[23] Geetha M, Mudali UK, Pandey ND, Asokamani R, Raj B. Microstructural and corrosion evaluation of laser surface nitrided Ti-13Nb-13Zr alloy. Surf Eng 2004; 20(1): 68-74.
[24] Slonaker M, Goswami T. Review of wear mechanisms in hip implants: Paper II - ceramics IG004712. Mater Des 2004; 25: 395-405.
[25] Liping L. Nanocoating for improving biocompatibility of medical implants. WO Patent 022887, 2006.
[26] Manivasagam G, Dhinasekaran D, Rajamanickam A. Biomedical implants: corrosion and its prevention - a review. Recent Patents on Corrosion Science 2010; 2: 40-54.
Anti Hijack System with Eye Pressure Control System
M.Barathvikraman1, H.Divya2, Praveen. R3
1&2 School of Diploma in Electronic Robotics, Thiru Seven Hills Polytechnic College, Chennai, India
1 [email protected], 2 [email protected]
3Assistant Professor, AVIT, Chennai, India
[email protected]
Abstract— Nowadays terrorists practice the antisocial act of hijacking planes and killing people as their main weapon, demanding crores and crores from the nations involved. They keep the release of the victims at stake and play with their precious lives. In order to help prevent this, at the level of diploma engineers, we submit this project as a solution. When the hijacked persons are paralyzed at gunpoint, they can only send messages through the movements of their eyes. At first we decided to use iris sensors as the sensing device, but they are costly and can cause eye defects, so we finalized CRD electrodes as the sensing element to measure the pressure in the veins around the eyes and produce control signals. The signals from the CRD electrodes are given as inputs to the PIC microcontroller; from the IC they are transferred to the EOG board, and the modulated signals to a ZigBee transmitter. Another EOG board on the vehicle receives the signal through the ZigBee receiver and controls the drive system. The flight of the hijacked plane is controlled by eye movements: forward, reverse, left, right. If the victim closes his eyes for 3 seconds, control is transferred to the nearest base station, from where an operator can control the flight. A camera inside the cabin watches the hijackers, and a gun fitted to the camera can be controlled by the operator down at the station. We have made a prototype model based on our invention and succeeded in the project.
Keywords- Electro Cardio Graph, Electrode Ortho Graph, Anti Hijack System, Eye pressure control system
I.INTRODUCTION
The microcontroller used for this project is from the PIC series. The PIC microcontroller is the first RISC-based microcontroller fabricated in CMOS (complementary metal oxide semiconductor) that uses separate buses for instructions and data, allowing simultaneous access to program and data memory. The main advantage of the CMOS and RISC combination is low power consumption, resulting in a very small chip size with a small pin count; CMOS also gives better immunity to noise than other fabrication techniques.

PIC (16F877): Various microcontrollers offer different kinds of memory: EEPROM, EPROM, FLASH, etc., of which FLASH is the most recently developed. The PIC16F877 uses FLASH technology, so data is retained even when the power is switched off. Easy programming and erasing are other features of the PIC16F877.
PICSTART PLUS PROGRAMMER:

The PICSTART Plus development system from Microchip Technology provides the product development engineer with a highly flexible, low-cost microcontroller design tool set for all Microchip PICmicro devices. It includes the PICSTART Plus development programmer and the MPLAB IDE. The programmer gives the product developer the ability to program user software into any of the supported microcontrollers, and the PICSTART Plus software running under MPLAB provides full interactive control over the programmer.
II. EASE OF USE
The block diagram of the Anti Hijack System is shown in Figure 1. The hardware of the system is interfaced with the ECG: there are four electrodes, two placed at the corners of the left and right eyes and the rest placed above the forehead and below the left cheek. This setup controls the model kept as a demo. After passing through the defibrillation protection system, these inputs are fed into the amplifier stage, as the raw signals are too small to be useful, and then to the input of the internal ADC of the microcontroller. A 16F877 PIC at the output issues the required
commands, and an LED display is attached to monitor the commands.
III. MICROCONTROLLER & IR TRANSMITTER-RECEIVER
PERIPHERAL FEATURES:
 Timer0: 8-bit timer/counter with 8-bit prescaler
 Timer1: 16-bit timer/counter with prescaler, can be incremented during SLEEP via external crystal/clock
 Timer2: 8-bit timer/counter with 8-bit period register, prescaler and postscaler
 Two Capture, Compare, PWM modules: Capture is 16-bit (max. resolution 12.5 ns), Compare is 16-bit (max. resolution 200 ns), PWM max. resolution is 10-bit
 10-bit multi-channel analog-to-digital converter
 Synchronous Serial Port (SSP) with SPI (Master mode) and I2C (Master/Slave)
 Universal Synchronous Asynchronous Receiver Transmitter (USART/SCI) with 9-bit address detection
 Brown-out detection circuitry for Brown-out Reset (BOR)
A. CONCEPTS OF MICROCONTROLLER:

A microcontroller is a general-purpose device which integrates a number of the components of a microprocessor system onto a single chip. It has an inbuilt CPU, memory and peripherals that make it act as a mini computer. A microcontroller combines onto the same microchip:
 the CPU core
 memory (both ROM and RAM)
 some parallel digital I/O
SPECIAL FEATURES OF PIC MICROCONTROLLER
CORE FEATURES:
 High-performance RISC CPU
 Only 35 single-word instructions to learn
 All instructions are single-cycle except program branches, which are two-cycle
 Operating speed: DC - 20 MHz clock input, DC - 200 ns instruction cycle
 Up to 8K x 14 words of Flash program memory
 Up to 368 x 8 bytes of data memory (RAM)
 Up to 256 x 8 bytes of EEPROM data memory
 Pinout compatible with the PIC16C73/74/76/77
 Interrupt capability (up to 14 sources)
 Eight-level deep hardware stack
 Direct, indirect and relative addressing modes
 Power-on Reset (POR)
 Power-up Timer (PWRT) and Oscillator Start-up Timer (OST)
 Watchdog Timer (WDT) with its own on-chip RC oscillator for reliable operation
 Programmable code protection
 Power-saving SLEEP mode
 Selectable oscillator options
 Low-power, high-speed CMOS EPROM/EEPROM technology
 Fully static design
 In-Circuit Serial Programming (ICSP) via two pins
 Only a single 5V source needed for programming capability
 In-circuit debugging via two pins
 Processor read/write access to program memory
 Wide operating voltage range: 2.5V to 5.5V
 High sink/source current: 25 mA
 Commercial and industrial temperature ranges
 Low-power consumption: < 2 mA typical @ 5V, 4 MHz; 20 µA typical @ 3V, 32 kHz; < 1 µA typical standby current
Microcontrollers will combine other devices such as:
 a timer module to allow the microcontroller to perform tasks for certain time periods;
 a serial I/O port to allow data to flow between the microcontroller and other devices such as a PC or another microcontroller;
 an ADC to allow the microcontroller to accept analogue input data for processing.
ARCHITECTURE OF PIC 16F877:

The complete architecture of the PIC 16F877 is shown in Fig. 2.1. Table 2.1 gives the specifications of the PIC 16F877, and Fig. 2.2 shows the complete pin diagram of the IC.
ARCHITECTURE SPECIFICATIONS OF PIC 16F877

Device | Program Flash | Data Memory | Data EEPROM
PIC 16F877 | 8K | 368 Bytes | 256 Bytes

PIN DIAGRAM OF PIC 16F877
The transmitted signal is given to the IR transmitter; whenever the signal is high, the IR transmitter LED conducts and passes IR rays to the receiver. The IR receiver is connected to a comparator built around the LM358 operational amplifier. In the comparator circuit, the reference voltage is applied to the inverting input terminal and the non-inverting input terminal is connected to the IR receiver. When the IR rays between the transmitter and receiver are interrupted, the IR receiver does not conduct, so the voltage at the comparator's non-inverting input is higher than at the inverting input, and the comparator output is in the range of +5V. This voltage is given to the microcontroller or PC, and the LED glows.
PIN OUT DESCRIPTION
When the IR transmitter passes rays to the receiver, the IR receiver conducts, so the non-inverting input voltage is lower than the inverting input and the comparator output is GND; this output is given to the microcontroller or PC. The circuit is mainly used for counting applications, intruder detection, etc.
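The comparator behavior just described can be summarized in a toy Python model; the reference voltage and the exact output levels are illustrative assumptions, not measured values from the circuit.

```python
# Toy model of the LM358 comparator stage: while the IR beam is interrupted
# the receiver does not conduct, the non-inverting input rises above the
# reference on the inverting input, and the output sits near +5 V; when the
# beam reaches the receiver the output falls to GND. V_REF is an assumption.
V_REF = 2.5  # reference voltage on the inverting input, in volts (assumed)

def comparator_output(v_noninverting):
    return 5.0 if v_noninverting > V_REF else 0.0

print(comparator_output(4.0))  # beam interrupted -> +5 V, LED glows
print(comparator_output(1.0))  # beam received    -> GND
```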
Legend: I = input, O = output, I/O = input/output, P = power, — = not used, TTL = TTL input, ST = Schmitt Trigger input
IV. DC MOTOR CONTROL & POWER SUPPLY
An infrared transmitter is a type of LED which emits infrared rays, generally called an IR transmitter; similarly, an IR receiver is used to receive the IR rays transmitted by the IR transmitter. One important point is that the IR transmitter and receiver should be placed in a straight line with each other.

Circuit working description:

This circuit is designed to control the motor in the forward and reverse directions. It consists of two relays, named
relay 1 and relay 2. Relay switching is controlled by a pair of transistors. A relay is an electromagnetic switching device with three pins: common, normally closed (NC) and normally open (NO). The common pins of the two relays are connected to the positive and negative terminals of the motor through snubber circuits, and the relays sit in the collector circuits of transistors T2 and T4.

When a high pulse is applied to the base of T1 or T3, that transistor conducts, shorting its collector to its emitter, so zero signal reaches the base of T2 or T4 and the corresponding relay is OFF. When a low pulse is applied to T1 or T3, the transistor turns OFF, 12V reaches the base of T2 or T4, that transistor conducts and the relay turns ON. The NO and NC pins of the two relays are interconnected so that only one relay can be operated at a time.

The series combination of a resistor and a capacitor is called a snubber circuit. When a relay turns ON and OFF continuously, the back EMF may damage the relays, so the back EMF is grounded through the snubber circuit.

When relay 1 is in the ON state and relay 2 is in the OFF state, the motor runs in the forward direction.
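The forward/reverse behavior described above can be sketched as a small state function; the 'reverse' case for the opposite relay state is inferred by symmetry and is an assumption, as the text only states the forward case explicitly.

```python
# Sketch of the interlocked two-relay drive: relay 1 ON with relay 2 OFF
# runs the motor forward; the NO/NC interconnection means both relays must
# never be energized together.
def motor_state(relay1_on, relay2_on):
    if relay1_on and relay2_on:
        raise ValueError("interlock violated: both relays ON together")
    if relay1_on:
        return "forward"
    if relay2_on:
        return "reverse"  # assumed by symmetry with the forward case
    return "stopped"

print(motor_state(True, False))   # forward
print(motor_state(False, True))   # reverse
```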
IC VOLTAGE REGULATORS:

Voltage regulators comprise a class of widely used ICs. Regulator IC units contain the circuitry for a reference source, comparator amplifier, control device and overload protection, all in a single IC. Although the internal construction of the IC differs somewhat from that of discrete voltage regulator circuits, the external operation is much the same. IC units provide regulation of a fixed positive voltage, a fixed negative voltage, or an adjustably set voltage. A power supply can be built using a transformer connected to the AC supply line to step the AC voltage to a desired amplitude, then rectifying that AC voltage, filtering with a capacitor and RC filter if desired, and finally regulating the DC voltage using an IC regulator. The regulators can be selected for operation with load currents from hundreds of milliamperes to tens of amperes, corresponding to power ratings from milliwatts to tens of watts.

THREE-TERMINAL VOLTAGE REGULATORS:

The figure shows the basic connection of a three-terminal voltage regulator IC to a load. The fixed voltage regulator has an unregulated DC input voltage Vi applied to one input terminal, a regulated output DC voltage Vo from a second terminal, and the third terminal connected to ground. For a selected regulator, IC device specifications list a voltage range over which the input voltage can vary to maintain a regulated output voltage over a range of load current. The specifications also list the amount of output voltage change resulting from a change in load current (load regulation) or in input voltage (line regulation).

The series 78 regulators provide fixed regulated voltages from 5 to 24 V. Figure 19.26 shows how one such IC, a 7812, is connected to provide voltage regulation with an output of +12V DC. An unregulated input voltage Vi is filtered by capacitor C1 and connected to the IC's IN terminal. The IC's OUT terminal provides a regulated +12V, which is filtered by capacitor C2 (mostly against high-frequency noise). The third IC terminal is connected to ground (GND). While the input voltage may vary over some permissible range and the output load may vary over some acceptable range, the output voltage remains constant within specified voltage variation limits. These limitations are spelled out in the manufacturer's specification sheets. A table of positive voltage regulator ICs is provided in Table 19.1.

TABLE 19.1 Positive Voltage Regulators in the 7800 Series

IC Part | Output Voltage (V) | Minimum Vi (V)
7805 | +5 | 7.3
7806 | +6 | 8.3
7808 | +8 | 10.5
7810 | +10 | 12.5
7812 | +12 | 14.6
7815 | +15 | 17.7
7818 | +18 | 21.0

V. EOG & LCD
Electrocardiogram:
An electrocardiogram (ECG or EKG, abbreviated
from the German Elektrokardiogramm) is a graphic produced
by an electrocardiograph, which records the electrical activity
of the heart over time. Analysis of the various waves and
normal vectors of depolarization and repolarization yields
important diagnostic information.
Filter selection:

Modern ECG monitors offer multiple filters for signal processing. The most common settings are monitor mode and diagnostic mode. In monitor mode, the low-frequency filter (also called the high-pass filter, because signals above the threshold are allowed to pass) is set at either 0.5 Hz or 1 Hz, and the high-frequency filter (also called the low-pass filter, because signals below the threshold are allowed to pass) is set at 40 Hz. This limits artifact for routine cardiac rhythm monitoring: the high-pass filter helps reduce wandering baseline and the low-pass filter helps reduce 60 Hz power line noise. In diagnostic mode, the high-pass filter is set at 0.05 Hz, which allows accurate ST segments to be recorded, and the low-pass filter is set to 40, 100, or 150 Hz. Consequently, the monitor-mode ECG display is more filtered than the diagnostic mode, because its bandpass is narrower.
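As a software illustration of these two settings, the sketch below builds Butterworth band-pass filters with SciPy; the sampling rate, filter order and the specific corner frequencies chosen (0.5-40 Hz for monitor mode, 0.05-150 Hz for diagnostic mode) are assumptions within the ranges quoted above.

```python
# Software analogue of the monitor-mode and diagnostic-mode ECG filters,
# built as Butterworth band-pass filters. FS and the filter order are
# assumptions; the corner frequencies follow the ranges quoted in the text.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500.0  # sampling rate in Hz (assumed)

def ecg_bandpass(signal, low_hz, high_hz, fs=FS, order=2):
    b, a = butter(order, [low_hz, high_hz], btype="band", fs=fs)
    return filtfilt(b, a, signal)

noisy = np.random.randn(5000)                  # stand-in for a raw ECG trace
monitor = ecg_bandpass(noisy, 0.5, 40.0)       # routine rhythm monitoring
diagnostic = ecg_bandpass(noisy, 0.05, 150.0)  # preserves ST segments
```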
 It is the gold standard for the evaluation of cardiac arrhythmias.
 It guides therapy and risk stratification for patients with suspected acute myocardial infarction.
 It helps detect electrolyte disturbances (e.g. hyperkalemia and hypokalemia).
 It allows detection of conduction abnormalities (e.g. right and left bundle branch block).
 It is used as a screening tool for ischemic heart disease during a cardiac stress test.
 It is occasionally helpful with non-cardiac diseases (e.g. pulmonary embolism or hypothermia).

The electrocardiogram does not assess the contractility of the heart, although it can give a rough indication of increased or decreased contractility.
Limb Leads:
Leads I, II and III are the so-called limb leads
because at one time, the subjects of electrocardiography had
to literally place their arms and legs in buckets of salt water
in order to obtain signals for Einthoven's string galvanometer.
They form the basis of what is known as Einthoven's triangle.
Eventually, electrodes were invented that could be placed
directly on the patient's skin. Even though the buckets of salt
water are no longer necessary, the electrodes are still placed
on the patient's arms and legs to approximate the signals
obtained with the buckets of salt water. They remain the first
three leads of the modern 12 lead ECG.
 Lead I is a dipole with the negative (white) electrode on the right arm and the positive (black) electrode on the left arm.
 Lead II is a dipole with the negative (white) electrode on the right arm and the positive (red) electrode on the left leg.
 Lead III is a dipole with the negative (black) electrode on the left arm and the positive (red) electrode on the left leg.

ECG on graph paper:

A typical electrocardiograph runs at a paper speed of 25 mm/s, although faster paper speeds are occasionally used. Each small block of ECG paper is 1 mm². At a paper speed of 25 mm/s, one small block of ECG paper translates into 0.04 s (or 40 ms). Five small blocks make up one large block, which translates into 0.20 s (or 200 ms); hence, there are 5 large blocks per second. A diagnostic-quality 12-lead ECG is calibrated at 10 mm/mV.
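The paper-speed arithmetic above can be checked in a couple of lines:

```python
# At 25 mm/s, one 1 mm block spans 0.04 s (40 ms), one 5 mm block spans
# 0.20 s (200 ms), and there are 5 large blocks per second.
PAPER_SPEED_MM_S = 25.0
small_block_s = 1.0 / PAPER_SPEED_MM_S   # 1 mm  -> 0.04 s
large_block_s = 5.0 / PAPER_SPEED_MM_S   # 5 mm  -> 0.20 s
print(small_block_s, large_block_s, 1.0 / large_block_s)  # 0.04 0.2 5.0
```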
Leads aVR, aVL, and aVF are augmented limb
leads. They are derived from the same three electrodes as
leads I, II, and III. However, they view the heart from
different angles (or vectors) because the negative electrode
for these leads is a modification of Wilson's central terminal,
which is derived by adding leads I, II, and III together and
plugging them into the negative terminal of the EKG
machine. This zeroes out the negative electrode and allows
the positive electrode to become the "exploring electrode" or
a unipolar lead. This is possible because Einthoven's Law
states that I + (-II) + III = 0. The equation can also be written
I + III = II. It is written this way (instead of I + II + III = 0)
because Einthoven reversed the polarity of lead II in
Einthoven's triangle, possibly because he liked to view
upright QRS complexes. Wilson's central terminal paved the
way for the development of the augmented limb leads aVR,
aVL, aVF and the precordial leads V1, V2, V3, V4, V5, and
V6.
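Numerically, the augmented leads follow from leads I and II via Einthoven's law; the formulas below are the standard derivations, added here for illustration since the text does not spell them out.

```python
# Derive the augmented limb leads from leads I and II using Einthoven's law
# (III = II - I). Standard textbook formulas, not taken from this paper.
def augmented_leads(lead_i, lead_ii):
    lead_iii = lead_ii - lead_i        # Einthoven: I + III = II
    avr = -(lead_i + lead_ii) / 2.0
    avl = (lead_i - lead_iii) / 2.0
    avf = (lead_ii + lead_iii) / 2.0
    return avr, avl, avf

print(augmented_leads(0.5, 1.0))  # sample instantaneous voltages in mV
```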
The instrumentation amplifier is built around the TL072 operational amplifier. The TL072 is a high-speed J-FET-input dual operational amplifier incorporating well-matched, high-voltage J-FET and bipolar transistors in a monolithic integrated circuit. The devices feature high slew rates, low input bias and offset currents, and a low offset voltage temperature coefficient.

The instrumentation amplifier amplifies the differential signal from the two electrodes. The amplified ECG wave contains line-frequency, high-frequency and low-frequency noise, so it is fed to a filter section consisting of a high-pass filter and a low-pass filter that remove the high- and low-frequency noise. After filtering, the ECG wave is given to a pulse-width modulation unit, where it is converted to pulse format in order to perform isolation. The isolation is built with an optocoupler and is necessary to isolate the human body from monitoring equipment such as a CRO or PC.
 Lead aVR or "augmented vector right" has the positive
electrode (white) on the right arm. The negative
electrode is a combination of the left arm (black)
electrode and the left leg (red) electrode, which
"augments" the signal strength of the positive electrode
on the right arm.
 Lead aVL or "augmented vector left" has the positive
(black) electrode on the left arm. The negative electrode
is a combination of the right arm (white) electrode and
the left leg (red) electrode, which "augments" the signal
strength of the positive electrode on the left arm.
 Lead aVF or "augmented vector foot" has the positive
(red) electrode on the left leg. The negative electrode is
a combination of the right arm (white) electrode and the
left arm (black) electrode, which "augments" the signal
of the positive electrode on the left leg.
The augmented limb leads aVR, aVL, and aVF are amplified
in this way because the signal is too small to be useful when
the negative electrode is Wilson's central terminal. Together
with leads I, II, and III, augmented limb leads aVR, aVL, and
aVF form the basis of the hexaxial reference system, which
is used to calculate the heart's electrical axis in the frontal
plane.
The ECG pulse-format wave is then given to a PWM demodulation unit, in which the original wave is reconstructed. The wave is then fed to a notch filter section to remove the line-frequency noise.
A notch filter is a band-stop filter with a narrow stopband (high Q factor). Notch filters are used in live sound reproduction (public address (PA) systems) and in instrument amplifiers (especially amplifiers or preamplifiers for acoustic instruments such as acoustic guitar, mandolin or bass) to reduce or prevent feedback, while having little noticeable effect on the rest of the frequency spectrum. Other names include 'band limit filter', 'T-notch filter', 'band-elimination filter' and 'band-rejection filter'.
Typically, the width of the stopband is less than 1 to 2 decades (that is, the highest frequency attenuated is less than 10 to 100 times the lowest frequency attenuated). In the audio band, a notch filter uses high and low frequencies that may be only semitones apart. Here the notch filter is constructed around the TL074 operational amplifier. The noise-free ECG wave is finally amplified and given to a monitoring device such as a CRO or PC.
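A software analogue of such a notch filter is easy to sketch with SciPy; the hardware version here is built around the TL074, so this block is only an illustration, and the sampling rate, line frequency and Q factor are assumptions.

```python
# Software sketch of a narrow band-stop (notch) filter for line-frequency
# noise, using scipy.signal.iirnotch. FS, F_LINE and Q are assumptions.
import numpy as np
from scipy.signal import iirnotch, filtfilt

FS = 500.0        # sampling rate in Hz (assumed)
F_LINE = 60.0     # power-line frequency to reject
Q = 30.0          # quality factor: high Q -> narrow stopband

b, a = iirnotch(F_LINE, Q, fs=FS)
ecg = np.random.randn(5000)          # stand-in for the demodulated ECG wave
clean = filtfilt(b, a, ecg)          # line-frequency component removed
```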
Circuit description:

In this circuit, three electrodes are used to measure the ECG waves: two electrodes are fixed to the left and right hands, and the third is fixed to the right leg, where it acts as the reference ground electrode. Electrodes 1 and 2 pick up the ECG waves from the two hands, which are then given to the instrumentation amplifier section.
LIQUID CRYSTAL DISPLAY (LCD)

Crystalonics dot-matrix (alphanumeric) liquid crystal displays are available in TN and STN types, with or without backlight. The use of C-MOS LCD controller and driver ICs results in low power consumption. These modules can be interfaced with a 4-bit or 8-bit microprocessor/microcontroller.
LCDs use materials which combine the properties of both liquids and crystals. Rather than having a melting point, they have a temperature range within which the molecules are almost as mobile as they would be in a liquid, but are grouped together in an ordered form similar to a crystal.
An LCD consists of two glass panels with the liquid crystal material sandwiched between them. The inner surfaces of the glass plates are coated with transparent electrodes which define the characters, symbols or patterns to be displayed. Polymeric layers are present between the electrodes and the liquid crystal, making the liquid crystal molecules maintain a defined orientation angle.
One polarizer is pasted outside each of the two glass panels. These polarizers rotate the light rays passing through them to a definite angle in a particular direction. When the LCD is in the off state, light rays are rotated by the two polarizers and the liquid crystal such that they come out of the LCD without any orientation, and hence the LCD appears transparent.
When sufficient voltage is applied to the electrodes, the liquid crystal molecules align in a specific direction. The light rays passing through the LCD are rotated by the polarizers, which results in activating/highlighting the desired characters. LCDs are lightweight, with a thickness of only a few millimeters, and since they consume little power they are compatible with low-power electronic circuits and can be powered for long durations.
The built-in controller IC has the following features:
 Corresponds to a high-speed MPU interface (2 MHz)
 80 x 8-bit display RAM (80 characters max.)
 9,920-bit character generator ROM for a total of 240 character fonts: 208 character fonts (5 x 8 dots) and 32 character fonts (5 x 10 dots)
 64 x 8-bit character generator RAM: 8 character fonts (5 x 8 dots) or 4 character fonts (5 x 10 dots)
 Programmable duty cycles: 1/8 for one line of 5 x 8 dots with cursor; 1/11 for one line of 5 x 10 dots with cursor; 1/16 for two lines of 5 x 8 dots with cursor
 Wide range of instruction functions: display clear, cursor home, display on/off, cursor on/off, display character blink, cursor shift, display shift
 Automatic reset circuit, which initializes the controller/driver ICs after power-on
The LCD does not generate light, so light is needed to read the display; with backlighting, reading is possible in the dark. LCDs have a long life and a wide operating temperature range. Changing the display size or the layout size is relatively simple, which makes LCDs more customer-friendly.

The LCDs used exclusively in watches, calculators and measuring instruments are simple seven-segment displays, handling a limited amount of numeric data. Recent advances in technology have resulted in better legibility, more information-displaying capability and a wider temperature range. As a result, LCDs are extensively used in telecommunications and entertainment electronics, and have even started replacing the cathode ray tubes (CRTs) used for the display of text and graphics and in small TV applications.

VI. RELAY & RS232

Relay:

A relay is an electrically operated switch. Current flowing through the coil of the relay creates a magnetic field which attracts a lever and changes the switch contacts. The coil current can be on or off, so relays have two switch positions; they are double-throw (changeover) switches. Relays allow one circuit to switch a second circuit which can be completely separate from the first. For example, a low-voltage battery circuit can use a relay to switch a 230V AC mains circuit. There is no electrical connection inside the relay between the two circuits; the link is magnetic and mechanical.
The coil of a relay passes a relatively large current,
typically 30mA for a 12V relay, but it can be as much as
100mA for relays designed to operate from lower voltages.
Most ICs (chips) cannot provide this current and a transistor
is usually used to amplify the small IC current to the larger
value required for the relay coil. The maximum output
current for the popular 555 timer IC is 200mA so these
devices can supply relay coils directly without amplification.
The relay's common pin is connected to the supply voltage and the normally open (NO) pin to the load. When a high pulse is applied to the base of transistor Q1, it conducts, shorting its collector to its emitter, so zero signal reaches the base of Q2 and the relay stays OFF. When a low pulse is applied to the base of Q1, it turns OFF, 12V reaches the base of Q2, Q2 conducts and the relay turns ON. The common and NO terminals of the relay are then shorted, and the load receives the supply voltage through the relay.
Relays are usually SPDT or DPDT but they can
have many more sets of switch contacts, for example relays
with 4 sets of changeover contacts are readily available. Most
relays are designed for PCB mounting but you can solder
wires directly to the pins providing you take care to avoid
melting the plastic case of the relay. The animated picture
shows a working relay with its coil and switch contacts. You
can see a lever on the left being attracted by magnetism when
the coil is switched on. This lever moves the switch contacts.
There is one set of contacts (SPDT) in the foreground and
another behind them, making the relay DPDT.
Voltage signal from microcontroller | Transistor Q1 | Transistor Q2 | Relay
1 | ON | OFF | OFF
0 | OFF | ON | ON
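The truth table translates directly into code:

```python
# Direct encoding of the table above: a high control signal turns Q1 on,
# which starves Q2's base and leaves the relay OFF; a low signal turns Q2
# on and energizes the relay.
def relay_state(control_signal):
    q1_on = (control_signal == 1)
    q2_on = not q1_on
    return {"Q1": q1_on, "Q2": q2_on, "relay": q2_on}

print(relay_state(1))  # {'Q1': True, 'Q2': False, 'relay': False}
print(relay_state(0))  # {'Q1': False, 'Q2': True, 'relay': True}
```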
RS232-SETUP
Interfacing the hardware with the PC has the following advantages:
 Storing and retrieving data becomes easier.
 Networking can be done, and hence the entire system can be monitored online.
 Access can be user-friendly.
Interfacing the hardware with the PC is done using a MAX232 (RS232) level converter.
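On the PC side, once the MAX232 has shifted the levels, the port can be driven with the pySerial package; the port name, baud rate and one-byte command below are illustrative assumptions, not the project's actual protocol.

```python
# Minimal PC-side sketch of the RS232 link using pySerial. All protocol
# details (port, baud rate, command byte) are assumptions for illustration.
import serial

with serial.Serial(port="COM1", baudrate=9600, timeout=1.0) as link:
    link.write(b"F")          # e.g. a hypothetical one-byte 'forward' command
    reply = link.readline()   # read a status line from the microcontroller
    print(reply)
```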
The MAX220-MAX249 family of line drivers/receivers is intended for all EIA/TIA-232E and V.28/V.24 communications interfaces, particularly applications where ±12V is not available. These parts are especially useful in battery-powered systems, since their low-power shutdown mode reduces power dissipation to less than 5 µW. The MAX225, MAX233, MAX235 and MAX245/MAX246/MAX247 use no external components and are recommended for applications where printed circuit board space is critical.
The relay's switch connections are usually labeled COM, NC and NO:
 COM = Common; always connect to this, it is the moving part of the switch.
 NC = Normally Closed; COM is connected to this when the relay coil is off.
 NO = Normally Open; COM is connected to this when the relay coil is on.
Features:
 Operate from a single +5V power supply (+5V and +12V for the MAX231/MAX239)
 Low-power receive mode in shutdown (MAX223/MAX242)
 Meet all EIA/TIA-232E and V.28 specifications
 Multiple drivers and receivers
 3-state driver and receiver outputs
 Open-line detection (MAX243)
ZIGBEE
The mission of the ZigBee Working Group is to bring about
the existence of a broad range of interoperable consumer
devices by establishing open industry specifications for
Circuit description:
This circuit is designed to control the load. The load may
be motor or any other load. The load is turned ON and OFF
through relay. The relay ON and OFF is controlled by the
pair of switching transistors (BC 547). The relay is connected
in the Q2 transistor collector terminal. A Relay is nothing but
electromagnetic switching device which consists of three
pins. They are Common, Normally close (NC) and normally
open (NO).
unlicensed, untethered peripheral, control and entertainment
devices requiring the lowest cost and lowest power
consumption communications between compliant devices
anywhere in and around the home.
There are three different ZigBee device types that operate on
these layers in any self-organizing application network.
These devices have 64-bit IEEE addresses, with option to
enable shorter addresses to reduce packet size, and work in
either of two addressing modes – star and peer-to-peer.
The ZigBee specification is a combination of HomeRF Lite
and the 802.15.4 specification. The spec operates in the
2.4GHz (ISM) radio band - the same band as 802.11b
standard, Bluetooth, microwaves and some other devices. It
is capable of connecting 255 devices per network. The
specification supports data transmission rates of up to 250
Kbps at a range of up to 30 meters. ZigBee's technology is
slower than 802.11b (11 Mbps) and Bluetooth (1 Mbps) but
it consumes significantly less power.
1. The ZigBee coordinator node: There is one, and only one,
ZigBee coordinator in each network to act as the router to
other networks, and can be likened to the root of a (network)
tree. It is designed to store information about the network.
2. The full function device (FFD): The FFD is an intermediary router transmitting data from other devices. It needs less memory than the ZigBee coordinator node and entails lower manufacturing costs. It can operate in all topologies and can act as a coordinator.
ZigBee General Characteristics:
1. Dual PHY (2.4 GHz and 868/915 MHz)
2. Data rates of 250 kbps (@ 2.4 GHz), 40 kbps (@ 915 MHz), and 20 kbps (@ 868 MHz)
3. Optimized for low duty-cycle applications (< 0.1%)
4. CSMA-CA channel access; yields high throughput and low latency for low duty-cycle devices like sensors and controls
5. Low power (battery life from multiple months to years)
6. Multiple topologies: star, peer-to-peer, mesh
7. Addressing space of up to 18,450,000,000,000,000,000 devices (64-bit IEEE address) and 65,535 networks
8. Optional guaranteed time slot for applications requiring low latency
9. Fully hand-shaked protocol for transfer reliability
10. Range: 50 m typical (5-500 m based on environment)
3. The reduced function device (RFD): This device is just capable of talking in the network; it cannot relay data from other devices. Requiring even less memory (no flash, very little ROM and RAM), an RFD is thus cheaper than an FFD. It talks only to a network coordinator and can be implemented very simply in a star topology.
ZigBee addresses three typical traffic types, and the MAC can accommodate all of them:
1. Data is periodic. The application dictates the rate; the sensor activates, checks for data and deactivates.
2. Data is intermittent. The application, or another stimulus, determines the rate, as in the case of, say, smoke detectors. The device needs to connect to the network only when communication is necessitated. This type enables optimum saving of energy.
3. Data is repetitive, and the rate is fixed a priori. Depending on allotted time slots, called GTS (guaranteed time slots), devices operate for fixed durations.

ZigBee - Typical Traffic Types Addressed:
1. Periodic data
2. Application defined rate (e.g., sensors)
3. Intermittent data
4. Application/external stimulus defined rate (e.g., light switch)
5. Repetitive low latency data
ZigBee is an established set of specifications for wireless
personal area networking (WPAN), i.e. digital radio
connections between computers and related devices.
ZigBee employs either of two modes, beacon or non-beacon
to enable the to-and-fro data traffic. Beacon mode is used
when the coordinator runs on batteries and thus offers
maximum power savings, whereas the non-beacon mode
finds favor when the coordinator is mains-powered.
WPAN Low Rate or ZigBee provides specifications for
devices that have low data rates, consume very low power
and are thus characterized by long battery life. ZigBee makes
possible completely networked homes where all devices are
able to communicate and be controlled by a single unit.
In the beacon mode, a device watches for the coordinator's beacon, which is transmitted periodically; the device locks on and looks for messages addressed to it. When message transmission is complete, the coordinator dictates a schedule for the next beacon so that the device 'goes to sleep'; in fact, the coordinator itself switches to sleep mode.

While using the beacon mode, all the devices in a mesh network know when to communicate with each other. In this mode, the timing circuits necessarily have to be quite accurate, or wake up sooner so as not to miss the beacon. This in turn means an increase in power consumption by the coordinator's receiver, entailing an increase in cost.
Figure 1: ZigBee Network Model [ZigBee: 'Wireless Control
That Simply Works']
For the sake of simplicity without jeopardizing robustness, this particular IEEE standard defines four frame structures and a super-frame structure used optionally only by the coordinator.
The four frame structures are
1 Beacon frame for transmission of beacons
2 Data frame for all data transfers
3 Acknowledgement frame for successful frame
receipt confirmations
4 MAC command frame
These frame structures and the coordinator's super-frame structure play critical roles in data security and transmission integrity.
Figure 2: Beacon Network Communication [ZigBee: 'Wireless Control That Simply Works']
The non-beacon mode is used in systems where devices are 'asleep' nearly all the time, as in smoke detectors and burglar alarms. The devices wake up and confirm their continued presence in the network at random intervals.
On detection of activity, the sensors 'spring to attention', as it were, and transmit to the ever-waiting coordinator's receiver (since it is mains-powered). However, there is a remote chance that a sensor finds the channel busy, in which case the receiver would unfortunately 'miss a call'.
All protocol layers contribute headers and footers to the frame structure, such that the total overhead for each data packet ranges from 15 octets (for short addresses) to 31 octets (for 64-bit addresses).
The coordinator lays down the format of the super-frame for sending beacons every 15.38 ms, or multiples thereof, up to 252 s. This interval is determined a priori, and the coordinator divides the interval between beacons into sixteen time slots of identical width; within these slots, channel access is contention-based. Nonetheless, the coordinator can provide up to seven GTS (guaranteed time slots) per beacon interval for applications requiring contention-free access and better quality of service.
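The interval arithmetic above is easy to verify: doubling the 15.38 ms base beacon period fourteen times lands almost exactly on the quoted 252 s ceiling. A minimal sketch, assuming the interval simply doubles at each step:

```python
# Beacon intervals obtained by repeatedly doubling the 15.38 ms base period
# (an illustrative reading of "every 15.38 ms, or multiples thereof, up to 252 s").
base_ms = 15.38

for n in range(15):  # n = 0 .. 14
    interval_s = base_ms * (2 ** n) / 1000.0
    print(f"n = {n:2d}: beacon interval = {interval_s:9.3f} s")

# n = 14 gives 15.38 ms * 16384 ~ 251.99 s, matching the 252 s ceiling above.
```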
Figure 3: Non-Beacon Network Communication [ZigBee: 'Wireless Control That Simply Works']
The ZigBee Alliance targets applications "across consumer,
commercial, industrial and government markets worldwide".
Unwired applications are highly sought after in many
networks that are characterized by numerous nodes
consuming minimum power and enjoying long battery lives.
The functions of the Coordinator, which usually
remains in the receptive mode, encompass network set-up,
beacon transmission, node management, storage of node
information and message routing between nodes. The
network node, however, is meant to save energy (and so
‘sleeps' for long periods) and its functions include searching
for network availability, data transfer, checks for pending
data and queries for data from the coordinator.
ZigBee technology is designed to suit these applications well, because it enables reduced development costs, fast market adoption, and rapid ROI.
Airbee Wireless Inc has tied up with Radiocrafts AS to deliver "out-of-the-box" ZigBee-ready solutions, the former supplying the software and the latter making the module platforms. With even light-control and thermostat producers joining the ZigBee Alliance, the list is growing healthily and includes big OEM names like HP, Philips, Motorola and Intel.
With ZigBee designed to enable two-way communications, not only will the consumer be able to monitor and keep track of domestic utility usage, but also feed that data to a computer system for analysis.

A recent analyst report issued by West Technology Research Solutions estimates that by the year 2008, "annual shipments for ZigBee chipsets into the home automation segment alone will exceed 339 million units," and will show up in "light switches, fire and smoke detectors, thermostats, appliances in the kitchen, video and audio remote controls, landscaping, and security systems."

Futurists are sure to hold ZigBee up and say, "See, I told you so". The ZigBee Alliance is nearly 200 strong and growing, with more OEMs signing up. This means that more and more products, and eventually all devices and their controls, will be based on this standard. Since wireless personal area networking applies not only to household devices but also to individualized office automation applications, ZigBee is here to stay. It is more than likely the basis of future home-networking solutions.

VII. SOFTWARE IMPLEMENTATION

The figure shows the software-based commands that are issued from the system to operate it. Each command is programmed to collect the signal and transmit it through the transmitter. The software used is Visual Basic. The basic operational steps of the figure may be briefly described as follows: (1) COM port setting, (2) manual setting.

VIII. CONCLUSION

The system can at times be used for driving practice; it offers an easy way to drive a vehicle and can be used by physically challenged persons. It has the potential to change the world's driving and to help protect vehicles from accidents.

REFERENCES

[1] M. A. Mazidi, R. D. McKinlay and D. Causey, PIC Microcontroller and Embedded Systems, Pearson.
[2] Raj Kamal, Embedded Systems: Architecture, Programming and Design, McGraw-Hill.
Design and Development of Electrical Resistance Tomography
to Detect Cracks in the Pipelines
Obay Fares Alashkar*, Venkatratnam Chitturi
Department of Mechatronic Engineering, Asia Pacific University, Bukit Jalil, Malaysia
*[email protected]
Abstract - This paper presents a novel approach to identify and evaluate cracks in pipelines using Electrical Resistance Tomography (ERT). Electrical resistance is used for imaging sub-surface structures, based on voltage measurements made at the surface of the pipeline using electrodes. The electrodes act as a conductivity detector, whereby the conductivity varies as the pipeline thickness changes over time due to corrosion or cracks.
The experimental results show that the developed system is capable of detecting cracks within the pipeline using the neighboring method of current injection. The crack is detected and presented as 2D as well as 3D tomography through LabVIEW software.
Keywords- Electrical resistance tomography (ERT), crack detection, buffer circuit, multiplexing, image reconstruction.
INTRODUCTION

Ensuring the safety of fluid transportation within pipelines in industry is a critical issue, because any damage to pipelines due to cracks or corrosion can cause not only environmental problems, but can also lead to loss of human life. Hence, a way of controlling, managing and limiting the potential risk is the inspection of pipelines in order to identify flaws. A good review of pipeline health monitoring can be found in [1]. There are several techniques for pipeline inspection, including inner pipeline inspection by utilizing robots and inspection of the outer surface of the pipeline through ultrasonic sensors or an Electrical Capacitance Tomography (ECT) system.

LITERATURE REVIEW

On the list of techniques used for outer-surface pipeline inspection is non-destructive testing (NDT) using ultrasonic sensors. The ultrasonic sensors are placed on the outer surface of the pipe. The sensors send as well as receive pulsed waves in the form of sound. A sensor transmits the sound waves through the pipeline while, simultaneously, a receiver receives the echoes after they travel through the pipeline. Finally, an electrical-discharge-machine (EDM) evaluates the signals and detects the cracks in the pipeline. However, the ultrasonic echoes from these structures are usually noisy, as shown in figure 1. Therefore, in order to overcome this issue, a filter is used to reduce the noise of the ultrasonic signal based on wavelet analysis and the least mean squares (LMS) method. The performance of this technique, however, depends on the signal-to-noise ratio (SNR) of the ultrasonic signals [2].

Figure 1: (a) Corrupted ultrasonic signal with SNR of 0 dB (b) De-noised ultrasonic signal with SNR of 19 dB [2]

Another technique used for outer-surface pipeline inspection for corrosion and cracking is the non-invasive Electrical Capacitance Tomography (ECT), which is widely used for industrial process monitoring applications. The ECT system consists of electrodes placed on the outer surface area of the pipeline. The electrodes measure the capacitance across pairs of electrodes and relate this to corrosion or cracks within the pipeline, as shown in figure 2. The electrodes are placed on the circumference of the pipe, and when the first electrode is measuring the capacitance, all remaining electrodes are grounded to act as detector electrodes. The operation continues likewise until the last electrode is excited. Any change in the thickness of the pipeline causes alterations to the capacitance. The capacitance-sensitive field distribution is computed using the finite element method and related to cracks. However, the capacitance changes measured by this method are very small, usually in pico- or even femtofarads [3].

Figure 2: Methodology of measuring the capacitance [3]

One of the techniques used for crack detection in concrete is Electrical Resistance Tomography (ERT). The basic principle of ERT operation is based on current injection and voltage measurement through electrodes placed on the outer surface of the concrete. Using an array of 16 electrodes, an alternating current at 100 Hz is injected into a pair of electrodes and voltages are measured from the remaining electrodes, as shown in figure 3 [4]. These readings are used in the reconstruction of a 2-dimensional (2D) or even 3-dimensional (3D) image of the concrete.
An alternating current (AC) is passed through the insulating layer of the pipeline wall. As the accumulation of electrons on the electrodes (plates) causes the plates to charge and discharge, this allows the AC current to flow through the pipeline wall. Based on the voltage measurements made on the surface of the pipeline, the cracks can be detected. In the presence of a crack, the thickness of the pipeline wall decreases, which in turn causes a decrease in the resistance of the pipeline. As a consequence of the lower resistance, the conductivity goes higher and hence the voltage readings are higher at the location of the cracks (refer to Table 2).
Figure 3: Schematic illustration of the measurement setup in ERT [4]

PROPOSED METHODOLOGY

An ERT system using 8 silver/silver-chloride electrodes is proposed for crack detection in pipelines. These electrodes are non-polarizable in nature, allowing a free flow of electrons across the interface. They also exhibit low noise levels compared to other metallic electrodes [5]. A pipe of 16.5 cm external diameter and 4 mm thickness is considered for the proposed study. The developed ERT system for crack detection in pipelines, along with its main components, is shown in figure 4.

Figure 4: The ERT system design for detecting cracks in pipelines

The ERT system consists of the following:
1. Voltage Controlled Oscillator (VCO): an AD9850 controlled by an Arduino UNO to generate the required frequencies.
2. Voltage and current amplification: a non-inverting operational amplifier built around the LM7171 IC.
3. Multiplexer: a CD4067BE IC interfaced with an Arduino Mega 2560 microcontroller for shifting the currents among the electrodes.
4. LabVIEW: for acquiring voltage measurements and processing the data to detect the cracks.

Voltage-controlled oscillator (VCO)

The AD9850 is chosen for the voltage-controlled oscillator (VCO) circuit, as it can be controlled by a microcontroller and, in addition, uses advanced DDS technology. It is coupled with an internal high-speed, high-performance D/A converter and comparator, generating a range of frequencies up to 40 MHz. Here an Arduino UNO is used as the microcontroller for generating clock signals for the AD9850; hence the AD9850 operates from an accurate reference clock source. The AD9850 generates a spectrally pure, programmable, analog output sine wave [6].

Voltage and current amplification

The output of the AD9850 is 1 Vpp, with a current of 1 mA into a 1 kΩ load. A non-inverting operational amplifier is used, where the output voltage from the AD9850 is applied directly to the non-inverting (+) input terminal of the LM7171 operational amplifier, and the inverting (-) input terminal is connected to the Rf and R2 voltage divider network, as shown in figure 5.

Figure 5: Non-inverting operational amplifier U1 connected to buffer circuit U2.

The closed-loop voltage gain of the non-inverting operational amplifier is given by:

A_v = V_out / V_in = 1 + (R_f / R_2)    (1)

The Rf value used is 3.9 kΩ and R2 is 1 kΩ. For a 1 Vpp input voltage, the output voltage is about 5 Vpp. This amplified signal is then connected to the buffer circuit in order to overcome impedance issues before sending the signal to the multiplexer circuit. The operational amplifier used for both of the above circuits is the LM7171,
as it has a high slew rate of up to 4100 V/µs and a bandwidth of up to 200 MHz [7].
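As a numerical illustration of equation (1) with the quoted component values (a sketch only, not the hardware implementation), the stated output swing follows directly:

```python
# Closed-loop gain of the non-inverting amplifier stage, equation (1):
# Av = Vout/Vin = 1 + Rf/R2, using the component values quoted in the text.
Rf = 3.9e3    # feedback resistor (ohms)
R2 = 1.0e3    # resistor to ground (ohms)
Vin_pp = 1.0  # AD9850 output amplitude (volts peak-to-peak)

Av = 1 + Rf / R2
print(f"Av = {Av:.1f}, Vout = {Av * Vin_pp:.1f} Vpp")  # Av = 4.9, Vout ~ 5 Vpp
```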
Multiplexer

Multiplexers are used for shifting the currents to the predetermined electrode pairs one by one [8]. An Arduino Mega 2560 microcontroller is used for controlling the multiplexer circuit. Three multiplexers have been used: MUX-I1 for the current source, MUX-I2 for the current sink and MUX-V for voltage measurement (refer to figure 4). The neighboring method is used for current injection, and voltage measurement is done with respect to a common ground; differential voltage measurement is not suitable for this application. This is shown in figure 6.

Figure 6: Schematic illustration of the current injection and voltage measurement methods

Image reconstruction

For the image reconstruction, LabVIEW software is used to create an image showing the location of the crack within the pipeline. First, a circle of the pipe radius is created. Then 8 points are located at the edge of the circle, representing the locations of the electrodes on the pipeline. Furthermore, by using the LabVIEW tools, the 2D image can be presented as a 3D image, where the third axis indicates the severity of the crack. The cracks appear as a colored tomographic image, where the color scale reflects the size of the crack and changes based on the depth of the crack.
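The reconstruction itself is implemented in LabVIEW; purely as an illustration of the geometry described above (a circle of the pipe radius with eight electrode points on its edge), a minimal Python sketch under those assumptions:

```python
import math

# Illustrative electrode geometry for the reconstruction described above
# (the paper's actual implementation is in LabVIEW).
pipe_radius_cm = 16.5 / 2   # pipe external diameter is 16.5 cm
num_electrodes = 8

# Place the 8 electrode points evenly on the edge of the circle.
for k in range(num_electrodes):
    angle = 2 * math.pi * k / num_electrodes
    x = pipe_radius_cm * math.cos(angle)
    y = pipe_radius_cm * math.sin(angle)
    print(f"electrode {k + 1}: ({x:+6.2f}, {y:+6.2f}) cm")
```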
TEST RESULTS

For the crack detection test, an already cracked PVC pipe filled with saline water was used. Eight electrodes were placed on the pipeline. The pipeline had two inner cracks at the locations of electrodes number 3 and 4. Using the neighboring method, an input current of 1.53 mA at a frequency of 250 kHz was injected. Individual voltages of the non-current-carrying electrodes were measured in a cyclic manner. Finally, the average voltage readings were obtained and the percentage error of each reading was calculated to identify the cracks. The entire experimental set-up is shown in figure 7.

Figure 7: Experimental set-up for the crack detection.

The specifications of the physical components used are shown in Table 1.

TABLE 1: PHYSICAL SPECIFICATIONS OF THE TEST
Physical Components | Specifications
Number of electrodes | 8
Dimensions of the electrode | 4 cm x 7 cm
Thickness of electrodes | 1 mm
Distance between each electrode | 2.5 cm
Diameter of the pipe | 16.5 cm
Thickness of pipe | 4 mm
Number of cracks in the pipe | 2

The Arduino Mega was used for controlling the current switching through the multiplexer circuit in the LabVIEW environment, as well as for acquiring the voltages from the electrodes not supplied with the current signal. The acquired data are shown in Table 2.

TABLE 2: AVERAGE VOLTAGE MEASUREMENTS USING CRACKED PIPE ON ELECTRODES NUMBER 3 AND 4
Electrode number | Average Output Voltage (V)
1 | 0.8921
2 | 0.9289
3 | 1.1066
4 | 1.2506
5 | 0.7983
6 | 0.7941
7 | 0.7652
8 | 0.8331

A column graph in figure 8 shows the average output voltage at each electrode for the tested pipe.
Figure 8: Column graph of the output voltage versus electrode number for the damaged pipe with cracks at electrode locations 3 and 4.

To validate the results of the developed Electrical Resistance Tomography system, the percentage error of each reading was calculated using the formula below:

Percentage Error = |V_avr − V_N| / V_avr × 100    (2)

where N is the electrode number and V_avr is the average of the voltage readings from the individual electrodes. Based on the results of the percentage errors, the cracks are identified; i.e., if the percentage error is higher than a specified value, it indicates the presence of a crack in the pipeline.

The average voltage is 0.9211 V, and the percentage error of each individual electrode is tabulated in Table 3.

TABLE 3: PERCENTAGE ERRORS OF THE VOLTAGE MEASUREMENTS USING CRACKED PIPE ON ELECTRODES NUMBER 3 AND 4
Electrode number | Percentage error
1 | 3.15
2 | 0.85
3 | 20.14
4 | 35.77
5 | 13.34
6 | 13.79
7 | 16.93
8 | 9.56
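Equation (2) together with the Table 2 voltages is enough to reproduce Table 3; a short sketch of that calculation, with the roughly 17% crack threshold taken from the discussion that follows:

```python
# Reproduce Table 3 from the Table 2 voltages via equation (2):
# Percentage Error = |Vavr - VN| / Vavr * 100
voltages = [0.8921, 0.9289, 1.1066, 1.2506, 0.7983, 0.7941, 0.7652, 0.8331]

v_avr = sum(voltages) / len(voltages)   # ~0.9211 V, as stated in the text
print(f"average voltage: {v_avr:.4f} V")

for n, v_n in enumerate(voltages, start=1):
    error = abs(v_avr - v_n) / v_avr * 100
    flag = "  <-- crack" if error > 17 else ""  # ~17 % threshold (see below)
    print(f"electrode {n}: {error:5.2f} %{flag}")
```

Running this flags electrodes 3 and 4, the two crack locations reported in Table 3.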
By comparing the percentage error values of each measured reading, it was assumed that the percentage error for a crack should be greater than 16.93% (approximately 17%); with regard to the voltage measurements, a crack shows a higher output voltage than the average of the output voltages from all electrodes. The LabVIEW program then constructs the 2D and 3D tomography that indicates the location of the crack based on the above assumption. The 2D and 3D tomography of the cracked pipe at electrodes number 3 and 4 is shown in figure 9.

Figure 9: 2D and 3D tomography of the cracked pipe on electrodes number 3 and 4

It is clearly seen that the crack at electrode number 4 is greater than the crack at electrode number 3.
CONCLUSION

An electrical resistance tomography system was developed in which the voltage-controlled oscillator (VCO) is built from an AD9850 programmed by an Arduino UNO, the voltage and current amplification is done by a non-inverting operational amplifier connected to a buffer circuit using the LM7171 op-amp, the multiplexer circuit uses the CD4067BE IC, and an Arduino Mega 2560 serves as the main controller. The experimental results show that the developed system is capable of detecting cracks within the pipeline by using the neighboring method for current injection, with individual voltages measured at the non-current-carrying electrodes with respect to a common ground. It was observed that the conductivity increases at the outer surface of the pipeline when a crack is present inside the pipe, as the thickness of the pipe wall decreases, lowering the resistance of the surface of the pipeline. However, the resolution of the reconstructed image was not good. The next step is to increase the number of electrodes to improve the quality of the images and hence to identify the severity of the cracks.

REFERENCES

[1] Z. Liu and Y. Kleiner, 'State-of-the-Art Review of Technologies for Pipe Structural Health Monitoring', IEEE Sensors Journal, vol. 12, no. 6, June 2012.
[2] H. Chen and M. Zuo, 'Ultrasonic Material Crack Detection with Adaptive LMS-Based Wavelet Filter', Symposium on Photonics and Optoelectronics, Wuhan, 14-16 August 2009.
[3] A. Wael, 'Simulation analysis of sensitivity for corrosion of pipe wall using electrical capacitance tomography technique', African Journal of Engineering Research, 1(2), pp. 50-51, 2013.
[4] K. Seppanen et al., 'Electrical resistance tomography imaging of concrete', International Conference on Concrete Repair, Cape Town, pp. 571-576, 24-26 November 2008.
[5] A. Botter et al., Introduction to Neural Engineering for Motor Rehabilitation, first edition, Chapter 6 - Surface electromyogram detection, John Wiley and Sons, Inc., 2013.
[6] A. S. Tucker et al., 'Biocompatible, High Precision, Wideband, Improved Howland Current Source with Lead-Lag Compensation', IEEE Transactions on Biomedical Circuits and Systems, vol. 7, no. 1, February 2013.
[7] Y. Mohd et al., 'Front-End Circuit in Electrical Resistance Tomography (ERT) for Two-Phase Liquid and Gas Imaging', Jurnal Teknologi, 70(2), pp. 50-52, 2014.
[8] V. Chitturi et al., 'A Low Cost Electrical Impedance Tomography (EIT) for Pulmonary Disease Modelling and Diagnosis', TAEECE2014, ISBN: 978-0-9891305-4-7, SDIWC, pp. 83-89, 2014.
Hot-Point Probe Measurements for
Aluminium Doped ZnO Films
Benedict Wen-Cheun Au, Kah-Yoong Chan*, Yew-Keong Sin, Zi-Neng Ng, Chu-Liang Lee
Centre for Advanced Devices and Systems, Multimedia University, Persiaran Multimedia,
63100 Cyberjaya, Selangor, Malaysia
*Corresponding Author: [email protected]
Abstract—N-type zinc oxide (ZnO) films were fabricated on glass substrates using the sol-gel spin-coating technique. Aluminium nitrate nonahydrate was used as the source of the aluminium (Al) dopant in the N-type ZnO thin films. A low-cost hot-point probe setup was developed to facilitate measurements on the Al-doped ZnO (AZO) thin films. The effects of Al doping concentration and temperature on the measured voltage were studied and analyzed. At 1 at.% Al doping concentration in ZnO, the thin films showed the highest measured positive voltage compared to higher Al doping concentrations. The measured voltage is highest at a probing temperature of 450 °C, and the hot-point probe measurements revealed that the measured voltage increases with increasing probing temperature.
Keywords-Hot-point probe measurement, sol-gel spin-coating, Al doped ZnO (AZO) films.
INTRODUCTION
In recent years, ZnO has been a hot topic of research due to its promising characteristics, such as light-trapping ability and large exciton binding energy. It is a II-VI semiconductor with a wide bandgap of 3.37 eV [1]. Moreover, ZnO exhibits low resistivity, is abundant in nature and is non-toxic. ZnO exists in two main forms, namely hexagonal wurtzite and cubic zincblende [1]. ZnO has a wide range of applications, such as optoelectronic devices, UV detectors, gas sensors and solar cells [2]. Due to its high optical transmittance in the visible region and high absorbance in the UV region, ZnO-based materials are important for visible-blind UV photon detection [2]. The high exciton binding energy of 60 meV ensures effective excitonic emission up to room temperature. Besides that, ZnO can undergo bandgap engineering to have its bandgap value altered accordingly: doping with magnesium increases the bandgap, while doping with cadmium reduces it [3].

There are many techniques for fabricating ZnO thin films. Some examples are dry-processing pulsed laser deposition, RF sputtering and molecular beam epitaxy, and wet-processing sol-gel techniques [4]. Among all fabrication techniques, sol-gel techniques are used by many researchers because they are low-cost and simple methods of fabricating ZnO films. In addition, dopant incorporation is easy and a large coating area is possible [5].

On the other hand, the conventional hot-point probe setup consists of a digital multimeter, a pair of probes and a soldering station. This setup is a simple and effective way to distinguish between N-type and P-type semiconductors [6]. Heat is supplied to the positive probe, hence the hot probe, while the negative probe is the cold probe. When both probes are connected to the sample, an N-type semiconductor gives a positive readout while a P-type semiconductor gives a negative readout [6].

In this work, a low-cost hot-point probe setup was developed. Aluminium-doped ZnO (AZO) thin films of different doping concentrations were fabricated using the sol-gel spin-coating method. The effects of doping concentration and hot-point probing temperature on the measured voltage of the AZO films were investigated and are discussed in this paper.

EXPERIMENTAL DETAILS

Glass substrates were cleansed in an ultrasonic bath in isopropanol (IPA) and blown dry with a nitrogen nozzle. Zinc acetate was used as the ZnO precursor and aluminium nitrate nonahydrate was used as the Al doping source. Both precursor and dopant were dissolved in IPA, and monoethanolamine was used as a stabilizer. The molarity of the solution was set to 0.5 M. The Al doping concentration was varied from 1 at.% to 4 at.%. The resulting solution was stirred for 2 hours to get a clear and homogeneous transparent sol. AZO thin films were deposited on the glass substrates using the spin-coating method at a speed of 3000 rpm. Finally, the substrates were annealed in ambient air for 1 hour at 450 °C.

Fig. 1 Hot-point probe setup for AZO films measurement.

Fig. 1 shows the developed low-cost hot-point probe setup for the measurement of the fabricated AZO films. It consists of a soldering station, a digital multimeter, a pair of probes, and a pair of copper wires for enhanced heat transfer from the hot probe to the AZO film surface. The soldering iron was applied to the hot probe for 10 minutes and the measured voltages were recorded.
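The readout rule used throughout this paper (a positive voltage for an N-type sample, a negative one for P-type) can be stated compactly; a minimal illustrative helper, with the noise-floor threshold being an assumption rather than part of the setup:

```python
# Hot-point probe readout rule: thermally excited majority carriers diffuse
# from the hot probe to the cold probe, so the sign of the measured voltage
# indicates the carrier type (illustrative helper; threshold is assumed).
def classify_semiconductor(voltage_mV: float, noise_floor_mV: float = 0.1) -> str:
    if voltage_mV > noise_floor_mV:
        return "N-type"
    if voltage_mV < -noise_floor_mV:
        return "P-type"
    return "inconclusive (within noise floor)"

print(classify_semiconductor(8.5))   # the AZO films here read positive -> N-type
```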
RESULTS AND DISCUSSION

Fig. 2 shows the effect of doping concentration on the measured voltage of the AZO thin films at 1 at.%, 2 at.%, 3 at.% and 4 at.% Al doping concentration. The measured voltage was highest at 1 at.% at any given temperature, followed by 2 at.%, 3 at.% and 4 at.%. The measured voltage of the undoped films was insignificant compared to doped ZnO due to its very small value. The Al atoms introduced into the ZnO lattice act as shallow donor impurities that enhance the N-type conductivity of AZO [7]. The Al atoms are ionized into Al3+ ions and subsequently substitute for the Zn2+ ions in the ZnO lattice, at the same time each releasing one electron. This leads to an increase in the conductivity of the AZO thin films [8]. As the doping concentration was increased, the measured voltage decreased accordingly. This is due to the solubility limit in the ZnO lattice: the lattice cannot accommodate an excessive amount of Al atoms, and therefore these neutral Al atoms segregate at the grain boundaries. This causes the formation of carrier traps that trap active Al carriers in the lattice. As a result, the conductivity of AZO decreases [9]. The measured voltages obtained with increasing doping concentration were consistent with Muiva's work [8].

Fig. 2. Effect of doping concentration on measured voltage (measured voltage [mV] against Al doping concentration [at.%] for probing temperatures from 200 °C to 450 °C).

Fig. 3 presents the effect of probing temperature on the AZO thin films. The probing temperature was varied from 200 °C to 450 °C. For undoped ZnO films, the probing temperature hardly had any effect on the measured voltage, and it was therefore insignificant. There is an explanation for the observed voltage trend: when the hot probe is supplied with heat, heat energy is transferred to the majority carriers (electrons) in the AZO films. As a result, the thermally excited electrons diffuse from the hot probe to the cold probe. The movement of electrons across the sample surface induces a voltage across the AZO films [6]. With increasing probing temperature, more electrons are thermally excited. Therefore, more electron movement leads to an increase in the measured voltage of the AZO films [6].

Fig. 3. Effect of probing temperature on AZO thin films (measured voltage [mV] against probing temperature [°C] for AZO films doped at 0 at.% to 4 at.%).
CONCLUSION

AZO thin films of different Al doping concentrations were fabricated on glass substrates using the sol-gel spin-coating method. Moreover, a low-cost hot-point probe setup was developed and deployed to measure the fabricated AZO thin films. The AZO film at 1 at.% doping showed the highest measured voltage due to the minimum of carrier traps in the ZnO lattice. Besides that, the hot-point probe measurements revealed that the highest measured voltage occurred at a probing temperature of 450 °C. This is due to the fact that more electrons diffuse across the sample at high probing temperature.
REFERENCES

[1] A. Janotti and C. G. Van de Walle, "Fundamentals of zinc oxide as a semiconductor," Reports on Progress in Physics, vol. 72, p. 126501, October 2009.
[2] S. V. Mohite and K. Y. Rajpure, "Synthesis and characterisation of Sb doped ZnO thin films for photodetector application," Optical Materials, vol. 36, pp. 833-838, December 2013.
[3] A. Janotti and C. G. Van de Walle, "Native point defects in ZnO," Physical Review B, vol. 76, no. 16, p. 165202, 2007.
[4] H. Mahdhi, Z. Ben Ayadi, S. Alaya, J. L. Gauffier and K. Djessas, "The effects of dopant concentration and deposition temperature on the structural, optical and electrical properties of Ga-doped ZnO thin films," Superlattices and Microstructures, vol. 72, pp. 60-71, April 2014.
[5] Jianguo Lu, Kai Huang, Jianbo Zhu, Xuemei Chen, Xueping Song and Zhaoqi Sun, "Preparation and characterisation of Na-doped ZnO thin films by sol-gel method," Physica B, vol. 405, pp. 3167-3171, April 2010.
[6] G. Golan, A. Axelevitch, B. Gorenstein and V. Manevych, "Hot-probe method for evaluation of impurities concentration in semiconductors," Microelectronics Journal, vol. 37, pp. 910-915, March 2006.
[7] M. G. Wardle, J. P. Goss and P. R. Briddon, "First-principles study of the diffusion of hydrogen in ZnO," Physical Review Letters, vol. 96, p. 205504, 2006.
[8] C. M. Muiva, T. S. Sathiaraj and K. Maabong, "Effect of doping concentration on the properties of aluminium doped zinc oxide thin films prepared by spray pyrolysis for transparent electrode applications," Ceramics International, vol. 37, pp. 555-560, September 2010.
[9] B. Benhaoua, A. Rahal and S. Benramache, "The structural, optical and electrical properties of nanocrystalline ZnO:Al thin films," Superlattices and Microstructures, vol. 68, pp. 38-47, January 2014.
Relative Humidity Sensor Employing Optical Fibers Coated
with ZnO Nanostructures
Z. Harith1,2,5, N. Irawati1,2, M. Batumalay2,4,5, H. A. Rafaie3, G. Yun II5, S. W. Harun2,4, R. M. Nor3, H. Ahmad2,4
1 Institute of Graduate Studies, University of Malaya, 50603 Kuala Lumpur, Malaysia.
2 Photonics Research Centre, University of Malaya, 50603 Kuala Lumpur, Malaysia.
3 Department of Physics, University of Malaya, 50603 Kuala Lumpur, Malaysia.
4 Department of Electrical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia.
5 INTI International University, 71800 Nilai, Malaysia
Abstract - We demonstrate two simple relative humidity sensors using a tapered plastic optical fiber (POF) and a silica microfiber. The POF is tapered by a chemical etching method, whereby the fiber is immersed in acetone and polished with sandpaper to reduce the fiber's waist diameter from 1 mm to about 0.4-0.5 mm. The silica microfiber was fabricated using the flame brushing technique. Both tapered fibers are then coated with zinc oxide (ZnO) nanostructures using a sol-gel immersion method before they are used to sense relative humidity. It is found that the tapered POF performs better than the tapered silica microfiber. The tapered POF based sensor has a linearity and sensitivity of 100 % and 0.01 mV/%, respectively, while the silica microfiber yielded a linearity and sensitivity of 96.7 % and 0.0038 mV/%, respectively.
Keywords - Fiber optic sensor; tapered plastic optical fiber; tapered silica microfiber; relative humidity (RH) sensor; Refractive index (RI); Zinc Oxide (ZnO)
I. INTRODUCTION

Tapered optical fibers are popular for sensing applications as they allow a higher portion of the evanescent wave to interact with the surroundings [1][2]. On the other hand, the need to sense moisture in moisture-sensitive environments, such as semiconductor manufacturing and packaging, has become essential. To date, a number of evanescent wave based sensors have been demonstrated for humidity measurement. For example, Muto et al. demonstrated a humidity sensor based on reversible absorption of water (H2O) from the ambient atmosphere into a porous thin-film interferometer that sits on a tapered plastic optical fiber (POF) [3]. In another work, Gaston et al. (2003) demonstrated a humidity sensor based on the interaction of the evanescent field in side-polished standard single mode fibers (SMFs) with the surrounding ambient [4].

In this paper, two evanescent wave based relative humidity (RH) sensors are demonstrated using a tapered POF and a silica microfiber as probes. The tapered POF and the silica microfiber are obtained by the chemical etching method and the flame brushing technique, respectively. Both fiber probes are coated with ZnO nanostructures as the sensitive material, since its optical properties change in response to the surrounding humidity [5].
II. PREPARATION OF SENSOR PROBES

At first, a tapered POF is prepared based on the chemical etching technique using acetone, de-ionized water and sandpaper. A POF-based sensor was chosen because it shows several advantages, such as ease of handling, mechanical strength, disposability and easy mass production of components and systems compared to silica based fiber [6]. Besides that, POFs stand out for their greater flexibility and resistance to impacts and vibrations, as well as greater coupling of light from the light source into the fiber [7]. In this work, a 1 mm POF with core and cladding refractive indices of 1.492 and 1.402, respectively, is used. The acetone was applied to the POF using a cotton bud and neutralized with de-ionized water. The acetone reacted with the surface of the polymer to form a milky white foam on the outer cladding, which was then removed by polishing with sandpaper. The tapered POF was then cleansed using de-ionized water. These etching, polishing and cleaning processes were repeated until the tapered fiber had a stripped-region waist diameter between 0.4 and 0.5 mm. The total length of the tapered section was 10 mm. The fabricated tapered POF probe was then coated with ZnO nanostructures using the sol-gel immersion method. To prepare the ZnO nanostructures, 0.01 M zinc nitrate hexahydrate (Zn(NO3)2.6H2O) and 0.01 M hexamethylenetetramine (HMTA) were dissolved in 100 ml of deionized water. To deposit the ZnO nanostructures, the prepared tapered POF was immersed and suspended in the solution at 60 °C for 15 hours.

The silica microfiber was fabricated from a standard SMF using the flame brushing technique [8]. An oxy-butane burner was used as the heat source, and the gas pressure was controlled at the lowest level of 5 psi to ensure that the convective air flow from the flame was very low. Prior to the tapering process, a small region of the fiber's protective polymer buffer jacket was stripped, and the fiber was mounted onto a pair of motorized translation stages. During the tapering process, the fiber was stretched out by pulling while being heated by a moving torch to ensure that consistent heat was applied to the uncoated region of the fiber. The repeated heating process produced good uniformity of the microfiber. The transmission spectrum of the microfiber was monitored in real time using an amplified spontaneous emission (ASE) source and an optical spectrum analyzer (OSA). The ZnO nanostructure coating on the silica microfiber was done using the sol-gel immersion method, as previously described.
Both POF and microfiber probes were then characterized using a Field Emission Scanning Electron Microscope (FESEM) to investigate the morphology of the ZnO nanostructures on the tapered fibers. Figs. 1(a) and (b) show the morphology of the ZnO nanostructures coated on the tapered POF and the silica microfiber, respectively. As shown in Fig. 1(a), the ZnO structure on the tapered POF is a nanorod type with a hexagonal cross-section. These nanorods absorb water and increase the sensitivity of the sensor, as reported by Liu et al. and Batumalay et al. [9][10]. For the tapered silica microfiber, as shown in Fig. 1(b), homogeneous particles of ZnO nanostructures can be observed.

Fig. 1: FESEM images of ZnO nanostructures coated on (a) tapered POF (b) silica microfiber
III. EXPERIMENTAL SETUP

Fig. 2 shows the experimental setup for the RH measurement using the tapered POF. The setup comprises a He-Ne light source (wavelength of 633 nm with an average output power of 5.5 mW), an external mechanical chopper, a tapered fiber coated with ZnO nanostructures, a silicon photo-detector, a lock-in amplifier and a computer. The light source was chopped by the mechanical chopper at a frequency of 113 Hz to avoid harmonics of the line frequency, which is around 50 to 60 Hz. The 113 Hz frequency was selected because it gives an acceptable output with greater stability; note that the output voltage stability degrades as the chopper frequency increases.

The 633 nm light is launched into the tapered POF, which is placed in a sealed chamber with a dish filled with saturated salt solution. The sealed chamber is constructed with a hole; the tapered POF is introduced through it into the sealed receptacle and suspended above the saturated salt solutions in order to simulate different values of relative humidity. In the experiment, the performance of the proposed sensor was calibrated for relative humidity in the range of 50 to 70 % using a 1365 data-logging humidity-temperature meter. The output light was sent into an 818-SL Newport silicon photo-detector, and the electrical signal was fed into an SR-510 Stanford Research Systems lock-in amplifier together with the reference signal of the mechanical chopper. The output from the lock-in amplifier was connected to a computer and the signal was processed using Delphi software. The function of the chopper was to match the input signal with the reference signal, in order to permit sensitive detection and remove noise.

Fig. 2: Experimental setup for the proposed RH sensor for POF (He-Ne laser, chopper wheel, tapered fiber sensor in a sealed chamber with saturated salt solution and humidity-temperature sensor, photo-detector, lock-in amplifier and computer).

Fig. 3 shows the experimental setup to measure RH using the silica microfiber coated with ZnO nanostructures as a probe. As shown in the figure, ASE light from an erbium-doped fiber amplifier (EDFA) is launched into the silica microfiber probe placed in a sealed chamber with a dish filled with saturated salt solution, while the output spectrum is monitored using an OSA. The sealed chamber is constructed with a hole; the tapered silica microfiber is introduced through it into the sealed receptacle and suspended above the saturated salt solutions in order to simulate different values of RH. In the experiment, the performance of the proposed sensor was calibrated for relative humidity ranging from 50 to 70 % using the 1365 data-logging humidity-temperature meter.

Fig. 3: Experimental setup for the proposed RH sensor using the silica microfiber (ASE source, tapered fiber sensor in a sealed chamber with saturated salt solution and humidity-temperature sensor, and optical spectrum analyzer).
IV. RESULTS AND DISCUSSION

Fig. 4 shows the variation of the transmitted light from both the tapered POF and the silica microfiber coated with ZnO nanostructures, plotted as output voltage against RH. As shown in the figure, the tapered POF based sensor has a linearity of 100 % and a sensitivity of 0.01 mV/%, while the linearity and sensitivity of the silica microfiber based sensor are 96.75 % and 0.0038 mV/%, respectively. These sensors work based on refractive index changes: the variation of refractive index between core, cladding and sensitive material produces the changes in the output voltage discussed here. The changes shown by both tapered fibers indicate that the ZnO coating has successfully functioned as a sensitive material, and thus the performance of the sensor is significantly improved by the coating.

Fig. 4: Output voltage against relative humidity for the proposed tapered POF and silica microfiber coated with ZnO nanostructures (fitted lines: y = -0.01x + 0.7, R² = 1 for the POF; y = 0.0038x + 0.2299, R² = 0.9355 for the silica microfiber).

As reported by Liu et al. [9], the RI of a ZnO composite changes from 1.698 to 1.718 as RH changes between 10 and 95 %. When the ZnO composite is exposed to a humid environment, rapid surface adsorption of water molecules occurs and causes changes in the optical properties. The RH value increases linearly with the amount of water molecules adsorbed on the ZnO composite, leading to a larger leakage of light [9]. Liu et al. also reported that the increasing number of water molecules causes an increase in both the effective refractive index of the surrounding medium and the absorption coefficient of the ZnO composite surface, leading to a larger leakage of light, or losses [9]. In addition, the interaction between the fiber and the target analyte results in a refractive index change, which serves as an approach for evanescent wave sensing. The higher the portion of the evanescent wave interacting with the surroundings, the more sensitive the fiber becomes to its physical ambience.

The performance comparison of both sensors is summarized in Table 1. As seen in the table, the tapered POF has performed better as an RH sensor than the silica microfiber. The tapered POF coated with ZnO nanostructures exhibited better linearity and sensitivity, at 100 % and 0.01 mV/% respectively, while the tapered silica microfiber coated with ZnO nanostructures shows a linearity of 96.72 % and a sensitivity of 0.0038 mV/%. The differences in the obtained results are most probably due to the different tapering methods applied to the fibers. The diameter of the POF core does not change when the chemical etching method is applied in the tapering process, and the coated ZnO nanostructures act as the external stimulus. The ZnO nanostructures on the tapered region play an important role by causing rapid adsorption of water molecules. Apart from that, the gradual nature of the chemical etching enables much simpler monitoring of the waist diameter. In contrast, the flame brushing technique that was used to taper the silica microfiber evenly reduced both the core and cladding diameters further and altered the refractive index profile. Due to the thinning of the core, light propagating through the tapered region is more distorted.

Table 1: The performance comparison for both RH sensors
Performances | Tapered POF | Tapered Silica Microfiber
Sensitivity | 0.01 mV/% | 0.0038 mV/%
Linearity | 100 % | 96.72 %
Standard deviation | 0.0789 mV | 0.0184 mV
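The sensitivity and linearity figures in Table 1 are just the slope magnitude and goodness of fit of a straight line through the voltage-versus-RH data. As an illustration (using points generated from the fitted lines annotated in Fig. 4, so the real measurement scatter that gives the silica fiber its R² of 0.9355 is absent here), a minimal least-squares sketch:

```python
import numpy as np

# Recover sensitivity (|slope|) and linearity (R^2) from voltage-vs-RH data.
# Points are generated from the fitted lines annotated in Fig. 4; real
# measurements scatter about these lines.
rh = np.array([50.0, 55.0, 60.0, 65.0, 70.0])   # calibration range used
v_pof = -0.01 * rh + 0.7                        # tapered POF fit line
v_silica = 0.0038 * rh + 0.2299                 # silica microfiber fit line

for name, v in (("tapered POF", v_pof), ("silica microfiber", v_silica)):
    slope, intercept = np.polyfit(rh, v, 1)
    r2 = np.corrcoef(rh, v)[0, 1] ** 2
    print(f"{name}: sensitivity = {abs(slope):.4f} mV/%, R^2 = {r2:.4f}")
```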
V. CONCLUSION
Simple sensors are demonstrated and compared using a tapered POF and a tapered silica microfiber. The tapered POF was fabricated using the chemical etching method, while the tapered silica microfiber was fabricated using the flame brushing technique. These tapered fibers were coated with ZnO nanostructures and then used to detect changes in RH. When the tapered region is coated with ZnO, visible nanorod structures can be observed on the POF. These hexagonal cross-section nanorods can increase the area of water adsorption and improve the sensitivity of the sensor. From the data collected, it was found that the performance of the tapered POF is better than that of the silica microfiber. The output voltage of the sensor using the tapered POF coated with ZnO nanostructures shows a better sensitivity of 0.01 mV/% and 100 % slope linearity, compared to the silica microfiber with a sensitivity of 0.0038 mV/% and a linearity of 96.72 %. When the ZnO nanostructures are exposed to a humid environment, rapid surface adsorption of water molecules occurs and causes changes in the optical properties. Any optical change provokes a change in the effective index of the optical fiber, changing its transmission properties.
REFERENCES

[1] Yeo, T. L., Sun, T., & Grattan, K. T. V. (2008). Fibre-optic sensor technologies for humidity and moisture measurement. Sensors and Actuators A: Physical, 144(2), 280-295.
[2] Batumalay, M., Lokman, A., Ahmad, F., Arof, H., Ahmad, H., & Harun, S. W. (2013). Tapered plastic optical fiber coated with HEC/PVDF for measurement of relative humidity. Sensors Journal, IEEE, 13(12), 4702-4705.
[3] Muto, S., Suzuki, O., Amano, T., & Morisawa, M. (2003). A plastic optical fibre sensor for real-time humidity monitoring. Measurement Science and Technology, 14(6), 746.
[4] Gaston, A., Lozano, I., Perez, F., Auza, F., & Sevilla, J. (2003). Evanescent wave optical-fiber sensing (temperature, relative humidity, and pH sensors). Sensors Journal, IEEE, 3(6), 806-811.
[5] Wei, A., Pan, L., & Huang, W. (2011). Recent progress in the ZnO nanostructure-based sensors. Materials Science and Engineering: B, 176(18), 1409-1421.
[6] Rahman, H. A., Harun, S. W., Yasin, M., Phang, S. W., Damanhuri, S. S. A., Arof, H., & Ahmad, H. (2011). Tapered plastic multimode fiber sensor for salinity detection. Sensors and Actuators A: Physical, 171(2), 219-222.
[7] Zubia, J., & Arrue, J. (2001). Plastic optical fibers: An introduction to their technological processes and applications. Optical Fiber Technology, 7(2), 101-140.
[8] Jung, Y., Brambilla, G., & Richardson, D. J. (2009). Optical microfiber coupler for broadband single-mode operation. Optics Express, 17(7), 5273-5278.
[9] Liu, Y., Zhang, Y., Lei, H., Song, J., Chen, H., & Li, B. (2012). Growth of well-arrayed ZnO nanorods on thinned silica fiber and application for humidity sensing. Optics Express, 20(17), 19404-19411.
[10] Batumalay, M., Harith, Z., Rafaie, H. A., Ahmad, F., Khasanah, M., Harun, S. W., ... & Ahmad, H. (2014). Tapered plastic optical fiber coated with ZnO nanostructures for the measurement of uric acid concentrations and changes in relative humidity. Sensors and Actuators A: Physical, 210, 190-196.
GPR Principle for Soil Moisture Measurement
Yap, C. W.1,2, Mardeni, R.1 and Ahmad, N. N.1
1 Multimedia University, Faculty of Engineering, Cyberjaya, Selangor, Malaysia
2 Asia Pacific University of Technology and Innovation, Faculty of Computing, Engineering and Technology, TPM, Bukit Jalil, Kuala Lumpur, Malaysia
Abstract— Soil moisture measurement is a critical area often focused on in soil characterization. This parameter can affect the physical and electromagnetic characteristics of soil, such as density and permittivity, and soil characterization in turn constrains soil applications in the civil, geological and agricultural industries. Unfortunately, a simple and effective non-destructive model for accurate soil moisture measurement has proven challenging to develop. In this article, the concept and development of soil moisture determination via the ground penetrating radar (GPR) principle and the surface reflection method is explained. The system is designed to be used with a standard horn antenna with a sweep frequency of 1.7-2.6 GHz along with a vector network analyzer (VNA). The proposed system can measure the soil moisture of three types of soil samples, namely sand, loamy and clay, with a high degree of accuracy. In this research, the microwave surface reflection method is applied to analyze the effect of soil moisture on the soil's electrical properties using the GPR principle. The results of the research are promising, with high percentages of agreement with the Topp theoretical values: 31% to 61% for sand, 5% to 42% for clay, and 44% to 54% for loamy soil. For validation of the system, a new type of soil is used for measurement, and the result has an accuracy of 93%. By using the proposed models, soil moisture can be easily estimated with minimal data input through a novel GPR surface reflection method.
Keywords- GPR; soil moisture; surface reflection; microwave; radio wave
INTRODUCTION

Immense growth in radar technologies has increased their role in a variety of fields, and ground penetrating radar (GPR) is seen to be gaining traction in an increasing number of applications. The GPR principle has been widely employed in non-destructive tests (NDT) across a variety of fields [1]. The relationship between soil's physical and electrical characteristics is often discussed by researchers, and these properties contribute to many aspects of structure estimation [2]. However, the results are often associated with drawbacks. Other researchers proposed a GPR measurement study that relates the density and attenuation of road pavement slabs over a frequency range of 1.7-2.6 GHz. The experiment was constructed using a signal generator, spectrum analyzer, directional coupler with adapter and a horn antenna [3]. But the drawback of that study was that soil moisture could not be measured.

Jusoh studied the moisture content in mortar near the relaxation frequency and developed an equation from the study [4], in which water content and attenuation in mortar are correlated. The drawback of that research is that it is simple: soil characterization is overlooked and not considered as a variable in the equation [4]. In another study, the permittivity of a material is measured using a network analyzer connected to a GPR antenna and a resonator, but the drawback is that the sample permittivity could not be characterized [5]. Another commonly used method of soil moisture measurement is coring, which involves the physical removal of soil from the ground [1], [3]. After removal from the ground, the soil is brought to the lab for moisture measurement. Unfortunately, coring is time-consuming and destructive to the eco-system.

The study presented here investigates a better alternative for measuring soil moisture via the radio wave reflection method. In this article, we propose a soil moisture model which gives a faster moisture estimation that can benefit agricultural surveyors performing routine soil moisture tests. We expect to enable in-situ soil moisture measurement without damaging the soil eco-system.

ELECTROMAGNETIC PROPERTY OF SOIL

The electromagnetic property of soil is another critical area to be assessed. Each soil type possesses unique characteristics such as permittivity, permeability and conductivity [6]. These characteristics are widely researched as they influence moisture measurement. In keeping with standard research methods, this study is limited to permittivity determination. The notion of the work is that a theoretical value of soil moisture is developed as a benchmark for the measurement results obtained in the lab. This is accomplished by comparing the results with the nominal range of permittivity and the development of an electromagnetic method for radar.

Nominal range of Return Loss

The nominal ranges for sand, loamy and clay soils are investigated. Loamy soil and clayey soil have the closest relative dielectric constants, or permittivities, lying between 3 and 30. The relative dielectric constant of sandy soil is in the upper part of this range, between 10 and 30. These permittivity data, with the corresponding return loss, are used as a benchmark for the study [7].
Theoretical Estimation

In keeping with standard research method, a theoretical basis is established for benchmarking. However, this is a challenging task, as little research has been conducted on determining soil moisture with GPR. Hence, the theoretical method for soil moisture is derived from the TDR method. In 1980, the prominent researcher Topp and his team conducted a study via an electromagnetic method and developed a formula to correlate soil water content with permittivity. The experiments were completed in a laboratory where samples were positioned in a coaxial transmission line and the complex dielectric permittivity was measured from dry soil up to the saturated condition. An empirical model of soil moisture content in terms of dielectric permittivity was introduced, expressed as (1) and (2) [8]:

θv = -5.3 × 10⁻² + 2.92 × 10⁻²ε − 5.5 × 10⁻⁴ε² + 4.3 × 10⁻⁶ε³    (1)

ε = 3.03 + 9.3θv + 146.0θv² + 76.7θv³    (2)

where ε represents the real part of the complex relative permittivity and θv represents the volumetric water content. Rearranging (1) for permittivity provides us with (2), which is known as the standard Topp equation [8].

Researcher Hallikainen furthered the research and proposed a polynomial equation that correlates water content and permittivity, with the addition of a new variable, the soil type [9]:

ε = (a0 + a1S + a2C) + (b0 + b1S + b2C)θv + (c0 + c1S + c2C)θv²    (3)

where a, b and c are polynomial coefficients, S is the sand ratio and C is the clay ratio. In the study, the soil types showed that the dielectric permittivity changes significantly in the lower frequency range, particularly between 1.4 GHz and 5 GHz. According to the study, the permittivity at 1.4 GHz is represented with polynomial coefficients as (4) and (5):

ε' = (2.862 − 0.012S + 0.001C) + (3.803 + 0.462S − 0.341C)θv + (119.006 + 0.5S + 0.633C)θv²    (4)

ε'' = (0.356 − 0.003S − 0.008C) + (5.507 + 0.044S − 0.002C)θv + (17.753 − 0.313S + 0.206C)θv²    (5)

where ε' and ε'' are, respectively, the real and imaginary parts of the complex dielectric permittivity. In this work, the Topp and Hallikainen equations are used as a benchmark for the results. These equations are correlated with the return loss and integrated with the measurements using the GPR principle.
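A small numerical sketch of the Topp relations: evaluating (2) at a chosen water content and then applying (1) to the resulting permittivity approximately recovers the starting value (the two empirical fits are not exact inverses of each other):

```python
# Topp equations (1) and (2): volumetric water content <-> relative permittivity.
def topp_permittivity(theta_v: float) -> float:
    """Equation (2)."""
    return 3.03 + 9.3 * theta_v + 146.0 * theta_v ** 2 + 76.7 * theta_v ** 3

def topp_moisture(eps: float) -> float:
    """Equation (1)."""
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps ** 2 + 4.3e-6 * eps ** 3

theta = 0.20  # 20 % volumetric water content (example value only)
eps = topp_permittivity(theta)
print(f"eps({theta}) = {eps:.2f}")                       # ~11.34
print(f"theta_v({eps:.2f}) = {topp_moisture(eps):.3f}")  # ~0.21, close to 0.20
```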
The permittivity obtained from the Topp equation (2) and the Hallikainen equation (3) is derived further with the GPR method. In the investigation of thin layers in concrete using the reflection of GPR signals and pulse lengths, the permittivity of a material, ε, can be expressed in terms of the reflection coefficient, Γ, by using (6) [1]:

Γ = (1 − √ε) / (1 + √ε)    (6)

In this study, the equipment setup operates over the frequency range 1.7 GHz to 2.6 GHz. This range is within the microwave frequency range of 300 MHz to 300 GHz, where the return loss can be correlated to the reflection coefficient by (7):

RL = −20 log|Γ|    (7)

Following these equations, the permittivity of a material can be converted to return loss. Through this derivation, the permittivity from (2) and (3) can be expressed in terms of return loss. The theoretical determinations are then compared with the measurements from this work.
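Chaining (2), (6) and (7) gives the theoretical return loss for a given water content; a minimal sketch of that conversion:

```python
import math

# Theoretical return loss for a given volumetric water content,
# chaining Topp's equation (2) with equations (6) and (7).
def return_loss_dB(theta_v: float) -> float:
    eps = 3.03 + 9.3 * theta_v + 146.0 * theta_v ** 2 + 76.7 * theta_v ** 3  # (2)
    gamma = (1 - math.sqrt(eps)) / (1 + math.sqrt(eps))                      # (6)
    return -20 * math.log10(abs(gamma))                                      # (7)

for theta in (0.05, 0.10, 0.20, 0.30):
    print(f"theta_v = {theta:.2f}: RL = {return_loss_dB(theta):5.2f} dB")
# RL falls as water content rises, consistent with the trend reported later.
```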
METHODOLOGY

This proposed study was implemented by investigating the properties of the soil samples, collecting data from laboratory experiments, performing analysis and integrating the results with existing research.

Properties of Soil Sample

In this study, three types of soil were selected as samples for the lab experiment. The typical moisture contents for the three types of soil are shown in Table I, where the moisture content, w, is the ratio of the weight of water (g) to the weight of solids (g), expressed as a percentage [7].

TABLE I: SOIL TEXTURE CLASSIFICATION OF SOIL SAMPLES WITH EQUIVALENT DIAMETER SIZE AND TYPICAL MOISTURE CONTENT
Soil Type | Equivalent Diameter Size (mm) | Typical Moisture Content (%)
Sand | 0.05 - 2.00 | 5 - 15
Silt | 0.02 - 0.05 | 5 - 40
Clay | < 0.02 | 10 - 50 (or more)
Soil Characterization
In this study, soil samples are selected and prepared
through characterization process. Soil samples
determined are clay, sand, and loamy, as these three types
of soil are the standard for various different soil
composition.
' = (2.862 – 0.012S + 0.001C) + (3.803 + 0.462S –
0.341)v + (119.006 + 0.5S + 0.633C)v2
(4)
” = (0.356 – 0.003S – 0.008C) + (5.507 + 0.044S –
0.002)v + (17.753 – 0.313S + 0.206)v2
(5)
Soil samples are collected and prepared as laboratory
test objects. The samples are weighed before drying, as
the drying process takes 24 hours in the oven at 110oC per
British Standards [10]. After drying, the soil is weighed
and the moisture content are determined with (8).
where ε’ and ε” are respectively the real part and
imaginary part of complex dielectric permittivity. In this
work, Topp and Hallikainen equations are used as a
benchmark for the result. These equations will be
correlated with the return loss, and integrated with the
measurement using GPR principle.
v 
Permittivity obtained from Topp Equation (2) and
Hallikainen Equation (3) are derived further with GPR
method. In the investigation of thin layers in concrete
using reflection of GPR signals and pulse lengths, the
permittivity of a material, ɛ can be calculated in the
expression of reflection coefficient, Γ by using (6) [1].
Vwater
Vtotal
(8)
where v is volumetric soil moisture, Vwater is the water
content, Vtotal is the total volume content including soil
volume, water volume and air volume.
After the drying process, the prepared soil samples are set aside for soil characterization. In the electromagnetic approach, soil dielectric models are used to relate the physical characteristics of soil to its electrical properties. In this study, the soil physical parameters are determined before the experiment is conducted. The physical parameters of the soil types are determined using the approved sieve test analysis method [10].
The sieving process uses sieve containers with different screen sizes, as shown in Fig. 1. In this process, samples are placed at the top of the stacked sieve containers. The sieve containers are arranged so that the container with the largest screen size stays at the top and the one with the finest screen size stays at the bottom. The stacked containers are placed in the sieve shaker for 10 minutes. During the sieving process, the coarse particles remain at the top, grading progressively down to the finest particles at the bottom. The particles collected at each stage are weighed, and the percentages of clay and sand are determined.
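The sand and clay percentages follow directly from the mass retained at each stage; a small sketch follows, in which the screen sizes and masses are assumed for illustration.

```python
def sieve_percentages(retained_g):
    # Percentage of total sample mass retained at each screen size.
    total = sum(retained_g.values())
    return {size: 100.0 * mass / total for size, mass in retained_g.items()}

# Masses (g) retained per screen size (mm); "pan" collects the finest particles.
print(sieve_percentages({"2.00": 10.0, "0.05": 460.0, "0.02": 20.0, "pan": 10.0}))
```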
Figure 1. Sieve containers used in the sieving process.
The results of the sieving analysis are collected and analyzed, and the soil components are categorized as shown in Table II.
TABLE II. RESULT OF SOIL CHARACTERIZATION

Soil Type    Clay (%)    Sand (%)    Silt (%)
Sand             2           96           2
Clay            50           37          13
Loamy            8           84           8
Referring to Table II, the sand sample contains 96% sand per classification and is regarded as pure sand. The clay sample contains 37% sand and 50% clay. The loamy soil contains 84% sand and 8% clay. This shows a diverse spread of soil components within the soil samples.
MEASUREMENT SETUP
In this work, the experiment is summarized in Fig. 2. The length, width and thickness of the glass container are 0.4 m, 0.6 m and 0.8 m, respectively. Also, h is the soil thickness and d = 0.3 m is the distance from the antenna to the soil surface. The antenna height and sample surface area are calibrated, and optimization is performed in order to comply with the GPR setup requirements.
Equipment Setup
The objective of the experiment is to obtain the soil return loss in dB for each step of soil moisture. The measured data are then used to propose a new soil moisture model. The setup and experiment procedure are as follows.

The experiment is conducted in a laboratory with a horn antenna, a vector network analyzer (VNA), an N-type cable, a glass container, soil samples and a metal sheet. The VNA used is manufactured by Agilent Technologies, model E5062A. The VNA operates in the range of 300 kHz to 3 GHz and supports the horn antenna, which operates in the range of 1.7 GHz to 2.6 GHz. The equipment setup is shown in Fig. 2; the setup is arranged with the distances determined and components placed according to the GPR principle.

Figure 2. Lab measurement setup with standard horn antenna, antenna frame, soil samples, metal plate and VNA.

RESULTS ANALYSIS

Once the soil sample is prepared, the experiment is performed in steps. For every step, 250 cm³ of water is added to the soil sample and the water volume percentage is calculated. Return loss is measured from the VNA and the normalized return loss is determined. The normalized result accounts only for the changes in soil moisture, with all other variables, such as the air-to-sample distance and the cable and antenna impedance, held constant.

Figure 3. Plot of normalized return loss in comparison with Topp and Hallikainen theoretical values.

From Fig. 3, it is shown that return loss decreases as water content increases. The Topp and Hallikainen curves generally agree with each other within a range of 10%. The measurements of the soils, in particular the loamy soil, follow the curve but at a lower return loss; this is due to the different composition of the soil that was used. The measurements of the three types of soil are given by the polynomial equations (9)–(11), in which return loss is expressed in terms of volumetric moisture. For example, in (9), the return loss of sand, RLsand, is expressed in terms of the volumetric moisture, θv; this notation is followed in the subsequent equations.
RLsand = −819.6θv³ + 216.3θv² − 129.5θv + 4.224    (9)
RLclay = −821θv³ + 339.5θv² − 47.59θv + 7.702    (10)

RLloamy = −530.1θv³ + 270.2θv² − 51.88θv + 6.759    (11)
In order to observe the agreement between theoretical data and measurement, the errors against the theoretical values of Topp and Hallikainen are calculated. Clay has the smallest error, 4.93% to 42.01%, when compared to the Topp theoretical values, and 12.75% to 47.56% when compared to the Hallikainen theoretical values. Sand has the largest error in comparison to both Topp and Hallikainen values, at 31.21% to 63.96% and 38.09% to 66.93%, respectively.
A practical approach is developed to measure volumetric moisture in a realistic environment where the composition of the soil is unknown. In this scenario, further work is conducted to develop a General Equation that encompasses all soil types. Return loss, RL, is plotted against volumetric moisture, θv, for the measured data on loamy, sand and clay. The General Equation is formulated based on the best-fit curve over the measured data, as shown in (12). In this equation, the return loss of the General Equation, RLGenEq, is expressed in terms of volumetric moisture, θv.
RLGenEq = −723.5θv³ + 275.3θv² − 37.47θv + 6.228    (12)
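A minimal numerical sketch of how (9)–(12) can be evaluated and inverted, mirroring the inversion a tool like the Matlab GUI described below would perform when the user enters a return loss; the root-selection rule is an assumption for illustration.

```python
import numpy as np

# Cubic coefficients of (9)-(12), highest order first: RL(theta_v) in dB.
RL_POLY = {
    "sand":    [-819.6, 216.3, -129.5, 4.224],   # (9)
    "clay":    [-821.0, 339.5, -47.59, 7.702],   # (10)
    "loamy":   [-530.1, 270.2, -51.88, 6.759],   # (11)
    "general": [-723.5, 275.3, -37.47, 6.228],   # (12)
}

def return_loss(theta_v, soil="general"):
    return np.polyval(RL_POLY[soil], theta_v)

def moisture_from_return_loss(rl_db, soil="general"):
    # Solve the cubic RL(theta_v) = rl_db; keep real roots in [0, 1] and,
    # as an assumed tie-break, return the smallest physically valid one.
    a3, a2, a1, a0 = RL_POLY[soil]
    roots = np.roots([a3, a2, a1, a0 - rl_db])
    valid = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and 0 <= r.real <= 1)
    return valid[0] if valid else None

print(moisture_from_return_loss(return_loss(0.2)))  # recovers ~0.2
```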
MODEL VERIFICATION
A new soil type is created by mixing sand and loamy soil, and this new sample undergoes the same procedure for volumetric moisture determination. The result in Fig. 4 shows that the volumetric moisture measurement is within the range of the new equation, and the General Equation is verified to operate within the allowable error. The prefix M in the legend represents measured data.

Figure 4. The experiment is repeated with the new type of soil and the measured data are plotted against the General Equation.

The performance of the return loss was measured and compared to the General Equation; the new soil type agrees well with it. For soil with less than 21.31% volumetric moisture, the corresponding error is less than 7%. This verifies that the model works with other general soil types.

The verified model is developed into a Graphical User Interface (GUI) on the Matlab platform, as shown in Fig. 5. In this GUI, the user inputs the return loss indicated by the GPR device and the soil moisture is determined by the model. The GUI cross-references the type of soil and determines the bulk density, which is consulted for suitability for agricultural activity [11].

Figure 5. GUI developed in Matlab platform with the model of the General Equation.

Further work is to be performed on site. However, setting up the equipment outdoors requires protection for equipment such as the horn antenna, VNA, computer and cabling. This equipment is sensitive to heat and humidity, and such protection is challenging and costly.
CONCLUSION
In this article, we have presented a novel soil moisture determination method via microwave surface reflection. The results from GPR measurements on clay, loamy and sand soils have been analyzed and found to be in good agreement with the Topp and Hallikainen methods. Optimization and verification were performed on the results for model development, and the results were further verified with a new soil type using the same procedure. The microwave surface reflection method is useful for researchers who wish to evaluate soil moisture using the same parameters discussed in this article. The novel contribution is the development of the soil moisture equation as an application of a non-destructive technique for future engineering applications.
REFERENCES
[1] D. Daniels and M. Skolnik, Radar Handbook, 3rd Ed. USA: McGraw Hill, 2008.
[2] D.J. Daniels et al., "Introduction to subsurface radar," IEEE Proceedings, vol. 135, no. 4, 1988.
[3] J.R. Leon Peters et al., "Ground penetrating radar as a subsurface environmental sensing tool," IEEE, 1984.
[4] M.A. Jusoh et al., "Determination of moisture content in mortar at near relaxation frequency 17GHz," Measurement Science Review, vol. 11, no. 6, pp. 203-206, 2011.
[5] R.S.A. Raja Abdullah et al., "Evaluation of road pavement density using ground penetrating radar," Journal of Environmental Science and Technology, vol. 2, no. 2, pp. 100-111, 2009.
[6] A. Tarantino, M.A. Ridley, and G.D. Toll, "Field measurement of suction, water content, and water permeability," Laboratory and Field Testing of Unsaturated Soils, pp. 139-170, 2009.
[7] S. Ahmed Turk, A. Koksal Hocaoglu, and A. Alexey Vertiy, Subsurface Sensing, Wiley, p. 62, 2011.
[8] G.C. Topp, J.L. Davis, and A.P. Annan, "Electromagnetic determination of soil water content: measurements in coaxial transmission lines," Water Resources Research, vol. 16, no. 3, pp. 574-582, 1980.
[9] T.M. Hallikainen et al., "Microwave dielectric behavior of wet soil - part I: models and experimental observations," IEEE Transactions on Geoscience and Remote Sensing, vol. GE-23, no. 1, pp. 25-34, 1985.
[10] British Standards Institution, British standard methods of test for soils for civil engineering purposes: part 2: classification tests. London: BSI, 1990.
[11] USDA, Soil quality indicators, USDA Natural Resources Conservation Service, 2008. [online] http://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/nrcs142p2_053256.pdf <Accessed 15 February 2015>.
Automatic White Blood Cell Detection in Low Resolution
Bright Field Microscopic Images
Usanee Apijuntarangoon 1,2, Nipon Theera-Umpon1,2,3*, Senior Member, IEEE,
Chatchai Tayapiwattana2,4, and Sansanee Auephanwiriyakul1,2,5, Senior Member, IEEE
1 Biomedical Engineering Program, Faculty of Engineering, Chiang Mai University, Thailand
2 Biomedical Engineering Center, Chiang Mai University, Thailand
3 Department of Electrical Engineering, Faculty of Engineering, Chiang Mai University, Thailand
4 Division of Clinical Immunology, Department of Medical Technology, Faculty of Associated Medical Sciences, Chiang Mai University, Thailand
5 Department of Computer Engineering, Faculty of Engineering, Chiang Mai University, Thailand
* Corresponding author: [email protected]
Abstract— Detecting cells individually in a fluorescence image is rather unreliable. Even though a cell of interest may appear as a color spot, not every spot can be ensured to be a cell. Employing an additional corresponding bright field image can provide better detection reliability, since a technician basically uses the bright field image for cell observation. In this study, we propose an unsupervised approach to automatically detect human white blood cells (WBCs), which are semi-transparent in bright field images, especially in low resolution ones. The experiment was conducted on 20 microscopic images containing 3607 WBCs. Our proposed method is capable of detecting WBCs in bright field images with 89.2% positive predictive value (PPV) and 92.8% sensitivity.
Keywords-white blood cell, bright field, microscopic image, cell detection
INTRODUCTION
Nowadays, there is an abundance of research on automatic cell detection in microscopic images, especially in fluorescence ones. However, counting cells in a fluorescence image individually is rather unreliable, because the cell of interest appears as a color spot but the spot cannot be ensured to be a cell. Fortunately, this problem can be taken care of by using an additional corresponding bright field image. For example, for detection of the T-helper cell, which is a type of white blood cell (WBC), a red fluorescent dye is often used to stain the T-helper cell. Among the WBC population, only the T-helper cell appears as a red spot, while the other types of WBC cannot be seen when the sample is examined under the fluorescence excitation light. Fig. 1 shows two images of a very similar scene captured under different light sources. According to the fluorescence image (Fig. 1(a)), we assume that the 4 spots pointed to by the white arrows (Fig. 1(a) (top)) are T-helper cells. Then, the bright field image (Fig. 1(b)), which consists of 6
WBCs pointed to by black arrows, is used to prove that the color spots in Fig. 1(a) are truly T-helper cells. Comparing Fig. 1(a) with Fig. 1(b), only 3 pairs of spots match between the two images, which means there are 3 T-helper cells (white circles in Fig. 1(a) (bottom) and black circles in Fig. 1(b) (bottom)) and 1 false spot, represented by the white dashed arrow (Fig. 1(a) (bottom) and Fig. 1(b) (bottom)), which is debris. This is the reason why using an additional bright field image can help distinguish T-helper cells from the red spots in fluorescence images more correctly, and thus guarantees the reliability of cell detection in fluorescence images.
Bright field is the simplest technique for cell examination, but it generates a low-contrast image, so it is quite troublesome to detect cells in bright field images because the WBC is rather transparent. Previous studies on cell detection in bright field images can be classified into two categories, i.e., machine learning-based methods and unsupervised methods. The studies [1-4] used the patch-based method [1, 2] or a heuristic method [3, 4] to generate the appropriate input for a machine learning system. However, learning-based methods require a training process, and we want to avoid an imbalanced sample size since our data set contains various numbers of WBCs, debris, trash, and background.
Figure 1: T-helper cells represented by fluorescence and bright field images. Four white arrows in the fluorescence image (a) (top) indicate the suspected T-helper cells and the 6 black arrows in the bright field image (b) (top) represent the WBC population.

Figure 2: The original image (a) contains WBCs. The magnified image of a single cell (b) shows the round shape and obvious boundary. The halo effect indicated by the black arrow can also be seen.
Figure 3: The diagram of our proposed algorithm: bright field image → image appearance improvement → WBC segmentation using multi-gray-scale images → re-segmentation using FCM → touching and overlapping cell separation → WBC selection.
Unsupervised techniques have also been used to detect cells in bright field images [5-7]. Thresholding is a simple method for segmenting WBCs in bright field images [5, 6]. However, it is not applicable to our study, since the contrast between cellular content and background is too low. The study in [7] employed an active contour approach to detect each yeast cell in a dense population based on its outstanding cell boundary. WBCs also have a round shape and a prominent cell membrane (Fig. 2(a)), and sometimes the observed cell is surrounded by a "halo" effect, indicated by the black arrow in Fig. 2(b). So an edge-based method is more suitable for segmenting WBC candidates in our data. The diagram of our proposed algorithm is shown in Fig. 3; the details are presented in the next section.
This paper is organized as follows. Section 2 presents the materials and methods, in which the details of the proposed algorithm are described. Section 3 shows and discusses the results obtained by the algorithm. The conclusion of this paper is drawn in Section 4.
MATERIALS AND METHODS
Sample preparation and image acquisition

The white blood cells (WBCs) were obtained from peripheral blood mononuclear cells (PBMCs). The images were captured using an Olympus DP21 microscope digital camera through an Olympus BX41 fluorescence microscope under visible light.
Proposed method: WBC detection algorithm

1) Image appearance improvement

Dispersed light is always visible in the microscopic image. The characteristic of the dispersed light is that the intensity at the center of the image is brighter and gradually becomes darker toward the image rim. Consequently, the intensity of cells located around the center is higher than that of cells located near the image border. Therefore, image appearance improvement should be done in order to help increase the algorithm performance. There are many techniques for improving image quality in the spatial domain globally, such as histogram equalization [8], contrast stretching [6], and locally morphological contrast enhancement [9, 10]. Since the dispersed light affects the whole image, global enhancement is more suitable. The intensity of each pixel along the dash line (Fig. 4(a)) was plotted (Fig. 4(b)) to show the characteristic of the dispersed light. According to the plot, a bivariate Gaussian function can represent the characteristic of the dispersed light, as demonstrated by the red curve. The dispersed-light-free image can then be generated by multiplying the inverted bivariate Gaussian function with the original image.

Figure 4: The intensity profile along the dash line (a) is plotted to represent the characteristic of the dispersed light, which is evidently demonstrated by the red line (b).
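A minimal sketch of this correction on a grayscale image, assuming the dispersed light is a multiplicative, image-centered bivariate Gaussian; in practice the center and widths would be fitted to profiles like Fig. 4(b), so the sigma values below are assumptions.

```python
import numpy as np

def remove_dispersed_light(gray, sigma_y=0.6, sigma_x=0.6):
    # Model the dispersed light as a bivariate Gaussian centered on the image
    # (sigmas are fractions of the image size; assumed, not fitted).
    h, w = gray.shape
    y, x = np.mgrid[0:h, 0:w]
    g = np.exp(-0.5 * (((y - (h - 1) / 2) / (sigma_y * h)) ** 2
                       + ((x - (w - 1) / 2) / (sigma_x * w)) ** 2))
    # Multiply by the inverted Gaussian so the bright center is attenuated
    # relative to the rim, flattening the background.
    corrected = gray.astype(float) * (g.max() / g)
    return np.clip(corrected, 0, 255).astype(np.uint8)
```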
2) WBC segmentation using multi-gray-scale images

We used three types of gray images, namely NTSC conversion, R channel and chrominance images, in order to increase the opportunity to detect WBC candidates in bright field images. The NTSC gray image (Fig. 5(b)) can properly represent a clearly seen cell membrane and halo ring. The R channel image (Fig. 5(c)) is used to represent the WBC whose boundary is quite unclear but whose color shade is still green. However, some WBCs cannot be displayed properly by the NTSC and R channel images, since they are too bright and have a faded boundary, which causes a low intensity difference between cell and background. The contrast between cell and background intensity is evident when displayed by a chrominance image (Fig. 5(d)). Since the edge of the WBC is clearly seen, an edge-based segmentation method is applied to the NTSC and R channel images, while an intensity-based segmentation method is applied to chrominance images. Each gray image is operated on separately in step 2. Then, all resulting images are merged at the end of step 4.

Figure 5: The magnified RGB images (a), which are cropped from the same image, represent the WBC. Several types of gray scale images, i.e., NTSC (b), R channel (c) and chrominance (d) images, are used together to detect WBCs.
a) Edge-based WBC candidate detection: NTSC and R channel images

The Canny method [11] is applied to find edges in the image obtained from step 1. We assume that a cell whose boundary is clearly seen normally has a closed edge, which can easily be drawn out by performing a hole filling operation. However, some of these cells have open contours, so a morphological dilation operation should be applied for gap closing before filling the holes. The images obtained from the double filling step are combined after discarding objects whose size is out of the range of 100 to 1000 pixels; this estimated range of values gave good results for our data. The remaining objects will be segmented again using fuzzy c-means clustering, which is explained later in step 3.
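A sketch of this edge-based candidate detection, assuming an 8-bit grayscale input; the dilation radius is an assumption, while the 100-1000 pixel size range comes from the text above.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import feature, morphology

def edge_based_candidates(gray, min_px=100, max_px=1000):
    edges = feature.canny(gray)                       # Canny edges [11]
    filled = ndi.binary_fill_holes(edges)             # cells with closed contours
    # Dilate to close small gaps in open contours, then fill again.
    dilated = morphology.binary_dilation(edges, morphology.disk(2))  # radius assumed
    candidates = filled | ndi.binary_fill_holes(dilated)
    # Discard objects outside the 100-1000 pixel size range from the text.
    labels, n = ndi.label(candidates)
    sizes = ndi.sum(candidates, labels, index=np.arange(1, n + 1))
    good = np.flatnonzero((sizes >= min_px) & (sizes <= max_px)) + 1
    return np.isin(labels, good)
```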
b) Intensity-based WBC candidate detection: chrominance image
The top-hat transform [9, 12] is a good option for dragging cells out of the uneven background based on intensity. This technique allows the enhancement of an object smaller and brighter than a structuring element using the opening operation [12]. In contrast, the closing operation is intended to enhance darker objects [9]. We used the closing operation, since the intensity of a WBC is darker than the background when represented by the chrominance image. The image is first closed by a smaller structuring element b1 to generate the image h1, which is the top of the hat. Then the larger structuring element b2 is used for closing to generate the brim-of-the-hat image h2. The enhanced image g is then

g = h1 − h2.    (1)
The region of interest is highlighted if it is darker than the background and smaller than b2. Objects with an intensity value below a threshold value T are then kept and re-segmented in step 3.
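A sketch of (1) with two flat structuring elements; the element sizes b1 and b2 and the post-threshold are assumptions. In the chrominance image the dark WBC regions come out as negative values of g and are kept by the threshold T.

```python
import numpy as np
from scipy import ndimage as ndi

def dual_closing_enhancement(chroma, r1=4, r2=12):
    # b1 (smaller) gives the top of the hat h1; b2 (larger) gives the brim h2.
    b1 = np.ones((2 * r1 + 1, 2 * r1 + 1), dtype=bool)   # size assumed
    b2 = np.ones((2 * r2 + 1, 2 * r2 + 1), dtype=bool)   # size assumed
    h1 = ndi.grey_closing(chroma, footprint=b1)
    h2 = ndi.grey_closing(chroma, footprint=b2)
    return h1.astype(float) - h2.astype(float)           # equation (1): g = h1 - h2

# Dark objects smaller than b2 appear as negative values in g; keep g < T.
```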
3) Re-segmentation using the fuzzy c-means (FCM) algorithm

a) FCM algorithm for image segmentation
The fuzzy c-means (FCM) clustering algorithm is used to distinguish the pixels belonging to the WBC from those belonging to the background, based on intra-cell intensity. Thus, the number of clusters is 2, representing the WBC candidate and the background. In order to perform individual segmentation, each candidate in g is cropped using a rectangular window 2 pixels larger than its actual diameter along the horizontal and vertical directions. The pixel intensities of R, G and B of the image inside the cropped area are then used as input.
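A minimal two-cluster FCM sketch operating on the RGB pixels of a cropped candidate patch; the fuzzifier m = 2 and the iteration limits are conventional assumptions, not values from this study.

```python
import numpy as np

def fcm(data, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: data is (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(data)))
    u /= u.sum(axis=0)                               # memberships sum to 1
    for _ in range(max_iter):
        um = u ** m
        centers = um @ data / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(data[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1)).
        p = 2.0 / (m - 1.0)
        u_new = 1.0 / (d ** p * (d ** -p).sum(axis=0))
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers, u

# Usage on a cropped RGB patch of shape (h, w, 3):
#   pixels = patch.reshape(-1, 3).astype(float)
#   centers, u = fcm(pixels)
#   labels = u.argmax(axis=0).reshape(patch.shape[:2])  # WBC vs. background
```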
b) Region of interest (ROI) selection
After applying FCM, each pixel (x, y) in the FCM resulting image F is labeled into 2 clusters: WBC and non-WBC. We assume that the cluster which belongs to the cell content should be located in the middle of the patch image. However, it is possible that a cluster not belonging to the cell content might appear around the center of F, since the intra-cell intensity is not homogeneous. We select the cluster that most probably corresponds to the WBC using the following algorithm:

i. Get the cluster-labeled pixels located around the origin (x0, y0) of the image F, along both the vertical and horizontal directions, into the array C, whose size is 5×5 pixels. The array C can be denoted by a 5×5 subimage centered at (x0, y0).

ii. Consider the number of clusters labeled in C.

Case 1: There is only 1 cluster labeled in C, which means the connected component I(x, y) of the array C, where C ⊆ I(x, y), possibly corresponds to the WBC.
Case 2: There are 2 clusters labeled in C, which means either of them could correspond to the WBC. In this case, we find the connected component of each cluster of array C; the connected component I(x, y) whose area is larger is chosen.

For both cases, a pixel (x, y) is defined as a WBC candidate if I(x, y) satisfies these conditions: none of the pixels in I(x, y) is attached to the image border; or, if I(x, y) is attached to the image border, the attached length must be shorter than half of the image border, with at most one attached side larger than half of the image border.

iii. Return the position (x, y) of I in image F back to (i, j) of image H, whose size is similar to the image g, where H(i, j) = 1.

4) Touching and overlapping cell separation

The touching and overlapping cells that appear in our data set need to be isolated. The watershed algorithm is popular and capable of doing this [14-16]. This algorithm generates the detaching line based on a 3D topography [15], which can be generated from the binary image by the H-minima transform [16] or the distance transform [17]. In our study, the 3D topography of the binary image is generated using the Euclidean distance transform. After that, the watershed algorithm [15], whose initial markers are indicated by the region minima of the distance map, is applied.
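A sketch of this separation step using the Euclidean distance transform and a marker-based watershed; the minimum peak distance is an assumption.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def separate_touching_cells(binary_mask):
    # Euclidean distance transform provides the 3D topography.
    distance = ndi.distance_transform_edt(binary_mask)
    # Markers from local maxima of the distance map (min_distance assumed).
    coords = peak_local_max(distance, labels=binary_mask.astype(int), min_distance=5)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    # Watershed on the negated distance map splits touching cells.
    return watershed(-distance, markers, mask=binary_mask)
```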
5) WBC selection

Roundness is used to evaluate the shape of the isolated WBC candidates as a criterion for selecting which candidates correspond to actual WBCs. A candidate with perimeter P and area A is classified as a WBC if its roundness is higher than the threshold value Tc, i.e.,

decision = 1 if 4πA/P² ≥ Tc, and 0 otherwise.    (2)
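A sketch of the selection rule (2); the threshold Tc is an assumed value here.

```python
import numpy as np
from skimage import measure

def select_wbc(binary_mask, tc=0.8):
    """Keep candidates whose roundness 4*pi*A/P^2 exceeds Tc (tc is assumed)."""
    labels = measure.label(binary_mask)
    keep = np.zeros_like(binary_mask, dtype=bool)
    for region in measure.regionprops(labels):
        if region.perimeter > 0:
            roundness = 4 * np.pi * region.area / region.perimeter ** 2
            if roundness > tc:                      # decision rule (2)
                keep[labels == region.label] = True
    return keep
```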
RESULTS AND DISCUSSION
The experiment was conducted on 20 images containing 3607 WBCs. The input image size is 1200×1600 pixels, captured under a 20x objective lens with various brightness levels. The detection results of the proposed algorithm were compared to the manual detection of an expert for performance evaluation. Sensitivity and positive predictive value (PPV) are used as performance evaluation indices. The sensitivity is the ability to detect WBCs correctly, while the PPV is the proportion of correct detections among the detected results. Applying the segmentation without re-segmentation by FCM yielded a sensitivity and a PPV of 95.4% and 72.7%, respectively. The sensitivity is high but the PPV is not high enough, which means that a high percentage of WBCs can be detected but too many false detections occur, such as on the background. That is the reason why we developed the re-segmentation process to reduce background detection. By applying FCM to eliminate these false detections, based on the contrast between the internal cell content and the background, the PPV vastly increased to 89.2%. Meanwhile, the sensitivity slightly decreased to 92.8%, which is still acceptable.
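For reference, the two indices reduce to simple count ratios; the counts below are placeholders, not the study's tallies.

```python
def detection_metrics(tp, fp, fn):
    # Sensitivity = TP / (TP + FN); PPV = TP / (TP + FP).
    return tp / (tp + fn), tp / (tp + fp)

sens, ppv = detection_metrics(tp=90, fp=11, fn=7)  # placeholder counts
print(f"sensitivity = {sens:.1%}, PPV = {ppv:.1%}")
```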
Fig. 6 shows the results of WBC detection in bright field images. The manual detection of the human expert is drawn in blue (Fig. 6(b)) and the automatic detection of our proposed algorithm is drawn in red (Fig. 6(c)). Our algorithm is capable of detecting the WBCs, especially cells whose boundary is completely round and clearly seen (Fig. 6 (top)). Small cell clusters can also be detected correctly (Fig. 6 (middle)). Moreover, cells whose internal intensity is transparent can be found (Fig. 6 (bottom)). However, excluding debris (red arrow) from the WBC population is still very difficult, since its basic characteristics are rather similar to a WBC, i.e., prominent boundary, size, and internal content. WBCs whose boundary adheres to debris can also hardly be detected (yellow arrow).
Figure 6: Three enlarged bright field images (a) in which WBCs are manually detected (blue contours) by an expert technician (b) and automatically detected (red contours) by our proposed algorithm (c). The arrows in (c) indicate false detections.
CONCLUSION
Our proposed algorithm is capable of detecting WBCs in low resolution bright field images. Very good sensitivity and positive predictive value (PPV) are achieved. We are presently conducting cell fluorescence detection for a medical application, using additional bright field images to guarantee the reliability of the detection method, in which the algorithm proposed in this study is applied. Moreover, we intend to improve the accuracy of our proposed algorithm by increasing the sensitivity and decreasing the false detections caused by debris.
REFERENCES
[1] T. Kazmar, M. Smid, M. Fuchs, B. Luber, and J. Mattes, "Learning cellular texture features in microscopic cancer cell images for automated cell-detection," IEEE Annu. Int. Conf. Eng. Med. Bio. Soc., pp. 49-52, 2010.
[2] X. Long, W. L. Cleveland, and Y. L. Yao, "Effective automatic recognition of cultured cells in bright field images using Fisher's linear discriminant preprocessing," Image Vision Comp., vol. 23, pp. 1203-1213, 2005.
[3] M. Tscherepanow, F. Zollner, and F. Kummert, "Classification of segmented regions in brightfield microscope images," 18th Int. Conf. Pattern Recognition, pp. 972-975, 2006.
[4] G. Lupica, N. M. Allinson, and S. W. Botchway, "Hybrid image processing technique for the robust identification of unstained cells in bright-field microscope images," Int. Conf. Computational Intel. Modelling Cont. Auto., pp. 1053-1058, 2008.
[5] D. Hong, G. Lee, N. C. Jung, and M. Jeon, "Fast automated yeast cell counting algorithm using bright-field and fluorescence microscopic images," Bio. Procedures Online, vol. 15, p. 13, 2013.
[6] A. Korzyńska and M. Iwanowski, "Multistage morphological segmentation of bright-field and fluorescent microscopy images," Opto-Electronics Review, vol. 20, pp. 174-186, 2012.
[7] K. Bredies and H. Wolinski, "An active-contour based algorithm for the automated segmentation of dense yeast populations on transmission microscopy images," Comput. Visual. Sci., vol. 14, pp. 341-352, 2011.
[8] M. E. Plissiti, C. Nikou, and A. Charchanti, "Automated detection of cell nuclei in Pap smear images using morphological reconstruction and clustering," IEEE Trans. Inf. Tech. Biomed., vol. 15, pp. 233-241, 2011.
[9] S. Mukhopadhyay and B. Chanda, "A multiscale morphological approach to local contrast enhancement," Sig. Pro., vol. 80, pp. 685-696, 2000.
[10] C. Wählby, J. Lindblad, M. Vondrus, E. Bengtsson, and L. Björkesten, "Algorithms for cytoplasm segmentation of fluorescence labelled cells," Anal. Cell Pathol., vol. 24, pp. 101-111, 2002.
[11] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Machine Intel., vol. PAMI-8, pp. 679-698, 1986.
[12] I. Smal, M. Loog, W. Niessen, and E. Meijering, "Quantitative comparison of spot detection methods in fluorescence microscopy," IEEE Trans. Med. Imaging, vol. 29, pp. 282-301, 2010.
[13] L. P. Coelho, A. Shariff, and R. F. Murphy, "Nuclear segmentation in microscope cell images: a hand-segmented dataset and comparison of algorithms," IEEE Int. Sym. Biomed. Imaging: From Nano to Macro, pp. 518-521, 2009.
[14] F. Meyer, "Topographic distance and watershed lines," Signal Processing, vol. 38, pp. 113-125, 1994.
[15] J. Chanho and K. Changick, "Segmenting clustered nuclei using H-minima transform-based marker extraction and contour parameterization," IEEE Trans. Biomed. Eng., vol. 57, pp. 2600-2604, 2010.
[16] J. Duan and B. Qinglong, "Cell image processing based on distance transform and regional growth," 5th Int. Conf. Internet Comp. Sci. Eng., pp. 6-9, 2010.
International Conference on Information, System and Convergence Applications
June 24-27, 2015 in Kuala Lumpur, Malaysia
Role of Classification Algorithms in Medical domain: A
Survey
E. Venkatesan¹, T. Velmurugan²
¹Research Scholar, ²Associate Professor,
PG and Research Dept. of Computer Science and Applications, D. G. Vaishnav College, Chennai, India
Email: [email protected], [email protected]
Abstract - In the medical field, there is a variety of diseases. Among them, breast cancer is the most common disease in women worldwide. Breast cancer is an uncontrolled growth of cells in the breast tissue. A tumor is an abnormal cell growth that can be either benign or malignant. Benign tumors are noninvasive, while malignant tumors are cancerous and spread to other parts of the body. Data Mining (DM) is the process of analyzing large quantities of data and summarizing it into useful information. In DM, a number of classification algorithms are used for medical data analysis, and some of these techniques are applied to breast cancer diagnosis. In this survey, the classification algorithms C4.5, ID3, CART and C5.0 are compared with each other using medical data in general and breast cancer data in particular. Finally, among the analyzed algorithms, the best algorithm is identified from different researchers' perspectives.
Keywords - Breast Cancer Analysis, Classification Algorithms, C4.5 Algorithm, ID3 Algorithm, CART Algorithm.
I. INTRODUCTION
Data mining techniques are used to find new, hidden and useful patterns of knowledge from databases. The main mining functions are association rules, classification, prediction and clustering. To find useful patterns, the process of discovering new patterns in large data sets embraces methods such as statistics, artificial intelligence and database management. The advancement of information technology has led to huge data accumulation in recent years in several domains, including banking, retail, telecommunications and medical diagnostics. The data from all such domains include valuable information and knowledge which is often hidden. Processing the vast data and retrieving meaningful information from it is a difficult task, and DM is a magnificent tool for handling it. The term DM, also known as Knowledge Discovery in Databases (KDD), refers to the non-trivial extraction of implicit, previously unknown and potentially useful information from data in large databases [5].

Breast cancer is a very common disease found in women, in which breast masses grow abnormally. A contemporary survey in the United Kingdom showed that breast cancer is not only a problem of young women but also of older women who have crossed the age of sixty or even seventy. Early identification and then prevention with proper treatment of breast cancer can save human lives [7]. Cancer is one such disease that has a wide spread in India; statistically, India is found to have a high rate of increase in cancer patients. The main cause of cancer is a tumor. A tumor is an abnormal cell growth that can be either benign or malignant. Benign tumors are noninvasive, while malignant tumors are cancerous and spread to other parts of the body. With the rapid advancement in information technology, many different data mining techniques and approaches have been applied to complementary medicine for tumors. Cancer data has higher complexity due to the various types of cancer and the various methods of detection [24]. Breast cancer occurs when a malignant tumor originates in the breast. It occurs in both men and women. Breast cancers are potentially life-threatening malignancies that develop in one or both breasts. The interior of the female breast consists mostly of fatty and fibrous connective tissues. Breast cancer is not just a woman's disease; it is quite possible for men to get breast cancer, although it occurs less frequently in men than in women [25].

Classification is a supervised machine learning technique which assigns labels or classes to different objects or groups. Classification is a two-step process. The first step is model construction, defined as the analysis of the training records of a database. The second step is model usage, in which the constructed model is used for classification. The classification accuracy is estimated by the percentage of test samples or records that are correctly classified. Classification has been successfully applied to a wide range of application areas, such as scientific experiments, medical diagnosis, weather prediction, credit approval, customer segmentation, target marketing and fraud detection. Decision tree classifiers are used extensively for the diagnosis of breast tumors in ultrasonic images, ovarian cancer, heart sound diagnosis and so on [12]. Classification methods like decision tree algorithms are widely used in the medical field to classify medical data for diagnosis. A minimal illustration of this two-step process is sketched below.
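The following sketch is not drawn from any of the surveyed papers: it trains a CART-style decision tree (the family scikit-learn implements) on the Wisconsin breast cancer data bundled with scikit-learn, and estimates accuracy on held-out records.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
# Step 1: model construction on the training records.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = DecisionTreeClassifier(criterion="gini").fit(X_tr, y_tr)
# Step 2: model usage; accuracy = fraction of test records correctly classified.
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```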
For classification tasks, the decision tree is a highly effective tool [24]. Classification is one of the most fundamental and important tasks in data mining and machine learning, and many researchers have performed experiments on medical datasets using decision tree classifiers [14]. This article surveys the various research works carried out by different researchers using the ID3, C4.5, CART and C5.0 algorithms, which helps to predict the general and individual performance of patients. The remainder of this paper is organized as follows. In Section II, some of the data mining techniques used for cancer analysis are discussed. Section III explores the use of classification algorithms in the medical domain. The various applications of classification algorithms used for breast cancer analysis are discussed in Section IV. Finally, Section V concludes the survey work.
II. MINING TECHNIQUES FOR CANCER ANALYSIS

Data mining is a powerful field comprising various techniques to analyze recent real-world problems. It converts raw data into useful information in various research fields and finds patterns to decide future trends in the medical field. There are various major data mining techniques that have been developed and used in data mining projects recently for knowledge discovery from databases [16]. Breast cancer is the leading cause of death in women in developing countries, as per the statistics of the National Cancer Institute. Breast cancer can occur in both males and females, but the occurrence is high in females throughout the world. Breast cancer is most frequently discovered as an asymptomatic nodule on a mammogram, so a new breast symptom should be taken seriously by both patients and their doctors.

A research work by K. Rajesh et al. discusses classifying SEER breast cancer data into the groups of "Carcinoma in situ" and "Malignant potential" using the C4.5 algorithm [16]. They used a training set of a random sample of 500 records and then applied the classification rule set obtained to the full breast cancer dataset. They obtained an accuracy of 94% in the training phase and an accuracy of % in the testing phase. They compared the performance of the C4.5 algorithm with other classification techniques. Future enhancement of this work includes improving the C4.5 algorithm to achieve greater classification accuracy.

Shwetakharaya explores the use of data mining techniques for the diagnosis and prognosis of cancer disease in their research work [19], covering various data mining approaches that have been utilized for breast cancer diagnosis and prognosis. Breast cancer diagnosis distinguishes benign from malignant breast lumps, and breast cancer prognosis predicts when breast cancer is likely
to recur in patients who have had their cancers excised. This article summarizes various review and technical articles on breast cancer diagnosis and prognosis, and also focuses on current research being carried out using data mining techniques to enhance breast cancer diagnosis and prognosis.

Subasini A. et al. discussed various data mining approaches that have been utilized for breast cancer diagnosis and prognosis in [21]. In this work, they explore the applicability of the association rule data mining technique to predict the presence of breast cancer, and analyze the performance of conventional supervised learning algorithms, viz. C5.0, ID3, APRIORI, C4.5 and Naive Bayes. Experimental results prove that C5.0 is the best one, with the highest accuracy. Shelly Gupta et al. analyzed the use of data mining techniques for the diagnosis and prognosis of cancer diseases in their research work [17]. This paper provides a study of various technical and review papers on breast cancer diagnosis and prognosis problems, and explores how data mining techniques offer great promise to uncover patterns hidden in the data that can help clinicians in decision making. From this study, it is observed that the accuracy of the various applied data mining classification techniques for diagnosis analysis is highly acceptable and can help medical professionals in decision making for early diagnosis and to avoid biopsy. The prognostic problem is mainly analyzed with ANNs, whose accuracy came out higher in comparison to the other classification techniques applied to the same problem. More efficient models can also be provided for the prognosis problem, for example by inheriting the best features of the defined models. In both cases, the best model can be obtained after building several different types of models, or by trying different technologies and algorithms.

Sujatha G. et al. carried out a survey on the effectiveness of data mining techniques on cancer data sets in [22], summarizing various review and technical articles on tumor and breast cancer data sets analyzed with the help of data mining techniques. A survey of data mining techniques on medical data for finding locally frequent diseases was carried out by Mohammed A. K. et al. [15]. The main focus of this paper is to analyze the data mining techniques required for medical data mining, especially to discover locally frequent diseases such as heart ailments, lung cancer, breast cancer and so on. They evaluate the data mining techniques for finding locally frequent patterns in terms of cost, performance, speed and accuracy, and also compare data mining techniques with conventional methods. A research paper by Vikas Chaurasia and Saurabh Pal presents data mining techniques to predict and resolve breast cancer survivability [29]. This paper presents a diagnosis system for detecting breast cancer based on RepTree, RBF Network and Simple
Logistic. This research demonstrated that Simple Logistic can be used for reducing the dimension of the feature space, and that the proposed RepTree and RBF Network models can be used to obtain fast automatic diagnostic systems for other diseases.
An efficient classifier for the classification of prognostic breast cancer data through data mining techniques is discussed by Shomona Gracia Jacob et al. in [18]. The objective of this paper is to identify an efficient classifier for prognostic breast cancer data. This research work involves designing a data mining framework that incorporates the task of learning patterns and rules that will facilitate the formulation of decisions in new cases. Harshnika Bhasin et al. carried out a study on data mining techniques for breast cancer prediction [6]. This research provides a study of assorted technical and review papers on breast cancer identification and prognosis problems, and explores how data processing techniques offer great promise to uncover patterns hidden in the information that can facilitate clinicians in decision making. From this study it is determined that the accuracy of the numerous applied data processing classification techniques for diagnosis analysis is very acceptable and may facilitate medical professionals in deciding on early identification and avoiding biopsy. Jaya Suji R. et al. discuss breast cancer analysis using logistic regression [8]; the research analyzes head and neck cancer (H & N cancer), the 6th most common cancer in the world. In the proposed work of this research, the datasets are obtained from different diagnostic centers, containing both cancer and non-cancer patients' information, and the collected data is pre-processed for duplicate and missing information.
A study of the detection of lung cancer using data mining classification techniques is surveyed by Ada et al. [1], using neural networks and SVMs for the detection and classification of lung cancer in X-ray chest films. The research work classifies digital X-ray chest films into two categories: normal and abnormal. Different learning experiments were performed on two different data sets, created by means of feature selection, and SVMs trained with different parameters; the results are compared and reported. Due to the high number of false positives extracted, a set of 160 features was calculated and a feature extraction technique was applied to select the best features. The normal or negative films are those characterizing a healthy patient; abnormal or positive ones include types of lung cancer.
A survey analysis is given by Brijain R. Patel et al. [3]. In this paper, they use classification methods to classify problems, aiming to identify the characteristics that indicate the group to which each case belongs. The classical decision tree algorithms ID3, C4.5 and C5.0 have the merits of high classification speed, strong learning ability and simple construction. However, these algorithms are also unsatisfactory in practical application: when using them to classify, there exists the problem of inclining to choose attributes which have more values and overlooking attributes which have fewer values. This paper focuses on the various decision tree algorithms, their characteristics, challenges, advantages and disadvantages.
III. CLASSIFICATION ALGORITHMS IN THE MEDICAL FIELD
In recent years, a number of techniques have been used and applied to analyze other diseases as well; such techniques are illustrated here. A survey was carried out by Visalatchi G. et al. in [30]. Different data mining classification techniques can be used for the identification and prevention of diabetes among patients. This paper describes some classification techniques in data mining to predict diabetes in patients, namely C4.5, SVM, K-NN, Naive Bayes, and Apriori, and compares these five classification algorithms on patient data. One of the algorithms, C4.5, has the highest accuracy, above 85%. These techniques are used in various healthcare units around the world.
Data mining techniques to find out heart diseases are explored by Aqueel Ahmed et al. in [2]. Decision tree algorithms and SVM perform classification more accurately than the other methods discussed in their paper. They reported that, for DM applications in heart disease, the SVM results show 92.1% and 91.0% accuracy for heart disease classification.
Another research work in the same field is discussed in [28]. The aim of that research is to compare decision tree algorithms in classifying tuberculosis patients' responses under randomized clinical trial conditions. The classification of patients' responses to treatment is based on bacteriological and radiological methods. Three decision tree approaches, namely C4.5, Classification and Regression Trees (CART), and Iterative Dichotomiser 3 (ID3), were used for the classification of responses. The results show that the C4.5 decision tree algorithm performs better than the CART and ID3 methods.
A research work by Karthiga et al. gives some comparative analysis in [9]. They discuss how a heart disease database is preprocessed to make the mining process more efficient. The preprocessed data is clustered using clustering algorithms like k-means to cluster relevant data in the database. The Maximal Frequent Itemset Algorithm (MAFIA) is used for mining maximal frequent patterns in the heart disease database. The frequent patterns are classified using the C4.5 algorithm as the training algorithm, using the concept of information entropy. They concluded from their results that the designed prediction system is capable of predicting heart attacks with good accuracy.
The classification of diabetes disease using support vector machines is elaborated by Anuja et al. in [10]. In their proposed work, an SVM with a radial basis function kernel is used for classification. The performance parameters, such as the classification accuracy, sensitivity and specificity of the SVM with the RBF kernel, were found to be high, making it a good option for the classification process. In future, the performance of the SVM classifier can be improved by a feature subset selection process.
A research work by Vanaja S. and K. Rameshkumar, titled Performance Analysis of Classification Algorithms on Medical Diagnoses: a Survey, is presented in [26]. This research work discusses data constraints such as volume and dimensionality problems. The paper also discusses the new features of the C5.0 classification algorithm over C4.5, and the performance of classification algorithms on high-dimensional datasets. In the analysis, the C5.0 algorithm is applied to a high-dimensional dataset; incorporating a suitable feature selection algorithm for better performance is left as future work.
IV. CLASSIFICATION ALGORITHMS FOR BREAST CANCER ANALYSIS

Classification is the process of finding a model or function that describes and distinguishes data classes or concepts, for the purpose of using the model to predict the class of objects whose class label is unknown. In classification, software is built that can learn how to classify data items into groups, and the derived model can be presented as classification rules. Many researchers have applied various algorithms to help health care professionals improve the accuracy of breast cancer diagnosis. An analysis of the SEER dataset for breast cancer diagnosis using the C4.5 classification algorithm is carried out by Rajesh et al. [16]. This research applies the algorithm to the SEER breast cancer dataset to classify patients into either the "Carcinoma in situ" (beginning or pre-cancer stage) or the "Malignant potential" group. Pre-processing techniques were applied to prepare the raw dataset and identify the relevant attributes for classification, and random test samples were selected from the pre-processed data to obtain classification rules.

In recent years, a number of techniques have been used and applied to analyze breast cancer. Yusuff et al. [31] explain breast cancer analysis using logistic regression, performed using variables from the mammogram results, namely mass, architectural distortion, skin thickening, and calcification. Lavanya D. et al. [12] analyze the performance of the decision tree classifier CART, with and without feature selection, in terms of accuracy, time to build a model, and size of the tree on various breast cancer datasets. The results show that a particular feature selection using CART has enhanced the classification accuracy of a particular dataset.

In a review work, Syed Shajahaan S. et al. [25] explore the applicability of decision trees to predict the presence of breast cancer. They also analyze the performance of conventional supervised learning algorithms, viz. Random Tree, ID3, CART, C4.5 and Naive Bayes. Experimental results prove that Random Tree is the best one, with the highest accuracy.

Subasini A. et al. conducted an analysis of breast cancer data in their research work [21]. In this work, they explore the applicability of the association rule data mining technique to predict the presence of breast cancer, and analyze the performance of conventional supervised learning algorithms, viz. C5.0, ID3, APRIORI, C4.5 and Naive Bayes. Experimental results prove that C5.0 is the best one, with the highest accuracy. Lavanya D. et al. carried out a survey in [13]: a hybrid approach, a CART classifier with feature selection and a bagging technique, is considered to evaluate performance in terms of accuracy and time for the classification of various breast cancer datasets. Sujatha et al. [22] published a research paper about the ID3, C4.5 and Simple CART classifiers with ensemble techniques such as boosting and bagging, considered for the comparison of accuracy and time complexity for the classification of two tumor datasets. From the experiments it is observed that C4.5 with bagging is the best algorithm for finding out whether a tumor is benign or malignant on the tumor datasets as available. On increasing the number of instances of the data sets, ID3 with boosting is best for the primary tumor data set and ID3 with bagging is best for the colon tumor data set.
Rajiv Gandhi et al. give an idea of breast cancer analysis in their paper about constructing classification rules using the particle swarm optimization algorithm for breast cancer datasets [4]. In this research study, they cope with the heavy computational effort and the problem of feature subset selection as a pre-processing step, using fuzzy rules based on a genetic algorithm implementing the Pittsburgh approach. The resulting datasets after feature selection were used with the particle swarm optimization algorithm. A survey by Gopala Krishna Murthy Nookala et al. discusses the use of data mining techniques for performance analysis and evaluation in their research work [5]. A comprehensive comparative analysis of 14 different classification algorithms and their performance is evaluated using 3 different cancer data sets. The results indicate that none of the classifiers outperformed all others in terms of accuracy when applied to all 3 data sets. Most of the algorithms performed better as the size of the data set is increased. They recommend that users not stick to a particular classification method, but evaluate different classification algorithms and select the better one.
Kung Jeng Wang et al. in [11] proposed a hybrid method combining the Synthetic Minority Over-Sampling Technique (SMOTE) and the Artificial Immune Recognition System (AIRS) to handle the imbalanced data problem that is prominent in medical data. They used the Wisconsin Breast Cancer (WBC) and Wisconsin Diagnostic Breast Cancer (WDBC) datasets to compare their proposed method with other popular classifiers, i.e., AIRS, CLONALG, C4.5, and BPNN. The comparison is based on accuracy, sensitivity, specificity and G-mean. They confirmed that the proposed method is superior to the other compared classifiers. Based on the experimental results, they conclude that the proposed approach can be used as an efficient method to handle the imbalanced class problem, and that the combination of SMOTE with a classifier algorithm can improve classification performance. Additionally, the proposed method can serve as a supplementary tool for doctors to diagnose malignant and benign tumors early in breast cancer disease.
Varun Kumar [27] analyzes a large database, the Wisconsin Breast Cancer Database, containing 10 attributes and 699 instances, to perform a comparative study of various data mining classification algorithms, namely ID3, K-NN, C4.5, and SVM. They compare these algorithms on various parameters in the classification task of diagnosing a patient's breast cancer as benign or malignant using TANAGRA, a data mining tool. The results of the experiment show that the instance-based learning algorithm K-Nearest Neighbor gives promising classification results with the utmost accuracy rate and robustness.
G. Sujatha et al. present a research paper titled "Evaluation of Decision Tree Classification Datasets" [23]. In this paper, the performance of decision tree induction algorithms on tumor medical data sets is analyzed in terms of accuracy and time complexity. Sivagami et al. present the implementation of supervised learning algorithms for classification, such as the Multilayer Perceptron, Decision Tree induction and Support Vector Machine, in [20]. The prediction accuracy of the classifiers was evaluated using 10-fold cross validation and the results were compared. Finally, it was found that Support Vector Machines have better performance than the other algorithms.
V. CONCLUSIONS

In the medical domain, a number of research works have been carried out which are used to predict diseases and also give suggestions about symptoms and other types of medical treatment. This survey mainly focused on and explored various diseases and illnesses. It is not possible to single out the best algorithm for disease prediction, because a large amount of medical data is available in various repositories. Data mining techniques such as classification algorithms are effectively utilized for the analysis of medical data. In particular, the roles of the classification algorithms ID3, C4.5, CART and C5.0 were taken up for this analysis, in which the C4.5 algorithm plays a vital role in predicting breast cancer diagnosis and prognosis. Hence, this paper concludes that, among the different classification algorithms, C4.5 gives better classification results.
REFERENCES
[1] Ada and Rajneet Kaur, "A Study of Detection of Lung Cancer Using Data Mining Classification Techniques", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 3, Issue 3, 2013, pp. 131-134.
[2] Aqueel Ahmed and Shaikh Abdul Hanna, "Data Mining Techniques to Find out Heart Diseases: An Overview", International Journal of Innovative Technology and Exploring Engineering, Vol. 1, Issue 4, 2012, pp. 18-23.
[3] Brijain R. Patel and Kushik K. Rana, "A Survey on Decision Tree Algorithm for Classification", International Journal of Engineering Development and Research, Vol. 2, Issue 1, 2014, pp. 1-5.
[4] Gandhi Rajiv K., Karnan Marcus, Kannan S., "Classification Rule Construction using Particle Swarm Optimization Algorithm for Breast Cancer Datasets", IEEE Int. Conference on Signal Acquisition and Processing, 2010, pp. 233-237.
[5] Gopala Krishna Murthy Nookala, Bharath Kumar Pottumuthu, Nagaraju Orsu, Suresh B. Mudunuri, "Performance Analysis and Evaluation of Different Data Mining Algorithms used for Cancer Classification", International Journal of Advanced Research in Artificial Intelligence, Vol. 2, Issue 5, 2013, pp. 49-55.
[6] Harshnika Bhasin, "A Study on Data Mining Techniques for Breast Cancer Prediction", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 4, Issue 5, 2014, pp. 427-430.
[7] Hota H. S., "Identification of Breast Cancer Using Ensemble of Support Vector Machine and Decision Tree with Reduced Feature Subset", International Journal of Innovative Technology and Exploring Engineering, Vol. 3, Issue 9, 2014, pp. 99-102.
[8] Jaya Suji R., Rajagopalan S.P., "An Automatic Oral Cancer Classification using Data Mining Techniques", International Journal of Advanced Research in Computer and Communication Engineering, Vol. 2, Issue 10, 2013, pp. 3759-3765.
[9] Karthiga G., C. Preethi, R. Delshi Howsalya Devi, "Heart Disease Analysis System Using Data Mining Techniques", International Journal of Innovative Research in Science, Engineering and Technology, Vol. 3, Issue 3, 2014, pp. 3101-3105.
[10] Kumari, V. Anuja, and R. Chitra, "Classification of Diabetes Disease Using Support Vector Machine", International Journal of Engineering Research and Applications, Vol. 3, Issue 2, 2013, pp. 1797-1801.
[11] Kung Jeng Wang and Angelia Melani Adrian, "Breast Cancer Classification Using Hybrid Synthetic Minority Over-Sampling Technique and Artificial Immune Recognition System Algorithm", International Journal of Computer Science and Electronics Engineering, Vol. 1, Issue 3, 2013, pp. 408-412.
[12] Lavanya D. and Usha Rani K., "Performance Evaluation of Decision Tree Classifiers on Medical Datasets", International Journal of Computer Applications, Vol. 26, Issue 9, 2011, pp. 1-4.
V. CONCLUSIONS
In medical domain there are a number of researchers
carried out by many persons which are used to predict the
diseases and also gives suggestions about the symptoms
and other type of medical treatments in the same field.
75
International Conference on Information, System and Convergence Applications
June 24-27, 2015 in Kuala Lumpur, Malaysia
[13]
[14]
[15]
[16]
[17]
[18]
[19]
[20]
[21]
[22]
[23]
[24]
[25]
[26]
[27]
[28]
[29] VikasChaurasia., SaurabhPal, “Data Mining Techniques To
Predict and Resolve Breast Cancer Survivability”, International
Journal of Computer Science and Mobile Computing, Vol. 3,
Issue 1, 2014, pp. 10-22.
[30] Visalatchi G., Gnanasoundhari S.J., Balamurugan M., “A Survey
on Data Mining Methods and Techniques for Diabetes Mellitus”,
International Journal of
Computer Science and Mobile
Applications, Vol. 2, Issue 2, 2014, pp. 100-105.
[31] Yusuff H., Mohamad N., Ngah U.K., Yahaya A.S., “Breast
Cancer
Analysis
Using
Logistic
Regression”,
International Journal of Research and Reviews in Applied
Sciences , Vol. 10, Issue 1, 2012, pp. 14-22.
Journal of Computer Applications, Vol. 26, Issue 9, 2011, pp. 14.
Lavanya D., Usha Rani K., “Ensemble Decision Tree Classifier
for Breast Cancer Data”, International Journal of Information
Technology Convergence and Services, Vol. 2, Issue 1, 2012,
pp.17-24.
Lavanya D., Usha Rani K., “Analysis of Feature Selection with
Classification Breast Cancer Data Sets”, India Journal of
Computer Science and Engineering, Vol. 2, Issue 5, 2011, pp.
756-763.
Mohammed Abdul Khaleel and Sateesh Kumar Pradham, “A
Survey of Data Mining Techniques on Medical Datafor Finding
Locally Frequent Diseases”, International Journal of Advanced
Research in Computer Science and Software Engineering, Vol.
3, Issue 8, 2013, pp. 149-153.
Rajesh K. and Sheila Anand, “Analysis of SEER Dataset for
Breast Cancer Diagnosis using C4.5 Classification Algorithm”,
International Journal of Advanced Research in Computer and
Communication Engineering, Vol. 1, Issue 2, 2012, pp. 72-77.
Shelly gupta, Dharminder kumar, Anand Sharma, “Data Mining
Classification Techniques Applied for Breast Cancer Diagnosis
and Prognosis”, Indian Journal of Computer Science and
Engineering, Vol. 2, Issue 2, 2011, pp. 188-195.
Shomona Gracia Jacob, R. Geetha Ramani, “Efficient Classifier
for Classification of Prognostic Breast Cancer Data Through
Data Mining Techniques”, Proceedings of The World Congress
on Engineering and Computer Science, Vol. 1, 2012, pp. 24-26.
Shweta Kharya, “Using Data Mining Techniques for Diagnosis
and Prognosis of Cancer Disease”, International Journal of
Computer Science Engineering and Information Technology, Vol
.2, 2012, pp.55-66.
Sivagami P., “Supervised Learning Approach for Breast Cancer
Classification”, International Journal of Emerging Trends and
Technology in Computer Science, Vol. 1, Issue 4, 2012, pp. 115129.
Subasini A., Nirase Fathimaabu backer, Rekha, “Analysis of
classifier to improve Medical diagnosis for Breast Cancer
Detection using Data Mining Techniques”, International Journal
Advanced Networking and Applications Vol. 5 Issue 6, 2014, pp.
2117-2122.
Sujatha.G., K. Usharani., “A Survey on Effectiveness of Data
Mining Techniques on Cancer Data Sets” , International Journal
of Engineering Sciences Research, Vol. 04, Issue 01, 2013, pp.
1298-1304.
Sujatha G., Usha Rani K., ”Evaluation of Decision Tree
Classifiers on Tumor Data sets”, International Journal of
Emerging Trends & Technology in Computer Science, Vol. 2,
Issue 4, 2013, pp. 418-423.
Sujatha G., Usha Rani K., “An For Classification Experimental
Study on Ensemble of Decision Tree Classifiers”, International
Journal of Application or Innovation in Engineering &
Management, Vol. 2, Issue 8, 2013, pp.300-306.
Syed Shajahaan S., Shanthi S., Manochitra V., “Application of
Data Mining Techniquesto Model Breast Cancer Data”,
International journal of Emerging Technology and Advanced
Engineering, Vol. 3, Issue 11, 2013, pp. 363-369.
Vanaja S., and Rameshkumar K., “Performance Analysis of
Classification Algorithms on Medical Diagnoses-A Survey”,
Journal of Computer Science Vol. 11, Issue 1, 2014, pp. 30-52.
Varun Kumar and Luxmi Verma, “Binary Classifiers for Health
Care Databases: A Comparative Study of Data Mining
Classification Algorithms in the Diagnosis of Breast Cancer”,
International Journal of Computer Science and Technology, Vol.
1, Issue 2, 2010, pp. 124-129.
Venkatesan P., Yamuna N R., “Treatment Response
Classification in Randomized Clinical Trials A Decision Tree
Approach”, Indian Journal of Science and Technology, Vol. 6,
Issue 1, 2013, pp. 3912-3917.
A study on feature vectors of heart rate variability and image
of carotid for cardiovascular disease diagnosis
Hyeongsoo Kim, Soo Ho Park, Kwang Sun Ryu, Minghao Piao, Keun Ho Ryu
Database and Bioinformatics Laboratory, College of Electrical and Computer Engineering, Chungbuk National
University, Cheongju, South Korea
{hskim, soohopark, ksryu, bluemhp, khryu}@dblab.chungbuk.ac.kr
Abstract— In this paper, we propose a feature vector extraction methodology based on heart rate variability, ultrasound images of the carotid, and the electrocardiogram signal for the diagnosis of cardiovascular disease. To construct the multiple feature vectors, we extract candidate feature vectors through image processing and measurement of the thickness of the carotid intima-media. As a complementary source, linear and nonlinear feature vectors are also extracted from heart rate variability, a main index for cardiac disorders. The significance of the multiple feature vectors is tested with several machine learning methods such as artificial neural networks (ANN), support vector machines (SVM), decision tree induction, and Bayesian methods. After evaluating the diagnosis/prediction methods using the final chosen feature vectors, the ANN and SVM show about 87 percent and 82 percent, respectively, in terms of diagnostic accuracy. The feature vector analysis and diagnosis/prediction techniques devised in this paper are expected to be used by domestic cardiologists in PC- and web-based systems.

Keywords- feature vector, heart rate variability, carotid intima media, disease diagnosis, data mining
INTRODUCTION
According to the recent World Health Organization (WHO) report on the main causes of death, the number one and two causes are still cardiovascular diseases (CVD) [1]. In the case of Korea, CVD is ranked second among causes of death, and the demographic structure is shifting toward a high incidence of CVD [2]. As the number of deaths of Koreans due to CVD has increased, early diagnosis and the reliability of the diagnosis have been recognized as very important social issues. Nowadays, early diagnosis of CVD has been realized after the introduction of a method measuring carotid arterial intima-media thickness by ultrasound, which can prescreen coronary artery diseases. The thickness of the common carotid artery has been identified as related to CVD in various studies and has become one of the typical cardiovascular risk factors. It is also known as an independent predictor of CVD [3, 4].
The correlation between the autonomic nervous system and CVD mortality, including sudden cardiac death, has been established as a significant factor over the past 30 years. The development of indicators that can quantitatively evaluate the activity of the autonomic nervous system was therefore urgently required, and heart rate variability (HRV) has been one of the most promising such indicators. A wide variety of linear and nonlinear characteristics of HRV have been studied as indicators to improve diagnostic accuracy [5]. Therefore, the carotid artery and HRV diagnostic feature vectors need to be analyzed to ensure the reliability and early diagnosis of CVD.
The steps followed in this paper for the diagnosis of CVD are as follows: (1) diagnostic feature vector extraction, and (2) evaluation of the feature vectors and classification methods for the diagnosis of CVD. The paper is organized as follows. Carotid imaging and HRV analysis through the complex diagnostic feature vector extraction process are explained in Section 2. In Section 3, a feature vector selection process as a pre-processing step and experimental evaluation results using classification/forecasting techniques for disease diagnosis are described. Finally, concluding remarks are given in Section 4.
CAROTID ARTERY AND HRV ANALYSIS

Carotid Artery Scanning and Image Processing

The carotid artery consists of the common carotid artery (CCA), the carotid bifurcation (BIF), the internal carotid artery (ICA), and the external carotid artery (ECA). The intima-media thickness (IMT) of the carotid can be measured at the far-wall CCA region 10 mm proximal to the bifurcation of the carotid, rather than at the ICA or the BIF itself (see Fig. 1). The intima is the high-density band-shaped layer, and the media looks like a band of low brightness between the intima and the adventitia. The adventitia generally has the brightest pixel value and corresponds to the thick part of high brightness below the intima-media. In addition, since the intima is the thinnest of the three layers and its brightness is very similar to that of the media, the thickness of the intima alone is difficult to detect. Thus, in general, the IMT is measured including both the intima and the media.
After selecting an ROI (region of interest) image at least 10 mm long at the area about 10 mm proximal to the transition from the BIF to the CCA, we evaluate the quality of the selected ROI image and remove speckle noise. After obtaining the edge image by applying an edge detection algorithm, the IMT is measured [6].
Fig. 1. Ultrasonographic measurement of intima-media thickness (IMT) in the carotid artery.

Fig. 2. IMT measurement from the carotid image.
After acquisition of the carotid image and IMT measurement, all the diagnostic feature vectors for CVDs are extracted. The feature vector extraction is performed in the following 8 steps [7]:
1. The ROI image of 64×100 pixels is acquired by defining the area between the '+' markers on the image of the carotid IMT in Fig. 2.
2. Each pixel is expressed as a value in the range 0~255.
3. The trend of variation is shown in a graph along a vertical line.
4. 30 vertical lines are randomly selected as samples from the total of 100 vertical lines.
5. The difference between V1 and V2 (V2-V1) is calculated using the 30 random samples of vertical lines.
6. Only the IMT (V2-V1) values within one sigma of the Gaussian distribution are extracted.
7. 4 basic feature vectors are extracted and an average value is calculated.
8. A further 18 feature vectors are extracted through calculation with the 4 basic feature vectors, and the mean value is obtained.
Fig. 3. RRIs extraction process in the ECG signal.

Linear and non-linear feature vectors of HRV

The process starts from the ECG to extract the linear and non-linear indicators of HRV, the main diagnostic indices for cardiovascular diseases such as angina pectoris or acute coronary syndrome. For HRV analysis, one calculates all the RR intervals (RRIs) of the ECG signal using Tompkins' algorithm [8], and time-series data is generated as shown in Fig. 3. The RRI time-series data is then re-sampled at a rate of 4 Hz in order to extract the indicators in the frequency domain, which is one of the linear analysis methods. We extract linear feature vectors in the time and frequency domains and extract non-linear feature vectors of HRV. The literature on HRV feature vector extraction is described in detail in [9].

Poincare plot of nonlinear feature vectors: The Poincare plot may be analyzed quantitatively by fitting an ellipse to the plotted shape. The center of the ellipse is determined by the average RRI. SD1 is the standard deviation of the distances of the points from the axis $y = x$, and SD2 is the standard deviation of the distances of the points from the axis $y = -x + 2\overline{RR}$, where $\overline{RR}$ is the average RRI, as shown in Fig. 4. We also compute the features SD2/SD1 and SD1×SD2, which describe the relationship between SD1 and SD2, in our study.

Non-linear vector, Approximate Entropy (ApEn): Defined as the rate of information production, entropy quantifies the chaos of motion. ApEn quantifies the regularity of a time series and is therefore also called a "regularity statistic". It serves as a simple index for the overall complexity and predictability of a time series. In our study, ApEn quantifies the regularity of the RRI series: the more regular and predictable the RRI series, the lower the value of ApEn. First of all, we reconstruct the RRI time series in the n-dimensional phase space using Takens' theorem [10].

Fig. 4. Diagnosis indicators in a Poincare plot.
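A minimal numpy sketch of these SD1/SD2 descriptors, using the equivalent formulation in terms of successive-difference and successive-sum spreads; the rri values below are illustrative placeholders, not data from the paper.

import numpy as np

def poincare_features(rri):
    """SD1/SD2 of a Poincare plot built from successive RR intervals (ms)."""
    x, y = rri[:-1], rri[1:]                  # plot point (RR_n, RR_{n+1})
    sd1 = np.std((y - x) / np.sqrt(2))        # spread across the y = x axis
    sd2 = np.std((y + x) / np.sqrt(2))        # spread along the y = x axis
    return sd1, sd2, sd2 / sd1, sd1 * sd2     # includes SD2/SD1 and SD1*SD2

rri = np.array([812, 790, 805, 795, 820, 810, 798], dtype=float)
print(poincare_features(rri))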
Takens suggested the time delay method for the reconstruction of the state space as follows: $D_t = [RR(t), RR(t+\tau), \ldots, RR(t+(n-1)\tau)]$, where $n$ is the embedding dimension and $\tau$ is the time delay. In this study, the optimal value of $\tau$ was 10. The mean of the fraction of patterns of length $m$ that resemble the pattern of the same length beginning at interval $i$ is defined by

$$\Phi^m(r) = \frac{1}{N-m+1} \sum_{i=1}^{N-m+1} \ln\!\left[\frac{\text{number of } j \text{ with } \|D_m(j) - D_m(i)\| \le r}{N-m+1}\right]$$

In the above equation, $D_m(i)$ and $D_m(j)$ are state vectors in the embedding dimension $m$. Given $N$ data points, we can define ApEn as

$$\mathrm{ApEn}(m, r, N) = \Phi^m(r) - \Phi^{m+1}(r),$$

where ApEn estimates the logarithmic likelihood that the next intervals after each of the patterns will differ. In general, the embedding dimension $m$ and the tolerance $r$ are fixed at $m = 2$ and $r = 0.2 \times SD$ for physiological time series data.
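These definitions translate directly into the following numpy sketch; m and r follow the values quoted above for physiological data, and the random input series is a placeholder.

import numpy as np

def apen(series, m=2, r_factor=0.2):
    """Approximate entropy of a 1-D series (e.g., RR intervals)."""
    x = np.asarray(series, dtype=float)
    r = r_factor * x.std()                    # tolerance r = 0.2 * SD
    n = len(x)

    def phi(m):
        # Takens-style delay vectors D_m(i) = [x(i), ..., x(i+m-1)]
        d = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of vectors
        dist = np.max(np.abs(d[:, None, :] - d[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)          # fraction of similar patterns
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
print(apen(rng.normal(800, 50, 300)))         # irregular series -> larger ApEn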
TABLE 1. CLINICAL CHARACTERISTICS OF THE SUBJECTS

Group     N    Sex (male/female)   Age (years)
Control   36   20/23               56.70 ± 9.23
AP        51   25/26               59.98 ± 8.41
ACS       13   6/7                 59.08 ± 9.86
Data preprocessing

The vectors extracted from carotid imaging and HRV are evaluated in order to determine whether they can be representative indicators of cardiovascular diseases by applying typical classification/prediction models of machine learning. As a pre-processing step, a feature selection method is used to eliminate information that is improper for disease diagnosis. The procedure is composed of a feature ranking step and a feature selecting step. The selection algorithm evaluates the redundancy among the feature vectors and the prediction capability of each vector.
Feature ranking considers one feature at a time to see how well each feature alone predicts the target class. The features are ranked according to a user-defined criterion; the available criteria depend on the measurement levels of the target class and the feature. In the feature vector selection problem, a ranking criterion is used to find feature vectors that discriminate between healthy and diseased patients. The ranking value of each feature is calculated as (1-p), where p is the p-value of an appropriate statistical test of association between the candidate feature and the target class. Since all diagnostic feature vectors are continuous-valued, we use p-values based on F-statistics; that is, a one-way ANOVA F-test [11] is performed for each continuous feature. We perform feature selection only once for each dataset, and the different classification methods are then evaluated. The results of feature selection and evaluation for the dataset are described in Table 2.

TABLE 2. SELECTED FEATURE VECTORS OF IMT AND HRV

Rank   Feature    Relevance score (1-p)
1      V3         1.000
1      V10        1.000
3      SD2        0.998
4      SDRR       0.997
5      fα         0.986
6      SD2/SD1    0.985
7      SD2        0.979
7      V2         0.979
9      V18        0.965
9      SD2/SD1    0.965
9      H          0.965
12     V8         0.963
13     V21        0.962
13     V23        0.962
15     nLF        0.960
16     nHF        0.958
17     H(Supine)  0.955
17     ApEn       0.955
19     V20        0.954
20     V11        0.952
21     V16        0.951
Hurst exponent (H) non-linear vector: The Hurst exponent H is a measure of the smoothness of a fractal time series, based on the asymptotic behavior of the rescaled range of the process. The Hurst exponent H is defined as log(R/S) / log(T), where T is the duration of the sample of data and R/S is the corresponding value of the rescaled range. If H = 0.5, the behavior of the time series is similar to a random walk; if H < 0.5, the time series covers less distance than a random walk; and if H > 0.5, the time series covers more distance than a random walk.
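The single-window definition quoted above can be sketched as follows; practical estimators usually average R/S over many window sizes, so treat this as illustrative.

import numpy as np

def hurst_rs(series):
    """Single-window rescaled-range estimate H = log(R/S) / log(T)."""
    x = np.asarray(series, dtype=float)
    t = len(x)
    dev = np.cumsum(x - x.mean())             # cumulative deviation from mean
    r = dev.max() - dev.min()                 # range of the cumulated series
    s = x.std()                               # standard deviation
    return np.log(r / s) / np.log(t)

rng = np.random.default_rng(1)
print(hurst_rs(rng.normal(size=1000)))        # roughly 0.5 for white noise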
Exponent α of the 1/f spectrum non-linear vector: Self-similarity is the most distinctive property of fractal signals. Fractal signals usually have a power spectrum of the inverse power-law form $1/f^{\alpha}$, where f is the frequency, since the amplitude of the fluctuations is small at high frequencies and large at low frequencies. The exponent α is calculated by a least-squares fit in the log-log spectrum, after finding the power spectrum of the RRIs. The exponent α is clinically significant because it takes different values for healthy subjects and heart failure patients.
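A sketch of that α estimate: compute the power spectrum of the re-sampled RRI series and fit the log-log slope by least squares. The white-noise input is a placeholder, for which α is near zero.

import numpy as np

def one_over_f_exponent(rri, fs=4.0):
    """Least-squares slope of the log-log power spectrum: S(f) ~ 1/f^alpha."""
    x = np.asarray(rri, dtype=float)
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2
    freq = np.fft.rfftfreq(len(x), d=1.0 / fs)  # rri re-sampled at fs Hz
    keep = freq > 0                             # drop the DC bin before fitting
    slope, _ = np.polyfit(np.log(freq[keep]), np.log(psd[keep]), 1)
    return -slope                               # alpha is the negative slope

rng = np.random.default_rng(2)
print(one_over_f_exponent(rng.normal(size=1024)))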
Verification of feature vectors using classification methods

In order to verify that all 21 vectors extracted from carotid imaging and HRV can serve as diagnostic indicators of CVDs, well-known classification and prediction methods of machine learning are used for evaluation. The classification study generates and compares several models, including an artificial neural network (ANN), a support vector machine (SVM), a Bayesian network (BN), and a decision tree induction model (C4.5). Every classifier uses the source code provided by the Java WEKA project [12]. We apply each classification model to the data set that passed the feature selection step. In our experiment, we build the above classifiers from the preprocessed CVD training data. Accuracy was obtained using stratified 10-fold cross-validation (CV-10) for the three classes.
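The evaluation protocol can be sketched as follows, with scikit-learn standing in for the WEKA implementations: an MLP for the ANN, Gaussian naive Bayes in place of the BN, and an entropy-based decision tree approximating C4.5. The bundled dataset is a placeholder for the KRISS data.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)    # placeholder for the CVD data
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)  # CV-10
models = {
    "ANN": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "BN": GaussianNB(),                       # naive Bayes stands in for the BN
    "C4.5": DecisionTreeClassifier(criterion="entropy", random_state=0),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=cv).mean())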
EVALUATION OF DIAGNOSTIC FEATURE VECTORS
All the data used in our experiment were provided as a sample by the Bio-signal Research Center of KRISS (Korea Research Institute of Standards and Science). In this experiment, after coronary arteriography was performed for each of the 100 cardiovascular patients, the patients showing at least 50% stenosis were categorized as CAD (Coronary Artery Disease), while the other patients, having less than 50% stenosis, were designated as the control group. Further, the CAD patients were re-sorted by cardiologists into two groups, Angina Pectoris (AP) and Acute Coronary Syndrome (ACS). Clinical characteristics of the studied patients are shown in Table 1.
We also used Precision, Recall, F-measure, and Accuracy to evaluate the classifiers' performance on our training sets (see Table 3). Formal definitions of these measures are given below:

$$\mathrm{precision} = \frac{TP}{TP+FP}, \qquad \mathrm{recall} = \frac{TP}{TP+FN},$$

$$F\text{-}\mathrm{measure} = \frac{2 \times \mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}, \qquad \mathrm{Accuracy} = \frac{TP+TN}{TP+FP+TN+FN}$$

TABLE 3. A DESCRIPTION OF SUMMARY RESULTS

Classifier   Class     Precision   Recall   F-measure
ANN          AP        0.823       0.976    0.894
ANN          Control   1.000       0.870    0.930
ANN          ACS       0.846       0.611    0.710
BN           AP        0.748       0.873    0.805
BN           Control   0.688       0.550    0.611
BN           ACS       0.647       0.423    0.512
C4.5         AP        0.809       0.873    0.881
C4.5         Control   0.769       0.750    0.937
C4.5         ACS       0.579       0.423    0.900
SVM          AP        0.880       0.863    0.871
SVM          Control   0.822       0.925    0.871
SVM          ACS       0.565       0.500    0.531

Fig. 5. Accuracy comparison of the classifiers.

The results of the classifiers' accuracy comparison are also shown in Fig. 5. According to the results shown in Table 3 and Fig. 5, ANN and SVM perform very well; they achieve higher accuracy than the BN and C4.5 classifiers.
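The per-class figures in Table 3 can be computed one-vs-rest, exactly following the formulas above; the toy labels in the sketch below are placeholders.

import numpy as np

def per_class_metrics(y_true, y_pred, cls):
    """Precision, recall, F-measure, and accuracy for one class (one-vs-rest)."""
    t, p = np.asarray(y_true) == cls, np.asarray(y_pred) == cls
    tp = np.sum(t & p); fp = np.sum(~t & p)
    fn = np.sum(t & ~p); tn = np.sum(~t & ~p)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, f_measure, accuracy

y_true = ["AP", "AP", "Control", "ACS", "AP", "Control"]
y_pred = ["AP", "Control", "Control", "ACS", "AP", "AP"]
print(per_class_metrics(y_true, y_pred, "AP"))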
CONCLUSIONS

This paper suggests multiple diagnostic feature vectors based on carotid artery and HRV analysis for the purpose of more accurate prediction and early diagnosis of cardiovascular diseases, which have recently been growing at a rapid pace. Moreover, we performed experiments and evaluations to verify the reliability of the prediction system and to test the significance of the diagnostic feature vectors. According to the experimental results, 21 types of feature vectors are determined to be essential elements for disease diagnosis, and the ANN and SVM show excellent results as the appropriate classification/prediction algorithms. These complex diagnosis indicators would be useful for the automatic diagnosis of cardiovascular diseases in Korea.
ACKNOWLEDGMENT
This research was supported by Export Promotion
Technology Development Program, Ministry of
Agriculture, Food and Rural Affairs (No.114083-3) and
Basic Science Research Program through the National
Research Foundation of Korea (NRF) funded by the
Ministry of Science, ICT & Future Planning
(No.2013R1A2A2A01068923).
REFERENCES

[1] WHO reports, "The top 10 causes of death," Retrieved Apr. 1, 2015, from http://who.int/.
[2] Korea National Statistical Office, "Statistics of causes of death," Retrieved Apr. 1, 2015, from http://www.kosis.kr/.
[3] J. H. Bae, K. B. Seung, H. O. Jung, et al., "Analysis of Korean carotid intima-media thickness in Korean healthy subjects and patients with risk factors," Journal of Korean Circulation, vol. 35, 2005, pp. 513-524.
[4] K. S. Cheng, D. P. Mikhailidis, G. Hamilton, and A. Seifalian, "A review of the carotid and femoral intima-media thickness as an indicator of the presence of peripheral vascular disease and cardiovascular risk factors," Cardiovascular Research, vol. 54, 2002, pp. 528-538.
[5] H. ChuDuc, K. NguyenPhan, D. NguyenViet, "A Review of Heart Rate Variability and its Applications," APCBEE Procedia, vol. 7, 2013, pp. 80-85.
[6] J. H. Bae, W. S. Kim, C. S. Rihal, A. Lerman, "Individual measurement and significance of carotid intima, media, and intima-media thickness by B-mode ultrasonographic image processing," Arteriosclerosis, Thrombosis, and Vascular Biology, vol. 26, 2006, pp. 2380-2385.
[7] M. Piao, H. Lee, G. C. Pok, and K. H. Ryu, "A data mining approach for dyslipidemia disease prediction using carotid arterial feature vectors," IEEE Computer Engineering and Technology (ICCET 2010), vol. 2, 2010, pp. 171-175.
[8] W. J. Tompkins and E. M. O'Brien, "Biomedical digital signal processing," Annals of Biomedical Engineering, vol. 23, 1995, p. 526.
[9] H. G. Lee, W. S. Kim, K. Y. Noh, J. H. Shin, et al., "Coronary artery disease prediction method using linear and nonlinear feature of heart rate variability in three recumbent postures," Information Systems Frontiers, vol. 11, 2009, pp. 419-431.
[10] F. Takens, "Detecting strange attractors in turbulence," Lecture Notes in Mathematics, vol. 898, 1981, pp. 366-381.
[11] G. Bhanot, G. Alexe, B. Venkataraghavan, and A. Levine, "A robust meta-classification strategy for cancer detection from MS data," Proteomics, vol. 6, 2006, pp. 592-604.
[12] I. H. Witten, E. Frank, G. Holmes, M. Mayo, et al., "Data Mining Software in Java, Weka Machine Learning Project," Available: http://www.cs.waikato.ac.nz/~ml/weka/index.html, 2005.
Image Segmentation in Medical Data: A Survey
S. Mahalakshmi¹, T. Velmurugan²
¹Research Scholar, ²Associate Professor
PG and Research Department of Computer Science, D. G Vaishnav College, Arumbakkam, Chennai, India
E-Mail: [email protected], [email protected]
Abstract - Image processing is an active research area, in which the processing of medical images is a highly challenging field. Image segmentation plays a significant role in image processing, as it helps in the extraction of suspicious regions from medical images. The role of image segmentation is very efficient and effective in the medical domain; in particular, the analysis of various kinds of diseases and illnesses through medical images makes use of image segmentation concepts. The watershed transform is a popular segmentation method coming from the field of mathematical morphology. This research work explores the prediction of diseases through MR images by image segmentation methods and also the use of the watershed algorithm in the same field.

Keywords - Medical Images, Image Segmentation, Segmentation Algorithms, Watershed Transform Algorithm.
I. INTRODUCTION
Image Processing (IP) is a rapidly evolving field with growing applications in science and engineering. IP uses computers to perform processing on digital images. The processing helps in maximizing the clarity, sharpness, and detail of object features for further analysis. The digital image is fed into a computer, and the computer is programmed to manipulate the medical data using an equation, or a series of equations, and then store the results of the computation for each pixel (picture element). Digital image processing is the use of algorithms and procedures for operations such as image enhancement, image compression, image segmentation, image analysis, mapping, and geological referencing. The influence and impact of digital images on modern society is tremendous, and they are considered a critical component in a wide range of areas including pattern recognition, computer vision, industrial automation, and the healthcare industries.

Image segmentation is a fundamental step in many image, video, and computer vision applications. It is necessary to extract various features of the images, which can be merged or split in order to build objects of interest on which analysis and interpretation can be performed. Digital image processing has a broad spectrum of applications, such as remote sensing via satellites and other spacecraft, image transmission and storage for business applications, medical processing, radar, sonar, and acoustic image processing, robotics, and automated machines. The rapid progress in computerized medical image reconstruction, and the associated developments in analysis methods and computer-aided diagnosis, have propelled medical image processing into one of the most important sub-fields of medical imaging.

This research work carries out a survey on medical image segmentation using watershed algorithms in image processing. Also, this paper surveys the various techniques and algorithms for medical image segmentation developed by different researchers. This research paper is organized as follows. Section II discusses image segmentation and its application in the medical domain. Other image segmentation methods are explored in Section III. Section IV deliberates on the watershed algorithm in image segmentation for medical images. Section V concludes the research work.
II. IMAGE SEGMENTATION AND ITS APPLICATION IN
MEDICAL DIAGNOSIS
Image segmentation refers to the process of partitioning an image into groups of pixels which are homogeneous with respect to some criterion. Different groups must not intersect with each other, and adjacent groups must be heterogeneous. Segmentation algorithms are area-oriented instead of pixel-oriented, and the result of segmentation is the splitting up of the image into connected areas. Thus segmentation is concerned with dividing an image into meaningful regions. Image segmentation can be broadly divided into two types: local segmentation and global segmentation. Local segmentation deals with segmenting sub-images, which are small windows on a whole image; the number of pixels available to local segmentation is much lower than in global segmentation. Global segmentation is concerned with segmenting a whole image and deals mostly with segments consisting of a relatively large number of pixels. Image segmentation can also be categorized from three different philosophical perspectives: the region approach, the boundary approach, and the edge approach. These approaches are efficiently used for the segmentation of medical images.
Image segmentation is used to detect cancerous
cells from medical images. Analyzing medical images
for the purpose of computer-aided diagnosis (CAD) and therapy planning makes segmentation a preliminary stage for visualization and quantification. For medical CT and MR images, many methods have recently been employed for segmentation [16].

I. N. Manousakas et al. carried out a research work in magnetic resonance imaging (MRI) of the human brain through split-and-merge (SM) techniques in image segmentation. Edge-based segmentation is used in SM, and the methods are extended to 3D images and quantitatively compared with their 2D counterparts in their research work. Their method reduces the number of regions by 22%, and further reduction can be achieved by boundary elimination [14]. Freixenet et al. discussed different segmentation proposals which integrate edge and region information, and highlighted 7 different strategies and methods to fuse image information. In contrast with other surveys, which only describe and compare different approaches qualitatively, this research deals with a real quantitative comparison on real images [4].
The research work "Two-Stage Neural Network for Volume Segmentation of Medical Images" describes a new system in which feature extraction and unsupervised clustering are presented for CT/MRI brain slices. It uses two stages of neural networks, one being SOPCA and the other SOFM. The first stage is a self-organizing principal components analysis (SOPCA) network that is used to project the feature vector onto its leading principal axes, found by using principal components analysis; this step provides an effective basis for feature extraction. The second stage consists of a self-organizing feature map (SOFM), which automatically clusters the input vector into different regions [1].
Zhengrong et al. carried out a research work on an EM framework for segmenting tissue mixtures from medical images; by segmenting the tissue mixtures, they can diagnose the problem more easily [12]. "Segmentation in Medical Images" by Xiaolan Zeng et al. discusses various medical applications such as region growing with anatomical information and vessel segmentation. Digital acquisition systems for creating digital images include digital X-ray radiography, computed tomography ("CT") imaging, magnetic resonance imaging ("MRI"), and nuclear medicine imaging techniques such as positron emission tomography ("PET") and single photon emission computed tomography ("SPECT") [23].
"Segmentation of Medical Images Using a Genetic Algorithm" [5] discusses automating the segmenting curve for the prostate in 2D pelvic CT images. The genetic algorithm discussed is divided into two stages, a training stage and a segmentation stage. In the segmentation stage, a procedure is carried out to recognize the shape and texture of the objects in the images.
The automated segmentation of vessels in color images of the retina is the main focus of the research work "Ridge-Based Vessel Segmentation in Color Images of the Retina" [19]. Segmentation methods for images of the retina can be divided into two groups: the first group consists of rule-based methods and comprises vessel tracking, while the second group consists of supervised methods, which require manually labeled training images.
G. Castellano, L. Bonilha, L. M. Li, and F. Cendes carried out their research work on the texture analysis of medical images [3]. The analysis of texture in medical images is an ongoing field of research, with applications ranging from the segmentation of anatomical structures to the detection of problems in the image. The research paper uses radiological images and groups the mathematical computations performed with the data in the images. They reviewed some of the previously studied techniques.
III. OTHER TECHNIQUES FOR IMAGE SEGMENTATION
Zhen Ma et al. carried out a research work reviewing the current segmentation algorithms for medical images. They reviewed the current usage of medical images and segmentation algorithms. The algorithms are classified into three categories according to their main underlying concept: the first based on thresholds, the second based on pattern recognition techniques, and the third based on deformable models. They discuss each category in detail and present some experimental results [13].
A survey of current methods in medical image segmentation was carried out by Dzung L. Pham, Chenyang Xu, and Jerry L. Prince. The researchers discuss the current segmentation approaches, with emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications. The use of image segmentation in different imaging modalities is also described, along with the difficulties encountered in each modality. This survey paper concludes with the different methods and their implementations, together with experimental results [15].
"A Shape-Based Approach to the Segmentation of Medical Imagery Using Level Sets" was carried out by Andy Tsai et al. The researchers propose a shape-based approach to curve evolution for segmenting medical images containing known object types. The parameters of this representation are then manipulated to minimize an objective function for segmentation. The resulting algorithm is able to handle multidimensional data, is robust to noise and initial contour placement, and is computationally efficient [20].
John Ashburner and Karl J. Friston carried out a research work titled "Unified Segmentation". This work is fully based on the segmentation of brain images. The purpose of the paper is to unify the procedures into a single probabilistic framework. Automatic selection of representative voxels can be achieved by first registering the brain volume to some standard space; in modeling the intensity distributions, a mixture of Gaussians and a related approach are used, with tissue probability maps weighting the classification according to Bayes' rule [2].
"Wavelets in Medical Image Processing: Denoising, Segmentation, and Registration" was carried out by the researchers Yinpeng Jin, Elsa Angelini, and Andrew Laine. The wavelet transform itself offers great design flexibility: basis selection, spatial-frequency tiling, and various wavelet thresholding strategies can be optimized for the best adaptation to a processing application, the data characteristics, and the features of interest. Fast implementation of wavelet transforms using a filter-bank framework enables real-time processing capability. Instead of trying to replace standard image processing techniques, wavelet transforms offer an efficient representation of the signal, finely tuned to its intrinsic properties. By combining such representations with simple processing techniques in the transform domain, multi-scale analysis can accomplish remarkable performance and efficiency for many image processing problems [9].
"Bregman Iterative Algorithms for ℓ1-Minimization with Applications to Compressed Sensing" was carried out by Wotao Yin et al. in [22]. They proposed simple and extremely efficient methods for solving the basis pursuit problem, which is used in compressed sensing. They show analytically that this iterative approach yields exact solutions in a finite number of steps, and present numerical results demonstrating that as few as two to six iterations are sufficient in most cases. The approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and A^T can be computed by fast transforms. Utilizing a fast fixed-point continuation solver based solely on such operations for solving the unconstrained subproblem, they were able to quickly solve huge instances of compressed sensing problems on a standard PC.

The discovery that certain types of constrained problems can be exactly solved by iteratively solving a sequence of unconstrained subproblems generated by a Bregman iterative regularization scheme is new. They extend this result in several ways, yielding even simpler iterations (5.19) and (5.20), and hope that this discovery and its extensions will lead to efficient algorithms for even broader classes of problems [22].
Rohini Paul Joseph, C. Senthil Singh, and M. Manikandan carried out their research work on brain tumor MRI image segmentation and detection in image processing [10]. They proposed an algorithm for the segmentation of brain images: segmentation of the brain MRI image using the k-means clustering algorithm followed by morphological filtering. The filtering is used to avoid the mis-clustered regions that can inevitably be formed after segmentation of the brain MRI image for detection of the tumor location.
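A minimal sketch of that pipeline: k-means on pixel intensities, then binary opening as the morphological filter. The random brain_slice array and the choice of k = 4 are placeholders, not values taken from [10].

import numpy as np
from scipy import ndimage as ndi
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
brain_slice = rng.random((128, 128))          # placeholder for an MRI slice

k = 4                                         # assumed number of tissue classes
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
    brain_slice.reshape(-1, 1)).reshape(brain_slice.shape)

# Keep the brightest cluster as the tumor candidate, then clean up
# mis-clustered specks with binary opening (the morphological filter).
tumor = labels == np.argmax([brain_slice[labels == i].mean() for i in range(k)])
tumor_clean = ndi.binary_opening(tumor, structure=np.ones((3, 3)))
print(tumor_clean.sum(), "candidate tumor pixels")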
"Medical image segmentation on GPUs - A comprehensive review" is a research work carried out by Erik Smistad et al. They review the most common medical image segmentation algorithms for graphics processing units (GPUs). Through this comparison, it is shown that most segmentation methods are data-parallel with a high number of threads, which makes them well suited for GPU acceleration. They discuss many segmentation techniques in the medical field with parallel data and present the experimental results [18].
A. R. Kavitha and S. Rekha carried out their research work on image segmentation for MRI brain medical images. They proposed an efficient algorithm combining the watershed transform and thresholding with a multilayer perceptron (CWTMP), together with an image segmentation technique, to segment the tumor portion in a given MRI medical image. This newly proposed method consists of preprocessing, segmentation, classification, and performance evaluation. Preprocessing is done with Gaussian smoothing, an improved watershed method is applied for the segmentation process, and a multilayer perceptron neural network (CWTMP) classification method is used for classification. Validation of the CWTMP is done both quantitatively and qualitatively using performance metrics such as the peak signal-to-noise ratio [11].
"Watershed-based Segmentation of 3D MR Data for Volume Quantization" is discussed by Sijbers J. et al. in their research work [17]. The purpose of this research work is the development of a semiautomatic segmentation technique for efficient and accurate volume quantization of Magnetic Resonance (MR) data. Over-segmentation is reduced by properly merging small volume primitives with similar gray-level distributions. The outcome of the preceding image processing steps is presented to the user for manual segmentation; after the manual segmentation, the subsequent slices are automatically segmented by extrapolation. The proposed segmentation technique was tested on phantom objects, where segmentation errors of less than 2% were observed.
This section discussed many algorithms and techniques for image segmentation in medical images. Among these, the k-means and watershed transform methods play a vital role and solve many complex problems in medical imaging.
IV. REVIEW ON WATERSHED TRANSFORM IN MEDICAL IMAGING
The watershed transform (WST) is a popular segmentation method originating from the field of mathematical morphology. The WST has been widely used in many fields of image processing, including medical image segmentation, due to the number of advantages that it possesses: it is a simple, intuitive method, and it is fast and can be parallelized. The main advantage of the WST is that the detected contours always correspond to the most significant edges between the markers, so the technique is not affected by lower-contrast edges, due to noise, that could produce local minima and thus erroneous results in energy minimization methods. Even if there are no strong edges between the markers, the WST always detects a contour in the area; this contour will be located on the pixels with the highest contrast [6].
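A marker-based watershed sketch using scikit-image (assumed available). The smoothed random image and the quantile-based seeds are placeholders; the flooding runs on the gradient magnitude, so contours settle on the most significant edges between the markers.

import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

rng = np.random.default_rng(0)
image = ndi.gaussian_filter(rng.random((128, 128)), sigma=4)  # placeholder image

gradient = sobel(image)                        # relief to be flooded

markers = np.zeros_like(image, dtype=np.int32)
markers[image < np.quantile(image, 0.05)] = 1  # background seeds
markers[image > np.quantile(image, 0.95)] = 2  # object seeds

labels = watershed(gradient, markers)          # basins grow from the seeds
print(np.unique(labels))                       # every pixel ends up labeled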
Watersheds are one of the classics in the field of topography. Everyone has heard, for example, about the Great Divide, the particular line which separates the U.S.A. into two regions. A drop of water falling on one side of this line flows down until it reaches the Atlantic Ocean, whereas a drop falling on the other side flows down to the Pacific Ocean. The two regions it separates are called the catchment basins of the Atlantic and the Pacific Oceans, respectively, and the two oceans are the minima associated with these catchment basins.
"The Skull Stripping Problem in MRI Solved by a Single 3D Watershed Transform" is a method by Horst K. Hahn and Heinz-Otto Peitgen [7]. In this paper, neither preprocessing of the MRI data nor refinement is required. The watershed algorithm has been modified by the concept of pre-flooding, which helps to prevent the image from being over-segmented. They use magnetic resonance (MR) brain images and remove the non-cerebral tissue. The modified watershed transform method is a powerful tool for segmenting the whole brain from MRI datasets. In this algorithm they include voxel-basin merging and basin-basin merging instead of preprocessing the MRI data of the brain. The described algorithm is able to successfully segment the whole brain in all 133 datasets, without any preprocessing. In comparison to a manual segmentation estimate, the sensitivity is higher than 96%, and the differences, less than 4%, lie mainly in the dark-intensity region of the brain boundary. The watershed algorithm provides the basis of a brain segmentation procedure that increases reliability, efficiency, and reproducibility in the field of neuroimaging.
Kostas Haris, Serafim N. Efstratiadis, Nicos Maglaveras, and Aggelos K. Katsaggelos carried out a research work on hybrid image segmentation using watersheds and fast region merging [8]. Their work combines edge- and region-based techniques through the morphological algorithm of watersheds. They use a preprocessing stage to compute the image gradient. The initial segmentation is the input to a highly efficient hierarchical region merging process that produces the final segmentation. The approach then uses the region adjacency graph (RAG) for the image regions. The speed of the algorithm is maintained by the so-called nearest neighbor graph, through which the priority queue size and processing time are drastically reduced. The output of the algorithm is the RAG of the final segmentation, from which closed, one-pixel-wide object contours may readily be extracted. The general framework of the overall approach is the combination of gradient- and region-based techniques. In addition, the RAG provides information about the spatial relationships between objects and can drive knowledge-based higher-level processes as a means of description and recognition.
A watershed algorithm based on immersion simulations was carried out by Luc Vincent and Pierre Soille [21]. They proposed an algorithm in which a grayscale image is processed and, using a queue of pixels, the flooding of water in the picture is efficiently simulated. The basic idea of the proposed algorithm is to sort the pixels in increasing order of gray-level values and then to perform a flooding step; in the first step, the pixels are scanned one by one in the predetermined order. They designed the algorithm taking into account the fact that only the values of a small number of pixels need to be modified at a time. Rather than scanning the entire image to modify one or two pixels, the algorithm has been designed to have direct access to these pixels. The image pixels are stored in a simple array satisfying two conditions:

1. Random access to the pixels;
2. Direct access to the neighbors of a given pixel.

As an application of this algorithm with regard to picture segmentation, the spinal cord image of a human being is extracted. They use a breadth-first scanning structure for the image, and a FIFO data structure is used to implement the algorithm [21].
"Improved Watershed Transform for Medical Image Segmentation Using Prior Information" was done by V. Grau et al. They present an original modification of the classical watershed transform which enables the introduction of prior knowledge about the objects. They introduce a method to combine atlas registration and the WST through the use of markers, and they applied the proposed algorithm in two challenging areas: knee cartilage and gray matter segmentation in MRI [6].
V. CONCLUSION

Image segmentation plays a key role in many medical-imaging applications by automating or facilitating the delineation of anatomical structures and other regions of interest. This research work presents a critical appraisal of the current methods and application fields for the segmentation of medical images. Terminology and important issues in image segmentation are first presented. The segmentation of medical images is then discussed by comparing various methods, and current segmentation approaches are reviewed with an emphasis on the advantages of these methods for medical imaging applications. A number of algorithms used in medical imaging for various diseases are examined in order to find the illnesses. Hence it is concluded that the Watershed Transform (WST) algorithm stamps its superiority in performance over the other algorithms.
References

[1] Ahmed, Mohamed N., and Aly A. Farag, "Two-Stage Neural Network for Volume Segmentation of Medical Images", Pattern Recognition Letters, Vol. 18, Issue 11, 1997, pp. 1143-1151.
[2] Ashburner, John, and Karl J. Friston, "Unified Segmentation", Neuroimage, Vol. 26, Issue 3, 2005, pp. 839-851.
[3] Castellano, G., L. Bonilha, L. M. Li, and F. Cendes, "Texture Analysis of Medical Images", Clinical Radiology, Vol. 59, Issue 12, 2004, pp. 1061-1069.
[4] Freixenet, Jordi, Xavier Muñoz, David Raba, Joan Martí, and Xavier Cufí, "Yet Another Survey on Image Segmentation: Region and Boundary Information Integration", In Computer Vision - ECCV 2002, pp. 408-422.
[5] Ghosh, Payel, and Melanie Mitchell, "Segmentation of Medical Images Using a Genetic Algorithm", In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, 2006, pp. 1171-1178.
[6] Grau, Vicente, A. U. J. Mewes, M. Alcaniz, Ron Kikinis, and Simon K. Warfield, "Improved Watershed Transform for Medical Image Segmentation Using Prior Information", IEEE Transactions on Medical Imaging, Vol. 23, Issue 4, 2004, pp. 447-458.
[7] Hahn, Horst K., and Heinz-Otto Peitgen, "The Skull Stripping Problem in MRI Solved by a Single 3D Watershed Transform", In Medical Image Computing and Computer-Assisted Intervention - MICCAI, Springer Berlin Heidelberg, 2000, pp. 134-143.
[8] Haris, Kostas, Serafim N. Efstratiadis, Nikolaos Maglaveras, and Aggelos K. Katsaggelos, "Hybrid Image Segmentation Using Watersheds and Fast Region Merging", IEEE Transactions on Image Processing, Vol. 7, Issue 12, 1998, pp. 1684-1699.
[9] Jin, Yinpeng, Elsa Angelini, and Andrew Laine, "Wavelets in Medical Image Processing: Denoising, Segmentation and Registration", In Handbook of Biomedical Image Analysis, Springer US, 2005, pp. 305-358.
[10] Joseph, Rohini Paul, C. Senthil Singh, and M. Manikandan, "Brain Tumor MRI Image Segmentation and Detection in Image Processing", Int. J. Res. Eng. Technol., Vol. 3, Issue 1, 2014, pp. 1-5.
[11] Kavitha, A. R., and S. Rekha, "Brain Cancer Segmentation in MRI Medical Image Using Combined Watershed Algorithm and Thresholding with Multilayer Perceptron Neural Network", Vol. 2, Issue 1, 2014.
[12] Liang, Zhengrong, Xiang Li, Daria Eremina, and Lihong Li, "An EM Framework for Segmentation of Tissue Mixtures from Medical Images", In Engineering in Medicine and Biology Society, Proceedings of the 25th Annual International Conference of the IEEE, Vol. 1, 2003, pp. 682-685.
[13] Ma, Zhen, João Manuel R. S. Tavares, and Renato M. Natal Jorge, "A Review on the Current Segmentation Algorithms for Medical Images", In IMAGAPP 2009 - Proceedings of the First International Conference on Computer Imaging Theory and Applications, Lisboa, Portugal, 2009, pp. 135-140.
[14] Manousakas, I. N., P. E. Undrill, G. G. Cameron, and T. W. Redpath, "Split-and-Merge Segmentation of Magnetic Resonance Medical Images: Performance Evaluation and Extension to Three Dimensions", Computers and Biomedical Research, Vol. 31, Issue 6, 1998, pp. 393-412.
[15] Pham, Dzung L., Chenyang Xu, and Jerry L. Prince, "A Survey of Current Methods in Medical Image Segmentation", Technical Report, Johns Hopkins University, Baltimore, 1998.
[16] Pohle, Regina, and Klaus D. Toennies, "Segmentation of Medical Images Using Adaptive Region Growing", In Medical Imaging, International Society for Optics and Photonics, 2001, pp. 1337-1346.
[17] Sijbers, J., P. Scheunders, M. Verhoye, A. Van der Linden, D. Van Dyck, and E. Raman, "Watershed-Based Segmentation of 3D MR Data for Volume Quantization", Magnetic Resonance Imaging, Vol. 15, Issue 6, 1997, pp. 679-688.
[18] Smistad, Erik, Thomas L. Falch, Mohammadmehdi Bozorgi, Anne C. Elster, and Frank Lindseth, "Medical Image Segmentation on GPUs - A Comprehensive Review", Medical Image Analysis, Vol. 20, Issue 1, 2015, pp. 1-18.
[19] Staal, Joes, Michael D. Abràmoff, Meindert Niemeijer, Max A. Viergever, and Bram van Ginneken, "Ridge-Based Vessel Segmentation in Color Images of the Retina", IEEE Transactions on Medical Imaging, Vol. 23, Issue 4, 2004, pp. 501-509.
[20] Tsai, Andy, Anthony Yezzi Jr., William Wells, Clare Tempany, Dewey Tucker, Ayres Fan, W. Eric Grimson, and Alan Willsky, "A Shape-Based Approach to the Segmentation of Medical Imagery Using Level Sets", IEEE Transactions on Medical Imaging, Vol. 22, Issue 2, 2003, pp. 137-154.
[21] Vincent, Luc, and Pierre Soille, "Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, Issue 6, 1991, pp. 583-598.
[22] Yin, Wotao, Stanley Osher, Donald Goldfarb, and Jerome Darbon, "Bregman Iterative Algorithms for ℓ1-Minimization with Applications to Compressed Sensing", SIAM Journal on Imaging Sciences, Vol. 1, Issue 1, 2008, pp. 143-168.
[23] Zeng, Xiaolan, Wei Zhang, and Alexander C. Schneider, "Segmentation in Medical Images", U.S. Patent 7,336,809, issued February 26, 2008.
A Survey on Medical Images Extraction using Parallel
Algorithm in Data Mining
A. Naveen¹, T. Velmurugan²
¹Research Scholar, ²Associate Professor
PG and Research Department of Computer Science, D. G. Vaishnav College, Arumbakkam, Chennai, India
E-Mail: [email protected], [email protected]
Abstract - Image clustering gathers similar images into a group. Nowadays, a very large number of images is available in various data repositories, including the world wide web and other large repositories, and the use of such images in the current real world is very high. In particular, the medical field has a lot of images, which are used to predict different kinds of diseases; it is not possible to detect some types of diseases without using medical images. This research work analyses the use of clustering techniques in the domain of medical images. The purpose of this paper is to present an analysis of recent publications concerning medical images, using parallel algorithms in particular. This survey finds out some of the best algorithms among those discussed for the medical field.
Keywords - Parallel Algorithm, Medical Images, Image Clustering, Parallel k-Means Algorithm.
I. INTRODUCTION
Data Mining (DM) techniques are applicable to all domains according to the needs of the applications. Data mining is also referred to as Knowledge Discovery in Databases (KDD). Its major phases are preprocessing, analysis, and pattern generation. DM has matured more and more as a field of major research in information technology and computer science, and it has been widely applied in several other fields. DM can be implemented on various types of databases and information repositories, but the kinds of patterns to be found are specified by several data mining functionalities, such as class and concept description, association, correlation analysis, classification, prediction, and cluster analysis. It can be performed with various types of clustering and classification. Clustering is one of the most important subroutines in machine learning and data mining tasks.

A cluster is a set of objects grouped together because of their similarity or proximity. Parallel clustering algorithms can be applied to any application that uses clustering algorithms for efficient computing. This research work is aimed at surveying parallel algorithms applied to medical images using data mining techniques and applications. The k-Means algorithm is the most widely used algorithm in all domains; it is one of the simplest unsupervised learning algorithms that solve the well-known clustering problem. In k-Means, Euclidean distance computation is the most time-consuming process. The parallel k-Means algorithm is designed in such a way that each of the P participating nodes is responsible for handling n/P data points, where n is the number of data points and P is the number of processors.
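One iteration of that n/P scheme can be sketched with Python's multiprocessing: each worker handles its own slice of the data, computes Euclidean distances and partial per-cluster sums, and the master reduces them into new centroids. The data sizes are placeholders, and empty clusters are assumed not to occur.

import numpy as np
from multiprocessing import Pool

def partial_assign(args):
    """Distance computation and partial sums for one worker's n/P slice."""
    chunk, centroids = args
    d = np.linalg.norm(chunk[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)                 # nearest centroid per point
    k = len(centroids)
    sums = np.array([chunk[labels == j].sum(axis=0) for j in range(k)])
    counts = np.bincount(labels, minlength=k)
    return sums, counts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.random((10_000, 8))            # n placeholder feature vectors
    centroids = data[:3].copy()               # k = 3 initial centroids
    P = 4                                     # number of workers
    with Pool(P) as pool:
        parts = pool.map(partial_assign,
                         [(c, centroids) for c in np.array_split(data, P)])
    sums = sum(p[0] for p in parts)
    counts = sum(p[1] for p in parts)
    centroids = sums / counts[:, None]        # reduce step: updated centroids
    print(centroids)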
A medical image is the visualization of body parts, tissues, or organs for use in clinical diagnosis, treatment, and disease monitoring. Imaging techniques encompass the fields of radiology, nuclear medicine, optical imaging, and image-guided intervention. This research was initiated to enhance medical image extraction using parallel algorithms in clustering methods from data mining. This research work discusses the use of parallel algorithms on medical images using data mining techniques, as done by different researchers.
This research work is organized as follows. Section II discusses the applications of parallel algorithms in data mining, which are used for medical images. The medical image extraction techniques in data mining are illustrated in Section III. Section IV discusses medical image extraction using parallel algorithms. Finally, Section V concludes the survey work.
II. APPLICATION OF PARALLEL ALGORITHM
The application of parallel algorithms covers the basic concepts of data mining and its applications. A parallel algorithm is a set of rules that has been written in detail for execution on two or more processors. The data mining tools and their techniques are highlighted for medical images. Data mining is a large and diverse field with various techniques in the medical field for analyzing recent real-world problems; it converts raw data into useful information in various research areas of the medical field. There are various major data mining techniques that have been developed and used in data mining projects recently for knowledge discovery from databases.
A parallel algorithm is an algorithm that has been specifically written for execution on a computer with two or more processors. A parallel cluster object provides access to a cluster, which controls the work queue and distributes tasks to workers for execution.
86
International Conference on Information, System and Convergence Applications
June 24-27, 2015 in Kuala Lumpur, Malaysia
Zaki et al. discussed parallel algorithms for the discovery of association rules in their research work [25]. In this paper they described new parallel association mining algorithms. They proposed two clustering schemes based on equivalence classes and maximal hypergraph cliques, and studied two lattice traversal techniques based on bottom-up and hybrid search. They use a vertical database layout to cluster related transactions together, and the algorithms minimize I/O overheads by scanning the local database portion only twice. Using the above techniques they presented four new algorithms, implemented them on a 32-processor DEC cluster interconnected with the DEC Memory Channel network, and compared them against the well-known parallel algorithm Count Distribution, a straightforward parallelization of the Apriori algorithm. Experimental results indicate that a substantial performance improvement is obtained using their techniques.
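The vertical layout they exploit is easy to illustrate. Below is a minimal sketch (not Zaki et al.'s code): each item keeps a tid-list of the transactions containing it, and the support of an itemset is the size of the intersection of its tid-lists, so no horizontal database scan is needed.

    from functools import reduce

    def support(itemset, tidlists):
        # Support = number of transactions containing every item.
        return len(reduce(set.intersection,
                          (tidlists[i] for i in itemset)))

    tidlists = {'a': {1, 2, 3}, 'b': {2, 3, 4}, 'c': {3, 4}}
    print(support(('a', 'b'), tidlists))  # -> 2 (transactions 2 and 3)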
Efficient parallel data mining with the Apriori algorithm on FPGAs is explored by Zachary et al. in [5]. In this work, the basic concept of the Apriori algorithm is discussed. It is a computationally expensive algorithm whose running times can stretch to days for large databases, as database sizes can reach gigabytes and the computation requires multiple passes. The FPGA implementations of the Apriori algorithm designed in this article can provide significant performance improvement over software-based approaches; the authors observed strategies involving the interchange of the nested loops that improve performance in a complementary way.
Rahmani et al. discussed the clustering of image data using k-Means and Fuzzy k-Means in their research work [14]. Clustering is a major technique used for grouping numerical and image data in data mining and image processing applications. Clustering simplifies the job of image retrieval by finding images similar to a given medical image. Medical image data are grouped on the basis of features such as color, texture, shape and pixels. For efficiency and better results, the medical image data are segmented before applying the clustering algorithms. The k-Means and Fuzzy k-Means algorithms are very time saving and efficient. They concluded that Fuzzy k-Means is better than k-Means by many factors: it gives better results than the k-Means algorithm when the fuzzy factor is increased, and it takes less time to cluster the medical images than k-Means.
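For context on the fuzzy variant they evaluate, the following is a minimal sketch of the standard fuzzy c-means membership update (the general family the paper's Fuzzy k-Means belongs to, not the authors' exact code), with the fuzzifier m controlling the "fuzzy factor":

    import numpy as np

    def fuzzy_memberships(X, C, m=2.0):
        # u[i, j] = 1 / sum_k (d_ij / d_ik)^(2/(m-1)); rows sum to 1.
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        return inv / inv.sum(axis=1, keepdims=True)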
Another work was carried out by N. Senthilkumaran and R. Rajesh in [16]. One of the most important applications of image segmentation is edge detection; the process of partitioning a digital image into multiple regions or sets of pixels is called image segmentation. Their main objective is to evaluate models of edge detection for image segmentation using soft computing approaches based on fuzzy logic, genetic algorithms and neural networks.
A research paper by Sona Baby et al. discussed a survey of data mining in medical diagnosis [21]. They combine the key points of neural networks with Large Memory Storage and Retrieval, and state that kNN, differential diagnosis and clinical decision support systems are used to get accurate results. They presented a summary of the various data mining techniques employed for medical data mining, along with the diseases they have classified. The main standard methods are association, classification, clustering and prediction.
A research work titled application of data mining techniques for medical image classification is carried out in [3]. The authors analyze tumor detection in digital mammography using two different data mining techniques, neural networks and association rule mining. The medical data are analyzed for anomaly detection, classification and clustering, and both techniques and their approaches perform well.
A fuzzy Hopfield neural network for medical image segmentation [10] is explored by Jzau-Sheng Lin et al. An unsupervised parallel segmentation approach using a fuzzy Hopfield neural network is proposed in this work. The main purpose is to embed fuzzy clustering into neural networks so that on-line learning and parallel implementation for medical image segmentation are feasible. Their idea is to cast the clustering problem as a minimization problem where the criterion for the optimum segmentation is the minimization of the Euclidean distance between samples and class centers. The suggested fuzzy c-means clustering strategy has also been proven to be convergent and to allow the network to learn more effectively than the conventional Hopfield neural network. The fuzzy Hopfield neural network based on the within-class scatter matrix shows promising results in comparison with the hard c-means method.
Aastha Joshi and Rajneet Kaur carried out a research work in a review paper titled "A Review: Comparative Study of Various Clustering Techniques in Data Mining" [1]. Clustering finds a structure in a collection of unlabeled data. They review clustering techniques such as k-Means clustering, hierarchical clustering, DBSCAN, OPTICS and STING. The k-Means algorithm has the big advantage of clustering large data sets, and its performance increases as the number of clusters increases. The performance of the k-Means algorithm is better than that of the hierarchical clustering algorithm. Density based methods such as OPTICS and DBSCAN are designed to find clusters of arbitrary shape, whereas partitioning and hierarchical methods are designed to find spherical clusters. Density based methods typically consider exclusive clusters only and do not consider fuzzy clusters. Moreover, STING is a query-independent approach, since the statistical information exists independently of queries. The representation of the data in each grid cell, which can be used to facilitate answering a large class of queries, supports parallel processing and incremental updating and hence fast processing.
A comparative study of various clustering algorithms in data mining is done by Verma et al. in [11]. Clustering methods such as k-Means clustering, hierarchical clustering, DBSCAN, density based clustering, OPTICS and the EM algorithm are analyzed very effectively, and the performance of these techniques is presented and compared. All the algorithms show some ambiguity on noisy data. The quality of the EM and k-Means algorithms becomes very good on huge datasets, while DBSCAN and OPTICS do not perform well on small datasets. The k-Means algorithm is faster than the other clustering algorithms and also produces quality clusters on huge datasets. The hierarchical clustering algorithm is more sensitive to noisy data.
II. MEDICAL IMAGE EXTRACTION IN DATA MINING

Medical imaging is the method, development and art of creating graphical representations of the interior of a body for clinical analysis and medical intervention. Medical imaging seeks to reveal internal structures hidden by the skin and bones, as well as to diagnose and treat disease. It refers to a number of techniques that can be used as non-invasive methods of looking inside the body; the body does not have to be opened up surgically for medical practitioners to look at various organs and areas. It can be used to assist diagnosis or treatment of different medical conditions.

Masroor Ahmed et al. give an idea of the segmentation of brain MR images for tumor extraction by combining k-Means clustering and the Perona-Malik anisotropic diffusion model [2]. They describe an efficient method for automatic brain tumor segmentation for the extraction of tumor tissues from MR images. It combines the Perona and Malik anisotropic diffusion model for image enhancement and the k-Means clustering technique for grouping tissues belonging to a specific group. The proposed system is efficient and less error sensitive. The results of the unsupervised segmentation method are better than those of supervised segmentation methods, because a supervised segmentation method needs a lot of pre-processing. The use of the k-Means clustering method is fairly simple when compared with other algorithms.
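Since the Perona-Malik model is central to [2], a minimal sketch of its diffusion update may help (an illustrative NumPy version, not the authors' implementation):

    import numpy as np

    def perona_malik(img, iters=10, kappa=15.0, lam=0.2):
        # Edge-preserving smoothing: the conductance g -> 0 where the
        # local gradient is large, so edges diffuse little.
        # Periodic borders are used here purely for brevity.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u = img.astype(float)
        for _ in range(iters):
            dn = np.roll(u, 1, axis=0) - u    # north difference
            ds = np.roll(u, -1, axis=0) - u   # south
            de = np.roll(u, -1, axis=1) - u   # east
            dw = np.roll(u, 1, axis=1) - u    # west
            u += lam * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)
        return u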
Associative classifiers for medical images are analyzed in a research work in [4]. The proposed classification system for medical images based on association rule mining consists of a pre-processing phase, a phase for mining the resulting transactional database, and a final phase that organizes the resulting association rules into a classification model. This research shows how important the data cleaning phase is in building an accurate data mining architecture for image classification.

Another work was carried out by Shi Tingna et al. in [18]. A grid-based k-Means algorithm is proposed for image segmentation in this work, and its advantages over the existing k-Means algorithm have been validated on some benchmark datasets. They analyze the basic characteristics of the algorithm and propose a general index based on maximizing the grey differences between investigated object grays and background grays. Without any additional condition, the proposed index is robust in identifying an optimal number of pixels. The g-k-Means algorithm has not only linear computational complexity but also a simple and effective operation. The advantage of the g-k-Means algorithm over the existing k-Means algorithm has been demonstrated by its fast convergence and clustering performance.

Subasini and Nirase Fathima Abubacker carried out an article titled analysis of classifier to improve medical diagnosis for breast cancer detection using data mining techniques [23]. They discussed various data mining approaches that have been utilized for breast cancer diagnosis and prognosis, and they explore the applicability of association rule mining to predict the presence of breast cancer. The article also analyzes the performance of conventional supervised learning algorithms, viz. C5.0, ID3, APRIORI, C4.5 and Naive Bayes. Experimental results show that C5.0 is the best one, with the highest accuracy.

A survey of GPU-based medical image computing techniques is presented in a research work in [19]; its major purpose is to provide comprehensive reference material for starters and researchers involved in GPU-based medical image processing. The continuous advancement of GPU computing is reviewed, and the existing traditional applications in several areas of medical image processing, namely clustering, segmentation, registration and visualization, are compared. In medical and clinical applications, medical images from similar or different modalities often need to be aligned with a reference image as a preprocessing scheme for many further procedures, for instance atlas-based segmentation, cluster identification and visualization tasks.

A research work titled hybrid medical image classification using association rule mining with decision tree algorithm is discussed in [15]. In this work, image mining approaches are combined in a hybrid manner. The frequent patterns in CT scan images are generated by a frequent pattern tree algorithm that mines the association rules, and the decision tree method is then used to classify the medical images for diagnosis. They state that the system makes the classification process more accurate. The hybrid method improves the efficiency
of the proposed method compared with traditional image mining methods. The objects extracted using the Canny edge detection technique provide better results compared with the conventional method. The proposed hybrid approach of association rule mining and the decision tree algorithm classifies brain tumor cells in an efficient way and has been found to perform well compared with the existing classifiers: an accuracy of 95% and a sensitivity of 97% were found in the classification of brain tumors. The developed brain tumor classification system is expected to provide valuable diagnosis techniques for physicians.

Erik Smistad et al. discuss medical image segmentation on GPUs in a comprehensive review [20], covering the segmentation of anatomical structures in medical image formats such as computed tomography, magnetic resonance imaging and ultrasound, the supporting technology for medical applications such as diagnostics, planning and guidance. The best segmentation methods are computationally expensive, and the volume of medical imaging data is rising. Graphics processing units can solve large data-parallel problems at a higher speed than the traditional CPU, while being more affordable and energy efficient than distributed systems. Using a GPU enables concurrent visualization and interactive segmentation algorithms to achieve satisfactory results, although factors such as synchronization, branch divergence and memory usage can limit the speedup. They discussed the most common medical image segmentation algorithms and showed through comparison that most segmentation methods are data parallel with a high number of threads, which makes them well suited for GPU acceleration; the impact of the limiting factors and several GPU optimization techniques are also discussed.

A survey of current methods in medical image segmentation is discussed in [12]. Image segmentation in different imaging modalities is described, along with the difficulties encountered in each modality. The researchers expect that research in the segmentation of medical images will strive towards improving the accuracy, precision and computational speed of segmentation methods, as well as reducing the amount of manual interaction. Accuracy and precision can be improved by incorporating prior information from atlases and by combining discrete and continuous-based segmentation methods.

III. MEDICAL IMAGE EXTRACTION IN PARALLEL ALGORITHMS

This section discusses various methods of medical image extraction using parallel algorithms, that is, algorithms written for execution on a computer with multiple processors that controls the work queue and distributes the tasks to workers. A parallel clustering algorithm with MPI, M k-Means, is discussed in [26]. The most well-known clustering algorithm is k-Means, because of its easy implementation, simplicity, efficiency and empirical success, and the proposed algorithm enables applying k-Means effectively in a parallel environment. Their study demonstrates that M k-Means is relatively stable and portable and that it performs with a low time overhead on large volumes of data; experimental results show that it is efficient in clustering large data sets, with the clustering performance varying with the number of processes.
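For a concrete picture of MPI-style parallel k-Means, the following is a minimal sketch under stated assumptions (not the authors' M k-Means code; assumes mpi4py and data already split evenly across processes): each process assigns its local points, and per-cluster partial sums are combined with Allreduce before the centroids are recomputed.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD

    def kmeans_step(local_X, centroids):
        # Local assignment by Euclidean distance.
        k, dim = centroids.shape
        d = np.linalg.norm(local_X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        sums, counts = np.zeros((k, dim)), np.zeros(k)
        for j in range(k):
            mask = labels == j
            sums[j], counts[j] = local_X[mask].sum(axis=0), mask.sum()
        # Combine partial statistics from every process.
        tot_sums, tot_counts = np.empty_like(sums), np.empty_like(counts)
        comm.Allreduce(sums, tot_sums, op=MPI.SUM)
        comm.Allreduce(counts, tot_counts, op=MPI.SUM)
        return tot_sums / np.maximum(tot_counts, 1)[:, None]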
A lightweight method to parallel k-Means clustering by Kittisak Kerdprasop and Nittaya Kerdprasop is discussed in [9]. They propose a parallel method as well as an approximation scheme for k-Means clustering. The parallelism is implemented through the message passing model using the concurrent functional language Erlang, and the experimental results show the speedup in the computation of parallel k-Means. They design and implement two parallel algorithms, PKM and APKM. The PKM algorithm parallelizes the k-Means method by partitioning the data into equal sizes and sending them to processes that run the distance computation concurrently; the parallel programming model used in the implementation is based on the message passing scheme. The APKM algorithm is an approximation method for parallel k-Means designed for streaming data applications. The parallel method considerably speeds up the computation time, especially when tested with multi-core processors, and the approximation scheme also produces acceptable results in a short running time.

Wenbin Fang et al. discuss parallel data mining on graphics processors in their research work [7]. They provide a visualization module to help users observe and interact with the mining process online, and they have implemented the k-Means clustering and Apriori frequent pattern mining algorithms in GPUMiner. The results show significant speedups over state-of-the-art CPU implementations on a PC with a G80 GPU and a quad-core CPU. The number of clusters can be observed in the visualization, which improves the convergence speed of the algorithm.

Another work is carried out by Imran Qureshi et al. in [13]. Parallel and distributed environments for generating parallel and distributed association rule mining, and for creating clusters from datasets through distributed and shared memory based systems, are explored in this work. They addressed the main issue of workload balancing, which arises from the active nature of association rule mining with static task scheduling mechanisms, by focusing on minimizing the data dependence across processes in multiprocessor algorithms based on a parallel computing environment. They have to work more on the parallel environment, where they have to implement the branch penalty for all the algorithms, and they also
have to work on cross cutting issues while generating
association rule mining.
Srinivas K. et al. discussed a scientific approach for segmentation and clustering techniques of improved k-Means and neural networks in [22]. They apply neural network segmentation, which relies on processing small areas of an image using an artificial neural network or a set of neural networks; after such processing, the decision-making mechanism marks the areas of the image according to the category recognized by the neural network. The k-Means cluster algorithm that uses both of the proposed updating methods gives better segmentation results for both bio-medical and natural image processing. Due to the strong correlation between good clustering and the overall RBF performance, both of the proposed updating methods provide significantly better overall performance than the other three updating methods that were considered.
Sanpawat Kantabutra et al. in [8] discuss an improvement by a factor of O(K/2), where K is the number of desired clusters, obtained by applying theories of parallel computing to the algorithm. In addition to the time improvement, the parallel version of the k-Means algorithm also enables the algorithm to run on the larger collective memory of multiple machines when the memory of a single machine is insufficient to solve the problem. They evaluate the performance of the parallel k-Means algorithm against the serial algorithm by measuring the speedup on datasets. The domain decomposition opens the possibility of applying divide-and-conquer strategies to parallelize the algorithm for better speedup.
A novel approach to medical image segmentation is discussed in [17]. In this research article, a modified k-Means clustering algorithm, called Fast SQL k-Means, is proposed using the power of the database environment. In k-Means, Euclidean distance computation is the most time consuming process; here it is computed within a single database and with no joins. This method takes less than 10 seconds to cluster an image of size 400x250 (100K pixels), whereas the running time of direct k-Means is around 900 seconds. Since the entire processing is done within the database, the additional overhead of importing and exporting data is not required. The 2D echo images were acquired from a local cardiology hospital for conducting the experiments, and the proposed algorithm was tested on a number of echo images in apical four chamber, long-axis and short-axis views. They compared the direct k-Means implementation with the proposed algorithm; the pattern of the data and the number of clusters had almost no impact on the clustering time. Fast algorithms are required for the immediate analysis of echo images in ICUs, remote places and telemedicine. The challenge is that ultrasound images are prone to speckle noise, and segmented echo images carry gaps in the cardiac regions, which in turn cause difficulties in boundary tracing and in the selection of seed values for the k-Means.
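To illustrate the idea of pushing the distance computation into the database with no joins (a minimal sketch, not the authors' Fast SQL k-Means; it uses Python's sqlite3 and two fixed centroids for brevity):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE px(x REAL, y REAL)")
    con.executemany("INSERT INTO px VALUES (?, ?)",
                    [(1, 2), (8, 9), (2, 1)])
    # Nearest of two centroids, decided inside the database in one pass.
    c1, c2 = (1.0, 1.0), (9.0, 9.0)
    rows = con.execute(
        "SELECT x, y, CASE WHEN (x-?)*(x-?)+(y-?)*(y-?)"
        " <= (x-?)*(x-?)+(y-?)*(y-?) THEN 0 ELSE 1 END AS cluster FROM px",
        (c1[0], c1[0], c1[1], c1[1], c2[0], c2[0], c2[1], c2[1])).fetchall()
    print(rows)  # [(1.0, 2.0, 0), (8.0, 9.0, 1), (2.0, 1.0, 0)]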
A research work by Vijayalakshmi et al. [24] describes segmenting an MR brain image into K different tissue types, which include gray matter, white matter and CSF, and possibly other abnormal tissues. The MR images considered can be either scale- or multi-valued, and each scale-valued image is modeled as a collection of regions with slowly varying intensity plus white Gaussian noise. The proposed algorithm is an adaptive k-Means clustering algorithm for three-dimensional and multi-valued images. The k-Means algorithm is a popular and widely applied clustering algorithm, but the standard algorithm, which selects k objects randomly from the population as initial centroids, cannot always give a good and stable clustering. Experimental results show that selecting centroids by their algorithm can lead to a better clustering. The improved k-Means algorithm presented in the paper is a solution for handling large scale data: it can select the initial clustering centers purposefully, reduce the sensitivity to isolated points, avoid dissevering big clusters, and overcome to some degree the skewing of the data caused by the disproportion in data partitioning owing to the adoption of multi-sampling.
Parallel implementation of k-Means on multi-core processors is explored by Fahim Ahmed M. [6]. He proposes a parallelization of the well-known k-Means clustering algorithm employing parallel for-loops in MATLAB, where a loop of n iterations runs on a cluster of m MATLAB workers simultaneously and each worker executes only n/m iterations of the loop. The experimental results demonstrate a considerable speedup of the proposed parallel k-Means clustering method run on a multicore/multiprocessor machine, compared with the serial k-Means approach. He presents the design and implementation of a parallel k-Means algorithm that parallelizes the k-Means method by using a parallel for-loop that runs the distance computations concurrently on the processors of a multi-core machine.
IV. CONCLUSION
Data clustering is now a common task applied in many application areas, such as grouping similar functional genomes, segmenting images that demonstrate the same pattern, partitioning web pages showing the same structure, and so on. The k-Means algorithm is the most well-known algorithm commonly used for clustering similar data. This research work addresses various methods, techniques and the performance of parallel algorithms in medical imaging. From the various researchers' perspectives, it is not possible to predict which is the best or the worst algorithm in the medical field; among the algorithms discussed in this work, it is concluded that the performance of the parallel k-Means algorithm is better than that of the other algorithms. In future work, MRI datasets will be applied to evaluate the performance of some parallel algorithms as well as some of the other algorithms.
REFERENCES

[1] Aastha Joshi and Rajneet Kaur, "A Review: Comparative Study of Various Clustering Techniques in Data Mining", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 3, Issue 3, 2013, pp. 55-57.
[2] Ahmed, M. Masroor, and Dzulkifli Bin Mohamad, "Segmentation of Brain MR Images for Tumor Extraction by Combining k-Means Clustering and Perona-Malik Anisotropic Diffusion Model", International Journal of Image Processing, Vol. 2, Issue 1, 2008, pp. 27-34.
[3] Antonie, Maria-Luiza, Osmar R. Zaiane, and Alexandru Coman, "Application of Data Mining Techniques for Medical Image Classification", Proceedings of the Second International Workshop on Multimedia Data Mining, 2001, pp. 94-101.
[4] Antonie, Maria-Luiza, Osmar R. Zaiane, and Alexandru Coman, "Associative Classifiers for Medical Images", Mining Multimedia and Complex Data, Springer Berlin Heidelberg, 2003, pp. 68-83.
[5] Baker, Zachary K., and Viktor K. Prasanna, "Efficient Parallel Data Mining with the Apriori Algorithm on FPGAs", Proceedings of the IEEE Symposium on Field-Programmable Custom Computing Machines, 2005, pp. 1-16.
[6] Fahim Ahmed, M., "Parallel Implementation of k-Means on Multi-Core Processors", Computer Science and Telecommunications, Vol. 13, Issue 41, 2014, pp. 53-61.
[7] Fang, Wenbin, Ka Keung Lau, Mian Lu, Xiangye Xiao, Chi Kit Lam, Philip Yang Yang, Bingsheng He, Qiong Luo, Pedro V. Sander, and Ke Yang, "Parallel Data Mining on Graphics Processors", Hong Kong University of Science and Technology, Tech. Rep. HKUST-CS08-07, Vol. 2, 2008.
[8] Kantabutra, Sanpawat, and Alva L. Couch, "Parallel k-Means Clustering Algorithm on NOWs", NECTEC Technical Journal, Vol. 1, Issue 6, 2000, pp. 243-247.
[9] Kerdprasop, Kittisak, and Nittaya Kerdprasop, "A Lightweight Method to Parallel k-Means Clustering", International Journal of Mathematics and Computers in Simulation, Vol. 4, Issue 4, 2010, pp. 144-153.
[10] Lin, Jzau-Sheng, Kuo-Sheng Cheng, and Chi-Wu Mao, "A Fuzzy Hopfield Neural Network for Medical Image Segmentation", IEEE Transactions on Nuclear Science, Vol. 43, Issue 4, 1996, pp. 2389-2398.
[11] Manish Verma, Mauly Srivastava, Neha Chack, Atul Kumar Diswar, and Nidhi Gupta, "A Comparative Study of Various Clustering Algorithms in Data Mining", International Journal of Engineering Research and Applications, Vol. 2, Issue 3, 2012, pp. 1379-1384.
[12] Pham, Dzung L., Chenyang Xu, and Jerry L. Prince, "Current Methods in Medical Image Segmentation", Annual Review of Biomedical Engineering, Vol. 2, Issue 1, 2000, pp. 315-337.
[13] Qureshi, Imran, Kanchi Suresh, Mohammed Ali Shaik, and G. Ramamurthy, "Designing Parallel and Distributed Algorithms for Data Mining and Unification of Association Rule", International Journal of Advances in Engineering Science and Technology, Vol. 3, Issue 3, 2014, pp. 157-163.
[14] Rahmani, Md Khalid Imam, Naina Pal, and Kamiya Arora, "Clustering of Image Data Using k-Means and Fuzzy k-Means", International Journal of Advanced Computer Science and Applications, Vol. 5, Issue 7, 2014, pp. 160-163.
[15] Rajendran, P., and Madheswaran, M., "Hybrid Medical Image Classification Using Association Rule Mining with Decision Tree Algorithm", Journal of Computing, Vol. 2, Issue 1, 2010, pp. 127-136.
[16] Senthilkumaran, N., and Rajesh, R., "Edge Detection Techniques for Image Segmentation - A Survey of Soft Computing Approaches", International Journal of Recent Trends in Engineering, Vol. 1, Issue 2, 2009, pp. 250-254.
[17] Shanmugam, Nandagopalan, Adiga B. Suryanarayana, S. Tsb, Dhanalakshmi Chandrashekar, and Cholenally Nanjappa Manjunath, "A Novel Approach to Medical Image Segmentation", Journal of Computer Science, Vol. 7, Issue 5, 2011, pp. 657-663.
[18] Shi Tingna, Penglong Wang, Jeenshing Wang, and Shihong Yue, "Application of Grid-Based k-Means Clustering Algorithm for Optimal Image Processing", Computer Science and Information Systems, Vol. 9, Issue 4, 2012, pp. 1679-1696.
[19] Shi, Lin, Wen Liu, Heye Zhang, Yongming Xie, and Defeng Wang, "A Survey of GPU-Based Medical Image Computing Techniques", Quantitative Imaging in Medicine and Surgery, Vol. 2, Issue 3, 2012, pp. 188-206.
[20] Smistad, Erik, Thomas L. Falch, Mohammadmehdi Bozorgi, Anne C. Elster, and Frank Lindseth, "Medical Image Segmentation on GPUs - A Comprehensive Review", Medical Image Analysis, Vol. 20, Issue 1, 2015, pp. 1-18.
[21] Sona Baby and Ariya T.K., "A Survey Paper of Data Mining in Medical Diagnosis", International Journal of Research in Computer and Communication Technology, Vol. 3, Issue 3, 2014, pp. 98-101.
[22] Srinivas, K., and Srikanth, V., "A Scientific Approach for Segmentation and Clustering Technique of Improved k-Means and Neural Networks", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 2, Issue 7, 2012, pp. 183-189.
[23] Subasini, A., Nirase Fathima Abubacker, and Rekha, "Analysis of Classifier to Improve Medical Diagnosis for Breast Cancer Detection Using Data Mining Techniques", International Journal of Advanced Networking and Applications, Vol. 5, Issue 6, 2014, pp. 2117-2122.
[24] Vijayalakshmi, P., Selvamani, K., and Geetha, M., "Segmentation of Brain MRI Using k-Means Clustering Algorithm", International Journal of Engineering Trends and Technology, Vol. 3, 2011, pp. 113-115.
[25] Zaki, Mohammed J., Srinivasan Parthasarathy, Mitsunori Ogihara, and Wei Li, "Parallel Algorithms for Discovery of Association Rules", Data Mining and Knowledge Discovery, Vol. 1, Issue 4, 1997, pp. 343-373.
[26] Zhang, Jing, Gongqing Wu, Xuegang Hu, Shiying Li, and Shuilong Hao, "A Parallel Clustering Algorithm with MPI - MKmeans", Journal of Computers, Vol. 8, Issue 1, 2013, pp. 10-17.
A Study of Arc Fault Temperature in Low Voltage Switchboard

1 Kuan Lee Choo, 2 Pang Jia Yew
1 Infrastructure University Kuala Lumpur, Selangor, Malaysia, [email protected]
2 Asia Pacific University, Selangor, Malaysia, [email protected]
Abstract—This paper presents an arc fault temperature detector that is used to detect an overheated condition in a low voltage switchboard prior to the occurrence of an arcing fault. The behavior and characteristics of arcing faults in low voltage switchboards are studied prior to the design of the arc fault temperature detector. The simulation results show that the proposed arc fault temperature detector is able to detect overheating enclosed in the switchboard and therefore reduce the possibility of arc occurrences.

Keywords- LM 335 temperature sensor; Arc Fault Temperature Detector; Low Voltage Switchboard.
I. INTRODUCTION

An arc is defined as a flow of electric current in a nonconductive/insulating medium such as air. Generally, an arc is an electrical discharge flowing between two electrodes through a gas or vapor [1]. An arcing fault is the flow of current through a higher impedance medium, typically the air, between phase conductors or between phase conductors and neutral, ground, or even a non-conducting medium [2]. There are various causes of arcs, such as loose connections, corroded connections, objects falling onto the bus bar, insulation failures, etc. [3]. Schneider Electric worldwide experts have concluded that the joint fault is the main reason for low voltage switchboard failure [4]; a joint fault is a series arc occurring at a joint.

A low voltage switchboard is an essential piece of equipment used to receive electricity from the utility company and distribute it to various loads [5]. The International Electrotechnical Commission (IEC) defines low voltage as any voltage in the range of 50-1000 VAC or 120-1500 VDC [6]. Technically, a low voltage switchboard is a panel with one or more low voltage switching, control, measuring, signaling, protective and other devices. An arc fault is not a short across a circuit; it is a high impedance fault with a fault current in the range of the rated current. Hence, circuit breakers or other protective devices cannot detect the existence of the arc fault and isolate it before serious damage occurs.

Overheating in a low voltage switchboard is not limited to the arc fault; it also arises from other causes such as overloads, harmonics and malfunction of the ventilation. Since arc faults can cause vast damage and fires, besides being hazardous to equipment and humans, arcing fault occurrences should be avoided and prevented, and arc faults should be detected and isolated prior to their occurrence. The aim of this paper is to design an arc fault temperature detector to protect not only human lives but also property and equipment. In addition, it reduces fires and explosions caused by arcing faults, prevents arcing fault occurrences and prevents the destructive effects of arcing faults.

This paper is organized as follows: Section II presents the behavior and characteristics of the arc fault. Section III subsequently describes the system description of the arc fault temperature detector. Section IV provides the experimental and simulation results of the arc fault temperature detector and, lastly, Section V concludes the findings of this paper.

II. BEHAVIOR AND CHARACTERISTIC OF ARC FAULT

An arc fault is the discharge of electricity through the air between two conductors, which creates huge quantities of heat and light [6]. It is a high resistance fault with a resistance similar to many loads, and it is a time varying resistor which can dissipate a large amount of heat in the switchboard [7].

Circuit breakers are tested by bolting a heavy metallic short across the output terminals to determine their capability of handling an essentially zero resistance load [7]. The zero resistance fault is named the bolted fault. The bolted fault current is the highest possible current supplied by the source [7], and a protective system is designed according to the value of the bolted fault current. The protective system must be able to detect the bolted fault, and the protective devices must be capable of interrupting this value of current [8].

Due to the high resistance of the fault, an arcing fault will result in much lower values of current. Thus, protective devices such as circuit breakers, fuses and relays, which are designed to operate for bolted faults, may not detect these lower values of current. As a result, the arcing fault will persist until severe burn-down damage occurs. The magnitude of the arc current is limited by the resistance of the arc and the impedance of the ground path [9].

Arc faults are categorized into series arc faults and parallel arc faults. Series arc faults happen when the current carrying paths in series with the loads are unintentionally broken, whereas parallel arc faults happen between two phases, phase to ground, or phase to neutral of the switchboard [10].

Large amounts of heat will be dissipated during an arc event. A portion of this heat is coupled directly into the conductors, a portion heats the air and another portion is radiated in various optical wavelengths [6]. Hasty heating of
the air and the expansion of the vaporized metal into gas produce a strong pressure wave which will blow off the covers of the switchboards and collapse the substations [6]. Arcing fault damage increases with the existence of busbar insulation.

Figure 1. Damage to the Side of a Switchboard versus Arc Current and Time [7]

Fig. 1 shows the time, current and damage for the 53 arcing tests [7]. When the circuit breakers trip within less than 0.25 second, the damage is limited to smoke damage [7]. The triangle markers represent arcs that left only smoke damage on the side of the switchboards, the square markers represent arcs that left surface damage, and the star markers represent holes of several square inches in the side of the switchboards [7].

When an arc is ignited, the plasma cloud expands cylindrically around the arc. The expansion of the plasma is constrained by the parallel bus, and thus the plasma expands more to the front and the back of the bus [7]. As the plasma reaches any obstruction, such as the switchboard, its expansion is retarded by the obstruction. Due to the lower velocity of the arc, the plasma becomes more concentrated and its temperature and current will increase [7].

The root of the arc, where the arc contacts the conductor, is reported to reach temperatures exceeding 20000 °C, whereas the plasma portion or positive column of the arc is around 13000 °C [11]. For reference, the surface of the sun is reported to be about 5000 °C. The components in the switchboard can only withstand this temperature for 250 milliseconds before sustaining severe damage [12].

Figure 2. Joint progressive loosening test assembly and results for the gripped joint [4]

Fig. 2 shows the change of temperature when the joint is loosening at different percentages of the rated tightening torque [4]. It can be observed from Fig. 2 that significant overheating only occurs when the joint loosens down to less than 1/8 of the rated torque. Fig. 2 also reveals that the temperature range just before the occurrence of an arc is from 30 °C to 90 °C. The temperature detector proposed in this paper is able to detect the temperature range from -40 °C to 100 °C.

III. SYSTEM DESCRIPTION OF ARC FAULT TEMPERATURE DETECTOR

The proposed design of an arc fault temperature detector consists of a temperature sensor, a buffer and a voltage comparator. The block diagram of the arc fault temperature detector is shown in Fig. 3.

Figure 3. Block Diagram of an Arc Fault Temperature Detector

The LM 335 temperature sensor is used to detect the presence of an arcing fault by sensing the temperature changes in the switchboard. The LM 335 has a breakdown voltage directly proportional to the absolute temperature, at +10 mV/K. The LM 335 is chosen because it is a precise, easily calibrated, integrated circuit temperature sensor. In addition, it has a linear output and it is cheaper than other types of temperature sensors. When calibrated at 25 °C, it typically has less than 1 °C of error over a 100 °C range. The temperature range of the LM 335 is -40 °C to 100 °C; in other words, the output voltage of this temperature sensor ranges from 2.33 V to 3.73 V. The calculations for the output voltage are shown below:

Output voltage at -40 °C = (-40 + 273) x 10 x 10^-3 V = 2.33 V    (1)

Output voltage at 100 °C = (100 + 273) x 10 x 10^-3 V = 3.73 V    (2)
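These conversions are simple to check numerically. The following is a minimal sketch (illustrative only, not part of the authors' PSpice design) of the LM 335 scaling in Eq. (1) and Eq. (2), together with a comparison against the 3.65 V reference introduced in the next section:

    def lm335_voltage(temp_c):
        # LM 335: +10 mV per kelvin of absolute temperature.
        return (temp_c + 273.0) * 10e-3

    V_REF = 3.65  # comparator reference, corresponding to about 92 C

    for t in (-40, 25, 92, 100):
        v = lm335_voltage(t)
        print(f"{t:4d} C -> {v:.2f} V  trip={v > V_REF}")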
The temperature of an arc can reach 20000 °C at its root [6]. Before an arcing fault occurs, the temperature in the switchboard will increase with increasing arc current. The temperature sensor will sense the temperature changes in the
surroundings and produce a voltage based output signal. The signal is then sent to the voltage comparator through the buffer to be compared with the reference voltage.

With every 1 °C increase in the temperature of the surroundings, a 1 K increase takes place in the LM 335 temperature sensor, and for every 1 K increase the output voltage increases by 10 mV. Under normal conditions, the temperature inside a switchboard should not be more than 100 °C. As calculated in Eq. (2), when the temperature inside a switchboard is 100 °C, the output voltage of the LM 335 is 3.73 V.

A buffer amplifier provides electrical impedance transformation from one circuit to another; here it is used to transfer the voltage from the temperature sensor to the voltage comparator. A unity gain buffer is used in the circuit design: the output of the op-amp (buffer) is connected to its inverting input as negative feedback, so the output voltage is simply equal to the input voltage of the buffer. The output from the temperature sensor is connected to the non-inverting input of the buffer (op-amp), and the output from the buffer is identical to the temperature sensor output.

The temperature sensor generates a voltage based signal with respect to the temperature detected from the surroundings. The signal is then sent to a voltage comparator through a buffer. The voltage comparator compares the signal with a reference voltage and indicates at its output which is larger. If the signal from the temperature sensor exceeds the reference voltage of the voltage comparator, the output of the comparator produces a positive value, which sends a trip signal to the trip indicator; otherwise, the output voltage of the voltage comparator indicates a negative value, which does not trigger the trip indicator.

Fig. 4 shows the schematic diagram of the arc fault temperature detector in the PSpice program. The input for this circuit is an AC supply, used to represent the output signal from the temperature sensor; the output voltage range of the sensor is used as the input voltage range of the circuit. The AC input voltage, Vin, ranges from 2.33 V (corresponding to -40 °C) to 3.73 V (corresponding to 100 °C), as obtained from Eq. (1) and Eq. (2).

U1 is an op-amp, which represents the buffer in this circuit. The AC supply is connected to the non-inverting input of U1, and the inverting input of U1 is connected to the output of U1 to produce a unity gain buffer. The output voltage of U1 is the same as the input voltage, since it is a unity gain buffer.

Then, the output voltage of U1, Vin'', is connected to the non-inverting input (pin 3) of U2, which is a voltage comparator. The output voltage of the buffer is used as the input voltage of the comparator. Theoretically, the input voltage of U2 is identical to the output voltage of U1 and also to the output voltage of the temperature sensor, which is represented by an AC source in this circuit. A uA741 op-amp is used as the voltage comparator. A DC input voltage, Vref, of 3.65 V is placed at the inverting input (pin 2) of U2 to produce a constant reference voltage; the voltage value of 3.65 V corresponds to a temperature of 92 °C.

A +9 V DC supply is connected to pin 7 and a -9 V DC supply is connected to pin 4 of U2 to supply voltage for this component. The output voltage from U2 (pin 6), Vout, is used to indicate the result of the comparison between the input voltage and the reference voltage.

Figure 4. PSpice Schematic Diagram of an Arc Fault Temperature Detector

The circuit is intended to detect the changes of temperature in the environment and operate the buzzer when the temperature exceeds the predetermined limit. The arc fault temperature detector is modeled at lower values of temperature with respect to the practical temperature.

IV. SIMULATION RESULTS

The PSpice simulation result for the schematic diagram of the arc fault temperature detector is shown in Fig. 5. The straight line in green in Fig. 5 represents the value of the reference voltage (pin 2) of U2, which is set to 3.65 V. The waveform in yellow is the AC input voltage of the circuit, Vin. The waveform in red, which is the same as the waveform in yellow, is the output voltage of U1, Vin''. The square wave in blue is the output voltage of U2, Vout.

The waveforms in red and yellow are the same because the input and output voltages of the buffer are identical. The AC input voltage, indicated by the yellow waveform, forms a sinusoidal wave in the range of 2.33 V to 3.73 V, and the output of the buffer, in red, is in the range of 2.33 V to 3.73 V as well. The output voltage of the buffer is then compared with the reference voltage of 3.65 V. Fig. 5 shows that for the portions where the yellow waveform is higher than the green straight line, the blue square wave indicates a positive value of 4.061 V; otherwise, the blue square wave indicates a negative value of -4.061 V. In other words, when
the output voltage from U1 is larger than the reference voltage of U2, the output of U2 produces a positive value, which triggers the trip indicator; when the output voltage from U1 is smaller than the reference voltage, the output of U2 produces a negative value, which does not trigger the trip indicator. The trip indicator is responsible for sending a signal to the circuit breaker in order to isolate the arc fault immediately and prevent further damage.
Figure 5. Simulation Result of an Arc Fault Temperature Detector Circuit

V. CONCLUSION

Arcing faults in low voltage switchboards are a serious issue, as their effects are devastating. In this paper, the temperature sensor in the proposed arc fault temperature detector circuit is able to generate a voltage based signal with respect to the temperature detected from the surroundings. The signal is then sent to a voltage comparator through a buffer to trigger the trip indicator. Early detection of an arc fault in a low voltage switchboard enables the isolation of the power supply to the consumer side just before the occurrence of the arc fault and thereby reduces the danger of personal injury and damage to buildings. In addition, it improves system reliability without power interruption, which is particularly essential to hospitals and certain industries with sensitive loads.

REFERENCES

[1] T. Gammon and J. Matthews, "The Historical Evolution of Arcing-Fault Models for Low Voltage Systems", IEEE Industrial & Commercial Power Systems Technical Conference, 1999.
[2] Max F. Hoyaux, "Arc Physics", New York: Springer-Verlag, 1968.
[3] H. Bruce Land III, Christopher L. Eddins, and John M. Klimek, "Evolution of Arc Fault Protection Technology at APL", Johns Hopkins APL Technical Digest, Vol. 25, No. 2, 2004.
[4] K. N'guessan, E. Jouseau, G. Rostaing, and F. Francois, "A New Approach for Local Detection of Failures and Global Diagnosis of LV Switchboards", IEEE International Conference on Industrial Technology (ICIT 2006), pp. 506-511, 15-17 Dec. 2006.
[5] Wikipedia, the free encyclopedia, "Electric switchboard" [online]. http://en.wikipedia.org/wiki/Electric_switchboard [2009, November 20].
[6] Wikipedia, the free encyclopedia, "Low voltage" [online]. http://en.wikipedia.org/wiki/Low_voltage [2009, November 20].
[7] H. Bruce Land III, "The Behavior of Arcing Faults in Low Voltage Switchboards", IEEE Transactions on Industry Applications, Vol. 44, No. 2, March/April 2008.
[8] Keith Malmedal and P. K. Sen, "Arcing Fault Current and the Criteria for Setting Ground Fault Relays in Solidly-Grounded Low Voltage Systems", Industrial and Commercial Power Systems Technical Conference, 2000.
[9] Tammy Gammon and John Matthews, "Arcing Fault Models for Low Voltage Power Systems", Industrial and Commercial Power Systems Technical Conference, 2000.
[10] Peter Muller, Stefan Tenbohlen, Reinhard Maier, and Michael Anheuser, "Artificial Low Current Arc Fault for Pattern Recognition in Low Voltage Switchgear", Institute of Power Transmission and High Voltage Technology (IEH).
[11] B. R. Baliga and E. Pfender, "Fire Safety Related Testing of Electric Cable Insulation Materials", Univ. Minnesota, 1975.
[12] H. Bruce Land III, Christopher L. Eddins, and John M. Klimek, "Evolution of Arc Fault Protection Technology at APL", Johns Hopkins APL Technical Digest, Vol. 25, No. 2, 2004.
Power factor improvement with SVC based on the PI controller under Load Fault

Saeid Gholami Farkoush, Sang-Bong Rhee
Department of Electrical Engineering, Yeungnam University, Gyeongsan-si, Korea
[email protected], [email protected]
Abstract—In this paper, to improve the power quality and efficiency, power factor correction is performed using an SVC (Static Var Compensator) under load transient conditions. The SVC uses a TCR (thyristor controlled reactor) and a TSC (thyristor switched capacitor). The system power factor becomes constant by using the SVC at the PCC (point of common coupling), where it otherwise changes dramatically under different load conditions. To obtain the best power factor in the system, a PI controller is used to determine the necessary reactive power by connecting the capacitance and the reactance to the PCC with the TSC and TCR respectively. The simulation results are displayed with MATLAB/Simulink to verify the effectiveness of the proposed algorithm.

Keywords-Power factor correction; SVC; MATLAB/SIMULINK
INTRODUCTION

Unbalanced loads and poor power factors are two crucial challenges associated with electric power distribution systems. Load unbalancing, along with the reactive power flow that is a direct consequence of a poor power factor, increases the losses of the distribution system and causes a variety of power quality problems. Accordingly, reactive power compensation has become an issue of great importance, and Static Var Compensators (SVCs) have been investigated and deployed for reactive power compensation in order to achieve power factor correction [1]-[6].

In this area, one concept that has been proposed for the control of an SVC is a delta-connected TCR-FC with a PID controller for power factor correction; however, a PID controller is more complicated than a PI controller in a power system, and in the SVC (TCR-FC) configuration the capacitor is fixed, so controlling the capacitor value is impossible, which is not a good approach for achieving the best power factor correction [7].

The basis for the algorithm used in this paper to calculate the compensation susceptances associated with each phase of a delta-connected three-phase SVC for power factor correction is given in [8].

A fuzzy logic SVC for power factor correction is presented in [9]. Fuzzy logic is an important tool for controlling nonlinear, complex, vague and ill-defined systems; nevertheless, its performance speed is lower than that of the PI controller used for the SVC.

In this paper, an SVC for improving the power factor under load transient conditions is proposed. The SVC uses a TCR (thyristor controlled reactor) and a TSC (thyristor switched capacitor); to obtain the best power factor in the system, a PI controller is used to determine the necessary reactive power by connecting the capacitance and the reactance to the PCC with the TSC and TCR respectively.
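To make the PI-based compensation loop concrete, the following is a minimal sketch with hypothetical gains (the paper's controller is built in MATLAB/Simulink; kp, ki and the signal names below are illustrative assumptions): the controller drives the reactive power mismatch at the PCC toward zero by adjusting the susceptance command realized by the TSC and TCR.

    # Hypothetical discrete PI loop for the SVC susceptance command.
    def pi_svc(q_measured, q_target, integ, kp=0.5, ki=0.1, dt=1e-3):
        error = q_target - q_measured    # reactive power mismatch at the PCC
        integ += error * dt              # integrator state
        b_cmd = kp * error + ki * integ  # susceptance command for TSC/TCR
        return b_cmd, integ

    integ = 0.0
    for q in (0.4, 0.3, 0.2):            # measured reactive power (pu)
        b_cmd, integ = pi_svc(q, 0.0, integ)
        print(f"B_cmd = {b_cmd:+.3f} pu")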
STATIC VAR COMPENSATOR (SVC)

Static Var Compensators are shunt connected static generators/absorbers whose outputs are varied so as to control the voltage and the power factor of electric power systems. In its simplest form, an SVC is connected in a thyristor switched capacitor-thyristor controlled reactor (TSC-TCR) configuration with a control system, as shown in Fig. 1.

Fig. 1. Single-line Diagram of an SVC and Control System

DESCRIPTION OF SYSTEM

Assume the SVC comprises one TCR bank and three TSC banks connected to the 22.9 kV bus via a 333-MVA, 22.9/16-kV transformer, with Xk = 15% on the secondary side. The voltage droop of the regulator is 0.01 pu/100 VA (0.03 pu/300 VA); when the SVC operating point changes from fully capacitive to fully inductive, the SVC voltage varies between 1 - 0.03 = 0.97 pu and 1 + 0.01 = 1.01 pu.

SIMULATION

The SVC is simulated in MATLAB/SIMULINK and connected to the power system, where it is applied for power factor correction. First, the system is simulated without the load fault and without the SVC, as shown in Fig. 2.
Fig. 2. SVC model

Next, the system is simulated with the load fault and without the SVC; the resulting power factor is shown in Fig. 3.

Fig. 3. Power factor with the fault load and no SVC (power factor versus time)

In the third case, the system is simulated with the load fault and with the SVC, as shown in Fig. 4.

Fig. 4. Power factor with the fault load and the SVC (power factor versus time)

In Fig. 3, when a load fault happens, the power factor varies between 0.6 and 0.8 and is not constant as the load changes. To solve this problem, the SVC with a PI controller is introduced into the system. When the SVC is not connected to the system, the power factor is 0.8; when it is connected, the power factor increases to 0.9. Also, when the system did not use the SVC, the variation of the power factor was 0.3, while with the SVC the variation is reduced to 0.05. Fig. 4 shows the power factor of the system when the SVC is connected.

CONCLUSION

This paper presents a model of an SVC for the control of the power factor of a system, to maintain the steady state of the system when the load changes. By using the SVC in the power system, the power factor is increased; the variation of the power factor when the load changes is also decreased.

Therefore, an SVC in a power system improves the stability of the system while the load changes. The proposed SVC shows better performance and regulates the power factor in the power system.

ACKNOWLEDGMENT

The research was supported by Korea Electric Power Corporation Research Institute through Korea Electrical Engineering & Science Research Institute [grant number: R14-XA02-34].

REFERENCES

[1] "SVC (Static Var Compensator): An Insurance for Improved Grid System Stability and Reliability", ABB company brochure.
[2] "Power Factor Correction and Harmonic Filtering in Electrical Plants", ABB company brochure.
[3] Alisha Banga and S. S. Kaushik, "Modeling and Simulation of SVC Controller for Enhancement of Power System Stability", International Journal of Advances in Engineering & Technology, July 2011, ISSN: 2231-1963.
[4] Alok Kumar Mohanty and Amar Kumar Barik, "Power System Stability Improvement Using FACTS Devices", International Journal of Modern Engineering Research (IJMER), Vol. 1, Issue 2, pp. 666-672, ISSN: 2249-6645.
[5] Houari Boudjella, F. Z. Gherbi, S. Hadjeri, and F. Ghezal, "Modelling and Simulation of Static Var Compensator with Matlab", 4th International Conference on Computer Integrated Manufacturing (CIP), November 2007.
[6] Houari Boudjella, Fatima Zohra Gherbi, and Fatiha Lakdja, "Modelling and Simulation of Static Var Compensator (SVC) in Power System Studies by MATLAB", The Annals of "Dunarea de Jos" University of Galati, Fascicle III, Vol. 31, No. 1, ISSN 1221-454X, 2008.
[7] Habibur Rahman, Md. Fayzur Rahman, and Harun-Or-Rashid, "Stability Improvement of Power System by Using SVC with PID Controller", International Journal of Emerging Technology and Advanced Engineering, ISSN 2250-2459, Vol. 2, Issue 7, July 2012.
[8] M. Mokhtari, S. Golshannavaz, D. Nazarpour, and M. Farsadi, "Control of an SVC for the Load Balancing and Power Factor Correction with a New Algorithm Based on the Power Analysis", First Power Quality Conference (PQC), 2010, pp. 1-5, E-ISBN: 978-964-463-063-7.
[9] M. T. Hagh and M. Abapour, "Fuzzy Logic Based SVC for Reactive Power Compensation and Power Factor Correction", International Power Engineering Conference (IPEC 2007), pp. 1241-1246, ISBN: 978-981-05-9423-7.
Unit Commitment Considering Vehicle to Grid and Wind
Generations
Zhile Yang, Kang Li
School of Electronics, Electrical Engineering and Computer Science, Queen’s University Belfast,
Belfast, BT9 5AH, United Kingdom (e-mail:{zyang07,kli}@qub.ac.uk).
Abstract—Unit commitment for thermal generation units has long been a key issue for power system operators and
a challenge in smart grid implementation. The task of unit commitment is to minimize the economic generation cost
while maintaining the power balance between power generation units and user load. On the other hand, the fast
development of plug-in electric vehicles provides options to shift peak load of the power system and even provides
ancillary service and feed power back to the grid during peak time. The interaction between thermal generators,
plug-in electric vehicles and renewable generations has a potential to further reduce the generation cost and enhance
the flexibility of the power system. In this paper, 50000 plug-in electric vehicles serving as vehicle to grid mode and
80 MW wind generation over multiple seasons are integrated in a conventional 10-unit thermal generation system.
A hybrid solving approach combining a binary particle swarm optimization, an integer differential evolution
algorithm and the Lagrangian relaxation method is employed to solve the mixed integer nonlinear unit commitment
problem. The results show that the wind generation and PEV vehicle to grid service could work together to significantly save the fossil fuel cost. The intelligent scheduling method could simultaneously determine the unit
commitment and PEV discharge power distribution.
Keywords-unit commitment, electric vehicle, vehicle to grid, wind generation, particle swarm optimization
INTRODUCTION
Unit commitment (UC) aims to minimize the generation cost by determining the on/off status of, and the power delivered from, thermal generation units under several system constraints [1]. It is a large-scale mixed-integer nonlinear problem and presents a significant challenge to solve. A number of methods have been proposed over the years, including conventional methods such as dynamic programming [2] and Lagrangian relaxation [3], and intelligent algorithms such as the genetic algorithm [4], binary particle swarm optimization (BPSO) [5], the quantum-inspired particle swarm algorithm [6] and the gravitational search algorithm [7], etc.

The latest technical developments and successful commercialization have brought electric vehicles (EVs) back into the spotlight. EVs can be categorized as pure battery electric vehicles (BEVs), hybrid electric vehicles (HEVs) (normally non-plug-in), and plug-in hybrid electric vehicles (PHEVs), with both BEVs and PHEVs referred to as plug-in EVs (PEVs) [8]. Due to continuing technical development, the capacity of EV batteries is increasing fast and has reached 85 kWh for a single vehicle. EV battery packs of large capacity are able to store more energy for driving-distance extension. Moreover, a high penetration of PEVs also has the potential to provide energy storage services for absorbing intermittent renewable energy generation during off-peak load periods, as well as vehicle to grid (V2G) service for providing ancillary services and relieving the peak load level.

The integration of conventional thermal units, PEVs and renewable energy sources poses an even more complicated situation for system operators. Some studies [9,10] integrated PEVs and renewable energy generation in the 10-unit system and solved the UC problem by basic binary particle swarm optimization [9] and a genetic algorithm combined with Lagrangian relaxation [10]. In our previous work [11], multiple PEV charging scenarios were comparatively employed in the UC problem and solved by a quantum-inspired PSO method. It should be noted that state-of-the-art PEV chargers are of high power and fixed rate (for example, 100 kW), due to which the power output of a PEV aggregator (providing V2G service) cannot generate smooth linear curves. The integer number of online chargers therefore becomes a decision variable in the UC problem formulation.

In this paper, four scenarios of wind generation are comparatively studied in the UC problem, together with V2G scheduling, namely the UCVW problem. A hybrid approach including a novel binary PSO method and an integer differential evolution (DE) algorithm is employed to solve the UCVW problem. The optimization results are analyzed from the economic perspective.

PROBLEM FORMULATION
The new UCVW problem shares the same formulation as the traditional UC problem, with the objective function and several system constraints. Some PEV constraints and the wind generation are added to the formulation.

Objective function
The objective function is the economic cost from the generation perspective. The cost is composed of two parts: the fossil fuel cost and the start-up cost.

Fuel cost

F_{j,t}(P_{j,t}) = a_j + b_j P_{j,t} + c_j P_{j,t}^2   (1)

The fuel cost is the quadratic function shown in (1), with P_{j,t} and F_{j,t} denoting the determined power and the fuel cost; a_j, b_j and c_j are the fuel cost coefficients of the corresponding unit.

Start-up cost

SU_{j,t} = \begin{cases} SU_{H,j}, & \text{if } MDT_j \le TOFF_{j,t} \le MDT_j + T_{cold,j} \\ SU_{C,j}, & \text{if } TOFF_{j,t} > MDT_j + T_{cold,j} \end{cases}   (2)

The start-up cost SU_{j,t} is an inevitable cost of 'turning on' an off-line generator. A cold generator is required to be re-heated and incurs a higher cold-start cost SU_{C,j}, while the hot-start cost is denoted SU_{H,j}. The minimum down time and minimum up time are denoted MDT_j and MUT_j, the periods for which an on-line unit must stay off after shutdown and vice versa. T_{cold,j} is the cold-start hour, while TOFF_{j,t} is the off-line duration.

Note that, due to the various types of EV batteries and the long experimental periods involved, very few contributions have been made to quantitatively evaluate the battery cost. Therefore, in this paper the battery depletion is ignored and the final objective cost function is given below:

\min \sum_{t=1}^{T} \sum_{j=1}^{n} \left[ F_j(P_{j,t}) u_{j,t} + SU_{j,t}(1 - u_{j,t-1}) u_{j,t} \right]   (3)

Constraints
The new UCVW problem integrates the plug-in electric vehicles and wind generation into the power system. Some constraints of the inherent power system, as well as limitations of the V2G service of PEVs and of the wind generation, are considered.

Generation limit
The generation limit gives the maximum and minimum power generation of each unit:

u_{j,t} P_{j,min} \le P_{j,t} \le u_{j,t} P_{j,max}   (4)

where P_{j,min} and P_{j,max} are the minimum and maximum power limits respectively, and u_{j,t} denotes the binary on/off-line status of the unit.

Minimum up/down time limit
Traditional thermal power generation units, especially coal-fueled generators, are subject to minimum up and down times, shown below,

u_{j,t} = \begin{cases} 1, & \text{if } 1 \le TON_{j,t-1} < MUT_j \\ 0, & \text{if } 1 \le TOFF_{j,t-1} < MDT_j \\ 0 \text{ or } 1, & \text{otherwise} \end{cases}   (5)

where the unit is forced to stay on or off within the minimum periods.

Spinning reserve limit
The system load prediction may fail to precisely reflect the real system load demand. The spinning reserve is therefore necessary to provide a redundant power reserve to meet unpredicted demand:

\sum_{j=1}^{n} P_{j,max} u_{j,t} + P_{Wind,t} + P_{EV} N_{dsch,t} \ge P_{D,t} + SR_t   (6)

In the spinning reserve limit (6), SR_t is the reserved power amount. The system capacity should not be less than the sum of the predicted load and the spinning reserve, where the system capacity is the accumulation of the maximum capacity of the on-line units, the predicted wind generation and the V2G power.

Power demand limit
The power demand limit expresses the power balance between power generation and user demand. In the UCVW problem, the wind generation and the V2G power are accumulated as parts of the generation:

\sum_{j=1}^{n} P_{j,t} u_{j,t} + P_{Wind,t} + P_{EV} N_{dsch,t} = P_{D,t}   (7)

where P_{Wind,t} and P_{D,t} are the wind generation and the user demand, and P_{EV} and N_{dsch,t} represent the V2G power of a single PEV and the number of online PEVs feeding power back to the grid, respectively.

Discharging number limit
The integer number of PEVs discharging in each hour is limited by maximum and minimum values. It is also assumed that the total number of PEVs joining the V2G service is limited by the total PEV number, and that each PEV provides one hour of V2G service over the whole-day horizon. The limitations are:

\sum_{t=1}^{T} N_{dsch,t} = N_{total}   (8)

N_{dsch,min} \le N_{dsch,t} \le N_{dsch,max}   (9)

where N_{total} is the total number of PEVs plugged into the system, and N_{dsch,min} and N_{dsch,max} are the lower and upper boundaries of the discharging number.
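To make the cost model concrete, the following minimal Python sketch (ours, not the authors' code) evaluates the fuel cost (1), the start-up cost (2) and the single-unit contribution to the objective (3); all parameter names are illustrative.

    def fuel_cost(p, a, b, c):
        """Quadratic fuel cost of Eq. (1) for output power p (MW)."""
        return a + b * p + c * p ** 2

    def startup_cost(t_off, mdt, t_cold, su_hot, su_cold):
        """Eq. (2): hot start if the unit has been off at most MDT + Tcold hours."""
        return su_hot if t_off <= mdt + t_cold else su_cold

    def unit_cost(u, p, a, b, c, mdt, t_cold, su_hot, su_cold, t_off0=10):
        """Single-unit term of objective (3) over an on/off schedule u."""
        cost, t_off = 0.0, t_off0
        for t, on in enumerate(u):
            if on:
                if t == 0 or not u[t - 1]:   # off -> on transition: pay start-up
                    cost += startup_cost(t_off, mdt, t_cold, su_hot, su_cold)
                cost += fuel_cost(p[t], a, b, c)
                t_off = 0
            else:
                t_off += 1
        return cost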
HYBRID HEURISTIC APPROACH
The complicated UCVW problem calls for powerful computational techniques. Basic binary PSO has been employed in some early research associated with integer PSO. However, basic BPSO suffers from low convergence speed and is easily trapped in local optima. In this paper, a modified BPSO is therefore proposed, in which the sigmoid probability function is redesigned into a symmetric shape.
NUMERICAL RESULT AND ANALYSIS
Due to the length limitation, two cases are considered for analysis. The 10-unit UC problem without wind and V2G is considered in Case 1 to illustrate the performance of the MBPSO method. Case 2 comparatively integrates four scenarios of wind generation and the V2G service into the 10-unit system.
Binary particle swarm optimisation
The BPSO is an important variant of PSO and has been widely used for solving the UC problem [5]. The BPSO uses the updated velocities to obtain binary statuses through a sigmoid probability function. The velocities are updated as:
Case 1: 10-unit only
In Case 1, the 10-unit, 24-hour thermal plant data are taken from [1]. The MBPSO, associated with the Lagrangian relaxation method, is used to solve this benchmark case. In terms of the parameter setting for the algorithm, the number of particles in a population is 20 and the maximum iteration is 1000. The velocity range is [-6, 6], and the inertia weight w decreases from 0.9 to 0.4. The learning factors C1 and C2 are 1.5 and 2.5 respectively. The methods are tested over 30 independent runs to eliminate random effects. To comparatively study the performance, the optimization results of a quantum-inspired PSO (QPSO) [6], a binary gravitational search algorithm (BGSA) [7] and a BPSO are also listed. The MBPSO is implemented in MATLAB 2014a on a personal computer with an Intel i5-3470 CPU at 3.20 GHz and 4 GB RAM.
v_i(t+1) = w(t) \cdot v_i(t) + C_1(t) \cdot rand_1 \cdot (p_{lbest,i} - x_i(t)) + C_2(t) \cdot rand_2 \cdot (p_{gbest} - x_i(t))   (10)
where v_i(t+1), v_i(t) and x_i(t) are the updated velocity, the current velocity and the discrete variable of the i-th particle at the t-th iteration; w(t), C_1(t) and C_2(t) represent the inertia weighting, cognitive and social coefficients respectively; and p_{lbest,i} and p_{gbest} are the binary local and global best solutions. The original sigmoid probability function converges slowly and is prone to premature convergence. This is partly because, when the updated velocity is small, the binary variable in the corresponding position should be left unchanged with high probability; in the original function, however, the probability is 0.5 when v_i is 0, leading to an unsteady status for the optimal solution. To remedy this drawback, the probability is redesigned as (11), where an absolute value operator is utilized to make the probability distribution symmetric:
P(v_i(t+1)) = 2 \times \left| \frac{1}{1 + e^{-v_i(t+1)}} - 0.5 \right|   (11)

According to this probability, the new iterate of the binary variable x_i is generated as:

\text{if } rand < P(v_i(t+1)) \text{ then } x_i(t+1) = 1; \text{ else } x_i(t+1) = 0   (12)

In terms of the parameter selection for (10), the original configuration is retained for implementation. This modified BPSO is named MBPSO.

TABLE I. SIMULATION RESULTS OF CASE 1 ($/DAY)

Method | Best | Worst | Mean
QPSO [6] | 563,977 | 563,977 | 563,977
BGSA [7] | 563,937 | 564,241 | 564,031
BPSO | 563,937 | 564,765 | 564,139
MBPSO | 563,937 | 563,977 | 563,964
It can be observed from Table I that the new MBPSO outperforms QPSO on the best and mean values, and performs better than BPSO and BGSA on the worst and mean values.
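As an illustration of the symmetric binary update, the following sketch (ours, not the authors' implementation) applies (11) and (12) to a velocity vector:

    import numpy as np

    def mbpso_binary_update(v):
        """Symmetric sigmoid of Eq. (11): P is 0 at v = 0 and tends to 1 as
        |v| grows, so near-zero velocities rarely set the bit."""
        prob = 2.0 * np.abs(1.0 / (1.0 + np.exp(-v)) - 0.5)
        # Eq. (12): set the bit with probability P, clear it otherwise.
        return (np.random.rand(v.size) < prob).astype(int)

    x_new = mbpso_binary_update(np.array([-6.0, -0.1, 0.0, 0.1, 6.0]))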
Integer differential evolution method
Differential evolution (DE) is another popular heuristic optimization method and has been widely used in various applications and engineering fields [12]. Two key phases are employed in the DE process, namely mutation and crossover. The original DE method is employed in this paper. It should be noted that the variables in the conventional DE method are continuous and real-valued. In order to utilize DE to optimize the integer values, an extra step is added in which a rounding function is employed to ensure that all newly generated variables are integers representing the number of PEVs providing the V2G service.
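A minimal sketch (ours) of one DE/rand/1/bin generation with the extra rounding step; F = 0.7 and Cr = 0.9 follow the paper, while the bounds 0..5000 (10% of the 50000 PEVs) are our reading of the setup:

    import numpy as np

    def int_de_step(pop, F=0.7, Cr=0.9, lo=0, hi=5000):
        """One generation of integer DE over a population of count vectors."""
        n, d = pop.shape
        new = pop.copy()
        for i in range(n):
            r1, r2, r3 = np.random.choice(
                [j for j in range(n) if j != i], 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])   # mutation
            cross = np.random.rand(d) < Cr               # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            new[i] = np.clip(np.rint(trial), lo, hi)     # extra rounding step
        return new.astype(int)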
Case 2: 10-unit with wind power and V2G
In this case, the wind generation and the V2G service are integrated in the 10-unit power system. The wind data is real wind farm generation data recorded by EirGrid in Ireland in a specific year, and the different scenarios are shown in Figure 1. Note that the prediction error of the wind generation is ignored. Four seasonal scenarios, winter (Jan), spring (Apr), summer (July) and autumn (Oct), each with a maximum generation of 80 MW/hour, are illustrated with their total wind generation. The total wind generation over the 24-hour horizon is 1277 MW, 1031 MW, 764 MW and 1285 MW respectively for the corresponding seasonal scenarios. In terms of PEVs, N_total is assumed as 50000 for joining in the V2G
service. The rated V2G power of each PEV is calculated as 15 kW × 50% (SOC) × 85% (efficiency) = 0.006375 MW. N_dsch,max is set as 10% of the total PEV number and N_dsch,min is 0. It is also assumed that all the PEVs are charged from renewable energy, and the charging cost is not considered in this paper. The parameters F and Cr in the DE algorithm are set as 0.7 and 0.9. The rest of the configuration is the same as in Case 1, and only the hybrid method combining MBPSO and DE is employed. The maximum iteration is set as 200.
Table II shows the cost of the multiple integration scenarios and compares the economic savings due to the introduction of PEVs and wind. The maximum saving is 40,299 $/day, obtained in the winter wind scenario together with V2G. The saving rate is calculated as the cost saving divided by the extra power (i.e., PEV + wind). Note that the V2G-only mode sees the highest saving rate, 29.31 $/MW.day, owing to the intelligent scheduling that properly supports the grid at peak load. Although the autumn scenario generates more wind power than the winter scenario, it contributes less cost saving. This is because the wind power is boosted during the off-peak evening hours and thus fails to reduce the peak load. This conclusion is also reflected in the lower saving rate of the autumn scenario, 24.34 $/MW.day, compared with 25.25 $/MW.day for the winter one.
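The saving-rate arithmetic can be checked directly from the Table II figures (a small sketch of ours):

    # Saving rate = cost saving ($/day) / extra injected power (PEV + wind, MW).
    base_cost = 563_937                       # 10-unit only, $/day
    scenarios = {                             # cost ($/day), extra power (MW)
        "V2G only":        (554_587,  319),
        "V2G+Wind-winter": (523_638, 1596),
        "V2G+Wind-autumn": (524_888, 1604),
    }
    for name, (cost, extra_mw) in scenarios.items():
        saving = base_cost - cost
        print(f"{name}: {saving} $/day, {saving / extra_mw:.2f} $/MW.day")
    # winter -> 25.25, autumn -> 24.34, matching Table II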
CONCLUSION AND FUTURE WORK
With the increasing penetration of PEVs and renewable energy, intelligent scheduling methods are gaining more attention as a means to enhance the 'smartness' of the power grid.
In this paper, a hybrid intelligent method has been
proposed to schedule the unit commitment problem
integrated with plug-in electric vehicles and wind power
generation. The results show that the wind and PEV V2G
service could work together to significantly save the
fossil fuel cost. The intelligent scheduling method could
simultaneously determine the unit commitment and PEV
discharge power distribution.
Future work will focus on the development of
intelligent algorithms as well as scheduling strategies for
charging and discharging of PEVs to efficiently work
together with high penetration of renewable energy
generations.
ACKNOWLEDGMENT
This work was financially supported by UK EPSRC
under grant EP/L001063/1 and China NSFC under grants
51361130153 and 61273040. The authors would like to
thank the EirGrid for providing the wind generation
datasets.
Figure 1. Wind distribution of the four seasonal scenarios
TABLE II. SIMULATION RESULTS OF CASE 2 ($/DAY)

Scenario | Cost ($/day) | Saving ($/day) | PEV+wind (MW) | Saving rate ($/MW.day)
10-unit only | 563,937 | 0 | 0 | 0
10-unit+V2G | 554,587 | 9,350 | 319 | 29.31
10-unit+V2G+Wind-winter | 523,638 | 40,299 | 1596 | 25.25
10-unit+V2G+Wind-spring | 531,045 | 32,892 | 1350 | 24.36
10-unit+V2G+Wind-summer | 537,322 | 26,615 | 1083 | 24.58
10-unit+V2G+Wind-autumn | 524,888 | 39,049 | 1604 | 24.34

REFERENCES
[1] T. Ting, M. Rao, C. Loo, A novel approach for unit commitment
problem via an effective hybrid particle swarm optimization, Power
Systems, IEEE Transactions on 21 (1) (2006) 411–418.
[2] X. Tang, B. Fox, K. Li, Reserve from wind power potential in
system economic loading, IET Renewable Power Generation 8 (2014)
558–568.
[3] Q. Jiang, B. Zhou, M. Zhang, Parallel augment Lagrangian
Relaxation method for transient stability constrained unit commitment,
Power Systems, IEEE Transactions on 28 (2) (2013) 1140–1148.
[4] A. Kazarlis, A. Bakirtzis, V. Petridis, A genetic algorithm solution
to the unit commitment problem, Power Systems, IEEE Transactions
on 11 (1) (1996) 83–92.
[5] X. Yuan, H. Nie, A. Su, L. Wang, Y. Yuan, An improved binary
particle swarm optimization for unit commitment problem, Expert
Systems with applications 36 (4) (2009) 8049–8055.
[6] Y. Jeong, J. Park, S. Jang, K Lee. A new quantum-inspired binary
PSO: application to unit commitment problems for power systems.
Power Systems, IEEE Transactions on, 2010, 25(3): 1486-1495.
[7] B. Ji, X. Yuan, Z. Chen, H. Tian, Improved gravitational search
algorithm for unit commitment considering uncertainty of wind power,
Energy 67 (2014) 52–62.
[8] Z. Yang, K. Li, A. Foley, C. Zhang, Optimal scheduling methods to
integrate plug-in electric vehicles with the power system: a review, in:
19th World Congress of the International Federation of Automatic
Control, IFAC, 2014, pp. 8594–8603.
[9] A. Y. Saber, G. K. Venayagamoorthy, Resource scheduling under
uncertainty in a smart grid with renewables and plug-in vehicles,
Systems Journal, IEEE 6 (1) (2012) 103–109.
[10] Talebizadeh E, Rashidinejad M, Abdollahi A. Evaluation of plug-in electric vehicles impact on cost-based unit commitment. Journal of
Power Sources, 2014, 248: 545-552.
[11] Z. Yang, K. Li, Q. Niu, A. Foley, Unit Commitment Considering
Multiple Charging and Discharging Scenarios of Plug-in Electric
Vehicles, in International Joint Conference on Neural Networks
(IJCNN), 2015. IEEE, accepted.
[12] S. Das, P N. Suganthan. Differential evolution: a survey of the
state-of-the-art. Evolutionary Computation, IEEE Transactions on,
2011, 15(1): 4-31.
Theoretical Analysis and Software Modeling of Composite
Energy Storage Based on Battery and Supercapacitor in
Microgrid Photovoltaic Power System
Wenlong Jing*, Chean Hung Lai, Wallace S.H. Wong, M.L. Dennis Wong
Faculty of Engineering, Computing and Science,
Swinburne University of Technology Sarawak Campus, Malaysia
*[email protected]
Abstract— The PV power system is gaining popularity as a renewable energy solution in microgrids. However, owing to its randomness and intermittency, an energy storage system is required to balance generation and demand. This paper presents a study of the performance improvements obtained by employing composite energy storage in a 500 W rated PV system. Computer experiments show that the composite energy storage can enhance the instantaneous peak power and reduce the battery stress. The study consists of two main parts: first, a thorough theoretical analysis is carried out to evaluate the performance of the battery-supercapacitor composite energy storage under a periodic pulse power load condition. A battery relief factor is defined to indicate the level of reduction in battery stress. Second, the system is modeled in Matlab/Simulink and its performance is validated. The results show that fast dynamic load power regulation can be achieved by utilizing the supercapacitor, and that all impact power demands are satisfied. Moreover, the battery is able to provide smooth load power with noticeably decreased stress.
Keywords- Hybrid Energy Storage, Microgrid, PV System, Supercapacitor, Battery.
INTRODUCTION
In a microgrid, the load conditions and renewable energy sources are typically random and intermittent, which has a great impact on the stability of microgrid operation [1]-[2]. To balance the microgrid generation and demand, an efficient energy storage system is of great significance in ensuring operation stability and internal power steadiness [3].

Microgrids can operate as an autonomous power island or in a grid-connected mode [4]. Under normal circumstances, a microgrid generates power while connected to the utility system. When an accident occurs in the grid, the microgrid adopts the islanding operation mode and continues to serve its electrical load. During the conversion between these two operation modes, the switching process can cause power shortage and power oscillation [5]. Energy storage devices can be used to offset the power shortage; in the course of the conversion, the storage units can smooth surge power and enhance the system stability [6].

A small-scale microgrid has weak self-regulation, so load fluctuations and power grid failures inflict a great impact on its stability. Efficient energy storage can effectively solve this problem: it can store the excess energy when the load is low and provide energy to the microgrid under high load demand. Consequently, the stability and adjustment flexibility of the microgrid are considerably improved. Moreover, the energy storage can mitigate voltage dips, voltage oscillation and other issues [7]. Without these negative limitations, the microgrid can satisfy the variable load demand with reliable power quality. The energy storage helps the microgrid to satisfy the peak load electricity demand, compensate the reactive power, and suppress the voltage fluctuation and flicker.

As an energy storage device, the battery is one common and promising solution to serve the microgrid. However, the cycle life of a chemical battery deteriorates significantly when subjected to overcharging, high charge or discharge rates, deep discharge, etc. Thus, a regulator and a limiter always need to be integrated into the system to protect the battery from being damaged by impact power demand or over-charging/discharging current and voltage [8]. Consequently, the battery is unable to provide the corresponding power to satisfy sudden load demand. Moreover, the battery is unable to respond rapidly to sudden load demand and sustains only hundreds of charging/discharging cycles [9]. Therefore, with the battery as the main energy storage device, the overall system service life and performance are limited. To enhance the practicability of the energy storage within the microgrid, it is important to overcome the aforementioned shortcomings. Recent research has introduced the supercapacitor into the battery energy storage system to form a novel composite system [10]-[12].

The supercapacitor (SC) (sometimes ultracapacitor, formerly electric double-layer capacitor) is an electrochemical capacitor composed of two porous conducting electrodes. Its capacitance reaches values in the range of thousands of farads, bridging the gap between electrolytic capacitors and rechargeable batteries [13]. The SC, as a high-power-density device, typically stores 10 to 100 times more energy per unit volume than ordinary electrolytic capacitors, can accept and deliver charge much faster, and tolerates many more charging/discharging cycles than a battery [14]. Compared to ordinary capacitors, the SC has a higher dielectric constant, rated voltage and capacity, and a faster time for releasing and charging energy. Moreover, the SC does not require
additional supporting devices, thus it gives high reliability
with minimal maintenance workload. However, the SC
does have some limitations such as unequal voltages in
series connection, large fluctuations in the voltage range,
and low energy density [15].
To exploit the advantages of both the battery and the SC, a composite energy storage is suggested which combines the two storage units [16]. The composite system can compensate for the shortcomings of the two storage units and hence improve the overall performance of the energy storage system. Numerous studies have demonstrated that the composite energy storage system prolongs the overall system operation lifetime compared to a system without the SC, reduces internal system losses and relieves the battery charging/discharging stress [17]-[18].

In this paper, a 500 W photovoltaic (PV) system with composite energy storage units, combining an SC and a battery, is proposed. The operation of the PV system is evaluated via both theoretical analysis and simulation verification using Matlab/Simulink. Based on the theoretical analysis, a battery relief factor is proposed; the factor quantifies the enhancement of the power transfer ability within the microgrid with the composite energy storage system. The results show that the composite energy storage can enhance the instantaneous peak power to achieve fast dynamic load power regulation, stabilize the energy provision, increase the elimination rate of surge load power, relieve the battery stress and prolong the battery lifetime.

The rest of the paper is organized as follows: Section 2 reports the theoretical analysis of the composite energy storage units. Section 3 details the simulation and assessment of the proposed method. Finally, Section 4 concludes the paper.
THEORETICAL ANALYSIS
The charging and discharging process of the battery is limited by the chemical reactive ion diffusion rate. Therefore, it is difficult for the battery to release large instantaneous power when the load draws large power impulses. Compared to the battery, the SC has high power density and energy efficiency, a high charge and discharge rate and a long cycle life, and is suitable for impact power output occasions. However, it has low energy density and therefore cannot supply large amounts of energy to the system, for example during night time when no energy is produced by the PV system. In that case, the battery, which has high energy density, can be utilized. To overcome the deficiencies of both storage devices, the battery-SC composite energy storage system is proposed.

For the evaluation of the composite energy storage performance, it is important to derive a mathematical model to theoretically analyze the system in terms of energy efficiency, power capability and system stability. Based on the studies in [18] and [19], the equivalent circuit of the composite energy storage is shown in Fig. 1: the SC is typically regarded as a large capacitance with an equivalent series resistance, and the battery as a voltage source with an equivalent series resistance.

Figure 1. The composite energy storage equivalent circuit (battery branch: V_b, R_b; SC branch: C, R_s; output voltage v_o and load current i_o with branch currents i_b and i_c)
Using the Laplace transform, the circuit is transformed into the frequency domain, and the corresponding Thevenin equivalent voltage and impedance are:

V(s) = \frac{V_b}{s} + \frac{R_b}{R_b + R_s} \cdot \frac{V_{co} - V_b}{s + \frac{1}{(R_b + R_s)C}}   (1)

Z(s) = R_b \parallel \left( R_s + \frac{1}{sC} \right) = \frac{R_b R_s}{R_b + R_s} \cdot \frac{s + \frac{1}{R_s C}}{s + \frac{1}{(R_b + R_s)C}}   (2)
where s is the complex frequency and V_{co} is the initial SC voltage. Assuming a periodic pulse load with peak input current I_o, period T and duty cycle D, the output current i_o(t) is:

i_o(t) = I_o \sum_{k=0}^{N-1} \left[ \phi(t - kT) - \phi(t - (k + D)T) \right], \quad (k = 0, 1, 2, \ldots)   (3)

where \phi(t) is the Heaviside step function, whose corresponding Laplace transform gives:

I_o(s) = I_o \sum_{k=0}^{N-1} \left[ \frac{e^{-skT}}{s} - \frac{e^{-s(k+D)T}}{s} \right], \quad (k = 0, 1, 2, \ldots)   (4)
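For illustration, the pulse train of (3) can be generated numerically as follows (our sketch; the 3 s period and 30% duty cycle are the values used later in the simulation):

    import numpy as np

    def pulse_load(t, Io, T, D, N):
        """Periodic pulse load current i_o(t) of Eq. (3) built from unit steps."""
        phi = lambda x: (x >= 0).astype(float)   # Heaviside step function
        io = np.zeros_like(t)
        for k in range(N):
            io += phi(t - k * T) - phi(t - (k + D) * T)
        return Io * io

    t = np.linspace(0.0, 9.0, 2000)              # three 3 s periods
    io = pulse_load(t, Io=1.0, T=3.0, D=0.3, N=3)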
Thus, the voltage drop across the impedance Z(s) is then:
V_Z(s) = I_o(s) \, Z(s) = \frac{R_b R_s I_o}{R_b + R_s} \sum_{k=0}^{N-1} \left[ \frac{s + \frac{1}{R_s C}}{s + \frac{1}{(R_b + R_s)C}} \cdot \frac{e^{-skT} - e^{-s(k+D)T}}{s} \right]   (5)

As a result, the voltage drop across the load is then:

V_0(s) = V(s) - V_Z(s)   (6)

V_0(s) = \frac{V_b}{s} + \frac{R_b}{R_b + R_s} \cdot \frac{V_{co} - V_b}{s + \frac{1}{(R_b + R_s)C}} - V_Z(s)   (7)
The inverse Laplace transform of V_0(s) is:

v_o(t) = v_b + \frac{R_b}{R_b + R_s} (V_{co} - V_b) e^{-\frac{t}{(R_b + R_s)C}} - R_b I_o \sum_{k=0}^{N-1} \left[ \left( 1 - \frac{R_b}{R_b + R_s} e^{-\frac{t - kT}{(R_b + R_s)C}} \right) \phi(t - kT) - \left( 1 - \frac{R_b}{R_b + R_s} e^{-\frac{t - (k+D)T}{(R_b + R_s)C}} \right) \phi(t - (k + D)T) \right]   (8)
and the branch currents of the battery (i_b) and the SC (i_c) are, respectively:

i_b(t) = \frac{v_b - v_o(t)}{R_b} = -\frac{V_{co} - V_b}{R_b + R_s} e^{-\frac{t}{(R_b + R_s)C}} + I_o \sum_{k=0}^{N-1} \left[ \left( 1 - \frac{R_b}{R_b + R_s} e^{-\frac{t - kT}{(R_b + R_s)C}} \right) \phi(t - kT) - \left( 1 - \frac{R_b}{R_b + R_s} e^{-\frac{t - (k+D)T}{(R_b + R_s)C}} \right) \phi(t - (k + D)T) \right]   (9)

i_c(t) = i_o(t) - i_b(t)   (10)
The steady-state currents are:

i_{bss}(t) = I_o \sum_{k=0}^{N-1} \left[ \left( 1 - \frac{R_b}{R_b + R_s} e^{-\frac{t - kT}{(R_b + R_s)C}} \right) \phi(t - kT) - \left( 1 - \frac{R_b}{R_b + R_s} e^{-\frac{t - (k+D)T}{(R_b + R_s)C}} \right) \phi(t - (k + D)T) \right]   (11)

i_{css}(t) = \frac{R_b I_o}{R_b + R_s} \sum_{k=0}^{N-1} \left[ e^{-\frac{t - kT}{(R_b + R_s)C}} \phi(t - kT) - e^{-\frac{t - (k+D)T}{(R_b + R_s)C}} \phi(t - (k + D)T) \right]   (12)
Setting t = (k + D)T, the steady-state peak current of the battery is:

i_{bp} = I_o \left( 1 - \frac{R_b}{R_b + R_s} e^{-\frac{DT}{(R_b + R_s)C}} \cdot \frac{1 - e^{-\frac{(1-D)T}{(R_b + R_s)C}}}{1 - e^{-\frac{T}{(R_b + R_s)C}}} \right) = \frac{I_o}{\varepsilon}   (13)

where

\varepsilon = \left( 1 - \frac{R_b}{R_b + R_s} e^{-\frac{DT}{(R_b + R_s)C}} \cdot \frac{1 - e^{-\frac{(1-D)T}{(R_b + R_s)C}}}{1 - e^{-\frac{T}{(R_b + R_s)C}}} \right)^{-1}   (14)
We term ε the battery relief factor. Equation (13) shows that the steady-state current of the battery is always smaller than the load current: the battery provides only part of the current, and the remaining current is supplied through the SC. The battery peak rated power and the composite energy storage instantaneous peak power are:

P_{bp} = i_{bp} v_b = \frac{I_o}{\varepsilon} v_b   (15)

P_{peak} = I_o v_b = P_{bp} \, \varepsilon   (16)
Due to the presence of the SC, ε is always larger than unity. This indicates that the composite energy storage provides extra power compared to the battery-only system; ε thus describes the level of power enhancement introduced by the composite storage system. As I_o and v_b are given constant values, when ε increases, the required battery output power decreases. Therefore, for a fixed power rating required from the composite energy storage system, the output power from the battery can be reduced, which relieves the battery stress in the system. For the specific case D = 0,

\varepsilon = \frac{R_b + R_s}{R_s} = 1 + \frac{R_b}{R_s}   (17)

where R_s and R_b are the internal resistances of the SC and the battery respectively. ε increases as R_s decreases: the smaller the internal resistance of the SC, the less power is required from the battery, and the overall system lifetime is therefore prolonged.
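The relief factor of (14) is easy to evaluate numerically; the following sketch (ours) reproduces the D = 0 limiting value 1 + R_b/R_s ≈ 4.05 for the component values given in the next section:

    import math

    def relief_factor(Rb, Rs, C, T, D):
        """Battery relief factor of Eq. (14); D = 0 reduces to Eq. (17)."""
        tau = (Rb + Rs) * C                    # composite time constant (s)
        num = 1 - math.exp(-(1 - D) * T / tau)
        den = 1 - math.exp(-T / tau)
        x = 1 - (Rb / (Rb + Rs)) * math.exp(-D * T / tau) * num / den
        return 1 / x

    # D = 0 limiting case reproduces Eq. (17): 1 + Rb/Rs = 4.0476... ~ 4.05
    print(relief_factor(Rb=0.0064, Rs=0.0021, C=500, T=3, D=0))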
NUMERICAL SIMULATION AND RESULTS
The simulation model of the PV system implemented in Matlab/Simulink is illustrated in Fig. 2. For the composite energy storage, the SC and the battery are both connected to a bidirectional Buck/Boost converter. To simulate the limitations of the battery itself (for example, that the battery cannot respond to impact power immediately or release enough energy to meet the load requirements), a limiter is added to the battery model to restrict its input/output power. Due to the limiter, all impulse power demands are satisfied by the SC. In the system, the rated power of the PV arrays is 500 W. The internal resistance of the battery is R_b = 0.0064 Ω and its rated capacity is 50 Ah. The SC equivalent DC series resistance is R_s = 0.0021 Ω and its rated capacitance is 500 F. The period and duty ratio of the required pulse load power are 3 s and 30%, respectively. According to Equation (14), the battery relief factor ε is calculated as 4.05; this means that 4.05 times as much power can be supplied via the composite storage as by the battery alone.

Figure 2. The PV System with Composite Energy Storage via Matlab/Simulink

In order to verify the performance of the composite energy storage, two cases are presented.

A. The PV System with Battery Alone
From Fig. 3, it is apparent that the PV array output power varies randomly. During the on-state period, the battery discharges gradually, and the sum of the power from the PV arrays and the battery fails to provide enough energy to satisfy the load power requirement. During the off-state period, the load power is zero and the PV arrays charge the battery.

Figure 3. The PV System with Battery Alone

B. The PV System with Composite Energy Storage
From Fig. 4, it can be observed that the sum of the power from the SC and the battery is sufficient to match the load requirement. The output power shows that the PV system stabilizes the energy provision, smooths the peak power and increases the elimination rate of surge load power. When the load power is turned on suddenly, the SC, with its high power density, jumps rapidly to a high value and delivers power to the load; the power curve then descends gradually. When the load power changes to zero, the SC power flow quickly reverses polarity and the SC is charged by power from the battery during the off-state period. The battery reaction is different: during the on-state period, the battery starts to
increase its output power gradually from a low value. In the off-state period, it falls to a value equal to the SC output power in the opposite direction; the battery keeps releasing energy to charge the SC. As a result, the adverse influence of the variable required power on the battery is greatly reduced.
Figure 4. The PV System with Composite Energy Storage Performance
CONCLUSION
A 500 W PV system with composite energy storage units, which combines the SC for fast dynamic power regulation and the battery for long-term smooth power provision, is modeled in Matlab/Simulink in this paper. A battery relief factor has been derived from a thorough theoretical analysis to describe the stress level of the composite energy storage under a given pulse load demand. Moreover, the factor also indicates the peak output power enhancement level, i.e., the ability to face impact power demand. The system model is evaluated, and the results show that the PV system with composite energy storage can stabilize the energy provision, smooth the peak power, increase the elimination rate of surge load power, and prolong the battery lifetime.
REFERENCES
[1] Chen, Haisheng, Thang Ngoc Cong, Wei Yang, Chunqing Tan, Yongliang Li, and Yulong Ding. "Progress in electrical energy storage system: A critical review." Progress in Natural Science 19, no. 3 (2009): 291-312.
[2] Xie, Le, and Marija D. Ilic. "Model predictive dispatch in electric energy systems with intermittent resources." In Systems, Man and Cybernetics, 2008. SMC 2008. IEEE International Conference on, pp. 42-47. IEEE, 2008.
[3] Furushima, Kaoru, Yutaka Nawata, and Michio Sadatomi. "Prediction of photovoltaic (PV) power output considering weather effects." In Proceedings of the SOLAR, pp. 7-13. 2006.
[4] Hatziargyriou, Nikos, Hiroshi Asano, Reza Iravani, and Chris Marnay. "Microgrids." Power and Energy Magazine, IEEE 5, no. 4 (2007): 78-94.
[5] Kroposki, Benjamin, Robert Lasseter, Toshifumi Ise, Satoshi Morozumi, S. Papatlianassiou, and Nikos Hatziargyriou. "Making microgrids work." Power and Energy Magazine, IEEE 6, no. 3 (2008): 40-53.
[6] Kim, Jong-Yul, Jin-Hong Jeon, Seul-Ki Kim, Changhee Cho, June Ho Park, Hak-Man Kim, and Kee-Young Nam. "Cooperative control strategy of energy storage system and microsources for stabilizing the microgrid during islanded operation." Power Electronics, IEEE Transactions on 25, no. 12 (2010): 3037-3048.
[7] Zamora, Ramon, and Anurag K. Srivastava. "Controls for microgrids with storage: Review, challenges, and research needs." Renewable and Sustainable Energy Reviews 14, no. 7 (2010): 2009-2018.
[8] Divya, K. C., and Jacob Østergaard. "Battery energy storage technology for power systems—An overview." Electric Power Systems Research 79, no. 4 (2009): 511-520.
[9] Zhao, Bo, Xuesong Zhang, Jian Chen, Caisheng Wang, and Li Guo. "Operation optimization of standalone microgrids considering lifetime characteristics of battery energy storage system." Sustainable Energy, IEEE Transactions on 4, no. 4 (2013): 934-943.
[10] Etxeberria, Aitor, Ionel Vechiu, Haritza Camblong, and Jean-Michel Vinassa. "Hybrid energy storage systems for renewable energy sources integration in microgrids: A review." In IPEC, 2010 Conference Proceedings, pp. 532-537. IEEE, 2010.
[11] Zhou, Lin, Yong Huang, Ke Guo, and Yu Feng. "A survey of energy storage technology for micro grid." Power System Protection and Control 39, no. 7 (2011): 147-152.
[12] Glavin, M. E., and W. G. Hurley. "Optimisation of a photovoltaic battery ultracapacitor hybrid energy storage system." Solar Energy 86, no. 10 (2012): 3009-3020.
[13] F. Belhachemi, S. Rael and B. Davat. "A physical based model of power electric double-layer supercapacitors." Proc. IEEE Ind. Appl. Conf., pp. 2069-3076, 2000.
[14] S. Mallika and R. S. Kuma. "Review on ultracapacitor-battery interface for energy management system." Int. J. Eng. Technol., vol. 3, no. 1, pp. 37-43, 2011.
[15] Krishna, C. M. "Managing battery and supercapacitor resources for real-time sporadic workloads." IEEE Embedded Systems Letters 3, no. 1 (2011): 32-36.
[16] Glavin, M. E., and W. G. Hurley. "Optimisation of a photovoltaic battery ultracapacitor hybrid energy storage system." Solar Energy 86, no. 10 (2012): 3009-3020.
[17] Zubieta, Luis, and Richard Bonert. "Characterization of double-layer capacitors for power electronics applications." Industry Applications, IEEE Transactions on 36, no. 1 (2000): 199-205.
[18] Dougal, Roger A., Shengyi Liu, and Ralph E. White. "Power and life extension of battery-ultracapacitor hybrids." Components and Packaging Technologies, IEEE Transactions on 25, no. 1 (2002): 120-131.
[19] Kobayashi, Hirokazu, K. Takigawa, E. Hashimoto, A. Kitamura, and H. Matsuda. "Method for preventing islanding phenomenon on utility grid with a number of small scale PV systems." In Photovoltaic Specialists Conference, 1991, Conference Record of the Twenty Second IEEE, pp. 695-700. IEEE, 1991.
On Energy-Efficient Time Synchronization Based on Source Clock Frequency Recovery in Wireless Sensor Networks
Kyeong Soo Kim, Sanghyuk Lee, and Eng Gee Lim
Department of Electrical and Electronic Engineering, Xi'an Jiaotong-Liverpool University, Suzhou, P. R. China
{Kyeongsoo.Kim, Sanghyuk.Lee, Enggee.Lim}@xjtlu.edu.cn
Abstract—In this paper we study energy-efficient time synchronization schemes with a focus on asymmetric wireless sensor networks, where a head node, which is connected to both wired and wireless networks, is equipped with a powerful processor and supplied with power from an outlet, while sensor nodes, which are connected only through wireless channels, are limited in processing capability and battery-powered. It is this asymmetry that we focus our study on; unlike existing schemes saving the power of all sensor nodes in the network (including the head node), we concentrate on battery-powered sensor nodes in minimizing energy consumption for synchronization. Specifically, we discuss a time synchronization scheme based on source clock frequency recovery, where we minimize the number of message transmissions from sensor nodes to the head node, and its extension to network-wide, multi-hop synchronization through gateway nodes.
Keywords-Time synchronization; source clock frequency recovery; packet delay; wireless sensor networks
INTRODUCTION
Real-time wireless data acquisition networks, e.g., large-scale wireless sensor networks (WSNs) deployed over a vast geographical area, have been the focus of extensive studies due to their versatility and broad range of applications. Time synchronization is one of the critical components in WSN operation, as it provides a common time frame among different nodes. It supports functions such as fusing data from different sensor nodes, time-based channel sharing and media access control (MAC) protocols, and coordinated sleep wake-up node scheduling mechanisms [1]. As a sensor node is a low-complexity, battery-powered device, energy efficiency is the key in designing schemes and protocols for WSNs.
In a typical WSN, a master/head node is equipped with a powerful processor, connected to both wired and wireless networks, and supplied with power from an outlet, because it serves as a gateway between the WSN and a backbone network and as a center for the fusion of sensory data from the sensor nodes; the sensor nodes are limited in processing and electrical power because they are connected only through wireless channels and are battery-powered. It is this asymmetry that we focus our study on; unlike existing schemes which save the power of all WSN nodes including the head (e.g., [2] and [3]), we concentrate on the battery-powered sensor nodes, which are many in number, in minimizing energy consumption for
synchronization. Specifically, in this paper we discuss a
time synchronization scheme based on the source clock
frequency recovery (SCFR) [4], where we minimize the
number of message transmissions at sensor nodes because
the energy for packet transmission is typically higher than
that for packet reception [5]. We also discuss its extension
to network-wide, multi-hop synchronization through
gateway nodes.
SCFR-BASED WSN TIME SYNCHRONIZATION
The major idea is to allow independent, unsynchronized slave clocks at the sensor nodes that nevertheless run at the same frequency as the master clock at the head node, through the asynchronous SCFR schemes described in [4], which need only the reception of messages with timestamps. The two-way message exchange, which is unavoidable for the recovery of clock offset in the presence of propagation delay [6], is carried out using normal data packets to reduce the number of transmissions at the sensor nodes. In this way, the head node can estimate the time offsets of the sensor nodes and correctly interpret the occurrence times of data measurements with respect to its own master clock.

Fig. 1 illustrates this idea in comparison to the ordinary schemes shown in Fig. 1(a) that are based on two-way message exchange: first, the proposed scheme shown in
Fig. 1 (b) does not have periodic, dedicated two-way
message exchange sessions with special control messages
like “Request” and “Response”; instead, the two-way
message exchange is carried out using an ordinary
message from a sensor node and the most recent message
from the head, both of which have embedded timestamps.
Secondly, the direction of two-way message exchange in
the proposed scheme is reversed, i.e., it is the master that
requests, not the slave, unlike the existing schemes; as a
result, the master knows the current status of the slave
clock, but the slave does not. So the information of slave
clocks (i.e., time offsets with respect to the master clock)
is centrally managed at the head node.
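As a rough illustration (our sketch, not the paper's algorithm), the frequency ratio between a slave clock and the master clock can be recovered by a least-squares fit of the receive timestamps against the embedded master timestamps; the data below are synthetic:

    import numpy as np

    def estimate_skew(master_ts, slave_rx_ts):
        """Least-squares estimate of the slave/master clock frequency ratio
        from timestamps in received messages; only receptions are needed,
        in the spirit of asynchronous SCFR."""
        # Fit slave_rx = alpha * master_ts + beta; alpha is the ratio.
        alpha, beta = np.polyfit(master_ts, slave_rx_ts, 1)
        return alpha, beta

    # Synthetic data: slave runs 50 ppm fast with 0.2 s offset plus jitter.
    master = np.arange(0.0, 100.0, 1.0)
    slave = 1.00005 * master + 0.2 + np.random.normal(0, 1e-6, master.size)
    ratio, offset = estimate_skew(master, slave)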
Note that, for operations like coordinated sleep wake-up node scheduling, the head node first adjusts the time
for future operation (with respect to its own master clock)
based on the time offset of a recipient sensor node before
sending it to that node in the proposed scheme. In this
way, even though sensor nodes in the network have
clocks with different time offsets, their operations can be coordinated based on the common master clock in the head node.
Figure 1. Reducing message transmissions at sensor nodes: (a) a scheme based on two-way message exchange as in the time-sync protocol for sensor networks (TPSN) [7] and (b) the proposed scheme.
Fig. 2 shows how the proposed scheme can be extended to a hierarchical structure for network-wide, multi-hop synchronization through gateway nodes, which act as both head nodes (for the nodes below) and normal sensor nodes (for the nodes above). For example, consider the message transmission from the sensor node S to the head node through the two gateway nodes G1 and G2 as shown in Fig. 2. Because G2 acts as a head node for the sensor node S, it translates the value of the time stamp based on the information on the time offset of S. Then G2 relays the message from S to G1 with the translated time stamp value (with respect to its own clock). From G1's point of view, G2 is just one of the sensor nodes it manages. Again, based on the information on the time offset of G2, G1 translates the time stamp value with respect to its own clock and relays the message to the head node. Finally, the head node receives the message from S, as relayed by G1, and translates the time stamp value based on the information on the time offset of G1 that it manages. In this way, the head node can obtain the event and related data, together with its time of occurrence as reported by S, with respect to its own master clock.

Figure 2. Extension of the proposed time synchronization scheme to a hierarchical structure for network-wide, multi-hop synchronization through gateway nodes.
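For illustration, the hop-by-hop timestamp translation can be sketched as follows (our example with hypothetical offset values, not the paper's code):

    # Each node manages the time offset of the node directly below it and
    # translates a reported timestamp into its own clock before relaying.
    offsets = {"S": 0.013, "G2": -0.004, "G1": 0.021}   # hypothetical (s)

    def translate(ts, node_offset):
        """Translate a timestamp from a lower node's clock into this clock."""
        return ts - node_offset

    t_S = 120.500                              # event time on S's clock
    t_G2 = translate(t_S, offsets["S"])        # S  -> G2
    t_G1 = translate(t_G2, offsets["G2"])      # G2 -> G1
    t_head = translate(t_G1, offsets["G1"])    # G1 -> head (master clock)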
SUMMARY
In this paper we have proposed an energy-efficient time synchronization scheme for asymmetric WSNs, which is based on asynchronous SCFR and master-initiated two-way message exchange to minimize the number of message transmissions at sensor nodes. We have also shown how the proposed scheme can be extended to a hierarchical structure for network-wide, multi-hop synchronization through gateway nodes.
ACKNOWLEDGMENT
This work was supported by the Centre for Smart Grid
and Information Convergence (CeSGIC) at Xi’an
Jiaotong-Liverpool University.
REFERENCES
[1] Yik-Chung Wu et al., "Clock synchronization of wireless sensor networks," IEEE Signal Process. Mag., vol. 28, no. 1, pp. 124-138, Jan. 2011.
[2] M. Akhlaq and T. R. Sheltami, "RSTP: An accurate and energy-efficient protocol for clock synchronization in WSNs," IEEE Trans. Instrum. Meas., vol. 62, no. 3, pp. 578-589, Mar. 2013.
[3] D. Macii et al., "Power consumption reduction in wireless sensor networks through optimal synchronization," Proc. I2MTC 2009, May 2009.
[4] K. S. Kim, "Asynchronous source clock frequency recovery through aperiodic packet streams," IEEE Commun. Lett., vol. 17, no. 7, pp. 1455-1458, Jul. 2013.
[5] A. Mainwaring et al., "Wireless sensor networks for habitat monitoring," Proc. WSNA'02, Sep. 2002.
[6] K. S. Kim, "Comments on "IEEE 1588 clock synchronization using dual slave clocks in a slave"," IEEE Commun. Lett., vol. 18, no. 6, pp. 981-982, Jun. 2014.
[7] S. Ganeriwal et al., "Timing-sync protocol for sensor networks," Proc. SenSys '03, pp. 138-140, Nov. 2003.
Improved Multi-Axes Solar Tracking System and Analysis of the Power Generated and Power Consumed by the System
Arun Seeralan Balakrishnan, Dr Sathish Kumar Selvaperumal, Ravi Lakshmanan, Tan Chin Sern
1 Lecturer, School of Engineering, Asia Pacific University of Innovation and Technology, Kuala Lumpur, Malaysia, [email protected]
2,3 Senior Lecturer, School of Engineering, Asia Pacific University of Innovation and Technology, Kuala Lumpur, Malaysia, [email protected]
4 Ecosensa Technologies Sdn. Bhd., Kuala Lumpur, Malaysia, [email protected]
Abstract—In this paper, an improved design of a sustainable multi-axes solar tracker, together with an analysis of its power consumption, is proposed. To provide an efficient solar distributed generation system and to analyze the power consumption, a scaled-down multi-axes solar tracker was designed, built and tested. A multi-axes tracking mechanism was incorporated into the proposed solar tracker to make the solar tracking system versatile in open-loop tracking operation. The Andes Solar Home System was then integrated into the system design to power the system operation from the solar energy it harnesses. System testing results for power generation reveal that the power efficiency gained from the dual-axes open-loop tracking approach is 23.61%. In addition, various system parameters were studied for the open-loop tracking scheme based on different experimental setups. System testing results for power consumption reveal that a low-power microcontroller, a lightweight solar panel, and a low environment temperature can reduce the power consumption of the solar tracking operation.
Keywords-solar tracking, open loop, power consumption
INTRODUCTION
Energy is defined as the ability to do work, and it exists in various forms, all of which serve the same purpose. The most common and important type of energy that we use in our everyday lives is electrical power. It can be generated from burning fossil fuels, from nuclear reactors and from renewable sources such as wind, water, sunlight and geothermal heat. However, non-renewable fossil fuels (coal, natural gas and crude oil) currently supply most of the electrical power needs of the world. The limited resources available on Earth and the environmental pollution caused by fossil fuels make renewable energy rapidly gain importance as an energy resource.
It has been observed that solar photovoltaics (PV) has shown steady growth in Malaysia: as of September 30, 2013, solar PV showed the highest percentage of approved applications, 39.72 per cent or 192.26 MW of installed capacity, compared to biomass with 152.49 MW or 31.5 per cent, while small hydro and biogas made up the balance of 23.77 per cent (115.05 MW) and 5.01 per cent (24.23 MW) respectively (National News Agency Malaysia, 2013). Thus, solar technology is the fastest growing among the Renewable Energy (RE) technologies initiated today, mainly because the primary source (the sun) is unlimited and available all year round in Malaysia.
In an electric power generation system, the solar panel uses collectors in the form of optical reflectors or photovoltaic (PV) modules to collect the solar energy. However, the maximum attainable solar energy cannot be achieved when the solar panel is fixed at a certain angle and position, which limits the area of exposure to direct solar radiation. On the other hand, more energy can be extracted in a day if the solar panel or solar collector is installed on a tracker with an actuator that follows the sun like a sunflower.
PROPOSED METHOD
The methodology of designing and building the proposed multi-axes solar tracker is described in this section. In the proposed method, a 12V DC solar PV panel with a mass of 5.1 kg and dimensions of 666 mm × 608 mm × 25 mm is chosen.
System Architecture
As shown in Fig. 1, the blocks labelled "microcontroller 2" and "CPU" are included in the proposed system architecture to serve the purpose of data acquisition and analysis for the power generation of the solar panel and the power dissipation in the system. The multi-axes solar tracker for the proposed system is completed by the remaining blocks.
According to the system architecture, in the open loop
system, Real-Time Clock (RTC) for data processing is
used. The RTC is implemented for generating accurate
information such as time and date for the microcontroller
108
International Conference on Information, System and Convergence Applications
June 24-27, 2015 in Kuala Lumpur, Malaysia
to compute astronomical prediction on the sun trajectory.
In other words, the RTC is required for the open-loop
tracking system.
Figure 1. Flow chart of the system architecture for the proposed solar tracker

On the other hand, the designated angle for the monthly fixed altitude angle is as shown in Table II.
TABLE II. DESIGNATED ANGLE FOR THE MONTHLY FIXED ALTITUDE ANGLE

Month | Sun path altitude angle on the 1st of the month | Sun path altitude angle on the 15th of the month | Designated altitude angle
January | 113° | 111.0° | 111.50°
February | 107° | 103.0° | 105.00°
March | 97° | 92.0° | 94.50°
April | 86° | 81.0° | 83.50°
May | 75° | 71.0° | 73.00°
June | 69° | 67.0° | 68.00°
July | 67° | 68.0° | 67.50°
August | 72° | 76.0° | 74.00°
September | 82° | 87.0° | 84.50°
October | 92° | 97.0° | 94.50°
November | 104° | 109.0° | 106.50°
December | 112° | 113.5° | 112.75°
Last but not least, microcontroller 2 and the CPU are included to collect the data acquired from the current sensors for data analysis. Technically, they are not part of the system design, as they are implemented solely for the data acquisition and Graphic User Interface (GUI) monitoring system for research purposes.
Final Design
However, the proposed solar panel was changed to a
lightweight custom-made solar panel due to the overload
issue faced by the motor.
The spur gear was initially implemented in the system design of the proposed solar tracker in order to increase the output torque of the altitude motor. However, an altitude locking feature was included at the same time as part of the tracking system for the altitude angle, so that the altitude motor can be dispensed with. As shown in Fig. 2, the numbers 1 to 12 written on the spur gear represent the months of the year, indicating the altitude angle at which the gear has to be fixed with a 100 mm long and 4 mm thick screw, as shown in Fig. 3, so that the tracking module is in line with the sun trajectory.

Figure 2. Custom-made spur gear with altitude locking feature
Figure 3. 100 mm × 5 mm screw and nut for the altitude locking feature
The results obtained are based on an analysis of the sun trajectory using the sun trajectory tool from SunEarthTools.com. For example, the sun-path altitude on 1 August 2014 and 15 August 2014 lies between 72° and 76°; applying the mean formula to the acquired data thus results in 74°. The same method was used to obtain the results for all months of the year, which are tabulated in Table II.
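The designated angles are simply the mean of the two sun-path readings; for example (our check of two Table II rows):

    # Designated altitude angle = mean of the readings on the 1st and 15th.
    sun_path = {"July": (67.0, 68.0), "August": (72.0, 76.0)}
    for month, (on_1st, on_15th) in sun_path.items():
        print(month, (on_1st + on_15th) / 2)   # July -> 67.5, August -> 74.0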
EXPERIMENTAL RESULTS
TABLE I. LABELLED COMPONENTS OF THE SOLAR TRACKER PROTOTYPE

Abbreviation | Description
A | Azimuth servo motor
B | Custom-made 12V solar PV module
C | Tracking sensor module
D | Servo motor driver
E | Spur gear and pinion
F | Altitude servo motor
For every start-up of system testing, the overall experimental setup is implemented as shown in Figure 3. The procedure for every fresh start of a new system test for solar tracking analysis is that all hardware in the solar tracking system is activated when the Andes Solar Home System is switched on, thus allowing the developed solar tracker to start its sun-tracking operation depending on the type of tracking approach selected and the microcontroller assigned through the designated buttons.
Table III. EXPERIMENTAL SETUP FOR THE SYSTEM TESTING WITH OPEN LOOP

Experiment Setup | Microcontroller | Tracking Approach | Solar Panel | Location
1 | 3.3V | Open-loop | 5V | Outdoor
2 | 5V | Open-loop | 17.3V | Outdoor
3 | 3.3V | Open-loop | 17.3V | Outdoor
4 | 3.3V | Open-loop | - | Outdoor
5 | 3.3V | Open-loop | - | Indoor
The data collection for Experimental Setup 2 and Experimental Setup 3 includes the total solar power generation and the total power consumption of the motors and the microcontroller, which are compiled in table form and followed by waveform graphs of the power consumption of the motors and the microcontroller. The necessary data collection for the data analysis is presented below:
Experimental Setup 1
TABLE IV. DATA COLLECTION FOR EXPERIMENTAL SETUP 1
Open-loop tracking approach, 5V solar PV module, 3.3V microcontroller, Outdoor

Local Time (24-hour) | Instant Power Generation, Fixed (W) | Instant Power Generation, Tracking (W) | Total Power Generation, Fixed (Wh) | Total Power Generation, Tracking (Wh)
9:30 | 0.1607 | 0.2185 | 0 | 0
10:30 | 0.5536 | 0.9032 | 1412.46 | 2095.40
11:30 | 1.0052 | 1.0780 | 4126.56 | 6012.77
12:30 | 0.1602 | 0.3205 | 7946.03 | 9311.01
13:30 | 0.8303 | 0.9324 | 11522.52 | 13259.99
14:30 | 1.1654 | 1.1654 | 15098.28 | 17091.41
15:30 | 0.5681 | 0.7284 | 18125.57 | 20343.63
16:30 | 0.7721 | 1.1217 | 19743.88 | 22859.45
17:30 | 0.5972 | 1.3839 | 21190.58 | 25667.06
18:30 | 0.1165 | 0.2622 | 22117.37 | 28953.21
Data Analysis
As shown in Table IV, Experimental Setup 1 and Experimental Setup 2 were carried out to determine the overall power efficiency of the open-loop tracking approach using a different solar PV module (5V). A different solar PV module was utilised because there were two units of the 5V solar PV module, compared to the single unit of the original 17.3V solar PV module, and the overall power efficiency gain can only be determined when there is a similar solar PV module as a reference for comparison. The acquired overall power efficiency is then used to perform a reverse calculation to determine the total power generated from a fixed solar PV panel without the solar tracking system. Thus, the power efficiency of the open-loop tracking scheme is 23.61%.
In addition, Experimental Setup 2 and Experimental Setup 3 were initiated to compare the difference in power consumption between two microcontrollers with different voltage ratings.
Last but not least, Experimental Setup 5 was carried out to investigate the difference in motor power consumption at different temperatures, using the data recorded in Experimental Setup 4 as a reference: the outdoor setup represents high temperature, as the hardware is exposed to direct sunlight, and the indoor setup represents low temperature without exposure to sunlight. All experimental setups were conducted over a total of 9 hours, from 9.00 a.m. to 6.00 p.m. or 9.30 a.m. to 6.30 p.m., and the measured power was recorded in either Wh or kWh.
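The tables that follow report both instantaneous power readings and running energy totals. The paper does not state how the totals were accumulated; the sketch below illustrates one common approach, trapezoidal integration of equally spaced power samples, with made-up sample values:

def cumulative_energy_wh(power_w, interval_h=1.0):
    """Running energy totals (Wh) from equally spaced power samples (W)."""
    totals, energy = [0.0], 0.0
    for p_prev, p_next in zip(power_w, power_w[1:]):
        energy += 0.5 * (p_prev + p_next) * interval_h  # trapezoidal rule
        totals.append(energy)
    return totals

# Illustrative samples only (not values from the paper's tables):
print(cumulative_energy_wh([0.0, 0.55, 1.01, 0.16, 0.83, 1.17]))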
Figure 5. Instant Power Generation Waveform Graph in Fixed PV
module System.
Figure 6. Instant Power Generation Waveform Graph in Open-loop
Tracking System
Experimental Setup 2
Data Collection
The data collection is based on the experimental setups shown in Table III. The layout of the data collection for Experimental Setup 1 and Experimental Setup 2 incorporates the total solar power generation for the fixed solar PV implementation and for the implementation with the solar tracking system, followed by the respective waveform graphs.
TABLE V. DATA COLLECTION FOR EXPERIMENTAL SETUP 2
Open-loop tracking approach, 17.3V solar PV module, 5V microcontroller, Outdoor

Local Time (24-hour) | Total Power Generation Tracking (Wh) | Total Power Consumption Motor (Wh) | Total Power Consumption Microcontroller (Wh)
9:00 | 0 | 0 | 0
10:00 | 11244.86 | 2852.28 | 915.11
11:00 | 24876.98 | 4437.97 | 1791.55
12:00 | 30012.78 | 7812.46 | 2718.31
13:00 | 39130.23 | 9187.43 | 3576.61
14:00 | 43242.13 | 11681.19 | 4608.65
15:00 | 51984.24 | 13826.28 | 5448.54
16:00 | 57697.32 | 14940.62 | 6286.97
17:00 | 65001.37 | 15924.01 | 7115.20
18:00 | 68581.28 | 16995.76 | 7974.56

Figure 9. Instant Power Consumption Waveform Graph of Motors in Open-loop Tracking System

Figure 10. Instant Power Consumption Waveform Graph of 3.3V microcontrollers in Open-loop Tracking System
TABLE VI. DATA COLLECTION FOR EXPERIMENTAL SETUP 3
Open-loop tracking approach, 17.3V solar PV module, 3.3V microcontroller, Outdoor

Local Time (24-hour) | Total Power Generation Tracking (Wh) | Total Power Consumption Motor (Wh) | Total Power Consumption Microcontroller (Wh)
9:00 | 0 | 0 | 0
10:00 | 10363.03 | 2154.31 | 637.54
11:00 | 20204.60 | 3283.27 | 1219.57
12:00 | 38058.10 | 4601.02 | 1807.98
13:00 | 55467.58 | 6336.10 | 2402.34
14:00 | 74111.95 | 9498.91 | 3052.29
15:00 | 90291.58 | 11454.56 | 3754.93
16:00 | 98947.75 | 12763.11 | 4390.03
17:00 | 112352.88 | 13852.97 | 4978.70
18:00 | 122228.36 | 15002.28 | 5601.21
The extra power produced with the open-loop solar tracking system can be determined using the power efficiency of the open-loop tracking approach, 23.61%. Thus, the extra power produced in Experimental Setup 2 is 68.581 kWh × 0.2361 = 16.192 kWh.
Waveform Graph for Experimental Setup 2

Experimental Setup 4
This experimental setup was carried out outdoors without the solar panel, with the aim of determining the effect of load on the power consumption of the system.
Figure 7. Instant Power Consumption Waveform Graph of Motors in
Open-loop Tracking System
TABLE VII. DATA COLLECTION FOR EXPERIMENTAL SETUP 4
Open-loop tracking approach, 3.3V microcontroller, Outdoor (no solar panel)

Local Time (24-hour) | Total Power Consumption Motor (Wh) | Total Power Consumption Microcontroller (Wh)
9:00 | 0 | 0
10:00 | 1271.35 | 313.87
11:00 | 2264.28 | 653.09
12:00 | 3317.27 | 1059.27
13:00 | 4486.28 | 1437.91
14:00 | 6294.78 | 1766.20
15:00 | 8311.47 | 2099.92
16:00 | 10193.39 | 2453.64
17:00 | 11142.76 | 2769.44
18:00 | 12099.14 | 3084.28
Figure 8. Instant Power Consumption Waveform Graph of 5V
microcontrollers in Open-loop Tracking System
Experimental Setup 3
The extra power produced with the open-loop solar tracking system can be determined using the power efficiency of the open-loop tracking approach, 23.61%. Thus, the extra power produced in Experimental Setup 3 is 122.228 kWh × 0.2361 = 28.858 kWh.

Waveform Graph for Experimental Setup 3
Waveform Graph for Experimental Setup 4
Figure 14. Instant Power Consumption Waveform Graph of 3.3V
microcontrollers in Open-loop Tracking System carried out Indoor
Data Analysis - Tracking Efficiency
Figure 11. Instant Power Consumption Waveform Graph of Motors in
Open-loop Tracking System carried out Outdoor
Figure 12. Instant Power Consumption Waveform Graph of 3.3V microcontrollers in Open-loop Tracking System carried out Outdoor
Experimental Setup 5
This experimental setup was carried out indoors without the solar panel, with the aim of determining the effect of temperature on the power consumption of the system, using the results of Experimental Setup 4 as a reference for comparison.
TABLE IX. SUMMARY OF RESULTS OBTAINED FROM DATA COLLECTION

ES | Total Power Produced Fixed (kWh) | Total Power Produced Open (kWh) | Total Power Consumption Motor (kWh) | Total Power Consumption Microcontroller (kWh) | Total Extra Power Produced (kWh)
1 | 22.117 | 28.953 | - | - | -
2 | - | 68.581 | 16.996 | 7.975 | 16.192
3 | - | - | 9.011 | 16.131 | 20.947
4 | 20.727 | - | - | - | -
5 | - | 122.228 | 15.002 | 5.601 | 28.858
6 | - | - | 9.241 | 8.171 | 18.132
7 | - | - | 7.675 | 7.643 | -
8 | - | - | 12.099 | 3.084 | -
9 | - | - | 9.384 | 3.013 | -
TABLE VIII. DATA COLLECTION FOR EXPERIMENTAL SETUP 5
Open-loop tracking approach, 3.3V microcontroller, Indoor (no solar panel)

Local Time (24-hour) | Total Power Consumption Motor (Wh) | Total Power Consumption Microcontroller (Wh)
9:00 | 0 | 0
10:00 | 674.77 | 267.72
11:00 | 1427.41 | 592.87
12:00 | 2263.96 | 928.68
13:00 | 4977.34 | 1205.50
14:00 | 5925.45 | 1552.06
15:00 | 6785.84 | 1918.29
16:00 | 7644.96 | 2282.94
17:00 | 8524.74 | 2659.75
18:00 | 9383.53 | 3013.30
Waveform Graph for Experimental Setup 5
TABLE X. SUMMARY OF RESULTS OBTAINED FROM EXPERIMENTAL SETUP 1 & 2

Experiment Setup (ES) | Total Power Produced Fixed (kWh) | Total Power Produced Open (kWh) | Power Efficiency Gained (%) | Total Extra Power Produced (kWh)
1 | 22.117 | 28.953 | 23.61 | -
2 | - | 68.581 | - | 16.192
Experimental Setup 1 and Experimental Setup 2 were conducted to determine the tracking efficiency, or power efficiency, of the open-loop system. As shown in Table X, the power efficiency gained in the open-loop tracking system is 23.61%.
However, the global power efficiency will vary depending on the power consumption in the system. The tracking efficiency is therefore determined with the purpose of extracting the extra power produced in Experimental Setup 2 and Experimental Setup 3 for further analysis of the power consumption in the system.
Power consumption & Total Net Power Gained.
Figure 13. Instant Power Consumption Waveform Graph of Motors in
Open-loop Tracking System carried out Indoor.
TABLE XI. SUMMARY OF RESULTS OBTAINED FROM EXPERIMENTAL SETUP 2 TO 3

ES | Microcontroller Operating Voltage (V) | Total Power Produced Open (kWh) | Total Power Consumption Motor (kWh) | Total Power Consumption Microcontroller (kWh) | Total Extra Power Produced (kWh)
2 | 5 | 68.581 | 16.996 | 7.975 | 16.192
3 | 3.3 | 122.228 | 15.002 | 5.601 | 28.858

REFERENCES
[1] Ahmad, A.N. et al., 2010. Efficiency Optimization of a 150W PV
System Using Dual Axis Tracking and MPPT. In Energy Conference
and Exhibition (EnergyCon), 2010 IEEE International., 2010.
[2] Cheng, S. et al., 2010. An Improved Design of Photo-Voltaic Solar
Tracking System Based on FPGA. In Artificial Intelligence and
Computational Intelligence (AICI), 2010 International Conference.,
2010.
[3] Chong, K.-K. & Wong, C.-W., 2010. General Formula for On-Axis
Sun-Tracking System. In Photovoltaic Specialists Conference (PVSC),
2010 35th IEEE., 2010.
[4] Engin, M. & Engin, D., 2012. Optimization mechatronic sun
tracking system controller's for improving performance. In
Mechatronics and Automation (ICMA), 2013 IEEE International
Conference., 2012.
[5] Huynh, D.C., Nguyen, T.M., Dunnigan, M.W. & Mueller, M.A.,
2013. Comparison between Open- and Closed-Loop Trackers of a Solar
Photovoltaic System. In Conference on Clean Energy and Technology
(CEAT)., 2013. IEEE.
[6] Kates, R.W., Parris, T.M. & Leiserowitz, A.A., 2005. What is Sustainable Development? Environment: Science and Policy for Sustainable Development, pp.8-21.
Lee, C.Y., Chou, P.C., Chiang, C.M. & Lin, C.F., 2009. Sun Tracking Systems: A Review. Sensors, pp.3875-90.
[7] Littig, B. & Griepler, E., 2005. Social sustainability : a catchword
between political pragmatism and social theory. International Journal
of Sustainable Development, 8, pp.65-79.
[8] Mazurkiewicz, J. & Electric, B., 2005. Advantages of servos. In
Electrical Insulation Conference and Electrical Manufacturing Expo,
2005. Proceedings. Indianapolis, 2005.
Minor, M.A. & Garcia, P.A., 2010. High–Precision Solar Tracking
System. In Proceedings of the World Congress on Engineering 2010
Vol II., 2010.
[9] Oo, L.L. & Hlaing, N.K., 2010. Microcontroller-Based Two-Axis
Solar Tracking System. In Computer Research and Development, 2010
Second International Conference., 2010.
Ponniran, A., Hashim, A. & Munir, H.A., 2011. A Design of Single
Axis Sun Tracking System. In Power Engineering and Optimization
Conference (PEOCO), 2011 5th International., 2011.
[10] Rahman, R. & Khan, M.F., 2010. Performance Enhancement of
PV Solar System by Mirror Reflection. In Electrical and Computer
Engineering (ICECE), 2010 International Conference., 2010.
[11] Zhao, Q., Wang, P. & Goel, L., 2010. Optimal PV Panel Tilt Angle
Based on Solar Radiation Prediction. In IEEE., 2010.
[12] Zhan, T.-S., Lin, W.-M., Tsai, M.-H. & Wang, G.-S., 2013.
Design and Implementation of the Dual-axis Solar Tracking System.
In Annual Computer Software and Application Conference., 2013.
IEEE.
To ensure that the tracking system actually produced more power than it used, data were collected on the power consumption of the individual hardware components of the system, namely the motor and the microcontroller. Experimental Setup 4 and Experimental Setup 5 were conducted for that purpose without the solar panel, measuring the power consumed in open-loop operation outdoors and indoors.
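Using the Table XI figures, the net benefit of each setup can be estimated. The sketch below assumes net gain = extra power produced − (motor + microcontroller consumption); this formula is an assumption, as the excerpt does not spell out the net-gain calculation:

# Net-gain bookkeeping for the open-loop setups, values in kWh from Table XI.
setups = {
    "ES2 (5 V microcontroller)":   {"extra": 16.192, "motor": 16.996, "mcu": 7.975},
    "ES3 (3.3 V microcontroller)": {"extra": 28.858, "motor": 15.002, "mcu": 5.601},
}
for name, d in setups.items():
    net = d["extra"] - (d["motor"] + d["mcu"])  # assumed net-gain formula
    print(f"{name}: net power gained = {net:+.3f} kWh")

Under this assumption, Setup 2 comes out negative while Setup 3 is positive, which underlines why the motor and microcontroller consumption was examined separately in Setups 4 and 5.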
CONCLUSION
The open-loop tracking approach was incorporated in a dual-axis automatic mechanism. As a result, the power efficiency gain of the open-loop tracking approach is 23.61%. In comparison with the dual-axis tracking system (open-loop), it was shown that the power consumption of the motor can be reduced by 46.89% in a tilted single-axis tracking system, while the power consumption of the microcontroller can be reduced by about 49.34% when implementing a low-power microcontroller. It was also shown that the power consumption of the microcontroller can be reduced by 50.56% in the dual-axis tracking system (open-loop), but at the cost of an increase of about 88.61% in motor power consumption; the power consumption of the motor can, however, be reduced by about 96.41% after the troubleshooting discussed. In addition, the effects of lighter load and lower temperature on the power consumption of the motor contribute power reductions of 19.35% and 22.44% respectively in the open-loop system. It was therefore concluded that the open-loop tracking system offers lower power consumption.
Effect of Injection Time on the Performance and Emissions of Lemon Grass Oil Biodiesel Operated Diesel Engine

G.Vijayan1*, S.Prabhakar2, S.Prakash3, M.Saravana Kumar4, Praveen.R5
1* B.E, Final Year, Department of Automobile Engineering, AVIT, VMU, Chennai, Tamil Nadu, India
2,3,4,5 Assistant Professor, Department of Mechanical Engineering, AVIT, VMU, Chennai, Tamil Nadu, India
Email: [email protected]
Abstract—Vegetable oils are considered a good alternative to diesel fuel because their properties are close to those of diesel. They therefore offer the advantage of being readily usable in existing diesel engines without much modification, and they have a reasonably high cetane number. In this project, esterified Lemon grass oil is used as an alternate fuel. A single cylinder stationary Kirloskar engine is used to compare the performance and emission characteristics of pure diesel and Lemon grass oil blends, and a suitable lemon grass oil blend and an optimized injection timing for that blend are selected. The Lemon grass oil blends are in percentages of 20%, 40%, 60%, 80% and 100% of Lemon grass oil to 80%, 60%, 40%, 20% and 0% of diesel.
From this project it is concluded that, among all lemon grass oil and diesel blends, the blend of 20% lemon grass oil and 80% diesel at 30º BTDC gives performance nearest to diesel. Comparing the emission characteristics, HC and CO are reduced relative to diesel, whereas NOx emission is slightly increased.
Keywords: Lemon grass oil, Injection timings, Esterification.
INTRODUCTION
Vegetable oils have a structure similar to that of diesel fuel, but differ in the type of linkage of the chains and have a higher molecular mass and viscosity. Their heating value is approximately 90% of that of diesel fuel. A limitation on the utilization of vegetable oil is its cost: in the present market the price of vegetable oil is higher than that of diesel. However, it is anticipated that the cost of vegetable oil will fall in future as a result of developments in agricultural methods and oil extraction techniques.
For the present work, N20, N40, N60, N80 and N100 blends of Lemon grass oil biodiesel are used.
EXPERIMENTAL METHOD
TRANSESTERIFICATION OF LEMON GRASS OIL
To reduce the viscosity of the Lemon grass oil, the trans-esterification method is adopted for the preparation of biodiesel. The procedure is as follows: 1000 ml of lemon grass oil is taken in a three-way flask. 12 grams of sodium hydroxide (NaOH) and 200 ml of methanol (CH3OH) are taken in a beaker, and the sodium hydroxide and the alcohol are thoroughly mixed until properly dissolved. The solution obtained is mixed with the Lemon grass oil in the three-way flask and stirred properly. The methoxide solution with the lemon grass oil is heated to 60ºC and continuously stirred at a constant rate for 1 hour. The solution is then poured into a separating beaker and allowed to settle for 4 hours. The glycerin settles at the bottom and the methyl ester (coarse biodiesel) floats at the top; the methyl ester is separated from the glycerin.
This coarse biodiesel is heated above 100ºC and maintained for 10-15 minutes to remove the untreated methanol. Certain impurities such as sodium hydroxide (NaOH) remain dissolved in the coarse biodiesel so obtained; these are cleaned up by washing with 350 ml of water per 1000 ml of coarse biodiesel. The cleaned biodiesel is the methyl ester of Lemon grass oil, and it is used for the performance and emission analysis in a diesel engine.
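Since the recipe scales linearly with the oil volume, a small helper (hypothetical, for convenience only) can compute batch quantities:

def batch_quantities(oil_ml):
    """Scale the reported recipe: per 1000 ml oil, 12 g NaOH and 200 ml methanol;
    wash with 350 ml water per 1000 ml of coarse biodiesel."""
    scale = oil_ml / 1000.0
    return {"NaOH_g": 12.0 * scale,
            "methanol_ml": 200.0 * scale,
            "wash_water_ml_per_1000ml_biodiesel": 350.0}

print(batch_quantities(2500))  # quantities for a 2.5 litre oil batch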
ENGINE SPECIFICATION

Engine manufacturer: Kirloskar Oil Engines Ltd
Bore & stroke: 87.5 x 110 (mm)
Number of cylinders: 1
Compression ratio: 17.5:1
Speed: 1800 rpm
Cubic capacity: 0.661 litres
Method of cooling: water cooled
Fuel timing: 27º by spill (BTDC)
Clearance volume: 37.8 cc
Rated power: 7 and 8 hp
Nozzle opening pressure: 200 bar
EXPERIMENTAL SETUP
The engine used for the investigation is a Kirloskar SV1 single cylinder, four stroke, constant speed, vertical, water cooled, high speed compression ignition diesel engine, mounted on the ground. The test engine was directly coupled to an eddy current dynamometer with suitable switching and control facilities
for loading the engine. The liquid fuel flow rate was measured on a volumetric basis using a burette and a stopwatch. An AVL smoke meter was used to measure the CO and HC emissions from the engine. The NOx emission from the test engine was measured by a chemiluminescent detector type NOx analyser. For the measurement of cylinder pressure, a pressure transducer was fitted on the engine cylinder head, and a crank angle encoder was used to measure the crank angle. The sound from the engine was measured by a Rion sound level meter. The experimental setup is shown in Fig. 1.

Fig. 1 Experimental setup

TEST METHOD
The engine was operated initially on diesel for warm-up and then with Lemon grass oil blends. The experiment aims at determining the proportions of biodiesel and diesel for which the highest efficiency is obtainable; hence experiments were conducted for different proportions of biodiesel mixed with diesel, in the ratios 20%, 40%, 60%, 80% and 100% with diesel. First these blends were tested at the normal injection timing of 27º BTDC, at a constant injection pressure of 200 bar and a constant compression ratio of 17.5. Then, for the best efficiency blend, tests were conducted at three different injection timings, 24º BTDC, 30º BTDC and 33º BTDC, and the above procedure was followed. Shims were used to change the injection timing.

PERFORMANCE ANALYSIS
BRAKE THERMAL EFFICIENCY
At the normal injection timing of 27º BTDC, the brake thermal efficiency for neat diesel at full load is 26.48%, whereas it is 24.08%, 23.56%, 22.45%, 21.92% and 21.07% for N20, N40, N60, N80 and N100 respectively, as shown in Fig. 2.1. The best thermal efficiency was obtained for the N20 blend and was 2.4% less than that of diesel at full load. From Fig. 2.2 it is observed that the brake thermal efficiency of the best efficiency blend (N20) is 22.60% at 24º BTDC, 26.12% at 30º BTDC and 24.61% at 33º BTDC. For N20 at 30º BTDC it was found to be 2.04% higher than for N20 at 27º BTDC; this may be due to better spray characteristics and more effective utilization of air, resulting in complete combustion of the fuel. At 24º BTDC the brake thermal efficiency is 1.48% less than at the normal injection timing because of incomplete combustion due to the retardation of the injection timing.

FIG. 2.1 Variation of BTE with BP for different percentages of lemon grass oil with diesel

Fig. 2.2 Variation of BTE with BP for different injection timings for best efficiency blend
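The differences quoted above follow directly from the reported efficiencies; a short check in Python (values as quoted in the text, in percentage points):

bte = {"diesel@27": 26.48, "N20@27": 24.08, "N20@30": 26.12, "N20@24": 22.60}
print(f'{bte["diesel@27"] - bte["N20@27"]:.2f}')  # 2.40 -> "2.4% less than diesel"
print(f'{bte["N20@30"] - bte["N20@27"]:.2f}')     # 2.04 -> "2.04% higher at 30 deg BTDC"
print(f'{bte["N20@27"] - bte["N20@24"]:.2f}')     # 1.48 -> "1.48% less at 24 deg BTDC"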
SPECIFIC ENERGY CONSUMPTION
Comparison of the specific energy consumption (SEC) for the four different injection timings for the best efficiency blend (N20) is shown in Fig. 3. The SEC is highest in the case of 33º BTDC and least in the case of 30º BTDC. This is because at 30º BTDC the fuel is optimally injected, so that proper diffusion of the biodiesel takes place. At 33º BTDC more fuel is injected into the combustion chamber because of the advance in timing, which leads to excess consumption of biodiesel. At 27º BTDC and 24º BTDC there is not enough time for diffusion to take place, which results in poor diffusion, and as a result the amount of fuel required to produce one kW of power is higher.

Fig. 3 Variation of SEC with BP for different injection timings for best efficiency blend

EMISSION ANALYSIS
UNBURNT HYDROCARBON & CARBON MONOXIDE
Comparison of the UBHC emissions for the four different injection timings for the best efficiency blend (N20) is shown in Fig. 4, and comparison of the carbon monoxide emissions in Fig. 5. In both cases the UBHC and carbon monoxide emissions are highest at 24º BTDC and least at 30º BTDC. This is because at 30º BTDC proper diffusion and combustion of the biodiesel take place, which results in lower emissions. At 33º BTDC, because of the advancement in injection timing, the delay period increases, which leads to poor combustion. At 24º BTDC and 27º BTDC there is very little time for the diffusion of the fuel to take place, which leads to an increase in emissions.

Fig. 4 Variation of UBHC with BP for different injection timings for best efficiency blend

Fig. 5 Variation of CO with BP for different injection timings for best efficiency blend

OXIDES OF NITROGEN & CARBON DI-OXIDE
Comparison of the oxides of nitrogen emissions for the four different injection timings for the best efficiency blend (N20) is shown in Fig. 6, and comparison of the carbon dioxide emissions in Fig. 7. In both cases the oxides of nitrogen and carbon dioxide emissions are highest at 30º BTDC and least at 24º BTDC. This is because at 30º BTDC the peak temperature in the combustion chamber increases due to proper combustion, which leads to an increase in these emissions. At 33º BTDC, because of the advancement in injection timing, the peak pressure is lowered due to poor combustion. At 24º BTDC and 27º BTDC, due to poor combustion and spray characteristics, the oxygen content in the fuel is not fully burnt, which results in lower emissions.

Fig. 6 Variation of NOx with BP for different injection timings for best efficiency blend

Fig. 7 Variation of CO2 with BP for different injection timings for best efficiency blend

SOUND CHARACTERISTICS
Comparison of the sound characteristics for the four different injection timings for the best efficiency blend (N20) is shown in Fig. 8. The noise level is highest in the case of 33º BTDC and least in the case of 30º BTDC. This is because at 30º BTDC proper combustion takes place, and the power so developed helps in smooth running, which results in a lower noise level. At 24º BTDC and 27º BTDC the noise level is marginally greater due to improper combustion. At 33º BTDC, due to the higher amount of fuel initially accumulated in the combustion chamber, the engine tends to knock, and this increases the noise level.

Fig. 8 Variation of noise level with BP for different injection timings for best efficiency blend

COMBUSTION ANALYSIS
PEAK PRESSURE RISE
Comparison of the peak pressure rise for the four different injection timings for the best efficiency blend (N20) is shown in Fig. 9. The peak pressure for pure diesel at 27º BTDC is 72 bar. The peak pressure of N20 is 70 bar at 30º BTDC, 67 bar at 33º BTDC, 66 bar at 27º BTDC and 63 bar at 24º BTDC. This is because complete usage of the fuel is observed at 30º BTDC, which increases the pressure as a result of proper combustion. At 33º BTDC, due to the increase in delay period, proper diffusion does not take place, which results in lower pressure in the combustion chamber. At 24º BTDC and 27º BTDC the peak pressure drops because a part of the combustion takes place during the expansion stroke.

Fig. 9 Variation of peak pressure with crank angle for different injection timings for best efficiency blend

INSTANTANEOUS HEAT RELEASE RATE
Comparison of the instantaneous heat release rate for the four different injection timings for the best efficiency blend (N20) is shown in Fig. 10. The instantaneous heat release rate for pure diesel is 76.50 J/deg CA at 27º BTDC. The heat release rate of N20 is 78.6 J/deg CA at 30º BTDC, 79.7 J/deg CA at 33º BTDC, 80.23 J/deg CA at 27º BTDC and 86.12 J/deg CA at 24º BTDC.

Fig. 10 Instantaneous heat release rate with crank angle for different injection timings for best efficiency blend
This is because at 30º BTDC the increase in thermal efficiency indicates complete burning of the fuel and a lower release of heat to the exhaust, which reduces the instantaneous heat release rate. At 33º BTDC the heat release rate is marginally higher because of poor combustion. At 27º BTDC and 24º BTDC, poor diffusion causes the hot exhaust gases to escape at a higher rate.
CUMULATIVE HEAT RELEASE RATE
Comparison of the cumulative heat release rate for the four different injection timings for the best efficiency blend (N20) is shown in Fig. 11. The cumulative heat release rate for pure diesel is 329.04 J/deg CA at 27º BTDC. The cumulative heat release rate of N20 is 335.01 J/deg CA at 30º BTDC, 340.23 J/deg CA at 33º BTDC, 349.04 J/deg CA at 27º BTDC and 366.60 J/deg CA at 24º BTDC.
REFERENCES
[1]. Prabhakar S. and Annamalai K., "Performance and Emission Characteristics of CI Engine Fueled with Esterified Algae Oil", International Review of Applied Engineering Research, ISSN 2248-9967, Vol.3, No.1, pp. 81-86, (2013).
[2].Senthil Kumar P, and Prabhakar S., “Experimental Investigation of
Performance and Emission Characteristics of Coated Squish Piston in
a CI Engine Fueled With Vegetable Oil”, Journal of Scientific and
Industrial research, Vol.72(08), page 515-520, August 2013.
[3]. Ashfaque Ahmed S. and Prabhakar S., "Performance test for lemon grass oil in twin cylinder diesel engine", ARPN Journal of Engineering and Applied Sciences, ISSN 1819-6608, Vol. 8, No. 6, June (2013).
[4]. Binu K. Soloman and Prabhakar S., "Performance test for lemon grass oil in single cylinder diesel engine", ARPN Journal of Engineering and Applied Sciences, ISSN 1819-6608, Vol. 8, No. 6, June (2013).
[5].Ranjith Kumar P. and Prabhakar S. “Comparison of Performance
of Castor and Mustard
Oil with Diesel in a Single and Twin Cylinder Kirsloskar Diesel
Engine”, International Journal of Engineering Research and
Technology, ISSN 0974-3154 Vol.6, No.2, pp. 237-241, (2013).
[6].Niraj Kumar N. and Prabhakar S. “Comparison of Performance of
Diesel and Jatropha
(Curcas) in a Single Cylinder (Mechanical Rope Dynamometer) and
Twin Cylinder (Electrically Eddy Current Dynamometer) in a Di
Diesel Engine”, International Review of Applied Engineering
Research, ISSN 2248-9967 Vol.3, No.2, pp. 113-117, (2013).
[7].UdhayaChander and Prabhakar S. “Performance of Diesel, Neem
and Pongamia in a Single Cylinder and Twin Cylinder in a DI Diesel
Engine”,International Journal of Mechanical Engineering Research,
ISSN 2249-0019 Vol.3, No.3, pp. 167-171(2013).
[8].Anbazhagan R. and Prabhakar S., “Hydraulic rear drum brake
system in two wheeler ”, Middle - East Journal of Scientific Research,
Volume 17, Issue 12, Pages 1805-1807,(2013).
[9].Anbazhagan R. and Prabhakar S., “Developement of automatic
hand break system”, Middle - East Journal of Scientific
Research,Volume 18, Issue 12, 2013, Pages 1780-1785
[10].Anbazhagan R. and Prabhakar S., “Automatic vehicle over speed
controlling system for school zone”, Middle - East Journal of Scientific
Research, Volume 13, Issue 12, 2013, Pages 1653-1660
[11]. Premkumar S. and Prabhakar S., “Design and Experimental
Evaluation of Hybrid Photovoltaic-Thermal (PV/T) Water Heating
System”, International Journal of advance research in electrical ,
electronics and instrumentation engineering, Volume 2, Issue 12,
December 2013.
Fig.11 Variation of Cumulative heat release rate
with crank angle for different injection timings for best
efficiency blend
This is because at 30º BTDC, due to proper combustion, the amount of heat released is lower, as the heat is utilized to produce better efficiency, resulting in a lower cumulative heat release rate. At 33º BTDC the cumulative heat release rate is higher due to improper burning in different zones of the combustion chamber. At 27º BTDC and 24º BTDC, poor combustion continues even after the expansion stroke commences, which causes the cumulative heat release rate to rise higher.
CONCLUSION
From the above results and discussions, the following important points on the effect of injection timing are observed:
- After trans-esterification of Lemon grass oil, the kinematic viscosity and density are reduced while the calorific value is increased.
- For Lemon grass oil, fuel injection at 30º BTDC results in approximately a 2% rise in BTE compared to 27º BTDC, and a fall of just 0.36% compared to diesel at 27º BTDC.
- The UBHC and CO emissions are significantly reduced with biodiesel and its blends.
- Compared to pure diesel, NOx emissions are slightly higher for the blends, and lowest for the N20 fuel.
- Based on the engine performance and emission tests, at 30º BTDC the 20% blend of methyl esters with diesel fuel gives better performance and lower emission characteristics compared to the other injection timings.
- The performance, emission and combustion characteristics of the blends of lemon grass oil biodiesel are almost comparable to those of diesel fuel.
- Hence Lemon grass oil, being a non-edible oil, proves to be a very effective alternate fuel and can substitute mineral diesel with minimal modification to the engine.
The Development of Automated Fertigation System
Yap Chee Wei, Vinesh Thiruchelvam, Rajaram Govindarajal
SOE, APU, Kuala Lumpur, Malaysia
[email protected]
[email protected]
[email protected]
Abstract— Fertilization is a process where nutrition is added to crops for better yields. However, this process is often subjective, as it relies on the farmer's judgment, and crops are often under-fertilized or over-fertilized. To overcome this problem, a fertigation system is utilized to facilitate the process of feeding water and fertilizer solution to crops. In this research, the automated fertigation system developed consists of an ejector-motor, an inline pipe mixer and a control, along with a wireless connection for data collection by means of a graphical user interface (GUI). The initial stage explores material investigations and design conceptualizations; upon finalizing the basic requirements, the system is fabricated and assembled. According to the experimental results, the ejector-motor system performs at 90.63%, while the inline pipe mixer has an accuracy of up to 97.47%. The system is also tested for efficiency towards achieving the objectives, and the results are tabulated and expressed graphically for overall evaluation. The overall performance was 89.36%. The objectives are achieved cost-effectively, providing a sustainable solution to the agricultural industry.
Keywords: fertigation, ejector-motor, in-line pipe mixing, farm control system
INTRODUCTION
Fertilization and watering of crops are essential to crop yields. Of the two processes, fertilization is often overlooked because the process is subjective. Fertilization is the process of supplying nutrients to the crops for better yields; crops are often under-fertilized or over-fertilized since the process is performed manually based on individual judgment.
METHODOLOGY
The project consists of multiple critical components that require sizing, planning and implementation. The flow chart for the research is shown in Figure 1.
Another issue with the conventional method is that limited information can be obtained while the fertilization process is taking place. According to research done on agriculture [1], the amount of fertilizer in the soil affects the crops' growth. Hence, the uncertainty and inconsistency of fertilizer dosage and water irrigation amount must be researched in order to propose a better solution. Since there is a need for accuracy and consistency, data logging of the amount of fertilizer and water being used, in addition to comparison with rainfall, is essential. Hence, the automated fertigation system is developed with the intention of assisting farmers [2].
START → Perform system analysis → Develop inline pipe mixer, ejector-motor system, control and skid → Develop GUI → Assemble, test and evaluate performance → END
Fertigation is defined as the process of injecting fertilizer solution into the main water stream via an irrigator [3]. In a fertigation system, the two factors affecting the process are predominantly the fertigation level, in terms of water consumption, and the concentration of the fertilizer mixture [4]. In order to accommodate an accurate fertigation system, measures need to be taken to further improve the conventional process.
Figure 1. Implementation methodology
According to Figure 1, system analysis is performed to identify the critical components. After identification, the components such as the inline pipe mixer, ejector-motor system, control and skid are developed. This is followed by software development of the Graphical User Interface (GUI). The components are then assembled, tested and their performance evaluated.
Although irrigation systems have previously been automated [5], an automated fertigation system equipped with high levels of accuracy and consistency, at a cost low enough for wide application in the agricultural industry, is yet to be achieved. Fundamentally, when the fertilizer is introduced into the water stream, the blend of these solutions must be balanced with a specified ejector designed for the system [6]; consistent readings from the conductivity sensor will not be obtained if a homogeneous mixture is not successfully attained.
Pipe Sizing
The system is developed to supply a motive flow of 30 litres per minute. Hence, pipe sizing is important for safety and for water pressure efficiency within the system. Referring to the handbook [7], the PVC pipe standards for safe pipe pressure and safe velocity were estimated to be 150 psi (10.34 bar) and 8 ft/s (2.44 m/s) respectively.
The continuity equation from fluid engineering, shown in (1) and (2), is used for illustration:

$\dot{m}_i = \rho_i v_i A_i$  (1)
$\dot{m}_o = \rho_o v_o A_o$  (2)

where $\dot{m}$ is the mass flow rate, $\rho$ is the fluid density, $v$ is the speed of flow and $A$ is the cross sectional area; subscript $i$ represents the input and subscript $o$ the output. With uniform density, $\rho$ is factored out and the flow rate is simplified. As this is a non-restrictive flow, the flow rate entering and leaving the pipe is assumed constant. The flow rate $q$ is determined in (3)-(5):

$q_i = v_i A_i$  (3)
$q_o = v_o A_o$  (4)
$v_i A_i = v_o A_o$  (5)

where $q$ is the volumetric flow rate. The required motive flow is set to 30 litres per minute and the required fertilizer flow is set to 0.2% of the motive flow, i.e. 0.06 litres per minute. The flow rate is determined to be 0.0005 m³/s and the velocity of the flow is 2.44 m/s. From (5), $A$ is determined to be 2.05 × 10⁻⁴ m².

The cross sectional area of a cylindrical pipe is related to its diameter by (6):

$A = \pi D^2 / 4$  (6)

where $A$ is the cross sectional area and $D$ is the diameter of the cylindrical pipe. The diameter is determined to be 0.0162 m, or approximately ¾ inch. The basic variables determined are summarized in Table I.

TABLE I. VARIABLES DETERMINATION

Variable | Value | Units
Motive Flow Rate, Q | 30 | L/min
 | 0.0005 | m³/s
Fertilizer Flow Rate, Qf | 0.06 | L/min
 | 1 × 10⁻⁶ | m³/s
 | 0.001 | kg/s
Pipe Diameter, D | 0.75 | in
 | 0.01905 | m
Length, L | 0.25 | m
Viscosity, μ | 1.08 | cP
Density, ρ | 1029 | kg/m³
Specific Gravity, SG | 1.028 | -

Static Mixer Pressure Drop and Determination of Number of Elements
A static mixer's characteristics can be described by its pressure drop and its number of elements. The optimum flow velocity and Reynolds number are determined with (7) and (8):

$v = \dfrac{Q}{\pi D^2 / 4}$  (7)

$Re = \dfrac{\rho v D}{\mu}$  (8)

From (7), $v$ is determined to be 1.75 m/s, and from (8) the Reynolds number is determined as 31,753. The Reynolds number is used to assess whether the fluid flowing inside a pipe is in laminar, transitional or turbulent flow; as the number is greater than 4,000, the flow is characterized as turbulent.

The subsequent determination of the pressure drop in the pipe requires the Darcy friction factor as in (9) and the Darcy equation as in (10) [8]:

$\dfrac{1}{\sqrt{f}} = -2\log\left(\dfrac{e/D}{3.7} + \dfrac{95}{Re^{0.983}} - \dfrac{96.82}{Re}\right)$  (9)

$\Delta P_{pipe} = \dfrac{62.54\, f\, l\, Q^2}{D^5}$  (10)

where $f$ is the Darcy friction factor and $e$ is the surface roughness. As PVC is used, $e$ is 0.0015. From (9), $f$ is determined to be 0.02341, and $\Delta P_{pipe}$ is 0.004833 bar.

The pressure drop of the static mixer, $\Delta P_{sm}$, can be determined from (11) as 0.027 bar:

$\Delta P_{sm} = K_T\, \Delta P_{pipe}$  (11)

The corresponding number of elements in the static mixer is determined as 6 [7].

RESULTS
C. System Setup
The completed system setup is as shown in Figure 2. The ejector-motor control system, inline static mixer, motor and control are assembled and mounted on the skid [9].
Figure 2. Automated fertigation system built on skid
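The sizing chain of equations (7)-(11) can be reproduced numerically. The sketch below uses the Table I values; the roughness value and the mixer coefficient K_T are assumptions (the excerpt gives e as 0.0015 without units and does not state K_T), and the pipe pressure drop is evaluated with the standard Darcy-Weisbach form, of which the paper's (10) is a unit-specific rearrangement:

import math

Q   = 0.0005    # motive flow, m^3/s (Table I)
D   = 0.01905   # pipe inner diameter, m (Table I)
L   = 0.25      # pipe length, m (Table I)
mu  = 1.08e-3   # dynamic viscosity, Pa.s (1.08 cP, Table I)
rho = 1029.0    # density, kg/m^3 (Table I)
e   = 1.5e-6    # absolute roughness, m -- assumed reading of "e is 0.0015" (mm)

v = Q / (math.pi * D**2 / 4)                   # Eq. (7): ~1.75 m/s
Re = rho * v * D / mu                          # Eq. (8): ~31,800 (turbulent)
f = (-2 * math.log10((e / D) / 3.7 + 95 / Re**0.983 - 96.82 / Re)) ** -2  # Eq. (9): ~0.0234

# Standard Darcy-Weisbach pressure drop in bar (~0.0048):
dp_pipe = f * (L / D) * rho * v**2 / 2 / 1e5
K_T = 5.6       # assumed mixer coefficient (not given; 5.6 reproduces the reported 0.027 bar)
dp_mixer = K_T * dp_pipe                       # Eq. (11)
print(v, Re, f, dp_pipe, dp_mixer)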
The system operates well within expectations. Fertilizer is fed into the mainstream water by the ejector-motor system and the solution is mixed by the static mixer. The solution is checked with a conductivity meter before being dispensed to the valves.
Performance of Inline Pipe Mixer
The static mixer was fabricated in stainless steel with 6 elements and fitted inside a 1.5 inch PVC pipe. The mixer was tested for its mixing capability with a TDS meter and the result is illustrated in Figure 4 [11].
D. GUI with Wireless Connectivity
The GUI in Figure 3 was scripted with HTML, JAVA and C on the Arduino controller. The GUI was developed to enable the user to control the system wirelessly: initiating the system to start and stop, controlling the fertilizer dosing, and monitoring the conductivity, motive flow rate and total dissolved solids (TDS). The data log is downloaded to the connected device [10].
Figure 4. Inline pipe mixer performance with 190 PPM reference
In the test, the input is set at 190 PPM and the
measurement is shown in Figure 4. The performance of
the mixer was found to be 97.47%.
Performance of Wireless Connection
The fertigation system was tested for its wireless connectivity. It was found to have a good connection with a slight delay, mainly caused by the large program occupying the Arduino's flash memory. The connectivity was successful, with the Received Signal Strength Indicator (RSSI) measurements shown in Figure 5. Ideally, the RSSI should be close to zero for a perfect connection; the measured value of -36.3 dBm is interpreted as a good connection [12].
Figure 3. GUI developed to enable wireless control and monitoring
PERFORMANCE ASSESSMENT
Performance of Ejector-Motor System
The ejector-motor system developed comprises a DC geared motor, a metering valve and a venturi ejector. The user provides the fertilizer input, and the modulation output of the DC geared motor and metering valve is tabulated. The results were obtained periodically for various inputs between 20 cm³/min and 300 cm³/min, as shown in Table II.
TABLE II. AVERAGE PERFORMANCE RATING OF EJECTOR-MOTOR SYSTEM

Fertilizer Input (cm³/min) | Average Percentage of Error (%)
20 | 27.50
60 | 5.00
100 | 6.00
200 | 8.00
300 | 0.33
Average Error (%) | 9.37
Performance Rating (%) | 90.63

Figure 5. Measured RSSI reading over 60 minutes
The GUI and data logging feature run efficiently when the network connection is stable. Connection to the network is affected by the default hardware configuration, such as the bandwidth. The connection was secure and stable in 8 out of 10 tries, an 80% performance.
E. Overall Performance
The overall performance of the system is established from the performance of the individual elements of the system. The ejector-motor has moderate performance, with about 90.63% accuracy. The inline pipe mixer provides good response and achieves a high accuracy of 97.47%. Referring to Table II, the highest percentage of error occurs when the fertilizer input is set at 20 cm³/min, mainly because the fluid flow is disrupted at low flow rates, especially when entering the main stream. As the fertilizer flow rate increases, the error decreases, because the increased flow rate eases the injection of fertilizer into the main stream.
Connectivity testing achieves an accuracy of 80%. Hence, the overall performance, taken as the average of the element performances, is determined to be 89.36%.
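The overall figure is simply the average of the three element scores, as the sketch below verifies:

# Overall performance as the average of the element scores reported above.
scores = {"ejector-motor": 90.63, "inline mixer": 97.47, "wireless link": 80.0}
overall = sum(scores.values()) / len(scores)
print(f"Overall performance: {overall:.2f}%")  # 89.37, reported as 89.36%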
CONCLUSION
In conclusion, the results are consistent with the objectives. The ejector-motor system developed feeds fertilizer at the provided input; this is directed to the injector, and the solution is exposed to full mixing at the initial stage, with 90.63% efficiency.
Subsequently, the inline static mixer is developed to create a homogeneous solution. From the PPM measurement, the TDS meter verifies that the mixer achieves 97.47% performance. Also, the GUI developed with HTML, JAVA and C is connected to the Arduino; the wireless system provides good performance and the data is logged over the wireless connection.
The overall performance was determined as 89.36%, and this confirms that the objectives are achieved. Hence an automated fertigation system is developed at low cost for the benefit of the agricultural industry.
ACKNOWLEDGMENT
Acknowledgment goes to Signal Transmission (M) Sdn. Bhd. for providing the opportunity and funding for the project.
REFERENCES
[1] Chen, J.H., "The combined use of chemical and organic fertilizers and/or biofertilizers for crop growth and soil fertility," Soil Science Society of America, vol. 2, no. 1, pp. 7-9, 2006.
[2] Swamy, D.K. et al., "Microcontroller based drip irrigation system," International Journal of Emerging Science and Engineering, vol. 1, no. 6, pp. 1-4, 2013.
[3] Snyder, D.G., "The basics of injecting fertiliser for field grown tomatoes," U.S. Department of Agriculture, vol. 1, no. 11, pp. 1-6, 2011.
[4] Bayindir, R. and Cetinceviz, Y., "A water pumping control system with a programmable logic controller (PLC) and industrial wireless modules for industrial plants - An experimental setup," ISA Transactions, vol. 50, no. 2, pp. 321-328, 2011.
[5] Ingale, H.T. and Kasat, N.N., "Automated Irrigation System," International Journal of Engineering Research and Development, vol. 4, no. 11, pp. 51-54, 2012.
[6] Yan, Y.C., "Effect of structural optimization on performance of venturi injector," Symposium on Hydraulic Machinery and Systems, vol. 26, no. 1, pp. 1-8, 2012.
[7] Paul, E.L., Atiemo-Obeng, V. and Kresta, S.M., "Handbook of Industrial Mixing," New Jersey: John Wiley and Sons, 2004.
[8] Paglianti, A.G.M., "A mechanical model for pressure drops in corrugated plates static mixers," Chemical Engineering Science, vol. 97, no. 1, pp. 376-384, 2013.
[9] Miralles, J., "Development of irrigation and fertigation control using 5TE soil moisture, electrical conductivity and temperature sensors," The Third International Symposium on Soil Water Measurement Using Capacitance, Impedance and TDT, vol. 2, no. 1, pp. 1-9, 2010.
[10] ElShafee, A., "Design and implementation of a Wi-Fi based home automation system," World Academy of Science, Technology and Engineering, vol. 68, no. 1, pp. 2177-2183, 2012.
[11] Heaney, M., "Experimental techniques for measuring resistivity," Electrical Conductivity and Resistivity, vol. 3, no. 1, pp. 133-135, 1999.
[12] Al-Kadi, T., Al-Tuwaijri, Z. and Al-Omran, A., "Arduino Wi-Fi network analyzer," Procedia Computer Science, vol. 21, pp. 522-529, 2013.
Experimental Investigation on Ethanol Fuel in VCR-SI Engine

S.Prabhakar1*, K.Annamalai2, Praveen.R3, M.Saravana Kumar4, S.Prakash5
1*,3,4,5 Assistant Professor, Department of Mechanical Engineering, AVIT, VMU, Chennai, Tamil Nadu, India
2 Professor, Department of Automobile Engineering, MIT, Anna University, Chennai, Tamil Nadu, India
Email: [email protected]
Abstract—Fuel additives are very important, since many additives can be added to fuel to improve its efficiency and performance. One of the most important classes of additives for improving fuel performance is oxygenates (oxygen containing organic compounds). Several oxygenates have been used as fuel additives, such as methanol, ethanol, tertiary butyl alcohol and methyl tertiary butyl ether. Alcohols like ethanol can be produced by fermentation of biomass crops such as sugarcane, wheat and wood.
The most positive properties of ethanol include its ability to be produced from renewable energy sources, its high octane number, and its high laminar flame speed. The negative aspects include its low heating value compared to petrol and the corrosion it causes in the metal and rubber parts of an engine. Engine power improves with ethanol, as its better anti-knock qualities allow an increase in compression ratio. Ethanol also has a high latent heat of vaporization; the latent heat cools the intake air and hence increases the density and the volumetric efficiency.
This paper gives an overview of work on the effects of alcohol blends on the performance of a spark ignition engine: the effect of ethanol addition to petrol on engine performance, exhaust gas emissions and noise level at various engine loads for a carbureted single cylinder engine; the effects of using an ethanol - unleaded petrol blend on spark ignition engine performance and exhaust gas emission; the effects of using oxygenates as a replacement for lead additives in petrol on the performance of a typical spark ignition engine; and the analysis of the fuel-air Otto cycle for iso-octane (C8H18) and ethanol (C2H5OH), including twelve combustion products, i.e. CO2, H2O, O2, N2, Ar, CO, H2, O, OH, H, NO and N. The general perception is that alcohol - petrol blended fuels can effectively lower the emissions and enhance the engine performance without major modifications to the engine design.
Keywords: alcohol, performance, modifications.
PETROL ETHANOL BLENDING
For conducting research on an IC engine, the fuel blends must be prepared with appropriate percentages of ethanol and petrol.
This table shows the percentage of fuel blends:

S.NO | ETHANOL COMPOSITION | PETROL COMPOSITION
1 | 5% | 95%
2 | 10% | 90%
3 | 15% | 85%
4 | 20% | 80%
5 | 25% | 75%
6 | 30% | 70%
The image above shows the blending process of petrol and ethanol.
EXPERIMENTAL TEST RIG
A test rig, as shown in the figure above, was developed to run a single cylinder, 4-stroke, 661 cc, variable compression ratio spark ignition engine. The engine was coupled to an electrical dynamometer equipped with an instrument cabinet (column mounted) fitted with a torque gauge, an electric tachometer and switches for remote load control.
Blending was done in our college chemistry laboratory with the help of the following tools: conical flask, beaker, measuring jar, glass funnel, stick, and a plastic bottle for storage.
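Preparing the blends is a simple proportional calculation; a convenience sketch (hypothetical helper, not from the paper):

BATCH_ML = 1000  # batch size per blend, ml (chosen for illustration)
for ethanol_pct in (5, 10, 15, 20, 25, 30):
    ethanol_ml = BATCH_ML * ethanol_pct // 100
    print(f"E{ethanol_pct}: {ethanol_ml} ml ethanol + {BATCH_ML - ethanol_ml} ml petrol")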
The electrical dynamometer thus allows trouble-free starting as well as towage. In conjunction with a regenerative feedback unit, it also allows extremely economical operation by feeding the braking power back into the electrical network. A piezoelectric pressure transducer was used to measure the cylinder pressure. Fuel consumption was measured by using a calibrated burette and a stopwatch with an accuracy of 0.2 s.

The ethanol was blended with unleaded petrol to get 7 test blends ranging from 0% to 30% ethanol with an increment of 5%. The fuel blends were prepared just before starting the experiment to ensure that the fuel mixture is homogeneous and to prevent the reaction of ethanol with water vapor.

PROCEDURES
The engine was started and allowed to warm up for a period of about 30 min. The air-fuel ratio was adjusted to achieve maximum power on unleaded petrol. Engine tests were performed at 1000 rpm engine speed at varying loads (no load, 2 kg, 4 kg, 6 kg, 8 kg, 10 kg) at the full throttle opening position.

The required engine load was obtained through proper dynamometer control. Before running the engine on a new fuel blend, it was allowed to run for sufficient time to consume the remaining fuel from the previous experiment. The operating conditions were fixed and the parameters were continuously measured and recorded. For each experiment, three runs were performed to obtain an average value of the experimental data.

The variables that were continuously measured include the engine rotational speed (rpm), the torque, the time required to consume a fixed amount of fuel blend (s), and the air-fuel ratio. The parameters such as fuel consumption, air consumption, equivalence air-fuel ratio, volumetric efficiency, brake power, brake mean effective pressure, brake specific fuel consumption, brake thermal efficiency, stoichiometric air-fuel ratio and lower heating value (LHV) of the fuel blends were determined by using the standard equations. The experimental engine is a water-cooled, carbureted SI engine made of grey cast iron. The engine operating conditions are listed below.

ENGINE OPERATING CONDITIONS

Compression ratio: 10:1
Throttle opening position: Full
Ignition timing: 20° BTDC
Engine speed: 1000 rpm
Engine load: (0, 2, 4, 6, 8, 10) kg
Fuel blends: (0, 5, 10, 15, 20, 25, 30)% ethanol

RESULTS
BRAKE THERMAL EFFICIENCY
The graph shows that the brake thermal efficiency increases as the engine speed and load increase, reaches a maximum, and then decreases with a further increase in engine speed and load for all fuel blends except pure petrol, where the effect of mechanical loss is more significant. It is also observed that the brake thermal efficiency increases by 18.16%, 16.91%, 15.42%, 14.61%, 11.31% and 10.85% with the 30%, 25%, 20%, 15%, 10% and 5% ethanol-petrol blends respectively, compared to pure petrol. Generally the addition of ethanol shows a higher brake thermal efficiency compared to petrol, and this provides more engine brake power for the fuel consumed.

Graph: brake thermal efficiency (%) against % of ethanol in petrol for different engine loads

SPECIFIC FUEL CONSUMPTION
In Figure 4, the brake specific fuel consumption decreases as the engine speed increases, reaches a minimum at an engine speed of 1700 rpm, and then increases with a further increase in engine speed for all fuel blends except pure petrol. It was found that the brake specific fuel consumption also decreases by 15.07%, 12.56% and 10.56% with the 15%, 10% and 5% ethanol-petrol blends respectively, compared to pure petrol. Because of the oxygen content available in ethanol, the blend achieves better combustion than pure petrol and gives enhanced power output.

RESULTS ON EMISSION
The simulated results for the exhaust gas emissions of CO, CO2, NO and O2 are shown in the graphs with respect to pure petrol and the ethanol concentrations (5, 10, 15, 20, 25 and 30%). Ethanol addition of up to 20% to unleaded petrol has two major effects: it decreases the concentrations of carbon monoxide and carbon dioxide, while the nitric oxide and oxygen concentrations show an increasing trend. The figures show that 20 percent ethanol addition to unleaded petrol reduces the concentration of CO by about 65%, and that the concentration of CO2 is reduced by about 60.89% for 15 percent ethanol substitution, compared to pure petrol. This is due to the reduction in the carbon atom concentration in the blended fuel and to the high molecular diffusivity and high flammability limits, which improve the mixing process and hence the combustion efficiency. Generally, above 20% ethanol substitution this effect can be seen more clearly, and an increase in CO2 can be seen after the 20% ethanol blend. Little variation exists in the simulated results. The O2 concentration shows an increasing trend: as ethanol is an oxygenate fuel, it releases oxygen while burning.
Graphs: emissions of carbon monoxide (ppm), carbon dioxide (%), nitric oxide (ppm) and the oxygen concentration against % of ethanol in petrol at 2 kg to 8 kg loads

CONCLUSION
The main conclusions deduced from these investigations are as follows:
The engine performance and pollutant emissions of a SI engine have been investigated by adding up to a maximum of 20% ethanol to gasoline, compared with pure gasoline. The basic aim of this study was to substitute only up to 20% ethanol in unleaded gasoline in a small engine, with the idea of applying this investigation to engines of smaller size.
Engine performance increased with the ethanol additive to gasoline: the maximum increments in brake power, brake thermal efficiency, volumetric efficiency, brake torque and brake mean effective pressure were found to be higher than for pure gasoline by about 11.06%, 18.16%, 1.54%, 11.99% and 11.99% respectively. A decrement in brake specific fuel consumption of about 15.07% was also found. The combustion process inside the cylinder is better with the ethanol-gasoline blend: the maximum cylinder pressure during the combustion stroke was found to be higher than for pure gasoline by about 1.95%.
Exhaust gas emissions are lower using ethanol-gasoline blends: the maximum reductions in emissions were found to be 65% and 60.89% for CO and CO2 respectively over pure gasoline, while the NO emission was found to be higher than for pure gasoline. Usually the CO concentration decreases due to the leaning effect of ethanol addition, and CO2 shows an increasing trend after 15% due to the better combustion with ethanol blends. The 15 percent ethanol blend was found to be the beneficial substitution that achieves satisfactory engine performance and exhaust gas emissions.
REFERENCES
1. Prabhakar S. and Annamalai K., "Performance and Emission Characteristics of CI Engine Fueled with Esterified Algae Oil", International Review of Applied Engineering Research, ISSN 2248-9967, Vol.3, No.1, pp. 81-86, (2013).
2.Senthil Kumar P, and Prabhakar S., “Experimental Investigation of
Performance and Emission Characteristics of Coated Squish Piston in
125
International Conference on Information, System and Convergence Applications
June 24-27, 2015 in Kuala Lumpur, Malaysia
a CI Engine Fueled With Vegetable Oil”, Journal of Scientific and
Industrial research, Vol.72(08), page 515-520, August 2013.
3. Ashfaque Ahmed S. and Prabhakar S., "Performance test for lemon grass oil in twin cylinder diesel engine", ARPN Journal of Engineering and Applied Sciences, ISSN 1819-6608, Vol. 8, No. 6, June (2013).
4. Binu K. Soloman and Prabhakar S., "Performance test for lemon grass oil in single cylinder diesel engine", ARPN Journal of Engineering and Applied Sciences, ISSN 1819-6608, Vol. 8, No. 6, June (2013).
5.Ranjith Kumar P. and Prabhakar S. “Comparison of Performance of
Castor and Mustard
Oil with Diesel in a Single and Twin Cylinder Kirsloskar Diesel
Engine”, International Journal of Engineering Research and
Technology, ISSN 0974-3154 Vol.6, No.2, pp. 237-241, (2013).
6.Niraj Kumar N. and Prabhakar S. “Comparison of Performance of
Diesel and Jatropha
(Curcas) in a Single Cylinder (Mechanical Rope Dynamometer) and
Twin Cylinder (Electrically Eddy Current Dynamometer) in a Di
Diesel Engine”, International Review of Applied Engineering
Research, ISSN 2248-9967 Vol.3, No.2, pp. 113-117, (2013).
7.UdhayaChander and Prabhakar S. “Performance of Diesel, Neem
and Pongamia in a Single Cylinder and Twin Cylinder in a DI Diesel
Engine”,International Journal of Mechanical Engineering Research,
ISSN 2249-0019 Vol.3, No.3, pp. 167-171(2013).
8.Anbazhagan R. and Prabhakar S., “Hydraulic rear drum brake system
in two wheeler ”, Middle - East Journal of Scientific Research, Volume
17, Issue 12, Pages 1805-1807,(2013).
9.Anbazhagan R. and Prabhakar S., “Developement of automatic hand
break system”, Middle - East Journal of Scientific Research,Volume
18, Issue 12, 2013, Pages 1780-1785
10.Anbazhagan R. and Prabhakar S., “Automatic vehicle over speed
controlling system for school zone”, Middle - East Journal of Scientific
Research, Volume 13, Issue 12, 2013, Pages 1653-1660
11. Premkumar S. and Prabhakar S., “Design and Experimental
Evaluation of Hybrid Photovoltaic-Thermal (PV/T) Water Heating
System”, International Journal of advance research in electrical ,
electronics and instrumentation engineering, Volume 2, Issue 12,
December 2013.
12. S.Prabhakar S. and Prakash S., and “Performance analysis of
ventilated brake disc for its effective cooling”, Journal of Chemical and
Pharmaceutical Sciences www.jchps.com ISSN: 0974-2115, JCHPS
Special Issue 7: 2015 NCRTDSGT 2015 Page 358
13.Prakash.S and S.Prabhakar S., “Design optimization of a heat
exchanger header with inlet modifier”, Journal of Chemical and
Pharmaceutical Sciences www.jchps.com ISSN: 0974-2115, JCHPS
Special Issue 7: 2015 NCRTDSGT 2015 Page 347
14. Saravana Kumar. S and S.Prabhakar S., “Performance analysis of
ventilated brake disc for its effective cooling”, Journal of Chemical and
Pharmaceutical Sciences www.jchps.com ISSN: 0974-2115, JCHPS
Special Issue 7: 2015 NCRTDSGT 2015 Page 362 .
Active Cell Equalizer by a Forward Converter with Active
Clamp
Thuc Minh Bui, Sungwoo Bae†
Dept. of Electrical Engineering, Yeungnam University, Gyeongsan, Gyeongbuk, Korea
†Corresponding Author
Abstract—This paper proposes an active cell equalizing circuit based on a forward converter with active clamp (FAC),
in which the energy stored in the transformer magnetizing inductance is recycled to the input source by the FAC
circuit. This prevents transformer saturation, which reduces the power loss, while the switch is protected from voltage
spikes by the charge balance of the clamp capacitor. Consequently, the proposed balancing circuit has higher
efficiency and lower voltage stress than an RCD circuit. The circuit equalizes all cells simultaneously, so the cell
balancing time is short. The operational principle of the proposed circuit has been analyzed and simulated in PSIM.
Keywords-Cell equalizer, forward converter, active clamp, FAC circuit.
INTRODUCTION
The high-voltage battery system with a multi-cell
battery group has recently been studied and used as the
secondary battery system in industrial applications such
as energy storage systems and electric vehicles. The
circuit structure proposed here, based on a forward
converter with active clamp (FAC), transfers energy
between the cells to be balanced. The FAC is one of the
most widespread topologies for attaining high efficiency
in low and medium power applications at higher
frequencies [1]. The FAC circuit is composed of an
auxiliary switch and a clamp capacitor, used to suppress
the voltage stress at the active switch caused by the
magnetizing inductance of the transformer [2]. A cell
balancing circuit for ns series-connected cells with a
multi-winding transformer Tm has been reported for cells
of diverse voltages [3]. However, that transformer was
not prevented from saturating because the circuit had no
reset circuit. A snubber capacitor is used to reset the core
in the resonant forward converter. The RCD clamp
method has been proposed and analyzed to reduce the
voltage stress of the switching devices; however, the
energy stored in the magnetizing inductance is dissipated
in the resistor and the conversion efficiency is limited [4].
This paper presents an active cell balancing circuit
with a multi-winding transformer based on the FAC
circuit. In the proposed circuit, the auxiliary switches are
used to drive the active clamp, and the advantage of the
proposed cell balancing circuit is that the transformer is
prevented from saturating. Simulation results are shown
to verify the validity of the presented method for cell
balancing.
PROPOSED CELL BALANCING CIRCUIT AND
ITS OPERATION PRINCIPLE
Proposed Cell Balancing Circuit
Fig. 1 shows the proposed active cell balancing
circuit. The proposed circuit includes N cells connected
in series, where each cell connects a power switch Sk, the
FAC reset circuit and a multi-winding transformer Tm to
balance the voltage of each cell in the battery string. It is
assumed that a cell string consists of four cells, for
simplicity of the circuit operational analysis. In this
proposed balancing circuit with FAC, currents are
transferred from the highest-voltage cell to the lowest-voltage
cell by selectively operating the power switches
Sk and the auxiliary switches Sak. The circuit has six
operating modes during the switching period Ts, resulting
in the theoretical waveforms shown in Fig. 2.
Figure 1. Proposed cell balancing circuit.
Operational Principles and Mode Analysis
The operation of the proposed cell balancing circuit
consists of six modes during one switching period Ts, as
shown in Fig. 2.
Mode 1 [t0, t1]: At t0, the four power switches Sk (S1,
S2, S3, S4) are turned on, while the four auxiliary switches
Sak (Sa1, Sa2, Sa3, Sa4) are turned off simultaneously. Thus,
the voltages of these power switches Sk are zero. The
magnetizing inductance current iLm increases linearly
with a slope of Vcellk/Lm, where Vcellk is the voltage of the
kth cell.
In this mode, energy is transmitted from the highest-voltage
cell (Vcell4) to the lowest-voltage cell (Vcell1)
through the multi-winding transformer Tm; the FAC reset
circuit can be ignored. At t1, the power switches begin to
turn off and their voltages increase.
Mode 2 [t1, t2]: At t1, all power switches Sk start to
turn off while all auxiliary switches Sak remain off, which
causes the power switch voltages to increase as shown in
Fig. 2. In this mode, the energy stored in the magnetizing
inductance Lm begins to discharge through the diode.
Figure 3. Simulation voltage waveforms of the proposed balancer.
Mode 3 [t2, t3]: At t2, all power switches Sk are fully
turned off, as shown in Fig. 2. The magnetizing
inductance Lm resets through the diode, resulting in a
reduction of the inductance current iLm.
Mode 4 [t3, t4]: At t3, all auxiliary switches Sak start to
turn on.
Mode 5 [t4, t5]: At t4, all auxiliary switches Sak are
turned on. The magnetizing inductance Lm keeps resetting
through the auxiliary switches. The discharging of the
magnetizing inductance energy is completed by the FAC
circuit in this mode.
Figure 4. Simulation current waveforms of the proposed balancer.
Mode 6 [t5, t6]: At t5, all power switches Sk start to
turn on and all auxiliary switches Sak are turned off; the
magnetizing inductance current iLm decreases. At t6, all
power switches Sk are fully turned on and all auxiliary
switches Sak are fully turned off, and a cycle is completed.
Figure 2. Operating waveforms of the proposed cell equalizer.
SIMULATION RESULTS
Simulation studies were carried out to verify the
feasibility of the proposed circuit using PSIM software.
For simplicity of simulation, the four cells were replaced
by four series capacitors whose initial voltages are Vcell1
= 3.5 V, Vcell2 = 3.6 V, Vcell3 = 3.7 V, and Vcell4 = 3.8 V.
To illustrate the FAC reset circuit of Fig. 1, Figs. 3 and 4
show the simulated voltage and current waveforms,
respectively. The circuit parameters used in the PSIM
simulation are as follows: Lm = 2.5 mH, Cc = 22 nF, the
duty ratio (D) of the PWM signals to the power switches
is 0.375, the switching frequency is f = 40 kHz, and the
duty ratio of the PWM signals to the auxiliary switches is
1-D with switching frequency fa = 40 kHz.
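As a cross-check on these parameter choices, the standard steady-state relations for an active-clamp forward stage give the expected magnetizing-current ripple and switch voltage stress. The sketch below applies these textbook relations to the listed values; the relations and the mid-range cell voltage are assumptions for illustration, not taken from the paper.

```python
# Cross-check of the simulation parameters using standard steady-state
# active-clamp forward relations (textbook formulas, not from the paper).
Lm = 2.5e-3    # magnetizing inductance, H
D = 0.375      # duty ratio of the power switches
f = 40e3       # switching frequency, Hz
Vcell = 3.65   # V, assumed mid-range cell voltage (cells span 3.5-3.8 V)

di_Lm = Vcell * D / (Lm * f)      # magnetizing current ripple during [t0, t1]
Vclamp = Vcell * D / (1 - D)      # clamp capacitor voltage from volt-second balance
Vds = Vcell + Vclamp              # power switch voltage stress = Vcell / (1 - D)

print(f"ripple = {di_Lm * 1e3:.1f} mA, clamp = {Vclamp:.2f} V, stress = {Vds:.2f} V")
# -> ripple = 13.7 mA, clamp = 2.19 V, stress = 5.84 V
```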
CONCLUSIONS
An active cell balancing circuit based on the FAC has
been presented. The proposed cell balancing circuit uses
the FAC circuit to prevent the transformer from
saturating. The proposed circuit operates on all cells
simultaneously, as all the power switches share the same
PWM signal with a constant duty ratio (D) while the
auxiliary switches operate with a constant duty ratio
(1-D). The energy is transferred from the highest-voltage
cell to the lowest-voltage cell; thus, the cells of the battery
string are balanced toward their average value. Simulation
results in PSIM software are presented to prove the
validity of the proposed cell balancing circuit.
ACKNOWLEDGMENT
This research was supported by Basic Science
Research Program through the National Research
Foundation of Korea (NRF) funded by the Ministry of
Science,
ICT and Future Planning (NRF2014R1A1A1036384).
REFERENCES
Q. Li, F. C. Lee, and M. M. Jovanovic, “Large-signal transient analysis
of forward converter with active-clamp reset,” in IEEE PESC
Rec., 1998, pp. 633–639.
B. Carsten, “Design techniques for transformer active reset circuit at
high frequencies and power levels,” in Proc. HFPC, 1990, pp.
235–246.
M. Einhorn, W. Roessler, and J. Fleig, “Improved performance of
serially connected Li-ion batteries with active cell balancing in
electric vehicles,” IEEE Trans. Veh. Technol., vol. 60, no. 6, pp.
2448–2457, Jul. 2011.
Jaejung Yun, Taejung Yeo Jangpyo Park, "High efficiency Christopher
D. Bridge, “Clamp Voltage Analysis for RCD Forward
Converters,” in Proc. IEEE APEC’00 Mar. 2000, pp. 959-965.
Optimization of Process Parameters of Dissimilar Alloys
AA5083 and 5456 by Friction Stir Welding
Jaiganesh. V
Professor, Department of Mechanical Engineering, S. A. Engineering College, Chennai-600 077, India
[email protected]
Abstract—In this study, dissimilar aluminium alloys AA5083 and 5456 have been welded using the solid-state
friction stir welding (FSW) method. Joining of dissimilar alloys in FSW is achieved by means of frictional heat
generated between a rotating tool and the workpiece. In order to obtain high-quality welds at the optimum level, a
number of experiments were carried out in FSW by selecting suitable process parameters. The optimum values
obtained are a tool rotational speed of 1200 rpm and a welding speed of 50 mm/min. The welded joint has been
investigated by microstructural analysis using SEM tests. The mechanical properties of the welds were evaluated
for joint efficiency based on tensile strength, yield strength and percentage of elongation.
Keywords- Friction stir welding; Aluminium Alloys AA 5083-5456; Dissimilar joint; Mechanical Properties;
Microstructure
INTRODUCTION
In this paper two dissimilar alloys (AA5083 and
5456) are welded using the friction stir welding process.
Friction stir welding is done by the use of frictional heat
between the rotating tool and the workpiece. In industry,
joining two or more combinations of materials plays a
vital role in making components and structures. FSW
uses a rotating non-consumable tool with a shoulder and
pin configuration. These aluminium alloys are chosen for
their high strength and light weight: AA5083 aluminium
alloy is taken because of its higher machinability, high
corrosion resistance and higher yield strength, while 5456
aluminium alloy is chosen for its high strength and better
weldability. Friction stir welding is an unconventional
welding process that is more suitable for welding
aluminium materials than conventional welding
processes. Compared to traditional welding processes,
FSW provides less distortion, ease of automation,
superior mechanical properties and minimal residual
stresses. Moreover, consumable filler material, shielding
gas and edge preparation are not necessary in FSW. In
the welding process, the plates to be welded (AA5083 and
5456) are first fixed in the fixture of the FSW machine.
The rotating tool, made of high speed steel, is penetrated
into the joint and traversed from one end to the other.
Due to the frictional heat between the tool and workpiece,
the two plates are welded together. After welding, tests
such as SEM analysis, tensile testing and micro and
macro analysis are carried out. SEM analysis is
conducted to determine the crystalline orientation and
external texture of the welded portions. The tensile test
was conducted as per the American Society for Testing
and Materials (ASTM) standard to determine the tensile
strength of the welded portions. Macrostructural analysis
is carried out to check for defects at the cross section of
the weld, whereas microstructural analysis is conducted
to examine the grain structure of the welded portions.
Finally, analysis of variance (ANOVA) techniques are
used to predict the results numerically, and the results are
compared with the experimental values. The friction stir
welding machine is shown in Figure 1.
Figure 1. Friction Stir Welding Machine
EXPERIMENTAL PROCEDURE
In this setup, aluminium alloy plates (AA5083 and
5456) of 100 mm length, 50 mm breadth and 6 mm
thickness are taken. The tool used for welding the
material is high speed steel with a taper cylindrical tool
pin profile. The two plates are butted together and fixed
in the fixture of the friction stir welding machine. To
fabricate the weld, a low-power 11 kW electric motor is
used. The tool is traversed from one end of the welding
path to the other. Due to the rotation of the tool, frictional
heat is generated between the tool pin and the workpiece,
causing the plastic deformation that joins the two
materials. The experiment is conducted with various
ranges of parameters such as welding speed and tool
rotational speed: the welding speed ranges from
50 mm/min to 150 mm/min and the tool rotational speed
from 900 to 1500 rpm. While conducting the experiment,
it was observed that when the welding speed and tool
rotational speed exceed 100 mm/min and 1400 rpm
respectively, the quality of the weld was not proper. To
select the best range of rotational speed, it was raised
from 900 to 1500 rpm. It was observed that as the speed
increases above 1200 rpm, the weld quality was not good.
Also, decreasing the speed below 1000 rpm does not
create enough friction between the tool and workpiece,
which stalls the tool. The experiment was then conducted
on nine pairs of plates with welding speeds of 50, 100 and
150 mm/min and tool rotational speeds of 1000, 1200 and
1400 rpm.
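The nine parameter combinations form a simple full-factorial design; a minimal sketch enumerating the runs (an illustration, not the authors' code):

```python
# Enumerate the nine welding runs: three tool rotational speeds
# crossed with three welding speeds, as described above.
from itertools import product

rotational_speeds = [1000, 1200, 1400]   # rpm
welding_speeds = [50, 100, 150]          # mm/min

runs = list(product(rotational_speeds, welding_speeds))
for i, (rpm, speed) in enumerate(runs, start=1):
    print(f"Run {i}: {rpm} rpm, {speed} mm/min")
assert len(runs) == 9   # nine pairs of plates
```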
F. Selection of Work Material
There are various series of aluminium alloy available
for manufacturing. In automotive, aerospace and
shipbuilding manufacturing, the 2000 and 5000 series are
mostly used, with different grades within these series
used for various applications. Two different grades
(AA5083 and 5456) of the 5000 series were chosen. The
hardness of AA5083 is lower than that of 5456. The
mechanical properties of the two materials are shown in
Table 1, and the chemical composition of the aluminium
alloys is shown in Table 2.
Table 1. MECHANICAL PROPERTIES OF ALUMINIUM ALLOY

| Material | Yield strength (MPa) | Tensile strength (MPa) | Hardness (HRB) |
| AA5083   | 195                  | 305                    | 26             |
| 5456     | 230                  | 325                    | 27.5           |

Figure 2. Dimensions of Taper Cylindrical Tool
H. Welding Parameters
There are various parameters that affect the welding
characteristics, such as welding speed (mm/min),
rotational speed (rpm) and load (kN). The parameters that
influence the welding process are shown in Table 3, and
the experimental values obtained for the welding process
are shown in Table 4. The friction stir welding of the
aluminium plates is shown in Figure 3.
Table 2. CHEMICAL COMPOSITION OF ALUMINIUM ALLOY

| Element | Wt% for AA5083 | Wt% for AA5456 |
| Si      | 0.117          | 0.044          |
| Fe      | 0.180          | 0.417          |
| Mn      | 0.620          | 0.451          |
| Cu      | 0.016          | 0.041          |
| Mg      | 4.400          | 4.886          |
| Zn      | 0.010          | 0.012          |
| Cr      | 0.084          | 0.069          |
| Ti      | 0.030          | 0.010          |
| Ni      | -              | 0.005          |
| Al      | Bal.           | Bal.           |

Figure 3. Friction Stir Welding of Aluminium Plates
G. Tool Selection
The selection of the tool material is an important
consideration for obtaining a better quality of weld. In
this experiment, high speed steel (HSS) is taken as the
tool material for joining the two different aluminium
alloys. HSS is selected because of its greater strength,
long tool life and high thermal resistance. A cylindrical
tapered tool with a shoulder diameter of 10 mm, a pin
diameter tapering from 6 mm to 3 mm, a pin length of
5.8 mm and a taper inclination of 15 degrees is used for
welding the material. A wide variety of tool pin profiles
can be used for joining the material; the taper cylindrical
pin profile is used here because it increases the frictional
area and sweeps less material, achieving better tensile
strength than other profiles such as the straight
cylindrical profile. The taper cylindrical tool is shown in
Figure 2.
Table 3. PARAMETERS INFLUENCING THE WELDING PROCESS

| S. No | Process                                   | Weld Conditions   |
| 1     | Rotational speed of the FSW tool (N), rpm | 1000 - 1400       |
| 2     | Welding speed (S), mm/min                 | 50 - 150          |
| 3     | Axial force (F), kN                       | 1 - 4             |
| 4     | Shoulder diameter (Sd), mm                | 20                |
| 5     | Tool pin profile                          | Taper cylindrical |
| 6     | Pin diameter (Pd), mm                     | 3                 |
| 7     | Pin length (Pl), mm                       | 5.8               |
Table 4. EXPERIMENTAL VALUES

| Speed of Rotating Tool (rpm) | Feed Rate (mm/min) | Load (kN) |
| 1000                         | 50                 | 0.35      |
| 1000                         | 100                | 2.49      |
| 1000                         | 150                | 3.15      |
| 1200                         | 50                 | 2.85      |
| 1200                         | 100                | 2.99      |
| 1200                         | 150                | 3.12      |
| 1400                         | 50                 | 2.96      |
| 1400                         | 100                | 2.69      |
| 1400                         | 150                | 3.42      |
Figure 4. Specimen Before Tensile Test
RESULTS AND DISCUSSIONS
I. Testing
The welded aluminium alloy plates were cut into
I-sections and reduced to the required thickness using the
wire cut EDM process. After cutting, the extra profiles
present on the welded areas were filed off and flat
surfaces were obtained. The tensile test was then carried
out on the welded aluminium plates using a universal
testing machine (UTM) to determine the ultimate tensile
strength (UTS) and the yield strength (YS). Various
tensile properties such as ultimate strength, yield strength
and elongation were evaluated on the welded aluminium
plates. After fabricating the joint, the cross section of the
weld was subjected to tensile stress to determine the
mechanical properties of the welded joint; the I-section
cut in the wire cut EDM process has dimensions as per
the ASTM (American Society for Testing and Materials)
standard. The I-section of the welded plates before the
tensile test is shown in Figure 4, and after the tensile test
in Figure 5.
Figure 5. Specimen After Tensile Test
The results were obtained for the various specimens
using the universal testing machine (UTM), and stress vs
strain graphs were also obtained from the testing process.
The stress vs strain graph of the specimen obtained from
the tensile test is shown in Figure 6.
Figure 6. Stress Vs Strain Graph of the Specimen
Figure 8. Microstructure of Welded Portion
J. Microstructural Analysis
The structural analysis is used to analyze the fine grain
structure of the cross section of the weld and to identify
the formation of new grains. Metallographic weld
evaluations can take many forms. In its simplest form, a
weld deposit can be visually examined for large-scale
defects such as porosity or lack-of-fusion defects. On a
micro scale, the examination can take the form of phase
balance assessments from weld cap to weld root, or a
check for non-metallic or third-phase precipitates.
Examination of weld growth patterns is also used to
determine reasons for poor mechanical test results.
Fig. 7 shows the microstructure of the parent metal
AA5083 in wrought form at 100X; grain orientation
along the rolling direction is observed. The constituents
are fine particles of Mg2Si phases present undissolved in
the aluminium solid solution; the other constituents are
the intermetallic Al6(Fe, Mn). The microstructure of the
welded portion of the aluminium alloy AA5083 is shown
in Fig. 8: the fusion zone shows alternate layers of base
metal and fusion zone material, indicating good plasticity.
The parent metal 5456 microstructure is similar to that of
5083, except for more eutectic particles/phases due to its
higher alloying element content; it is shown in Fig. 9.
Figure 9. Microstructure of AA 5456
K. SEM Analysis
The SEM analysis is conducted to examine the
crystalline orientation as well as the external texture of
the welded zone. Other information, such as crystal
structure and chemical composition, can also be obtained
using the scanning electron microscope. The welded
portion can be viewed at magnifications from 100X to
300X. The image of the crystalline orientation of the
welded portion (magnification of 500x, scale 100 µm) is
shown in Fig. 9, and the image of the non-uniform
material transformation in the welded portion
(magnification of 1000x, scale 10 µm) is shown in
Fig. 10.
Crystalline orientation of the welded portion (magnification of 500x, scale 100 µm)
Figure 7. Microstructure of AA 5083
Non-uniform material transformation in the welded portion (magnification of 1000x, scale 10 µm)
CONCLUSION
In this investigation, the mechanical properties and
microstructure of the dissimilar joints of AA5083 and
5456 alloys were evaluated.
It is concluded that a good quality weld can be
obtained by adjusting two parameters, the welding speed
and the tool rotational speed, both of which play a
significant role in determining the weld quality.
Experimentally, it was observed that the optimum
welding speed and tool rotational speed are 50 mm/min
and 1200 rpm respectively, and the corresponding output
parameters of the welded portion are as follows:
 Tensile strength = 191 MPa
 Yield strength = 170 MPa
 Percentage of elongation = 8.27%
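The abstract evaluates the welds for joint efficiency based on tensile strength; a minimal check against the parent-metal values of Table 1, assuming the efficiency is quoted with respect to the weaker parent alloy (AA5083), since the paper does not state the reference value:

```python
# Joint efficiency = weld tensile strength / parent tensile strength.
# The reference parent value is an assumption (weaker alloy, AA5083, Table 1).
weld_uts = 191.0     # MPa, optimum joint (50 mm/min, 1200 rpm)
parent_uts = 305.0   # MPa, AA5083 parent metal

efficiency = 100.0 * weld_uts / parent_uts
print(f"Joint efficiency ~ {efficiency:.1f} %")   # ~62.6 %
```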
Use of Vegetables Oil as Alternate Fuels in Diesel Engines – A
Review
B.Gokul1*, S.Prabhakar2,S.Prakash3, M.Saravana Kumar4, Praveen.R5
1*
2,3,4,5
B.E, Final Year, Department Of Mechanical Engineering, Avit, Vmu, Chennai,Tamil Nadu, India
, Assistant Professor, Department Of Mechanical Engineering, Avit, Vmu, Chennai,Tamil Nadu, India
Email:id: [email protected]
Abstract—The world is confronted with the twin crises of fossil fuel depletion and environmental degradation. The
indiscriminate extraction and consumption of fossil fuels have led to a reduction in petroleum reserves. Petroleum-based
fuels are obtained from limited reserves, and these finite reserves are highly concentrated in certain regions of
the world. Therefore, countries not having these resources are facing a foreign exchange crisis, mainly due to the
import of crude petroleum oil. Hence it is necessary to look for alternative fuels which can be produced from
materials available within the country. Although vegetable oils can fuel diesel engines, their high viscosities, low
volatilities and poor cold flow properties have led to the investigation of their various derivatives. Among the
different possible sources, fatty acid methyl esters, known as biodiesel, derived from triglycerides (vegetable oils
and animal fats) by transesterification with methanol, present the most promising alternative substitute to diesel
fuels and have received the most attention nowadays. Biodiesel does not contribute to a rise in the level of carbon
dioxide in the atmosphere and consequently to the greenhouse effect.
Keywords: Vegetable oil, Biodiesel, Diesel engines.
Introduction
It is known that the remaining global oil resources
appear to be sufficient to meet demand up to 2030, as
projected in the 2006–2007 world energy outlook by the
International Energy Information Administration
(Kjarstad et al 2009). There is, therefore, a demand to
develop alternative fuels, motivated by the need to reduce
dependency on the limited resources of fossil fuel. In this
respect, biodiesel has been proposed as an alternative
solution in view of increasing energy demand and
environmental awareness. Vegetable oil is not a new fuel
for the CI engine: a hundred years ago Rudolf Diesel
tested vegetable oil in his engine (Chen Hu et al 2010),
and he demonstrated his engine at the Paris Exposition of
1900 using peanut oil as fuel. In 1911 he stated, “The
Diesel engine can be fed with vegetable oils and would
help considerably in the development of agriculture of the
countries which use it”. In 1912 he said, “The use of
vegetable oils for engine fuels may seem insignificant
today. But such oils may become in course of time as
important as petroleum and the coal tar products of the
present time” (Babu et al 2003). With the advent of cheap
petroleum, appropriate crude oil fractions were refined
for use as fuel, and diesel engines evolved together with
them. In the 1930s and 1940s vegetable oils were used as
diesel fuels from time to time, but usually only in
emergency situations. Recently, because of the rise in
crude oil prices, the limited resources of fossil fuel and
environmental concerns, there has been a renewed focus
on vegetable oils for making biodiesel fuels (Hak-Joo
Kim et al 2004).
It is well known that biodiesel is not toxic, contains
no aromatics, has higher biodegradability than diesel, is
less polluting to water and soil, and does not contain
sulphur (Pramanik 2003). Bio-diesel contains no
petroleum, but it can be blended at any level with
petroleum diesel to create a bio-diesel blend, or it can be
used in its pure form. Just like petroleum diesel, bio-diesel
operates in compression ignition engines, which
essentially require very little or no engine modification
because bio-diesel has properties similar to petroleum
diesel fuels. It can be stored just like petroleum diesel
fuel and hence does not require separate infrastructure.
The use of bio-diesel in conventional diesel engines
results in a substantial reduction of unburnt hydrocarbons,
carbon monoxide and particulate matter. Bio-diesel is
considered a clean fuel since it has almost no sulphur, no
aromatics and about 10% built-in oxygen, which helps it
to burn fully. Its higher cetane number improves the
ignition quality even when blended with petroleum diesel
(Advani 2003).
Diesel Engines
Diesel engines are usually classified into two
categories: direct and indirect injection engines. Direct
injection means the fuel is injected directly into the
combustion chamber. The fuel is injected under high
pressure through a nozzle with a single orifice or multiple
tiny orifices. This results in a fuel spray with very fine
droplets, making it easier for the fuel to evaporate and
burn. In indirect injection engines, by contrast, the fuel is
injected into an auxiliary chamber that is adjacent and
connected to the main combustion chamber. Combustion
starts sooner in this chamber, and burning
gases exit the chamber with high velocities, giving a
greater ability for mixing of fuel and air. These types of
engines are not very sensitive to the ignition quality of
the fuel. The advantages of diesel engines are greater
efficiency, durability and good fuel economy compared
to gasoline engines; therefore, the application range of
diesel engines is very wide. Most applications of diesel
engines are in the major transportation sector, such as
buses, trucks, trains and ships, and in heavy machinery
like construction equipment (John B. Heywood 1988).
Need for alternative fuels
World energy supply has relied heavily on
non-renewable crude-oil-derived (fossil) liquid fuels, of
which 90% is estimated to be consumed for energy
generation and transportation. It is also known that
emissions from the combustion of these fuels are among
the principal causes of global warming, and many
countries have passed legislation to arrest their adverse
environmental consequences. With population increasing
rapidly and many developing countries expanding their
industrial base and output, worldwide energy demand is
bound to increase; on the other hand, known crude oil
reserves could be depleted in less than 50 years at the
present rate of consumption. This situation initiated and
has sustained interest in identifying and channeling
renewable raw materials into the manufacture of liquid
fuel alternatives, because the development of such
biomass-based power would ensure that new technologies
are available to keep pace with society's need for new
renewable power alternatives in the future. The answer to
future fuel needs surely lies in alternative fuels (Abdul
Kalam 2011).
Biodiesel
Biodiesel (or biofuel) refers to a vegetable oil- or
animal fat-based diesel fuel consisting of long-chain alkyl
(methyl, propyl or ethyl) esters. Biodiesel is typically
made by chemically reacting lipids (e.g., vegetable oil or
animal fat (tallow)) with an alcohol. Biodiesel is meant to
be used in standard diesel engines and is thus distinct
from the vegetable and waste oils used to fuel converted
diesel engines. Biodiesel can be used alone or blended
with petrodiesel, and it can also be used as a low-carbon
alternative to heating oil (Agarwal 2001). The choice of
feedstock is country specific and depends on availability:
the United States uses soybean, Europe rapeseed and
sunflower, Canada canola, Japan animal fat and Malaysia
palm oil. In India, non-edible oil is most suitable as
biodiesel feedstock, since the demand for edible oil
exceeds the domestic supply. It is estimated that the
potential availability of such oils in India amounts to
about 1 million tons per year; the most abundant oil
sources are Jatropha oil, mahua oil, neem oil and
Pongamia oil, also known as Karanja oil. Implementation
of biodiesel in India will also bring many advantages, like
providing green cover to wasteland, supporting the
agricultural and rural economy, and reducing dependency
on imported crude oil and air pollution (Tewari et al
2003, Pant et al 2003 and Demirabas et al 2007).
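For reference, the transesterification step mentioned above follows the standard overall stoichiometry (a textbook relation, not given in the paper): one triglyceride reacts with three moles of alcohol over a catalyst to give three fatty acid alkyl esters (biodiesel) and glycerol,

$$\text{triglyceride} + 3\,\text{CH}_3\text{OH} \xrightarrow{\text{catalyst}} 3\,\text{RCOOCH}_3 + \text{glycerol}.$$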
Vegetable oil as an alternative fuel
Vegetable oils present a very promising alternative to
diesel oil since they are renewable and can be produced
easily in rural areas, where there is an acute need for
modern forms of energy. This was stated with remarkable
foresight by none other than the inventor of the diesel
engine, Rudolf Diesel: ‘The Diesel engine can be fed with
vegetable oils and would help considerably in the
development of the countries which will use it. This may
appear like a futuristic dream, but I can predict with great
conviction that this way of using a diesel engine may in
future be of great importance’ (Babu et al 2003).
During the early stages of the diesel engine, strong
interest was shown in the use of vegetable oils as fuel, but
this interest declined in the late 1950s after the supply of
petroleum products became abundant. During the early
1970s, however, the oil shock caused a renewed interest
in vegetable oil fuels. This interest evolved after it
became apparent that the world's petroleum reserves were
dwindling. At present, in order to replace a part of
petroleum-based diesel usage, the use of the vegetable oil
product biodiesel has started in some countries.
Vegetable oils are a renewable energy source, and
significant environmental benefit can be derived from the
combustion of vegetable oil based biodiesel rather than
petroleum based diesel fuels (Peterson et al 1992 and
Agarwal 1988).
Generally, there are three forms in which vegetable
oils are used as fuel in diesel engines: neat or pure
vegetable oils, blends of vegetable oils with diesel fuel,
and transesterified vegetable oils. The first and second
forms have problems associated with the long-term
performance of diesel engines because of higher fuel
viscosity. The esters of vegetable oils, however, have
significantly lower viscosities than neat or blended
vegetable oil fuels, so the viscosity-related problems are
greatly reduced. The most promising applications of
vegetable oils as diesel fuels are therefore the esters of
vegetable oils. Methyl, ethyl and butyl esters produced by
transesterification of vegetable oils are commonly known.
Presently, the best-known method of biodiesel usage is
blending with conventional diesel fuel (Agarwal 2007).
BIO-DIESEL SCENARIO IN OTHER COUNTRIES
Several countries in the world have active biodiesel
programs, and they have provided legislative support and
drawn up national policies on biodiesel development.
France is the world's largest producer of biodiesel; its
conventional diesel contains 2 to 5 per cent biodiesel, and
that will soon apply to the whole of Europe (Schlautman
et al. 1986). Germany has more than 1,500
biodiesel filling stations. Sunflower-based biodiesel has
had good success in France and the UK. The full potential
of jatropha is far from being realized. The Agricultural
Research Trust (ART), Zimbabwe, has developed
non-toxic varieties of Jatropha, which would make the
seed cake following oil extraction suitable as animal feed
without detoxification. Jatropha cultivation and
management is poorly documented in South Africa, and
little field experience is available there; currently,
growers are unable to achieve the optimum economic
benefits from the plant. The markets for the different
products have not been properly explored or quantified,
nor have the costs or returns (both tangible and intangible)
of supplying raw materials or products to these markets.
Consequently, actual or potential growers, including
those in the subsistence sector, do not have an adequate
information base about the potential and economics of
such plants to make decisions relating to their livelihood,
not to mention commercial exploitation (Meher et al.
2006).
Australian Bio-diesel Industries has opened a
35,000 tonnes/year plant in New South Wales using
vegetable oils, fats and used oils. The Australian
Government has proposed a national standard for
bio-diesel and has also announced funding to help
biodiesel production. The Brazilian Government
incorporated 5% of its vegetable oils (palm oil, soya oil,
castor oil) with fuel to produce bio-diesel in 2005; it is
estimated that about 2 million tonnes of vegetable oil is
used to meet this target. In 2008, Petrobras Bio-fuels
inaugurated its first plant to produce 57 million litres of
bio-diesel a year. The present production of China is more
than 50,000 tonnes, with plants in the Fujian, Sichuan and
Hebei areas. About 60,000 tonnes of bio-diesel was
produced in the Czech Republic in the early 90s. Today,
the largest producer there has two plants of 39,000 and
13,000 tonnes capacity each, while another producer has
a plant with 50,000 tonnes capacity, and more units are
planned in the country. Setuza, the largest producer of
rapeseed oil methyl ester, will be producing 50,000
tonnes of bio-diesel (Asia-Tissue World Magazine 2009).
In European markets, there is increased demand for
bio-diesel every year. The Dutch Government decided to
encourage the availability of bio-diesel from January
2006 onwards to meet the target of 2% by making it
economically attractive; the situation is reviewed
periodically. Indonesia will also follow Malaysia in using
excess palm oil for the production of bio-diesel, as
announced by Indonesia's Agriculture Minister;
Indonesia's planned bio-diesel capacity was nearly 3.4
million tonnes in 2008. Malaysia will be converting
surplus palm oil into bio-diesel soon (Bernama – The
Malaysian National News Agency 2001).
The U.S.A. had a bio-diesel production of about 444.5
million gallons in 2007 and was projected to use 7.50
billion gallons in 2012. The U.S. Senate approved the
Energy Bill in August 2003 with tax provisions for
bio-diesel. Major transport companies in different U.S.
cities use this fuel for their city bus services, and its use is
growing every day (Annual Energy Outlook 2009). In
general, the bio-diesel scenario all over the world is
growing at a rapid pace, with the U.S.A., France and
Germany as the leaders. Additional capacity is also
expected from Japan and palm-oil-producing countries
like Indonesia and Malaysia in the near future (Stefan
Majer et al. 2008).
BIO-DIESEL SCENARIO IN INDIA
India's energy demand is expected to grow at an
annual rate of 4.8 per cent over the next couple of
decades. Most of the energy requirement is currently
satisfied by fossil fuels: coal, petroleum-based products
and natural gas. Domestic production of crude oil can
only fulfil 25-30 per cent of national consumption; the
rest is imported. In these circumstances, biofuels are
going to play an important role in meeting India's
growing energy needs. The projected requirement of
biofuel for blending under different scenarios is given in
Table 1.
| Year    | Petrol demand (Mt) | Diesel demand (Mt) | Biodiesel @5% (Mt) | @10% (Mt) | @20% (Mt) |
| 2006-07 | 10.07              | 52.32              | 2.62               | 5.23      | 10.46     |
| 2011-12 | 12.85              | 66.91              | 3.35               | 6.69      | 13.38     |
| 2016-17 | 16.40              | 83.58              | 4.18               | 8.36      | 16.72     |

Table 1: Projected demand for petrol and diesel and
biofuel requirements
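The blending requirements in Table 1 follow directly from the projected diesel demand multiplied by the blend fraction; a minimal sketch reproducing the table (an illustration, not from the source):

```python
# Biodiesel blending requirement (Mt) = diesel demand (Mt) x blend fraction.
diesel_demand_mt = {"2006-07": 52.32, "2011-12": 66.91, "2016-17": 83.58}

for year, demand in diesel_demand_mt.items():
    row = [round(demand * blend, 2) for blend in (0.05, 0.10, 0.20)]
    print(year, row)   # e.g. 2006-07 -> [2.62, 5.23, 10.46]
```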
The demand for diesel is five times higher than the
demand for petrol in India, but the biodiesel industry is
still in its infancy. India's current biodiesel technology of
choice is the transesterification of vegetable oil. India has
great potential for the production of bio-fuels like
bio-ethanol and biodiesel from non-edible oil seeds: of
about 100 varieties of oil seeds, only 10-12 varieties have
been tapped so far. The Government of India has
developed an ambitious National Biodiesel Mission
comprising six micro missions covering all aspects of
plantation, procurement of seed, extraction of oil,
transesterification, blending and trade, and research and
development, to meet 20 per cent of the country's diesel
requirement by 2011-2012. Diesel forms nearly 40% of
the energy consumed in the form of hydrocarbon fuels,
and its demand is estimated at 40 million tons. As India
is deficient in edible oil, with demand for edible vegetable
oil exceeding supply, the Government decided to use
non-edible oil from Jatropha curcas oilseeds as biodiesel
feedstock (Vijai Pratap Singh 2011).
India's demand for petroleum products is likely to rise
from 97.7 million tonnes in 2001-02 to around 139.95
million tonnes in 2006-07, according to projections of the
Tenth Five-Year Plan. The plan document puts the
compound annual growth rate (CAGR) at 3.6% during the
plan period. Domestic crude oil production is likely to
rise marginally from 32.03 million tonnes in 2001-02 to
33.97 million tonnes by the end of the 10th plan period
(2006-07). India's self-sufficiency in oil has consistently
declined from 60% in the 1950s to 30% currently, and is
expected to go down to 8% by 2020 (Bureau of Energy
Efficiency 2011). Final energy consumption is the actual
energy demand at the user end; it is the difference
between primary energy consumption and the losses that
take place in transport, transmission and distribution, and
refinement. The actual final energy consumption (past
and projected) is given in Table 1.2.
| Source       | Units                | 1994-95 | 2001-02 | 2006-07 | 2011-12 |
| Electricity  | Billion Units        | 289.36  | 480.08  | 712.67  | 1067.88 |
| Coal         | Million Tonnes       | 76.67   | 109.01  | 134.99  | 173.47  |
| Lignite      | Million Tonnes       | 4.85    | 11.69   | 16.02   | 19.70   |
| Natural Gas  | Million Cubic Meters | 9880    | 15730   | 18291   | 20853   |
| Oil Products | Million Tonnes       | 63.55   | 99.89   | 139.95  | 196.47  |

Table 1.2: Demand for commercial energy for final consumption
Approximately 85 per cent of the operating cost of a
biodiesel plant in India is the cost of feedstock. Securing
one's own feedstock to ensure supply at a fair price, and
sourcing it locally to avoid long haulage for delivery of
seeds to the biodiesel plant, are critical factors in
controlling profitability. The capital cost, both in India
and internationally, is around Rs 15,000-20,000 per MT
of biodiesel produced. At 10,000 MTPA, the capital cost
of an oil extraction and transesterification plant would be
Rs 20,000 per MT of capacity. A plant size of 10,000
MTPA can be considered optimal, assuming a cost of oil
extraction of Rs 2360/MT and a cost of transesterification
of Rs 6670/MT, with byproducts produced at 2.23 MT of
seed cake per MT of biodiesel and 95 kg of glycerol per
MT of biodiesel. Fixed costs towards manpower,
overheads and maintenance are 6 per cent of capital cost,
and depreciation is 6.67 per cent of capital cost. The
return on investment (ROI) is 15 per cent pre-tax on
capital cost. As per the Government of India's Vision
document 2020, cultivating 10 million ha with Jatropha
would generate 7.5 million tonnes of fuel a year, creating
year-round jobs for five million people (Shanker et al
2006 and Shahi 2006).
Conclusion
The following conclusions are made based on the study:
 Vegetable oils can be successfully applied in CI
engines through fuel modifications and engine
modifications.
 Comparing the emission characteristics, HC and
CO are reduced compared to diesel; however,
NOx and CO2 emissions are slightly increased
compared to diesel.
 The advantages of biodiesel are its renewability,
better-quality exhaust gas emissions, its
biodegradability, and the fact that the organic
carbon present in it is photosynthetic in origin.
 The current availability of vegetable oil limits
the extent to which biodiesel can displace
petroleum to a few per cent; new oil crops could
allow biodiesel to make a major contribution in
the future.
REFERENCES
[1] Chen Hu, Shi-Jin Shuai and Jian-Xin Wang (2007) ‘Study on
combustion characteristics and PM emission of diesel engines using
ester-ethanol-diesel blends’, Proceedings of the Combustion Institute,
Vol. 31, pp. 2981-2989.
[2] Babu A.K. and Devaradjane G. (2003) ‘Vegetable oils and their
derivatives as fuels for CI engines’, SAE paper 2003-01-0767.
[3] Hak-Joo Kim, Bo-Seung Kang, Min-Ju Kim, Young Moo Park,
Deog-Keun Kim and Kwan-Young Lee (2004) ‘Transesterification of
vegetable oil to biodiesel using heterogeneous base catalyst’, Catalysis
Today, Vol. 93, pp. 315-320.
[4] Pramanik K. (2003) ‘Properties and use of Jatropha curcas oil and
diesel fuel blends in CI engine’, Renewable Energy, Vol. 28,
pp. 239-248.
[5] Vijai Pratap Singh (2011) ‘An Assessment of Science and Policy’,
Indian Biofuel Scenario 2011.
[6] Shahi R.V. (2006) ‘Energy markets and technologies in India’,
Keynote Address in Global Energy Dialogue at Hanover (Germany),
April 25, 2006.
[7] Bureau of Energy Efficiency (2011).
[8] Barnwal B.K. and Sharma M.P. (2005) ‘Prospects of biodiesel
production from vegetable oils in India’, Renewable and Sustainable
Energy Reviews, Vol. 9, pp. 363-378.
[9] Deepak Agarwal, Lokesh Kumar and Avinash Kumar Agarwal
(2008) ‘Performance evaluation of a vegetable oil fuelled CI engine’,
Renewable Energy, Vol. 33, pp. 1147-1156.
A Telepresence And Autonomous Tour Guide Robot
Alpha Daye Diallo, Suresh Gobee, Vickneswari Durairajah
Asia Pacific University of Technology and Innovation
Technology Park Malaysia, Bukit Jalil, 57000
Kuala Lumpur, Malaysia
[email protected], [email protected], [email protected]
Abstract — This paper describes a novel approach to implementing a low-cost multitasking robot. The robot has
the ability to operate both as a telepresence robot and as a tour guide robot. It can be remotely controlled through a
website from anywhere in the world, giving users the sensation of being in two places at once. It also serves as an
autonomous indoor tour guide robot for the Asia Pacific University Engineering Labs. The entire system runs on a
credit-card-sized embedded computer, the Raspberry Pi 2. The tour guide system uses wall following and a very
simple yet fast and accurate image processing technique for robot navigation and localization. Google
speech-to-text and text-to-speech APIs have been used so that the robot can efficiently interact with visitors
through voice recognition.
Keywords — Telepresence Robot, Autonomous Tour Guide Robot, Vision Based Navigation, Voice recognition.
I. INTRODUCTION
Video conferencing applications are great ways of
communicating. However, these devices suffer from a
drawback: a lack of flexibility. When making a video
call, users can only see the area covered by the device
receiving the call and have no control over the view,
such as looking in another direction or moving around,
unless they receive help from the person on the other
side of the line. To address this problem, engineers
came up with the concept of telepresence robots, which
are essentially video conferencing devices (such as a
phone or a tablet) on wheels.
The term telepresence refers to technologies built for
the remote control of machines or devices that give the
human operator the sensation of being in another
location. In recent years, the market demand for
telepresence robots has significantly increased. These
robots find their use in various domains such as
educational, health, and business environments [9].
The second part of this paper focuses on building an
autonomous indoor tour guide robot capable of guiding
visitors around the Asia Pacific University Engineering
Labs facilities. Instead of building two separate robots,
the current research aims to create a multipurpose robot
that incorporates both a tour guide and a telepresence
system. Hence, instead of paying for two robots, which
could cost thousands of dollars without including
maintenance expenses, the present robot combines those
two technologies into one and comes at an affordable
cost with the same standard as those currently available
in the market.
This robot is mostly suitable for educational
environments such as universities and colleges, where
students and lecturers can use it to remotely attend
classes when they cannot be there in person. It is also
very popular in museums, where it is used as a tour
guide robot to guide visitors through the place.
Furthermore, this kind of robot is also used in big firms,
where managers on leave can use it to stay in touch with
their employees, monitor work progress, or attend
important meetings. These are just some of the
applications of the present telepresence and tour guide
robot.
II. RELATED WORK
A. Telepresence Robot
Telepresence systems suffer from major challenges
due to their high dependence on the internet. The faster
the exchange of information, such as live video feed and
commands, between the remote control station and the
robot, the better the performance of the telepresence
system. A slow or unreliable internet connection may
result in a lagging or poor-quality video feed, and also in
large delays between the control station and the
telepresence robot [2]. Besides internet-related problems,
telepresence robots also constitute a safety concern, as
they operate in populated environments with all kinds of
obstacles and people moving around. To overcome these
problems, researchers came up with different approaches
such as autonomous navigation, obstacle avoidance, etc.
Do, H.M., et al. in their research study came to the
conclusion that the current features introduced into
telepresence robots, such as autonomous navigation or
obstacle avoidance, are not enough to tackle complex
issues related to telepresence robots. As a result, a new
concept called local situational awareness was presented
whereby the remote user would be assisted by a local user
to overcome all obstacles [2].
Escolano, C., Antelis, J.M. and Minguez, J. introduced
an EEG-operated telepresence robot designed for patients
with neuromuscular disabilities. As the robot receives the
destination through the internet, it autonomously travels
to the chosen location while avoiding all types of obstacles.
Labonte, D., Boissy, P. and Michaud, F. in their
research demonstrated that low-resolution videos
(320x240 pixels) are easier to stream compared to
high-resolution videos. Furthermore, it was argued that
mixed-reality visualization interfaces with video-centric
and map-centric modalities considerably improve user
performance compared to web interfaces, even though
such methods require dedicated software rather than a
website.
B. Tour Guide Robot
The key requirement for a successful tour guide robot
is how well it localizes itself and how well it interacts
with people, as analyzed in reference [1]. A tour guide
robot using RFID for localization, with ultrasonic and IR
sensors for obstacle avoidance, was presented by
researchers in reference [11]. However, passive RFID
readers have a limited operating range, which makes them
less reliable, as the robot has a high chance of missing a
tag, and they are also quite costly. An alternative to
RFID-based autonomous navigation is a vision-based
navigation system using QR (Quick Response) code
recognition. Seok Ju Lee, Jongil Lim, Tewolde, G. and
Jaerock Kwon introduced a very efficient wall-following
navigation technique based on real-time QR code
recognition that allows the robot to localize itself.
Several localization and mapping approaches have
been proposed in the past. However, most of those
approaches may not be efficient because they often
require a considerable amount of time to accomplish the
mapping [6]. Reference [8] suggested a tour guide robot
using a very simple method called the weighted centroid
technique. This method consists of placing ZigBee
modules at known locations to provide reference
information to the robot. Unfortunately, the result
obtained was not satisfying, as the robot consistently
missed the final destination by a distance of 3.3 m to
4.5 m.
Beside the QR code recognition technique employed
by [10] in their study, other vision-based localization and
navigation techniques for autonomous mobile robots have
been proposed in the past. Zaklouta, F. and Stanciulescu,
B. proposed machine learning classifiers to overcome
issues related to road traffic sign recognition, which
could be used by indoor mobile robots for navigation as
well. A similar recognition technique was proposed by
[4]; the only difference is that, in addition to traffic sign
recognition, the proposed method includes colour
segmentation and text recognition.
Different researchers employed different approaches
for implementing human interaction with tour guide
robots. Reference [11] suggested a tour guide robot that
communicates with visitors through a touch screen.
Reference [10] proposed a different approach where the
tour guide robot uses an android text-to-speech
application: the robot converts a preloaded string of text
into audio and reads it to visitors whenever it reaches a
place of interest. Unfortunately, both proposed methods
of human-robot interaction were very limited, as the
robot could only talk to visitors and was not able to
collect users' voice commands. Another low-cost
human-machine interaction through voice recognition
was presented by [5]; the proposed system uses a
Raspberry Pi as the main processing unit to recognize six
different languages using web applications.
III. DESIGN GUIDELINES
A. Design Specification
The robot proposed is around 140 cm tall and 50 cm
wide, as shown in Figure 1. The height of the robot was
chosen to be 140 cm so that it does not exceed the
average human height, which ranges from 160 cm to
180 cm.
Figure 1: 3D design overview
B. Control System Structure
A Raspberry Pi is used as the main processor of the
robot to handle most of the processing and computation.
The ultrasonic sensors and the motors are connected to an
Arduino Mega microcontroller, and I2C communication
is used for the data exchange between the Arduino Mega
and the Raspberry Pi. An android tablet placed on top of
the robot serves as a monitor to display the user interface
of the robot. The tablet is also used as a video
conferencing tool when the robot is being remotely
controlled. Hypertext Transfer Protocol (HTTP), a
client-server communication protocol, is used to
exchange information between the Pi (server) and the
tablet (client).
IV.
OPERATING PRINCIPLE
A. User Interface
The Flask module installed on the Raspberry Pi runs
directly after the robot is switched on, as shown in
Figure 2. This module is what turns the Raspberry Pi into
a server, making communication between the user
interface on the android tablet and the robot possible. On
the user interface, as shown in Figure 3, users have two
options.
The first option converts the robot into an autonomous
tour guide robot. The second option activates the
telepresence remote control mode, which makes it
possible to control the robot over the internet.
Figure 2: Overview of the robot main menu
Figure 3: User interface displayed on the tablet
Figure 3 shows the android application designed as the
robot's user interface. A cartoonish animated face is
displayed when the application opens.
B. Robot Interaction
The speech recognition algorithm is what helps the
robot understand what users say. The two main
components used are the Google text-to-speech and
speech-to-text conversion engines.
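A minimal sketch of this voice pipeline using the community SpeechRecognition and gTTS packages is shown below; the library choice and the sample phrases are assumptions, since the paper names only the Google engines:

```python
# Sketch: Google speech-to-text and text-to-speech on a Raspberry Pi.
# Library choice (SpeechRecognition, gTTS) is an assumption, not the authors' code.
import os
import speech_recognition as sr
from gtts import gTTS

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    audio = recognizer.listen(source)             # capture a visitor's utterance

try:
    command = recognizer.recognize_google(audio)  # Google speech-to-text
except sr.UnknownValueError:
    command = ""

if "tour" in command.lower():
    gTTS("Starting the lab tour, please follow me.").save("reply.mp3")
    os.system("mpg321 reply.mp3")                 # play the synthesized reply
```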
C. Telepresence Control
There are three stages in the remote control module:
the user station, the internet, and the robot station, as
shown in Figure 4. The user and the robot exchange
information via a website hosted on the internet.
Figure 4: Block diagram of the web control system for
the telepresence robot.
Before controlling the robot, users are asked to enter a
name and password for authentication. Once approved,
the user is redirected to the control page, which contains
a live video feed from the webcam mounted on top of the
robot and a control panel. When the user presses a button,
the command is sent to the Raspberry Pi, which executes
the command if no obstacle is detected while
simultaneously streaming the video feed back to the user.
Video conferencing applications such as Skype or Viber
are necessary for the user to be able to communicate with
the people on the other side.
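A minimal sketch of the kind of web control server described above is given below; the route name, command codes and I2C forwarding details are illustrative assumptions, not the authors' code:

```python
# Sketch: Flask server on the Raspberry Pi forwarding web commands to the
# Arduino Mega over I2C (route and command encoding are assumptions).
from flask import Flask
import smbus  # I2C access on the Raspberry Pi

app = Flask(__name__)
bus = smbus.SMBus(1)      # I2C bus 1 on the Raspberry Pi 2
ARDUINO_ADDR = 0x08       # hypothetical Arduino Mega slave address

COMMANDS = {"forward": 1, "backward": 2, "left": 3, "right": 4, "stop": 0}

@app.route("/move/<direction>", methods=["POST"])
def move(direction):
    # The Arduino drives the motors only if its ultrasonic
    # sensors report no obstacle in the commanded direction.
    code = COMMANDS.get(direction)
    if code is None:
        return "unknown command", 400
    bus.write_byte(ARDUINO_ADDR, code)
    return "ok"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
```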
V.
EXPERIMENTAL RESULT
The experimental study was conducted on three aspects: 1) the web remote control system for telepresence, 2) the robot's autonomous navigation using image processing and wall following, and 3) the interaction between people and the robot through voice recognition. All three studies and tests were conducted at Asia Pacific University. Feedback from new visitors during the university open days contributed greatly to the improvement of the system.
A. Telepresence System
In this section, the remote control system was
evaluated based on several criteria.
On normal days, with the university's congested internet, there was a delay of 1 to 1.7 s between the user pressing a button and the robot reporting back the execution of the command. Before executing any command, the robot first checks for obstacles; if an obstacle is found, the command is not executed. As for the video stream to the website, a 320 x 240 pixel video was streamed at a frame rate of 10 fps. The reason for streaming a lower resolution video is to keep the stream usable even over poor-quality internet connections. This significantly improved the streaming quality and considerably reduced the streaming delay. It also helps reduce the processing load on the server (Raspberry Pi 2), which is important because other operations also run on the Raspberry Pi, making it crucial to optimize each process. One common way to implement such a stream is sketched below.
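The following sketch shows one common way (an assumption, not necessarily the authors' exact method) to serve 320 x 240 JPEG frames at roughly 10 fps over HTTP using Flask's multipart streaming.

```python
# Hedged sketch: MJPEG-style streaming of low-resolution webcam frames.
import time
import cv2
from flask import Flask, Response

app = Flask(__name__)
camera = cv2.VideoCapture(0)        # USB webcam on top of the robot

def frames():
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        frame = cv2.resize(frame, (320, 240))      # low resolution, as above
        ok, jpeg = cv2.imencode(".jpg", frame)
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
               + jpeg.tobytes() + b"\r\n")
        time.sleep(0.1)                             # cap at about 10 fps

@app.route("/video")
def video():
    return Response(frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")
```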
B. Robot Navigation
The navigation of the robot is mainly handled using feedback data from the camera and the ultrasonic sensors. The robot uses the data collected from the sensors to avoid obstacles and to follow and keep its distance from the wall.
The image processing algorithm described earlier is simple yet effective: the recognition rate is around 90%. The recognition rate drops only when the robot moves at high speed; the robot would have to move two to three times faster than it currently does before it fails to detect the labs.
C. Human Interaction
Human interaction is a critical factor for any tour guide robot. As opposed to previous tour guide robots, the current system has a unique user interface: the virtual face shown on the tablet completely changes the way people see the robot, making it more interactive and user friendly.
Unfortunately, the design is only a small part of the user interface. The most important aspect is how well the robot captures what the user requests and how well it replies or reacts. At this point, the robot depends heavily on the internet for voice recognition. Google provides an offline speech recognition engine; however, the online engines are far more accurate and return several alternative suggestions for a single input.
There are a few limitations to the speech recognition. It only works perfectly in an environment with little or almost no noise: the noisier the place, the lower the accuracy of the voice recognition system. Also, the recognition range is larger in quiet places compared to noisy places. Currently, in order to address the robot, the user has to stand less than 1 m away from it. This was tested in a normal ambient setting with crowds of people moving around and talking.
VI. CONCLUSION AND FUTURE WORKS
In today's world, one of the most important factors to be taken into consideration by engineers when creating a new product is cost. Instead of paying for two robots, which could cost thousands of dollars without including maintenance expenses, the present robot combines the two technologies into one and comes at a low price while matching the standard of those currently available on the market.
In summary, a combined telepresence and autonomous tour guide robot has been implemented. The results were very satisfying, as it was shown that both technologies can coexist in one robot powered by a credit-card-sized embedded mini computer (Raspberry Pi 2). The robot could be controlled over the internet, and it could also successfully show visitors the engineering labs.
Further research can be done on incorporating an artificial intelligence system into the robot so that it becomes smarter and can answer a wider range of questions.
ACKNOWLEDGMENT
A work of this nature would not have been possible without the immense help, goodwill, co-operation and assistance of my supervisor, Mr. Suresh Gobee. I would like to thank him for his guidance, encouragement, and academic support. Furthermore, I would like to acknowledge APU and APCORE (Asia Pacific University Center of Robotic Engineering) members for their valuable contribution to the development of the robot's physical structure. Finally, my profound gratitude goes to my university and all my amazing lecturers.

REFERENCES
[1] Byung-Ok Han, Young-Ho Kim, Kyusung Cho, H.S. Yang (2010) Museum tour guide robot with augmented reality. Virtual Systems and Multimedia (VSMM), 2010 16th International Conference on, pp. 223-229.
[2] Do, H.M., Mouser, C.J., Ye Gu, Weihua Sheng, Honarvar, S., Tingting Chen (2013) An open platform telepresence robot with natural human interface. Cyber Technology in Automation, Control and Intelligent Systems (CYBER), 2013 IEEE 3rd Annual International Conference on, pp. 81-86.
[3] Escolano, C., Antelis, J.M., Minguez, J. (2012) A Telepresence Mobile Robot Controlled With a Noninvasive Brain-Computer Interface. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, vol. 42, no. 3, pp. 793-804.
[4] Gonzalez, A., Bergasa, L.M., Yebes, J.J. (2014) Text Detection and Recognition on Traffic Panels From Street-Level Imagery Using Visual Appearance. Intelligent Transportation Systems, IEEE Transactions on, vol. 15, no. 1, pp. 228-238.
[5] Haro, L.F.D., Cordoba, R., Rojo Rivero, J.I., Diez de la Fuente, J., Avendano Peces, D., Bermudo Mera, J.M. (2014) Low-Cost Speaker and Language Recognition Systems Running on a Raspberry Pi. Latin America Transactions, IEEE (Revista IEEE America Latina), vol. 12, no. 4, pp. 755-763.
[6] Hung-Hsing Lin, Wen-Yu Tsao (2011) Automatic mapping and localization of a tour guide robot by fusing active RFID and ranging laser scanner. Advanced Mechatronic Systems (ICAMechS), 2011 International Conference on, pp. 429-434.
[7] Labonte, D., Boissy, P., Michaud, F. (2010) Comparative Analysis of 3-D Robot Teleoperation Interfaces With Novice Users. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, vol. 40, no. 5, pp. 1331-1342.
[8] MacDougall, J., Tewolde, G.S. (2013) Tour guide robot using wireless based localization. Electro/Information Technology (EIT), 2013 IEEE International Conference on, pp. 1-6.
[9] Oh-Hun Kwon, Seong-Yong Koo, Young-Geun Kim, Dong-Soo Kwon (2010) Telepresence robot system for English tutoring. Advanced Robotics and its Social Impacts (ARSO), 2010 IEEE Workshop on, pp. 152-155.
[10] Seok Ju Lee, Jongil Lim, Tewolde, G., Jaerock Kwon (2014) Autonomous tour guide robot by using ultrasonic range sensors and QR code recognition in indoor environment. Electro/Information Technology (EIT), 2014 IEEE International Conference on, pp. 410-415.
[11] Yelamarthi, K., Sherbrook, S., Beckwith, J., Williams, M., Lefief, R. (2012) An RFID based autonomous indoor tour guide robot. Circuits and Systems (MWSCAS), 2012 IEEE 55th International Midwest Symposium on, pp. 562-565.
[12] Zaklouta, F., Stanciulescu, B. (2012) Real-Time Traffic-Sign Recognition Using Tree Classifiers. Intelligent Transportation Systems, IEEE Transactions on, vol. 13, no. 4, pp. 1507-1514.
An Effective Approach for Parallel Processing with Multiple
Microcontrollers
Gayeon Kim , Abdul Rahim Mohamed Ariffin, Scott Uk-Jin Lee
Gayeon Kim – Dept. Computer Science and Engineering, Hanyang University, South Korea
[email protected]
Abdul Rahim Mohamed Ariffin – Dept. Computer Science and Engineering, Hanyang University, South Korea
[email protected]
Scott Uk-Jin Lee – Dept. Computer Science and Engineering, Hanyang University, South Korea
[email protected]
Abstract— Multithreading is a common technique used to develop systems that process very large data while producing fast execution and maintaining the efficiency of the program. However, the non-deterministic aspects of multithreaded programs are often ignored due to their low apparent impact on the system. Thus, there are various arguments for treating non-determinism as one of the major aspects of developing a multithreaded program. In this paper, we propose a new, effective approach for parallel processing with multiple microcontrollers.
Keywords— Multithreading, Non-determinism, Parallel Processing, Microcontroller
INTRODUCTION
Parallel computing is essential for software development. The importance of parallelism is emphasized more than ever due to the rise of multicore hardware and the high demand for computation in scientific computing, video and image processing, and big-data analytics. Demand is also expected to increase for embedded equipment with the growth of the Information and Communication Technology (ICT) industry.
Multithreading is one of the mainstream technologies in parallel programming. It is widely used in hardware, operating systems, libraries, and programming languages [1, 2]. However, it remains challenging to implement multithreaded programs. The main reason is that multithreaded programs are non-deterministic. In sequential programs, the same inputs bring about the same results. In multithreaded programs, on the other hand, we are not able to predict the results when executing the same program with the same inputs. Even if the same multithreaded program is executed, we have to consider all the different interleavings to recognize all possible results [3, 4].
In this paper, we present the known solutions to the non-determinism problem, deterministic multithreading (DMT) [3, 5, 6] and stable multithreading (Stable MT) [1], and the limitations of these solutions. To overcome those limitations, we propose an effective approach for parallel processing with multiple microcontrollers. Through this approach, the non-determinism problem can be reduced when multiple processes are used instead of multithreading. The performance of the system also benefits from the concept of distributed computing.
The rest of this paper is organized as follows. Section 2 describes the main problem of multithreading. Section 3 covers non-determinism as well as deterministic multithreading and stable multithreading, along with their related techniques and limitations. We then propose multiple processing in parallel with connected microcontrollers in Section 4. Finally, in Section 5, we conclude the paper and present possible future work.
DIFFICULTIES OF MULTITHREADING
Although the development of concurrent programs is continuously increasing to satisfy the high demand, concurrent programs are still difficult to implement, test, analyze, and validate when compared to sequential programs of similar complexity. Multithreading, which has multiple threads running in a process, is the most commonly used type of parallel programming. However, there are still various problems in exploiting multithreading in practice.
The major problem of multithreaded programs is non-determinism: the behavior of a typical implementation of multithreading, where the same output is not guaranteed when the same input is provided. In such a situation, it is almost impossible to find bugs with the traditional methods used in sequential programs. In addition, multithreaded programs can cause concurrency errors, such as deadlocks and race conditions, due to the non-determinism [7, 8]. In a multithreading environment, sequentially running threads are called interleavings; these are actually processes that execute threads for very short periods rather than running them in parallel. Hence, with interleavings, threads appear to run in parallel. The sequence of executing threads is determined by various aspects such as priority, request order, and optimization. For instance, compiler optimizations may cause a thread to be executed in an order that was not intended by the programmer. The small demonstration below makes this non-determinism concrete.
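The following self-contained example illustrates the point made above: two threads increment a shared counter without synchronization, so the final value varies between runs depending on the interleaving.

```python
# Race-condition demonstration: the unsynchronized read-modify-write on
# `counter` is not atomic, so updates from the two threads can be lost.
import threading

counter = 0

def work():
    global counter
    for _ in range(100_000):
        counter += 1        # load, add, store: a thread switch can interleave

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # often less than 200000, and the value differs between runs
```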
In order to avoid the previously mentioned side effects of non-determinism, many programmers apply mutual exclusion locks, semaphores, and monitors. Applying
such techniques involves very complicated and tedious tasks, with a high chance of applying them incorrectly. In addition, even a very simple part of a program is difficult to implement with multithreading [4]. Non-determinism can cause problems like deadlocks or race conditions; therefore, multithreaded programs have to be implemented very carefully. It is unsafe to use common libraries and design patterns when implementing multithreaded programs, because they are developed without considering the possibility of non-deterministic problems. A minimal lock-based fix of the earlier example is sketched below.
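Continuing the earlier counter example, the sketch below guards the shared variable with a mutual exclusion lock. This removes the race at the cost of the locking discipline the text describes (and of potential deadlocks when several locks are nested).

```python
# Lock-based fix for the earlier race: only one thread updates at a time.
import threading

counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(100_000):
        with lock:          # mutual exclusion around the read-modify-write
            counter += 1

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # now deterministically 200000
```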
NON-DETERMINISTIC, DETERMINISTIC AND STABLE MULTITHREADING
There are several researchers who have proposed different solutions for the non-determinism problem. Among these proposed solutions, only a few suggest deterministic multithreaded systems to prevent unintended results [3, 5, 6]. Previously proposed methods devised deterministic multithreading systems that assign each input to a schedule. There are also different approaches that suggest reducing the number of schedules instead of constructing a deterministic algorithm [1]. Stable multithreading is based on the idea of reducing the possible interleavings by decreasing the total number of schedules.
In this section, we provide a comparison between non-deterministic multithreading, deterministic multithreading, and stable multithreading as follows:
L. Non-Deterministic Multithreading
Common multithreaded programs are non-deterministic. Each thread creates interleavings, and they are executed in very short periods of time with context switches. The sequence of execution of the interleavings changes every time and also depends on the situation. Normally, it is impossible to predict the result of execution. This non-determinism leads to common issues in parallel processing such as deadlocks or race conditions.
M. Deterministic Multithreading
In deterministic multithreading, threads run the same number of thread interleavings. Consequently, these systems enable multithreaded programs to produce the same results for the same input [3, 5, 6]. There are a variety of systems which implement deterministic multithreading. The concepts adopted in these systems
are very similar to controlling access to the shared
memory where the threads are synchronized at the end of
their executions. Hence, the output of the program
execution with the same input can always be predicted.
However, these systems still have limitations such as
large overhead and not providing determinism in
particular environments.

N. Stable Multithreading
The main difficulty of multithreading is that multithreaded programs have too many schedules [1]. Even with the deterministic multithreading approach, threads can be mapped to a schedule that is prone to produce bugs. Stable multithreading finds unnecessary schedules and excludes them. By reducing the number of possible ways to schedule interleavings, a multithreaded program can obtain better robustness and reliability. However, this approach is still too immature to be used at the application level due to the lack of thorough code analysis and testing [1].
Table 1 describes the features, limitations, and goals of deterministic multithreading and stable multithreading, highlighting the differences between the two methods.

TABLE I. Features, Limitations, and Goals of Deterministic and Stable Multithreading

Multithreading Method        | Feature                                          | Limitation                                                 | Goal
Deterministic Multithreading | Schedules interleavings in a deterministic order | Does not work in specific situations or has large overhead | Always get the same output for the same input
Stable Multithreading        | Removes unnecessary schedules                    | Not yet suitable for applications                          | Prevent mapping to buggy schedules
PARALLEL PROCESSES AND MICROCONTROLLERS
Parallel processing, in which simultaneous processes operate on the same shared inputs, is a traditional feature of parallel computing. Producing fast and accurate output while maintaining the performance of the parallelism has always been a major concern for developers.

O. Parallel Computing

Figure 1. Overview of the parallel computing architecture
Parallel computing is a form of computation in which
calculations are carried out simultaneously [9]. The inputs
for the computations are derived from the distributed
shared memory where the buses are authorized to send the
inputs towards multiple processors as shown in Figure 1.
Parallelism has traditionally been used for high-performance computing. For such purposes, various forms of parallel computing, such as bit-level, instruction-level, data, and task parallelism, have been developed [10]. However, in recent years, parallel computing has been used for different purposes, such as concurrent processes on multi-core processors and parallel processes in microcontrollers.
P. Microcontrollers
Microcontrollers have become the backbone of many
appliances. They are widely used in embedded systems
such as robots, cars, peripherals and other appliances. New developments of microcontrollers occur at a very fast pace, and even multi-core microcontrollers have become available in recent years [11]. This creates a more visible opportunity for solving the multithreading non-determinism problem. Solving the non-determinism constraint in multithreading is very important to provide a multithreaded system or application with better robustness, maintainability, and testability.
Q. Parallel Processing with a Multiple Microcontroller Unit (MCU) System
Microcontrollers in embedded systems control external hardware operations and also provide cost efficiency, since a small number of program tasks is stored in permanent memory at the lowest possible cost. Thus, multiple controllers running concurrent processes provide a solution to the non-deterministic aspects of multithreading by enhancing the performance of each operation run by the MCUs. Conceptually, multithreading is equivalent to a context switch at the operating system level. The difference is that a multithreaded CPU can do a thread switch in one CPU cycle, whereas a normal context switch requires hundreds or thousands of CPU cycles. This is achieved by replicating the hardware state (such as the register file and program counter) for each active thread. A further enhancement is simultaneous
multithreading which allows superscalar CPUs to execute
instructions from different programs/threads
simultaneously in the same cycle [12]. However, similar to other multithreading techniques, the non-deterministic aspect of multithreading is still ignored when developing a multithreaded program. Thus, in this paper, we propose a new solution that handles the non-deterministic aspects of multithreading by applying a parallelism method, namely parallel computing on microcontroller system units.
In terms of performance, a microcontroller completes its task faster than a single computer handling multithreaded processes, because the memory in a microcontroller is much smaller. Hence, the tasks operated by the microcontroller produce their output faster. Recently, the prices of microcontrollers such as the Arduino and the Intel Galileo (Figure 2) have dropped dramatically. These factors alone give programmers with little knowledge of multithreading the encouragement and aspiration to perform simultaneous or parallel operations, similar to multithreading, by applying a parallelism method with microcontroller system units. Previously, the proposed approach was controversial due to the high cost of microcontrollers. However, a new approach for programming or developing concurrent and parallel applications can now be introduced due to the decrease in microcontroller prices. According to the 2015 McClean Report [13], the number of microcontrollers (MCUs) of all types, with 8-bit, 16-bit, and 32-bit designs, used in new systems attached to the Internet of Things in 2019 is expected to be about 1.4 billion. This is a dramatic increase in the demand for MCUs compared to 306 million in 2014. It shows that microcontrollers are currently in high demand for current systems and technologies, especially in the field of embedded systems where multiple microcontrollers are required.

Figure 2. Intel Galileo microcontroller unit chip
Figure 3. Each MCU executes a single task per thread
Our approach solves non-deterministic issues in multithreading by applying multiple controllers. Although there has been similar research in recent years, the proposed solutions from related research are still unable to provide a direct solution to non-determinism. Figure 3 illustrates how the input from the distributed shared memory is divided and processed by multiple MCUs to produce the output, where each MCU executes a single thread. The goal is to provide better performance with multiple controllers instead of traditional multithreading methods. Through parallel processing with multiple controllers, the system is able to prevent deadlocks by running exactly one thread on each microcontroller. As discussed earlier, each microcontroller runs a single thread and produces its output in a short amount of time, since a microcontroller has a small memory space. A minimal host-side sketch of this idea follows.
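The hypothetical sketch below shows a host dividing input data and handing one chunk to each serially connected MCU, so that each controller runs a single task. It assumes the pyserial package; the port names and the simple line-based protocol are illustrative assumptions, not part of the paper.

```python
# Hedged sketch of Figure 3: one chunk of the shared input per MCU.
import serial

PORTS = ["/dev/ttyACM0", "/dev/ttyACM1", "/dev/ttyACM2"]  # assumed, one per MCU

def process_in_parallel(data_chunks):
    links = [serial.Serial(p, 115200, timeout=5) for p in PORTS]
    # Distribute: send chunk i to MCU i (a single task per controller).
    for link, chunk in zip(links, data_chunks):
        link.write((chunk + "\n").encode())
    # Collect: the MCUs compute concurrently; each replies with one result line.
    results = [link.readline().decode().strip() for link in links]
    for link in links:
        link.close()
    return results

if __name__ == "__main__":
    print(process_in_parallel(["1 2 3", "4 5 6", "7 8 9"]))
```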
R. Limitations and Discussion
Although the proposed approach provides a sufficient solution, there are still some unresolved issues that may arise in the future and become a concern for the application of MCUs in parallel computing systems. One issue is the maintainability cost of using a large number of MCUs for parallel processing: maintaining each of the MCUs will be very tedious. Another issue is rather physical: programmers are required to purchase multiple MCUs and stack them up on top of their work
station. When a large number of MCUs is required, this can waste physical space, in addition to the effort required to configure each MCU. However, this problem will fade as the size and price of microcontrollers keep decreasing and their performance keeps increasing.
ACKNOWLEDGMENT
This work was supported by the ICT R&D program of MSIP/IITP [12221-14-1005, Software Platform for ICT Equipment].
CONCLUSION
In this research, we have designed a conceptual approach for replacing multithreading with multiple processes on connected microcontrollers. We plan to improve this approach in future research by conducting experiments, through implementation and quantitative analysis, for better performance. We will also compare our approach with multithreading from an economic perspective through these experiments. Certainly, providing a reliable, low-priced, and easy-to-implement multithreaded programming methodology is the main objective; however, there is no known solution that satisfies all three conditions. Parallel processing with interconnected multiple microcontrollers not only satisfies the conditions for adopting parallel processes instead of multithreading, but also provides sufficient performance by adapting the concept of distributed computing. Moreover, it costs much less than using a multithreaded program. Therefore, we believe the proposed approach can be a reasonable alternative until a competent solution covering the non-determinism problem of multithreading is properly developed.
REFERENCES
[1] Junfeng Yang, Heming Cui, Jingyue Wu, Yang Tang, Gang Hu, "Determinism is not enough: making parallel programs reliable with stable multithreading," Columbia University, 2013.
[2] Heming Cui, Jingyue Wu, Chia-che Tsai, Junfeng Yang, "Stable Deterministic Multithreading through Schedule Memoization," Computer Science Department, Columbia University, 2010.
[3] Tongping Liu, Charlie Curtsinger, Emery D. Berger, "DThreads: Efficient and Deterministic Multithreading," SOSP '11, October 2011.
[4] Edward A. Lee, "The problem with threads," Electrical Engineering and Computer Sciences, University of California at Berkeley, Technical Report, January 10, 2006.
[5] Nissim Francez, C. A. R. Hoare, et al., "Semantics of Nondeterminism, Concurrency, and Communication," Journal of Computer and System Sciences, 1979.
[6] Emery D. Berger, Ting Yang, Tongping Liu, Gene Novark, "Grace: safe multithreaded programming for C/C++," OOPSLA 2009.
[7] Marek Olszewski, Jason Ansel, Saman Amarasinghe, "Kendo: efficient deterministic multithreading in software," ASPLOS 2009.
[8] Robert H.B. Netzer, Barton P. Miller, "What are race conditions? Some issues and formalizations," ACM, 1992.
[9] http://en.wikipedia.org/wiki/Parallel_computing (visited 04/2015).
[10] W. Pornsoongsong, P. Chongstitvatana, "A Parallel Compiler for Multi-core Microcontrollers," IEEE, 2012.
[11] Derek G. Murray, Steven Hand, "Non-deterministic parallelism considered useful," University of Cambridge Computer Laboratory, 2011.
[12] http://en.wikipedia.org/wiki/Microarchitecture (visited 05/2015).
[13] IC Insights, "Microcontroller Sales Regain Momentum After Slump," The McClean Report, February 2015.
Hand Gesture Recognition Using Ternary Content
Addressable Memory Based on Pattern Matching Technique
T. Nagakarthik1 and Jun Rim Choi*
School of Electronics Engineering, Kyungpook National University,
Daegu, 702-701, South Korea.
1 [email protected]
* [email protected]
Abstract—Hand gesture recognition systems have received great attention in recent years because of their applications and their ability to let users interact with machines effectively. In this paper, we present a hand gesture recognition system using ternary content addressable memory (TCAM) based on a pattern matching technique. TCAM is a prominent device in the design of network routers, network switches and high-performance processors, such as the 3D vision processors used in smart phones, portable and multimedia devices, which are in high demand in the market. Based on this advancement, numerous image processing applications have been developed, among which gesture detection using image processing is given high priority. Simulations using 65 nm CMOS logic show the TCAM one cell characteristics with timing analysis, together with a right hand gesture simulation using a 4 x 8 TCAM array in which the maximum voltage to which the match line (ML) is charged is 752 mV for a match and 614 mV for a mismatch. Therefore, we propose a method using TCAM for pattern matching that can be incorporated into image processing where device characteristics are of critical importance.
Keywords-ternary content addressable memory; pattern matching; match line sense amplifier; hand gesture recognition.
INTRODUCTION
In general, memory that is accessed by its content rather than by its address is called content addressable memory (CAM). It is an outgrowth of random access memory (RAM). In order to access the content of such memories, the search data is compared with the stored data in parallel to find an exact match. Binary CAM (BCAM) and ternary CAM (TCAM) are the two types of CAM. BCAM is a simple type of CAM that can store only the two logic states '0' and '1', whereas TCAM is a special type of CAM that allows storing and searching the three logic states '0', '1' and don't care (represented as X). The additional don't care state is used for partial matching of either logic '0' or '1' [1].
Figure 2 represents the conceptual view of a TCAM array. A TCAM array consists of 'k' words. Each word contains 'n' bits and a mask bit, which indicates whether the match signal of the particular word is valid or invalid. All the TCAM cells in a row share a match line (ML) and a word line (WL), and the cells in a column share search lines (SLs) and bit lines (BLs). Partial matching in TCAM results in multiple match and mismatch detections [4]. The rest of the paper is organized as follows: Section 2 presents the background of pattern matching, Section 3 describes the proposed work for hand gesture recognition, and Section 4 shows the performance analysis of the TCAM one cell and the hand gesture based pattern matching.
Figure 1. Schematic of a 16T conventional ternary CAM (storage part and comparison logic)

Figure 2. Block diagram of a k-word x n-bit TCAM array
TCAM is a specialized type of high speed associative memory that searches its entire content against the preloaded stored data in a single clock cycle at wire speed. It is capable of high speed parallel search operation and large storage capacity with low power consumption, due to its fast parallel processing capability [2]. Figure 1 shows the schematic of a 16T conventional TCAM with storage memories and a comparator circuit. A conventional TCAM cell consists of two SRAM cells and a comparator. Search data is given through the search line pair, namely the search line and the search line bar (SL, SLB) [3].
RELATED WORK
The evolution of computer technology has enabled many practical applications based on pattern matching (PM), which is a crucial technique in digital image processing. It requires a high scan rate, in the order of gigabytes, to scan the entire image by evaluating the distance between the patterns [5]. By employing TCAM, we can
improve the performance of the system; TCAM also supports motion detection, long patterns, short patterns and pattern correlation, which require scan rates of up to multiple gigabytes. Generally, PM is divided into software based and hardware based categories.
Working
Let the width of the TCAM be 'n' bytes. The width 'n' of a pattern can differ from one pattern to another. If a pattern is shorter, the remaining bits are stored as don't care (X) bits. Patterns are arranged in the array according to their lengths so that all multiple matching patterns can be identified; if the patterns were arranged in reverse order, some matching results might be missed. The process for finding the patterns is as follows: the 'n' bytes are mapped into the TCAM array as shown in Fig. 3. If the input pattern matches a stored pattern, a match is reported; otherwise, a mismatch is reported. This process is repeated until the input pattern has been compared against all stored patterns. A small behavioral model of this search is sketched below.
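The following is a behavioral software sketch of the matching just described, not the circuit itself: stored words may contain the don't-care symbol 'X', which matches either bit, and the first matching word is reported. The example patterns reuse the values that appear later in the paper; the padded fourth word is an illustrative assumption.

```python
# Software model of TCAM ternary matching with don't-care bits.
def tcam_search(stored_words, key):
    """Return the index of the first stored word matching the key, else None."""
    for i, word in enumerate(stored_words):
        # A stored bit matches if it is 'X' (don't care) or equals the key bit.
        if all(s in ("X", k) for s, k in zip(word, key)):
            return i            # the match line for word i would be activated
    return None

# Shorter patterns are padded with don't-care bits, as described above.
patterns = ["01111110", "00101000", "10011001", "1000XXXX"]
print(tcam_search(patterns, "10011001"))  # -> 2 (third word matches)
```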
The most prominent software based PM algorithms are Aho-Corasick (AC) and Commentz-Walter (CW), designed for multiple PM, and Knuth-Morris-Pratt (KMP) and Boyer-Moore (BM), designed for single PM [6]. All these algorithms build a finite-state mechanism that processes the incoming patterns and also increases the performance of the pattern scan. Although these algorithms are fast, they suffer from exponential state explosion, which costs too much area [6].
Hand Gesture Recognition
Nowadays, touch screen devices are growing in popularity, which demands better image processing techniques, of which hand gesture recognition is a critical one. In Fig. 4, an address is assigned to each pixel of the entire hand gesture. When the sensor on the device recognizes the hand gesture, it searches the entire set of pixels for the matching gesture. Based on this application, we implement different hand motions using TCAM based on the PM technique. In our work, the implementation of the TCAM based pattern matching technique for hand gesture recognition is as follows: when a hand gesture is recognized by the sensor on a particular device, it scans the entire array at wire speed for the matching pattern. If the pattern matches, the output is activated and the result is displayed on the device. Similarly, if the input pattern mismatches, the output is deactivated and no result is displayed on the device. Figure 5 shows various types of hand gestures, such as swipe right, swipe left, swipe up, swipe down, zoom in, zoom out, rotate left and rotate right, which are applicable to various applications.
Hardware based PM techniques resolve the performance issues of software based PM. In hardware based PM, a CAM or TCAM and a bloom filter with a co-processor are used as the main hardware components. We adopt TCAM and SRAM to implement our proposed pattern matching technique for hand gesture recognition. TCAM is used to scan the existing stored patterns against the incoming pattern at wire speed. It scans the entire pattern set at multi-gigabyte rates and returns the required information to the SRAM and FPGA. It also increases the performance of the PM due to its parallel processing capability.
TCAM BASED PATTERN MATCHING
In general, memories like DRAM, Flash memory and SRAM locate their data by address, but CAM or TCAM searches by data, not by address. TCAM is a special type of memory that performs parallel search operations at high speed. The don't care state in TCAM can be used for matching variable prefixes in IP addresses, which is used in IP lookups [7]. Several systems use TCAM based PM because it is scalable, has high throughput and is easy to implement; TCAM based PM also suffers from high power consumption, low speed and high cost. TCAM arrays have been designed for high speed pattern recognition machines and have extensive memories that can search information in one clock cycle. A set of patterns is stored as data in the array, and the input pattern is used as the search data, as shown in Fig. 3. The input pattern is compared with all the existing stored patterns in parallel, and if an exact match is obtained, the corresponding output line is activated and the result is reported as a match.
Figure 5. Different types of hand gestures.
SIMULATION RESULTS
In our work, we simulated swipe right hand gesture detection using a 4 x 8 TCAM array. We also simulated a TCAM one cell to determine its characteristics and timing using 65 nm 1.2 V CMOS logic. Since our TCAM size is non-standard, we performed comprehensive simulations to evaluate various aspects of the TCAM. The simulation results for the TCAM one cell and for hand gesture recognition using the 4 x 8 TCAM array were obtained in Cadence Spectre. This work also focuses on enhancing search speed and search power for fast detection of hand gestures with low power consumption.
Figure 3. TCAM based pattern matching technique.
TCAM Characteristics
The performance of the TCAM one cell is evaluated by characterizing all aspects of the cell. We simulated the TCAM one cell to find its characteristics, such as area, noise tolerance, match time and power consumption, which are tabulated in Table 1. Noise tolerance is defined as the difference between the match and mismatch voltages. TCAM is a special type of memory that can search multiple patterns simultaneously at a high rate. Timing analysis is done in three phases: the precharge phase, the test-charge phase and the selective charging phase, represented as the 1st, 2nd and 3rd phases respectively, as shown in Fig. 6. First, in the precharge phase, the match lines are precharged to Vdd. Second, in the test-charge phase, the MLs are charged towards Vdd until the ML approaches the sensing transistor; power consumption in this phase is low. Finally, in the selective charging phase, charging stops for all match lines except the fully matched ML, which is charged to Vdd, while the mismatched MLs discharge to ground. The time taken in the selective charging phase by the MLSO is 1.2 ns.
TABLE I. TCAM ONE CELL CHARACTERISTICS

Features           | Value/Result
Supply Voltage     | 1.2 V
Noise Tolerance    | 165 mV
Power Consumption  | 392 µW
Match Time         | 1.2 ns
Area               | 15 x 16 µm2
Process Technology | 65 nm
Configuration      | TCAM one cell

Figure 6. Simulations of TCAM one cell (waveforms of T1X, T2X, SL1, SLB1, ML1X, MLSO1 and MLRST over the three phases, with the match and mismatch ML levels and the match time marked)

Figure 7. Pattern stored in the TCAM array (stored patterns 01111110, 00101000, 10011001 and 10001000; the input pattern 10011001 matches the third word)
Hand Gesture Simulation
TCAM based PM for hand gesture detection mainly depends on two factors: timing and the pattern stored for a particular gesture. The timing analysis was described in the previous subsection. Figure 7 shows the pattern stored for the swipe right hand gesture in the TCAM array. If the input pattern matches a stored pattern in the TCAM array, the ML charges to its maximum voltage and the output is activated at the match line sense output (MLSO); if the input pattern mismatches, the ML discharges to ground and the output is deactivated. When there are multiple matches for a given input pattern, the TCAM reports only the first match. Since we use the ternary states of the TCAM for memory optimization, the order in which the patterns are stored in the TCAM determines which of the multiple matches is reported. Figure 8 shows the simulation of the swipe right hand gesture. When the hand is swiped from left to right on the device, the entire set of stored patterns is checked for a matching pattern. The simulation results show that the input pattern matches at MLSO3, charging its ML to the maximum voltage, while all the remaining MLSOs discharge to ground due to mismatch with the input pattern. The maximum voltage to which the ML is charged is 752 mV for a match and 614 mV for a mismatch. Table II summarizes the analysis of the swipe right hand gesture.
Figure 8. Simulation result for the swipe right hand gesture (MLSO1-MLSO4 and their match lines; MLSO3 shows a match while the others mismatch)
TABLE II. ANALYSIS OF SWIPE RIGHT HAND GESTURE

MLSO  | Stored Pattern | Input Pattern | Match | Voltage
MLSO1 | 01111110       | 10011001      | -     | 614 mV
MLSO2 | 00101000       | 10011001      | -     | 614 mV
MLSO3 | 10011001       | 10011001      | MATCH | 752 mV
MLSO4 | 10001000       | 10011001      | -     | 614 mV
Layout
In order to substantiate the above mentioned simulation results, we implemented the TCAM one cell and a TCAM array of size 4 x 8 on a test chip (5 mm x 5 mm). The TCAM array consists of 4 words, each containing 8 bits. The 8-bit portion of each word is arranged in an array of 4 x 8 cells, and all the words are connected to the MLSA, as shown in Fig. 9. The one cell TCAM is also implemented on this chip for testing the various characteristics of the TCAM cell. The test chip was fabricated in Samsung 65 nm CMOS technology. Table 4 gives an overview of the chip.

Figure 9. Layout of the TCAM array with the TCAM one cell
CONCLUSION
In this paper, we presented a hand gesture recognition system using TCAM based on a pattern matching technique with a high operating frequency search operation, and we also verified the TCAM one cell characteristics and its timing analysis. A TCAM one cell using the prominent 6T SRAM cells and a 4 x 8 TCAM array for the swipe right hand gesture using the PM technique were implemented. Simulations using Samsung 65 nm CMOS technology show that, for the swipe right hand gesture, the input pattern matches the stored pattern at MLSO3 by charging its ML to the maximum voltage, while all the remaining MLSOs mismatch and their MLs discharge to ground. The proposed work offers excellent performance on the important parameters of speed, power and area. Hand gesture recognition based on the pattern matching technique can be used in many applications such as smart phones, multimedia and portable devices, and also in high performance processors with some minor modifications. TCAM continues to be a prominent choice for many search intensive applications.
ACKNOWLEDGMENT
This work was supported by the Korea Science and Engineering Foundation (KOSEF) grant funded by the Korean Government (MOST) (No. 2013R1A1A4A0102624) and by the Kyungpook National University Research Fund 2012.

REFERENCES
[1] K. Pagiamtzis and A. Sheikholeslami, "Content-Addressable Memory (CAM) Circuits and Architectures: A Tutorial and Survey," IEEE J. Solid-State Circuits, vol. 41, pp. 712-727, 2006.
[2] N. Mohan, W. Fung, D. Wright, and M. Sachdev, "Design Techniques and Test Methodology for Low-Power TCAMs," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 14, pp. 573-586, 2006.
[3] N. Mohan and M. Sachdev, "Low-Leakage Storage Cells for Ternary Content Addressable Memories," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 17, pp. 604-612, 2009.
[4] I. Hayashi, T. Amano, N. Watanabe, Y. Yano, Y. Kuroda, M. Shirata, K. Dosaka, N. Noda and H. Kawai, "A 250-MHz 18-Mb Full Ternary CAM With Low-Voltage Match Line Sensing Scheme in 65 nm CMOS," IEEE J. Solid-State Circuits, vol. 48, pp. 2671-2680, 2013.
[5] Hoang Le and Viktor K. Prasanna, "A Memory-Efficient and Modular Approach for Large-Scale String Pattern Matching," IEEE Trans. Computers, vol. 62, pp. 844-857, 2013.
[6] G. A. Stephen, "String Searching Algorithms," Lecture Notes Series on Computing, vol. 3, 1994.
[7] M. Fisk and G. Varghese, "Fast Content-Based Packet Handling for Intrusion Detection," UCSD technical report CS2001-0670, 2001.
Effects of Mobile Cloud Computing on Health Care Industry
Mohammad Ahmadi1, Mahsa Baradaran Rohani2, Aida Hakemi3, Mostafa Vali1, Kasra Madadipouya1
1
Faculty of Computing, Asia Pacific University of Technology and Innovation (APU), Malaysia
2
Department of Information System, Universiti Teknologi Malaysia, Malaysia
3
Faculty of Computing, Universiti Teknologi Malaysia, Malaysia
[email protected], [email protected], [email protected], [email protected], [email protected]
Abstract—The rapid growth in the use of cloud-based technologies across different industries and environments is undeniable: it has increased efficiency and reliability, especially in recent years, and provides a unique opportunity to process and access required information in different industries. Accordingly, the health care and medical industry, as an important industry in human daily life, could use this new technology to increase the efficiency of services all over the world. In this paper, the effects of cloud-based communications on the health care industry are investigated and reviewed. In particular, mobile cloud computing, as a subsidiary of cloud computing, is explained, the effects of this emerging technology on health systems are reviewed, and previous research and manufactured products are described.
Keywords- Mobile Cloud; Cloud Computing; Health Care; E-Services.
INTRODUCTION
The growing number of web-enabled devices such as mobile phones provides a wide range of capabilities for end users to work with and enjoy. Given this rapid growth, developers should study these platforms and their advantages and disadvantages in order to customize and re-design mobile development for the majority of devices.
Given the speed of change and innovation in mobile devices, projections indicated that by 2013 the number of desktop Internet users would fall below the number of global mobile Internet users.
Fig. 2. Projection of Internet Users from Desktop vs. Mobile (Nema et al. 2010)
Fig. 1. Comparing Mobile vs. Desktop on Internet Browsing (Nema et al. 2010)

Therefore, developers are now making a special effort to develop or re-engineer their applications and services, optimizing and customizing them for the new generation of mobile platforms.
The number of mobile devices has grown at an astounding rate, exceeding 4.6 billion by early 2010 [1]. An interesting point in Nema's 2010 research is that over 40% of Internet users come from only 5 countries: India, China, Russia, Brazil and the USA. The combination of a leading Internet market concentrated in only 5 countries and a rapidly growing number of mobile devices shows that the number of users who browse websites on mobile rather than desktop is rapidly growing. He also mentioned that the mobile internet is ramping up faster than the desktop internet [1].
At the moment, there are two main choices for mobile development: the mobile website and the native mobile application. There are many definitions of these two choices. An application that is developed for a certain, specific device is called a native application [2].
According to Buettner (2011), a native application must be developed for each mobile device separately with specific knowledge: Android, Apple or Blackberry devices only support applications written in their own languages. A native application can use all the software and hardware functions of its device, such as the contact list, camera, GPS or storage. Native applications can be downloaded from app stores managed and monitored by the mobile operating system provider.
Nkosi and Mekuria described a cloud computing
protocol management system that provides multimedia
sensor signal processing and security as a service to
mobile devices. The system has relieved mobile devices
from executing heavier multimedia and security
algorithms in delivering mobile health services. This will
improve the utilization of the ubiquitous mobile device
for societal services and promote health service delivery
to marginalized rural communities [8].
On the other side, the mobile website is an ordinary type of website that has been developed and customized to best fit a mobile device screen. It functions competently with cellular network speeds and device navigation controls such as a trackball or finger.
Rao et al. reported a pervasive cloud initiative called
Dhatri, which leveraged the power of cloud computing
and wireless technologies to enable physicians to access
patient health information at anytime from anywhere [9].
Koufi et al. described a cloud-based prototype emergency
medical system for the Greek National Health Service
integrating the emergency system with personal health
record systems to provide physicians with easy and
immediate access to patient data from anywhere and via
almost any computing device while containing costs [10].
Despite their respective advantages and disadvantages, both of these choices are growing at a high rate of development [2]. Taptu reported that the number of websites customized and optimized for mobile devices, especially touch devices such as tablets and smartphones, rose to 440,100 websites (a 35% increase) between December and April of 2010 [3].
This projects to an annual growth rate of 232%, compared with Apple's App Store, which hosts 185,000 apps and is growing at an annual rate of 144%. On the other side of the market, at the end of 2009, Google Android's Open Platform was stated to have more than 20,000 mobile applications in the Android Market online store [4].
Numerous articles and resources have also reported the successful application of cloud computing in bioinformatics research [11]. For example, Avila-Garcia et al. proposed a framework based on the cloud computing concept for colorectal cancer imaging analysis and research for clinical use [12]. Bateman and Wood used Amazon's EC2 service with 100 nodes to assemble a full human genome with 140 million individual reads requiring alignment, using a sequence search and alignment by hashing (SSAHA) algorithm [13].
Taptu also reported that the Android Marketplace had grown to 35,947 applications by April, which shows the rapid growth of Android platform development; its projected annual growth rate is 403% [3]. Mobile applications are not a new concept; they existed even in the late 90s, when mobile development was expected to create a hot market [4].
Kudtarkar et al. also used Amazon's EC2 to compute orthologous relationships for 245,323 genome-to-genome comparisons. The computation took just over 200 hours and cost US $8,000, approximately 40% less than expected [14]. The Laboratory for Personalized Medicine of the Center for Biomedical Informatics at Harvard Medical School took advantage of cloud computing to develop genetic testing models that managed to manipulate enormous amounts of data in record time [15].
Allen (2011) mentioned that in those days, installing a mobile application was one of the most difficult tasks. Because installation was so unhandy on mobile devices, most end users did not attempt to install new applications on their smartphone or PDA.
Besides academic researchers, many world-class
software companies have heavily invested in the cloud,
extending their new offerings for medical records
services, such as Microsoft’s Health Vault, Oracle’s
Exalogic Elastic Cloud, and Amazon Web Services
(AWS), promising an explosion in the storage of personal
health information online.
With the growth in mobile Internet users, the most important questions posed are: which mobile development choice should be adopted, and could the differences in advantages and disadvantages affect the overall business project?
MOBILE CLOUD COMPUTING IN HEALTH CARE
Also, the use of cloud computing in health care is reported worldwide. For example, AWS plays host to a collection of health care IT offerings, such as Salt Lake City-based Spearstone's health care data storage application, DiskAgent, which uses Amazon Simple Storage Service (Amazon S3) as its scalable storage infrastructure [15].
Many previous studies have reported the potential benefits of cloud computing and proposed different models or frameworks in an attempt to improve health care services [5]-[6]. Among them, Rolim et al. proposed a cloud-based system to automate the process of collecting patients' vital data via a network of sensors connected to legacy medical devices, and to deliver the data to a medical center's "cloud" for storage, processing, and distribution. The main benefits of the system are that it provides users with 7-days-a-week, real-time data collection, eliminates manual collection work and the possibility of typing errors, and eases the deployment process [7].
The American Occupational Network is improving
patient care by digitizing health records and updating its
clinical processes using cloud-based software from IBM
Business Partners Med Track Systems. The company
now can provide faster and more accurate billing to
individuals and insurance companies, shortening the
average time to create a bill from 7 days to less than 24
hours, and reducing medical transcription costs by 80%
[16].
The US Department of Health & Human Services' Office of the National Coordinator for Health Information Technology recently chose Acumen Solutions' cloud-based customer relationship management and project management system for the selection and implementation of EHR systems across the United States. The software enables regional extension centres to manage interactions with medical providers related to the selection and implementation of an EHR system.
Telstra and the Royal Australian College of General
Practitioners announced the signing of an agreement to
work together to build an eHealth cloud. Telstra is one of
the leading telecommunications providers in Australia;
the College is the largest general practice representative
body in Australia with more than 20,000 members and
over 7000 in its National Rural Faculty.
The eHealth cloud will host health care applications
including clinical software, decision-support tools for
diagnosis and management, care plans, referral tools,
prescriptions, training, and other administrative and
clinical services.
In Europe, a consortium including IBM, Sirrix AG
security technologies, Portuguese energy and solution
providers Energias de Portugal and EFACEC, San
Raffaele Hospital (Italy), and several European academic
and corporate research organizations contracted
Trustworthy Clouds—a patient-centered home health
care service—to remotely monitor, diagnose, and assist
patients outside of a hospital setting. The complete
lifecycle, from prescription to delivery to intake to
reimbursement, will be stored in the cloud and will be
accessible to patients, doctors, and pharmacy staff [17].
CONCLUSION
The rapid growth of using cloud-computing services
in various industries and environments is an impossible
fact to be denied as it has increased the efficiency and
reliability especially in recent years. Cloud computing is
a newfound technology that is based on the concepts of
virtualization, processing power, connectivity, and
storage to store and share resources via a broad network
[18].
[13] Bateman, A., & Wood, M. “Cloud Computing. Bioinformatics”
Available Online, 2010.
Accordingly, effects of cloud-based communications
on Health Care industry were investigated in this paper.
In fact, mobile cloud computing as the subsidiary of
cloud computing was explained and the effects of this
emerging technology on health systems were reviewed
and previous researches and manufactured products were
described in this paper.
[16] Strukhoff, R., O’Gara, M., Moon, N., Romanski, P., & White, E.
“Healthcare Clients Adopt Electronic Health Records with
Cloud-Based Services.” Available Online: website Cloud Expo,
2009.
REFERENCES
[18] M. Malathi, “Cloud Computing Concepts,” in Proc. 3rd
International Conference on Electronics Computer Technology
(ICECT), 2011, vol. 6, pp. 236–239.
[1]
[14] Kudtarkar, P., Deluca, T., Fusaro, V., Tonellato, P., & Wall, D.
“Cost-effective cloud computing: a case study using the
comparative genomics tool, Roundup.” Evol Bioinform Online.
2010.
[15] Amazon, W. S. “AWS Case Study: Harvard Medical.
AMAZON.” 2012, Available Online: Retrieved from
[http://aws.amazon.com/solutions/case-studies/harvard/].
[17] IBM Press Room, I. European Union Consortium Launches
Advanced Cloud Computing Project With Hospital and Smart
Power Grid. IBM. 2010, Available Online, Retrieved from
http://www-03.ibm.com/press/us/en/pressrelease/33067.wss
N. Nema, J. Dawson, S. Gardiner, P. Tanaka, H. Lipacis, M.
Killa, “Mobile Will Be Bigger than Desktop in 5 Years”,
An Associative Index Method for Pyramid Hierarchical Architecture of Social Graph
Ling Wang¹*, Wei Ding¹, Tie Hua Zhou²
¹Department of Information Engineering, Northeast Dianli University, Jilin, China
{[email protected], [email protected]}
²Database/Bioinformatics Laboratory, School of Electrical & Computer Engineering, Chungbuk National University, Cheongju, Korea
[email protected]
*Corresponding author.
Abstract—To deal with the challenges of processing rapidly growing graph and network data, social graph search has become a pressing issue with the rise of worldwide social network services. For graph structure optimization in particular, designing a reasonable hierarchical index is an effective way to reduce graph complexity and to retrieve more accurate results. In this paper, we present an associative index method that builds a pyramid hierarchical architecture for large-scale social graph datasets. We experimentally verify the effectiveness and efficiency of the proposed method.
Keywords- Graph structure; pyramid hierarchical architecture; approximate matching; social network
INTRODUCTION
In recent years, a tremendous amount of data has been collected and processed by online social networks and communication networks. Graphs have become an increasingly important way to represent highly interconnected structured data in a variety of applications. Search engines face many challenges, such as how to build a subgraph index, how to update a graph efficiently, and how to match subgraphs within a network [1]. The main problem is that modern graph datasets are huge. The best example is the World Wide Web, which currently consists of over one trillion links and is expected to exceed tens of trillions in the near future. Facebook [2] has over 800 million active users with hundreds of billions of friend links, and Twitter [3] has over 41 million users with 1.47 billion social interactions. Large graph datasets are not limited to the web and social networks. Because such graphs may contain millions of vertices and tens of millions of edges, as shown in Figure 1, resource searching and updating raise many challenging problems [4], such as how to efficiently search and update resources in a social network graph. Designing scalable methods for analyzing and mining large-scale graphs has therefore become increasingly important.

Figure 1. Social network

In this paper, we focus on building an efficient social network structure for high-performance graph analytics. A subgraph g of a social network G may contain several hundreds of thousands of nodes and edges, so the indexes are likely to contain numerous relationships among all vertices. By storing resources in a hierarchical diagram over large-scale datasets, we extract the specific relevant loops and build frequent node sets, greatly improving the efficiency of search and update operations. This paper explores the search graph shown in Figure 2: the two loops are actually connected, but because a loop diagram is defined to traverse each vertex only once, such relations are ignored while still being contained in G, maintaining the integrity of the specification.

Figure 2. A special social graph

On the basis of extensive analyses of correlation-based approaches to searching and updating problems, we present an associative index method that builds a pyramid hierarchical architecture to obtain valuable data by integrating and decomposing the complex relationships in a social network diagram, in order to reduce the required processing time and effectively answer user queries over large-scale graph datasets.
The rest of the paper is organized as follows. Section 2 reviews popular graph analytics approaches. Section 3 formalizes the preliminaries of our research. Section 4 presents our proposed method in detail. Section 5 presents the experimental evaluation. Section 6 gives conclusions.
RELATED WORK
There has been a lot of interest in executing complex analytics for large-scale graph data management and graph mining. When searching a social network diagram, an important task is to quickly locate the relevant subgraph [5]; many studies have shown that searching for a subgraph and user-related terms is far less time-consuming than searching the entire graph. Many new storage and querying systems optimized for graph algorithms have focused mainly on static graphs, while some have also treated dynamic graphs as a sequence of updates to static graphs. In particular, a number of so-called "vertex-centric" systems, such as Google's Pregel [6] and its open-source implementations Giraph [7] and GPS [8], are distributed message-passing systems targeted at large-scale graph computations. In these systems, the master machine communicates with the slaves after each super-step to guide them through the next step, and the algorithm terminates when all nodes halt; several graph query algorithms (distance, PageRank, etc.) are supported by Pregel (see [6]). Mining frequent subgraphs is a central and well-studied problem in graphs and plays a critical role in many data mining tasks, including graph classification [9] and graph clustering [10]. Trend detection in social networks has also been an important research area in recent years [3, 11]. Kwak et al. [3] study trending topics reported by Twitter and compare them with trends in other media, showing that the majority of topics are headlines or persistent news. In [11], Leskovec et al. study temporal properties of information by tracking "memes" across the blogosphere. GraphLab [12] is an asynchronous parallel-computation framework for graphs, optimized for scalable machine learning and data mining algorithms; the major difference between Pregel and GraphLab is that the latter decouples the scheduling of computation from message passing by allowing information to be "cached" at edges. The SAPPER algorithm proposed a vertex index augmented with 2-near vertex set properties to improve the effect of graph pruning [13, 14].

PRELIMINARIES
Formal Definition
The common notations used in this paper are summarized in Table I.

TABLE I. SUMMARY OF NOTATIONS

Notation     Description
G            Initial social network
SG           Second layer memory
V(*)/E(*)    Vertex/edge set of *
X            Path or trajectory of loop
X.s          Specific loop
d(*)         Frequency of vertex set of *
α            Frequency of special loop
β            Vertex set except the frequent vertex
Θ            Threshold of the frequent vertex

The fundamental idea of our proposed approach is to build a pyramid hierarchical architecture that is layered to optimize search and update operations. The first step is to construct a directed social network diagram; the following definitions describe the proposed approach.

DEFINITION 1 (SOCIAL NETWORK). A social network is a directed graph G = (V, E), where V is a set of vertices representing social network users and E is a set of edges representing the relationships between users.

DEFINITION 2 (TRAJECTORY). Given G, a trajectory X is a sequence ((x1, x2), (x2, x3), ..., (xk, x1)) such that there exists a path x1 → x2 → ... → xk → x1 in G; this track is recorded and marked for further operations.

DEFINITION 3 (LOOP). For G, the x1–xn path is a nonempty graph X.s = (Vi, Ei) with Vi = {x1, x2, ..., xn} and Ei = {(x1, x2), ..., (xn−1, xn), (xn, x1)}, so X.s is a subgraph of G and the paths are not identical.

DEFINITION 4 (VERTEX FREQUENCY). Given G and a vertex set V(i) ∈ G, we calculate the threshold d to determine which vertices may be added to the data set SG. For example, in Figure 3, vertex v1 is a frequent vertex and vertex v2 is not, so we add v1 to the special layer graph.

DEFINITION 5 (SECOND LAYER). Given SG, SG = {X, X.s, d, α, β} with d and β ∈ α, where d is stored as a table set and β is the corresponding mapping of d. The relationships outside the frequent loop also need to be built, to preserve data integrity.

Several properties are also defined, as follows.

PROPERTY 1 (Frequent Node). We ensure that the hierarchical structure design carries value: in the pyramid hierarchical graph there is an intermediate node d in the second layer memory SG, i.e., d ∈ SG, and d is greater than the threshold value of the point.

PROPERTY 2 (Closest Relationship). Any point that has a link with the intermediate node d should be satisfied; {d, d1, d2, d3, ..., dn} is the loop diagram expressing the relationship between hobbies and closely related sets of attributes.

PROPERTY 3 (Special Loop). Our study constructs a pyramid hierarchical architecture for search and update traversal, using PROPERTY 1 and PROPERTY 2 to construct a graph structure as shown in Figure 2.

The following sections show how to build the hierarchical graph and describe the implementation of the proposed approach.
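To make the preceding definitions concrete, the short Java sketch below (Java being the language the paper's experiments use) models a directed social graph, a trajectory, and the frequent-vertex test of DEFINITION 4. The class and method names (SocialGraph, isFrequent) and the degree-count reading of vertex frequency are our own illustrative assumptions, not the authors' implementation.

    import java.util.*;

    // A minimal directed social graph G = (V, E) per DEFINITION 1.
    class SocialGraph {
        // Adjacency list: user -> set of users they point to.
        private final Map<Integer, Set<Integer>> adj = new HashMap<>();

        void addEdge(int from, int to) {
            adj.computeIfAbsent(from, k -> new HashSet<>()).add(to);
            adj.computeIfAbsent(to, k -> new HashSet<>()); // ensure vertex exists
        }

        Set<Integer> neighbors(int v) {
            return adj.getOrDefault(v, Collections.emptySet());
        }

        // DEFINITION 4 (illustrative assumption): treat a vertex as
        // "frequent" when its participation in relationships reaches the
        // threshold Θ; here participation is approximated by in- plus
        // out-degree.
        boolean isFrequent(int v, int theta) {
            int out = neighbors(v).size();
            int in = 0;
            for (Set<Integer> targets : adj.values())
                if (targets.contains(v)) in++;
            return in + out >= theta;
        }
    }

    // A trajectory per DEFINITION 2: x1 -> x2 -> ... -> xk -> x1,
    // stored as the vertex sequence x1..xk (the return edge is implicit).
    record Trajectory(List<Integer> vertices) {
        boolean isLoop() {
            // A loop traverses each vertex only once.
            return vertices.size() >= 3
                && new HashSet<>(vertices).size() == vertices.size();
        }
    }

Under this reading, vertices passing isFrequent(v, Θ) are the ones promoted to the second-layer memory SG.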
PYRAMID HIERARCHICAL GRAPH CONSTRUCTION
The pyramid hierarchical architecture search algorithm begins with a breadth-first matching algorithm and then builds the pyramid architecture to improve efficiency. Querying a subgraph in G consists of three main steps: breadth-first search, constructing the stratified subgraph, and building the indexing structure.
Breadth-First Search
The breadth-first search (BFS) algorithm is a blind search method that expands and examines the entire graph or sequence, without considering the goal until a result is found. The breadth-first search algorithm has the following characteristics:
when a problem has a solution, the solution will be found;
for a solvable problem, it finds the optimal solution;
the method is independent of the particular question, owing to its generality.

Figure 3. Part of a social network relationship

TABLE II. MAIN STEPS FOR THE BFS AND PRUNING

Input: Ω ← |V(G)| × |V(G)| matrix;
Output: the streamlined graph SG
1: initialize queue Q and array map;
2: for each v ∈ Ω do
3:   visit vertex v: visit[v] ← 1;
4:   v ← Q;
5:   if Q != NULL
6:     v ← w;
7:   if v != 1
8:     visit[v] ← 1;
9:     w ← w + 1;
10: if v != w
11:   repeat steps;
12: for each i ∈ Q do
13:   if map[i] >= Θ
14:     SG ← Q;
15:   if map[i] is the hot vertex of Q
16:     α ← map[i];
17:   else β ← map[i];
18: return SG;

Constructing the Stratified Subgraph
For constructing the pyramid hierarchy, the subgraph SG is an effective means of searching and updating the entire social network structure. The main task is to search the system architecture based on the special loop diagram.

We integrate adjacent nodes that satisfy the threshold criteria into a secondary graph, following the concept of system layering, with each loop representing an event or a collection of other linked relations. At the same time, to reduce the complexity of the system architecture, we extract a critical value so that all loops whose nodes satisfy it carry a unified mark. Thus, the performance of the social network structure can be improved.

The particular loop diagram has been completed during preprocessing. A threshold is necessary to filter the frequent nodes of the loop diagram during second-layer data extraction, because the matches between the condition set α and the frequent nodes are stored in a hash table. Ideally, node mapping builds a loop diagram connected with the other node set β; we ignore the relationships between specific pairs of points and store all the nodes of a loop in the storage table (as shown in Figure 4).

Figure 4. Storage of frequent nodes
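The pseudocode in Table II is terse, so the following Java sketch shows one plausible reading of it: a plain BFS traversal that counts how often each vertex is touched, then keeps vertices whose count reaches Θ in the frequent set α and assigns the rest to β. The method and variable names (bfsAndPrune) are ours, and the interpretation of steps 4–11 is an assumption rather than the authors' exact procedure.

    import java.util.*;

    class BfsPruner {
        // One plausible reading of Table II: BFS over G counting visits,
        // then splitting vertices into frequent (alpha) and other (beta).
        static Set<Integer> bfsAndPrune(Map<Integer, Set<Integer>> g,
                                        int source, int theta,
                                        Set<Integer> alpha, Set<Integer> beta) {
            Map<Integer, Integer> map = new HashMap<>(); // visit counts
            Set<Integer> visited = new HashSet<>();
            Deque<Integer> q = new ArrayDeque<>();
            q.add(source);
            visited.add(source);
            while (!q.isEmpty()) {                 // steps 2-11: traversal
                int v = q.poll();
                for (int w : g.getOrDefault(v, Set.of())) {
                    map.merge(w, 1, Integer::sum); // count every touch of w
                    if (visited.add(w)) q.add(w);
                }
            }
            Set<Integer> sg = new HashSet<>();     // steps 12-18: pruning
            for (var e : map.entrySet()) {
                if (e.getValue() >= theta) {       // map[i] >= Θ
                    sg.add(e.getKey());
                    alpha.add(e.getKey());         // "hot" vertices -> α
                } else {
                    beta.add(e.getKey());          // remaining vertices -> β
                }
            }
            return sg;                             // streamlined graph SG
        }
    }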
Building the Indexing Structure
Our proposed index structure differs from the common methods known from social-network-related work [15, 16, 17]. The proposed pyramid hierarchical index is a two-level structure: the 1-level structure is a supplementary index, and the 2-level structure is the main index of the entire dataset. The proposed two-level index structure gives easy and fast access to the data. The index is mainly based on the frequent nodes α, because each loop is composed of multiple nodes whose interconnections are stored in a hash table.
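As an illustration of such a two-level structure, the sketch below keeps a main hash index from frequent nodes α to the loops they anchor, plus a supplementary index from ordinary nodes β to their anchoring frequent node. The shape of the two maps is our assumption; the paper only specifies that the index is hash-based and two-level.

    import java.util.*;

    // A hypothetical two-level pyramid index, assuming:
    // level 2 (main): frequent node α -> ids of loops it anchors;
    // level 1 (supplementary): ordinary node β -> its anchoring frequent node.
    class PyramidIndex {
        private final Map<Integer, List<Integer>> mainIndex = new HashMap<>();
        private final Map<Integer, Integer> suppIndex = new HashMap<>();

        void addLoop(int loopId, int frequentNode, List<Integer> members) {
            mainIndex.computeIfAbsent(frequentNode, k -> new ArrayList<>())
                     .add(loopId);
            for (int m : members)
                if (m != frequentNode) suppIndex.put(m, frequentNode);
        }

        // Look up the loops relevant to any vertex: resolve it to a frequent
        // node via level 1 if needed, then read level 2.
        List<Integer> loopsFor(int vertex) {
            int anchor = suppIndex.getOrDefault(vertex, vertex);
            return mainIndex.getOrDefault(anchor, List.of());
        }
    }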
EXPERIMENTS
In this section, we compare the performance of our method with SAPPER, chosen for its excellent timeliness. For the experiments, we selected a Twitter dataset to evaluate the proposed approach. The statistics of the data are summarized in Table III; the dataset includes more than 80,000 nodes and 1,760,000 edges. The experiments were implemented in the Java programming language on a Dell machine with 64 GB of memory.

Our proposed method is based on pruning rules within breadth-first search, together with our own criterion, which saves time and is more effective on the experimental data. The main framework for the BFS and pruning is presented in Table II.
Our proposed method, called DEMIX, focuses only on valuable data and certain-relationship structures, and removes invalid and undetermined relationships to reduce the complexity of the index structure. Each loop represents a group of followers for an event, and the results of its processing are aggregated for use in the next step.
TABLE III. DATASET STATISTICS

Parameter                            Default Value
Number of vertices in G              81306
Number of edges in G                 1768149
Total loops                          41065894
Average clustering coefficient       0.5653
Number of triangles                  13082506
Diameter (longest shortest path)     7

Compared with the traditional SAPPER method, our proposed DEMIX performs much better: its index structure does not become more complex as the number of nodes increases, as shown in Figure 5. Even as nodes increase, DEMIX keeps a balance, because unavailable edges have been pruned, as shown in Figure 6.

Figure 5. Index size

Figure 6. Query rate

CONCLUSION
In this paper, we presented a hierarchical structure for the social network diagram in order to design a fast and effective search algorithm based on the special loop diagram. The experimental results show that our proposed method is an efficient way to improve retrieval over large-scale graph datasets.

In the future, we will continue to study the potential relation loops existing in social networks in order to build other multi-level, high-sensitivity graph structures.

Acknowledgments. This work was supported by the Science and Technology Plan Projects of Jilin City (No. 201464059), by the Ph.D. Scientific Research Start-up Capital Project of Northeast Dianli University (No. BSJXM-201319), and by the National Natural Science Foundation of China (No. 51077010).

REFERENCES
[1] A. Gionis, F. Junqueira, V. Leroy, M. Serafini, and I. Weber, "Piggybacking on Social Networks," J. VLDB Endowment, ACM Press, vol. 6, April 2013, pp. 409-420.
[2] Facebook. http://facebook.com/press/info.php?statistics, 2012.
[3] H. Kwak, C. Lee, H. Park, and S. Moon, "What is Twitter, a social network or a news media?," in Proceedings of the 19th International Conference on World Wide Web, ACM Press, Raleigh, NC, USA, April 26-30, 2010, pp. 591-600.
[4] Y. Zhou, H. Cheng, and J. X. Yu, "Graph Clustering Based on Structural/Attribute Similarities," J. VLDB Endowment, ACM Press, vol. 2, August 2009, pp. 718-729.
[5] F. Zhao and A. K. H. Tung, "Large Scale Cohesive Subgraphs Discovery for Social Network Visual Analysis," J. VLDB Endowment, ACM Press, vol. 6, December 2012, pp. 85-96.
[6] G. Malewicz, M. H. Austern, A. J. C. Bik, J. C. Dehnert, I. Horn, N. Leiser, and G. Czajkowski, "Pregel: A System for Large-Scale Graph Processing," in Proceedings of the SIGMOD International Conference on Management of Data, ACM Press, Indianapolis, IN, USA, June 06-11, 2010, pp. 135-146.
[7] Apache Incubator Giraph. http://incubator.apache.org/giraph/.
[8] S. Salihoglu and J. Widom, "GPS: A Graph Processing System," in Proceedings of the International Conference on Scientific and Statistical Database Management, ACM Press, Baltimore, MD, USA, July 29-31, 2013.
[9] M. Deshpande, M. Kuramochi, N. Wale, and G. Karypis, "Frequent sub-structure-based approaches for classifying chemical compounds," IEEE Transactions on Knowledge and Data Engineering, IEEE Press, vol. 17, August 2005, pp. 1036-1050.
[10] V. Guralnik and G. Karypis, "A scalable algorithm for clustering sequential data," in Proceedings of the IEEE Conference on Data Mining, IEEE Press, San Jose, CA, USA, November 29-December 2, 2001, pp. 179-186.
[11] J. Leskovec, L. Backstrom, and J. Kleinberg, "Meme-tracking and the dynamics of the news cycle," in Proceedings of the 15th SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM Press, Paris, France, June 28-July 01, 2009, pp. 497-506.
[12] Y. Low, D. Bickson, J. Gonzalez, C. Guestrin, A. Kyrola, and J. M. Hellerstein, "Distributed GraphLab: A framework for machine learning and data mining in the cloud," J. VLDB Endowment, ACM Press, vol. 5, April 2012, pp. 716-727.
[13] S. Zhang, S. Li, and J. Yang, "GADDI: distance index based subgraph matching in biological networks," in Proceedings of the 12th International Conference on Extending Database Technology: Advances in Database Technology, ACM Press, Saint Petersburg, Russian Federation, March 23-26, 2009, pp. 192-203.
[14] S. Zhang, J. Yang, and W. Jin, "SAPPER: Subgraph Indexing and Approximate Matching in Large Graphs," J. VLDB Endowment, ACM Press, vol. 3, September 2010, pp. 1185-1194.
[15] P. Gupta, V. Satuluri, A. Grewal, S. Gurumurthy, V. Zhabiuk, Q. Li, and J. Lin, "Real-Time Twitter Recommendation: Online Motif Detection in Large Dynamic Graphs," J. VLDB Endowment, ACM Press, vol. 7, August 2014, pp. 1379-1380.
[16] A. Pavan, K. Tangwongsan, S. Tirthapura, and K. L. Wu, "Counting and Sampling Triangles from a Graph Stream," J. VLDB Endowment, ACM Press, vol. 6, September 2013, pp. 1870-1881.
[17] C. Budak, T. Georgiou, D. Agrawal, and A. E. Abbadi, "GeoScope: Online Detection of Geo-Correlated Information Trends in Social Networks," J. VLDB Endowment, ACM Press, vol. 7, December 2013, pp. 229-240.
156
International Conference on Information, System and Convergence Applications
June 24-27, 2015 in Kuala Lumpur, Malaysia
A Reliable User Authentication and Data Protection Model in Cloud Computing Environments
Mohammad Ahmadi¹, Mostafa Vali¹, Farez Moghaddam¹, Aida Hakemi², Kasra Madadipouya¹
¹Faculty of Computing, Asia Pacific University of Technology and Innovation (APU), Malaysia
²Faculty of Computing, Universiti Teknologi Malaysia, Malaysia
[email protected], [email protected], [email protected], [email protected], [email protected]
Abstract— Security issues are among the most challenging problems in cloud computing environments as an emerging technology. Given this importance, an efficient and reliable user authentication and data protection model is presented in this paper to increase the reliability of cloud-based environments. Accordingly, two encryption procedures are established in an independent middleware (Agent) to perform user authentication, access control, and data protection in cloud servers. AES is used as the symmetric cryptography algorithm in cloud servers, and RSA is used as the asymmetric cryptography algorithm in Agent servers. The theoretical evaluation of the proposed model shows that its resistance to possible attacks and unpredictable events is enhanced considerably in comparison with similar models, because dual encryption and an independent middleware are used during the user authentication and data protection procedures.
Keywords- Cloud Computing; Data Protection; User Authentication; Cryptography; Access Controls.
INTRODUCTION
Cloud computing is an emerging service that uses the benefits of modern technologies (e.g., grid computing, clustering, virtualization, and processing power) to store and share resources via a pool of resources. Cloud computing services have considerable benefits that enhance the efficiency and reliability of on-demand IT services. However, numerous challenging issues face cloud computing and have attracted the attention of many researchers and service providers [1].

Data management, resource allocation, security, privacy and access controls, load balancing, scalability, availability, and interoperability are the most challenging issues in cloud-based environments and have affected the reliability of this newfound technology [2]. These concerns can be classified into various parts, the most important of which are ensuring sound user authentication processes [3] and managing authorized and unauthorized accesses when users outsource sensitive shared data to public or private cloud servers [4].

Two main processes govern secure and reliable user authentication in cloud-based environments:
investigating the unique identifiers of users during the initial registration phase;
authenticating users, validating their legal identities, and acquiring their access control privileges for cloud-based resources and services during the service operation phase [5].

These two procedures face several challenges regarding security and scalability, owing to the nature of cloud computing environments. Hence, an efficient user authentication model is presented in this paper to enhance the security and reliability of cloud-based services.

RELATED WORKS
Given the importance of user authentication functionality in cloud computing environments, several models and algorithms have been presented in recent years.

An efficient user authentication framework was suggested in 2011 [6]. The main aim of that model was to verify user legitimacy before entry into a cloud environment by providing identity management, mutual authentication, and session key establishment between the users and the cloud server. The presented scheme could resist many popular attacks, such as the replay attack, man-in-the-middle attack, and denial-of-service attack. However, the computational costs and scalability of the model were affected by this resistance.

In 2013, a user authentication scheme for multi-server cloud computing environments was presented by Yang et al. [7]. The suggested framework could be applied to multi-server environments because of the ID-based concept it used. Hence, the efficiency, security, and flexibility of the user authentication procedure were improved, and the computational costs were decreased in comparison with similar models.

A dynamic ID-based remote mutual authentication model [8] based on the Elliptic Curve Cryptosystem (ECC) was proposed by Tien-Ho et al. in 2011. Subsequently, a Cloud Cognitive Authenticator (CCA) [9] was proposed in 2013 based on integrated authentication functionality. CCA uses the concepts of one-round Zero Knowledge Proof (ZKP) and the Advanced Encryption Standard (AES) to enhance security in public, private, or hybrid clouds through four procedures that provide two levels of authentication and encrypt the user identifiers. The main specification of CCA in comparison with other models is its coverage of two levels of authentication together with the strength of the encryption algorithm. However, interoperability and compatibility with AES are the major weaknesses of CCA.

Yang and Lin [10] proposed an ID-based user authentication model by introducing three roles: the user, the server, and the ID provider. The main responsibility of the ID provider is to generate the registration and authentication information for both user and server. Moreover, two main phases are presented for the described investigation procedures: the registration phase and the mutual authentication phase. This model is compatible with various cloud environments and is considerably cheaper in comparison with other models.

In 2013, an agent-based user authentication model for cloud computing environments was introduced [11] to increase the performance of user authentication processes according to the concept of an agent. The theoretical analysis of this model shows that it increases the reliability and the rate of trust in cloud-based environments. However, the idea of using agents in the authentication process can be made more efficient and reliable given the capabilities of agents.

In 2014, Fatemi Moghaddam et al. presented a scalable user authentication model based on the concept of multi-authentication and two main agents [5]: a client-based user authentication agent for confirming the identity of the user on the client side, and a cloud-based software-as-a-service application for confirming the authentication process for unregistered devices. The theoretical analysis of the suggested scheme showed that this user authentication and access control model enhances the reliability and rate of trust in cloud computing environments. However, the computational costs remained a challenging issue in this model.

PROPOSED MODEL
The proposed model is designed to manage accesses and to track the performance of data transmission between cloud servers and end users. Fig. 1 shows the suggested model in brief: end users communicate with the cloud server through an independent cloud Agent.

Fig. 1. The Proposed Model in Brief.
DEFINITIONS
The proposed model combines two cryptography algorithms and other technologies to improve the security and reliability of user authentication procedures in cloud computing environments. Accordingly, the following concepts should be defined.

Agent
The Agent in the proposed model is an independent middleware between the end user and the cloud servers that authorizes user accesses and manages these procedures.

Main Cryptography
The main cryptography is a cloud-based symmetric encryption algorithm that is completely independent of end-user performance. The main cryptography is based on AES.

Secondary Cryptography
The secondary cryptography is an asymmetric cryptography algorithm that is completely based on user performance; it manages the user authentication procedures and controls accesses. The secondary cryptography is based on RSA.

Given the nature of data in cloud storage, data are classified into three main categories: public, private, and shared. As described, the performance of the main cryptography is completely independent of the performance of end users and of the characteristics of the data. The main cryptography procedure is performed in cloud servers with AES-256; given the nature of the main cryptography, a symmetric-key encryption is the most appropriate choice for this process.

The secondary cryptography establishes a secure connection between end users and cloud servers for user authentication, data transmission, and access controls. In fact, the key of the main cryptography is re-encrypted by RSA to protect the main cryptography procedure. Fig. 2 shows the performance of the main and secondary cryptography procedures in brief: data are encrypted with the main cryptography procedure, and the AES keys are then encrypted by the secondary cryptography procedure.

Fig. 2. Performance of Main and Secondary Cryptography.
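As a concrete illustration of this two-layer scheme, the hedged Java sketch below encrypts data with AES-256 and then wraps the AES key with RSA-2048, using only the standard javax.crypto API. It is a minimal sketch of the idea, not the authors' implementation; key management, the Agent's storage, and access-control logic are omitted, and names such as envelopeEncrypt are our own.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;

    // A minimal sketch of the model's dual encryption: AES-256 for the data
    // (main cryptography, in the cloud server) and RSA-2048 to wrap the AES
    // key (secondary cryptography, in the Agent).
    class DualEncryptionSketch {
        static byte[][] envelopeEncrypt(byte[] data) throws Exception {
            // Main cryptography: AES-256 over the data itself.
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            SecretKey aesKey = kg.generateKey();
            Cipher aes = Cipher.getInstance("AES");
            aes.init(Cipher.ENCRYPT_MODE, aesKey);
            byte[] cipherText = aes.doFinal(data);

            // Secondary cryptography: RSA-2048 wraps the AES key, so the
            // main key never travels or rests in the clear.
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair rsa = kpg.generateKeyPair(); // kept by the Agent
            Cipher wrap = Cipher.getInstance("RSA");
            wrap.init(Cipher.WRAP_MODE, rsa.getPublic());
            byte[] wrappedKey = wrap.wrap(aesKey);

            return new byte[][] { cipherText, wrappedKey };
        }
    }

In the full model, the RSA key pair would be generated and stored by the Agent, with only the public key shared onwards, as the Secondary Cryptography procedure below describes.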
Secondary Cryptography
The process of secondary cryptography is shown in Fig. 3. The RSA keys are generated in the Agent, and the public key is transferred to the cloud server. After the AES key is encrypted with the RSA public key, the RSA public and private keys are stored in the Agent servers and a copy of the public key is transferred to the data owner.

Fig. 3. Secondary Cryptography.

User Authentication of the Data Applicant
According to Fig. 4, the request of the data applicant is encrypted with the applicant's own private key and sent to the Agent. The Agent decrypts the request with the corresponding public key and thereby verifies the identity of the data applicant.

Fig. 4. User Authentication of the Data Applicant.

User Authentication of the Data Owner
The access request is sent to the data owner. The data owner verifies the request, encrypts the verification twice—with his private key and with the data private key—and sends it to the Agent. The Agent verifies the identity of the data owner by decrypting the message with the data owner's public key. Furthermore, the verification is submitted by decrypting the data with the public key of the data, which was previously stored in the Agent's cloud storage. Fig. 5 shows this process in detail.

Fig. 5. Data Owner User Authentication.

Managing Accesses to Main Cloud Servers
As described, the Agent is an independent middleware that establishes a secure connection between the cloud servers and end users. Agents in the proposed model have their own servers and cloud storage to define and store access rules and keys. The Agent has four main responsibilities in the suggested model.

The private key of the data is encrypted by the private key of the user and sent to the cloud server. In the cloud server, the private key of the data is decrypted by the public key of the user, the AES main key is decrypted by the private key of the data, and the data are decrypted by the AES main key. Fig. 6 shows these procedures in detail.

Fig. 6. Managing Accesses to Main Cloud Servers.

DISCUSSION
Security Justification
The security justification of the suggested model is summarized in the following table:

TABLE I: SECURITY JUSTIFICATION OF THE PROPOSED MODEL

Challenges                    Issues          Reasons
Data protection in servers    Losing Data     Un-Secure Cryptography
Secure Data Transmission      Losing Data     Un-Secure Transmission
User Authentication           Losing Data     Un-Secure Authentication
User Authentication           Losing Server   Un-Predictable Attacks
Access Controls               Un-Authorized   Un-Reliable Algorithm
Access Controls               Un-Authorized   Lack of Scalability
Lack of Resistance            Losing Data     Un-Efficient Resistance
Security Analysis
The security analysis considers the following aspects.

Two-Step Cryptography
By using both symmetric and asymmetric cryptography algorithms, the reliability of the system is enhanced considerably. If one cryptography algorithm fails during unpredictable events or attacks, the security of the system is still guaranteed by the other algorithm, and time for resistance is provided. Furthermore, by using two steps of cryptography for several security procedures (i.e., user authentication, data protection in cloud servers, data protection in data transmission, key exchange, and key generation), the efficiency of this security model is enhanced significantly.
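The data-owner verification described above (encrypt with the owner's private key, check with the public key) is, in standard terms, a digital signature. The hedged Java sketch below expresses that step with the JDK's Signature API over RSA; the method names and the use of SHA256withRSA are our illustrative choices, since the paper does not fix a signature scheme.

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.PublicKey;
    import java.security.Signature;

    // Sketch of the Agent-side verification of a data owner's approval:
    // the owner signs the verification message with an RSA private key,
    // and the Agent checks it with the owner's stored public key.
    class OwnerVerificationSketch {
        static byte[] ownerSign(KeyPair ownerKeys, byte[] verification)
                throws Exception {
            Signature sig = Signature.getInstance("SHA256withRSA");
            sig.initSign(ownerKeys.getPrivate());
            sig.update(verification);
            return sig.sign(); // sent to the Agent with the message
        }

        static boolean agentVerify(PublicKey ownerPublic, byte[] verification,
                                   byte[] signature) throws Exception {
            Signature sig = Signature.getInstance("SHA256withRSA");
            sig.initVerify(ownerPublic);
            sig.update(verification);
            return sig.verify(signature); // true only for the genuine owner
        }

        public static void main(String[] args) throws Exception {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair owner = kpg.generateKeyPair();
            byte[] msg = "access approved".getBytes();
            byte[] s = ownerSign(owner, msg);
            System.out.println(agentVerify(owner.getPublic(), msg, s)); // true
        }
    }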
Main Cryptography
The powerful AES cryptography algorithm is responsible for the main cryptography procedure. Because the keys are stable and are never transmitted in any scenario of this model, this symmetric algorithm is the most appropriate choice for the main cryptography procedure.

Man-in-the-Middle Attack
One of the most important weaknesses of the RSA algorithm is its possible failure under a man-in-the-middle attack. In the suggested model, the possibility of such a failure is reduced to effectively zero by the use of an Agent. An attacker can attack by sitting between the data owner and the Agent, between the data applicant and the Agent, or between the data owner and the data applicant. However, in none of these cases can the attacker break the encryption and access the cloud server, because of the dual encryption and the secure transmission between all entities.

Discrete Logarithm Attack
In the proposed model, using AES for the main data decreases the possibility of a discrete logarithm attack. Furthermore, the AES key is re-encrypted with RSA-2048, which increases the resistance to this attack.

CONCLUSION
Given the importance of security issues in cloud computing environments, an efficient and reliable user authentication and data protection model was presented in this paper to increase the reliability of this emerging technology. Accordingly, two encryption procedures were established in an independent middleware (Agent) to perform user authentication, access control, and data protection in cloud servers. AES was used as the symmetric cryptography algorithm in cloud servers, and RSA was used as the asymmetric cryptography algorithm in Agent servers. The theoretical evaluation of the proposed model showed that its resistance to possible attacks and unpredictable events was enhanced considerably in comparison with similar models, because dual encryption and an independent middleware were used during the user authentication and data protection procedures.

REFERENCES
[1] X. Tan, "The Issues of Cloud Computing Security in High-Speed Railway," in Proc. of International Conference on Electronic and Mechanical Engineering and Information Technology (EMEIT), 2011, pp. 4358-4363.
[2] F. Fatemi Moghaddam, M. Ahmadi, S. Sarvari, M. Eslami, and A. Golkar, "Cloud Computing Challenges and Opportunities: A Survey," in Proc. of International Conference on Telematics and Future Generation Networks (TAFGEN), Kuala Lumpur, Malaysia, May 2015.
[3] D. G. Chandra and R. S. Bhadoria, "Cloud Computing Model for National E-governance Plan (NeGP)," in Proc. 4th International Conf. on Computational Intelligence and Communication Networks (CICN), Mathura, 2012, pp. 520-524.
[4] F. Fatemi Moghaddam, M. T. Alrashdan, and O. Karimi, "A Comparative Study of Applying Real-Time Encryption in Cloud Computing Environments," in Proc. of IEEE 2nd International Conference on Cloud Networking (CloudNet), San Francisco, USA, November 2013.
[5] F. Fatemi Moghaddam, S. Gerayeli Moghaddam, S. Rouzbeh, S. Kohpayeh Araghi, N. Morad Alibeigi, and S. Dabbaghi Varnosfaderani, "A Scalable and Efficient User Authentication Scheme for Cloud Computing Environments," in Proc. of IEEE Region 10 Symposium, 2014, pp. 508-513.
[6] A. J. Choudhury, P. Kumar, M. Sain, L. Hyotaek, and J. L. Hoon, "A Strong User Authentication Framework for Cloud Computing," in Proc. of IEEE Asia-Pacific Services Computing Conference (APSCC), Jeju Island, South Korea, 2011, pp. 110-115.
[7] J. H. Yang, Y. F. Chang, and C. C. Huang, "A User Authentication Scheme on Multi-Server Environments for Cloud Computing," in Proc. of 9th International Conference on Information, Communications and Signal Processing (ICICS), Tainan, 2013, pp. 1-4.
[8] C. Tien-Ho, Y. Hsiu-lien, and S. Wei-Kuan, "An Advanced ECC Dynamic ID-Based Remote Mutual Authentication Scheme for Cloud Computing," in Proc. 5th FTRA International Conference on Multimedia and Ubiquitous Engineering (MUE), Loutraki, Greece, 2011, pp. 155-159.
[9] L. B. Jivanadham, A. K. M. M. Islam, Y. Katayama, S. Komaki, and S. Baharun, "Cloud Cognitive Authenticator (CCA): A Public Cloud Computing Authentication Mechanism," in Proc. International Conference on Informatics, Electronics & Vision (ICIEV), Dhaka, Bangladesh, 2013, pp. 1-6.
[10] J. H. Yang and P. U. Lin, "An ID-Based User Authentication Scheme for Cloud Computing," in Proc. of Tenth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), Kitakyushu, Japan, 2014, pp. 98-101.
[11] M. Hajivali, F. Fatemi Moghaddam, M. T. Alrashdan, and A. Z. M. Alothmani, "Applying an Agent-Based User Authentication and Access Control Model for Cloud Servers," in Proc. IEEE International Conference on ICT Convergence (ICTC), Jeju Island, South Korea, 2013, pp. 807-812.
Recommendations of IT Management in a Call Centre
Ibrahim Bala Muhammed, Kamalanathan Shanmugam and Naresh Kumar Appadurai
Asia Pacific University College of Technology and Innovation, Kuala Lumpur, Malaysia
[email protected], [email protected], [email protected]
Abstract—Call centers have fast become the vital point of contact for customer service and the creation of new revenue in various industries. Nowhere is this growth in the importance of call centers more apparent than in the telecommunications industry. This paper presents the challenges faced by both employees and top-level management in the execution of information technology management, together with recommendations that clearly indicate the importance of information technology in achieving First Call Resolution (FCR) and thereby creating a high-performance call center environment. A case study is undertaken of SCICOM (MSC) Berhad, a leading call center service provider.
I. INTRODUCTION
Nearly all businesses are involved in providing information and assistance to existing and prospective customers. Recently, the low cost of telecommunications and information technology has made it economically feasible to consolidate such information delivery functions, leading to the development of groups dedicated to handling customer phone calls through the establishment of call centers. The modern call center organization invests more in technology than most departments in the same spectrum. Investment in call center technology has mainly been aimed at First Call Resolution (FCR), which helps to enhance customer satisfaction and improve customer relationship management.

II. RESEARCH OBJECTIVES
This research paper seeks to establish the challenges of information technology management that deter FCR, as perceived by the management and staff employees of the organisation; recommendations addressing those challenges will then be established.

III. BACKGROUND
In call centers, inbound customer services assist the company behind a product in administering customer inquiries and support; the call center's job is to answer live calls from customers and provide them with the support needed for that product (First Call Resolution).

This paper emphasizes the challenges of information technology management based on Customer Relationship Management (CRM) in a call center seeking First Call Resolution (FCR), as perceived by the top-level management and the employees of the organization, and accordingly provides recommendations for information technology management, highlighting the various challenges and the measures needed to address them.

A call center is a facility used for answering calls from clients or placing a large volume of calls through a telephony system. Whereas inbound customer service supports a product's customers directly, an outbound call line manages telesales, governmental contributions or non-profit solicitation, financial debt collection, and market research, and gives customers assurance calls as follow-up to previously made calls. Even though outbound calls are as important as incoming calls, the current research paper focuses only on incoming calls, because it is the incoming calls that must meet First Call Resolution, and the organisation under study, SCICOM, considering the SingTel project, mainly handles incoming rather than outbound calls. A call center operation also includes sending emails to customers, live remote software support, collecting and managing letters, and at the same time achieving customer satisfaction (Subramanian, 2008).
IV. OVERVIEW OF A CALL CENTER
A case study is undertaken of SCICOM (MSC) Berhad, an organisation incorporated in 1997 in Malaysia and a Publicly Listed Company (PLC) on the main board of Bursa Malaysia. SCICOM has been a PLC since 2005. The organisation is one of the leading outsourcing and services companies, specializing in customer contact management and call centre solutions.

The call centers are located in Kuala Lumpur, Colombo and Jakarta, servicing both large local conglomerates and multi-national clients. SCICOM supports various sectors of the economy, including Central Government, Corporate, Education, Emergency Services, Financial Services, Health, Insurance, Local Government, Retail, Retail Banking, Telecommunications, Media, Transport and Travel. They offer 24 x 7 x 365 operations and provide services for over 35 blue-chip clients, supporting customers from over 89 countries from the office complexes in Kuala Lumpur, Colombo and Jakarta, with 16 years of experience and a track record of supporting customers in over 40 languages.
V. STATEMENT OF PROBLEM
An inbound call center administering a call is sometimes faced with an inability to leverage its systems, which results in higher telephony costs by allowing only limited self-service and thereby causing longer delays before a caller reaches an analyst. Multiple phone numbers also result in misdirected calls and member frustration, with no end-to-end view of call volume. At some point, callers' issues are escalated as a result of limited IT knowledge, which can cause delays and force management to pass the bulk of the workload to only some personnel.

This paper seeks to establish the challenges of information technology management that deter FCR as perceived by the management and staff employees of the organisation, and hence to proffer recommendations for the established challenges. Despite the great increase in the acknowledgement of customer relationship management, little has been done so far on the relationship between technology, caller satisfaction, and first call resolution within the inbound customer care industry; hence the need to establish and understand the challenges of information technology management that deter FCR as seen by both management and staff.

VI. LITERATURE REVIEW
Aliyu, Sany & Rushami (2011) carried out a study to test the impact of technology-based CRM (Customer Relationship Management) on inbound call center performance. They collected data from a pool of 168 call center managers and analysed the data through structural equation modeling. The findings of the study revealed that technology-based CRM has a significant effect on first call resolution (FCR) and identified service quality, but inadequately impacts caller satisfaction through the mediating role of first call resolution. In conclusion, Aliyu et al. (2011) reported that customer contact centers, as the first point of contact for the company, depend not only on technology management but also on company policy, product quality, and customer characteristics, and according to Aliyu et al. (2011) it is unfortunate that these factors fall outside the operational control of call center activities. Both academic and industrial research report that, in attempting to establish good customer relationships and achieve FCR, companies have digitalised staff knowledge about customer issues through the establishment of computer telephony integration (CTI), fax, email, web chatting, CITRIX live remote assistance, and many others (Dean, 2009; Sin et al., 2005; Yim et al., 2005; Roland Wener, 2005, cited in Aliyu et al., 2011). Aliyu et al. (2011) present reliable data showing that the major issues affecting call centers include poor technology management and a shortage of technologically skilled employees, among many others.

A. EFFECTIVE INFORMATION TECHNOLOGY
Compared with traditional call centers, today's call centers are filled with technology. From the moment the customer picks up the phone until the resolution of the call, a variety of technologies are involved. In every call center, the most critical role is the effective management of these technologies, which includes acquisition, implementation, and continuous maintenance and management. These technologies are commonly clustered as call delivery, which includes the telecommunications structure, call handling technologies, and call center management tools.

B. THE CHALLENGES OF IT MANAGEMENT IN A CALL CENTRE
One of the most important components required in an organization is a call center strategy aimed at delivering the best possible experience throughout every interaction while managing for minimum costs. Furthermore, today's customers are highly demanding and need to communicate across a frequently expanding choice of contact methods. Thus, call center managers of inbound call services confirm that their key measure of success is customer satisfaction as determined by first call resolution.

C. LACK OF TECHNICAL AND SOFT SKILLS TRAINING
The three categories of call center staff—Customer Service Representatives (CSRs), supervisors, and managers—should be well trained in the use of modern IT call center applications. An information technology consultant should authorize the training course material and training topics for the three groups of staff. The number of employees in each category, and the instruction specifications in a given call center operation, will depend on the size of the customer service center, that is, the number of "seats" available in the call center.
E. SOME INTERNAL DRIVERS OF IT MANAGEMENT IN A CALL CENTRE
The higher the annual IT spending, the more complex the technological design. This result implies that as organizations spend more on IT, they are more likely to build additional complexity into their systems; hence, the more complex the technology, the fewer calls can be handled by the Voice Response Unit (VRU), resulting in customers bailing out and choosing to deal directly with an agent. This implies that these institutions are spending their money, at least in part, on system efficiencies. The more complex the IT setup, the less customer-focused the personnel.
VII. CURRENT FRAMEWORK
The diagram below illustrates the universal technology framework in SCICOM that supports an integrated call centre. The diagram outlines the five basic layers of a SCICOM call centre and shows the entire SCICOM SingTel project framework, which is not specific to the Fibre Helpdesk alone; therefore, not all contact points will be discussed, but only the ones relevant to this study.

CUSTOMER CONTACT POINT
In a multi-channel call centre atmosphere, the following potential access channels need to be supported for customers; the sections below illustrate how this is achieved.

TELEPHONE:
Facilitated by an integrated call center system utilizing spontaneous technology that consists of telephone switches, automatic call distributors, voice processing, computer telephony interfaces, and many other customer care applications.

SELF-SERVICE:
This is enhanced by integrated Interactive Voice Response (IVR), whereby customers can surf the SingTel portal or call to get the relevant information required through a succession of connections or key presses. For example, the system provides customers with directions to their destination, that is, the respective departments and the customer service officer who can address their individual concerns.

FACE-TO-FACE:
Remote customer care applications and walk-in customer support serve this particular channel, in this case through the retail outlets and kiosks that are provided.

MAIL:
Enabled by fax and e-mail and maintained by the significant document management and recording systems required to categorise, track, and save the information. Through this channel the customer service officers can obtain, for instance, supporting pictures or soft copies of documents in support of a complaint, or to complement previous and current cases.

VIII. THE NEW INTEGRATED FRAMEWORK
A multi-channel integration has been devised whereby all systems and applications are integrated, as they connect with every part and nitty-gritty of the systems. When analyzing the new framework, it becomes apparent that no application stands on its own as an individual entity; all the systems communicate with one another, both directly and indirectly. In the existing framework there was no direct integration between the telephone system and the customer database and application server. Customer service officers had to answer calls and use their computer screens to look up information in the customer database, which is currently a cause for concern, as responses to the research interview questions report the inefficiency of the systems due to outdated applications, which cause the system to stagger when capturing information.

There were also call management issues whereby calls are channeled to the next available CCO by the Automatic Call Distribution System (ACD), configured initially by the data system installer, whose functionality cannot be changed by users other than managers; in the new framework, the system assigns calls automatically, so no human effort is needed. Though automatic system assignment is an advantage, there is a need for frequent follow-up and maintenance from IT management, or the system will cause more harm than good.
IX. LIMITATIONS OF STUDY
There are a number of limitations in this study, as in any other study. The first is that this research has practically assessed a single call centre's information technology management through a combination of questionnaires and interview questions. Two types of research were therefore carried out, a quantitative and a qualitative study. Having analyzed this study in both a qualitative and a quantitative manner, two different sets of perspectives were drawn, as the quantitative research did not yield significant results while the qualitative research yielded constructive and empirical results.

X. TESTING OF THE NEW FRAMEWORK
After examining the current information technology framework in SCICOM, specifically the SingTel project, a new framework was designed based on past research and the responses from interviews carried out in the current study.

Not all managers had the same opinion of the new framework: of the six interviewed, four (about 70%) highlighted that the improved framework would be of significant use in the day-to-day running of the call center IT operations, thereby enhancing FCR and customer satisfaction and thus improving the financial status of the organization. The other 30% opined that the significance of the new framework cannot be certain; from the outlook it appears viable, but they could not draw conclusions until it is implemented and put into practice.

According to the managers who affirmed the significance of the framework, "the more customers are satisfied, the more they are willing to continue doing business with us, thus referring more and more other customers". They reported that if customers are satisfied and get their issues resolved in one shot, there would be no need to send field technicians to customers' houses, thus saving costs.

XI. RECOMMENDATIONS
The main challenge of information technology in the SCICOM SingTel project is the management of IT systems: most of the staff acknowledged that systems do exist but are not updated or technologically comparable with what is currently on the market. The world of IT evolves daily in this era, as can be vividly seen in mobile and computer technology (Zhou, 2012). So far we have seen rapid change in operating systems, phone models, computer models, and software versions, to mention but a few. If systems are in place but not well managed and maintained, it is as good as having nothing in place. As a call center uses many systems, as shown in the call center framework, those systems need to be well managed and maintained.

Empirical support from the theoretical views shows that technology application is an important input to management; if not managed, it affects service levels and deters first call resolution (Eid, 2007). If the call center has up-to-date tools and rightly skilled staff who are well trained and knowledgeable in information technology systems and techniques, there will be a positive outcome in call handling, which in turn determines First Call Resolution. Even so, staff knowledge and skill alone do not determine customer satisfaction, let alone first call resolution; the systems in place also need to be competent enough to detect the issues customers are facing with their devices, enabling customer care officers to troubleshoot accordingly and resolve the issue immediately, without customers having to call back again.

XII. CONCLUSION
As the global IT industry continues to grow, new systems and applications are born every day, bringing challenges to the operations and management of IT in call centers and any other industry that deals with technical services. The research findings report that the main challenge faced is the evolving IT landscape, which is not catered for in the company under study, as evidenced in the research interviews. SCICOM as an organization is struggling to cope with the changes in the information technology industry. Enhanced customer satisfaction and first call resolution are unachievable without advanced information technology that can carry the systems. A similar indication was found by Aliyu et al. (2011), who highlighted that a great deal of the major issues preventing call centers from achieving FCR and customer satisfaction is poor technology management and execution.
REFERENCES
1. Abdullateffet, O. A., & Mokhtar, M. S. S. (2011). The Strategic Impact of Technology CRM on Call Center Performance. Journal of Banking and Commerce, 16(1).
2. Belfiore, B. L., Chatterley, J., & Petouhouff, N. (2012). The Impact of Technology on Call Center Performance. Cisco, Benchmark Portal, LLC.
3. Castillo, J. J. (2009). Convenience Sampling. Retrieved August 2, 2011, from Experiment Resources: http://www.experimentresources.com/convenience-sampling.html
4. Dean, A. M. (2009). The Impact of Customer Orientation of Call Center Employees on Customers' Affective Commitment and Loyalty. Journal of Service Research, 10(2), pp. 161-173.
5. Everson, A., Frei, F. X., & Harker, T. P. (2009). Effective Call Center Management: Evidence from Financial Services. Financial Institutions Center, The Wharton School, University of Pennsylvania, Philadelphia & Harvard Business School, Harvard University, Boston, 99(110).
6. Lee, Y., & Barnes, F. B. (2010). The Effects of Leadership Styles on Knowledge-Based Customer Relationship Management Implementation. International Journal of Management and Marketing Research, 3(1), pp. 1-19.
7. Mehrotra, V., Ross, K., Ryder, G., Chen, L., & Anand, K. J. (2008). Customer Satisfaction and Service Quality Measurement in Indian Call Centers. Managing Service Quality, 18(4), pp. 405-414.
8. Winiecki, D. J. (2009). The Call Centre and its Many Players. Organization Articles, 16(5), pp. 705-731.
9. Robbins, S. P. (2009). Organisational Behaviour: Global and Southern African Perspectives. Cape Town: Pearson Education South Africa (Pty) Ltd., p. 144.
10. Scicom-intl.com (2012). Board of Directors. Retrieved 27 Feb 2012 [online]: http://www.scicomintl.com/Board_of_Directors.html#Leo
11. Training.com (2011). Challenging Work. Retrieved 27 Feb 2012 [online]: http://www.leadership-and-motivationtraining.com/challenging-work.html
12. Subramaniam, L. V. (2008-02-01). "Call Centers of the Future" (PDF). I.t. magazine, pp. 48-51. Retrieved 2008-05-29.
13. "US Patent 7035699 - Qualified and targeted lead selection and delivery system". Patent Storm. 2006-04-25. Retrieved 2008-05-29.
14. Zhou, P. Y. (2012). Routing to Manage Resolution and Waiting Time in Call Centers with Heterogeneous Servers. M&SOM: Manufacturing and Service Operations Management, 14(1), pp. 66-81.
DARVENGER
(Digitally Advanced Rescue Vehicle with Free Energy Generator)
S. Sivapriyan, R. D. Jaishankar, Tamilamuthan, B. Vigenesh, M. Kaviya and K. Rajalakshmi
Sree Sastha Institute of Engineering and Technology, Chennai, India
[email protected]
Abstract— The present invention is based on robots controlled by the motions of a human controller through an exoskeleton worn by the controller. The exoskeleton converts the physical motions of the human controller and sends them as signals to the robot, which performs the parallel motions. These robots can be used in areas such as accidents, natural disasters, radioactive zones, military operations, and space expeditions; in particular, the robot makes it possible to survey areas where humans cannot survive. Many humans have risked and lost their lives saving people from disasters, terrorist activities, and expeditions into dangerous areas; by sending these robots, we keep people away from those risks. It is far more difficult and dangerous to program a robot that can perform a task entirely by itself, but with this method we can control the robot and make it act as we wish. The robot is powered by a generator which is capable of generating double the amount of electricity used to run it: one part is used for the robot and the other part is used to charge the battery which will run the generator in the next session. Thus the robot may be able to run for a long period of time without any external power supply. Solar cells can also be used to rectify the power loss during regeneration of electricity.
Keywords- Exoskeleton, Rescue robots, Parallel motion, human controlled robot
INTRODUCTION
These days technology reaches far beyond the sky, but there is no guarantee for the safety of a human life. The life of a human being is precious because it cannot be bought back; this project is carried out to ensure the safety of human beings. Robots are boundless and can survive almost any circumstance, but humans cannot; on the other hand, humans are far better at handling new situations and circumstances, while robots are limited by their programming. To overcome this we need an interface that connects humans and robots, and the exoskeleton acts as that interface. The human motions are converted and transmitted to the robot, which performs the parallel motions. Controlling a robot via human motions makes it easier to take on difficult and dangerous tasks.
SOFTWARE AND HARDWARE OF DARVENGER
The robot is programmed in the C language using the Arduino software, and the programs are uploaded into the microcontroller. The robot is fitted with hardware components including an Arduino board, servo motors, DC motors, sensors, a camera, and the generator.
WHAT IS DARVENGER?
A Darvenger is a humanoid robot controlled by human motions. It is programmed to store the data sent from the exoskeleton worn by the human controller and to retrieve that data when it faces the same circumstances again; in this way we may be able to teach robots. It can analyze the performance of a human for a specific task. These robots are capable of performing a rescue operation controlled by humans without their presence in the area. Using this technology we can teach a single robot and update the data to all, like sending one mail to several people at a time.

Fig1: Transmission part of Darvenger (potentiometer, Arduino, XBee)
In the diagram of Fig. 1, the potentiometer converts the physical motion into analog signals, which are received by the Arduino microcontroller and transmitted to the robot by the XBee module.

Fig2: Receiver part of Darvenger (XBee, Arduino, servomotor)

In the diagram of Fig. 2, the XBee receives the signals from the exoskeleton and sends them to the Arduino, which rotates the servomotor.
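To illustrate the transmit-side signal flow of Fig. 1, here is a minimal Python sketch of the mapping and framing logic; the actual firmware is Arduino C per the paper, and the frame layout, joint IDs, and the use of XBee transparent mode are illustrative assumptions, not details from the original design.

import struct

ADC_MAX = 1023          # 10-bit Arduino ADC reading from a joint potentiometer
SERVO_MAX_DEG = 180     # hobby servo travel

def pot_to_servo_angle(adc_value):
    # Map a joint potentiometer reading to the matching servo angle on the robot.
    return adc_value * SERVO_MAX_DEG // ADC_MAX

def frame(joint_id, adc_value):
    # Pack one joint update as 3 bytes: sync byte, joint id, servo angle.
    # An XBee pair in transparent mode simply carries these bytes over the air.
    return struct.pack('BBB', 0xFF, joint_id, pot_to_servo_angle(adc_value))

# e.g. an elbow potentiometer reading of 512 commands the servo to ~90 degrees
assert frame(2, 512) == b'\xff\x02Z'   # 0x5A == 90

On the receiver side the same three bytes would be unpacked and the angle written to the corresponding servo, mirroring the Fig. 2 flow.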
POWER GENERATOR
This generator is capable of producing double the amount of electricity used to run it: one part is used for powering the entire system and the other part is used to charge the battery, which runs the generator in the next session. Thus electricity can be recycled again and again. There will be some loss of electricity, but solar power can be used to balance it.

APPLICATIONS OF DARVENGER
Rescue operations:
These robots can be used in rescue operations such as fire accidents, floods, earthquakes, volcanic eruptions, and tsunamis. They can rescue people without putting rescuers at risk.

Building construction:
They can also be used in building construction, which reduces human effort; they can also work around the clock.

Military operations:
They can be used in the military for combat, carrier, and medical support roles.

Transportation:
In transportation, these robots can carry heavy objects from one place to another. They can also be used as delivery robots that ship the products ordered by customers in online shopping.
DRAWBACKS
The robot cannot operate when the camera fails to send data to the receiver, or when an error occurs. A servo motor's load capacity cannot be increased; hydraulic actuators can be used instead where higher loads are required.
CONCLUSION
Today many humans risk their lives to save fellow humans, and many have lost their lives doing so. By using these robots we can keep people out of danger; in particular, these robots will help us survey areas where humans cannot survive. Thus the Darvenger will be a better companion for human beings.
An investigation of a printed array antenna for the 900 MHz band
Gyoo-Soo Chae
Division of Information & Communication Eng., Baekseok University, Korea
[email protected]
Abstract—In this paper, a printed inverted-F array antenna for 900 MHz band RFID is investigated. The presented antenna has four radiating elements and is fabricated on a plastic substrate. The four inverted-F antennas are sequentially fed to generate a circularly polarized wave. We have performed numerous simulations to achieve precise circular polarization and miniaturization. The simulation study is done using CST MWS and shows an S11 of 12 dB and a gain of 3.46 dBic over the 902-928 MHz band. In addition, further parametric studies are carried out to improve the radiation performance. The effect of a parasitic element in enhancing the gain and bandwidth of patch antennas is demonstrated. We present simulation results and discuss design parameters and their impact on the antenna performance.
Keywords-printed antenna; inverted-F; UHF; simulation; RFID
INTRODUCTION
Because of their utility for a wide variety of wireless
applications, antenna miniaturization techniques have been
a topic of great research interest for many years [1-2].
However, because of their compact size, the antennas are
generally not efficient radiators and they have narrow
bandwidths. There have been many efforts to overcome the
conflicting performance characteristics such as efficiencies,
bandwidths, and directivities, using various structures [3-4].
When an RFID tag comprising an antenna and a chip is
located in the reading zone of the reader antenna, the tag is
activated and interrogated for its content information by the
reader. The querying signal from the reader must have
enough power to activate the tag chip to perform data
processing, and transmit back a modulated string over a
required reading distance. Since the RFID tags are always
arbitrarily oriented in practical usage and the tag antennas
are normally linearly polarized, circularly polarized reader
antennas have been used in UHF RFID systems for ensuring
the reliability of communications between readers and tags
[5-6]. Nowadays there are several portable UHF RFID
readers available on the market. In many of these reader
units the reader antenna is placed into an external antenna
module [7]. A typical single element inverted-F antenna has
a linearly polarized far-field pattern. For a wide range of
circular polarization to cover as many signals from RFID
tags as possible, four inverted-F elements are printed at each corner of a square substrate and fed with equal amplitudes and quadrature phases (0°, 90°, 180° and 270°) [8-9]. This also enhances the read range when a circularly polarized reader antenna is used to eliminate tag orientation sensitivity. This paper deals with a miniaturization technique applied to a compact antenna structure; the original structure is classically considered an inverted-F antenna. In this paper, we propose a small UHF RFID antenna with a parasitic patch. The design procedure and measured results are presented here.
ANTENNA DESIGN
The antenna and feeding structure design are based on
our previous work [1]. The proposed antenna has been
simulated using CST Microwave Studio. The geometry of
the inverted-F antenna with a parasitic element and feeding
structure are shown in figure 1. The meandered antenna
elements are printed on the plastic substrate with the size of
60(W) × 60(L) × 15(H) mm. The length and width of the antenna element are chosen to be 96 mm and 3 mm, respectively. There is a shorting post which is used to improve the impedance matching performance. The distance between the feeding post and the matching line is chosen to be 3 mm, and the thickness of the matching line is also 3 mm. The feeding network is fabricated on the bottom of the antenna and implemented to produce equal amplitudes and four different phases (0°, 90°, 180° and 270°). Figure 2 shows the S11 for the original inverted-F array antenna. The electric energy density for the original inverted-F array antenna is shown in figure 3.
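As a rough sanity check on these dimensions, the sketch below (a back-of-envelope calculation, not part of the CST study; it assumes the meandered element behaves like a quarter-wave inverted-F resonator in air) compares the 96 mm element length with a quarter wavelength at the band centre.

# Rough electrical-length check for the meandered element.
c = 3e8                      # speed of light, m/s
f = 915e6                    # centre of the 902-928 MHz RFID band
quarter_wave = c / f / 4
print(f"lambda/4 = {quarter_wave*1e3:.1f} mm")   # ~81.9 mm
# The printed element is 96 mm long; meandering and loading by the plastic
# substrate lower the resonant frequency, so a physical length somewhat
# above lambda/4 is consistent with resonance in this band.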
Figure 1. The geometry of the original inverted-F array antenna

Figure 2. S11 for the original inverted-F array antenna (S11 [dB] vs. frequency, 0.5-1.5 GHz)

Figure 3. Electric energy density for the original inverted-F array antenna

Figure 4 shows the geometry of the inverted-F array antenna with a parasitic patch. The parasitic element is placed at the center of the antenna. Figure 5 presents the S11 for different gap widths (1, 2, and 3 mm) of the array antenna with a parasitic patch. The S11 is similar for both antennas; however, the antenna radiating element is shorter by 3 mm. The electric energy density for the inverted-F array antenna with a parasitic patch is shown in figure 6. Figure 7 shows the radiation pattern comparison of the two antennas. The peak gain of the original array and of the array with a parasitic patch is 3.34 and 3.06 dBic, respectively.

Figure 4. The geometry of the inverted-F array antenna with a parasitic patch

Figure 5. S11 for different gap widths of the array antenna with a parasitic patch (S11 [dB] vs. frequency, 0.5-1.5 GHz)

Figure 6. Electric energy density for the inverted-F array antenna with a parasitic patch

Figure 7. Radiation pattern comparison of the two antennas (original antenna vs. antenna with parasitic element)
CONCLUSIONS
In this study, a small printed inverted-F antenna with a parasitic element for a mobile UHF RFID reader is presented. The simulation was done using CST MWS, and an antenna and feed network are fabricated based on the simulation. It is shown that the S11 of the original geometry is 17 dB with a gain of 3.34 dBic over the RFID bands. In addition, further parametric studies are carried out to improve the radiation performance. We demonstrate that the parasitic element enhances the gain and bandwidth of patch antennas, and we present simulation results for the effect of the gap between the antenna radiating patch and the parasitic element. It is clear that the mutual coupling affects the antenna performance.
ACKNOWLEDGMENT
This work was supported by 2015 Baekseok University
research fund.
REFERENCES
[1] Peng Jin and Richard W. Ziolkowski, "Broadband, Efficient, Electrically Small Metamaterial-Inspired Antennas Facilitated by Active Near-Field Resonant Parasitic Elements," IEEE Trans. on Antennas & Propag., vol. 58, no. 2, pp. 318-321, 2010.
[2] Jen-Yea Jan and Liang-Chih Tseng, "Small Planar Monopole Antenna With a Shorted Parasitic Inverted-L Wire for Wireless Communications in the 2.4-, 5.2-, and 5.8-GHz Bands," IEEE Trans. on Antennas & Propag., vol. 52, no. 7, pp. 1903-1905, 2004.
[3] Sarah Sufyar and Christophe Delaveaud, "A Miniaturization Technique of a Compact Omnidirectional Antenna," Radioengineering, vol. 18, no. 4, pp. 373-380, 2009.
[4] Tsien Ming Au and Kwai Man Luk, "Effect of Parasitic Element on the Characteristics of Microstrip Antenna," IEEE Trans. on Antennas & Propag., vol. 39, no. 8, pp. 1247-1251, 1991.
[5] Gyoo-Soo Chae, "A Design of a circularly polarized small UHF RFID antenna," Korean Convergence Society, vol. 6, no. 1, pp. 109-114, 2015.
[6] Zhi Ning Chen, Xianming Qing and Hang Leong Chung, "A Universal UHF RFID Reader Antenna," IEEE Trans. Microwave Theory and Techniques, vol. 57, no. 5, May 2009.
[7] Leena Ukkonen, Lauri Sydänheimo, and Markku Kivikoski, "Read Range Performance Comparison of Compact Reader Antennas for a Handheld UHF RFID Reader," Proceedings of the IEEE International Conference on RFID, TX, USA, pp. 63-70, March 26-28, 2007.
[8] Wang-Ik Son et al., "Design of Compact Quadruple Inverted-F Antenna with Circular Polarization for GPS Receiver," IEEE Trans. Antennas Propag., vol. 58, no. 5, May 2010.
[9] L. Soo-Ji, L. Dong-Jin, J. Hyeong-Seok, T. Hyun-Sung, and Y. Jong-Won, "Planar square quadrifilar spiral antenna for mobile RFID reader," in Microwave Conference (EuMC), 2012 42nd European, 2012, pp. 944-947.
Economic Operation Scheme of a Green Base Station
Sungwoo Bae
Dept. of Electrical Engineering, Yeungnam University, Gyeongsan, Gyeongbuk, Korea
Abstract—This paper presents a planning and operational strategy for a Li-ion battery powered base station (BTS). Wireless telecommunication service providers have strived to reduce the operating expenditure (OPEX) of their base stations because an urban BTS consumes more power as the data use of mobile subscribers increases these days. This power increase tendency is also true for a rural BTS because of its wide coverage area. Thus, in order to reduce OPEX, mobile service providers have studied the green BTS, which draws less electricity from the main power grid for its normal operation by using renewable or alternative energy sources. Because these renewable and alternative energy sources require high capital expenditure (CAPEX), a green BTS without a proper design or operational strategy may increase the total cost of ownership (TCO), which includes OPEX and CAPEX. Although such capital investment can be recovered over time, few wireless service providers seem to have focused on the TCO reduction of a green BTS in its planning and operation. Therefore, this paper proposes a design and operational strategy for a green BTS which uses a Li-ion battery to reduce its TCO. To achieve this TCO reduction, this paper considers various aspects including the BTS energy profile, electricity rates, battery health and lifetime, and the charging and discharging cycle of the BTS batteries.
Keywords – Battery Management, CAPEX, Green BTS, OPEX, TCO
INTRODUCTION
This paper presents a planning and operational strategy of a green base station (BTS) powered by a Li-ion battery. Wireless telecommunication service providers have strived to reduce operating expenditure (OPEX) because an urban BTS consumes more power as the data use of mobile subscribers increases these days. This power increase tendency is also true for a rural BTS because of its wide coverage area. Thus, in order to reduce OPEX, mobile service providers and researchers have studied the green BTS, which draws less electricity from the main power grid for its normal operation by using renewable or alternative energy sources such as solar, wind, fuel cells, and battery systems [1]-[3]. However, a green BTS without a proper design or operational strategy may increase the total cost of ownership (TCO), because these renewable and alternative energy sources typically require high capital expenditure (CAPEX). Although such capital investment can be recovered over time, few wireless network service providers and researchers seem to have focused on the TCO of a green BTS. Therefore, this paper proposes a design and operational strategy for a green BTS which uses a Li-ion battery to reduce its TCO, which includes both OPEX and CAPEX.
BTS ENERGY PROFILE AND ELECTRICITY RATES
In order to reduce the TCO of a Li-ion powered BTS, the system designer and operator of the BTS should consider its energy profile, the various electricity rates in a smarter grid, and the relation between the battery's state-of-health (SOH) and the TCO.
Energy Profile of a BTS
Figure 1 shows the power flow of a Li-ion battery powered medium-class BTS, which includes the power line from the main grid, rectifier, Li-ion battery packs, DC-powered air conditioner, baseband unit, and base transceiver station. This medium-class BTS typically consumes power ranging from 1 kW to 2 kW [4]. In order to reduce the TCO of this BTS, its energy profile should be considered first, because the charging and discharging cycle of its battery packs needs to be optimized against the energy consumption profile. The majority of BTS energy is typically consumed in radio equipment (62%), dc power (11%), and cooling devices (25%).
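To give a feel for the numbers, the short sketch below (an illustration only, assuming a constant 1.5 kW draw, the midpoint of the 1-2 kW range quoted above) converts the percentage breakdown into daily energy figures.

# Daily energy split for a medium-class BTS (assumed constant 1.5 kW draw,
# using the 62/11/25% breakdown quoted above).
load_kw = 1.5
daily_kwh = load_kw * 24
shares = {"radio equipment": 0.62, "dc power": 0.11, "cooling": 0.25}
for part, s in shares.items():
    print(f"{part}: {daily_kwh * s:.1f} kWh/day")
# radio equipment: 22.3 kWh/day, dc power: 4.0 kWh/day, cooling: 9.0 kWh/day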
In order to confine its alternative energy source, the scope of this work is limited to the battery management system of a green BTS, because the OPEX of a green BTS can be reduced by replacing a lead-acid battery with a Li-ion battery, whose charging and discharging cycle life is significantly greater. A BTS powered by a Li-ion battery can still be called a green BTS because it uses an alternative energy source (i.e., the Li-ion battery) which actively charges and discharges to reduce the OPEX of the BTS, instead of using a back-up battery such as the lead-acid battery in a traditional BTS.
The organization of this paper is as follows: Section
II discusses the energy profile in a green BTS and various
electricity rates in a smarter grid. This paper concludes in
Section III with the summary of findings.
Fig. 1 Power flow of a Li-ion battery powered BTS
Electricity Rate Structure in a Smarter Grid
In a smarter grid, various electricity rates such as time-of-use rate, day-ahead rate, hour-ahead rate, and demand response rate are expected to emerge in the market. Thus, this complex electricity rate structure should be considered to optimize a charging and discharging schedule for a Li-ion battery to reduce the OPEX of a BTS.

Time-of-Use Electricity Rate
A time-of-use (TOU) electricity rate does not price electricity at a fixed rate but with different rates according to off-peak, mid-peak, and on-peak times by season (summer and winter), as shown in Table I. This TOU rate is designed by the basic rule that the electricity price is reduced during the off-peak time and increases for the peak time, in which electricity demand increases. The off-peak, mid-peak, and on-peak times differ by season, as shown in Table I, because electricity demand differs by season. Tables I and II show an example of this TOU electricity rate [5] operated by HydroOne, an electricity service provider in Ontario, Canada. Although this TOU rate may reduce electricity demand during the on-peak time, one of its problems is that an electricity service company still cannot solve the electricity supply deficiency when demand sharply increases due to a severe daily weather change in summer or winter. Therefore, in order to handle this imbalance between electricity supply and demand, electricity service providers offer various electricity market prices such as the day-ahead rate, hour-ahead rate, demand response rate, and real-time rate.

TABLE I. TIME-OF-USE PERIOD EXAMPLE

         | Summer Pricing (May 1 ~ Oct. 31)  | Winter Pricing (Nov. 1 ~ Apr. 30)
Off-peak | 19:00 ~ 07:00                     | 19:00 ~ 07:00
Mid-peak | 07:00 ~ 11:00, 17:00 ~ 19:00      | 11:00 ~ 17:00
On-peak  | 11:00 ~ 17:00                     | 07:00 ~ 11:00, 17:00 ~ 19:00

TABLE II. TIME-OF-USE ELECTRICITY RATE EXAMPLE

         | Summer/Spring/Fall Pricing | Winter Pricing
Off-peak | 7.5 ₵/kWh                  | 7.2 ₵/kWh
Mid-peak | 11.2 ₵/kWh                 | 10.9 ₵/kWh
On-peak  | 13.5 ₵/kWh                 | 12.9 ₵/kWh
Day-ahead Electricity Rate
A day-ahead electricity rate prices tomorrow's electricity based on the electricity supply and demand a day ahead. As aforementioned, the TOU rate may encounter the problem that the electricity supply is insufficient when demand sharply increases due to a severe sudden weather change, because the TOU rate determines the electricity price based on the average electricity consumption in a year. The day-ahead electricity rate can compensate for this TOU electricity rate problem because it is based on the day-ahead weather forecast. For instance, if there is a high chance of it being cold or hot tomorrow, tomorrow's electricity demand will increase by a large amount; thus the price bid by electricity suppliers will increase based on the balance between electricity supply and demand. Therefore, the electricity price may differ within the same season, unlike the TOU rate.
Demand Response Electricity Rate
A demand response electricity rate increases the electricity reserve without a new power plant being installed. Under the demand response electricity rate, an electric customer pays a high electricity price during the peak time so that electricity demand decreases. This demand response electricity rate reduces the construction cost of a potential new power plant; for this reason, demand response is also called a virtual power plant.

Real-time Based Electricity Rate
A real-time based electricity rate prices electricity in real time based on the electricity supply and demand, similar to a day-ahead electricity rate. However, the time interval of this real-time based rate is shorter than that of a day-ahead or an hour-ahead electricity rate.
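To illustrate how the TOU structure of Tables I and II interacts with battery scheduling, the sketch below (a simplification assuming a constant 1.5 kW BTS load and a lossless battery charged entirely off-peak; real cycling losses and battery wear, i.e. the SOH impact on TCO, would reduce the saving) compares buying on-peak energy directly with shifting it to off-peak charging.

# Daily grid cost of a 1.5 kW BTS under the summer TOU rates of Tables I-II,
# with and without shifting the on-peak energy to a battery.
LOAD_KW = 1.5
RATE = {"off": 7.5, "mid": 11.2, "on": 13.5}   # cents/kWh, summer pricing
HOURS = {"off": 12, "mid": 6, "on": 6}          # hours per period, from Table I

baseline = sum(LOAD_KW * HOURS[p] * RATE[p] for p in RATE) / 100
# Battery serves the on-peak hours; that energy is bought off-peak instead.
shifted = (LOAD_KW * (HOURS["off"] + HOURS["on"]) * RATE["off"]
           + LOAD_KW * HOURS["mid"] * RATE["mid"]) / 100
print(f"baseline ${baseline:.2f}/day, off-peak shifted ${shifted:.2f}/day")
# baseline $3.57/day, off-peak shifted $3.03/day

Under a day-ahead or real-time rate the same logic applies, but the charging schedule would be re-optimized against the forecast prices rather than a fixed period table.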
CONCLUSION
This paper presented a planning and operational strategy for a green base station (BTS) powered by a Li-ion battery. In order to design such a planning strategy for a green BTS, this paper considered its energy profile and the various electricity rates in a smarter grid.
ACKNOWLEDGMENT
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (NRF-2014R1A1A1036384).
References
[1] C. Boccaletti, G. Fabbri, and E. Santini, "Innovative solutions for stand alone system powering," INTELEC 2007, pp. 294-301, 2007.
[2] S. Bae and A. Kwasinski, "Dynamic Modeling and Operation Strategy for a Microgrid With Wind and Photovoltaic Resources," IEEE Trans. Smart Grid, vol. 3, no. 4, pp. 1867-1876, Dec. 2012.
[3] S. Bae and A. Kwasinski, "Maximum power point tracker for a multiple-input Ćuk dc-dc converter," INTELEC 2009, pp. 1-5, Oct. 2009.
[4] A. Sams, "Various approaches to powering Telecom sites," INTELEC 2011, pp. 1-8, Oct. 2011.
[5] HydroOne, "The cost of Electricity Rates," [Online]. Available: http://www.hydroone.co
Design and Simulation of Microstrip Patch Antenna
for Ultra Wide Band (UWB) applications
S. K. Wong, T. H. Tan, Mastaneh Mokayef
Department of Electrical and Electronic Engineering, UCSI University, Kuala Lumpur, Malaysia
[email protected]; [email protected]; [email protected]
Abstract—Ultra wide band (UWB) antennas are widely used for various applications in communication systems, with an operating frequency ranging from 3.1 to 10.6 GHz. In this work, a microstrip patch antenna is designed and simulated for ultra wide band applications using CST Microwave Studio Suite software. Multiple slots are applied to the microstrip patch antenna design, using FR-4 material, to improve the bandwidth and directivity for ultra wide band applications. A frequency band from 4.056 GHz to 6.864 GHz, with a bandwidth of 2.8 GHz, is obtained with this microstrip patch antenna design.
Keywords- Microstrip patch antenna, reflection coefficient, Ultra Wide Band (UWB).
INTRODUCTION
Driven by the desire to transmit data wirelessly over ever greater range and distance, better antenna designs continue to be proposed and developed. Considering the mobility of a wireless telecommunication device, the antenna should be small, compact, and lightweight. The microstrip antenna, also known as the patch antenna, is a popular choice since it can be easily printed on a circuit board, with ease of fabrication. Different applications need specific types of antennas, which are developed by varying the size and shape of the patch, the type of substrate, and other specifications that have an impact on the directivity, coverage, and frequency band of a particular antenna [1,2].
Ultra wide band (UWB) antennas are generally used
for high speed data communications, radar and safety
applications [3,4]. According to Federal Communication
Commission (FCC), typical bandwidth of UWB is
ranging from 3.1 GHz to 10.6 GHz [5]. It is generally
agreed that UWB antennas with wide bandwidth will
produce a better performance. Generally, higher
bandwidth can be achieved by optimizing the antenna design with various kinds of geometries, structures, and materials with different dielectric constants and sizes [1,2].
Figure 1: Microstrip patch antenna 3D model.

The aim of this work is to design and simulate a microstrip patch antenna operating at 6.25 GHz with the aid of multiple slots.
DESIGN
CST Microwave Studio Suite is employed to simulate the microstrip patch antenna. The microstrip patch antenna is designed on a 28 × 29 mm FR-4 substrate with a thickness of 1.6 mm from the ground plane, at a 50 Ω matching impedance. Figure 1 represents the 3D model of the microstrip patch antenna. Four square slots with a side length of 3 mm are added to the antenna. The thickness of the patch, feed line, main feed, and ground plane is 0.035 mm. As shown in Figure 2, a = 13 mm, b = 8 mm, c = 4 mm, d = 2.4 mm, e = 3.16 mm, f = 3 mm, g = 0.8 mm. Two rectangular slots with lengths of h = 11 mm and i = 4 mm and a width of 0.2 mm are added to increase the bandwidth and directivity of the proposed UWB microstrip patch antenna.
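For context, the sketch below applies the textbook transmission-line design equations for a plain rectangular patch at 6.25 GHz on FR-4 (assuming a relative permittivity of 4.3, which is an assumption, not a value from the paper); this is only the standard starting point, not the authors' slotted geometry, but it yields dimensions of the same order as those above.

import math

c, f = 3e8, 6.25e9          # speed of light, design frequency
er, h = 4.3, 1.6e-3         # assumed FR-4 permittivity, substrate thickness

W = c / (2*f) * math.sqrt(2 / (er + 1))                       # patch width
e_eff = (er + 1)/2 + (er - 1)/2 * (1 + 12*h/W)**-0.5          # effective permittivity
dL = 0.412*h * ((e_eff + 0.3)*(W/h + 0.264)) / ((e_eff - 0.258)*(W/h + 0.8))
L = c / (2*f*math.sqrt(e_eff)) - 2*dL                         # patch length
print(f"W = {W*1e3:.1f} mm, L = {L*1e3:.1f} mm")              # ~14.7 mm x ~11.0 mm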
Figure 2: Top view of Microstrip patch antenna.
SIMULATION
To investigate the gain and directivity of the antenna, E-field and H-field simulations are generated. Figures 3 and 4 represent the simulated E-field and H-field, respectively. According to Figures 3 and 4, it is clearly observed that the E-field and H-field are strongest around the slots of the antenna and around the main feed.
Figure 3: Simulated E-field of Microstrip patch antenna.

Figure 4: Simulated H-field of Microstrip patch antenna.
DISCUSSION
Referring to Figure 5, the minimum magnitude of the S-parameter, also known as the reflection coefficient, is -40.02 dB at 6.248 GHz. The corresponding Voltage Standing Wave Ratio (VSWR) at 6.248 GHz is 1.02, as illustrated in Figure 6. The smaller the VSWR, the better the antenna is matched to the transmission line, and as a result more power is delivered to the antenna. An ideal antenna has a VSWR of 1.0, which indicates that no power is reflected from the antenna. Therefore, from the simulation results obtained, the proposed microstrip patch antenna design is effective in delivering power.
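The quoted figures can be cross-checked directly: converting S11 = -40.02 dB to a reflection coefficient magnitude and then to VSWR (a dimensionless ratio) reproduces the value of about 1.02, as the short sketch below shows.

# Checking the quoted match: S11 = -40.02 dB at 6.248 GHz.
s11_db = -40.02
gamma = 10 ** (s11_db / 20)          # |reflection coefficient|, ~0.0100
vswr = (1 + gamma) / (1 - gamma)     # ~1.02
reflected_power = gamma ** 2         # ~0.01% of incident power reflected
print(f"|Gamma| = {gamma:.4f}, VSWR = {vswr:.3f}")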
Figure 5: Simulated S-parameter of Microstrip patch antenna.

Figure 6: Simulated VSWR of Microstrip patch antenna.

The design of four square slots, together with the additional two rectangular slots on the microstrip patch antenna, has improved the overall performance. The inclusion of these slots alters the current flow in the patch, producing higher gain and higher efficiency, as less power is reflected back from the antenna. The bandwidth of the microstrip antenna is calculated by considering the frequency range with a reflection coefficient of less than -10 dB. From Figure 5, the calculated bandwidth is 6.864 - 4.056 = 2.808 GHz, which is larger than the required operating bandwidth of 500 MHz.

CONCLUSION
In this work, a UWB microstrip patch antenna has been proposed and simulated for the operating bandwidth from 3.1 GHz to 10.6 GHz. Two main parameters of the microstrip patch antenna, namely the S-parameters and VSWR, have been investigated using CST Microwave Studio software. Future work will focus on theoretical study and on comparisons using different materials with different properties and design parameters.

REFERENCES
[1] R. Garg, I. J. Bahl and P. Bhartia, Microstrip Antennas. Artech House, Norwood, 1980.
[2] J. R. James and P. S. Hall, Handbook of Microstrip Patch Antenna. Peter Peregrinus Ltd., UK, 1989.
[3] S. Zahran, Omar H. El Sayed Ahmed, Ahmad T. El-Shalakany, Sherif S. Saleh, and Mahmoud A. Abdalla, "Ultra wide band antenna with enhancement efficiency for high speed communications," Radio Science Conference (NRSC), 2014 31st National, pp. 65-72, IEEE, 2014.
[4] Siwiak, Kazimierz, Paul Withington, and Susan Phelan, "Ultra-wide band radio: the emergence of an important new technology," Vehicular Technology Conference, VTC 2001 Spring, IEEE VTS 53rd, vol. 2, IEEE, 2001.
[5] Federal Communications Commission, "In the Matter of Revision of Part 15 of the Commission's Rules Regarding Ultra-Wideband Transmission Systems," First Report and Order in ET Docket 98-153.
Comparison of Estimation Methods for State-of-Charge in Batteries
Seonwoo Jeon, Sungwoo Bae†
Dept. of Electrical Engineering, Yeungnam University, Gyeongsan, Gyeongbuk, Korea
†Corresponding Author
Abstract— SOC (State-of-Charge) estimation has become an important part of energy storage applications, and many SOC estimation methods have been developed to obtain more accurate SOC values. SOC estimation techniques are influenced by the battery temperature, the type of battery, and the external conditions. This paper analyzes and compares the strengths and weaknesses of estimation methods which have been reported by researchers. By comparing the advantages and disadvantages of each method, this paper identifies the estimation methods best suited to energy storage applications.
Keywords – Battery, SOC(State of Charge), Ampere hour counting, Open circuit voltage, Kalman filter
INTRODUCTION
With the depletion of fossil fuel energy and growing environmental problems, the need for environmentally friendly energy sources is increasing. To address this need, batteries and ESSs (Energy Storage Systems) that can store electrical energy are used in various fields such as EVs (Electric Vehicles) and HEVs (Hybrid Electric Vehicles) [1]. In these systems, it is necessary for the stabilization of the system to check the SOC (State of Charge) of the battery as it is charged and discharged. In the case of an HEV, the vehicle has to transmit accurate information about the SOC of the battery to the user, since the vehicle charges and discharges from time to time during driving. Many studies on estimating the SOC of a battery have been carried out; this paper reviews such studies to date [1]-[4] and analyzes the strengths and weaknesses of the estimation techniques which have been reported.
ESTIMATION METHODS OF STATE-OF-CHARGE
Ampere hour counting (Coulomb counting method)
This method is the most common method for
estimating a battery SOC. This estimation method is a
way to better track rapid changes of the SOC [1]. If an
initial value (SOC0) is given, the battery SOC can be
obtained directly from the result of the current integral in
the following equation:
$$\mathrm{SOC} = \mathrm{SOC}_0 + \frac{1}{C_N}\int_{t_0}^{t}\left(I_{\mathrm{batt}} - I_{\mathrm{loss}}\right)dt \qquad (1)$$
where CN is the rated capacity of the battery, Ibatt is the battery current, and Iloss is the battery current consumed by the loss reactions. This coulomb counting for SOC estimation can be used in laptops and other professional portable devices. It is a method of integrating the current drawn from the battery. Especially towards the end of charge, there can be problems such as inefficiencies in charge acceptance and losses during discharge [5]; the available energy is always less than the amount fed into the battery. Although there are these irregularities, this method is
especially used for Li-ion batteries [5]. This method takes a long time to estimate, which was considered inappropriate for accurate SOC estimation; however, it is critical for verifying the accuracy of the estimated battery state. Moreover, many portable devices, EVs, and HEVs contain the necessary computational hardware, such as a vehicle PCM (Powertrain Control Module) [6], [7].
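As an illustration of Eq. (1), a minimal Python sketch of coulomb counting is given below; the 50 Ah capacity, one-second sampling step, and loss-free charging are illustrative assumptions, not values from the paper.

import numpy as np

def coulomb_count(soc0, i_batt, i_loss, c_n_ah, dt_s):
    # Eq. (1): SOC(t) = SOC0 + (1/CN) * integral of (Ibatt - Iloss) dt.
    # Currents in A (positive = charging), capacity in Ah, time step in seconds.
    ah = np.cumsum(i_batt - i_loss) * dt_s / 3600.0
    return soc0 + ah / c_n_ah

# 1 hour of 10 A charging into a 50 Ah pack starting at 40% SOC:
i = np.full(3600, 10.0)
soc = coulomb_count(0.40, i, 0.0, 50.0, 1.0)
print(f"final SOC = {soc[-1]:.3f}")   # ~0.600

As the text notes, any bias in the measured current or an unknown initial SOC0 accumulates over time, which is why this method is often combined with a correction step such as an OCV reading or a Kalman filter.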
Open Circuit Voltage
Estimating the battery SOC from the open circuit voltage is the easiest method, although it can be inaccurate. Because a battery depends on the ambient temperature of its cells, batteries of different cell types have different chemical compositions and hence different voltage profiles [8]. At a higher temperature the open circuit voltage rises, and at a lower temperature the open circuit voltage is lower than at a higher temperature; this phenomenon applies to all battery components [9]. In addition, an error in the open circuit voltage SOC estimation occurs when the battery is disturbed by charging or discharging: the battery voltage is distorted and no longer reflects the true battery SOC. To obtain accurate estimation results, the battery needs to rest so that the cells reach equilibrium. This open circuit voltage SOC estimation works especially well for a lead-acid battery [10]. In this estimation method, the battery must be in a truly "floating" state without load when the SOC is estimated; built into an EV or HEV, the load that is present makes this instead a CCV (Closed Circuit Voltage) condition. Because the open circuit voltage method has a hysteresis characteristic, the battery SOC can be estimated with the "Takacs model" [11]. The battery in the open circuit voltage case was modeled using the Randles model [12], as shown in Fig. 1, which consists of the cell internal resistance (Rs), the polarization resistance (Rct), and the double layer capacitance (Cd) arising from the double-layer charge transfer. The battery OCV is the battery terminal voltage in the no-load steady state, and the battery charge can be expressed as a function of SOC. The battery terminal voltage (Vterminal) is expressed as (2) in accordance with the equivalent circuit of Fig. 1. In Fig. 1, the Rs, Rct, and Cd of the equivalent circuit determine this terminal voltage, and their values can be derived using (3).
Fig. 1 Randles battery equivalent circuit

$$V_{\mathrm{terminal}} = \mathrm{OCV}(\mathrm{SOC}) - i\left(R_S + R_{ct}\left(1 - e^{-t/\tau}\right)\right) \qquad (2)$$

$$R_S = \frac{V_1}{i}, \qquad R_{ct} = \frac{V_2}{i}, \qquad C_d = \frac{\tau}{R_{ct}} \qquad (3)$$
Kalman Filter
A Kalman filter is an algorithm that uses a series of estimations over time, together with inaccurate factors including noise, and produces values of unknown variables that tend to be more accurate than those based on a single estimation. By modeling the battery system to include the desired uncertain values in its SOC, the Kalman filter can be used to estimate their values. An advantage of the estimation method using the Kalman filter is that it can automatically provide an estimate in a dynamic state. However, the EKF (Extended Kalman Filter) is mainly used because of the nonlinear characteristics of the battery. Although it has the advantage that the initialization problem can be solved, it has the disadvantage that the SOC estimation time increases as the number of state variables increases [13]. The SOC estimation method using the EKF requires a battery model that can represent the dynamic state exactly [14].
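To make the idea concrete, here is a minimal one-state Kalman filter sketch in Python that fuses a coulomb-counting prediction with an OCV-based voltage measurement. The linear OCV(SOC) curve, noise values, and capacity are illustrative assumptions only; a true EKF, as discussed above, would linearize a nonlinear OCV curve and an RC battery model such as the Randles circuit.

import numpy as np

C_N = 50.0 * 3600          # assumed capacity in coulombs (50 Ah)
dt, Q, R = 1.0, 1e-8, 1e-3 # time step, process and measurement noise (assumed)

def kf_step(soc, P, i_batt, v_ocv_meas):
    # predict: coulomb-counting model soc' = soc + i*dt/CN
    soc_pred = soc + i_batt * dt / C_N
    P_pred = P + Q
    # update: assumed linear measurement v = 3.2 + 0.8*soc, so H = 0.8
    H = 0.8
    K = P_pred * H / (H * P_pred * H + R)      # Kalman gain
    soc_new = soc_pred + K * (v_ocv_meas - (3.2 + 0.8 * soc_pred))
    return soc_new, (1 - K * H) * P_pred

soc, P = 0.5, 0.1          # deliberately wrong initial SOC guess
for _ in range(600):       # resting battery: true SOC 0.8, OCV reading 3.84 V
    soc, P = kf_step(soc, P, 0.0, 3.84)
print(f"estimated SOC = {soc:.3f}")   # converges toward 0.8

This also illustrates the advantage noted in the text: the filter recovers from a wrong initial SOC, which pure coulomb counting cannot do.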
CONCLUSION
This paper gave a short overview of estimation methods for battery SOC; Table I summarizes this overview. From this overview, the Ampere hour counting method is the most used technique for all energy storage systems, because it can transfer SOC information from a battery most directly and easily.
Table I. Summary of SOC estimation methods

Method               | Application                              | Advantage              | Disadvantage
Ampere Hour Counting | All energy storage systems (PV, EV, HEV) | Online, easy, accurate | Sensitive to parasitic reactions; cost-intensive for accurate current measurement; needs a model for the losses
Open Circuit Voltage | Lead-acid, Li-ion, Zn/Br                 | Online, simple         | Low dynamic; needs a long rest time; sensitive to temperature
Kalman Filter        | Dynamic applications (HEV)               | Online, dynamic        | Needs a suitable battery model; problem of initial parameters
ACKNOWLEDGMENT
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (NRF-2014R1A1A1036384).

References
[1] S. Piller, M. Perrin, A. Jossen, "Methods for state-of-charge determination and their applications," Journal of Power Sources, vol. 96, no. 1, pp. 113-120, Jun. 2001.
[2] J. H. Aylor, A. Thieme, B. W. Johnson, "A battery state-of-charge indicator for electric wheelchairs," IEEE Trans. Ind. Electron., vol. 39, no. 5, pp. 398-409, Oct. 1992.
[3] F. Huet, "A review of impedance measurements for determination of the state-of-charge or state-of-health of secondary batteries," Journal of Power Sources, vol. 70, pp. 59-69, Jan. 1998.
[4] F. Pei, K. Zhao, Y. Luo, X. Huang, "Battery Variable Current-discharge Resistance Characteristics and State of Charge Estimation of Electric Vehicle," Intelligent Control and Automation, WCICA 2006, 6th World Congress, vol. 2, pp. 8314-8318.
[5] K. S. Ng, C. S. Moo, Y. P. Chen, Y. C. Hsieh, "Enhanced coulomb counting method for estimating state-of-charge and state-of-health of lithium-ion batteries," vol. 86, pp. 1506-1511, Sept. 2009.
[6] A. Affanni, A. Bellini, G. Franceschini, P. Guglielmi, C. Tassoni, "Battery choice and management for new generation electric vehicles," IEEE Trans. Ind. Electron., vol. 52, no. 5, pp. 1343-1349, Oct. 2005.
[7] E. P. Roth, D. H. Doughty, "Development and characterization of Li-ion batteries for the FreedomCAR advanced technology development program," Vehicular Technology Conference, VTC-2005-Fall, 2005 IEEE 62nd, vol. 4, pp. 2362-2366, Sept. 2005.
[8] S. Yang, L. Scudiero, M. C. Gupta, "Temperature Dependence of Open-Circuit Voltage and UPS Study for P3HT:PCBM Organic Solar Cells," IEEE Journal of Photovoltaics, vol. 2, no. 4, pp. 512-518, Oct. 2012.
[9] A. Panday, H. O. Bansal, "Temperature dependent circuit-based modeling of high power Li-ion battery for plug-in hybrid electrical vehicles," Advances in Technology and Engineering (ICATE), 2013 International Conference on, pp. 1-6, Jan. 2013.
[10] J. H. Kim, S. J. Lee, B. H. Cho, "The State of Charge estimation employing empirical parameters measurements for various temperatures," Power Electronics and Motion Control Conference, IPEMC '09, IEEE 6th International, pp. 939-944, May 2009.
[11] N. A. Windarko, J. Choi, "SOC Estimation Based on OCV for NiMH Batteries Using an Improved Takacs Model," Journal of Power Electronics, vol. 10, no. 2, pp. 181-186, Mar. 2010.
[12] S. X. Chen, K. J. Tseng, S. S. Choi, "Modeling of Lithium-Ion Battery for Energy Storage System Simulation," Power and Energy Engineering Conference, APPEEC 2009, Asia-Pacific, pp. 1-4, Mar. 2009.
[13] J. M. Lee, O. Y. Nam, B. H. Cho, "Li-ion battery SOC estimation method based on the reduced order extended Kalman filtering," Journal of Power Sources, vol. 174, pp. 9-15, Nov. 2007.
[14] N. A. Windarko, J. Choi, G. B. Chung, "SOC estimation of LiPB batteries using Extended Kalman Filter based on high accuracy electrical model," Power Electronics and ECCE Asia (ICPE & ECCE), 2011 IEEE 8th International Conference on, pp. 2015-2022, May 2011.
CHANNEL ESTIMATION FOR MIMO-OFDM SYSTEMS
Shahid Manzoor, Sunil Govinda and Adnan Salem
Faculty of Engineering, Technology & Built Environment, EE Department, UCSI University, KL, Malaysia
[email protected]; [email protected]; [email protected]
Abstract— Channel estimation is a very important process in the operation of MIMO-OFDM systems, as it is vital for accurately estimating the Channel Impulse Response (CIR) of the channel under various conditions. As such, it is useful to have a Simulink simulation to model the behavior of the channel estimation process in a MIMO-OFDM system, in order to study the error rate of the system under different modulation and SNR conditions. As one of the most common transmitter diversity schemes used in MIMO-OFDM systems is Alamouti's Space Time Block Code (STBC), a Simulink model is developed for performing channel estimation assuming that the STBC is used; the model then generates graphs of error rates vs. SNR for different modulation schemes. The results show a great improvement in Bit Error Rate (BER) when a Reed-Solomon Forward Error Correction (RS-FEC) code is utilized.
Keywords--MIMO-OFDM, Channel estimation, Alamouti’s ST Block Code, MIMO-OFDM MATLAB®/Simulink,
IEEE802.16a.
INTRODUCTION
The IEEE 802.16 standard has been developed for WiMAX (short for Worldwide Interoperability for Microwave Access), which is intended to deliver high data rates over long distances [1]. MIMO communications has been incorporated as an option in the IEEE 802.16e version of this standard, where 2 × 1 and 4 × 4 MIMO configurations are considered (IEEE 802.16e Part 16 (2004); IEEE 802.16e/D12: Part 16 (2006)). In some cases the multiple antennas are used to carry high data rates to the customers, and in others, mostly for cellular networks, they are used for beam-forming to improve the overall network capacity, i.e., the number of supported users.
Due to the parallel nature of data transmission and the use of multiple antennas on the transmitting and receiving ends, it is necessary to simulate and study the performance of the system under various channel conditions, so that the estimation errors can be estimated and reduced. Because of this, various channel estimation techniques have been proposed in the literature for improving the estimation process at a lower computational complexity by exploiting various properties of the channel model.
One of the more common transmit diversity schemes in use for OFDM systems is Alamouti's ST Block Code (STBC), which is very useful when performing channel estimation for OFDM systems. A Simulink model is developed for investigating the error rate for different modulation schemes for systems using STBC [2]. In this paper, to reduce the error rate, a Reed-Solomon Forward Error Correction technique is used to enable some of the incorrectly transmitted bits to be corrected at the receiver side, thus reducing the error rate further. Simulations are performed to investigate the improvement in error rates when the Reed-Solomon Forward Error Correction method is used in the system.
II. SIMULATION OF ALAMOUTI'S STBC
Alamouti's Space Time Block Code for 2×2 MIMO-OFDM systems is basically a transmitter diversity scheme used to improve the quality of the received signal, whereby the same pilot is sent from both transmitters at different times. This is done to reduce errors caused by fading and noise in the communication channel. The scheme for a 2×2 MIMO system is shown in Figure 1 [3]:

Fig.1: Alamouti's STBC for 2x2 MIMO systems
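As a companion to Fig. 1, the following is a minimal Python sketch (not part of the authors' Simulink model) of Alamouti encoding and the standard combining step for a single receive antenna; the channel gains and symbols are illustrative assumptions.

import numpy as np

def alamouti_encode(s1, s2):
    # Return the 2x2 space-time block: rows = time slots, cols = antennas.
    return np.array([[s1,            s2],
                     [-np.conj(s2),  np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    # Combine two received slots into estimates of the transmitted symbols.
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

# QPSK symbols through an assumed flat, noiseless channel:
h1, h2 = 0.8 + 0.3j, -0.2 + 0.9j
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
S = alamouti_encode(s1, s2)
r1 = h1 * S[0, 0] + h2 * S[0, 1]     # received in slot 1
r2 = h1 * S[1, 0] + h2 * S[1, 1]     # received in slot 2
s1_hat, s2_hat = alamouti_combine(r1, r2, h1, h2)
# each estimate equals (|h1|^2 + |h2|^2) times the symbol: full diversity gain

The combining step assumes the channel gains are known, which is exactly what the channel estimation method of the next section provides.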
III. CHANNEL ESTIMATION METHOD
The channel estimation method to be simulated
exploits the fact that the Channel Impulse Response
(CIR) length is usually shorter than the cyclic prefix
length. This means that the CIRs of all the channels can
be separated easily from a mixture of CIRs in the time
domain, by taking the signal at different time durations to be the CIR for a different channel. The theory behind it is as follows.

First, based on Alamouti's STBC scheme, the same preamble/pilot is transmitted at each transmitter at different times. Assuming that the preamble signal, x(n), is transmitted on the first transmitter, and n0 is the number of samples the preamble is cyclically rotated before being sent on the second transmitter, the received signal, r1(n), at the first receiver can be written as:

r1(n) = x(n) * h11(n) + x(n - n0) * h21(n)    (1)

Here, h11(n) and h21(n) are the Channel Impulse Response (CIR) between transmitter 1 and receiver 1, and the CIR between transmitter 2 and receiver 1, respectively.

If we calculate the Discrete Fourier Transform of r1(n), we obtain:

R1(k) = X(k)H11(k) + X(k)H21(k)e^(-jωn0)    (2)

In this case, H11(k) is the Channel Frequency Response (CFR) for the channel between transmitter 1 and receiver 1, and H21(k) is the CFR for the channel between transmitter 2 and receiver 1.

Dividing by X(k) gives:

Y1(k) = H11(k) + H21(k)e^(-jωn0)    (3)

Taking the Inverse Discrete Fourier Transform of Y1(k), we obtain:

y1(n) = h11(n) + h21(n - n0)    (4)

If the CIR length is shorter than the cyclic prefix length, then the CIRs for transmitter 1 and receiver 1 (h11(n)) and for transmitter 2 and receiver 1 (h21(n)) will be sufficiently separated in time to be separated from the mixture y1(n) [4].
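The separation described by (1)-(4) can be illustrated numerically. The following is a minimal Python/NumPy sketch (the authors' model is in Simulink; this is only an illustration), in which a known pilot, two short CIRs, and a cyclic shift n0 reproduce the time-domain split of y1(n). All names and parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N  = 64          # pilot length (one OFDM symbol)
L  = 8           # CIR length, shorter than the cyclic prefix
n0 = N // 2      # cyclic shift of the pilot on transmitter 2

# Known pilot with unit-magnitude spectrum, so dividing by X(k) is safe.
X = np.exp(2j * np.pi * rng.random(N))
x = np.fft.ifft(X)

h11 = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2 * L)
h21 = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2 * L)
H11 = np.fft.fft(h11, N)
H21_shifted = np.fft.fft(np.roll(np.pad(h21, (0, N - L)), n0))  # H21(k)e^(-jwn0)

R1 = X * H11 + X * H21_shifted   # eq. (2): received pilot spectrum
Y1 = R1 / X                      # eq. (3): divide by the known pilot
y1 = np.fft.ifft(Y1)             # eq. (4): y1(n) = h11(n) + h21(n - n0)

h11_est = y1[:L]                 # first half of the time window
h21_est = np.roll(y1, -n0)[:L]   # second half, shifted back by n0
print(np.allclose(h11_est, h11), np.allclose(h21_est, h21))  # True True

With additive noise the two windows contain noisy copies of the CIRs rather than exact ones, which is what the BER results in Section IV quantify.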
Simulink Modeling
An end-to-end IEEE 802.16 OFDM MATLAB®/Simulink model is created to simulate the channel estimation process of the system using STBC, and includes the following blocks.

A.1 Pilot Sequence Generator block
The purpose of the Pilot Sequence Generator block is to generate the pilot and data sequences, as shown in Figure 2. The Generate Pilot and Generate Data MATLAB® Function blocks generate the pilot and data sequences, which can be changed by editing the code in the respective MATLAB Function blocks. A pilot frame is sent once every 5 frames, with the other 4 frames being data frames; this can also be set by changing the Frames per Pilot input to the block.

Fig.2: Pilot Sequence Generator

A.2 OFDM Transmitter block
The purpose of the OFDM Transmitter block is to generate the shifted version of the pilot sequence, based on the number of shift samples passed in as input to the block, as shown in Figure 3 [5].

Fig.3: OFDM Transmitter Block

A.3 CIR Sequence Generator block
The CIR Sequence Generator block generates the Channel Impulse Response of the channel between the transmitter and the receiver, so that the received signal can be generated. The contents of the block are as shown in Figure 4, for the CIRs of the channels between the first transmitter and the first and second receivers, respectively.

Fig.4: CIR Sequence Generator block

A.4 Channel
The Channel block simulates the effect of the channel on the transmitted signal. The channel is modeled as the convolution of the transmitted signal with the channel impulse response (CIR), in the presence of additive noise, as shown in Figure 5.

Fig.5: Channel
A.5 OFDM Receiver block
The OFDM Receiver block is used to simulate the effects of fading and noise on the channel. Different blocks are used to investigate the effects of the different channel conditions on different modulation types, as shown in Figure 6. As can be seen from the figure, different blocks are used for the distortions applied to the different modulation types, such as QPSK, BPSK, 16-QAM, and 16-PSK. Each of the modulation blocks adds a certain type of fading or noise (AWGN, Rayleigh fading, Rician fading, or a MIMO channel) to the system. The contents of one of the modulation blocks (in this case QPSK) are shown in Figure 7. The system first converts the data to be transmitted into binary bits before sending them to the QPSK encoder; the different types of fading/noise are then added to the modulated signal before it is decoded by the QPSK decoder. In order to improve the error rate of the system further, a Reed-Solomon Forward Error Correction encoder and decoder are applied before modulation and after demodulation, respectively.
A.6 Channel Estimation (STBC) block
The Channel Estimation (STBC) block performs channel estimation for the system by calculating the mixture of the Channel Impulse Responses (CIR) between the first transmitter and the first receiver, h11(n), and between the second transmitter and the first receiver, h21(n) [6-9]. The details of the block are as shown in Figure 8.
Fig.6: OFDM Receiver block
IV. RESULTS
Simulations are performed with the developed system to plot the Bit Error Rate (BER) vs. SNR for different modulation types and different channel distortions. Figures 9, 10, and 11 show the results for the Rayleigh fading, Rician fading, and SUI distortion channels, respectively, for the model without RS-FEC, for the different modulation types. Figures 12, 13, and 14 show the results for the Rayleigh fading, Rician fading, and SUI channels, respectively, with RS-FEC utilized in the system, for the different modulation types.
Fig.7: QPSK block
V. CONCLUSION
In this paper, channel estimation based on STBC in a MIMO-OFDM system is performed, and the performance results are shown for different modulation schemes. The attempt to improve the error rate of the system by using Reed-Solomon Forward Error Correction is successfully achieved.
Fig.8: Channel Estimation (STBC) block
Fig.9 - Fig.11: BER vs. SNR without the proposed method (without RS-FEC) for the Rayleigh fading, Rician fading, and SUI channels, respectively. Each plot shows BER (log scale) versus SNR in dB for QPSK, BPSK, 16-QAM, and 16-PSK.

Fig.12 - Fig.14: BER vs. SNR with the proposed method (with RS-FEC) for the Rayleigh fading, Rician fading, and SUI channels, respectively, for the same modulation types.

REFERENCES
[1] IEEE 802.16a, "IEEE Standard for Local and Metropolitan Area Networks. Part 16: Air Interface for Fixed Broadband Wireless Access Systems - Medium Access Control Modifications and Additional Physical Layer Specifications for 2-11 GHz," 2003.
[2] G. D. Durgin, Space-Time Wireless Channels. Prentice Hall, 2003.
[3] S. M. Alamouti, "A Simple Transmit Diversity Technique for Wireless Communications," IEEE Journal on Selected Areas in Communications, vol. 16, pp. 1451-1458, 1998.
[4] M. Belotserkovsky, "An Equalizer Initialization Algorithm for OFDM receivers," Digest of Technical Papers, International Conference on Consumer Electronics, pp. 372-373, 2002.
[5] Ramjee Prasad, OFDM for Wireless Communications Systems. Artech House, Inc.
[6] Micheal Drieberg, Yew Kuan Min and Varun Jeoti, "Simulation of 1x1, 2x1 and 2x2 MIMO-OFDM: A Case Study in IEEE802.16a," Wireless 2004, The 16th International Conference on Wireless Communications, Calgary, Canada, 12-14 July 2004.
[7] Micheal Drieberg, Yew Kuan Min and Varun Jeoti, "A simple channel estimation method for MIMO-OFDM in IEEE802.16a," IEEE, 0-7803-8560-8, 2004.
[8] Yuning Wan et al., "Channel Estimation in DCT Based OFDM," The Scientific World Journal, vol. 2014.
[9] I Gede Puja et al., "An RF Signal Processing Based Diversity Scheme for MIMO-OFDM Systems," IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E95-A, no. 2, pp. 515-524, 2012.
Smart load management of Electric Vehicles in distribution
and residential networks with Synchronous Reference Frame
Controller
Saeid Gholami Farkoush, Sang-Bong Rhee
Department of Electrical Engineering, Yeungnam University, Gyeongsan-si, Korea
[email protected]; [email protected]
Abstract—Utilization of Electric Vehicles (EVs) has gained popularity in recent years due to growing concerns about fuel depletion and increasing petrol prices. Random, uncoordinated charging of multiple EVs in residential distribution feeders at moderate penetration levels is expected in the near future. This paper explores the detrimental impacts of random EV charging on the bus load voltage profiles of unbalanced smart grids. It describes a high performance voltage controller for the EV charging system and proposes a synchronous reference frame controller scheme to compensate for the voltage distortion and unbalance in the distribution system due to the EV charger. The proposed scheme is able to completely eliminate the negative sequence voltage distortion due to the EV charger system. In order to compensate for the effects of the EV charger, a synchronous reference frame controller with a negative sequence computation block is proposed. The effectiveness of the proposed scheme has been investigated and verified through computer simulations on a 22.9 kV grid.
Keywords- Synchronous Reference Frame Controller; electric vehicles (EVs); SVC; Unbalanced Load
INTRODUCTION

Electric vehicles (EVs) could be an important contribution to the reduction of greenhouse gases in the transport sector. EVs are expected to have a large share in the future of the transportation system, which will cause an additional load on the electric grid, and concerns have been raised about the impacts of a large fleet of EVs on the electricity distribution grid [1].
The increasing use of EVs creates detrimental effects and degrades the quality of the power supplied from the utility to the customer. These EVs result in power quality problems such as poor power factor, harmonics, voltage unbalance, etc. [2]. Custom power devices are commonly used to overcome these power quality problems [2]. Based on the use of reliable high-speed power electronics, powerful analytical tools, advanced control and microcomputer technologies, Flexible AC Transmission Systems, also known as FACTS, have been developed and represent a new concept for the operation of power transmission systems [3], [4]. In these systems, the use of static VAR compensators (SVCs) with fast response times plays an important role, allowing an increase in the amount of apparent power transferred through an existing line, close to its thermal capacity, without compromising its stability limits.
The effects of EV infiltration on voltage drop, power loss and costs in distribution networks have already been studied in [5-10] through deterministic or probabilistic methods. Different approaches have been proposed in order to diminish the voltage distortion in loads when EVs are connected at the PCC. In [11], the selective harmonic compensation method using the discrete Fourier transform (DFT) and the synchronous reference frame controller (SRFC) have been proposed. However, the DFT method requires too much computation and is not feasible, and the SRFC method requires knowledge of the leading angle which compensates for the system delay. The effectiveness of the proposed scheme has been investigated and verified through computer simulations of a 22.9 kV grid.

SYNCHRONOUS FRAME CONTROLLER

In three-phase, three-wire systems (delta-connected sources and loads), unbalanced loads, for example EVs, create negative sequence currents and, likewise, negative sequence voltage distortion. An unconventional control technique has been proposed in [12] to compensate for the negative sequence voltage distortion due to unbalanced loads in three-phase, three-wire systems.

Fig. 1. The concept of SRFC for unbalanced load compensation
Fig. 1 shows the concept of the SRFC for unbalanced load compensation, which was proposed in [12]. The proposed negative sequence voltage controller (NVC) compensates for the negative sequence voltage distortion due to EVs, which cause an unbalanced load. The proposed controller is added to the controller of the SVC when an unbalanced load occurs in the system because of external conditions, for example, the charging of an electric vehicle. When charging EVs are connected to the grid, they cause unbalance in the system. At this time, to compensate for the damaging effects caused by charging EVs, the NVC is brought into the system, where it compensates for the voltage distortion due to the unbalanced load created by charging EVs. Fig. 2 shows the proposed scheme in this paper.
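The negative sequence computation block shown in Figs. 1 and 2 extracts the negative-sequence component of the measured load voltages. The paper does not state the computation explicitly; as a sketch, the standard symmetrical-component form is

$\vec{V}_N = \frac{1}{3}\left(\vec{V}_a + a^{2}\vec{V}_b + a\vec{V}_c\right), \qquad a = e^{j2\pi/3}$

where $\vec{V}_a$, $\vec{V}_b$ and $\vec{V}_c$ are the phase voltage phasors. The components $V_{Nd}$ and $V_{Nq}$ are then obtained in a reference frame rotating at $-\omega t$, consistent with the $-(\omega t)$ dq/abc blocks in the diagrams.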
Fig. 2. The proposed scheme
DESCRIPTION OF SYSTEM AND SIMULATION

Assume the SVC comprises one TCR bank and three TSC banks connected to the 22.9 kV bus via a 333 MVA, 22.9/16 kV transformer with Xk = 15% on the secondary side. The voltage droop of the regulator is 0.01 pu/100 VA (0.03 pu/300 VA). When the SVC operating point changes from fully capacitive to fully inductive, the SVC voltage varies between 1 − 0.03 = 0.97 pu and 1 + 0.01 = 1.01 pu. In the computer simulation, a 1.5 times load is connected on the A phase (for example, EVs plugged into the system), causing an unbalanced load, while the neutral point and the other phases remain normal. A 1.3 times load is also connected on the B phase while the A phase is unbalanced.
Fig. 2. NVC of the output voltage without SRFC when the A phase is unbalanced (Vna, Vnb, Vnc vs. time)
Fig. 3. NVC of the output voltage without SRFC when the A phase and B phase are unbalanced (Vna, Vnb, Vnc vs. time)
Fig. 4. NVC of the output voltage with SRFC when all phases are unbalanced (Vna, Vnb, Vnc vs. time)
Fig. 4 shows the simulation after using the SRFC, when all phases are unbalanced or connected to EVs. It can be seen from the simulation results that the proposed SRFC is able to completely eliminate the negative sequence component of the output voltage. Through these simulation results, the feasibility of the proposed control scheme is verified.
CONCLUSION

This paper has proposed an advanced synchronous reference frame control scheme for EVs with an SVC connected to the grid. The SRFC proposed in this paper is able to fully compensate for the negative sequence voltage distortion in the system. The PI controllers in the proposed scheme operate with pure DC values under the electric vehicle charging condition, in the same way as the SRFC under a balanced system, which makes it possible to provide zero steady-state error. The simulation results in this paper show that the efficiency of the system under the worst-case voltage unbalance condition is 95%, whereas with the synchronous frame controller, when EVs are connected to the grid, the efficiency of the system increases to 99%.

REFERENCES

R. A. Verzijlbergh, M. O. W. Grond, Z. Lukszo, J. G. Slootweg, and M. D. Ilic, "Network impacts and cost savings of controlled EV charging," IEEE Trans. Smart Grid, vol. 3, no. 3, pp. 1203–1212, 2012.
K. E. Stahlkopf and M. R. Wilhelm, "Tighter controls for busier systems," IEEE Spectrum, vol. 34, no. 4, pp. 48–52, April 1997.
R. Grünbaum, Å. Petersson and B. Thorvaldsson, "FACTS: improving the performance of electrical grids," ABB Review, pp. 11–18, March 2003.
N. Hingorani and L. Gyugyi, Understanding FACTS: Concepts and Technology of Flexible AC Transmission Systems. IEEE Press, New York, 2000.
L. P. Fernández, T. G. S. Román, R. Cossent, C. M. Domingo and P. Frías, "Assessment of the impact of plug-in electric vehicles on distribution networks," IEEE Trans. on Power Systems, vol. 26, no. 1, pp. 206–213, Feb. 2011.
K. C. Nyns, E. Haesen and J. Driesen, "The impact of charging plug-in hybrid electric vehicles on a residential distribution grid," IEEE Trans. on Power Systems, vol. 25, no. 1, pp. 371–380, Feb. 2010.
S. Acha, T. C. Green and N. Shah, "Effects of optimised plug-in hybrid vehicle charging strategies on electric distribution network losses," in Proc. of IEEE Transmission and Distribution Conference, pp. 1–6, 2010.
P. S. Moses, S. Deilami, A. S. Masoum and M. A. S. Masoum, "Power quality of smart grids with plug-in electric vehicles considering battery charging profile," in Proc. of IEEE PES Innovative Smart Grid Technologies Conference Europe (ISGT Europe), pp. 1–7, 2010.
A. S. Masoum, S. Deilami, P. S. Moses and A. Abu-Siada, "Impacts of battery charging rates of plug-in electric vehicles on smart grid distribution systems," in Proc. of IEEE PES Innovative Smart Grid Technologies Conference Europe (ISGT Europe), pp. 1–6, 2010.
A. Von Jouanne, P. N. Enjeti and D. J. Lucas, "DSP control of high-power UPS systems feeding nonlinear loads," IEEE Trans. on Industrial Electronics, vol. 43, no. 1, pp. 121–125, 1996.
P. Mattavelli and S. Fasolo, "Implementation of synchronous frame control for high performance AC power supplies," Proceedings of the 2000 IEEE Industry Applications Conference, vol. 3, pp. 1988–1995, 2000.
M. Pokorny, "Analysis of unbalance due to asymmetrical loads," Iranian Journal of Electrical and Computer Engineering, vol. 4, no. 1, Winter-Spring 2005.
Optimising Maximum Power Demand Using Smart Sequential Algorithm
1 Pang Jia Yew, 2 Kuan Lee Choo, 3 Liau Vui Kien, 4 Kudzai Nigel Chitewe, 5 Dennis Tan
1 Asia Pacific University of Technology & Innovation, Kuala Lumpur, Malaysia
[email protected]
2 Infrastructure University Kuala Lumpur, Selangor, Malaysia
[email protected]
3 Malaysian Invention & Design Society (MINDS), Kuala Lumpur, Malaysia
[email protected]
4 Asia Pacific University of Technology & Innovation, Kuala Lumpur, Malaysia
[email protected]
5 Everly Group Sdn Bhd., Malaysia
[email protected]
Abstract— This paper presents research involving the integration of hardware with adaptive software able to carry out load shifting following a priority order set by the user. The system is able to monitor the load curves of all connected loads whilst simultaneously comparing them with the desirable minimum load curve. The prototype is able to turn loads on or off according to priority when the maximum demand is being approached and to allow the loads back on when the loading curve is at an allowable level.
Keywords- Maximum Power Demand Controller, Arduino, Sequential Algorithm
INTRODUCTION
According to Palensky and Dietrich (2011), due to the ever-rising cost of energy, all sectors of society, whether using energy for commercial or individual purposes, are forced to implement cost-effective measures to ensure their financial stability. The implementation of schemes such as carbon pricing and/or emission trading will result in a noticeable rise in the cost of energy, as suppliers will be forced to make users incur such charges for transmission. As a result of higher charges in energy production, and the effort required to sustain production, energy suppliers penalize users who exceed a set level of maximum power as stated by contract or law. This being in effect in most countries calls for a worldwide awakening in energy usage and maximum power demand monitoring. The maximum power demand monitoring system described in this project addresses this problem by removing the need for the user to monitor the electrical system themselves and by ensuring a noticeable reduction in electricity bill charges (Palensky and Dietrich [7]).
The basis of this study is to be able to monitor and maintain the maximum demand set by an individual and hence not incur extra charges for energy use. The demand usage refers to the total amount of electrical energy being utilized per given period of operation due to the various appliances tapping into the electrical system. The demand of each is a variable affected by the time and the type of sector in society in which the user is using the energy. The demand of electrical energy usage is defined in kilowatts. The peak demand of a user is noted as the highest value of energy used during a billing period. This peak demand is monitored, and charged by the electricity power companies when exceeded. Possible solutions to this problem involve load shedding as a system approaches its peak, as mentioned by Pereira et al. [6]. Maximum demand controllers are used to monitor this trend and perform a cut-off when the set demand is being approached. The demand controller has to be connected to the facility's loads. A demand controller is a microcomputer system that can adjust the run times of connected loads for brief periods of time when the power load is bordering close to a peak demand. Demand controllers usually come equipped with a display and allow the user to select their peak demand. The device uses programming logic to determine which devices are to be turned off in order to ensure the lowest peak demand [6].

This paper is organized as follows: Section II presents the behavior and characteristics of the maximum power demand controller. Section III subsequently describes the system description of the maximum power demand control system. Section IV provides the experimental results of the maximum power demand control system and, lastly, Section V concludes the findings of this paper.
BEHAVIOR AND CHARACTERISTICS OF THE MAXIMUM POWER DEMAND CONTROLLER

Based on an evaluation of past criteria, the most applicable method for developing the intended maximum demand controller would be to ensure its ability to accept user prioritization whilst also being able to monitor and facilitate the shedding of unnecessary load, so that the user does not exceed the expected load curve. The system will have to be able to plot a desired load curve which will be compared to the load curve derived from the loads connected to the system. The desired model will be based on three main components: the monitoring unit, the decision criteria and the control unit. The proposed
183
International Conference on Information, System and Convergence Applications
June 24-27, 2015 in Kuala Lumpur, Malaysia
monitoring unit will be able to check the power utilized by each load connected to the device. The values obtained should be precise, with a very minimal margin of error, and plot-worthy. All inputs are measured in real time and registered into a database which will later be used in plotting the graph and creating user preferences. Monitoring will be carried out by sensors connected between the appliance and the Arduino control unit.
Decision criteria will be based on the time of use (TOU), user prioritization of devices, user preferences saved within a SQL database, and the load curve created by the user. The aim of the decision criteria is to allow as little user interference as possible by fully automating all decisions. This will be achieved by programming the controller and interfacing it with a database and monitoring equipment.
The Arduino will have a memory stick to store collected data. Decisions will be based upon modified code developed to operate like a SCADA system. The control unit will act upon the output of the decision criteria. After the data has been analyzed and a decision made, the Arduino will then follow the coded commands and control a relay board connected to the inputs of the connected appliances.
SYSTEM DESCRIPTION OF MAXIMUM POWER DEMAND CONTROL SYSTEM

The maximum demand controller has three main parts: the controller, the sensors, and the user interface. The block diagram of the overall logic is shown in Figure 1.

Figure 1. Block diagram of the maximum power demand control system circuitry

The control unit of the system is based on the use of a relay switch which acts as the interface between the microcontroller and the connected loads. Relays are used because they are a reliable form of remote load control and offer compatibility with most microcontrollers. The loads are turned on or off by the relay after a command from the microcontroller has been received. The chosen microcontroller gives a digital output of 5 V and 40 mA; therefore the relay must in turn be controllable by such a low-voltage switch. The schematic diagram of sequential turning on whilst avoiding load overshoot is shown in Figure 2.

Figure 2. Schematic diagram of sequential turning on whilst avoiding load overshoot

The sensing of the system was done with two different sensors, one for current and another for voltage. The current sensor used is a high-precision, Arduino-compatible Hall-effect current sensor. It measures alternating current and gives a stepped-down output voltage in analog form, which also allows measurement of the power factor of each connection. The voltage sensing is done using voltage-transformer technology: the voltage transformer takes the wall-supply AC voltage at its primary end and gives an analog voltage reading as output, stepped down to the Arduino measurement level of between 1 V and 3.65 V peak to peak. The values of the current sensor and voltage sensor are both taken as inputs into the Arduino analog input ports. The information for each load is then calculated and analyzed in real time to give the list of commands sent to the relay board. The Arduino logical process and decision procedure flow
chart is shown in Figure 3.

Figure 3. Programming flow chart
The logic behind the decision making is based on the user preferences and the time of use (TOU). This is similar to the order of preference followed by Ganu et al. [3] and Pereira et al. [6]. The system can only turn off a load according to the user preferences. When the program is running, it continuously checks the value of the power being drawn by the loads and compares it to the user's maximum demand setting. Figure 4 shows the LabView schematic diagram for measuring current and voltage from the loads.
Figure 5. Power Measurement from Current Reading and Voltage
Waveform using Arduino internal Electrical Suite
The power calculation and energy consumption for each individual load are computed using the schematic diagrams shown in Figure 5 and Figure 6.

Figure 6. Obtaining energy usage from the power calculated for an individual load
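As a rough illustration of this power and energy computation, the following is a minimal Python sketch rather than the authors' LabView/Arduino implementation; the sampling rate, waveforms and function names are hypothetical:

import numpy as np

def power_and_energy(voltage, current, fs):
    # Real power is the mean of the instantaneous power v(t)*i(t);
    # averaging implicitly accounts for the phase shift (power factor).
    real_power = float(np.mean(voltage * current))
    # Energy over the sampled window, converted from watt-seconds to kWh.
    energy_kwh = real_power * (len(voltage) / fs) / 3.6e6
    return real_power, energy_kwh

# Example: 230 V RMS, 5 A RMS, power factor 0.9 lagging, 50 Hz mains.
fs = 5000.0
t = np.arange(0, 0.2, 1 / fs)
v = 230 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)
i = 5 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t - np.arccos(0.9))
print(power_and_energy(v, i, fs))  # about 1035 W over a 0.2 s window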
This is then followed by checking whether the loads are operating during the peak hours of operation, which are between 12 and 3 pm. The final check is of the priority of the load, which is set by the user; since it is assumed that in most homesteads some appliances, like refrigerators, are never turned off, this allows the user not to monitor the system closely, as it only requires a single visit to the GUI. The GUI is built in LabView and connected to the Arduino USB port. The GUI gives the user valuable information regarding the running system, including the power being consumed, the rate of usage, loads that are drawing excessive power and the general time they have been running. The GUI also has a control panel where the user can set the priority of each load and the maximum demand of the overall supply.
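The decision logic described above (compare the total drawn power against the user's maximum-demand setting, check the 12 to 3 pm peak window, and shed the least important sheddable loads first) could be sketched as follows; this is an illustrative reconstruction, not the authors' code, and all names, fields and values are hypothetical:

from dataclasses import dataclass

PEAK_HOURS = range(12, 15)  # peak window assumed to be 12 pm to 3 pm

@dataclass
class Load:
    name: str
    power_w: float
    priority: int    # lower value = more important, shed last
    sheddable: bool  # e.g. a refrigerator would be False
    on: bool = True

def manage_demand(loads, max_demand_w, hour):
    # Total power currently drawn by the connected loads.
    total = sum(l.power_w for l in loads if l.on)
    if hour not in PEAK_HOURS or total <= max_demand_w:
        # Allow previously shed loads back on while there is headroom.
        for l in sorted(loads, key=lambda l: l.priority):
            if not l.on and total + l.power_w <= max_demand_w:
                l.on = True
                total += l.power_w
        return total
    # Breach during peak hours: shed lowest-priority sheddable loads first.
    for l in sorted(loads, key=lambda l: -l.priority):
        if total <= max_demand_w:
            break
        if l.on and l.sheddable:
            l.on = False
            total -= l.power_w
    return total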
Figure 4. LabView schematic diagram for measuring current and voltage readings from the loads
SIMULATION RESULTS

The LabView simulation result from the schematic diagram of the maximum power control system, for two AC motors, one computer and a light bulb, is shown in Figure 6. The red line in Figure 6 represents the maximum demand set as the reference point.
This maximum demand reference is set 6 to 10% below the real maximum power demand. The other four colors represent the four different loads. When all four loads are turned on at the same time, two of the loads hit the maximum power demand point, as shown in Figure 6. The loop shown in Figure 7 is activated when a breach of the maximum power demand occurs and immediately modifies the current flow into the loads so that it becomes lower than the set demand, according to the algorithm.

Figure 6. Simulation result for maximum power demand graphs when the loads exceed maximum power points

Figure 7. Schematic diagram of the maximum power demand control loop

After the activation of the loop, the system is able to keep the maximum power demand within the acceptable range, below the maximum power demand point set by the red line. The results are shown in Figure 8. All four loads now operate below the maximum power demand points.

Figure 8. Simulation result after the maximum power control system is activated.

CONCLUSION

With ever-evolving technology, more devices are being created that require an electrical supply, whilst little progress is being made in controlling the maximum power demand. This forces the cost of electrical production and supply to increase, with the cost being incurred by the end user. This forces the end user to become active and monitor their electrical usage in order to reduce electrical bills during on- and off-peak hours. This system automatically monitors the user's electrical usage and ensures they do not overload the grid during peak hours by load shifting.

The rising need for grid stability and the need for the reduction of carbon emissions force the development of technology from both consumer and supplier. This system, when used at a large scale, should be able to ensure better stability during peak hours.

This paper presents the fundamental idea of integrating hardware with adaptive software able to carry out load shifting following a priority order set by the user. The entire system is able to calculate the total power of each individual load and monitor the load curves of all connected loads whilst simultaneously comparing them with the desirable minimum load curve. The prototype is able to regulate the loads according to priority when the maximum demand is being approached and to allow the loads to operate accordingly when the loading curve is at an allowable level.
REFERENCES
Adika, C. and Wang, L. “,Smart charging and appliance scheduling
approaches to demand side management.” International Journal
of Electrical Power & Energy Systems, 57, pp.232-240, 2014.
Chang, H., “Non-Intrusive Demand Monitoring and Load
Identification for Energy Management Systems Based on
Transient Feature Analyses. Energies” , 5(12), pp.4569-4589,
2012.
Ganu, T., Seetharam, D., Arya, V., Hazra, J., Sinha, D., Kunnath, R.,
De Silva, L., Husain, S. and Kalyanaraman, S. Plug: “An
Autonomous Peak Load Controller”. IEEE J. Select. Areas
Commun., 31(7), pp.1205-1218. , 2013.
Kaira, L., Nthontho, M. and Chowdhury, S. “Achieving Demand Side
Management with Appliance Controller Devices”. IEEE, 1(14),
2014.
Macedo, M., Galo, J., de Almeida, L. and de C. Lima, A. “Demand side
management using artificial neural networks in a smart grid
environment.” Renewable and Sustainable Energy Reviews, 41,
pp.128-133,2015.
Miquel, A., Belda, R., de Fez, I., Arce, P., Fraile, F., Carlos Guerri, J., Martínez, F. and Gallardo, S. "A power consumption monitoring, displaying and evaluation system for home devices." Wave, 5(1889-8297), pp. 5-13, 2013.
Palensky, P. and Dietrich, D. “Demand Side Management: Demand
Response, Intelligent Energy Systems, and Smart Loads.” IEEE
Trans. Ind. Inf., 7(3), pp.381-388, 2011
Pereira, R., Fagundes, A., Melício, R., Mendes, V., Figueiredo, J. and Quadrado, J. "Fuzzy Subtractive Clustering Technique Applied to Demand Response in a Smart Grid Scope". Procedia Technology, 17, pp. 478-486, 2014.
Srividyadevi, P., Pusphalatha, D. and Sharma, P. “Measurement of
Power and Energy Using Arduino.” Research Journal of
Engineering Sciences, 2(10), pp.10-15,2013.
High Speed CNFET Digital Design using Simple CNFET Circuit
Structure
Kyung Ki Kim
Daegu University, Korea
Abstract—The carbon nanotube FET (CNFET) has been evaluated as one of the promising replacements for silicon in future nanoscale electronics, but no architecture based only on CNFET devices has ever been introduced, because self-assembly technology has not been developed enough to form complex carbon nanotube structures. Therefore, this paper proposes a new reconfigurable CNFET digital logic structure to cost-effectively form complex carbon nanotube structures. The novelty of the proposed paper is to develop a fast reconfigurable CNFET logic gate using only back-gate voltages as control signals in a simple silicon CMOS-like CNFET technology.
Index Terms—Carbon nanotube FET, CNFET, reconfigurable digital circuit
I. INTRODUCTION

As technology scales down, different types of materials have been experimented with. Si-MOSFET-like carbon nanotube FET (CNFET) devices have been evaluated as one of the promising replacements in future nanoscale electronics to overcome the scaling limit of bulk CMOS technology. The reason that makes CNFETs a promising device is that they are compatible with high-dielectric-constant materials and have a unique one-dimensional band structure which restrains back-scattering and makes near-ballistic operation a realistic possibility. Using this CNFET, a high-k gate oxide can be deployed for lower leakage currents while keeping the on-current drive capability (compared to the Si-MOSFET). The CNFET has a lower short-channel effect and a higher sub-threshold slope than the Si-MOSFET [1]-[7].
Despite this promising progress, CNFETs have been applied only to simple circuit designs such as SRAMs, ring oscillators, etc., because of the high fabrication cost of CNFETs and fabrication issues regarding imperfection and variability. Therefore, the fabrication of carbon nanotubes for very large digital circuits on a single substrate has not been achieved.
No architecture based only on CNFET devices has been introduced until now, because self-assembly technology has not been developed enough to form complex carbon nanotube structures.
Although several CNFET-based reconfigurable circuit
design techniques have been proposed as the main way to
cost-effectively form the complex carbon nanotube
structures [8]-[11], either the CNFET device structures for
the reconfigurable circuits are too complicated, or the
CNFET design topologies require many control signals.
Therefore, we develop a fast reconfigurable CNFET logic
gate only using back-gate voltages as control signals in
(a)
(b)
Fig. 1. CNFET structure: (a) Cross sectional view, (b) Top view.
The CNFET device has four terminals (drain, gate, source, and back-gate): a dielectric film is wrapped around a portion of the undoped nanotube in the intrinsic region, and a metal gate surrounds the dielectric, while the other nanotube regions are heavily doped for a low series resistance during the ON state. As shown in Fig. 1(a), the top-gated CNFETs are fabricated on an oxidized Si substrate that can be used as a back-gate in the CNFET. In the early 1990s, most CNFETs studied had adopted a back-gate top-contact structure [1][2], in which the nanotubes are grown
on a conducting substrate covered by an insulating layer. Two metal contacts are deposited on the nanotube to serve as the source and drain electrodes, while the conducting substrate is the gate electrode in the three-terminal device. However, these early CNFETs were found to have poor device characteristics, such as an ambipolar transistor characteristic and a gentle sub-threshold swing. In order to improve these poor device characteristics, dual-gate CNFET structures have been proposed. These structures show a MOSFET-like unipolar transistor characteristic, excellent sub-threshold slopes, and a drastically improved OFF state. Each device has one or more single-wall semiconducting carbon nanotubes. The currents of the CNFET device are controlled by adjusting device parameters such as the gate length (Lch), the number of nanotubes, the chirality vector, and the pitch between nanotubes [2]. As the gate voltage increases or decreases, the device is electrostatically turned on or off through the gate node. Figure 3 shows the impact of the back-gate voltage (VBG) on the drain current (IDS) of a 32nm NMOS CNFET; VBG increases IDS by approximately 30% depending on the top-gate voltage (VGS). In particular, a small amount of drain current can be generated by VBG at zero gate voltage. In this paper, the back-gate is deployed for the proposed reconfigurable CNFET circuits.
The drain current characteristics of a 32nm N-type CNFET are presented in Fig. 2, where the characteristics are compared to those of the N-type MOSFET. IDS (drain current) of the CNFET saturates at higher VDS (drain-to-source voltage) as VGS (gate-to-source voltage) increases, as shown in Fig. 2(a), where the amount of IDS of the CNFET is greater than that of the MOSFET, although the CNFET width is 6.35nm (5nm of pitch length and 1.35nm of diameter) and the MOSFET width is 64nm. According to the simulation results, it is possible to reduce the device size by approximately an order of magnitude if the MOSFET is replaced with the CNFET. In the sub-threshold (weak inversion) region, the characteristics of the CNFET show that IDS of the CNFET is much greater than that of the MOSFET, and the CNFET almost does not suffer from drain-induced barrier lowering (DIBL) and gate-induced drain leakage (GIDL) effects. As shown in the figure, the CNFET on-current is higher and the leakage current is lower than those of the MOSFET transistor. Figure 2(b) illustrates the IDS characteristics of the N-type CNFET in the weak inversion region, which implies that the CNFET would be a more practical solution even in sub-threshold logic design, requiring a smaller area than the MOSFET.

Fig. 2. Drain current of a 32nm N-type CNFET and a 32nm N-type MOSFET as a function of: (a) Drain-to-source voltage for different gate-to-source voltage, (b) Gate-to-source voltage for different drain-to-source voltage, where the (n,m) of the CNFET is (17,0), the number of nanotubes of the CNFET is 2, the width of the MOSFET is 64nm, the back-gate voltage is 0V, and the temperature is 25°C.
VGS (V)
Fig. 3. Drain current of a 32nm N-type CNFET as a function of Gate-tosource voltage for different back-gate voltage, where the (n,m) of the
CNFET is (17,0), the number of nanotubes of the CNFET is 2, the width
of the MOSFET is 64nm, and temperature is 25C.
III. RECONFIGURABLE CNFET DIGITAL CIRCUITS
The logic function of the reconfigurable CNFET logic gate
depends on the back-gate voltage of P-type and N-type
CNFETs, as shown in Fig. 4(a), where all four CNFETs are under the same conditions (the number of nanotubes, chirality integer vector, and so on). If the N-type and P-type
CNFETs assert a VDD signal as a back-gate, the P-type
CNFETs are weaker than N-type CNFETs as shown in Fig.
4 (b), so the output function is the same as that of a NOR
gate. On the other hand, if the N-type and P-type CNFETs
assert a GND signal as a back-gate, the N-type CNFETs are
weaker than P-type CNFETs as shown in Fig. 4(c), so the
output function is the same as that of a NAND gate. When
the input A and B are different, the output of the CNFET
logic is determined by the strength of P-type and N-type
CNFETs.
Compared to a conventional NAND or NOR logic gate, the proposed gate decreases the gate delay by more than 50% due to the reduced number of stacked transistors in the logic gate. On the other hand, the power consumption of the proposed logic gate is larger than that of the conventional logic gate due to the increased static current from the supply voltage to ground. That is, the proposed logic gate can be used more effectively for high-performance blocks rather than for low-power blocks, similar to the pseudo-NMOS logic gate. To mitigate the static current effect on the proposed logic, a new low-power reconfigurable CNFET circuit structure using an enable signal is proposed, as shown in Fig. 5.
Fig. 4. Reconfigurable CNFET circuit structure: (a) Basic cell, (b) NOR function case, (c) NAND gate case.

Fig. 5. Reconfigurable CNFET circuit structure for low power consumption.
Table 1 shows all the possible functions depending on the back-gate voltage; if the inputs A and B are connected to each other, the function is an inverter.

Table 1. Function table depending on the back-gate voltage.
A    | B    | Output (Vpb=GND, Vnb=GND) | Output (Vpb=VDD, Vnb=VDD)
Low  | Low  | High                      | High
Low  | High | High                      | Low
High | Low  | High                      | Low
High | High | Low                       | Low
Function     | NAND                      | NOR

In Fig. 5, MP3 and MN3 are used to reduce the static current of the reconfigurable CNFET gate by means of an Enable signal, although the propagation delay of each CNFET gate is increased.
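As a purely behavioral illustration of Table 1, the following logic-level sketch abstracts away the transistor strengths that actually produce this behavior (the function and argument names are ours, and the two back-gates are assumed to be driven together):

def reconfigurable_gate(a, b, back_gate_vdd):
    # Back-gates at VDD weaken the P-type CNFETs, so the gate acts as NOR;
    # back-gates at GND weaken the N-type CNFETs, so the gate acts as NAND.
    if back_gate_vdd:
        return not (a or b)   # NOR
    return not (a and b)      # NAND

# Tying the inputs together yields an inverter in either configuration.
assert reconfigurable_gate(True, True, back_gate_vdd=False) is False  # NAND(1,1)
assert reconfigurable_gate(False, False, back_gate_vdd=True) is True  # NOR(0,0)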
IV. MEASUREMENT RESULTS

Table 2 shows the preliminary simulation results of ISCAS85 benchmark circuits designed in the 32nm Stanford CNFET model at 0.3 V. The circuit delay and average power
consumption of the ISCAS85 circuits using the proposed
reconfigurable logic gate are compared with those using the
conventional logic gates. As expected, the circuit delay of
the ISCAS85 circuits using the proposed gate has been
reduced by over 30% compared to those using the
conventional gates. Hence, we can develop a high
performance and low power system in the sub-threshold
voltage region.
Based on the aforementioned reconfigurable CNFET logic
gate, our goal is to extend the logic structure to a PLA
architecture as shown in Fig. 6. In addition to the
reconfigurable interconnects of conventional PLA
architectures, reconfigurable back-gate voltage lines should
be inserted to change the function type of each CNFET logic
gate. Since the architecture employs only one gate structure
with the same conditions, the proposed PLA architecture
would be more powerful and simpler than the conventional
one.
Fig. 6. Conceptual PLA architecture using the reconfigurable CNFET logic gate.
Table 2. Simulation results for ISCAS85 benchmark circuits (VDD = 0.3 V)

Circuit | Circuit delay (sec): Conventional | New gates | Reduction (%) | Avg. power (W): Conventional | New gates | Increase (%)
C432    | 4.42E-09 | 2.89E-09 | 34.61 | 1.24E-07 | 1.78E-07 | 43.54
C499    | 1.25E-08 | 8.48E-09 | 32.16 | 6.12E-08 | 8.98E-08 | 46.73
C880    | 4.70E-09 | 2.97E-09 | 36.80 | 7.01E-08 | 10.12E-08 | 44.36
C1355   | 1.15E-08 | 6.80E-09 | 40.86 | 6.43E-08 | 9.21E-08 | 43.23
C1908   | 1.56E-09 | 8.76E-10 | 37.43 | 1.31E-07 | 1.92E-07 | 46.56
V. CONCLUSIONS

In this paper, our focus is on the development of simple reconfigurable CNFET logic gates using the back-gate voltage.
Logic gates in a 32nm CNFET process technology have
been designed to demonstrate the accuracy of the proposed
reconfigurable gate structure. The simulation results show
that the circuit delay of the ISCAS85 circuits using the
proposed gate has been reduced by over 30% compared to
those using the conventional gates. The effectiveness of the
proposed gate and its utilization in PLA architecture will be
investigated in the future.
ACKNOWLEDGMENTS
This research was supported by the Daegu University
Research Grant, 2011.
REFERENCES

[1] A. Javey, Q. Wang, W. Kim, H. Dai, "Advancements in complementary carbon nanotube field-effect transistors," in Proc. 2003 IEEE Int. Electron Devices Meeting, pp. 31.2.1-31.2.4.
[2] J. Deng, H.-S. Philip Wong, "A compact SPICE model for carbon-nanotube field-effect transistors including nonidealities and its application," IEEE Trans. on Electron Devices, vol. 54, no. 12, Dec. 2007.
[3] A. Rahman, J. Guo, S. Datta, M. S. Lundstrom, "Theory of ballistic nanotransistors," IEEE Trans. Electron Devices, vol. 50, no. 10, pp. 1853-1864, Sept. 2003.
[4] A. Akturk, G. Pennington, N. Goldsman, A. Wickenden, "Electron transport and velocity oscillations in a carbon nanotube," IEEE Trans. Nanotechnol., vol. 6, no. 4, pp. 469-474, July 2007.
[5] H. Hashempour, F. Lombardi, "Device model for ballistic CNFETs using the first conducting band," IEEE Design and Test of Computers, vol. 25, no. 2, pp. 178-186, March-April 2008.
[6] Y. Lin, J. Appenzeller, J. Knoch, P. Avouris, "High-performance carbon nanotube field-effect transistor with tunable polarities," IEEE Trans. Nanotechnol., vol. 4, no. 5, pp. 481-489, Sept. 2005.
[7] N. Patil, A. Lin, J. Zhang, H.-S. Philip Wong, S. Mitra, "Digital VLSI logic technology using carbon nanotube FETs: frequently asked questions," in Proc. 2009 IEEE Design Automation Conf., pp. 304-309, July 26-31.
[8] M. H. B. Jamaa, D. Atienza, Y. Leblebici, G. D. Micheli, "Programmable logic circuits based on ambipolar CNFET," Proc. of IEEE Design Automation Conf., pp. 339-340, June 2008.
[9] B. Liu, "Reconfigurable double gate carbon nanotube field effect transistor based nano-electronic architecture," Proc. of Asia and South Pacific Design Automation Conf., pp. 853-858, 2009.
[10] J. Liu, I. O'Connor, D. Navarro, F. Gaffiot, "Design of a novel CNTFET-based reconfigurable logic gate," Proc. of IEEE Computer Society Annual Symposium on VLSI (ISVLSI '07), pp. 285-290, 2007.
[11] J. Liu, I. O'Connor, D. Navarro, F. Gaffiot, "Novel CNTFET-based reconfigurable logic gate design," Proc. of IEEE Design Automation Conf., pp. 276-277, 2007.
Genetic Algorithm based Pre-Training
for Deep Neural Network
Hongsub An 1, Hyeon-min Shim 2, Sangmin Lee 1,2
1 – Department of Electronic Engineering, Inha University, Incheon, Korea
2 – Institute for Information and Electronics Research, Inha University, Incheon, Korea
[email protected], [email protected], [email protected]
Abstract—In this paper, a novel improved pre-training algorithm based on a genetic algorithm (GA) is presented. The
algorithm is used to improve the classification accuracy of deep neural networks (DNNs) by searching optimal network
initialization to select a dominant feature extractor. The proposed algorithm comprises two procedures. The first
procedure pre-trains two individual networks using restricted Boltzmann machines (RBMs), and the second procedure
merges the two pre-trained networks using crossover and mutation of the GA. To evaluate performance of the proposed
algorithm, we conduct experiments for classification accuracy in four networks. As a result, the proposed algorithm has
a lower error rate than the DBNs.
Keywords-Deep Neural Network, Deep Belief Network, Genetic Algorithm
INTRODUCTION
Deep belief networks (DBNs) [1] are a powerful hierarchical generative model for learning compact representations of high-dimensional data. DBNs are neural networks consisting of a number of layers of restricted Boltzmann machines (RBMs) that are trained in a greedy layer-wise manner. The RBM layers are trained with an unsupervised learning method to induce abstract representations of the inputs in subsequent layers [2]. This greedy layer-wise procedure facilitates supervised training of deep networks. Consequently, compared to traditional training methods for deep models, such as the multi-layer perceptron (MLP), a DBN can prevent over-fitting by using RBMs as a pre-training method.
In this paper, we present an improved pre-training algorithm based on a genetic algorithm (GA) for improving the performance of neural networks. The GA is a heuristic search method that mimics the process of natural selection. This heuristic is routinely used to generate useful solutions for function optimization and to efficiently find a nearly global optimum in large or complex spaces [3]. In biology, genetic material consists of DNA, which forms a chromosome. Individuals of the next generation are created through crossover, the partial coupling of chromosomes, and chromosomes may be slightly modified by mutation. Individuals selectively flourish depending on their degree of adaptation to the environment. This phenomenon is applied in the proposed algorithm to optimize network initialization for selecting a dominant feature extractor.
PROPOSED ALGORITHM

The network's input data are divided into two datasets: the training set ($S_{training}$), used for training the networks, and the test set ($S_{test}$), which evaluates the performance of the networks in the test phase. It should be noted that $S_{test}$ cannot be used in the learning phase. In the proposed algorithm, $S_{training}$ is divided into two subsets ($S_{training1}$ and $S_{training2}$). A validation dataset ($S_{validation}$) is also required to evaluate offspring performance.

The proposed algorithm is used to identify optimal features for the purpose of increasing the classification accuracy. First, $S_{training}$ is divided into two subsets. At this point, the two subsets should have some common data. Subsequently, the two subsets are pre-trained using their corresponding RBMs. Each RBM has the same network structure; however, they have different weights and biases because they are trained using different training datasets. Thus, we denote these networks as $Net_1^0$ and $Net_2^0$, where the subscripts distinguish the networks and the superscripts indicate the progress index of a generation in the GA.

The weights and biases of the networks that are trained by RBMs using the split training datasets are used as chromosomes in the merge phase of the GA. Moreover, crossover and mutation occur in this phase. The weights and biases used as chromosomes in the crossover process are composed of one matrix between each layer of the neural network. The first matrix column corresponds to the biases, and the other columns are the weights. Thus, the bias and weights between the lower-layer neurons and a single upper-layer neuron are represented as one row of the matrix. In previous works, matrix elements corresponded to specific chromosomes [4], [5]. However, this is no longer suitable
because the characteristic of neural networks where one row
of the weight matrix acts as a filter is corrupted. Therefore,
each matrix row should be used as a chromosome. After
determining the type of chromosome, the crossover
operation is required. From among various crossover
methods, we chose to use uniform crossover because it is
appropriate for complex chromosome crossover. In this way,
the dominant and recessive characteristics are implemented
randomly, as follows:
$\forall i,\quad w_{c,i}^{m+1} = \begin{cases} w_{1,i}^{m} & \text{if } rand(i) \ge f_r \\ w_{2,i}^{m} & \text{otherwise} \end{cases} \qquad (4)$

where $i$ is the row index of the chromosome matrix and $m$ is the generation index of the GA. Here, $w_{1,i}^{m}$ and $w_{2,i}^{m}$ denote the parent chromosomes, and $w_{c,i}^{m+1}$ denotes the chromosome inherited from the parents. The population index is $c$, and $f_r$ is the fraction ratio, a manually configurable constant. In the crossover procedure, mutation occurs with a probability of $p(m)$; mutation is implemented by setting a portion of the chromosome to zero. Crossover can be executed one or several times, in a partial region or in the entire region. In this study, crossover was performed once on each offspring, and the crossover rate was fixed to 0.7.
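A minimal NumPy sketch of the row-wise uniform crossover of Eq. (4), with the zeroing mutation described above, might look as follows (illustrative only; the paper gives no code, and zeroing whole randomly chosen rows is one reading of "setting a portion of the chromosome to zero"):

import numpy as np

def uniform_crossover(w1, w2, fr=0.5, p_mut=0.002, rng=None):
    # w1, w2: parent matrices for one layer; column 0 holds the biases,
    # so each row is one chromosome (the filter of one upper-layer neuron).
    rng = np.random.default_rng() if rng is None else rng
    child = w2.copy()
    from_parent1 = rng.random(w1.shape[0]) >= fr  # one draw per row, Eq. (4)
    child[from_parent1] = w1[from_parent1]
    # Mutation with probability p_mut: reset a row's chromosome to zero.
    mutated = rng.random(child.shape[0]) < p_mut
    child[mutated] = 0.0
    return child

# Example: merge two 100 x 785 layer matrices (784 weights + 1 bias per neuron).
w1, w2 = np.ones((100, 785)), -np.ones((100, 785))
child = uniform_crossover(w1, w2, fr=0.5, p_mut=0.002)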
The performance evaluation of the offspring created after the merge phase is conducted using the validation set. Subsequently, the two fittest offspring, $Net_1^{m+1}$ and $Net_2^{m+1}$, are selected, where the subscript is the ranking in the validation test; both networks are used to compose the next generation in the merge phase.
After iterating for M generations, the fittest offspring $Net_1^N$ ($N \le M$) is finally selected. This offspring network is composed of the bias and weight matrices that are used to create the initial parameters of the networks for feature selection. However, these values are not yet optimized. Therefore, fine-tuning based on back-propagation (BP) is required, and the $S_{training}$ dataset is used for fine-tuning.
EXPERIMENTS
The GA parameters used in the proposed algorithm are as follows: 100 offspring, a 0.7 crossover rate, a 0.002 mutation probability, 500 generations, and a 0.5 fraction ratio. All experiments used the MNIST database of handwritten digits from zero to nine, which contains 60,000 training images and 10,000 test images [6].
We performed a number of experiments to study the classification accuracy of the proposed algorithm on several handwritten digit recognition tasks in various networks (784-100-100-10, 784-200-200-10, 784-100-100-100-10, 784-200-200-200-10), and compared the accuracy to that of DBNs with identical network architectures and meta-parameters. All 60,000 MNIST training images were used to train the original DBNs. However, when using the proposed algorithm, the MNIST training images were separated into two sets, $S_{training1}$ and $S_{training2}$, each with 30,000 different training images. Subsequently, 10,000 training images were randomly selected from each training set and added to the other training set. Consequently, each training set had 40,000 images. To validate the offspring, the $S_{validation}$ set was required; for this, 6,000 images were randomly selected from the 60,000 training images.

CONCLUSION

In this paper, we have presented an improved pre-training algorithm for improving classification accuracy. The devised approach uses a GA-based feature extractor selection algorithm for detecting optimized initial parameters of DNNs.

The trained networks with different training sets have different feature extractors. In order to find a more suitable combination of feature extractors for the entire training dataset, the proposed algorithm merged these different networks using the GA. As a result of the combination of feature extractors using the GA, the network initialization was optimized, making it possible to extract the dominant features during the learning procedure for DNNs. As the results indicated, the initial error rates decreased compared with those of the RBMs, and the network performance improved.

In this study, we found a possibility for the proposed algorithm to lower initial error rates and improve network performance. Furthermore, the proposed algorithm can be used as a base algorithm for distributed networks and as a retraining solution for additional datasets or class data.

ACKNOWLEDGMENT

This work was supported by the Basic Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2010-0020163) and by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the C-ITRC (Convergence Information Technology Research Center) (IITP-2015-H8601-15-1003).

REFERENCES

G. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.
R. B. Palm, "Prediction as a candidate for learning deep hierarchical models of data," Technical University of Denmark, 2012.
C. De Stefano, F. Fontanella, C. Marrocco, and A. Scotto di Freca, "A GA-based feature selection approach with an application to handwritten character recognition," Pattern Recognition Letters, vol. 35, pp. 130–141, 2014.
J. H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. U Michigan Press, 1975.
D. E. Goldberg and J. H. Holland, "Genetic algorithms and machine learning," Machine Learning, vol. 3, no. 2, pp. 95–99, 1988.
Y. LeCun and C. Cortes, "The MNIST database of handwritten digits," 1998.
Improved Object Segmentation Using Modified GrowCut
GaOn Kim, GangSeong Lee, YoungSoo Park, YeongPyo Hong, SangHun Lee
GaOn Kim - dep. Plasmabiodisplay of Kwangwoon University, Seoul, Republic of Korea
[email protected]
GangSeong Lee - dept. General Education of Kwangwoon University, Seoul, Republic of Korea
[email protected]
YoungSoo Park - dept. General Education of Kwangwoon University, Seoul, Republic of Korea
[email protected]
YeongPyo Hong - dept. Hospital Management of International University, Jinju, Republic of Korea
[email protected]
SangHun Lee - dept. General Education of Kwangwoon University, Seoul, Republic of Korea
[email protected]
Abstract—This paper presents a modified GrowCut for improved object segmentation using morphology processing and a bilateral filter. The proposed method uses the erosion operation to remove noise and a bilateral filter to preserve the outlines and edges of the image before GrowCut is applied. This procedure improves object segmentation performance in many circumstances.
Keywords- Morphology processing; Erosion operator; Bilateral filter; GrowCut
INTRODUCTION

In image processing, object segmentation is an important process for identifying objects from the background. GrowCut is one of the segmentation algorithms, and it involves human interaction: the user draws some strokes inside the object and outside the object, and the strokes then grow to separate the object from the background. This gives good results in many cases, but it has limits when detecting objects in complex images.

In this paper, the erosion operation is applied to remove noise, and a bilateral filter is applied to preserve edges, in order to overcome the weakness of GrowCut in dealing with complex images. In this way, object segmentation results are improved compared to the standard GrowCut.
RELATED RESEARCH

Morphology processing
Morphology processing is a collection of operations related to the shape of features in an image. Erosion reduces the shape of objects, and dilation enlarges it. An image is viewed as a subset of the integer grid $\mathbb{Z}^2$, and the erosion of the binary image $A$ by the structuring element $B$ is defined by:

$A \ominus B = \{ z \mid (B)_z \subseteq A \} \qquad (1)$

Fig. 1 shows the process of erosion: (a) input image, (b) structural components, (c) erosion, (d) result.

Bilateral filter
The bilateral filter is an edge-preserving, noise-reducing smoothing filter for images. The intensity value at each pixel in an image is replaced by a weighted average of intensity values from nearby pixels:
$Y(m,n) = \sum_{l=-N}^{N} \sum_{k=-N}^{N} H(m,n;\,l,k)\, X(l,k) \qquad (2)$

where $Y(m,n)$ is the filtered image, $X(m,n)$ is the original image, and $H(m,n;\,l,k)$ is a non-linear combination between pixel $(l,k)$ and the central pixel $(m,n)$.
Proposed method
The proposed procedure for object segmentation is shown in Fig. 2: a noise-reduction process, followed by edge-preserving bilateral filtering, before GrowCut is applied.

Fig. 2. Flow chart of the proposed method

Erosion operation for noise reduction
To reduce noise, the following erosion operation is applied:

$(I \ominus S_n)(x, y) = \min\{ I(x+l,\, y+m) \mid (l, m) \in S_n \} \qquad (3)$

where $I(x, y)$ is the input image and $S_n(l, m)$ is a multi-scale structuring element. This operation removes small noise, depending on the size of the filter and the number of operations applied. Fig. 3 shows the result of the erosion operation, which removed the background white spots.

Edge-preserving bilateral filtering
The edge-preserving and noise-reducing bilateral filter is defined as follows:

$w_{m,n} = \sum_{l=m-N}^{m+N} \sum_{k=n-N}^{n+N} \exp\!\left(-\frac{(l-m)^2 + (k-n)^2}{2\sigma_s^2}\right) \exp\!\left(-\frac{\|I(l,k) - I(m,n)\|^2}{2\sigma_c^2}\right) \qquad (4)$

where $\sigma_s$ and $\sigma_c$ are the standard deviations of the spatial filter and the color filter, expressed as Gaussian functions.
Improved GrowCut procedure
GrowCut is applied to the noise-reduced, edge-preserved image obtained from the morphology operation and the bilateral filter. The neighborhood system $N$ of GrowCut is either the von Neumann neighborhood:

$N(p) = \{ q \in \mathbb{Z}^n : \|p - q\|_1 = \sum_{i=1}^{n} |p_i - q_i| = 1 \} \qquad (5)$
or the Moore neighborhood:

$N(p) = \{ q \in \mathbb{Z}^n : \|p - q\|_{\infty} = \max_{i=1,\dots,n} |p_i - q_i| = 1 \} \qquad (6)$

Eq. (7) gives the object segmentation function of the improved GrowCut:

$MG(x) = 1 - \frac{x}{\max \|\vec{C}\|_2} \qquad (7)$

where $MG$ is a monotonically decreasing function with range $[0, 1]$.
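The preprocessing pipeline above (erosion for noise removal, then edge-preserving bilateral filtering before GrowCut) can be sketched with OpenCV as follows; the kernel size and filter parameters are illustrative, not the paper's values:

import cv2
import numpy as np

def preprocess_for_growcut(image_bgr, erode_ksize=3, erode_iters=1,
                           d=9, sigma_color=75, sigma_space=75):
    # Erode to remove small bright noise (Eq. 3), shrinking white spots.
    kernel = np.ones((erode_ksize, erode_ksize), np.uint8)
    eroded = cv2.erode(image_bgr, kernel, iterations=erode_iters)
    # Bilateral filter: a spatial Gaussian weighted by a range Gaussian
    # (Eq. 4), smoothing regions while preserving object edges.
    return cv2.bilateralFilter(eroded, d, sigma_color, sigma_space)

img = cv2.imread("input.jpg")           # path is illustrative
prepared = preprocess_for_growcut(img)  # GrowCut strokes are then grown on this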
The result of the proposed procedure: (a) input image, (b) result image.
EXPERIMENT

Experiments were performed using images of animals, plants, etc. The proposed algorithm was compared with the standard GrowCut algorithm, and some results are shown in Fig. 6. Fig. 6 shows the input image (a), the result of standard GrowCut (b), and the result of the proposed method (c), which shows improved object segmentation performance.

Fig. 6. Experiment images: (a) input image, (b) GrowCut, (c) proposed method
CONCLUSIONS

A modified GrowCut is presented for improved object segmentation using morphology processing and a bilateral filter. The proposed method uses the erosion operation to reduce noise and a bilateral filter to preserve the edges of the image before GrowCut is applied. This procedure showed improved performance on complex and noisy images compared to the standard GrowCut. Further research is necessary for detecting objects in motion pictures.
REFERENCES

SungKap Lee, YoungSoo Park, GangSeong Lee, JongYong Lee, SangHun Lee, "An Automatic Object Extraction Method Using Color Features of Object and Background in Image," The Journal of Digital Policy & Management, vol. 11, no. 12, pp. 459-465, 2013.
Huang Yong, Tao Yin, Liu Huijuan, "Noise Image Restoration Based on Mathematical Morphology," Information Science and Engineering (ICISE), pp. 3840-3843, 2010.
He Youquan, Qiu Hanxing, Wang Jian, Zhang Wei, Xie Jianfang, "Studying of Road Crack Image Detection Method Based on the Mathematical Morphology," Image and Signal Processing (CISP), vol. 2, pp. 967-969, 2010.
Qiao Yang, Maier, A., Maass, N., Hornegger, J., "Edge-preserving bilateral filtering for images containing dense objects in CT," Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), pp. 1-5, 2013.
Hegadi, R.S., Pediredla, A.K., Seelamantula, C.S., "Bilateral smoothing of gradient vector field and application to image segmentation," Image Processing (ICIP), pp. 317-320, 2012.
Ghosh, P., Antani, S.K., Long, L.R., Thoma, G.R., "Unsupervised GrowCut: Cellular Automata-Based Medical Image Segmentation," Healthcare Informatics, Imaging and Systems Biology (HISB), pp. 40-47, 2011.
Katsigiannis, S., Zacharia, E., Maroulis, D., "Grow-Cut Based Automatic cDNA Microarray Image Segmentation," IEEE, vol. 14, no. 1, pp. 138-145, 2015.
Depth Map Generation using HSV Color Transformation
JiHoon Kim, GangSeong Lee, YoungSoo Park, YeongPyo Hong, SangHun Lee
JiHoon Kim – dept. Plasmabiodisplay of Kwangwoon University, Seoul, Republic of Korea
[email protected]
GangSeong Lee – dept. General Education of Kwangwoon University, Seoul, Republic of Korea
[email protected]
YoungSoo Park – dept. General Education of Kwangwoon University, Seoul, Republic of Korea
[email protected]
YeongPyo Hong – dept. Hospital Management of International University, Jinju, Republic of Korea
[email protected]
SangHun Lee - dept. General Education of Kwangwoon University, Seoul, Republic of Korea
[email protected]
Abstract—In this paper, a depth map generation method using HSV color transformation is proposed. The method segments objects from the background and generates initial depth information using HSV color transformation. The depth map is then expressed as a gray-scale image. The experiment showed that the important depth information could be extracted using HSV color transformation.
Keywords- clustering; image segmentation; HSV color transform; binary; depth map
INTRODUCTION
Humans detect the difference between the images projected onto the left and right eyes; this is called binocular disparity. Images projected through the left and right eyes are synthesized in the brain, which recognizes them as a 3D image. To create digitized 3D images, we can use a stereo camera to capture left and right images, which applies the principle of binocular disparity. Another way of creating a 3D image is to use an image editing tool, which takes a lot of effort and time. A third way to make a 3D image is to convert a 2D image using a 2D/3D conversion technique. The representative algorithm for the 2D/3D transformation is DIBR (Depth Image Based Rendering) [1,2,3]. In this paper, the 2D/3D transformation is performed by creating a depth map from the HSV color-transformed image after object segmentation is applied.
RELATED RESEARCH
Clustering
The clustering technique is the task of grouping a set of objects in such a way that objects in the same group are more similar to each other than to those in other groups (clusters). The technique is mainly divided into two types: supervised clustering and unsupervised clustering. Supervised learning is the machine learning task of inferring a function from labeled training data; in supervised learning the 'categories' are known, whereas in unsupervised learning they are not, and the learning process attempts to find appropriate 'categories'. The representative clustering technique is K-Means.
Figure: K-Means clustering (estimated number of clusters: 3).
Image Segmentation
Image segmentation is the process of partitioning a digital image into multiple segments. There are several levels of segmentation, such as pixel, block, and quadtree. Pixel-level segmentation uses contrast, RGB color, gradient of contrast, depth information, and motion vector features. Block-level segmentation considers the average and variance of pixel contrast. Figure 2 shows images explaining the various types of segmentation levels.
Figure 2. Image segmentation
PROPOSED METHOD
Figure 3 shows the flow chart of the proposed method.
Figure 3. Flow chart of the proposed method

Grabcut with K-means
To get a better result than standard Grabcut, the image is first vector quantized using K-means [4] and then Grabcut is applied. K-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, which serves as a prototype of the cluster. Given a set of observations x_i, K-means clustering aims to partition the n observations into k sets S = {S1, S2, …, Sk} so as to minimize the within-cluster sum of squares. The overall variance is as follows:

V = Σ_{i=1..k} Σ_{j∈S_i} |x_j − μ_i|²   Eq. (1)

where μ_i is the mean of the points in S_i.

K-means finds the k sets S minimizing the variance V. Initially, k centroids are placed in some way. The next step is to take each point belonging to a given data set and associate it with the nearest centroid. This procedure is repeated until no more changes occur. K-means is applied to the image, and then Grabcut is used to segment objects. Grabcut was proposed by Rother [6] and is an image segmentation method based on Graphcut. Grabcut generates a trimap T from the gray image as follows:

trimap T = {T_FK, T_BK, T_UK}   Eq. (4)

where T_FK is the object area, T_BK is the background, and T_UK is the unlabeled pixels. To find the distributions of objects and background, a Gaussian Mixture Model (GMM) is used. Grabcut constructs a graph using edges and minimizes the energy function to segment objects and background. The segmentation α̂ is obtained as follows:

α̂ = arg min_α E(α, k, θ, z)   Eq. (3)

where E(α, k, θ, z) is the energy function from the GMM, α is a variable for segmentation, k is the GMM component of a pixel, θ is the distribution of the GMM, and z is the image array.

Figure: object extraction image

HSV Color Transform
HSV color transformation is applied to the object area extracted in the previous step. The HSV color model is characterized by hue, saturation, and value. RGB-to-HSV conversion uses the following equations:

C_max = max(R′, G′, B′)
C_min = min(R′, G′, B′)
Δ = C_max − C_min

H, S, and V are expressed as follows:

H = 60° × (((G′ − B′)/Δ) mod 6),  if C_max = R′
H = 60° × (((B′ − R′)/Δ) + 2),   if C_max = G′
H = 60° × (((R′ − G′)/Δ) + 4),   if C_max = B′

S = 0 if Δ = 0;  S = Δ / C_max if Δ ≠ 0   Eq. (5)

V = C_max   Eq. (2)

Figure 5 shows the initial depth map after applying the HSV color conversion.
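The RGB-to-HSV equations above translate directly into code. The following is a minimal sketch under the assumption that R, G, and B are 8-bit values in [0, 255]; the function name is illustrative, not from the paper.

```python
# Minimal sketch of the RGB-to-HSV equations above; assumes 8-bit inputs.
def rgb_to_hsv(r, g, b):
    rp, gp, bp = r / 255.0, g / 255.0, b / 255.0      # R', G', B'
    c_max, c_min = max(rp, gp, bp), min(rp, gp, bp)
    delta = c_max - c_min
    if delta == 0:
        h = 0.0                                       # hue undefined; use 0
    elif c_max == rp:
        h = 60 * (((gp - bp) / delta) % 6)
    elif c_max == gp:
        h = 60 * (((bp - rp) / delta) + 2)
    else:                                             # c_max == bp
        h = 60 * (((rp - gp) / delta) + 4)
    s = 0.0 if delta == 0 else delta / c_max
    v = c_max
    return h, s, v                                    # H in degrees, S and V in [0, 1]
```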
Binary and Background Assignment
The HSV color image can be used as initial depth information. By converting from HSV to a gray-scale image, we can get images like those in Figure 6.
Figure: image after applying the HSV color transform

Binarization represents a value as 0 or 1. However, when expressed as 0 or 1, the result is difficult for the human eye to identify; therefore, image binarization is expressed by conversion to 0 or 255. The gray-scale image is binarized using the threshold value T as in Eq. (6):

g(x, y) = 0 if f(x, y) ≤ T;  g(x, y) = 255 if f(x, y) > T   Eq. (6)

where f(x, y) is the input image, g(x, y) is the output image, and T is the threshold. The black-and-white image is inverted, and Grabcut is applied to generate the depth map. Figure 7 shows an example of the resulting depth map.
Figure: binarized image and resulting depth map

EXPERIMENT
The experiment was conducted in the Windows 7 operating system environment running Visual Studio 2010. The proposed method was tested using images of nature and buildings, and depth maps were generated. Figure 8 shows some results of the depth maps generated by the proposed method. The depth maps generated by the proposed method show regions close to the viewer in near-white colors, relatively distant regions in darker colors, and the background area in black.
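The binarization in Eq. (6) and the subsequent inversion can be sketched with OpenCV as follows; the threshold value is illustrative, since the paper does not state the value of T.

```python
# Minimal sketch of Eq. (6) plus the inversion step, assuming OpenCV.
import cv2

def binarize_and_invert(gray, t=128):
    # Pixels above t become 255, all others 0 (Eq. (6)).
    _, binary = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_not(binary)  # invert black and white before Grabcut
```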
CONCLUSIONS
In this paper, a depth map generation method using HSV color transformation is proposed. The method segments objects from the background and generates an initial depth map using HSV color transformation; a final depth map is then generated by performing binarization. Using this method, 2D images can be converted to 3D, which takes less time than using image editing tools and is less expensive than taking pictures with a stereo camera. The proposed method could be used to generate depth maps for 2D-to-3D conversion.
REFERENCES
HyeonHo Han, GangSeong Lee, and SangHun Lee, "A Study on 2D/3D Image Conversion Method using Create Depth Map," Journal of the Korea Academia-Industrial Cooperation Society, vol. 12, no. 4, pp. 1897–1903, 2011.
SungHo Han, YoSup Kim, JongYong Lee and SangHun Lee, "2D/3D conversion method using depth map based on haze and relative height cue," The Journal of Digital Policy & Management, vol. 10, no. 9, pp. 351–356, 2012.
Youngjin Choi, Run Chi and Hyoung Joong Kim, "Enhancing Extracting Object Information in Defocus Depth Map for Single Image," The Institute of Electronics and Information Engineers, vol. 2014, no. 6, pp. 1420–1423, 2014.
Lui Feng, Liu Xiaoyu and Chen Yi, "An efficient detection method for rare colored capsule based on RGB and HSV color space," Granular Computing (GrC), 2014 IEEE International Conference on, pp. 175–178, 2014.
Juntao Wang and Xiaolong Su, "An improved K-Means clustering algorithm," Communication Software and Networks (ICCSN), 2011 IEEE 3rd International Conference on, pp. 44–46, 2011.
Rother, C., Kolmogorov, V. and Blake, A., "GrabCut – Interactive Foreground Extraction using Iterated Graph Cuts," ACM Transactions on Graphics (TOG) – Proceedings of ACM SIGGRAPH 2004, vol. 23, pp. 309–314, 2004.
Find Sentiment And Target Word Pair Model
Wonhui Yu, Heuiseok Lim
Wonhui Yu – dept. Computer Science Education, Seoul, Korea
[email protected]
Heuiseok Lim – dept. Computer Science Education, Seoul, Korea
[email protected]
Abstract—Finding sentiment-target word pairs is an important research issue in sentiment mining studies. Particularly, in the case of the Korean language, because the predicate appears at the very end, it is not easy to find the exact word pairs without identifying the syntactic structure of the sentence. In this study we propose a model that parses sentence structures and extracts sentiment-target word pairs from the parse tree. As a result of testing with data from 4,000 movie reviews, the applicable model showed 93% accuracy and a 75% recall rate, higher accuracy than the models it was compared against. However, improvements in the recall rate and reduction of computational costs are required in future studies.
Keywords- sentiment; target; parser; finding
I. INTRODUCTION
The question "What thoughts do other people have?" has important implications in the decision-making process. The thoughts of others have many uses, from the consumption decisions of an individual on a small scale to the establishment of a strategy for a company, or a nation, on a large scale. Traditionally, such information has been spread by word of mouth, and on the Web it has been propagated via forms such as blogs, Twitter, Facebook, etc. This kind of information is clearly seen in research that observed the consumption patterns of more than 2,000 adults [1,2].
The results of the analysis of the consumption patterns of adults can be summarized as follows:
- 81% of Internet users conduct their consumption activities on the Internet.
- 20% conduct consumption activities on the Internet daily.
- Reviews written by opinion leaders influence the consumer activities of others at rates of 73-87%.
- 20-99% of consumers prefer goods that are rated five stars to goods rated four stars.
- 32% of the overall grades of merchandise are decided online via an expert-system method, and about 30% or so are determined by online comments, product reviews, etc.

Studies that extract other people's thoughts from online documents are referred to as opinion mining, which is an operation that deciphers and extracts subjective information or opinions from the source material. The important research issues of opinion mining can be divided into two types: one is making dictionaries with words tagged with opinions, and the other is finding the target words that opinions refer to.

In opinion mining research that finds target words, the predicate and the object that represent attributes and sentiments have significant meaning. However, since the predicate carries different meanings depending on the attribute part of the sentence, it needs to be handled together with the attribute part. For instance, looking at the sentence "This cellphone's size is large" and the sentence "This car has a trunk that's large," the predicate "is large" can be thought of as negative in the former sentence but positive in the latter. Here, "is large" has a dependency on the words "size" and "trunk." As such, in order to determine whether "is large" is positive or negative, the part on which it depends should be considered together.

Existing research on finding target words has mostly used PMI methods and part-of-speech tagging methods, but neither approach showed high accuracy. The PMI approach does not consider the word order of the sentence, and part-of-speech tagging methods cannot determine the exact relationships between words, so the analysis rates were low. In more detail, the general PMI formula uses only the relationship between two words, so for high-frequency words in a sentence there is great potential for incorrect analysis.
Regarding typical sentiment mining studies to date, there have been a number of studies in which the target word is found by the PMI method or by applying rules after part-of-speech tagging, both when the target word is predetermined and when it is not [3-8]. In this paper, in order to accurately find the parts of a sentence that can be the target word and the sentiment word, a statistical model is proposed that analyzes the sentence structure and effectively extracts the target-sentiment word pair from the analyzed structure. More specifically, the problem of figuring out up to what point the sentiment word extends and up to what point the target word extends in a sentence where nouns appear in succession was addressed through parsing. Both the process of analyzing the syntactic structure of a sentence and the process of extracting successive components, such as compound words, from a phrase structure tree were solved through statistical methods. The applicable methods are explained in detail in Section 3, and the results of the proposed methods are explained in Section 4.
II. RELATED WORK
In order to find the target words, B. Liu used a pattern in which various commas, periods, semi-colons, hyphens, "&", "and", "but", etc. appeared in review sentences summarized by users [9,10]. An example of the review sentences is shown in Fig. 3, as well as the Pros and Cons for the item in the example. Fig. 4 shows how the review sentences were analyzed.
Fig. 3 An example review
Fig. 4 The Pros in Fig. 3 can be separated into three segments

By using the Web-PMI method, Popescu and Etzioni attempted to find the target words. The typical PMI method is as in Equation (1). The P(w) calculation counts the number of documents containing the word w, as in Equation (2). When Equation (2) is substituted into Equation (1), it becomes Equation (3), which is called Web-PMI [11].

PMI(w1, w2) = log [ p(w1, w2) / (p(w1) p(w2)) ]   (1)

p(w) = (1/N) hits(w)   (2)

Web-PMI(w1, w2) = log [ (1/N) hits(w1 AND w2) / ((1/N) hits(w1) · (1/N) hits(w2)) ]   (3)

Here, w1 and w2 are words; w1 is used as a candidate element for identification and w2 is used as an identifier. By confirming the co-occurrence information between w1 and w2, an attempt was made to determine whether or not w1 is a target word. The elements of the sentence used as identifiers are patterns between structured morphemes and elements in WordNet.

III. PROPOSED MODEL
The model extracts the sentiment-target word pairs appearing in a sentence by using parsing and statistical methods. The model is comprised of two parts: one that parses sentences in the input documents and one that extracts word pairs. The part that parses the sentence structure consists of a morphological analyzer, a part-of-speech tagger, and a syntactic structure analyzer, whereas the part that extracts word pairs consists of a sentiment word extractor and a target word extractor.

A. Sentence Structure Analysis
The sentence structure analysis part of the proposed model is comprised of part-of-speech tagging and a parser. First, the part-of-speech tagging model uses a general probability model similar to Equation (4), where Γ is the part-of-speech tagging function of W, M represents the morpheme candidates, T the part-of-speech candidates, and W the words of the sentence.

Γ(W) ≝ argmax_{M,T} P(M, T | W)   (4)

By using the applicable model, the parts of speech of neutral words are attached properly [13-17]. That is, in the sentence "The sailor dogs the barmaid," the word "dogs" is determined to be not the frequently used noun form but a verb form, and the appropriate part of speech is attached.

Similar to the part-of-speech tagging, the parser model commonly uses a Probabilistic Context-Free Grammar model [18-21], which can be expressed as shown in Equation (5). T_best is a function that selects the syntax structure with the highest generation probability from the syntax structure trees, T represents the parse tree, G is the grammar rules, t is the sentence, rule_i is the i-th grammar rule in the parse tree, and h_i is the history of appearance of the i-th grammar rule.

T_best(G, t_1..n) = argmax_T P(T | G, t_1..n) = argmax_T ∏_i P(rule_i | G, t_1..n, h_i)   (5)
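Equations (1)-(3) translate directly into a few lines of code. The following is a minimal sketch of the Web-PMI calculation in Equation (3); the hit counts are assumed to come from some document-retrieval backend, which is outside the scope of the paper.

```python
# Minimal sketch of Eq. (3): Web-PMI from document hit counts.
import math

def web_pmi(hits_w1, hits_w2, hits_both, n_docs):
    # Each probability is a normalized hit count, as in Eq. (2).
    p_w1 = hits_w1 / n_docs
    p_w2 = hits_w2 / n_docs
    p_joint = hits_both / n_docs          # hits(w1 AND w2) / N
    return math.log(p_joint / (p_w1 * p_w2))
```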
B. Extraction of Word Pairs
For input sentences in parse-tree form, the extraction of the sentiment word and the target word is done in two processes. The extraction of the sentiment word finds the verb or adjective that belongs to the top-level verb phrase of the tree produced at the sentence structure analysis level, whereas the extraction of the target word finds the noun of the noun phrase that is most dependent on the found sentiment word. This can be represented with an equation, as shown in Equation (6).

WordPair(W) = argmax_{S,A} P(S, A | T)   (6)

In Equation (6), S represents the sentiment words, A the target words, and T the parse tree.

Eventually, the extraction of the pair words means finding the S and A that have the highest probability values among the sentiment words (S) and target words (A) seen in the phrase-analyzed parse tree (T). Equation (6) can be expressed as Equation (7), where each of the elements needed in Equation (7) is calculated by using Equation (8) and Equation (9).

argmax_{S,A} P(S, A | T) = argmax_{S,A} P(S | T) P(A | S, T)   (7)

P(S | T) = argmax_i p(d_i | node_1..n)   (8)

Here, node_i refers to each of the nodes forming the parse tree and d_i is the dependency assigned to each node. In Equation (8), the sentiment word extraction is calculated by extracting the verb phrase with the highest dependency among the nodes of the analyzed parse tree. The node with the highest dependency is usually the root node located at the topmost position.

P(A | s_i, T) = argmax_j P(d_sj | s_i, wd_1..n, wco_1..n, pco_1..n)   (9)

Here, wd is the distance information from the selected sentiment word, wco is the probability information of words that can appear together with the sentiment word, and pco is the probability information of the parts of speech that can appear together with the sentiment word. d_s is the dependency strength calculated from wd, wco, and pco.

IV. EXPERIMENT AND RESULTS
The data used in the experiment consisted of 4,000 movie reviews written in Korean; for the comparison with other studies, tests were also conducted in English. Functional words that appear in the Korean language were removed, and comparative experiments were conducted. The movie reviews used as input were crawled directly from reviews posted by users about actual movie openings, and the data were sampled at random.

For the experiments, the performances were compared by measuring accuracy and recall rates. To measure the accuracy and recall rates, two experts manually extracted sentiment word-target word pairs, and only the data extracted identically by both experts were used as evaluation data.

The evaluation data set can be separated into sentences with one sentiment word-target word pair, sentences with two pairs, and sentences with no pairs. The distribution of the sentences is shown in Table 1 below.

Table 1. Sentiment word-target word pair ratio of the evaluation data set
Number of occurrences of pairs    Ratio (%)
0                                 28.7
1                                 51.5
2                                 19.8

In addition, the number of words in the sentences of the evaluation data set varied from 3 up to 21 words; the distribution of words per sentence is shown in Table 2.

Table 2. Distribution of the number of words in a sentence in the evaluation data
Number of words in a sentence    Ratio (%)
3 or fewer                       4.3
4                                7.5
5                                12.7
6                                33.5
7                                21.3
8                                16.3
9 or more                        4.4

In order to conduct a comparative experiment using the same data, the method proposed in Long Jiang's model was implemented [24]. Table 3 shows the accuracy and recall rates of Long Jiang's model and the model proposed in this study.
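The factorization in Equation (7) suggests a simple search over candidate pairs. The sketch below scores each sentiment/target candidate pair by P(S|T) · P(A|S,T); the candidate lists and probability tables are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch of Eq. (7): pick the (sentiment, target) pair with the
# highest P(S|T) * P(A|S,T); the data structures are illustrative only.
def best_word_pair(sentiment_candidates, target_candidates,
                   p_s_given_t, p_a_given_s_t):
    best_pair, best_score = None, float("-inf")
    for s in sentiment_candidates:          # verbs/adjectives from the top VP
        for a in target_candidates:         # nouns from dependent NPs
            score = p_s_given_t[s] * p_a_given_s_t[(a, s)]
            if score > best_score:
                best_pair, best_score = (s, a), score
    return best_pair
```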
CONCLUSION
Because the word order of the Korean language typically places the predicate in the last part of the sentence, it is necessary to find the exact target word that the predicate describes in a sentence. In order to find accurate sentiment-target pairs, this study proposed a model that can reflect the characteristics of the syntactic structure of the Korean language. The proposed model finds, in structurally analyzed sentences, the words with a possibility of being sentiment words and the words with a possibility of being target words, by using statistical data. As a result of the experiment, 93% accuracy and a 75% recall rate were obtained on the test set. However, due to the large amount of computation, the high-cost parts of the model require additional research for further improvement, which is left for future studies. In addition, since the recall rate has not improved compared to other models, studies for improving this performance effectively are needed as well.
ACKNOWLEDGMENT
This work was supported by the ICT R&D program of MSIP/IITP [2015 (B0101-15-0340), Development of distribution and diffusion service technology through individual and collective intelligence to digital contents].

REFERENCES
[1] Study Conducted by comScore and The Kelsey Group. Online Consumer-Generated Reviews Have Significant Impact on Offline Purchase Behavior, November 29, 2007.
[2] John B. Horrigan, Associate Director. Internet users like the convenience but worry about the security of their financial information, February 13, 2008.
[3] Jaeseok Myung, Dongjoo Lee, Sang-goo Lee. A Korean Product Review Analysis System Using a Semi-Automatically Constructed Semantic Dictionary. Korean Institute of Information Scientists and Engineers, 2008, p392-403.
[4] Hanhoon Kang, Seong Joon Yoo, Dongil Han. Design and Implementation of System for Classifying Review of Product Attribute to Positive/Negative, Korean Institute of Information Scientists and Engineers, 2009 conference, p456-457.
[5] Jung-yeon Yang, Jaeseok Myung, Sang-goo Lee. A Sentiment Classification Method Using Context Information in Product Review Summarization, Korean Institute of Information Scientists and Engineers, 2009, p254-262.
[6] Minqing Hu, Bing Liu. Mining and summarizing customer reviews, KDD '04: Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, 2004, p168-177.
[7] Xiaowen Ding, Bing Liu, Philip S. Yu. A Holistic Lexicon-Based Approach to Opinion Mining, WSDM '08: Proceedings of the international conference on Web search and web data mining, 2008, p231-240.
[8] Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, Tiejun Zhao. Target-dependent Twitter Sentiment Classification, HLT '11: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, 2011, p151-160.
[9] B. Liu. Web Data Mining: Exploring Hyperlinks, Contents, and Usage Data. Springer, 2006.
[10] B. Liu, M. Hu, and J. Cheng. Opinion observer: Analyzing and comparing opinions on the web, Proceedings of WWW, 2005.
[11] A.-M. Popescu, O. Etzioni. Extracting product features and opinions from reviews, Proceedings of the Human Language Technology Conference and the Conference on Empirical Methods in Natural Language Processing, 2005, p339-346.
[12] P. Turney. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews, Proceedings of the Association for Computational Linguistics (ACL), pp. 417–424, 2002.
[13] N. Jindal, B. Liu. Mining comparative sentences and relations, Proceedings of AAAI, 2006, p1331-1336.
[14] DeRose, Steven J. Grammatical category disambiguation by statistical optimization. Computational Linguistics 14(1), 1988, p31-39.
[15] Kenneth Ward Church. A stochastic parts program and noun phrase parser for unrestricted text. ANLC '88: Proceedings of the second conference on Applied natural language processing. Association for Computational Linguistics, Stroudsburg, PA, 1988.
[16] Charniak, Eugene. Statistical Techniques for Natural Language Parsing, AI Magazine 18(4), 1997, p33-44.
[17] Hans van Halteren, Jakub Zavrel, Walter Daelemans. Improving Accuracy in NLP Through Combination of Machine Learning Systems. Computational Linguistics 27(2), 2001, p199-229.
[18] DeRose, Steven J. Stochastic Methods for Resolution of Grammatical Category Ambiguity in Inflected and Uninflected Languages. Ph.D. Dissertation. Providence, RI: Brown University Department of Cognitive and Linguistic Sciences, 1990.
[19] Jin-Dong Kim, Heui-Seok Lim, Hae-Chang Rim. Two-ply Hidden Markov Model: A Korean POS Tagging Model Based on Morpheme-Unit with Eojeol-Unit Context, International Journal of Computer Processing of Oriental Languages, Vol 12, 1998, p5-29.
[20] Booth, T. L. & Thompson, R. A. (1973), 'Applying Probability Measures to Abstract Languages', IEEE Transactions on Computers C-22(5), 442-450.
[21] Charniak, E. Statistical Techniques for Natural Language Parsing, AI Magazine 18(4), 1997, p33-44.
[22] Black, E.; Jelinek, F.; Lafferty, J. D.; Magerman, D. M.; Mercer, R. L. & Roukos, S. Towards History-Based Grammars: Using Richer Models for Probabilistic Parsing, in Lenhart K. Schubert, ed., Association for Computational Linguistics, 1993, p31-37.
[23] Charniak, E. Immediate-head parsing for language models, in Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL-2001), 2001.
[24] Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, Tiejun Zhao. Target-dependent Twitter Sentiment Classification, HLT '11: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, 2011, p151-160.
Novel Operation Scheme of Static Transfer Switches for Peak
Shedding
Chang-Hwan Kim, Sang-Bong Rhee
Department of Electrical Engineering, Yeungnam University, Gyeongbuk 712 749, Korea
[email protected]
Abstract—Recently, operating strategies using emergency generators have been pursued in order to handle demand response management. For a peak shedding strategy using an emergency generator, a thyristor-based static transfer switch (STS) should provide a continuous supply to a critical load through fast transfer between two sources. This paper proposes an STS system using the forced-commutation technique to prevent instantaneous voltage sag during the peak transfer process. The proposed method reduces the total transfer time to fulfill power quality requirements. The studies are performed using the electromagnetic transient program (EMTP) to confirm the effectiveness.
Keywords—static transfer switch (STS); peak shedding; forced-commutation; EMTP/ATPDraw
INTRODUCTION
Recently, electric power consumption has rapidly increased along with economic growth in Korea. The government is trying to increase the power reserve margin in order to handle the increasing electricity demand. Operating strategies using emergency generators are intended to address demand response management [1].
When an emergency generator is used, a fast transfer switching device is needed to provide the connected load with continuous power throughout the transfer process. Many devices based on power electronics technology have been applied in cases where a transfer device between dual power sources is needed. One of the most effective transfer devices is the static transfer switch (STS) based on thyristors [2].
However, the conventional STS prolongs the transfer process beyond a quarter cycle because of the naturally commutated thyristors. This characteristic implies a short-duration voltage sag; the STS system thus requires more than a quarter cycle to successfully complete the transfer process.
This paper proposes an operation scheme for the STS system using the forced-commutation technique to prevent instantaneous voltage sag. When the transfer process is conducted, the precharged capacitor of the forced-commutation circuit starts discharging, and the thyristor current is cut off immediately. The proposed STS system fulfills peak load shedding with improved power quality. The performance of the proposed STS system and case studies are evaluated using the electromagnetic transient program (EMTP)/ATPDraw.

CONVENTIONAL STS SYSTEM
The STS system consists of anti-parallel connected SCR thyristor switches and mechanical switches. The break-before-make (BBM) strategy is used for the transfer process, which means that the two power sources are never connected in parallel.
Fig. 1 shows the simulation waveforms of the three-phase critical load currents and the variation of the three-phase rms value of the critical load voltages during the peak transfer process of the STS.
Fig. 1. Waveforms of the STS during the peak transfer process. The top shows load currents and the bottom shows voltages (rms values).
The peak power is detected and the SCRs on the preferred feeder simultaneously receive the blocking signal at 88 ms. The SCRs of each phase are then cut off at their next zero crossings, and the SCRs on the alternate feeder are turned on at 94.5 ms. The results based on the variation of the three-phase rms value of the critical load voltages are presented in Table 1.

Table 1. The variation of Vrms value
Phase      Duration (Vrms < 0.9 pu)            Minimum magnitude
Phase A    91.69 ms ~ 109.75 ms (18.06 ms)     0.72
Phase B    91.69 ms ~ 109.75 ms (16.60 ms)     0.84
Phase C    None                                0.99
Instantaneous voltage sag is defined in IEEE Std 1159 [3] as a decrease to between 0.1 pu and 0.9 pu in rms voltage for a duration greater than 0.5 cycles but less than or equal to 1 minute at the power frequency. A comparison between the results in Table 1 and IEEE Std 1159 clearly shows that instantaneous voltage sag occurs in phases A and B. Hence, the forced-commutation method is required to provide uninterruptible power to the loads during the peak transfer time.
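Given the IEEE Std 1159 definition above, flagging a sag in simulation output reduces to checking how long the rms voltage stays inside the 0.1-0.9 pu band. The following is a minimal sketch under assumed inputs (a list of per-sample rms values in pu and an illustrative sample period); it is not part of the paper's EMTP model.

```python
# Minimal sketch: total time the rms voltage spends in the sag band
# (0.1-0.9 pu per IEEE Std 1159); the sample period is an assumed value.
def sag_duration_ms(vrms_pu, sample_period_ms=0.1):
    in_sag = sum(1 for v in vrms_pu if 0.1 <= v < 0.9)
    return in_sag * sample_period_ms
```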
IMPROVED STS SYSTEM
Fig. 2 shows the complete model of the STS system with the forced-commutation circuit using EMTP/ATPDraw. If peak demand exceeds the preset value, the STS controller commutates SCRpre.main1, and the charged capacitor of the forced-commutation circuit discharges through SCRpre.aux1; SCRpre.main1 is then reverse biased and turned off. After SCRpre.main1 and SCRpre.aux1 are turned off, SCRalt.main1 receives a firing signal from the STS controller. The improved STS operation modes are as below:
- Measurement state
- PTS standby state
- Forced-commutation state
- Peak transfer state
Fig. 2. Improved STS system

Fig. 3 shows the results of the transfer strategy of the proposed STS system. At 88.8 ms, SCRpre.main1 is turned off. SCRpre.aux1 instantaneously receives the turn-on signal from the controller, and the charged capacitor starts discharging. The load current commutates to the forced-commutation circuit, and SCRpre.aux1 is extinguished at 90.66 ms. The alternate source side SCRalt.aux1 of the STS system becomes forward biased at 90.68 ms.
Fig. 3. Waveforms of the proposed STS during the peak transfer process. From top to bottom: load currents, voltages (rms values), and charging voltages of the forced-commutation capacitors.

The results based on the variation of the three-phase rms value of the critical load voltages are presented in Table 2. It can be seen that the proposed STS system is able to maintain the load voltage at a specific level during the peak transfer process.

Table 2. The variation of Vrms value (proposed STS)
Phase      Duration (Vrms < 0.9 pu)    Minimum magnitude
Phase A    None                        0.91
Phase B    None                        0.98
Phase C    None                        1.02 (max)

CONCLUSION
This paper proposed an STS system utilizing the forced-commutation circuit as an operating strategy for peak shedding. The EMTP simulation results show that the proposed scheme is able to maintain the load voltage within the normal range during the peak transfer process.

ACKNOWLEDGMENT
The research was supported by Korea Electric Power Corporation Research Institute through Korea Electrical Engineering & Science Research Institute [grant number: R14-XA02-34].

REFERENCES
Jongkee Choi, Jihong Jung, Jihoon Lim, Samsun Ma, Kijun Park, "A Study on Utilization of Customer Owned Generators for Demand Side Management," KEPCO, 2012.
H. Mokhtari and M. Reza Iravani, "Effect of source phase difference on static transfer switch performance," IEEE Trans. Power Del., vol. 22, no. 2, pp. 1125–1131, Apr. 2007.
"IEEE Recommended Practice for Monitoring Electric Power Quality," IEEE Std. 1159-2009, 2009.
Detection of Incorrect Sitting Posture by
IMU Built-in Neckband
Hyeon-min Shim1, SangYong Ma2, and Sangmin Lee1,2
1 - Institute for Information and Electronic Research, Inha University, Incheon, Korea
[email protected]
2 - Department of Electronic Engineering, Inha University, Incheon, Korea
[email protected]
Abstract—In this paper, algorithms to detect incorrect sitting postures with PCA-SVM and LDA-SVM are proposed. Subjects wore the IMU built-in neckband, and the changes in the sensor values for the three postures were measured. As a result, the classification performance of the PCA-SVM algorithm is 0.956, and this method will be a useful algorithm for systems that prevent incorrect posture.
Keywords—IMU; posture; LDA; PCA; SVM
INTRODUCTION
Most modern people spend most of their time sitting on a chair to work or to study. However, incorrect posture is a cause of various physical disorders such as lumbar disc disease, scoliosis, and other spinal problems [1,2]. Despite the importance of solutions for posture correction while sitting, research on sitting posture has not yet been actively pursued.

In this study, an algorithm to detect incorrect sitting postures is proposed. To measure the angle of the posture, a device was developed: a 6-DOF (degrees of freedom) IMU (Inertial Measurement Unit) built-in neckband. Three types of data were measured: neutral position, smartphoning, and writing. To enhance the performance of the classifier, feature vectors are extracted by linear discriminant analysis (LDA) and principal component analysis (PCA). They are then classified by a support vector machine (SVM) [3-5].

METHOD
Data Acquisition and Feature Extraction
Three healthy male subjects participated in this study. They wore the IMU built-in neckband and maintained three types of sitting postures (neutral, smartphoning, and studying) for 10 minutes each. Then, three types of feature extraction algorithms were applied.

First, the 3-axis accelerometer RAW data are used as the feature vector set. This is one of the simplest ways to build a feature vector set; however, it is not an optimized method.

Second, transformed components resulting from the LDA process are used. LDA is a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier or for dimensionality reduction before later classification. A limitation of LDA is that it produces at most C − 1 feature projections, where C is the number of classes.

Last, transformed components resulting from the PCA process are used. PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components [3]. The number of principal components is less than or equal to the number of original variables.

Consider a data matrix X with column-wise zero empirical mean, where each of the n rows represents a different repetition of the experiment, and each of the p columns gives a particular kind of datum.

The PCA transformation is defined by a set of p-dimensional weight vectors w(k) = (w1, …, wp) that map each row vector x(i) of X to a new vector of principal component scores t(i) = (t1, …, tp)(i), given by

t_k(i) = x(i) · w(k)   (1)

in such a way that the individual variables of t considered over the data set successively inherit the maximum possible variance from X, with each loading vector w constrained to be a unit vector.

The first component w(1) has to satisfy

w(1) = argmax_{‖w‖=1} { Σ_i (x(i) · w)² }   (2)

It equivalently also satisfies

w(1) = argmax_{‖w‖=1} { wᵀXᵀXw / wᵀw }   (3)

The k-th component can be found by subtracting the first k − 1 principal components from X:

X̂_k = X − Σ_{s=1}^{k−1} X w(s) w(s)ᵀ   (4)

and then finding the weight vector which extracts the maximum variance from this new data matrix:

w(k) = argmax_{‖w‖=1} { wᵀX̂ᵀX̂w / wᵀw }   (5)
Figure 1. Maximum-margin hyperplane and margins for an
SVM trained with samples from two classes. Samples on the
margin are called the support vectors.
Classification
After the feature extraction procedure, an SVM classifier is used to classify the postures. In machine learning, SVMs are supervised learning models with associated learning algorithms that analyze data and recognize patterns. An SVM constructs a hyperplane or a set of hyperplanes in a high-dimensional space, which can be used for classification. A good separation is achieved by the hyperplane that has the largest distance to the nearest training data point of any class, since in general, the larger the margin, the lower the generalization error of the classifier. Figure 1 shows the concept of the SVM.
Implementation
To verify the algorithms, Python 2.7.6 is used [6]. Python provides various numerical library modules such as Numpy, Scipy, and Scikit-Learn [7-9]. Numpy and Scipy are used for general numerical methods such as vector and matrix calculations, data loading and storing, and so forth. Scikit-Learn provides the LDA, PCA, and SVM algorithms.
RESULTS
Figure 2 shows the feature maps of each algorithm. Figure 2(a) is the map of the raw data, and Figure 2(b) is the feature map extracted by LDA; in this case, the dimension of the LDA feature vector is 2D because the feature dimension is determined by the number of classes. Figure 2(c) is the feature map extracted by PCA. The distance between studying and smartphoning is quite close, and some samples overlap; therefore, it is hard to classify them by a hand-picked method. Table 1 compares the performance results of the classifiers. In the case of PCA-SVM, the mean success rate of the classifier is 0.956; in the case of RAW-SVM, the mean success rate is 0.933. The PCA-SVM result is therefore 0.023 better than the RAW-SVM result. LDA-SVM shows the least performance.
Figure 2. Feature maps of each algorithm: (a) RAW, (b) LDA, (c) PCA.
Table 1. Classifier performance comparison
                              PCA-SVM        LDA-SVM        RAW-SVM
Average accuracy E(p_h)       0.956          0.878          0.933
Standard deviation (σ_h)      0.394 × 10⁻²   0.410 × 10⁻²   0.606 × 10⁻²
DISCUSSION AND CONCLUSION
The PCA-SVM algorithm showed better performance than RAW-SVM and LDA-SVM. Because the distributions of each axis are different, the coefficients or priorities are configured differently. In many other classification problems LDA-SVM shows good performance; however, when the number of classes is small and the data sets overlap, as in this case, LDA-SVM provides poor performance. In PCA-SVM, the axes are transformed into more distinguishable directions by PCA; therefore, the distances between the distributions of each posture are widened and the performance is enhanced.
The PCA-SVM algorithm showed fine performance in classifying sitting postures. This algorithm is considered a proper method to be embedded in the neckband to prevent incorrect posture.

ACKNOWLEDGMENT
This work was partly supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2013R1A2A2A04014796), Korea, and the CITRC (Convergence Information Technology Research Center) (IITP-2015-H8601-15-1003).

REFERENCES
D. Falla, G. Jull, T. Russell, B. Vicenzino, and P. Hodges, "Effect of Neck Exercise on Sitting Posture in Patients with Chronic Neck Pain," Physical Therapy, vol. 87, no. 4, 2007, pp. 408–417.
O. Evans and K. Patterson, "Predictors of neck and shoulder pain in non-secretarial computer users," International Journal of Industrial Ergonomics, vol. 26, 2000, pp. 357–365.
S. Wold, K. Esbensen, P. Geladi, "Principal component analysis," Chemometrics and Intelligent Laboratory Systems, vol. 2, 1987, pp. 37–52.
McLachlan, G. J., "Discriminant Analysis and Statistical Pattern Recognition," Wiley Interscience, 2004.
C. Cortes and V. Vapnik, "Support vector networks," Machine Learning, vol. 20, no. 3, pp. 273–297.
https://www.python.org/
http://www.numpy.org/
http://scikit-learn.org/stable/
Modeling of a Learner Profiling System based on
Learner Characteristics
Hyesung Ji, *HeuiSeok Lim
Department of Computer Science Education. Korea University, Seoul, Korea
[email protected]
Department of Computer Science and Engineering. Korea University, Seoul, Korea
[email protected]
Abstract—We propose a learner profiling system based on learner characteristics. In this paper, we propose a real-time monitoring system for extracting learners' information and analyzing the characteristics of learners in learning environments. The extracted information on the characteristics of learners is automatically organized into personalized learner profiles through the learner profiling system. The contents of the learner profiles consist of the personal information of learners, the cognitive ability of learners, and teacher assessments.
Keywords—learner profile; characteristics analysis; real-time monitoring system; learner characteristics
INTRODUCTION
Information and Communications Technology (ICT) has been used in multiple areas and has been changing many areas of human life. In particular, diverse teaching and learning methods, learning applications, and learning contents have been developed through ICT in the field of education. Networks and learning tools are used in education to enhance learning effectiveness for learners. Typically, e-learning is the most famous service merging education and ICT in the education field. E-learning is a form of education based on ICT. Since the development of the World Wide Web (WWW), e-learning has continued to develop through various services such as cyber universities and specialized education programs that confer degrees through the completion of online lectures, thus generating a social issue [1]. Furthermore, various learning methods have been studied as the concept of e-learning has expanded to include mobile learning, ubiquitous learning, and smart learning. Moreover, e-learning is increasingly applied not only in traditional education but also in corporate education, informal learning, and lifelong education. The final goal of learning methods that use ICT is to provide effective, efficient, and personalized learning to learners without spatio-temporal limits.
Nevertheless, despite the many advantages of e-learning, various problems have surfaced in the e-learning field. The most significant is the lack of interaction between the teacher and the learner who uses IT devices as the school medium. One-sided delivery of learning contents, in which the learner is not considered in the learning process, can reduce learning effectiveness, and the learner can lose interest in learning. Learning is founded on the experience, culture, gender, cognitive ability, and so forth of the learners [2]. However, learning that uses existing IT devices does not take the learners' level of understanding and situation into consideration, so it may proceed as one-sided learning.

In this study, we propose a learner profiling system that can extract the characteristics of learners through a real-time learner monitoring system. The proposed method can automatically construct learners' profiles through learner characteristics analysis. In order to correctly understand the characteristics of learners, observation and analysis of learners during the learning process are needed. The learner profiling system is able to automatically generate profiles by automatically extracting and analyzing the characteristics of learners through real-time learner monitoring.

RELATED WORKS
The method with the greatest educational effectiveness is the personalized instruction method [3]. The personalized learning system method combines the individualized instruction method with ICT technology. Various types of individually modified learning systems include intelligent tutoring systems, personalized learning, and adaptive learning. A personalized learning system takes into consideration the learning level, attitude, method, and motive of learners to recompose the learning material accordingly and thus provide a service to the learners. A personalized learning system combines various ICT technologies. This chapter explains one part of personalized learning, namely learner profiles.

Learner profiles are important sources of information that not only contain basic information such as the name, age, and gender of learners, but also reveal the learning ability, characteristics, and condition of the learners. Research on learner profiles is not limited to studies that aim to manage learners' information; it is expanding to include studies that provide personalized learning considering the characteristics of learners. In [4], a system is proposed that translates learners' information into learner profiles using a Resource Description Framework (RDF) and recommends contents using users' profiles.
However, only simple and basic profile information was used to analyze learners' characteristics, and no method to extract the learners' characteristics was applied. In [5], the preference level of learners was predicted by extending the learning time with fuzzy theory, converting learning time into fuzzy numbers and assigning levels to the learning time. However, the learning time of learners could not explain the deep connection between learners' preferences and characteristics.

THE LEARNER PROFILING SYSTEM
3.1. An Overview of the System
In this paper, we propose a learner profiling system that can extract learners' characteristics through a real-time learner monitoring system. The proposed method can automatically construct learners' profiles through learner characteristics analysis. Figure 1 shows an organized diagram of the real-time monitoring and learner profile. The proposed method uses the real-time monitoring system to extract information about the learners in order to automatically extract the learners' characteristics during learning situations. The real-time monitoring system allows the teachers to monitor events and situations that occur during the learning process in real time and saves the assessment information of the learners. The system also records the assessment information of teachers (student, class, and event), which is an important element for the specific analysis of learners. The recorded information about learner assessment is used to construct learner profiles, which can be applied to the reconstruction of learning and to personalized learning systems such as intelligent tutoring systems.
Figure 1. System architecture

3.2. Real-Time Monitoring System
The real-time monitoring system is composed of functions including real-time learner monitoring, assessment, and extraction of learners' characteristics. The real-time monitoring system monitors the learning situation of learners, extracts the information of learners that arises during learning situations in real time, and provides this information to teachers. In this process, the information on the characteristics of learners is automatically extracted and transmitted to the learner's profile.

Real-time monitoring and assessment are composed of the monitoring module, which can monitor various events and the activity of students in learning situations, and the assessment module for teachers, in which the teachers can assess the learners in their class. The monitoring module for learners provides a function in which teachers can check the results of the events occurring during learning situations in real time. At the same time, the module saves the measurement results from the events. The saved results are used to extract the characteristics of learners. The function of the assessment module for teachers is to extract the assessment information of teachers during learning situations. The assessment information of teachers regarding learning is as follows:

(1) Level of comprehension: information on the learner's level of comprehension regarding learning, as assessed by the teacher subjectively.
(2) Level of concentration: information on the learner's level of concentration regarding learning, as assessed by the teacher objectively.
(3) Learning attitude: information on the learner's attitude regarding learning, as assessed by the teacher objectively.

The assessment information of teachers is a subjective assessment of learners carried out while teachers are teaching. The Likert scale was used for assessment [6]. The results of the assessment are used in analyzing the characteristics of learners and in the learner profiling system. Table 1 summarizes the learner profile components; an illustrative data-structure sketch follows the table.

Table 1. Learner profile components
Category                 Contents                                    Description
Personal Information     Name, grade, class                          Personal information of learners
Cognitive Ability        Memory, concentration, visual               Measurements of cognitive abilities that affect learning
                         cognitive perception
Assessment Information   Levels of comprehension,                    Learner assessment information given by teachers
                         concentration, and attitude
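The profile contents in Table 1 map naturally onto a simple record type. The sketch below is illustrative only; the field names and the 5-point Likert range are assumptions, not the paper's schema.

```python
# Illustrative learner-profile record per Table 1; names are assumptions.
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    # Personal information
    name: str
    grade: int
    class_name: str
    # Cognitive ability measurements
    memory: float = 0.0
    concentration: float = 0.0
    visual_perception: float = 0.0
    # Teacher assessment on a 5-point Likert scale (assumed range)
    comprehension_level: int = 3
    concentration_level: int = 3
    attitude_level: int = 3
```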
CONCLUSION
In this paper, we proposed a learner profiling system for providing customized learning by analyzing the characteristics of learners. The proposed system can measure a learner's comprehension, concentration, and attitude. It will be useful for providing customized learning in e-learning environments. As future work, we will test the proposed system and verify its effect.
ACKNOWLEDGMENT
This research was supported by the ICT R&D program of MSIP/IITP [B0101-15-0340].
REFERENCES
[1] D. R. Garrison, E-Learning in the 21st Century: A Framework for Research and Practice, Taylor & Francis, London, UK, 2011.
[2] D. Held and A. McGrew, The Global Transformation Reader: An Introduction to the Globalization Debate.
[3] H. J. Walberg, "Losing local control," Educational Researcher, vol. 22, no. 59, pp. 19–26, 1994.
[4] C.-W. Song, J.-H. Kim, K.-Y. Chung, J.-K. Ryu, and J.-H. Lee, "Contents recommendation search system using personalized profile on semantic web," The Journal of the Korea Contents Association, vol. 8, no. 1, pp. 318–327, 2008.
[5] K. H. Joon, C. D. Keun, and H. K. Seok, "A multimedia recommender system using user playback time," Korea Society for Internet Information, vol. 10, no. 1, pp. 111–121, 2009.
[6] R. Likert, "A technique for the measurement of attitudes," Archives of Psychology, vol. 22, no. 140, 1932.
Context Reasoning Approach for Context-aware Middleware
Yoosoo Oh
School of Computer and Communication Engineering, Daegu University, Gyeongsan 712-714, Republic of Korea
[email protected]
Abstract—In this paper, we survey and review context reasoning approaches for context-aware middleware. We present several examples of context reasoning methods and then compare the research activities with respect to their features for context reasoning. As a result of the analysis, we identify the need for a generalized architecture that generates reliable output by incorporating a real-time context reasoning process in large systems.
Keywords—context-awareness; reasoning; context-aware middleware; context integration
INTRODUCTION
Since context-aware applications were first developed, several research activities on context-aware middleware have been carried out. In particular, it is important to consider the information processing algorithms of context-aware middleware for a better understanding. In this paper, we survey and review the context reasoning approach as the information processing algorithm for context-aware middleware. In particular, we describe several examples of context reasoning from the referenced literature. We also compare the research activities with respect to their definitions, approaches, conditions, and features for the context reasoning process.
CONTEXT REASONING APPROACH REVIEW
Reasoning is the cognitive process of looking for reasons for actions. Context reasoning is a context process of looking for reasons for the current situation and making a semantic decision. To describe the context reasoning approach, we explain several examples of context reasoning for context data management. Table 1 presents a comparison of representative research activities related to context reasoning in context-aware middleware.

Table 1. Comparison of representatives of related works
Middleware                                Integration stage     Input / Output               Reasoning algorithm
Context Fusion Network [1]                understanding level   sensory data / context       Operator Composition: simple logical combinations in Operator Graph
CoCo [2]                                  understanding level   information / context        CoCoGraph: controlling, parallel composition
Software Engineering Framework [3]        abstract level        context / context            Simple aggregation
iQueue [4]                                understanding level   raw data / high-level data   iQL: Composer
Sensor Fusion using Dempster-Shafer [5]   understanding level   context / context            Dempster-Shafer approach

Context Fusion Network [1] computes higher-level understanding from lower-level sensory data with a set of environmental states and interactions. CoCo [2] derives high-level context from lower-level context information by using information specific to a certain entity at a specific point in time. Software Engineering Framework [3] uses interpretation and data fusion to bridge the gap between raw sensory output and the level of abstraction, based on information originating from a wide variety of sources. iQueue [4] accepts data from one or more sources and acts as a source of higher-level data. Sensor Fusion using Dempster-Shafer Theory [5] collects all the relevant context information about each major entity and creates a higher-level context.
PRACTICAL REASONING ANALYSIS FOR CONTEXTAWARE MIDDLEWARE
The 5W1H (Who, What, Where, When, Why, How) context [6] has a hierarchy which consists of sub-contexts. Based on our survey, we matched appropriate reasoning methods to each context-gathering task. For context reasoning, we can simply employ reasoning tools such as JESS, CLIPS, and JADE. JESS [7] and CLIPS [8] are easy to use for rule-based context reasoning: JESS is a Java-enabled, platform-independent tool, whereas CLIPS is a C-based tool. JADE [9] makes it easy to build behavior models; it supports a multi-agent system through middleware and behavior agents by means of ontologies and agent programming. Additionally, JADE has debugging and deployment phases, and it can be integrated with JESS for reasoning. Statistical context reasoning applies the probabilistic approach. A naïve Bayes classifier and Bayesian reasoning can easily predict the classification of data and check the confidence of the reasoned output; they are therefore appropriate for statistical context reasoning.
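As a minimal illustration of the statistical reasoning described above (our sketch, not code from any of the surveyed systems), the following Python fragment classifies a hypothetical "Activity" sub-context with a naïve Bayes model and reports the confidence of the reasoned output. The feature names and training values are invented for the example, and scikit-learn is assumed to be available.

# Illustrative only: naive Bayes reasoning over hypothetical sensed features.
from sklearn.naive_bayes import GaussianNB

# Hypothetical training data: [heart_rate, acceleration, ambient_noise]
X = [[60, 0.1, 30], [85, 0.9, 45], [110, 2.5, 60], [62, 0.2, 28]]
y = ["resting", "walking", "running", "resting"]

model = GaussianNB().fit(X, y)

# Reason about a new sensory observation and check the confidence of the
# reasoned output, as the survey describes for statistical reasoning.
observation = [[90, 1.1, 50]]
print(model.predict(observation))        # most likely activity
print(model.predict_proba(observation))  # per-class confidence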
As shown in Table 1, we compared the research activities with respect to their definitions, approaches, conditions, and features for context reasoning. The related activities focused on real-time analysis that derives high-level context from low-level sensory data. However, they did not provide a way to evaluate semantic information by utilizing various contexts in large systems.
Multimodal fusion achieves robust and reliable context reasoning results. As shown in Table 2, the 5W1H context reasoning method is selected according to the characteristics of each sub-context. The 5W1H context fusion adopts an appropriate reasoning method and, when necessary, uses several reasoning methods simultaneously as multimodal fusion.
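As a sketch of the voting-style fusion listed in Table 2 (again our illustration, not taken from the surveyed papers), the following Python function combines candidate values from several reasoners by weighted majority; the location names and weights are hypothetical.

# Illustrative only: (weighted) voting fusion for a symbolic-location context.
from collections import defaultdict

def weighted_vote(candidates):
    """candidates: iterable of (value, weight) pairs from several reasoners."""
    scores = defaultdict(float)
    for value, weight in candidates:
        scores[value] += weight
    return max(scores, key=scores.get)  # majority decision

readings = [("kitchen", 0.6), ("living room", 0.3), ("kitchen", 0.8)]
print(weighted_vote(readings))  # -> "kitchen"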
DISCUSSION
After analyzing the related research activities, we found several discussion issues. The previous research activities have some advantages, namely re-usability, self-management, real-time processing, interoperability, and automatic reconfiguration. Moreover, the related works allow easy involvement of the context model without changing the source code, or system support for applications through the semantics of context composition.
TABLE 2. 5W1H CONTEXT REASONING METHODS

5W1H | Sub-context | Reasoning method | Reason to select
Who | Name | Uncertainty prediction | To predict the name of an anonymous user
Who | Gender, Constitution, Peculiarity, Preference | Event-based fusion | For static information (not frequently changed)
What | Sensor ID, Sensor location, Sensor owner | Event-based fusion | For static information (not frequently changed)
What | Contents | Rule-based reasoning, Weighted sum | To decide semantics (frequently changed)
Where | Symbolic location | (Weighted) Voting method, Fuzzy logic, Calculation | To decide the majority of candidates
Where | Absolute location | Event-based fusion | For static information (not frequently changed)
When | Symbolic time | Calculation | To interpret the measured time
When | Absolute time | Time stamp | To represent the current time
How | Body condition, Behavior | (Weighted) Voting method, Fuzzy logic, Naïve Bayes classifier | To extract semantic information using the independent property
How | Activity | Rule-based reasoning, Naïve Bayes classifier, Statistical reasoning | To extract semantic information using the dependent property
Why | Stress, Emotion, Intention | Rule-based reasoning, Statistical reasoning | To infer new high-level information
However, the previous research activities have some constraints: they need semantic functionality at a higher layer and an information model specifying the semantics of the information. The related research also needs a relational database and a context modeling language. Therefore, we conclude that a generalized architecture is needed that generates reliable output by incorporating real-time context reasoning in large systems.
ACKNOWLEDGMENT
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2014R1A1A2056194).
REFERENCES
[1] Anand R, Roy C (2003) A middleware for context-aware agents in ubiquitous computing environments. In: Proceedings of the ACM/IFIP/USENIX 2003 International Conference on Middleware, Rio de Janeiro, Brazil, June 2003, pp 143-161
[2] Harry C, Tim F, Anupam J (2004) A Context Broker for Building Smart Meeting Rooms. In: Proceedings of the Knowledge Representation and Ontology for Autonomous Systems Symposium (AAAI Spring Symposium 2004), Stanford CA, March 2004, pp 53-60
[3] Tao G, HungKeng P, DaQing Z (2004) Toward an OSGi-based infrastructure for context-aware applications. IEEE Pervasive Computing 3(4): 66-74
[4] Patrick F, Siobhan C (2004) CASS – Middleware for Mobile Context-Aware Applications. In: Proceedings of MobiSys 2004, Boston, USA, July 2004
[5] Huadong W (2004) Sensor Data Fusion for Context-Aware Computing Using Dempster-Shafer Theory. Carnegie Mellon University Doctoral Thesis, UMI Order Number: AAI3126933
[6] Y. Oh, J. Han, and W. Woo, "A Context Management Architecture for Large-scale Smart Environments," IEEE Communications Magazine, vol. 48, 2010, pp. 118-126.
[7] Sandia National Laboratories, JESS. http://herzberg.ca.sandia.gov/.
[8] Sourceforge.net, CLIPS. http://clipsrules.sourceforge.net/.
[9] Nikolaos S, Pavlos M (2007) An Ambient Intelligence Application Integrating Agent and Service-Oriented Technologies. In: Proceedings of the 27th SGAI International Conference on Artificial Intelligence (AI 2007), Cambridge, UK, December 2007
Role of NT-proBNP (N-terminal pro-brain natriuretic peptide) for Prognosis in Non-ST-segment Elevation Myocardial Infarction Patients from the KorMI Database
Ho Sun Shon*, Wooyeong Jang*, Soo Ho Park**, Jang-Whan Bae****, Kyung Ah Kim***, Keun Ho Ryu**
* Database and Bioinformatics Laboratory, PSM of School of Medicine, Chungbuk National University, Cheongju, South Korea
{shon0621, jangwy8838}@gmail.com
** Database and Bioinformatics Laboratory, School of Electrical & Computer Engineering, Chungbuk National University, Cheongju, South Korea
{soohopark, khryu}@dblab.chungbuk.ac.kr
*** Department of Biomedical Engineering, School of Medicine, Chungbuk National University, Cheongju, South Korea
[email protected]
**** Department of Internal Medicine, School of Medicine, Chungbuk National University, Cheongju, South Korea
[email protected]
Abstract—Recently, N-terminal pro-brain natriuretic peptide (NT-proBNP) has been used for diagnosis and prognosis decisions in cardiac disorders. It is secreted in response to hemodynamic stimulus, mostly in the ventricles of the heart, and it is known to increase when the left ventricle malfunctions. In particular, it rises in proportion to the severity of cardiac insufficiency and is used for the diagnosis of cardiac insufficiency and for prognosis decisions. In this paper, we estimate the prognosis through NT-proBNP as a risk evaluation marker for the early NSTEMI patients who are as risky as STEMI patients when they visit a hospital. We report the prognosis estimation results after conducting PCI within 24 hours on the patients in the severely risky group among NSTEMI patients. As the estimation method, we classified the measured NT-proBNP values into two groups and conducted a survival analysis of MACE and death with respect to NT-proBNP, matching the variables necessary for adjustment through propensity score matching. We found through the hazard function of the Cox analysis that as the log(NT-proBNP) value increases by 1, the risk of MACE increases 1.312 times. This means that, according to the measured NT-proBNP value, it is possible to evaluate the prognosis of NSTEMI patients and that NT-proBNP influences MACE.
Keywords- N-terminal pro-B type natriuretic peptide; Non ST-segment Elevation Myocardial Infarction; Prognosis
INTRODUCTION
In Korea, with the development of society and the economy and the westernization of our living environment, cardiovascular disorders have been increasing steadily. In particular, together with the aging phenomenon, the death rate caused by myocardial infarction has also been increasing. Statistics estimate that only 2~15 percent of acute cardiac disease patients arrive at the hospital in time, and many of them die. Also, in Korea about 50,000 people, or 1~2 per 1,000 people, are estimated to die from sudden death. Within acute coronary syndrome, NSTEMI (non-ST segment elevation myocardial infarction) and unstable angina develop from the same pathological conditions, and their frequency has been increasing in the recently modernized society, outpacing the frequency of ST-segment elevation myocardial infarction. There are many data to support the benefit of timely primary revascularization for STEMI (ST segment elevation myocardial infarction), and several kinds of prognostic factors have already been identified, including rapid revascularization and the Killip classification. However, a useful single prognostic factor that can be translated into a decision on the necessity of urgent revascularization in NSTEMI is still under investigation. There are some multifactorial laboratory or clinical decision criteria to support the efficacy of urgent revascularization, but a useful single prognostic factor remains ambiguous in NSTEMI [2, 3, 4, 5, 6]. NT-proBNP is a very useful biomarker for the diagnosis of HF (heart failure); it predicts short- and long-term prognosis and determines the treatment strategy for HF patients [7].
Therefore, concern about the treatment of NSTEMI patients has been increasing, and in particular the early invasive strategy through PCI for high-risk patients has been known to be better than conservative therapy. According to the guidelines of the ACC/AHA (American College of Cardiology/American Heart Association), the early invasive strategy is recommended for high-risk NSTEMI patients and suggests using PCI (percutaneous coronary intervention) within 48 hours. Recently, prompt treatment, such as within 12 or 24 hours, has been reported to be
effective for protecting the myocardium and to show a better prognosis. Even among early NSTEMI patients, there are patients who are as risky as STEMI patients. Therefore, in this paper we study the prognosis estimation results after treating the severely risky among these patients with PCI within 24 hours, using NT-proBNP as the indicator for evaluating the degree of risk at the hospital.
POPULATION AND METHODS
Study Population
To prevent acute myocardial infarction and to develop treatment guidelines suitable for Korean people, KorMI (Korea Working Group of Myocardial Infarction) has been operated as a registry study of acute myocardial infarction patients and has been collecting the necessary data [1]. There were 15,533 patients registered in the KorMI database under the diagnosis of AMI from 2008 to 2013, including 8,382 STEMI patients and 6,711 NSTEMI patients. The population for analysis consists of the early NSTEMI patients from the KorMI data who had chest pain and received PCI treatment within 24 hours. Figure 1 shows the whole sampling procedure for the study population. First, the data are classified into 8,382 (55.5%) STEMI, 6,711 (44.5%) NSTEMI, and 440 missing values. Next, the data are classified into 4,916 (76.2%) with pain, 1,539 (23.8%) without pain, and 256 missing values. Finally, we classified the data according to whether the patients received PCI treatment within 24 hours or not: 2,411 (65.8%) received PCI within 24 hours, 1,252 (34.2%) after 24 hours, and 50 were missing values.
Figure 2. Propensity score distribution of log(NT-proBNP) by the two groups
Figure 3. Revision of the variables through the changes of the standardized differences between the original data and the used variables
For MACE, we identified whether all cases, ST, TVR, and MI differed between the groups through survival analysis, and through Cox regression analysis we identified how much NT-proBNP influences MACE. There were few events of ST, TVR, and MI, so we decided not to analyze them separately because they are not suitable for survival analysis.
Figure 1. Sampling procedure for the study population of NSTEMI patients
Analysis Methods
We represented continuous variables as mean ± standard deviation and used the SAS 9.3 program for the analysis [8]. After log transformation, NT-proBNP forms a very good normal distribution, with quartiles Q1 4.4, Q2 5.7, and Q3 6.9. Therefore, using 6 as an approximation of the median of 5.7, we classified NT-proBNP into two groups: the group with log(NT-proBNP) greater than 6 was classified as High, and the group with less than 6 as Low. To control the independent variables we used PSM (propensity score matching). The variables used for the propensity score matching of the NT-proBNP biomarker were age, gender, BMI, hypertension, hyperlipidemia, smoking, prior MI, and family history.

EXPERIMENTAL RESULTS
For the experiments, we transformed the NT-proBNP values into logs, classified them into the two groups, and then adjusted the variables to control the independent variables by PSM. Figure 2 shows the distribution of the two groups, to identify how well the evaluation standard of the NT-proBNP marker matches through PSM; we identified that about half of the samples, where the two groups overlap, were matched. There were 1,373 subjects at the matching point at first, and the subject patients were then reduced to 760. In Figure 3, we identified how well the variables were revised through the changes of the standardized differences between the original data and the used variables. The results show that the revision was well done, falling within -0.1 to 0.1.
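The paper performs the matching in SAS; purely as an illustrative sketch of 1:1 propensity score matching under the same idea, the following Python code (assuming pandas and scikit-learn) estimates propensity scores by logistic regression and greedily pairs each treated subject with the nearest control. The column names are hypothetical stand-ins for the covariates listed above.

# Illustrative only: propensity scores + greedy 1:1 nearest-neighbour matching.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_one_to_one(df, treatment_col, covariates):
    # 1) Propensity score: P(High NT-proBNP group | covariates).
    lr = LogisticRegression(max_iter=1000).fit(df[covariates], df[treatment_col])
    df = df.assign(ps=lr.predict_proba(df[covariates])[:, 1])
    treated = df[df[treatment_col] == 1]
    control = df[df[treatment_col] == 0].copy()
    pairs = []
    # 2) Greedy nearest-neighbour matching on the propensity score.
    for i, row in treated.iterrows():
        j = (control["ps"] - row["ps"]).abs().idxmin()
        pairs.append((i, j))
        control = control.drop(j)  # match without replacement
        if control.empty:
            break
    return pairs  # matched (treated, control) index pairs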
We classified the subject patients into two groups by the early measured NT-proBNP value and compared and analyzed their characteristics before and after PSM matching. As a result, all the revised variables turned out to be significant. Table 1 shows, as baseline characteristics, the results before and after matching for the data necessary for revision among the original data. The results show the revision of variables such as age, gender, BMI, hypertension, hyperlipidemia, smoking, prior MI, and family history.
Figure 4. MACE before the PSM of NT-proBNP
For the survival analysis, we compared overall MACE and death among MACE before and after the PSM revision using the Kaplan-Meier method. Figures 4 and 5 show overall MACE before and after the PSM revision; there turned out to be some differences between the two groups classified by the measured NT-proBNP values. Figures 6 and 7 show death among MACE; there turned out to be a significant difference before the PSM revision.
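A minimal sketch of such a Kaplan-Meier comparison, assuming the Python lifelines package rather than the SAS procedures actually used in the paper; the DataFrame column names are hypothetical.

# Illustrative only: KM curves per NT-proBNP group plus a log-rank test.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_mace(df):
    high = df[df["group"] == "High"]
    low = df[df["group"] == "Low"]
    kmf = KaplanMeierFitter()
    kmf.fit(high["time_to_mace"], event_observed=high["mace"], label="High")
    ax = kmf.plot_survival_function()
    kmf.fit(low["time_to_mace"], event_observed=low["mace"], label="Low")
    kmf.plot_survival_function(ax=ax)
    # Log-rank test for the difference between the two survival curves.
    res = logrank_test(high["time_to_mace"], low["time_to_mace"],
                       event_observed_A=high["mace"],
                       event_observed_B=low["mace"])
    return res.p_value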
Figure 5. MACE after the PSM of NT-proBNP
TABLE I. BASELINE CHARACTERISTICS OF STUDY PATIENTS
(Values are N(%) or Mean±Std. Columns: Origin (N=1,373) High | Low | p-value, then Matched (N=760) High | Low | p-value.)

NT-proBNP | 4.49±1.03 | 7.25±0.94 | 0.0001 | 4.60±1.03 | 7.06±0.85 | 0.0001
Gender (male) | 387(64.3) | 638(83.3) | 0 | 289(76.1) | 288(75.8) | 0.932
BMI | 23.5±3.3 | 24.6±3 | 0 | 24±3.2 | 24±2.9 | 0.966
HTN | 314(52.6) | 310(41.2) | 0 | 181(47.6) | 186(48.9) | 0.717
Hyperlipidemia | 68(11.9) | 119(16.4) | 0.021 | 49(13.4) | 63(17.2) | 0.146
Smoking | 206(34.6) | 371(48.9) | 0 | 150(39.5) | 159(41.8) | 0.506
Prior_MI | 25(4.1) | 17(2.2) | 0.04 | 11(2.9) | 12(3.2) | 0.832
Family history | 32(5.7) | 85(11.5) | 0 | 27(7.1) | 34(8.9) | 0.35
Killip | 43(7.6) | 27(3.8) | 0.003 | 16(4.5) | 13(3.7) | 0.575
LVEF | 52.7±10.9 | 57.1±9.2 | 0 | 52.6±10.2 | 56.7±9.4 | 0
SBP | 133±27.4 | 135.6±25.2 | 0.076 | 133.2±26.6 | 133.8±25.7 | 0.752
DBP | 79.5±15.9 | 82.1±15.7 | 0.003 | 80.4±15.6 | 80.6±15.8 | 0.897
HR | 77.2±17.7 | 73.6±14.1 | 0 | 76.6±16 | 72.5±14.1 | 0
TC | 184.4±44.1 | 191.4±41.8 | 0.003 | 186.1±42.2 | 186.6±38.7 | 0.869
TG | 117±78.3 | 139.7±94.9 | 0 | 120.4±82.2 | 127.4±79.9 | 0.251
LDL | 117.5±35.9 | 121.7±36.4 | 0.043 | 119.3±35.8 | 118.2±35.1 | 0.669
HDL | 44.1±12.7 | 44.5±15.7 | 0.639 | 44.3±13 | 45.3±16.3 | 0.366
Cr | 1.1±1.2 | 0.9±0.4 | 0.001 | 1.1±1.3 | 0.9±0.2 | 0.004
RBS | 130.6±43 | 131.8±34.7 | 0.596 | 129.4±41.9 | 134±34.5 | 0.106
hsCRP | 7.6±20.7 | 2.8±11.9 | 0 | 7.2±21.2 | 2.2±8.5 | 0
Age | 67.2±11.7 | 58.5±11.4 | 0 | 63.4±10.8 | 63.3±10.5 | 0.892
Maximum_CKMB | 80.6±134.6 | 95.3±155 | 0.068 | 87.5±145.1 | 90.8±160.9 | 0.764
TroponinI | 21.4±37.8 | 21.7±35.5 | 0.868 | 21.7±33.8 | 22.6±36.9 | 0.75

Figure 4. MACE before the PSM of NT-proBNP
Figure 5. MACE after the PSM of NT-proBNP
Figure 6. Death before the PSM of NT-proBNP
Figure 7. Death after the PSM of NT-proBNP

Next, through Cox regression, we calculated the survival function of the prediction model according to the changes over time. The hazard function gives the conditional probability of death right after time t for the people who survived up to time t. The hazard function is used in the proportional hazards regression model and is identical to the definition of the instantaneous mortality rate used in epidemiology.
The Cox regression model represents the log risk function at time t using a linear expression of a number of discriminating variables at time t. That is, in a Cox model with p discriminating variables, if the values of the discriminating variables of the i-th subject are $\mathbf{x}_i' = (x_{i1}, x_{i2}, \cdots, x_{ip})$ and the regression coefficients are $\boldsymbol{\beta} = (\beta_1, \beta_2, \cdots, \beta_p)$, the Cox model is represented by the following expression:

$$h_i(t) = h_0(t)\exp(\mathbf{x}_i'\boldsymbol{\beta}) = h_0(t)\exp(\beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip})$$

Here, $h_0(t)$ is the baseline hazard function, for which we assume no influence of the discriminating variables on the risk function.

Table 2 shows that as log(NT-proBNP) increases by 1, the risk of MACE increases 1.312 times through the Cox hazard function. We can also identify the confidence interval (1.014, 1.699).
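The reported hazard ratio can be checked directly from the Table 2 coefficient: HR = exp(beta) = exp(0.272) and the 95% confidence interval is exp(0.272 ± 1.96 × 0.132), which a short Python computation confirms up to rounding.

# Numerical check of the reported Cox result (coefficient and SE from Table 2).
import math

beta, se = 0.272, 0.132
hr = math.exp(beta)                  # ~1.31, matching the reported 1.312
ci = (math.exp(beta - 1.96 * se),    # ~1.01, matching the reported 1.014
      math.exp(beta + 1.96 * se))    # ~1.70, matching the reported 1.699
print(round(hr, 3), tuple(round(c, 3) for c in ci))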
ACKNOWLEDGMENT
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (No. 2013R1A1A206518).
TABLE 2. COX'S ANALYSIS OF LOG(NT-PROBNP)

Parameter | DF | Parameter Estimate | SD Error | Chi-Square | Pr > ChiSq | Hazard Ratio | 95% CI (low, high)
Log(NT-proBNP) | 1 | 0.272 | 0.132 | 4.258 | 0.0391 | 1.312 | (1.014, 1.699)

REFERENCES
[1] http://www.kormi.org/
[2] Kim SS, Choi HS, Jeong MH, Cho JG, Ahn YK, Kim JH, Chae SC, Kim YJ, Hur SH, Seong IW, Hong TJ, Choi D, Cho MC, Kim CJ, Seung KB, Chung WS, Jang YS, Rha SW, Bae JH, Park SJ; Korea Acute Myocardial Infarction Registry Investigators, Clinical outcomes of acute myocardial infarction with occluded left circumflex artery, J Cardiol. 2011, 57(3), pp. 290-296.
[3] Song Y, Analyses of Studies on Cardiac Rehabilitation for Patients with Cardiovascular Disease in Korea, J Korean Acad Nurs. 2009, 39(3), pp. 311-320.
[4] Deedwania PC, Ahmed MI, Feller MA, Aban IB, Love TE, Pitt B, Ahmed A, Impact of diabetes mellitus on outcomes in patients with acute myocardial infarction and systolic heart failure, Eur J Heart Fail. 2011, 13(5), pp. 551-559.
[5] Cho JY, Jeong MH, Choi OJ, Lee S, Jeong SY, Kim IS, Cho JS, Hwang SH, Hwang SH, Yoon NS, Moon JY, Hong YJ, Kim JH, Kim W, Ahn YK, Cho JG, Park JC, Kang JC, Predictive factors after percutaneous coronary intervention in young patients with acute myocardial infarction, Korean Circ J. 2007, 37(8), pp. 373-379.
[6] Haaf P, Balmelli C, Reichlin T, Twerenbold R, Reiter M, Meissner J, Schaub N, Stelzig C, Freese M, Paniz P, Meune C, Drexler B, Freidank H, Winkler K, Hochholzer W, Mueller C, N-terminal Pro-B-type Natriuretic Peptide in the Early Evaluation of Suspected Acute Myocardial Infarction, Am Heart J 2011, 124(8), pp. 731-739.
[7] Gaggin HK, Mohammed AA, Bhardwaj A, et al., Heart failure outcomes and benefits of NT-proBNP-guided management in the elderly: results from the prospective, randomized ProBNP outpatient tailored chronic heart failure therapy (PROTECT) study, J Card Fail. 2012;18:626-34.
[8] http://www2.sas.com/proceedings/sugi29/165-29.
CONCLUSIONS
NT-proBNP is a well-known biomarker for the diagnosis and prognosis of heart failure and is directly associated with myocardial necrosis. If the pressure in the left ventricle increases, proBNP is released from the cardiac muscle cells of the left ventricle. proBNP is separated into biologically active BNP and inactive NT-proBNP (N-terminal proBNP), the N-terminal part of BNP. BNP is also known as an important indicator for assessing the function and estimating the prognosis of the left ventricle, as the representative neurohormone secreted from the ventricular muscle by the mechanical overload of the left ventricle in chronic heart failure.
In this paper, we studied the patients who had chest pain and underwent PCI within 24 hours among the patients with an early NSTEMI diagnosis from the KorMI data. Even among early NSTEMI patients, there are patients who are as risky as STEMI patients. We obtained the prognosis estimation results after performing PCI within 24 hours on the severely risky among these patients, using the measured NT-proBNP value as the evaluation marker of the degree of risk when they visit a hospital. We classified the subject patients into two groups by the early measured NT-proBNP value and compared and analyzed their characteristics before and after PSM matching. As a result, all the revised variables turned out to be significant. For MACE with respect to NT-proBNP, we identified the differences in all cases, ST, TVR, and MI through survival analysis, and we identified how much NT-proBNP influenced MACE through Cox regression analysis. As a result, we identified through the hazard function of the Cox analysis that as the value of log(NT-proBNP) increases by 1, the risk of MACE increases 1.312 times, with a confidence interval of 1.014 to 1.699. Therefore, through these research results we established an evaluation standard for prognosis estimation and post-evaluation by the NT-proBNP value for NSTEMI patients, and this method can be utilized as a cardiac marker.
A 65nm CMOS Current Mode Amplitude Modulator
for Quad-band GSM/EDGE Polar Transmitter
Hyunwon Moon
School of Electronic and Electric Engineering, Daegu University, Gyeongsan, Korea
e-mail: [email protected]
Abstract—A current-mode amplitude modulator using a current-reusing technique is proposed for a quad-band GSM/EDGE polar transmitter. In order to reduce the current consumption and silicon area, the functions of a programmable gain amplifier, AM-PM combiner, and driver amplifier are realized as one stacked circuit structure. The proposed amplitude modulator is implemented in a 65nm CMOS technology.
Keywords- polar transmitter; AM-PM combiner; current reusing; amplitude modulation; CMOS; EDGE
INTRODUCTION
Recently, as smartphones have become widely used in our lives, people would like to use faster data rates in the mobile environment. In particular, a multi-band multi-mode RF transceiver including 2G/3G/4G cellular technologies has to be implemented as a single chip. It should also be competitive in terms of small silicon area and low power consumption. So far, the mainstream transmitter architecture of a multi-band multi-mode cellular RF transceiver has been the dual-path architecture, which is composed of a narrow-band polar modulator for GSM/EDGE and a wideband I/Q modulator for WCDMA/HSDPA/LTE, because it is the optimal choice with respect to implementation and performance [1]-[4].
Fig. 1. Block Diagram of quad band GSM/EDGE
Amplitude Modulator.
The polar modulator has been widely used for the GSM/EDGE transmitter to meet the RX-band noise performance at 20MHz offset frequency without an external SAW filter. In general, a polar transmitter has two modulation paths: a phase modulator (PM) based on a PLL structure and an amplitude modulator (AM). The phase modulator has been widely used for constant-envelope modulation signals, such as GSM/GPRS and Bluetooth, because it can share the functions of a frequency synthesizer and transmitter and shows better spectral purity [5]-[6]. In this paper, a power-efficient amplitude modulator for a quad-band GSM/EDGE polar transmitter is proposed. The proposed amplitude path presents a power-efficient method to combine the AM and PM signals, and it can drive an external power amplifier without noise performance degradation. The proposed envelope modulator is fabricated and verified using a 65nm CMOS technology.
AMPLITUDE MODULATOR ARCHITECTURE
The proposed amplitude modulator for quad-band GSM/EDGE is composed of a current-mode digital-to-analog converter (DAC), two programmable gain amplifiers (PGAs), a low-pass filter (LPF), an AM-PM combiner, and two driver amplifiers (DA), as shown in Fig. 1. The amplitude path for a polar transmitter must be able to combine the AM signal and PM signal without distortion and provide proper gain and a sufficiently low noise floor. Also, its output power level should be large enough to drive an external PA. First, the AM digital signal separated by the CORDIC processor is converted to an analog current signal through the DAC. This AM current signal is transferred to the AM-PM combiner without conversion to a voltage signal, in order to maintain the highly linear characteristic. The wideband AM and PM signals are combined through the AM-PM combiner, and then the narrow-band 8-PSK modulation signal is reconstructed for the EDGE standard. A total gain control range of 42 dB with a 1dB step is realized utilizing the two PGAs. A simple passive RC LPF plays the role of an anti-aliasing filter for the DAC clock harmonic components.
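For background (this is a conceptual sketch, not the chip's implementation), the AM/PM separation that the CORDIC processor performs can be written in a few lines of Python: a complex baseband sample is split into its envelope for the AM path and its phase for the PM path.

# Conceptual sketch only: polar (AM/PM) decomposition of an I/Q sample.
import cmath

def to_polar(i, q):
    sample = complex(i, q)
    amplitude = abs(sample)        # AM path (envelope)
    phase = cmath.phase(sample)    # PM path (radians)
    return amplitude, phase

am, pm = to_polar(0.6, 0.8)
print(am, pm)  # 1.0, ~0.927 rad; the two paths are later recombined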
TABLE I
SIMULATED PERFORMANCE SUMMARY

Parameter | Comments | Results
Output power | GSM/EDGE | 4/1.5 dBm
S22 | Output matching | -10 dB
Harmonic rejection (Pfund/PLO3B ratio) | LB | -54 dBc
 | HB | -41 dBc
Output noise | @ 20MHz offset freq. | -172 dBc/Hz
Gain control range | 1dB step | 42 dB
Current consumption | GSM mode | 13 mA
 | EDGE mode | 40 mA
Fig. 2. Schematic of the 1st programmable gain amplifier (PGA) with a passive RC low-pass filter (LPF)
ACKNOWLEDGMENT
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2014R1A1A2054858). This research was also supported by the IC Design Education Center (IDEC).
REFERENCES
[1] D. L. Kaczman, M. Shah, and N. Godambe et al., "A single-chip tri-band (2100, 1900, 850/800 MHz) WCDMA/HSDPA cellular transceiver," IEEE J. Solid-State Circuits, vol. 41, pp. 1122-1132, May 2006.
[2] H. Darabi, A. Zolfaghari, and H. Jensen et al., "A fully integrated quad-band GPRS/EDGE radio in 0.13 μm CMOS," IEEE ISSCC Dig. Tech. Papers, pp. 206-207, Feb. 2008.
[3] M. Nilsson, S. Mattisson, and N. Klemmer et al., "A 9-band WCDMA/EDGE transceiver supporting HSPA evolution," IEEE ISSCC Dig. Tech. Papers, pp. 366-367, Feb. 2011.
[4] A. Cicalini, S. Aniruddhan, and R. Apte et al., "A 65 nm CMOS SOC with embedded HSDPA/EDGE transceiver, digital baseband and multimedia processor," IEEE ISSCC Dig. Tech. Papers, pp. 368-369, Feb. 2011.
[5] R. Magoon, A. Molnar, J. Zachan, G. Hatcher, and W. Rhee, "A single-chip quad-band (850/900/1800/1900 MHz) direct conversion GSM/GPRS RF transceiver with integrated VCOs and fractional-N synthesizer," IEEE J. Solid-State Circuits, vol. 37, pp. 1710-1720, Dec. 2002.
[6] S.-A. Yu and P. Kinget, "A 0.65-V 2.5-GHz fractional-N synthesizer with two-point 2-Mb/s GFSK data modulation," IEEE J. Solid-State Circuits, vol. 44, pp. 2411-2425, Sep. 2009.
[7] B. G. Perumana, R. Mukhopadhyay, S. Chakraborty, C.-H. Lee, and J. Laskar, "A low-power fully monolithic subthreshold CMOS receiver with integrated LO generation for 2.4 GHz wireless PAN applications," IEEE J. Solid-State Circuits, vol. 43, pp. 2229-2238, Oct. 2008.
Fig. 3. Current-reusing stacked 2nd PGA, AM-PM combiner, and driver amplifier (DA)
2nd programmable gain amplifier, AM-PM combiner,
and driver amplifier
The proposed AM-PM combiner using a current-reusing technique to reduce the power consumption and silicon area is presented in Fig. 3. The amplitude of the AM current signal mirrored by the 1st PGA can be varied from 0 to -36dB with a 6dB step, in the same way as in the 1st PGA. To meet the RX-band noise performance, a very large bias current is required to implement the AM-PM combiner. Therefore, the DA structure is stacked on top of the AM-PM combiner in order to reuse its large bias current, instead of connecting the AM-PM combiner and DA as a cascade, as shown in Fig. 3. As a result, the proposed AM path has better linearity and lower noise than an up-conversion mixer structure thanks to the fully current-mode delivery of the AM signal.
SIMULATION RESULTS AND CONCLUSION
Table I summarizes the simulation results of the
proposed amplitude modulator for quad band
GSM/EDGE polar transmitter. There is little contribution
about the RX band noise performance degradation of the
phase modulated signal because the noise performance of
AM-PM combiner is too very low. Also, the current
Applying the Harmony Search Optimization Method to Economic Load Dispatch Problems in Power Grids
Si-Na Park, Sang-Bong Rhee
Department of Electrical Engineering, Yeungnam University, Gyeongbuk 712 749, Korea
[email protected]
Abstract—This paper presents an improved harmony search (HS) algorithm for economic load dispatch (ELD) problems with valve-point loading constraints in thermal units. To enhance the convergence and accuracy of the original HS algorithm, a simple concept inspired by error optimization is adopted for the selection of new decision variables in the search space. The improved HS algorithm has the benefit of a high convergence rate and precision compared to other optimization methods. Three different test systems commonly used in the literature on valve-point-effect ELD problems are successfully solved by the proposed method. The proposed method is easy to implement, and its convergence performance is better than that of other optimization algorithms.
Keywords-Economic Load Dispatch; Harmony Search algorithm; Valve-Point Loading; Optimization; Power System Control and
Operation
INTRODUCTION
Economic load dispatch (ELD) is an important optimization problem in power system operation for allocating generation among the committed units. Furthermore, it is a sub-problem of the optimal power flow (OPF) and forms a part of modern energy management system (EMS) functions. The main objective of the ELD problem is to reduce the total operating cost while satisfying various equality and inequality constraints [1].
Many optimization techniques and mathematical algorithms, such as conventional optimization methods, artificial intelligence methods, and heuristic algorithms, have been proposed to obtain an optimal solution of ELD problems. Among these methods, genetic algorithms (GAs) and particle swarm optimization (PSO), known as probabilistic heuristic algorithms, have been employed successfully to solve such problems with robustness. However, GAs and PSO have drawbacks regarding the tuning of some weights or parameters and difficulties with handling a large number of constraints, convergence, or algorithmic complexity [2].
Recently, a new optimization method was proposed by Geem et al., which they called the harmony search (HS) algorithm [3]. The HS algorithm is inspired by the musical process of searching for a perfect state of harmony. Compared to mathematical optimization algorithms, the HS algorithm imposes fewer mathematical requirements and does not require initial values for the decision variables or derivative information of the objective function.
In this paper, the improved HS algorithm is proposed to solve the ELD problem with the valve-point loading effect. The addition of the valve-point loading effect to the objective function makes the ELD problem more complicated, since it increases the non-linearity of the search space as well as the number of local minima.
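The paper does not print its objective function, but valve-point ELD studies commonly use the fuel cost F(P) = aP² + bP + c + |e·sin(f·(Pmin − P))| per unit; the following Python sketch shows this standard form only, as an assumption about the cost model rather than the authors' exact formulation.

# Assumed standard valve-point cost model (not printed in the paper itself).
import math

def unit_cost(p, a, b, c, e, f, p_min):
    return a * p**2 + b * p + c + abs(e * math.sin(f * (p_min - p)))

def total_cost(outputs, coeffs):
    # coeffs: list of (a, b, c, e, f, p_min) tuples, one per generator
    return sum(unit_cost(p, *k) for p, k in zip(outputs, coeffs))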
IMPROVED HS OPTIMIZATION METHOD
Major drawbacks of the HS algorithm are the handling of constraints and premature convergence. In this paper, a new technique is proposed to improve the HS algorithm and overcome these problems.
Handling of Constraints
To treat the penalty functions, the technique of maintaining a feasible solution is applied to the HS algorithm. The intuitive concept for maintaining a feasible solution is to fix a variable on the boundary point when it falls outside the feasible space. Fig. 1 illustrates the search process of this 'fixing' technique.
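A minimal sketch of this 'fixing' technique in Python: a decision variable that leaves its feasible range is fixed on the violated boundary (the limits in the example are those of unit 1 in Table II).

# 'Fixing' technique: clamp an infeasible variable onto the boundary.
def fix_on_boundary(x, x_min, x_max):
    return min(max(x, x_min), x_max)

print(fix_on_boundary(695.0, 0.0, 680.0))  # -> 680.0 (fixed on Pmax)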
Fig. 1. Fixing technique of the searching space
Pseudo-code for the HS Optimization Method
The pseudo-code for the selection of new decision variables and the penalty function treatment is denoted briefly as below:

- Generate a random value R, (0 <= R <= 1)
- If R <= HMCR then:
    x_i = R (x_i,max - x_i,min) + x_i,min
- Else:
    Generate an integer random value R1, (1 <= R1 <= HMS)
    x_i <- the R1-th value in the i-th column of the harmony memory
- Generate a random value R2, (0 <= R2 <= 1)
- If (R2 <= PAR) then:
    x_i = x_i
  else:
    x_i = x_i ± a, where a = bw × u(-1, 1)
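Transcribed directly into Python, the improvisation step above reads as follows; HM is the harmony memory (HMS rows, one column per decision variable) and bounds holds the (min, max) pair of each variable.

# Direct transcription of the pseudo-code above (illustrative sketch).
import random

def improvise(HM, bounds, HMCR, PAR, bw):
    new = []
    for i, (x_min, x_max) in enumerate(bounds):
        if random.random() <= HMCR:
            xi = random.random() * (x_max - x_min) + x_min  # random selection
        else:
            r1 = random.randrange(len(HM))                  # 1 <= R1 <= HMS
            xi = HM[r1][i]                                  # value from memory
        if random.random() > PAR:                           # pitch adjustment
            xi += bw * random.uniform(-1, 1)
        new.append(xi)
    return new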
NUMERICAL EXAMPLES AND RESULTS
The HS algorithm for the ELD problem has been applied to test systems with the valve-point loading effect to verify the performance of the HS optimization method. To prepare the simulation, several runs were initially performed to select the key parameters of the HS algorithm, such as HMS, HMCR, PAR, and NI. With those parameters, the test systems were solved. Table I lists the parameters for the HS algorithm.
TABLE I. THE PARAMETERS OF THE HS OPTIMIZATION METHOD FOR THE TEST SYSTEMS

SYSTEM | HMS | HMCR (1-HMCR) | PAR (1-PAR) | NI
3 GEN. | 6 | 0.75 (0.25) | 0.50 (0.50) | 10-5
13 GEN. | 26 | 0.89 (0.11) | 0.65 (0.35) | 10-5
40 GEN. | 80 | 0.85 (0.15) | 0.72 (0.28) | 10-5
BAND WIDTH: 0.001
The data on the cost coefficients, generation limits, and load demand of the 13-generator system with valve-point loading are given in Table II.

TABLE II. COST COEFFICIENTS AND POWER OF THE 13 GENERATOR SYSTEM

GEN. NO. | PMIN | PMAX | a | b | c | e | f
1 | 0 | 680 | 0.00028 | 8.10 | 550 | 300 | 0.035
2 | 0 | 360 | 0.00056 | 8.10 | 309 | 200 | 0.042
3 | 0 | 360 | 0.00056 | 8.10 | 307 | 200 | 0.042
4 | 60 | 180 | 0.00324 | 7.74 | 240 | 150 | 0.063
5 | 60 | 180 | 0.00324 | 7.74 | 240 | 150 | 0.063
6 | 60 | 180 | 0.00324 | 7.74 | 240 | 150 | 0.063
7 | 60 | 180 | 0.00324 | 7.74 | 240 | 150 | 0.063
8 | 60 | 180 | 0.00324 | 7.74 | 240 | 150 | 0.063
9 | 60 | 180 | 0.00324 | 7.74 | 240 | 150 | 0.063
10 | 40 | 120 | 0.00284 | 7.74 | 126 | 100 | 0.084
11 | 40 | 120 | 0.00284 | 8.60 | 126 | 100 | 0.084
12 | 55 | 120 | 0.00284 | 8.60 | 126 | 100 | 0.084
13 | 55 | 120 | 0.00284 | 8.60 | 126 | 100 | 0.084
SYSTEM LOAD: 1800 [MW]
The optimal dispatch result of real power for the test system is given in Table III. The optimal total generating cost obtained using the HS algorithm is $17963.83, which is more accurate compared to the $17994.07 of the IFEP method [4]. Also, the sum of the generating powers satisfies the load demand exactly.

TABLE III. POWER OUTPUT RESULT (13 GENERATOR SYSTEM)

GENERATOR NO. | REAL POWER OUTPUT [MW]
1 | 628.3182
2 | 149.5996
3 | 222.7497
4 | 109.8665
5 | 109.8665
6 | 60.00000
7 | 109.8665
8 | 109.8665
9 | 109.8665
10 | 40.00000
11 | 40.00000
12 | 55.00000
13 | 55.00000
SUM | 1800.000
TOTAL COST [$/HR] | 17963.83

CONCLUSIONS
An application of the HS algorithm to ELD with the valve-point loading effect has been presented. The numerical results for the test systems show that the HS algorithm can obtain a more accurate solution compared with other methods. From the obtained results, it is inferred that the total operating cost of the ELD has been considerably reduced with the HS algorithm. Moreover, the results for the large test system show that the HS algorithm can be applied to real-scale systems. From the viewpoint of the computation time during the optimization process, the HS algorithm was not compared with the other existing methods, since implementations of those methods were not available to the authors. However, as it converges within fewer iterations, the HS algorithm can be regarded as a fast and accurate method for optimization problems in power systems.
ACKNOWLEDGMENT
The research was supported by Korea Electric Power Corporation Research Institute through the Korea Electrical Engineering & Science Research Institute. [Grant number: R14-XA02-34]
REFERENCES
[1] A. J. Wood and B. F. Wollenberg, Power Generation, Operation and Control, Wiley, New York, 1984.
[2] T. Jayabharathi, K. Jayaprakash, N. Jeyakumar, and T. Raghunathan, "Evolutionary programming techniques for different kinds of economic dispatch problems," Elect. Power Syst. Res., vol. 73, no. 2, pp. 169-176, Feb. 2005.
[3] Geem, Z. W., Tseng, C.-L., and Park, Y., "Harmony search for generalized orienteering problem: best touring in China," Advances in Natural Computation, vol. 3612, Springer Berlin/Heidelberg, 2005.
[4] Zwe-Lee Gaing, "Particle swarm optimization to solving the economic dispatch considering the generator constraints," IEEE Trans. on Power Syst., vol. 18, no. 3, pp. 1187-1195, Aug. 2003.
Ventilation System Energy Consumption Simulator for a
Metropolitan Subway Station
Sungwoo Bae†, Jeongtae Kim
Dept. of Electrical Engineering, Yeungnam University, Gyeongsan, Gyeongbuk, Korea
†Corresponding Author
Abstract—This paper proposes an electrical energy saving simulator for the ventilation system in a metropolitan subway station when the control method for the induction blower motor is changed from simple on/off control to the variable speed control method. The ventilation energy saving calculator in this paper was built in MATLAB. Based on the comparative study, it can be concluded that the variable speed control method yields a 71% energy saving. These simulation results can be used to assess the energy saving effect when the blower motor control scheme is changed from simple on/off control to the variable speed control method.
Keywords – Ventilation, blower motor, variable speed control, energy saving
INTRODUCTION
The metropolitan subway systems in Korea consumed 2.25 TWh per year as of 2014, of which 49% of the electrical energy was used by subway stations. The total electrical energy per year amounts to approximately 1.1 TWh, of which the energy cost is about one hundred and five million dollars. Because of the marginal cost increase in electric energy, the operating cost of the metropolitan subway station has also increased substantially [1]. The major electric load in the subway station is the ventilation equipment, which consumes 22.7% of the total energy in the subway station [2]. Thus, an efficient ventilation system may contribute more to energy saving than any other factor. However, the ventilation system in the subway station is required to operate continuously to satisfy the air quality requirement [3], because the air quality may be reduced if the blower system is operated intermittently to save electrical energy. Therefore, in order to reduce the electrical energy consumption of the ventilation system, it is necessary either to adopt a more energy-efficient blower motor than in the conventional system, or to change the blower motor control from the simple on/off control method to a variable speed control algorithm.
ENERGY SAVING SIMULATOR FOR THE SUBWAY
VENTILATION SYSTEM
Proposed energy consumption calculator
The proposed energy consumption calculator for the subway ventilation system is based on (1), (2), and (3) to compute its electrical energy use. As shown in (1), the blower motor output power (Po) can be obtained from the motor torque (T) and the mechanical angular speed (ωm); the mechanical angular speed (ωm) can be calculated from the mechanical frequency (fm) based on (2) and (3).
$$P_o = T\,\omega_m, \qquad (1)$$
$$\omega_m = 2\pi f_m, \qquad (2)$$
$$f_m = \frac{\mathrm{rpm}}{60}, \qquad (3)$$

where rpm is the revolutions per minute of the blower motor in the ventilation system.
This paper proposes an electrical energy saving simulator for the ventilation system in the metropolitan subway station when the control method for the blower motor is changed from simple on/off control to the variable speed control method. The data measured in the Surak-san subway station in Seoul, Korea, from 19:00 to 24:00 were used in the energy saving calculator, which compares the electrical energy consumption between simple on/off control and variable speed control for the induction blower motor in the ventilation system. The MATLAB-based ventilation energy saving calculator [4] presented in this paper can provide hourly energy consumption data for the ventilation system and forecast the energy saving when the variable speed control method replaces the simple on/off control method.
The blower motor input power (Pi) can be calculated by (4) using the motor output power (Po), the motor efficiency (ηm), the inverter efficiency (ηinv), and the power factor (pf) between the motor and the inverter. The total electrical energy consumption (W) of the blower system can be obtained by (5). The inverter efficiency (ηinv) is assumed to be a constant value in this blower system energy consumption simulator.
$$P_i = \frac{P_o}{\eta_m\,\eta_{inv}\,pf}, \qquad (4)$$
$$W = P_i\,t. \qquad (5)$$
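The calculator in the paper is built in MATLAB; purely as a language-neutral sketch, equations (1)-(5) chain together as below. The efficiency and power-factor values are placeholders, not the paper's actual motor parameters.

# Sketch of equations (1)-(5); parameter values are placeholders only.
import math

def blower_energy_kwh(torque, rpm, hours, eta_m, eta_inv, pf):
    f_m = rpm / 60.0                    # (3) mechanical frequency [Hz]
    w_m = 2.0 * math.pi * f_m           # (2) mechanical angular speed [rad/s]
    p_o = torque * w_m                  # (1) motor output power [W]
    p_i = p_o / (eta_m * eta_inv * pf)  # (4) motor input power [W]
    return p_i * hours / 1000.0         # (5) energy over t hours [kWh]

print(blower_energy_kwh(torque=80.0, rpm=1200, hours=1.0,
                        eta_m=0.90, eta_inv=0.97, pf=0.85))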
Table I shows the energy consumption comparison results for blower motor control methods 1 and 2, in which the existing induction motor was controlled by the simple on/off strategy (Method 1) and by the variable speed scheme (Method 2), respectively. The existing ventilation strategy in the Surak-san subway station was simple on/off control, so it consumed a large amount of unnecessary electrical energy because of its continuous operation regardless of the concentration of fine dust. However, if the blower motor is controlled by the variable speed method, it can be operated efficiently according to the fine dust concentration [1]. The electrical energy consumption data were calculated by (1)~(5), whose parameters were based on the existing blower motor in the Surak-san subway station. Although the presented energy consumption and efficiency data are limited to the ventilation operation data from 19:00 to 24:00, we may forecast the energy saving effect when the simple control strategy is changed to the variable speed scheme. As shown in Table I, while the energy consumption from 19:00 to 24:00 was 43.68 kWh with the on/off control, the electrical energy use with the variable speed control was reduced to 12.56 kWh, resulting in about 31 kWh of energy saving (i.e., 71%). If we estimate the daily energy saving based on the data shown in Table I, the amount of electrical energy saved daily could be more than about 100 kWh.
Fig. 1 Blower motor input power
Fig. 2 Blower induction motor efficiency
Figures 1 and 2 show the simulation results of the ventilation energy consumption calculator based on MATLAB, in which the variable speed control method is applied to the induction motor in the Surak-san subway station. The vertical axis in Fig. 1 represents the measured hourly input power data (Pi) of the blower motor, and the horizontal axis in Fig. 1 indicates the time, in minutes, from 18:30~24:00. Figure 2 shows the motor efficiency (ηm) when the variable speed control is applied to the existing induction motor in the Surak-san subway station; the x and y axes in Fig. 2 indicate the minute-based time from 18:30~24:00 and the measured input blower motor power.

Table I. Energy Consumption Comparison of Methods 1 and 2 (energy consumption simulation results)

Operating Time | Method 1 (On/Off control) | Method 2 (Variable speed control) | Difference between Methods 1 and 2
19:00∼20:00 | 15.68 kWh | 4.12 kWh | 11.56 kWh
20:00∼21:00 | 10.08 kWh | 3.02 kWh | 7.06 kWh
21:00∼22:00 | 10.08 kWh | 2.96 kWh | 7.12 kWh
22:00∼23:00 | 5.32 kWh | 1.93 kWh | 3.39 kWh
23:00∼24:00 | 2.52 kWh | 0.53 kWh | 1.99 kWh
Total | 43.68 kWh | 12.56 kWh | 31.12 kWh

CONCLUSION
This paper presented a ventilation energy consumption simulation study for a blower motor in a metropolitan subway station based on MATLAB. In this simulation study, we compared the energy saving data between simple on/off control and variable speed control for the blower motor. Based on the comparative study, it can be concluded that the variable speed control method yields a 71% energy saving. These simulation results can be used to assess the energy saving effect when the blower motor control scheme is changed from simple on/off control to the variable speed control method. For future work, we will conduct a comparative energy consumption study with different motors as well as their various control schemes in the ventilation system for the metropolitan subway station.

ACKNOWLEDGMENT
This research was supported by a grant (14RTRP-B067916-02) from the Railroad Technology Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government, and was also supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (NRF-2014R1A1A1036384).
References
[1] JungHo Lee, "[New technology trends and prototypes] Maximum power management optimization of urban rail power equipment (Case of the Incheon Subway Line 1)," Urban Railway Magazine 1, pp. 68-71, 2013.
[2] JungYong Jeon, SuHo Choe, TaeHwan Gwon, HyeMi Choe, JuHyeong Kim, JaeJun Kim, "LCCA and LCA to Evaluate Feasibility for Introducing High-Efficiency Motors into Air Ventilation Systems of Public Facilities," to be published in a future issue of the Korea Institute of Construction Engineering and Management (KICEM) Journal, 2015.
[3] Korean Ministry of Environment, "Public Transport Indoor Air Quality Management Guidelines," 2006.
[4] Chee-Mun Ong, "Dynamic Simulation of Electric Machinery Using MATLAB/Simulink," Prentice Hall, 1998.
The effectiveness of international development cooperation
(IDC) educational program for nursing students
1Sun Young Park, 2Heejeong Kim
1 Baekseok University, Division of Health Science, Department of Nursing Science, Korea, [email protected]
*2 Baekseok University, Division of Health Science, Department of Nursing Science, Korea, [email protected]
Abstract - The study used descriptive research methods to identify changes in nursing students' perception of and attitude toward IDC before and after an IDC educational program. 34 nursing students who were taking an 'Understanding of IDC' course at a university participated in the study. The participants' perception levels of ODA and the MDGs, their attitude toward ODA, and their perception of ODA expansion significantly increased after the educational program. Along with this, in order to systematically manage the human resources participating in IDC work, it is suggested that college-level programs to train and administer health care professionals be developed further.
Keywords: educational program, IDC, nursing student
1. Introduction
According to the results of a public opinion survey by KIEP in 2011, the level of public awareness of the fact that our government provides foreign aid was 82.8%, which implies that the general public's basic recognition of ODA has improved compared to the past. The area in which Korea has proven it can provide aid most effectively is health and medical care, at a rate of 62.8%. However, the recognition and education level for IDC still needs to be raised further.
Meanwhile, higher education is the principal agent of advanced learning that trains the future generations and professionals of our country. It plays a pivotal role in training the future professionals of the country who will be able to continue performing high-quality ODA. The study was designed to evaluate an IDC educational program for nursing students.
2. Materials and Methods
The study used descriptive research methods to identify changes in nursing students' perception of and attitude toward IDC before and after the IDC educational program. 34 nursing students who were taking an 'Understanding of IDC' course at a university participated in the study, after being informed of the purpose of the study and agreeing in writing to participate in the research. A questionnaire on IDC recognition by the Korea International Cooperation Agency (KOICA), used with KOICA's permission and revised to fit the goal of the study, was utilized as the research tool.
3. Results
The participants' perception level of ODA before the IDC program was 5.9% 'well informed'. It noticeably increased to 35.3% after the program (Figure 1).

Figure 1. Perception level change for the ODA

While the perception level of the MDGs before the program was 8.8%, showing that the number of well-informed students was small, the level increased to 58.8% after the program (Figure 2).

Figure 2. Perception level change for the MDG
Regarding the attitude toward ODA, those in favor of offering it increased from 38.2% to 47.1% (Figure 3).

Figure 3. Attitude level change for the ODA

While 11.8% of the students before the IDC program agreed with a large expansion of ODA, the proportion increased to 32.4% after the program (Figure 4).

Figure 4. Perception level change for the ODA expansion
4. Discussion
In this study, the participants' perception level of ODA and the MDGs, their attitude toward ODA, and their perception of ODA expansion significantly increased after the educational program.
A persistent publicity strategy to enhance the understanding and recognition of IDC by college students and the general public has to be prepared. To do so, improving recognition through the development and management of IDC-related college coursework is suggested. Along with this, in order to systematically manage the human resources participating in IDC work, it is suggested that college-level programs to train and administer health care professionals be developed further.
References
[1] Baek IH, "The Present Condition and Driving Direction of ODA in Korea", Proceedings of the Korean Association for Policy Studies in Fall, Korea, 2013.
[2] Kang SJ, "ODA Policy of the Park Geun-hye Government: The Outlook and Challenges", Korea National Diplomatic Academy, Korea, 2013.
[3] Kwon Y, Park SK, Lee JY, "An Analysis of the Korean Public's Perception on ODA", Korea Institute for International Economic Policy, Korea, 2011.
[4] Kim EM, Kim JY, Lee JE, "A Study on the Strategy for the Enhancement of Nation's Awareness on ODA", Korea International Cooperation Agency, Korea, 2011.
[5] Ryu JS, "University Innovation and Competitiveness", Samsung Economic Research Institute, Korea, 2006.
[6] Lee TJ, "The Partnership between ODA and University for the Knowledge Based Expansion in Developing Countries", International Development Cooperation, KOICA, vol. 1, pp. 32-49, 2008.
[7] Kim EH, "A Study on Current Status and Improvement for International Development Cooperation Education", The Graduate School, Pusan National University, Master's Dissertation, Korea, 2013.
[8] Choi MK, "A Study on Training and Using Plan of Professional Human Resource in the International Development Cooperation", Korea International Cooperation Agency, Korea, 2008.
[9] Hong SP, Cho MS, Jang JY, "Study on Improving Effectiveness of Korea's Health Field ODA", Korea Institute for Health and Social Affairs, Korea, 2011.
International Conference on Information, System and Convergence Applications
June 24-27, 2015 in Kuala Lumpur, Malaysia
A Study on the Relationship between Nursing Professionalism, Internal
Marketing and Turnover Intention among Hospital Nurses
Eun Ja Yeun¹, Misoon Jeon*²
¹ First Author, Dept. of Nursing, Konkuk University, Chungju, South Korea, [email protected]
*² Corresponding Author, Dept. of Nursing, Baekseok University, Cheonan, South Korea, [email protected]
Abstract - This study is a descriptive correlational study investigating nursing professionalism, internal marketing and turnover intention among hospital nurses. The data were collected by structured questionnaire from 270 nurses in university hospitals located in Seoul and Chungbuk. Data were analyzed using SPSS 18.0. The results showed that there were significant differences in turnover intention by marital status (t=2.21, p=.028), shift (F=6.39, p=.002) and position (F=5.49, p=.005). Also, there was a positive correlation between nursing professionalism and internal marketing (r=.36, p<.001) and a negative correlation between internal marketing and turnover intention (r=-.28, p<.001). Nurses' turnover intention is associated with internal marketing and nursing professionalism; hence, it is important to implement internal marketing tactics centered on preventing emotional fatigue and to employ strategies that encourage nursing professionalism.
Keywords: nurse; nursing professionalism; internal marketing; turnover intention
1. Introduction
Nurse turnover is one of the most critical issues in hospital management; thus, managing nurse turnover is an imperative task for nurse managers. A high nurse turnover rate in clinical settings has multiple repercussions. The average nurse turnover rate in Korea from 2010 to 2013 was 16.6-16.9% [1]. The factors related to the working environment include wage, promotion, welfare, relationships with doctors, and workers' negative attitudes. This study identified the relationships among turnover intention, internal marketing (an internal factor of turnover intention) and nursing professionalism (an external factor of turnover intention). Such an understanding provides the basis upon which effective turnover reduction methods can be developed, while also offering valuable basic data for the development of training programs for new nurses or nursing students to curtail turnover.
2. Materials and Methods
The data were collected by structured questionnaire from 270 nurses in university hospitals located in Seoul and Chungbuk from May to June 2013. The nursing professionalism instrument developed by Yeun, Kwon & Ahn [2] was used in this study. The internal marketing instrument revised and supplemented by Choi and Ha [3] was also used, as was the turnover intention instrument developed by Yeun and Kim [4]. The collected data were analyzed with the SPSS 18.0 software program.
3. Results
3-1 Difference in nursing professionalism, internal marketing and turnover intention according to the general characteristics
Table 1 shows the analyzed differences in nursing professionalism, internal marketing and turnover intention according to the general characteristics of the subjects.
3-2 Correlation between nursing professionalism, internal marketing and turnover intention
The correlations between nursing professionalism, internal marketing and turnover intention among hospital nurses are presented in Table 2. The results showed a positive correlation between nursing professionalism and internal marketing, but a negative correlation between internal marketing and turnover intention.

4. Discussion
Buttressing internal marketing activities will augment nursing professionalism while reducing turnover intention. As shown in the study results, nurses' turnover intention is associated with internal marketing and nursing professionalism; hence, it is important to implement internal marketing tactics centered on preventing emotional fatigue and to employ strategies that encourage nursing professionalism.
Table 1. Differences in nursing professionalism, internal marketing and turnover intention according to general characteristics (N=270)

Characteristic / Category      n (%)        Nursing professionalism M(SD)   Internal marketing M(SD)   Turnover intention M(SD)
Total                          270 (100)    3.30(.40)                       2.58(.48)                  3.91(.53)
Age (yr.)
  ≤25                          76 (28.1)    3.37(.35)                       2.68(.46)                  3.89(.48)
  26-29                        88 (32.6)    3.28(.43)                       2.54(.45)                  3.99(.42)
  30-39                        87 (32.2)    3.21(.40)                       2.49(.52)                  3.89(.56)
  ≥40                          19 (7.0)     3.50(.39)                       2.80(.42)                  3.66(.84)
  F (p)                                     4.10 (.007)                     3.58 (.014)                2.18 (.091)
Education
  College                      159 (58.9)   3.29(.39)                       2.59(.45)                  3.91(.53)
  Bachelors                    99 (36.7)    3.28(.40)                       2.56(.52)                  3.86(.50)
  ≥Masters                     12 (4.4)     3.55(.56)                       2.65(.53)                  4.17(.57)
  F (p)                                     2.54 (.081)                     .25 (.783)                 1.81 (.165)
Marital status
  Single                       200 (74.1)   3.30(.41)                       2.58(.47)                  3.95(.48)
  Married                      70 (25.9)    3.31(.39)                       2.57(.53)                  3.79(.63)
  t (p)                                     -.16 (.871)                     .14 (.890)                 2.21 (.028)
Career (yr)
  ≤2                           86 (31.9)    3.39(.36)                       2.72(.47)                  3.89(.57)
  3-5                          70 (25.9)    3.26(.37)                       2.47(.42)                  4.01(.42)
  6-10                         59 (21.9)    3.20(.47)                       2.41(.45)                  3.94(.48)
  ≥11                          55 (20.4)    3.32(.42)                       2.68(.52)                  3.76(.60)
  F (p)                                     2.97 (.032)                     7.29 (<.001)               2.35 (.073)
Shift
  Full-time                    42 (15.6)    3.29(.40)                       2.61(.51)                  3.89(.59)
  12-hr shift                  16 (5.9)     3.41(.48)                       2.88(.37)                  3.47(.75)
  8-hr shift                   212 (78.5)   3.29(.40)                       2.55(.48)                  3.94(.48)
  F (p)                                     .58 (.560)                      3.43 (.034)                6.39 (.002)
Work unit
  Ward                         193 (71.5)   3.31(.41)                       2.56(.47)                  3.91(.53)
  Special unit                 55 (20.4)    3.26(.40)                       2.71(.46)                  3.82(.51)
  OPD                          22 (8.1)     3.29(.35)                       2.48(.59)                  4.12(.43)
  F (p)                                     .32 (.728)                      2.78 (.064)                2.67 (.071)
Position
  Nurse                        244 (90.4)   3.29(.39)                       2.56(.48)                  3.94(.51)
  CN                           17 (6.3)     3.25(.52)                       2.78(.44)                  3.55(.47)
  ≥HN                          9 (3.3)      3.54(.32)                       2.71(.41)                  3.67(.73)
  F (p)                                     1.80 (.168)                     1.96 (.143)                5.49 (.005)
Experience of turnover
  Yes                          60 (22.2)    3.26(.44)                       2.50(.48)                  4.00(.52)
  No                           210 (77.8)   3.31(.39)                       2.61(.48)                  3.88(.53)
  t (p)                                     -.88 (.380)                     -1.50 (.135)               1.52 (.129)
Table 2. Correlation matrix among variables, r (p)

Variables                  Nursing professionalism   Internal marketing   Turnover intention
Nursing professionalism    1
Internal marketing         .36 (.000)                1
Turnover intention         .04 (.237)                -.28 (.000)          1

References
[1] Hospital Nurses Association, Research in the status of nursing personnel, retrieved from http://www.khna.or.kr/web/information/resource.php, 2013.
[2] Yeun, E. J., Kwon, Y. M., Ahn, O. H., Development of a nursing professional value scale, Journal of Korean Academy of Nursing, vol. 35, no. 6, pp. 1091-1100, 2005.
[3] Choi, J., Ha, N. S., The effects of clinical nurses' internal marketing on job satisfaction, turnover intention, and customer orientation, Journal of Korean Academy of Nursing Administration, vol. 13, no. 2, pp. 231-241, 2007.
[4] Yeun, E. J., Kim, H. J., Development and testing of a nurse turnover intention scale (NTIS), Journal of Korean Academy of Nursing, vol. 43, no. 2, pp. 256-266, 2013.
The Level of Depression and Anxiety in Undergraduate Students
Eun Ja Yeun1, Misoon Jeon*2
1, First Author, Dept. of Nursing, Konkuk University, Chungju, South Korea, [email protected]
*2, Corresponding Author, Dept. of Nursing, Baekseok University, Cheonan, South Korea, [email protected]
Abstract - This study was conducted to analyze the level of depression and anxiety in undergraduate students; the study was explained to each of the subjects, who agreed to participate. The data were collected from 431 undergraduate students at C University, located in Gyeonggi Province, using a structured questionnaire and were analyzed with SPSS 18.0. The results show significant differences in anxiety according to gender (t=-2.676, p=.008) and living status (F=2.573, p=.037), and a highly positive correlation between depression and anxiety (r=.517, p<.001); that is, higher depression in undergraduate students was associated with higher anxiety. Therefore, it is necessary to identify and resolve the factors that cause depression and anxiety in college students in order to improve their mental health.
Keywords: depression; anxiety; mental health
1. Introduction
Korea's college entrance rate reached 81.9% in 2009, signifying that most high school graduates experience college life [1]. College life is distinguished from high school life in numerous aspects: college students live through an array of experiences and activities to establish a new culture and lifestyle. Depression is reported to be the most prevalent mental disorder among college students [2]. It has been estimated that approximately 29.3% of all college students experience mild depression, 10.9% moderate depression, and 4.0% severe depression [3]. Depression deteriorates students' interpersonal skills and everyday lives by inducing negative mindsets, undermining their physical energy, crippling their desire and impairing their concentration [4].
College students must learn methods to control the factors that undermine their mental health in school environments. They need an alternative source to assuage their psychological emptiness, and they need to experience a more diverse and rich college life. Therefore, the present study sought to provide basic data for the development of mental health-enhancing programs for college students by identifying their levels of mental health, specifically depression and anxiety.

2. Materials and Methods

2-1 Sample and Data Collection
The data were collected from 431 undergraduate students at C University, located in Gyeonggi Province, using a structured questionnaire from January to February 2014.

2-2 Instruments
As the depression instrument, the scale developed by Beck, Ward, Mendelson, Mock and Erbaugh (1961) and translated by Lee and Song [5] was used in this study; its Cronbach's α was 0.773. As the anxiety instrument, the scale developed by Beck, Ward, Mendelson, Mock and Erbaugh (1961) was used; its Cronbach's α was 0.847.
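Reliability coefficients such as the Cronbach's α values reported above can be computed directly from item-level responses. A minimal sketch follows; the item matrix below is random placeholder data, not the study's responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha; rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
demo = rng.integers(0, 4, size=(431, 21))  # 431 respondents, 21 hypothetical items
print(round(cronbach_alpha(demo), 3))
```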
2-3 Data Analysis
The collected data were analyzed with the SPSS 18.0 software program.

3. Results

3-1 Differences in depression and anxiety according to the general characteristics
Table 1 shows the analyzed differences in depression and anxiety according to the general characteristics of the subjects. There were significant differences in anxiety according to gender (t=-2.676, p=.008) and living status (F=2.573, p=.037). With regard to gender, females showed higher anxiety than males. With regard to living status, subjects in a "home stay" or "living alone" showed higher anxiety than those "living with family", in a "dormitory" or "others".

3-2 Correlation between depression and anxiety
The correlation between depression and anxiety in undergraduate students is analyzed in Table 2. The results showed a highly positive correlation between depression and anxiety (r=.517, p<.001), meaning that higher depression in undergraduate students was associated with higher anxiety.
Table 1. Differences in the Depression and Anxiety according to General Characteristics (N=431)

Characteristic    Category             Depression M±SD   t or F (p)      Anxiety M±SD   t or F (p)
Gender            Male                 1.35±.22          -1.658 (.098)   1.23±.23       -2.676 (.008)
                  Female               1.38±.22                          1.30±.28
Religion          Christians           1.37±.22          2.293 (.102)    1.25±.25       2.143 (.075)
                  Catholics            1.32±.19                          1.25±.25
                  Buddhists            1.39±.24                          1.26±.23
                  Others               1.19±.20                          1.38±.29
                  No religion          1.36±.23                          1.19±.27
Economic status   High                 1.33±.18          1.145 (.335)    1.28±.28       .195 (.823)
                  Moderate             1.35±.22                          1.26±.24
                  Low                  1.24±.27                          1.26±.26
Living status     Living with family   1.36±.21          .927 (.397)     1.26±.26       2.573 (.037)
                  Dormitory            1.33±.25                          1.21±.20
                  Home stay            1.40±.21                          1.36±.31
                  Living alone         1.37±.22                          1.34±.27
                  Others               1.50±.24                          1.20±.13
Living area       Large city           1.37±.23          1.051 (.380)    1.26±.26       .751 (.473)
                  Middle city          1.33±.18                          1.25±.22
                  Country              1.36±.20                          1.33±.25

Table 2. Correlation of the Depression and Anxiety

Variables    Depression   Anxiety
Depression   1
Anxiety      .517***      1
*** p<.001

4. Discussion
Mental well-being begins with a positive perception of self and can be described in terms of self-concept and self-esteem. Moreover, individuals with low self-esteem display higher levels of depression and anxiety than those with higher self-esteem. The results of this study demonstrated a positive correlation between depression and anxiety; that is, the level of anxiety increased with an increase in the level of depression. It is necessary to identify and resolve the factors that undermine self-esteem in college students in order to improve their mental health.

References
[1] Kim, J. H., The Influence of University Students' Social Support and Mental Health on Their School Life Adaptation, Unpublished master's thesis, Paichai University, Daejeon, Korea, 2012.
[2] Noh, M. S., Jeon, H. J., Lee, H. W., Lee, H. J., Han, S. G. and Ham, B. J., Depressive Disorders among the College Students: Prevalence, Risk Factors, Suicidal Behaviors and Dysfunctions, Journal of the Korean Neuropsychiatric Association, vol. 5, no. 194, pp. 432-437, 2006.
[3] Hong, J. Y., How the University Students' Stress Affects Their Anxiety and Depression, Unpublished master's thesis, Youngnam University, Daegu, Korea, 2005.
[4] Yang, M. J., The Effects of Dance Career and Percent Body Fat on Eating Disorder and Depression in Female College Dancers, Unpublished doctoral thesis, Sookmyung Women's University, Seoul, Korea, 2012.
[5] Lee, Y. H. and Song, J. Y., A Study of the Reliability and the Validity of the BDI, SDS, and MMPI-D Scales, The Korean Journal of Clinical Psychology, vol. 10, no. 1, pp. 98-113, 1991.
Analysis of dental hygienists’ financial preparation for old age
*1 Hee-Sun Woo, 2 Seok-Hun Kim
*1 Department of Dental Hygiene, Suwon Women's University, Suwon, Korea, [email protected]
2 Department of Mobile Media, Suwon Women's University, Suwon, Korea, [email protected]
Abstract - The purpose of this study was to help dental hygienists enjoy a comfortable life in their later years by examining the present state of their financial preparation for old age and anticipating the future, while serving as basic material for the development of old-age life policies. Respondents were selected by simple random sampling, and survey answers were received by email or collected in person by the researchers. A total of 207 responses were collected, and 200 complete responses were finally analyzed. For the data analysis, the statistical software SPSS 16.0 (SPSS, IL, USA) was employed, with the level of significance set at p=0.05. To examine the status of financial preparation for old age according to general characteristics, the Chi-square test was used based on cross analysis. Sociological characteristics (age, number of family members, religion, work career, place of work and average monthly household income) were found to cause statistically significant differences in average monthly saving amount, preparation start time, retirement time, area of activity, educational program and retirement saving amount (p<0.001).
Keywords: Dental hygienists, Financial, old age, Preparation
1. Introduction
South Korean society faces the problem of a rapidly aging population, at a speed unparalleled in the world. The proportion of the population aged 65 or older reached 7% in 2000, making Korea an aging society; it recorded 11.0% in 2010 and is expected to reach 14.3% in 2018, making Korea an aged society. Compared with other advanced countries, which experienced population aging over a longer term, South Korea has become an ageing society in a shorter period of time, without full preparation. In this situation, preparation for old age is now both an individual and a social issue requiring active cooperation between individuals and the national government as soon as possible.
States that experienced population aging earlier understood that the problems of the aged cannot be resolved solely by any single individual or household, so they provided pensions and other social security schemes for people's old age. In South Korea, however, many people face the issue of financial preparation for their advanced ages by themselves. Although it is well recognized that thorough preparation and plans are required for later years, specific schemes are far from sufficient. In this light, the purpose of this study is to help dental hygienists enjoy a comfortable life in their later years by examining the present state of their financial preparation for old age and anticipating the future, while serving as basic material for the development of old-age life policies.
2. Materials and Methods
In this research, a survey was conducted from October 2014 to February 2015 (for 5 months) to investigate the financial preparation status for old age of dental hygienists working in the clinical field. Respondents were selected by simple random sampling. Survey answers were received by email or collected in person by the researchers. A total of 207 responses were collected, and 200 complete responses were finally analyzed.
2.1 Analysis
For the data analysis, the statistical software SPSS 16.0 (SPSS, IL, USA) was employed. The level of significance was set at p=0.05. To examine the status of financial preparation for old age according to general characteristics, the Chi-square test was used based on cross analysis.
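As an illustration, a chi-square test on a cross table of the kind used here can be run as follows; the contingency counts below are hypothetical placeholders, not the study's raw data.

```python
from scipy.stats import chi2_contingency

# Hypothetical cross table: rows = age group, columns = monthly saving amount.
table = [[40, 25],   # e.g. under 30: < KRW 0.4M vs >= KRW 0.4M
         [72, 63]]   # e.g. 30 or over
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")  # significant if p < 0.05
```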
3. Results
This research investigation of the participants' preparation status for their old age found the following (see Table 1):
1) Of the old-age preparation tools, 57.5% said they had a pension; 34.0%, installment savings; 5.1%, nothing for later days; and 3.5%, real estate investment.
2) The average monthly saving amount was less than KRW 0.4 million for 56.0% and KRW 0.4 million or higher for 44.0%.
3) The time of starting preparation was earlier than 35 years old for 60.0% and 35 years old or over for 40.0%.
4) The retirement time was said to be when it becomes financially affordable by 37.5%; as long as I stay healthy, by 23.0%; others, by 21.5%; and until the retirement age of 60, by 18.0%.
5) The largest number of respondents, 72.0%, said their area of activity after retirement would be pastimes, while 16.0% said economic activity and 12.0% volunteering.
6) 96.0% said an educational program was necessary, while 4.0% said it was unnecessary.
7) Retirement savings were said to be less than KRW 1 billion by 59.0% and KRW 1 billion or over by 41.0%.
4. Conclusion
Sociological characteristics (age, number of
family members, religion, work career, place of work and
average monthly household income) were found to cause a
statistically significant difference in average monthly saving
amount, preparation start time, retirement time, area of
activity, educational program and retirement saving amount
(P<0.001).
This study suggests that financial preparation for old age is very important for dental hygienists and that such preparation must be connected with social welfare policy.
5. References
[1] Blakeley, J., Ribeiro, V., "Are nurses prepared for retirement?", Journal of Nursing Management, vol. 16, pp. 744-752, 2008.
[2] Kim, M. Y., Kim, S. J., "Preparation for old age life of dental hygienists", J Con Soc, vol. 14, no. 8, pp. 250-256, 2014.
[3] Bae, M. J., "Perception of preparation for their old age and successful aging by degree of facts on aging among adults", J Wel Aged, vol. 8, pp. 111-131, 2012.
[4] Laditka, S. B., Corwin, S. J., Laditka, J. N., Liu, L., Tseng, W., Tsemg, B., et al., "Attitudes about aging well among a diverse group of older Americans: implications for promoting cognitive health", The Gerontologist, vol. 49, no. S1, pp. S30-S39, 2009. doi: 10.1093/geront/gnp084
Table 1. Status of financial preparation of participants

Item                             Category                                  N     %
Preparation for old age          Pension                                   215   57.5
(multiple response)              Installment saving                        127   34.0
                                 Real estate investment                    13    3.5
                                 Non-preparation                           19    5.1
Average monthly saving amount    < KRW 0.4 mil.                            112   56.0
                                 KRW 0.4 mil. ≤                            88    44.0
Time of preparation              < 35 years old                            120   60.0
                                 35 years old ≤                            80    40.0
Time of retirement               As long as I am healthy                   46    23.0
                                 Until the retirement age (60)             36    18.0
                                 Until it becomes financially affordable   75    37.5
                                 Others                                    43    21.5
Area of activity                 Pastime                                   144   72.0
                                 Volunteering                              24    12.0
                                 Economic activity                         32    16.0
Educational program              Necessary                                 192   96.0
                                 Not necessary                             8     4.0
Retirement savings               < KRW 1 bln.                              118   59.0
                                 KRW 1 bln. ≤                              82    41.0
The motion graphic effect of the mobile AR user interface
1 YunSung Cho, *2 SeokHun Kim
1, First Author, Department of Visual Design, Suwon Women's University, Suwon, Korea, [email protected]
*2, Corresponding Author, Department of Mobile Media, Suwon Women's University, Suwon, Korea, [email protected]
Abstract - The rapid development of mobile devices has brought users a new lifestyle based on the fusion of various IT technologies. In particular, AR (augmented reality) naturally connects virtual objects to the real world to provide users with an effective mobile user environment. Therefore, the GUI (graphical user interface) of mobile device-based augmented reality needs to be designed appropriately for this changed mobile environment. However, although augmented reality is based on real-world video processed in real time, studies on the motion graphics of the virtual objects that organize the user interface are insufficient. This study therefore aims to empirically analyze the effect of the motion graphics expressed in the user interface of mobile augmented reality on users' visual experience, self-efficacy, and cognitive attitude when using augmented reality contents, in order to design an efficient motion graphic interface.
Keywords: Motion graphics, Mobile AR, User interface
1. Introduction
Users now pursue individualization and lifestyle change, and they have taken a strong interest in more immediate, direct, real, and natural mobile augmented reality through new mobile-based head mounted displays (HMDs) such as Samsung Electronics' Gear VR. Augmented reality is a technology that provides users with an improved sense of reality by mixing the real world and the virtual world seamlessly in real time; mobile augmented reality can draw a natural flow from users by letting them recognize virtual visual information, provided through the mobile device, as vividly expressed in the actually existing real world, and by letting them interact with it in real time. The development of mobile devices has enabled users to easily encounter the motion graphics included in various videos regardless of place and time, but it is not easy for users to recognize that these motion graphics are combined with the interface and make more efficient interaction possible. When an application is simply loaded or removed, motion graphics take place on the interface without users noticing that they are being used actively. Nevertheless, motion graphics clearly attract users' attention, suggest the direction of use on the interface, and continuously provide information about the current state through 'movement'. The interface of augmented reality contents, in particular, should provide virtual visual information that enables users to interact with the contents without disturbing their flow within the endlessly moving video of the real world. Therefore, natural motion graphics of interface elements are a very important interface design element. The motion effect of a digital contents interface appears when there are continued changes of object form, changes of space and time, and attentiveness; in particular, the systematization of motion traits that bring about movement, such as the direction, scale, and speed of the object, causes the motion effect on the interface. In order to use augmented reality contents efficiently, the visual experience created by the motion graphics of the user interface should guide users' eyes and behaviors naturally and thereby elicit high self-efficacy and a positive cognitive attitude.
2. System model and Methods
2.1 System model
The hypotheses of this study are shown in Table 1.

Table 1. Hypotheses

Hypothesis 1: The motion effect in the GUI will have an effect on the visual experience of augmented reality.
Hypothesis 2: The visual experience in the user interface will have an effect on self-efficacy in augmented reality.
Hypothesis 3: Self-efficacy in the motion interface will have an effect on the cognitive attitude toward augmented reality.

Based on these hypotheses, the research model shown in Figure 1 is proposed.
Figure 1. System model
2.2 Test method
In this experiment, each participant was instructed to experience the motion graphics expressed on the user interface of the mobile augmented reality, and each variable was then measured in order. To measure the variables, the items suggested in 'A Study of Media Adaptation and the User Experience of Augmented Reality' by Mi-Young Shim and Jin-Ho Lee (2012) were adapted to this study and measured through 3 constructs with 13 items in total (Table 2).
Table 2. Measurement scale

Classification       Description
Visual Experience    The movement of components on the screen delivered the meaning of the contents effectively.
                     I could feel a sense of distance in the spatial movement of the object.
                     The movement of the graphic elements harmonized properly with the real world.
                     Although many movements happened, the meaning was clear and easy to recognize.
                     Movements were expressed properly for the user's current position and direction.
                     The movement gave the interface visual interest.
Self-Efficacy        I am confident in exploring the information I want through the augmented reality.
                     It seemed as if I were adjusting the augmented reality myself, as if it really existed.
                     I can handle the augmented reality proficiently at will.
                     I entirely concentrated on the augmented reality.
Cognitive Attitude   The augmented reality is useful to me.
                     I am satisfied with using the augmented reality.
                     Using the augmented reality is worthwhile.
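Path hypotheses of this form are often checked with simple regressions between the averaged scales. The sketch below is only an illustration with synthetic placeholder data; it is not necessarily the authors' exact analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
motion = rng.normal(3.5, 0.8, 120)               # placeholder motion-effect scale means
visual = 0.6 * motion + rng.normal(0, 0.5, 120)  # placeholder visual-experience scale means

# H1: regress visual experience on the motion effect.
slope, intercept, r, p, stderr = stats.linregress(motion, visual)
print(f"H1: slope={slope:.2f}, r={r:.2f}, p={p:.4g}")  # H1 supported if slope > 0 and p < .05
```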
3. Discussion
This study aimed to obtain quantitative results on the effect that the motion graphics of user interface elements have on the visual experience, self-efficacy, and cognitive attitude of mobile augmented reality users through empirical data analysis. To summarize the results: first, the motion graphics of the user interface elements of the mobile augmented reality had a positive effect on users' visual experience, showing that the motion graphics of the user interface provide mobile augmented reality users with visual interest and understanding. Second, the visual experience produced by the motion graphics had a positive effect on users' self-efficacy. Third, the user interface self-efficacy produced by the motion graphics had an effect on users' favorable attitude toward the augmented reality contents. The implications of these results are as follows. When approaching mobile augmented reality contents from the video perspective, users' behavior and attitude shaped by the user interface are the core of the contents' success strategy. Therefore, users' positive attitude needs to be induced when developing mobile augmented reality contents, and to this end a positive experience of the motion graphics of the user interface must be ensured.
References
[1] M. S. Kim, Y. S. Cho, and P. Y. Yi, "Formative Research of Digital Contents for Holograms of Depth Map Generation", Journal of Digital Design, vol. 13, no. 2, pp. 57-66, 2013.
[2] H. C. Yang, "3D effects on viewers' perceived eye movement, perceived functionality, visual fatigue, and presence", M.A. dissertation, Kwangwoon University, 2011.
[3] M. Y. Sim, J. H. Lee, "A Study of Media Adaptation and the User Experience of Augmented Reality", Journal of Korean Society of Design Science, vol. 25, no. 2, p. 273, 2012.
New Authentication Methods Based on User Behavior Big Data Analysis on Cloud
1 Sunghyuck Hong
1, First Author, Div. of Information and Communication, Baekseok University, Korea, [email protected]
Abstract - User authentication is the first step of network security. There are many authentication types, and often more than one authentication method works together for a user's authentication. Except for biometric authentication, most authentication methods can be copied, or someone else can adopt and abuse them. Therefore, this research proposes authentication based on the user's behavior for secure communication, which will help establish a secure channel.
Keywords: behavior, authentication, access control, cloud
1. Introduction
Authentication is the first step of security. After authentication, the access control and authorization steps can be established securely.

2. Related Works
Authentication can be considered to be of three types
[1][13]:
The first type of authentication is accepting proof of
identity given by a credible person who has first-hand
evidence that the identity is genuine. As authentication is
required of physical objects, this proof could be a friend,
family member or colleague attesting to the item's
provenance, perhaps by having witnessed the item in its
creator's possession. With autographed sports memorabilia,
this could involve someone attesting that they witnessed the
object being signed. A vendor selling branded items implies
authenticity, while he or she may not have evidence that
every step in the supply chain was authenticated. This hearsay authentication has no use case example in the context of
computer security [2][3].
The second type of authentication is comparing the
attributes of the object itself to what is known about objects
of that origin. For example, an art expert might look for
similarities in the style of painting, check the location and
form of a signature, or compare the object to an old
photograph [4][5]. An archaeologist might use carbon dating
to verify the age of an artifact, do a chemical analysis of the
materials used, or compare the style of construction or
decoration to other artifacts of similar origin. The physics of
sound and light, and comparison with a known physical
environment, can be used to examine the authenticity of
audio recordings, photographs, or videos. Documents can be
verified as being created on ink or paper readily available at
the time of the item's implied creation [1].
The behavior of a person can be verified by rules that analyze the variables that can influence human behavior [13]. The scientific analysis of human behavior starts with the knowledge and isolation of the parts of an event, in order to determine the characteristics and dimensions of the occasion where the behavior occurs and to define the changes produced in response to the environment, space, time and opportunities. Thus, it can be said that the environment, and both the virtual and the physical space, establish the conditions for a certain behavior. Human behavior is based on contextual information: previous behavioral history, previous history of behavior reinforcement, and the person's immediate interaction with the environment [9]. Operant conditioning is a mechanism that rewards a response of an individual until he is conditioned to associate it with the need for action. In operant behavior, the environment is modified and produces consequences that act on it again, changing the likelihood of a similar future occurrence [11][12]. In this way environmental variables model the user's behavior, in a conditioning process [12]. Analogously, during a software application session, the user's behavior is conditioned when interacting with an electronic device and the software application. According to the Law of Effect, a person will associate the situations he has experienced with similar ones, will generalize this learning process, and will expand it to a larger context in life [6]. A person tends to repeat behavior in situations that are repeated [4]. This may be considered in the context of an authentication system for people and its security aspects, among other applications [7]. The capture of the user's behavioral information in the environment is done from the time when the user is identified and accesses a software application to the time when he closes it.
3. Proposed Method
Users' behaviors tend to be predictable: for example, access time and access location often form unique per-user patterns. Therefore, the user's behavior can serve as an efficient authentication factor if the system collects a sufficient history of the user's pattern logs.
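A minimal sketch of this idea follows, assuming the system keeps per-user logs of access hour and location and scores a new login against the learned pattern. All class names, fields and thresholds here are illustrative assumptions, not a specification from this paper.

```python
from collections import Counter

class BehaviorAuthenticator:
    """Scores a login attempt against a user's historical access pattern."""

    def __init__(self):
        self.history = {}  # username -> Counter of (hour, location) pairs

    def record(self, user: str, hour: int, location: str) -> None:
        self.history.setdefault(user, Counter())[(hour, location)] += 1

    def score(self, user: str, hour: int, location: str) -> float:
        """Fraction of past logins matching this (hour, location); 0.0 if unseen."""
        seen = self.history.get(user)
        if not seen:
            return 0.0
        return seen[(hour, location)] / sum(seen.values())

auth = BehaviorAuthenticator()
for _ in range(20):
    auth.record("alice", 9, "Seoul")        # alice usually logs in at 09:00 from Seoul
print(auth.score("alice", 9, "Seoul"))      # close to 1.0: familiar pattern
print(auth.score("alice", 3, "Lagos"))      # 0.0: anomaly, require an extra factor
```

In practice such a behavior score would be one signal among several, combined with a conventional factor such as a password.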
4. Conclusion
Compared to PCs, mobile devices adopting a general-purpose OS are relatively weak in security, and with the advent of app stores and open platforms, the competitive smartphone market demands stronger security technologies that reflect the characteristics of mobile networks. In particular, mobile user authentication that leverages users' mobility characteristics has not been sufficiently studied. The purpose of this study is therefore to contribute to secure communication on mobile networks by enabling authentication of mobile users based on their behavior.
References
[1] Authentication, en.wikipedia.org/wiki/Authentication, 2015.
[2] J. Yu, G. Wang, Y. Mu, W. Gao, "An Efficient Generic Framework for Three-Factor Authentication With Provably Secure Instantiation," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2302-2313, Dec. 2014.
[3] B. Tams, C. Rathgeb, "Towards efficient privacy-preserving two-stage identification for fingerprint-based biometric cryptosystems," IEEE International Joint Conference on Biometrics (IJCB), pp. 1-8, Sept.-Oct. 2014.
[4] A. Bhargav-Spantzel, A. C. Squicciarini, S. K. Modi, M. Young, E. Bertino, S. J. Elliott, "Privacy Preserving Multi-Factor Authentication with Biometrics," Journal of Computer Security, vol. 15, no. 5, pp. 529-560, 2007.
[5] S. Goldwasser, S. Micali, C. Rackoff, "The Knowledge Complexity of Interactive Proof Systems," SIAM Journal on Computing, vol. 18, no. 1, pp. 186-208, 1989.
[6] T. Chattopadhyay, P. Biswas, B. Saha, A. Pal, "Gesture Based English Character Recognition for Human Machine Interaction in Interactive Set Top Box Using Multi-factor Analysis," Sixth Indian Conference on Computer Vision, Graphics & Image Processing (ICVGIP '08), pp. 134-141, Dec. 2008.
[7] C. T. Li, M. S. Hwang, "An Efficient Biometrics-Based Remote User Authentication Scheme Using Smart Cards," Journal of Network and Computer Applications, vol. 33, no. 1, pp. 1-5, 2010.
[8] R. Ramasamy, A. P. Muniyandi, "New Remote Mutual Authentication Scheme using Smart Cards," Transactions on Data Privacy, vol. 2, no. 2, pp. 141-152, Aug. 2009.
[9] C. H. Liao, H. C. Chen, C. T. Wang, "An Exquisite Mutual Authentication Scheme with Key Agreement Using Smart Card," Informatica, vol. 33, no. 2, pp. 125-132, 2009.
[10] L. Yang, J. F. Ma, Q. Jiang, "Mutual Authentication Scheme with Smart Cards and Password under Trusted Computing," International Journal of Network Security, vol. 14, no. 3, pp. 156-163, May 2010.
[11] C. S. Tsai, C. C. Lee, M. S. Hwang, "Password Authentication Schemes: Current Status and Key Issues," International Journal of Network Security, vol. 3, pp. 101-115, 2006.
[12] L. L. Chun, T. L. Hwang, "A Password Authentication Scheme with Secure Password Updating," Computers & Security, vol. 22, pp. 68-72, 2003.
[13] I. Brosso, A. La Neve, G. Bressan, W. V. Ruggiero, "A Continuous Authentication System Based on User Behavior Analysis," ARES '10 International Conference on Availability, Reliability, and Security, pp. 380-385, Feb. 2010.
The Effect of Musical activities program on Parenting stress and
Depression - Focused on Housewives with Preschool Children
*1 Shinhong Min
*1, Corresponding Author, Division of Health Science, Baekseok University, Korea, [email protected]
Abstract - This study aims to identify the effect of a musical activities program on the parenting stress and depression of housewives rearing preschoolers. A total of 50 housewives (25 in the experimental group and 25 in the control group) enrolled in the Women's Health Center in D city participated in this study, and the data were collected from Oct. 2013 to Nov. 2013. The experimental group participated in musical activities, such as listening to music and singing, for 50 minutes once a week for 8 weeks, while the control group was not involved in any of these activities. The musical activities for the experimental group proceeded in the order: listening to music to relax → listening to music → singing. The results showed that the experimental group participating in the program had statistically significantly lower parenting stress and depression test scores than the control group. This result suggests that musical activities provide an emotionally supportive program useful for easing parenting stress and depression.
Keywords: Depression, Musical activities program, Parenting stress.
1. Introduction
Parenting responsibility and the demands of the parent role have increased in modern society, where the economy has changed rapidly and the number of nuclear families keeps increasing. Since husband and wife have to nurture their children on their own, with no help from other family members, the parenting burden increases [1]. Studies on the effects of parenting stress on the parenting environment have shown that the stress adversely affects the mental wellness of mothers and results in increased depression and anxiety, which in turn affects the parenting behavior of mothers, depending on their mental status [2]. When one family member suffers depression, it negatively affects the other members, leading to a family crisis. In particular, the depression of a mother rearing children is related to the mental and social development of the children, which is worthy of attention [3].
It is suggested that the nature of music as a psychological intervention may be used to ease depression. Music in every society and at all times has provided enjoyment and moved listeners to form a consensus. In addition, the pleasure it gives influences the human mind and emotions and provides an aesthetic experience [4]. Music lets people naturally express thoughts and emotions that have remained unexpressed, providing them with emotional stability. Musical activities let people express their desires and demands in a psychologically more stable environment, which may convert negative emotions into positive ones [5]. For this reason, music could be a safe and proper tool to treat housewives suffering depression with parenting stress.
In this study, housewives parenting preschoolers were recruited, and the levels of their parenting stress and depression were measured. The purpose of this study is to determine whether a musical activities program based on singing and listening to favorite and soothing music affects the parenting stress and depression of these housewives.

2. Research Method

2.1 Subject
A total of 50 housewives parenting preschoolers, enrolled in a program of the Women's Health Center in D city, agreed to participate in this study and gave their consent. Twenty-five of them were assigned to the experimental group and the rest to the control group. Parenting stress and depression levels were measured for both groups, and the musical activities program was applied only to the experimental group.
2.2 Research procedure and design
This study was carried out in the Women's Health Center of D city from Oct. 4, 2013 to Nov. 29, 2013, for 50 minutes per session, once a week for 8 weeks. A nonequivalent control group pretest-posttest design was employed to explore the effect of the musical activities program on the reduction of parenting stress and depression among housewives nurturing preschoolers.
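The pretest-posttest comparison within each group corresponds to a paired t-test. A minimal illustrative sketch follows; the scores are synthetic placeholders, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pre = rng.normal(71.4, 5.1, size=25)        # placeholder pretest parenting-stress scores
post = pre - rng.normal(2.2, 1.5, size=25)  # placeholder posttest scores after the program

t, p = stats.ttest_rel(pre, post)           # paired t-test within one group
print(f"t={t:.3f}, p={p:.4f}")
```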
3. Results

3.1 General characteristics
The general characteristics of the subjects are indicated in Table 1.
Table 1. General Characteristics of the Subjects, n(%)

Characteristics      Category   Experimental group   Control group
Age (years)          20-25      3(12)                3(12)
                     26-30      8(32)                9(36)
                     31-35      13(52)               11(44)
                     ≥36        1(4)                 2(8)
                     M(SD)      31.0(3.35)           30.56(3.36)
Number of children   1          10(12)               8(32)
                     2          8(32)                14(56)
                     3          2(8)                 3(12)
3.2 Effect of the musical activities program on parenting stress and depression
The effect of the musical activities program on the parenting stress and depression of housewives nurturing preschoolers was examined before and after the program in both the experimental and control groups. The analysis and comparison of the subjects' mean stress scores and standard deviations before and after the program showed a statistically significant difference in the experimental group (Table 2).
Table 2. The Effect of the Musical Activities Program on Parenting Stress and Depression

Category           Group                Pretest M(SD)   Posttest M(SD)   t        p
Parenting stress   Experimental group   71.44(5.14)     69.28(4.61)      5.308    .000
                   Control group        69.32(5.96)     69.60(5.94)      -1.319   .200
Depression         Experimental group   33.32(4.11)     30.52(3.88)      7.074    .000
                   Control group        34.04(3.64)     34.28(3.47)      -1.141   .265
4. Results and Discussion
The analysis of the parenting stress and depression scores before and after the experiment showed a statistically significant difference between the two groups (p<.01). As seen in the results, there was a significant reduction in the parenting stress and depression of the experimental group compared with the control group. This result suggests that musical activities may have a therapeutic effect, since physiological and emotional stress and depression can be relieved by them.
It is necessary to find out in detail what housewives request in order to expand the practical use of the musical activities program; a program reflecting those requests then needs to be evaluated for its effectiveness. Repeated studies supplementing the detailed procedures of the program with more variables need to be conducted, and further research targeting a wider range of subjects is also needed.
References
[1] K. H. Kim, B. H. Jo, "An Ecological Approach to Analysis of Variables in the Parenting Stress of the Dual-Earner Mothers and Fathers", The Korean Journal of Child Studies, vol. 21, no. 4, pp. 35-50, 2000.
[2] J. Y. Kim, K. S. Chung, "Relations between Sense of Humor, Stress Coping Style, and Parenting Stress of Preschooler's Mother", Journal of Life-span Studies, vol. 3, no. 1, pp. 59-77, 2013.
[3] M. Y. Kim, "Relationship of Stress and Depressive Symptoms to Maternal Efficacy among Mothers with Children in Early Childhood", Unpublished master's thesis, Hanyang University, Seoul, 2012.
[4] J. S. Park, H. K. Cho, Y. T. Kim, "Impact of Music Therapy Program on the Self-Esteem and Depression of Middle-Aged Women", The Korean Journal of Rehabilitation Psychology, vol. 19, no. 1, pp. 63-83, 2012.
[5] S. Y. Park, E. Y. Hwang, "A Pilot Study on How Coping Strategy in Musical Activities has a Positive Impact on Stress Reduction and Relation States", Journal of Korean Arts Psychotherapy Association, vol. 9, no. 1, pp. 51-67, 2013.
Relationship between ego resiliency of girl students and smart
phone addiction
1 Soonyoung Yun, *2 Shinhong Min
1, Division of Health Science, Baekseok University, Korea, [email protected]
*2, Corresponding Author, Division of Health Science, Baekseok University, Korea, [email protected]
Abstract - In this study the relations between ego resiliency and smart phone addiction were investigated. Though the smart phone provides convenience and benefits, addictive and uncontrolled use of smart phones has become problematic for schools and society. Interestingly, girls are affected more seriously by smart phone addiction than boys. Based on these facts, this study aimed to explore the relation between resilience and smart phone addiction in order to improve the smart phone addiction of girl students. According to the results of this survey targeting middle and high school girls in Chungcheong Province, there was a difference in ego resiliency depending on addiction tendency. There was a negative relationship between smart phone addiction tendency and the subfactors of ego resiliency, such as emotional control, vitality, relationships with others, optimism, and curiosity. Therefore, it is necessary to improve smart phone addiction and to form healthy ego resiliency through a stepwise program by which a positive ego forms and sociality develops. In addition, diverse programs intervening in smart phone addiction are required for the emotional control, relationships with others, optimism, and vitality of adolescents.
Keywords: Ego resiliency, Girl students, Smart phone addiction
1. Introduction
In April 2014, the number of smart phone subscribers in Korea increased rapidly and exceeded 38 million, and the smart phone use rate among adolescents also increased from 5.9% in 2010 to 81% in 2012, showing that 8 out of 10 adolescents use a smart phone. When comparing groups at risk of smart phone addiction from elementary to high school, girl students were at three times higher risk of smart phone addiction than boy students, which reflects a serious situation [1].
Though there are positive effects of adolescents' smart phone use, such as sharing information and efficient management of social networks, a number of negative aspects of excessive use have emerged, for example, difficulty and impairment in daily living, addiction, and decline in academic performance. Serious problems have also appeared, including health problems due to exposure to excessive electromagnetic waves, the financial burden of wireless fees caused by excessive use, illegal activity through improper use of the phone, and degradation of language. It is therefore necessary to prevent smart phone addiction among youth, whose self-control is not yet reliable and who are exposed without protection to this harmful environment, and to prepare countermeasures. Ego resiliency indicates the self-control ability to maintain oneself properly and to adapt actively in unfamiliar or stressful situations or environments [2]. Resilience is a conception of mental resistance, an ability to cope with difficulty either unaffectedly or less affectedly. Children with resilience are superior in self-respect, problem solving and self-control, cope with new environments actively, and have clearer expectations and a sense of purpose compared with those without resilience [3].
In this study, the effect of ego resiliency on smart phone use was investigated, and the results showed that it is indeed an important factor in controlling smart phone use.
2. Methods

2.1 Sample subjects and data collection
The target subjects were girl students of middle and high schools in Chungcheong Province. A survey based on self-reporting questionnaires was conducted from Mar. 14, 2014 to April 11, 2014.

2.2 Research tools
The research tool was a structured questionnaire consisting of a total of 58 items: 5 on general characteristics, 25 on ego resiliency, 13 on the state of smart phone use, and 15 on smart phone addiction tendency.
The ego resiliency scale used in this study was taken from reference [4] and restructured to suit middle school girls by modifying and complementing the questionnaire items. Each item was rated on a 5-point scale, with a high score implying high ego resiliency. In this study the calculated Cronbach's α
coefficient, representing the reliability of the scale, was 0.87.
The items on the state of smart phone use covered possession status, period of smart phone use, purchase motive, average daily time on the phone, main communication partners, average monthly fee, and positive and negative aspects of smart phone use, comprising a total of 13 questions. The questionnaire for smart phone addiction tendency was derived from the Korean Agency for Digital Opportunity and Promotion.
3. RESULTS

3.1 Ego resiliency of the target subjects depending on smart phone addiction tendency
The smart phone addiction tendency scores of the target subjects are listed in Table 1. The high risk user group, scoring 45 or higher on the addiction tendency test, represented 77.9% of the subjects; the potential risk group, in the range of 42 to 44, was only 4.1%; and the general use group took 18%. When the ego resiliency scores of these three groups were compared, there appeared to be a difference among them.

Table 1. Ego resiliency of the target subjects depending on smart phone addiction tendency

Variables              N(%)        Resilience M(SD)
High risk group        190(77.9)   84.45(7.42)
Potential risk group   10(4.1)     84.00(0.32)
General use group      44(18.0)    91.00(8.26)
F=13.938, p=0.000
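The grouping rule above (45 or higher = high risk, 42-44 = potential risk, otherwise general use) is a simple threshold classification, sketched below for illustration:

```python
def addiction_group(score: int) -> str:
    """Classify a smart phone addiction tendency score into a risk group."""
    if score >= 45:
        return "high risk"
    if score >= 42:
        return "potential risk"
    return "general use"

print([addiction_group(s) for s in (50, 43, 30)])
# ['high risk', 'potential risk', 'general use']
```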
3.2 Relationship between ego resiliency and smart phone addiction tendency
The correlation between smart phone addiction tendency and ego resiliency and its subfactors, such as emotional control, vitality, relationships with others, optimism, and curiosity, is presented in Table 2.

Table 2. Relationship between ego resiliency and smart phone addiction tendency

Variables           Smart phone addiction tendency (r)
Resilience          -0.635
Emotional control   -0.419
Vitality            -0.267
Relationship        -0.452
Optimism            -0.343
Curiosity           -0.293

4. DISCUSSION
We found a negative correlation between adolescents' smart phone use and ego resiliency. Therefore, smart phone use time and frequency need to be properly controlled, and an educational approach to raise ego resiliency is also needed to prevent and treat smart phone addiction in adolescents. As seen above, stepwise programs for positive ego formation and the development of sociality, together with diverse offline programs, are required to improve adolescents' emotional control, relationships with others, optimism, and vitality. At the same time, it is expected that education in healthy online smart phone use, manners, and relationship-building skills may lead youth toward positive personality development.

References
[1] National Information Society Agency, "Internet Addiction Survey 2014", Seoul, 2015.
[2] Klohnen, E. C., "Conceptual analysis and measurement of the construct of ego-resiliency", Journal of Personality and Social Psychology, vol. 70, no. 5, pp. 1067-1079, 1996.
[3] Luthar, S. S., D. Cicchetti and B. Becker, "The construct of resilience: A critical evaluation and guidelines for future work", Child Development, vol. 71, pp. 543-562, 2000.
[4] H. S. Yoon, "An Effect on Group Counseling for Improving Ego-Resilience on Middle School Student's Ego-Resilience Peer Relation", Unpublished master's thesis, Chon Buk National University, 2010.
Analysis on resilience, self-care ability and self-care practices of
middle & high school students
*1 Shinhong Min, 2 Soonyoung Yun
*1, Corresponding Author, Division of Health Science, Baekseok University, Korea, [email protected]
2, Division of Health Science, Baekseok University, Korea, [email protected]
Abstract - This study analyzes the relationship between the resilience, self-care ability and self-care practices of middle and high school students in order to provide a basic set of data to help promote the health of these students. The findings were as follows. A comparison of resilience between middle and high school students showed that among the sub-factors, emotional control, optimism and curiosity differed, while vitality and interpersonal relationships did not differ significantly. The correlation coefficient between resilience and self-care ability was r=0.429, between resilience and self-care practice r=0.528, and between self-care ability and self-care practice r=0.679, indicating significant positive correlations. In conclusion, the higher the resilience score, the higher the self-care ability and practice scores. This is assumed to be due to the ability, through resilience, to understand one's situation, accept it and control one's behavior. That is, external control that can improve self-care ability and internal control that can improve resilience are both helpful, and thus policy measures and nursing mediation at schools are needed.
Keywords: Middle & high school students, Resilience, Self-care ability, Self-care practices
1. Introduction
The teenage years refer to ages 13-18, when one is enrolled in middle or high school. It is a transitional period from childhood to adulthood, when one experiences rapid physical changes and a development in cognitive ability and self-awareness, which can lead to tension and uncertainty [1]. Only when teenagers go through balanced growth can the welfare of a country and of the world be achieved. Therefore, in order to ensure that teenagers grow up healthily, balanced physical, mental and social development is required [2]. One of the core factors that affect our adjustment to changes in environment or situation is resilience [3].
Health status is known to be closely related to resilience [4]. Self-care ability refers to the complex ability to meet continuing needs in order to integrate the development of the structures and functions of being human and to promote well-being [5]. There is an increasing need for self-care, not only to address health issues such as illness or injury, but also to address daily life issues. In order to live an independent life, the maintenance and improvement of self-care ability is important. Teenagers are therefore faced with a need to act on self-care related to their development, and more interest and knowledge are needed to help them develop in a healthy manner. However, studies on self-care ability to date have been limited to patients with diseases. This study seeks to apply the concept to the health issues of teenagers, who will become the leaders of tomorrow.
2. Methods
2.1 Sample subjects and data collection
A self-reported questionnaire was administered from April 1 to April 30 to middle and high school students in Chungcheong Province. A total of 300 copies were distributed and 270 were collected. Excluding 9 copies with insufficient answers, 261 copies were analyzed.
2.2 Research tools
A structured questionnaire was used. A total of 85
questions covering general characteristics (8 questions),
resilience (30 questions), self-care ability (35 questions)
and self-care practice (12 questions) were used.
3. RESULTS
3.1 General characteristics of subjects
Gender, birth order, religion, economic status and
perception of health issues were investigated (Table 1).
Table 1. General Characteristics of the Subjects, N(%)

Characteristics   Category       Middle school   High school
Sex               Male           70(45.5)        65(60.7)
                  Female         84(54.5)        42(39.3)
School status     1              14(9.1)         19(17.8)
                  2              56(36.4)        55(51.4)
                  3              84(54.5)        33(30.8)
Birth order       First          91(59.1)        58(54.2)
                  Second         49(31.8)        26(24.3)
                  Third          14(9.1)         23(21.5)
Economic status   High           42(27.3)        29(28.1)
                  Middle         91(59.1)        65(60.7)
                  Low            21(13.6)        13(12.1)
Health status     Very healthy   14(9.1)         26(24.3)
                  Healthy        63(40.9)        19(17.8)
                  Normal         63(40.9)        53(49.5)
                  Poor           14(9.1)         9(8.4)

3.2 Comparison of resilience, self-care ability and self-care practice between middle school and high school students
A comparison of resilience, self-care ability and self-care practice between middle school and high school students is shown in <Table 2>.

Table 2. Degree of Resilience, Self-care Agency and Self-care Practices, M(SD)

Variables                      Middle school   High school    t        p
Resilience                     85.00(7.24)     86.17(9.53)    -1.118   0.000
  Emotional control            14.77(1.76)     14.22(1.53)    2.603    0.010
  Vitality                     13.59(2.06)     13.97(2.88)    -1.242   0.215
  Interpersonal relationship   14.85(1.75)     14.92(2.42)    -0.259   0.796
  Optimism                     13.81(1.78)     14.81(2.58)    -3.683   0.000
  Curiosity                    13.72(1.54)     14.30(2.72)    -2.186   0.030
Self-care agency               96.18(10.81)    98.40(13.82)   -2.799   0.003
Self-care practices            30.40(4.40)     32.42(3.80)    -3.852   0.000

3.3 Relationship between resilience, self-care ability and self-care practice
The correlation between resilience, self-care ability and self-care practice is as shown in <Table 3>.

Table 3. Correlation Matrix of Variables, r(p)

Variables             Resilience     Self-care agency   Self-care practices
Resilience            1
Self-care agency      0.429(0.000)   1
Self-care practices   0.528(0.000)   0.679(0.000)       1

4. DISCUSSION
In conclusion, a higher resilience score indicates higher self-care ability. The ability to objectively understand one's situation, accept it and control one's behavior seems to have worked towards improving self-care. That is, external control that can improve self-care ability and internal control that can improve resilience seem both to be at work.

References
[1] H. S. Song, S. Y. Sung, "The Effect of Social Support on School Adjustment and Life Satisfaction of middle school students: mediated effect of ego-resilience and self-control", Korean Journal of Counseling and Psychology, vol. 27, no. 1, pp. 129-157, 2015.
[2] M. S. Kim, "Health promotion among adolescents", Korean Nurses, vol. 36, no. 3, pp. 6-15, 1997.
[3] Y. J. Hwang, K. K. Kim, "An empirical analysis of the determinants of ego resiliency among junior high school students: A social psychological approach", Korean Journal of Sociology of Education, vol. 24, no. 1, pp. 205-229, 2014.
[4] B. R. Lee, H. J. Park, and K. Yi. Lee, "Korean Adolescents' Physical Health and Peer Relationships: The Mediating Effects of Self-perceived Health Status and Resilience", Korean Journal of Child Studies, vol. 34, no. 5, pp. 127-144, 2013.
[5] H. J. Song, M. Y. Hyun, E. J. Lee, "Hope, Self-care Agency and Mental Health in Patients with Chronic Schizophrenia", J Korean Acad Psychiat Ment Health Nurs, vol. 20, no. 2, pp. 180-187, 2011.
An Algorithm for Zero-One Concave Minimization Problems under a Single Linear Constraint
1 Se-Ho Oh
1 Department of Industrial Engineering, Cheongju University, Korea, [email protected]
Abstract - In this paper, a branch-and-bound algorithm for the minimization of a 0-1 integer concave function under a single linear constraint is developed. The algorithm uses simplices as partition elements for the branching and bounding procedure. Our research was motivated by the facts that binary division of a simplex partitions the feasible vertex solutions (or local minimum points) into subsets, and that the linear convex envelope of the original concave function over the simplex can be uniquely obtained by solving the related linear equations. During the branching process, the simplex associated with the selected candidate problem is divided into two subsimplices by adding 0-1 constraints. In the subsequent bounding operation, the linear programming problems defined over the subsimplices are minimized to calculate the lower bound and to update the incumbent value. Problems defined on vertex sets that do not contain the global minimum are then excluded. From the computational efficiency point of view, the important advantage of the algorithm lies in the reduction of the problem size through the partitioning of the simplex.
Keywords: Branch & Bound Algorithm, Concave Minimization, Convex Envelope, Simplex
1. Introduction
The problem of globally minimizing a concave function over a polytope has occupied the attention of a number of researchers since Tui's fundamental work[7]. A variety of important practical applications can be formulated as concave minimization problems. The zero-one integer linear programming problem, the linear fixed-charge problem, economies of scale, strategic weapons planning, and the facility location problem with concave costs are among them[1,3,4].
The global optimum of a convex minimization problem can normally be computed without difficulty by any appropriate local optimization technique, because a local optimum must also be global. Concave function problems, however, may have many local solutions. Total enumeration is computationally impractical because the number of solutions to evaluate is very large. From the complexity point of view, the concave minimization problem is NP-hard; this follows from the fact that zero-one linear programming, itself NP-hard[6], is a special case of the concave minimization problem.
The most general approach to concave minimization problems is the branch and bound procedure[1]. Many researchers have incorporated useful ideas into branching and bounding strategies to design well-working algorithms that exploit the special nature and structure of the problem. One of the most important bounding strategies is the use of an underestimating function. Tui constructed, as its first form, a cut which can be used to exclude part of the feasible domain[7]. Since then, a number of algorithms have appeared, including the algorithm developed by Falk and Hoffman[3], which uses a piecewise linear underestimating function; that of Rosen, which finds the global minimum of a smooth concave function over a polyhedron; and that of Kalantari and Rosen, who considered the global minimization of a quadratic function over a polytope. Benson showed that the convex envelope over a simplex is linear and obtained an explicit formula for it[1].
The other strategy is branching. In the course of applying the branch and bound algorithm, the set of feasible solutions is partitioned into simpler subsets. Each subset in the partition becomes the set of solutions of a candidate problem.
The algorithm given in this paper is similar to Benson's algorithm in that the simplex is used to calculate the linear underestimating function. Most other authors' algorithms have suffered from expensive computation when generating the subsimplices, because the additional constraints are half-spaces. In this paper, the subsimplices are generated by imposing additional equality constraints, i.e., two hyperplanes, on the selected candidate problem. This means that the simplex is projected onto each of the two hyperplanes, so the dimension of the two subsimplices is one less than that of the selected simplex. Consequently, splitting the simplex decreases the problem size by one as the iterations proceed.
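To make the branching idea concrete, a toy branch-and-bound is sketched below in Python. It fixes one binary variable per branch, as described above, but substitutes a much cruder lower bound (it assumes a separable concave objective and relaxes the linear constraint) for the paper's simplex envelope bound, so it illustrates the control flow only.

def toy_bnb(terms, a, b):
    # min sum_i terms[i](x_i)  s.t.  sum_i a[i]*x_i <= b,  x_i in {0, 1}
    n = len(a)
    best = {"val": float("inf"), "x": None}

    def recurse(prefix):
        k = len(prefix)
        # Lower bound: fixed terms exactly; each free concave term at its
        # cheaper endpoint, with the linear constraint ignored.
        lb = sum(terms[i](prefix[i]) for i in range(k))
        lb += sum(min(t(0), t(1)) for t in terms[k:])
        if lb >= best["val"]:
            return                                    # prune this candidate
        if k == n:                                    # leaf: lb is now exact
            if sum(ai * xi for ai, xi in zip(a, prefix)) <= b:
                best["val"], best["x"] = lb, prefix
            return
        recurse(prefix + (0,))                        # binary branching on x_k
        recurse(prefix + (1,))

    recurse(())
    return best["x"], best["val"]

# Hypothetical concave terms f_i(t) = -(t - c_i)**2:
terms = [lambda t, c=c: -(t - c) ** 2 for c in (0.2, 0.7, 0.4)]
print(toy_bnb(terms, a=[3, 5, 4], b=8))   # -> approximately ((1, 0, 1), -1.49)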
2. Subproblem Generation and Bounding Operation
The problem concerned in this paper can be expressed as follows:

(P)  min f(x)  subject to  x ∈ Ω

where f(x) is any concave function and Ω = { x ∈ R^n | Σ_{i=1}^{n} a_i x_i ≤ b, x_i = 0 or 1, i = 1, 2, …, n }.

The algorithm for solving problem (P) performs binary branching and bounding operations in each iteration. The branching procedure generates two subproblems, and for the bounding operation a linear programming problem is defined whose objective function underestimates f over the feasible region of each subproblem.
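Before the algorithm proper, it may help to see problem (P) in executable form. The sketch below enumerates Ω directly for a tiny hypothetical instance; the coefficients and the concave objective are illustrative, not the paper's.

from itertools import product

a, b = [3, 5, 4], 8                        # hypothetical a_i and b
f = lambda x: -(sum(x)) ** 2               # an illustrative concave objective

# Enumerate Omega = { x in {0,1}^n : sum_i a_i x_i <= b } and take the minimizer.
feasible = (x for x in product((0, 1), repeat=len(a))
            if sum(ai * xi for ai, xi in zip(a, x)) <= b)
best = min(feasible, key=f)
print(best, f(best))                       # brute force; practical only for tiny n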
3. Description of Algorithm and Numerical Example

Informal Algorithm Statement

Iteration 0: Initialization
0-1: Choose the initial containing simplex and identify its vertices.
0-2: Perform the bounding operation procedure.
0-3: Update the incumbent.
0-4: Register it to the candidate problem list.

Iteration k: Select the candidate
k-1: Perform the subsimplices generation procedure.
k-2: Define the candidate problems and add them to the list.
k-3: Perform the bounding operation procedure.
k-4: Update the incumbent.
k-5: Prune the candidates in the list whose lower bounds are larger than the current incumbent.

Subsimplices generation procedure
1. Select a candidate.
2. Identify the vertices of the candidate.
3. Choose the branching variable x_iφ.
4. Set x_iφ to 0 or 1.
5. Generate the subsimplices.

Bounding operation procedure
1. Identify the vertices.
2. Solve the corresponding linear equation system.
3. Seek the optimal solution.
4. Calculate the lower bound.
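Step 2 of the bounding operation amounts to fitting the affine function that agrees with f at the simplex vertices, which is the linear convex envelope characterized by Benson[1]. The sketch below does this numerically for an illustrative concave f of our own choosing, not the paper's numerical example.

import numpy as np

f = lambda x: -np.sum(x ** 2)               # illustrative concave function

# Vertices of a simplex in R^2, one per row; solving [v_i | 1] @ (c, d) = f(v_i)
# yields the affine envelope l(x) = c @ x + d matching f at every vertex.
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
A = np.hstack([V, np.ones((len(V), 1))])
coef = np.linalg.solve(A, np.array([f(v) for v in V]))
c, d = coef[:-1], coef[-1]

envelope = lambda x: c @ x + d              # linear underestimator of f on the simplex
lower_bound = min(envelope(v) for v in V)   # an affine min over a simplex is at a vertex
print(c, d, lower_bound)                    # -> [-1. -1.] 0.0 -1.0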
4. Conclusion
The branch and bound method for solving the zero-one concave minimization problem under a single linear constraint was investigated.

References
[1] Benson, H. P., "A Finite Algorithm for Concave Minimization over a Polyhedron", Naval Res. Logist. Quart., Vol. 32, pp. 165-177, 1985.
[2] Benson, H. P. and Erenguc, S. S., "A Finite Algorithm for Concave Minimization over a Polyhedron", Naval Res. Logist. Quart., Vol. 32, pp. 165-177, 1990.
[3] Falk, J. E. and Hoffman, K. R., "A Successive Underestimation Method for Concave Minimization Problems", Math. Opns. Res., Vol. 1, pp. 251-259, 1976.
[4] Horst, R., "An Algorithm for Nonconvex Programming Problems", Math. Prog., Vol. 10, pp. 312-321, 1976.
[5] Kalantari, B. and Bagchi, A., "An Algorithm for Quadratic Zero-One Programs", Naval Research Logistics Quarterly, Vol. 37, pp. 527-538, 1990.
[6] Papadimitriou, C. H. and Steiglitz, K., Combinatorial Optimization, Prentice Hall, Englewood Cliffs, NJ, 1982.
[7] Tui, H., "Concave Programming under Linear Constraints", Dokl. Akad. Nauk SSSR 159, 32-35. Translated in Soviet Math. Dokl., Vol. 4, pp. 1437-1440, 1964.
An Analysis of Risk Sharing between the Manufacturer and the Supplier

Chan Jung Park
Department of Accounting, Cheongju University, Cheongju, South Korea, [email protected]

Abstract - This study illustrates how the product manufacturer tries to motivate the parts supplier to invest in the parts production necessary for new products through risk sharing arrangements. The analysis of risk sharing focuses on the full cost-based transfer pricing scheme between the manufacturer and the supplier. The risk of a supplier who receives a subsidy from the manufacturer can be reduced while the fluctuation of sales quantity remains unchanged. Thus, the manufacturer can motivate the supplier to accept the contract through risk sharing.
Keywords: Risk sharing, Transfer price, Subsidy
1. Introduction
In introducing a new product, manufacturers ask parts suppliers to invest in the parts suitable for the new product. Thus, the supplier has to invest in appropriate facilities to make the parts and, as a result, the fixed cost of these facilities accrues to the supplier. Whether or not the supplier can recover the fixed costs of parts for a new product depends on the market demand for the product.
If the supplier feels it is impractical to recover the total incremental fixed costs needed, he or she may not make an adequate investment in the production facilities for this part. If the product manufacturer provides a subsidy in such a situation, the supplier's risk decreases and production becomes more likely. In this case, a portion of the risk is shifted from the supplier to the manufacturer.
In this article, let me show how the product manufacturer tries to motivate the parts supplier to invest in the parts production necessary for new products through risk sharing arrangements. The analysis of risk sharing focuses on the transfer pricing scheme between the product manufacturer and the parts supplier. Transfer pricing is often based on a full cost plus markup method. Under this method, the actual fixed cost per unit of the transferred product depends on its sales quantity. Therefore, the predetermined transfer price may not recover total fixed costs; under full cost-based transfer pricing, there is a persistent risk of unrecovered fixed costs. In this discussion, let us assume that the part is used only for a specific product of a particular manufacturer.
Finally let me illustrate how a certain system of
subsidies and transfer prices can bring about risk sharing
between the manufacturer and the supplier.
2. Illustrations of Risk Sharing Scheme
Suppose a parts supplier expects to realize profits that will fluctuate because of uncertain market demands for a new product. Also assume that, although the supplier bears the total risk, the product manufacturer chooses to share the supplier's risk by subsidizing a portion of the investment. In this way, fluctuations in the supplier's profit will be reduced, while expected profit remains unchanged. The supplier's risk is reduced, thus ensuring a more positive use of profits and perhaps inducing him or her to accept a contract with the product manufacturer. Two cases will be presented with extremely simplified components: a linear utility function and a simple Bernoulli probability distribution.

Assumption 1: Let me assume that the utility (U) of the supplier for the monetary amount of profit (X) can be depicted by the following simplified function:

U = X, for X ≥ 0
U = pX, for X < 0

where p is arbitrarily set at 5 in all cases. The concavity of the above utility function implies that the supplier is particularly sensitive to losses.

Assumption 2: Let me assume that the probabilities of the supplier yielding high profit and low profit are equal. At high demand, the sales quantity of the part is supposed to be 600 units; at low demand, 400 units.

Assumption 3: Suppose the supplier has the following data for making the parts in question.

Unit variable costs = $40
Fixed cost = $17,500 (including mold cost of $6,000)
Unit margin = $5
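As a check on the computations that follow, here is a minimal Python sketch of Assumption 1's utility and the equal-probability expected utility; the function names are ours, and the profit figures fed in below come from the two cases worked out in this section.

def utility(x, p=5):
    # Assumption 1: U = X for X >= 0, U = pX for X < 0 (p = 5 weights losses)
    return x if x >= 0 else p * x

def expected_utility(high_profit, low_profit):
    # Assumption 2: high and low demand occur with equal probability
    return 0.5 * utility(high_profit) + 0.5 * utility(low_profit)

print(expected_utility(6500, -1500))   # Case 1 below: E(U1) = -500
print(expected_utility(5300, -300))    # Case 2 below: E(U2) = 1900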
[Case 1]
⑴ At average demand of high sales volume (600 units) and low sales volume (400 units):

Expected sales quantity = (600 + 400) / 2 = 500 units

Transfer price = unit variable cost + unit fixed cost + unit margin
= $40 + $17,500/500 + $5
= $80

Average profits = (transfer price × quantity) - (total variable costs + fixed costs)
= ($80×500) - {($40×500) + $17,500}
= $2,500

Using the transfer price of $80, the following two profit possibilities will be achieved.

⑵ At high sales volume of 600 units:
High profit = (transfer price × quantity) - total costs
= ($80×600) - {($40×600) + $17,500}
= $6,500

⑶ At low sales volume of 400 units:
Low profit = ($80×400) - {($40×400) + $17,500}
= -$1,500

Assuming that high and low profits occur with equal probability, the standard deviation of profit S(X1) is calculated as follows:

S(X1) = √[{(-$1,500 - $2,500)² + ($6,500 - $2,500)²} / 2] = $4,000

The expected utility E(U1) of the supplier in case 1 is calculated as follows:

E(U1) = (1/2)×5×($2,500 - $4,000) + (1/2)×1×($2,500 + $4,000) = -$500

The part supplier's negative expected utility implies that he would be reluctant to undertake part production. However, if the manufacturer subsidizes the supplier, the figures in the example will be changed as follows.
[Case 2]
Assumption 4: Suppose that the mold cost of $6,000 will be compensated in full by a subsidy from the product manufacturer.

In this case, the value of the transfer price will be decreased by the amount of the unit mold cost when the expected sales are realized as planned. Also, the unrecovered depreciation cost of the mold will be compensated by the product manufacturer when sales fall below those expected. Therefore, such a convention implies that the part supplier receives a subsidy equivalent to the mold cost.

⑴ The transfer price and average profits at average demand will be:

Transfer price = unit variable cost + unit fixed cost + unit margin
= $40 + ($17,500 - $6,000)/500 + $5
= $68

Average profit = average revenue - average expense
= (transfer price × quantity + subsidy) - (total variable costs + fixed costs)
= ($68×500 + $6,000) - {($40×500) + $17,500}
= $2,500

Using the transfer price of $68, the following two profit possibilities will be achieved.

⑵ At high sales volume of 600 units:
High profit = ($68×600 + $6,000) - {($40×600) + $17,500}
= $5,300

⑶ At low sales volume of 400 units:
Low profit = ($68×400 + $6,000) - {($40×400) + $17,500}
= -$300

Assuming that high and low profits occur with equal probability, the standard deviation of profit S(X2) is calculated as follows:

S(X2) = √[{(-$300 - $2,500)² + ($5,300 - $2,500)²} / 2] = $2,800

Comparing case 2 with case 1, the standard deviation of profit in case 2 was reduced to $2,800 from $4,000 in case 1, while expected profit remained unchanged. This means that the risk of a supplier who receives a subsidy compensating the mold cost can be reduced while the fluctuation of sales quantities remains unchanged.

The expected utility E(U2) of the supplier in this subsidy situation is calculated as follows:

E(U2) = (1/2)×5×($2,500 - $2,800) + (1/2)×1×($2,500 + $2,800) = $1,900

Since the part supplier's expected utility has a positive value, he would be willing to undertake part production. Because a portion of the risk is shifted from the supplier to the manufacturer through subsidization, the supplier's decision making differs from case 1. In case 2, the manufacturer guarantees recovery of the mold cost, while the supplier still bears the risk of the other items. As a result, the manufacturer can induce the supplier to participate in a coalition for manufacturing the new product.
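Both cases can be reproduced in a few lines. The sketch below recomputes the profits, standard deviation, and expected utility from the stated assumptions; the function and parameter names are our own choices, and the utility is restated so the snippet is self-contained.

import math

def utility(x, p=5):                      # Assumption 1's piecewise utility
    return x if x >= 0 else p * x

def case(transfer_price, subsidy, unit_var=40, fixed=17500, hi_q=600, lo_q=400):
    # Profit at a given sales quantity: revenue plus subsidy, minus costs
    profit = lambda q: transfer_price * q + subsidy - (unit_var * q + fixed)
    hi, lo = profit(hi_q), profit(lo_q)   # equal-probability outcomes
    mean = (hi + lo) / 2
    sd = math.sqrt(((hi - mean) ** 2 + (lo - mean) ** 2) / 2)
    eu = 0.5 * utility(hi) + 0.5 * utility(lo)
    return hi, lo, mean, sd, eu

print(case(80, 0))        # Case 1: (6500, -1500, 2500.0, 4000.0, -500.0)
print(case(68, 6000))     # Case 2: (5300, -300, 2500.0, 2800.0, 1900.0)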
3. Summary
I described how the product manufacturer tries to motivate the parts supplier to invest in the parts production necessary for new products through risk sharing arrangements. In analyzing risk sharing, I focused on the transfer pricing scheme, based on a full cost plus markup method, between the manufacturer and the supplier. In summary, the risk of a supplier who receives a subsidy from the manufacturer can be reduced while the fluctuation of sales quantity remains unchanged. Thus, the manufacturer can motivate the supplier to accept the contract through the risk-sharing effect of the subsidy.
References
[1] Belhaj, M., Bourles, R., Deroian, F., "Risk-Taking and Risk-Sharing Incentives under Moral Hazard", American Economic Journal, vol. 6, no. 1, 2014.
[2] Cruz, C. O. and Marques, R. C., "Risk-Sharing in Seaport Terminal Concessions", Transport Reviews, vol. 32, no. 4, 2012.
[3] Horngren, C. T., Datar, S. M., Rajan, M. V., Cost Accounting, 14th ed., Pearson, 2012.
[4] Monden, Y. and Sakurai, M. (eds.), Japanese Management Accounting, Productivity Press, 1989.
[5] Park, C., Agency Theory: Contract and Control, Cheongju University Press, South Korea, 1990.
Meme and Culture Contents in Korea

Kyung Sook Kim
Cheongju University, College of Humanities, Department of Culture & Contents Science, Korea, [email protected]
Abstract - Current Korean culture can be explained by cultural transmission and cultural evolution, from the perspective of the sociobiological concept of a meme. The purpose of this research is twofold. The first is to suggest ten Korean memes for the understanding of the evolution of culture, based upon Wilson's theory of consilience and Dawkins's 'meme-gene coevolution': genes are connected to memes, and in turn memes to genes. The second is to protect the traditional community culture based on cultural identity and the cultural prototype, and to reinforce K-Culture in the face of 'globalization'. Searching for our national characteristics in our ancient cultures, we reveal the contents of the innate meme. This research reviews a broad range of storytelling strategies and seeks characteristics of the 'brandization' of K-Culture.
Keywords: meme, culture contents, K-Culture
It reproduces mental information structures analogous to a
gene in biology. As Susan Blackmore tells us,