IRD India
Proceedings of the International Conference on Engineering and Applied Science - ICEAS
Volume-I
Date: 13th July 2014 (Bangalore)
Editor-in-Chief: Prof. Pradeep Kumar Mallick
Organized by: Institute For Research and Development India (IRD India), Bhubaneswar, Odisha
ISBN: 978-3-643-24819-09
About
About
The Institute For Research and Development India (IRD India) is pleased to organize the 2014
International Conference on Engineering and Applied Science - ICEAS.
The primary goal of the conference is to promote research and developmental activities in
Engineering, Science, and Management. Another goal is to promote the interchange of scientific
information between researchers, developers, engineers, students, and practitioners around the
world.
Topics of interest for submission include, but are not limited to: (All Branches of Engineering and
Applied Science)
Computer Science and Engineering
Electronics Engineering
Electrical Engineering
Mechanical Engineering
Instrumentation Engineering
Applied Science
About IRD India:
The Institute for Research and Development India (IRD India) is an independent, private non-profit
scientific association of distinguished scholars engaged in Computer Science, Information
Technology, Electronics, Mechanical, Electrical, Communication and Management. The IRD India
members include faculty members, deans, department heads, professors, research scientists,
engineers, scholars, experienced software development directors, managers and engineers, and
university postgraduate and undergraduate engineering and technology students. IRD India plays an
influential role and promotes developments in a wide range of ways. The mission of IRD India is to
foster and conduct collaborative interdisciplinary research in state-of-the-art methodologies and
technologies within its areas of expertise.
Advisement Partner: http://www.conferencealert.org/
Publishing Partner: IGI Global USA, IRD Digital Library, www.isi-thomsonreuters.com
Programme Committee
Program Chair:
Prof. Pradeep Kumar Mallick
Chairman, IRD India
Bhubaneswar, India
Programme Committee members:
Di Gregorio, Raffaele, University of Ferrara, Italy
Fassi, Irene, National Research Council of Italy, Italy
Dr. Dariusz Jacek Jakóbczak, Technical University Of Koszalin, Poland
Guo, Weizhong, Shanghai Jiaotong University, China
Hu, Ying, Chinese Academy of Sciences, China
Lang, Sherman Y. T., National Research Council of Canada, Canada
Legnani, Giovanni, Universitá di Brescia, Italy
Ma, Ou, New Mexico State University, USA
Tan, Min, Chinese Academy of Sciences, China
Wu, Jun, Tsinghua University, China
Yang, Guilin, Singapore Institute of Manufacturing Technology, Singapore
Zu, Jean, University of Toronto, Canada
Prof. Kumkum Garg, MIT Jaipur University, Ex-IIT Roorkee Professor
Prof. Rama Bhargava, IIT Roorkee
Prof. S. P. Thapliyal, Director, SGRRITS, Dehradun
Prof. Durgesh Pant, Director, Uttarakhand Open University
Dr. K. C. Gouda, Sr. Scientist, CSIR Mathematical Modelling and Computer Simulation, Bangalore, India
Zaki Ahmad, Department of Mechanical Engineering, KFUPM, Box # 1748, Dhahran 31261, Saudi Arabia
Rajeev Ahuja, Physics Department, Uppsala University, Box 530, 751 21 Uppsala, Sweden
B. T. F. Chung, Department of Mechanical Engineering, University of Akron, Akron, Ohio 44325, USA
TABLE OF CONTENTS

Editorial. Prof. Pradeep Kumar Mallick, Editor-in-Chief

1. Designing of vowel articulation training system for hearing impaired children as an assistive tool. Rupali S. Kad, R. P. Mudhalwadkar. pp. 1-5
2. Artificial Neural Network Technique for Short Term Wind Power Prediction. Arpitha H B, Chandra Shekhar Reddy Atla, B Kanthraj, K R Mohan. pp. 6-12
3. Intelligent Baby Monitoring System. Savita P. Patil, Manisha R. Mhetre. pp. 13-18
4. A Review of Factors and Data Mining Techniques for Employee Attrition and Retention in Industries. K. Mohammed Hussain, P. Sheik Abdul Kadher. pp. 19-24
5. Distribution System Reliability Evaluation using Time Sequential Monte Carlo Simulation. Supriya M D, Chandra Shekhar Reddy Atla, K R Mohan, T M Vasanth Kumara. pp. 25-32
6. An Enhanced Secured Approach To Voting System. Ram Kumar S, Gowshigapoorani S. pp. 33-37
7. Design of High Performance Single Precision Floating Point Multiplier. Kusuma Keerthi. pp. 38-43
8. Intelligent Fuel Fraudulence Detection Using Digital Indicator. Yashwanth K M, Nagesha S, Naveen H M, Ravi, Mamatha K R. pp. 44-47
9. Image Enhancement Technique for Fingerprint Recognition Process. S. Gayathri, V. Sridhar. pp. 48-53
10. An Exhaustive Study on the Authentication Techniques for Wireless Sensor Networks. M. Lavanya, V. Natarajan. pp. 54-59
11. Support of Multi keyword Ranked Search by using Latent Semantic Analysis over Encrypted Cloud Data. Anoop M V, V Ravi. pp. 60-64
12. Car Parking Management System. Chandra Prabha R, Vidya Devi M, Sharook Sayeed, Sudarshan Garg, Sunil K R, Sushanth H J. pp. 65-68
13. Spirometry air flow measurement using PVDF Film. Manisha R. Mhetre, H. K. Abhyankar. pp. 69-73
14. Distributed Critical Node Detection of malicious flooding in Adhoc Network. Malvika Bahl, Rajni Bhoomarker, Sameena Zafar. pp. 74-77
15. Transmission Line Fault Detection & Indication through GSM. Chandra Shekar P. pp. 78-80
16. Remote health monitoring in ambulance and traffic control using GSM and Zigbee. Deepak C. Pardeshi. pp. 81-84
17. Estimation of the Level of Indoor Radon in Sokoto Metropolis. Yusuf Ahijjo Musa, Adamu Nchama Baba-Kutigi. pp. 85-88
18. Determination and Classification of Blood Types using Image Processing Techniques. Tejaswini H V, M S Mallikarjuna Swamy. pp. 89-92
19. IRIS Authentication in Automatic Teller Machine. Chaithrashree A, Rohitha U M. pp. 93-98
20. Spectrum Sensing Using CSMA Technique. Rajeev Shukla, Deepak Sharma. pp. 99-101
Editorial
The conference is designed to stimulate young minds, including research scholars, academicians,
and practitioners, to contribute their ideas and thoughts in these two integrated disciplines.
Even a fraction of active participation deeply influences the success of this international event.
I must acknowledge your response to this conference, and I ought to convey that this conference is
only a small step towards knowledge, networking and relationships.
The conference is the first of its kind and has received many blessings. I wish every success to
the paper presenters.
I congratulate the participants on being selected for this conference. I extend heartfelt thanks
to the members of faculty from different institutions, research scholars, delegates, IRD and the
members of the technical and organizing committees. Above all, I offer my salutations to the
Almighty.
Editor-in-Chief
Prof. Pradeep Kumar Mallick
Designing of vowel articulation training system for hearing impaired
children as an assistive tool.
1Rupali S. Kad, 2R. P. Mudhalwadkar
1PG Student, 2Associate Professor
Dept. of Instrumentation & Control, College of Engineering, Pune, Maharashtra, India
Email: [email protected], [email protected]
Abstract— A vowel articulation training system for hearing impaired children, with a MATLAB based
GUI interfaced to a microcontroller, has been developed. The system gives visual feedback about
spoken vowels, i.e., whether a vowel is pronounced correctly or not. In this paper, we discuss the
development of a vowel training system for hearing impaired children, specifically children aged
between 5 and 10 years whose mother tongue is Marathi. The formants produced by the vocal tract
depend on the position of the jaw and tongue and on the shape of the mouth opening. Vowels in
English are determined by how much the mouth is opened and by where the tongue constricts the
passage through the mouth: at the front, back or intermediate parts of the vocal tract. The formant
range for the same vowel differs across accents. To form a reference data set, 750 vowel samples
from normal speakers were collected. We discuss the formation of the vowel database and the vowel
recognition results obtained using the linear predictive coefficient method. The correct
recognition obtained from this system is over 80%.
Keywords: Feature extraction, LPC, Vowel Database, GUI
I. INTRODUCTION
Representation of the speech signal in the frequency domain is of great importance in studying the
nature of the speech signal and its acoustic properties. Vowels are the voiced components of the
sound, that is, /a/, /e/, /i/, /o/, /u/. The excitation is the periodic excitation generated at the
fundamental frequency of the vocal cords, and the sound is modulated as it passes through the vocal
tract. Many researchers have worked in this area, and some commercial software is also available in
the market for speech recognition, but mainly for American English or other European languages. The
proposed system is an assistive tool for speech training of hearing impaired children aged between
5 and 10 years whose mother tongue is Marathi.
This paper is organized as follows. Section I gives the introduction, Section II deals with the
formation of the vowel database, Section III focuses on system implementation, Section IV covers
the results, and Section V presents the conclusion, followed by the references.
II. VOWEL DATABASE FORMATION
The database was formed from a total of 50 individuals, children of both genders from municipal
schools and apartments. The speakers were children with no obvious speech defects. The recordings
were done using a microphone and a laptop at a sampling frequency of 8000 Hz. The vowels were
recorded with an omnidirectional microphone using a sound wave recorder. The samples were recorded
in a closed room with no background noise present. The speakers were seated facing the microphone
at a distance of about 1-3 cm.
Children use their hearing ability to develop the language skills they need to communicate, but a
hearing loss can make communication difficult. If a child has a hearing loss, the basic development
of language will often be delayed. Children with mild to severe hearing loss can develop
understandable speech with the right intervention and amplification, so the earlier the hearing
loss is detected and treated, the better; such children can then receive special speech and
language therapy. Sign language is one of the methods used for speech training, but the majority of
people in society cannot read or use sign language. This creates a situation where children who
cannot communicate verbally are excluded from society and miss a large part of their social
learning experiences. A vowel is a speech sound made by the vocal cords, and vowels form the basic
building block for word formation and pronunciation. A vowel sound comes from the lungs, through
the vocal cords, and is not blocked, so there is no friction. All English words have vowels; each
spoken word is created out of the phonetic combination of a limited set of vowels and consonants.
Therefore vowel training is an important part of speech therapy.
The database statistics are:
- Vowels: a, e, i, o, u
- Female vowel samples: 675
- Male vowel samples: 575
The database was formed with the samples of 23 male and 27 female normal speakers of 5-10 years of
age. The mother tongue of all the speakers was Marathi. Each speaker was asked to speak the 5
vowels with 5 utterances of each vowel, so a total of 25 utterances of the vowels were recorded for
each speaker.
III. SYSTEM DESIGN
The vowel training system is designed to record, analyze and display the pronounced vowel. MATLAB
software is used for the analysis of the vowel and for the graphical user interface. In addition, a
microcontroller is interfaced with MATLAB to display the pronounced vowel.
Fig 1. Block diagram of the vowel training system: vowel sound, microphone, PC (vowel processing
and GUI), line driver, microcontroller, output display

A. Vowel recognition process
Vowel utterances were recorded with an omnidirectional microphone, stored in the workspace, and
then processed using MATLAB software. The audio signal, sampled at 8 kHz, is processed for feature
extraction; LPC is used for feature extraction. The processing chain shown in Fig 2 is: vowel
sound, sampling, pre-emphasis, frame blocking, Hamming windowing, autocorrelation, and LPC
analysis.
Fig 2. Vowel recognition process

1. Recording of vowel:
All vowel samples were recorded with an omnidirectional microphone. To avoid noise while recording
the samples, the optimal distance between microphone and speaker was found by observing the power
spectrum of the audio amplifier output at different distances.
Fig 3. Audio amplifier set-up for generation of the audio signal
Fig 4: Power spectrum of a 1 kHz audio signal at 1 cm distance
Fig 5: Power spectrum of a 2 kHz audio signal at 4 cm distance
Fig 5 shows distortion at the fundamental as compared to Fig 4. The speakers were therefore seated
facing the microphone at a distance of about 1-3 cm.

2. Sampling:
Sampling is the process of converting a continuous-time signal into a discrete signal. The sampling
rate selected is 8000 samples/second. The speech signal is considered to occupy 300 to 3000 Hz, and
a sampling rate of 8000 samples/sec gives a Nyquist frequency of 4000 Hz, which should be adequate
for a 3000 Hz voice signal.

3. Pre-emphasis:
Pre-emphasis is used to boost the magnitude of the higher frequencies with respect to the magnitude
of the lower frequencies. The purpose of pre-emphasis is to improve the signal-to-noise ratio by
lowering the adverse effects of attenuation distortion, and to shape the voice signal so that lows
and highs have a more equal amplitude before further processing. To do this, a pre-emphasis filter
of the form 1 - 0.99 z^-1 is normally used.
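The pre-emphasis filter 1 - 0.99 z^-1 corresponds to the difference equation y[n] = x[n] - 0.99 x[n-1]. A minimal sketch of that step (the function name and sample values are illustrative, not from the paper):

```python
def pre_emphasis(signal, alpha=0.99):
    """Apply the pre-emphasis filter y[n] = x[n] - alpha * x[n-1].
    The first sample is passed through unchanged."""
    return [signal[0]] + [signal[n] - alpha * signal[n - 1]
                          for n in range(1, len(signal))]

# A constant (low-frequency) signal is almost entirely suppressed,
# which is the intended boost of highs relative to lows.
print(pre_emphasis([0.0, 1.0, 1.0, 1.0]))
```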
4. Frame Blocking & Windowing
The vocal resonances and their time variation carry phonetic information. This information is
analyzed in the
short-time spectrum of the signal. Fig 5 shows the speech signal divided into frames; each frame
can be analyzed separately. In order to keep each frame stationary, the frame blocking process
applies a 20-25 millisecond window at 10 ms intervals.
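The frame blocking and windowing described above (a 20-25 ms window every 10 ms, with a Hamming window per frame) can be sketched as follows; the helper names and the silent test signal are illustrative, and the Hamming coefficients use the standard form 0.54 - 0.46 cos(2πk/(N-1)):

```python
import math

def frame_blocking(signal, fs=8000, frame_ms=25, step_ms=10):
    """Split a signal into overlapping frames: 25 ms windows every 10 ms."""
    frame_len = int(fs * frame_ms / 1000)   # 200 samples at 8 kHz
    step = int(fs * step_ms / 1000)         # 80 samples at 8 kHz
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, step)]

def hamming(n):
    """Hamming window coefficients w[k] = 0.54 - 0.46*cos(2*pi*k/(n-1))."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * k / (n - 1)) for k in range(n)]

frames = frame_blocking([0.0] * 8000)   # one second of silence at 8 kHz
windowed = [[w * s for w, s in zip(hamming(len(f)), f)] for f in frames]
print(len(frames), len(frames[0]))  # → 98 200
```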
5. LPC Analysis
Linear predictive coding (LPC) is a digital method for encoding an analog signal in which a
particular value is predicted by a linear function of the past values of the signal. It was
proposed as a method for encoding human speech by the United States Department of Defense in
standard 1015, published in 1984. LPC determines the coefficients of a forward linear predictor by
minimizing the prediction error in the least squares sense, and it is widely used in filter design
and speech coding. In MATLAB, [a,g] = lpc(x, p) finds the coefficients of a pth-order linear
predictor (FIR filter) that predicts the current value of the real-valued time series x based on
past samples. p is the order of the prediction filter polynomial, a = [1 a(2) ... a(p+1)]. If p is
not specified, lpc takes p = length(x)-1 as the default value. If x is a matrix containing a
separate signal in each column, lpc returns a model estimate for each column in the rows of matrix
a, together with a column vector of prediction error variances g. The value of p must be less than
or equal to the length of x. The lpc algorithm uses the autocorrelation method of autoregressive
(AR) modeling to find the filter coefficients.
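The MATLAB call [a,g] = lpc(x, p) above can be reproduced with the autocorrelation method and the Levinson-Durbin recursion. A minimal pure-Python sketch (the helper name and the AR(1)-like test signal are illustrative; g here is the residual energy rather than MATLAB's normalized error variance):

```python
def lpc(x, p):
    """Order-p linear prediction by the autocorrelation method
    (Levinson-Durbin recursion); returns a = [1, a(2), ..., a(p+1)]
    and the final residual energy."""
    n = len(x)
    # Autocorrelation r[k] = sum over m of x[m] * x[m + k]
    r = [sum(x[m] * x[m + k] for m in range(n - k)) for k in range(p + 1)]
    a = [1.0] + [0.0] * p
    e = r[0]
    for i in range(1, p + 1):
        acc = sum(a[j] * r[i - j] for j in range(i))
        k = -acc / e          # reflection coefficient for this order
        prev = a[:]
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        a[i] = k
        e *= (1.0 - k * k)
    return a, e

# A decaying exponential behaves like the AR(1) process x[n] = 0.9*x[n-1],
# so the order-1 predictor coefficient should come out near -0.9.
a, g = lpc([0.9 ** n for n in range(200)], 1)
print(round(a[1], 3))  # → -0.9
```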
B. Graphical User Interface Development
A graphical user interface (GUI) is a type of user interface that allows users to interact with
electronic devices through graphical icons and visual indicators such as labels or text. This
MATLAB application is a self-contained MATLAB program with a GUI front end that automates a task or
calculation. The GUI typically contains controls such as menus, toolbars, buttons, and sliders. A
MATLAB based GUI is developed to record a .wav file or load a .wav file from a destination. Toggle
buttons are used to select the vowel to be analyzed. If the utterance of the vowel is correct, it
is displayed in the vowel text box; otherwise a message for an incorrect utterance is displayed.
The structure of the graphical user interface is shown in figure 6.
Fig 6: Graphical User Interface

C. Hardware Environment
The hardware environment for this paper consists of a P89V51 microcontroller, a PC, an RS232
driver/receiver, and a DB-9 serial cable. The Philips microcontroller development board is
interfaced with the PC. The PC is used to write user-specified embedded programs to be executed by
the Philips microcontroller. Furthermore, the PC hosts an interactive GUI for the user to record
and load audio files and visualize the pronounced vowel. The microcontroller and the PC communicate
using a serial interface. In this paper, we use a P89V51RD2BN, a 40-pin, 8-bit CMOS FLASH dual
in-line package IC. To facilitate serial communication between the microcontroller and the PC, we
interface an RS232 driver/receiver with the P89V51RD2BN. The effectiveness of our MATLAB based GUI
environment in interacting with the microcontroller is demonstrated by exporting the analyzed vowel
of the speaker from a MATLAB GUI interfaced to the PC.

IV. EXPERIMENTAL RESULTS
The .wav files of the vowels are analyzed and formant ranges for the vowels are determined. Fig 7
and Fig 8 show that vowel /a/ and vowel /e/ are pronounced correctly.
Fig 7: Recognition of vowel /a/
Fig 8: Recognition of vowel /e/
Fig 9: Incorrect pronunciation of vowel /e/
Fig 9 shows the result of an utterance of vowel /e/ by a hearing impaired child. Since the formants
differ from those of a normal speaker, the system displays that the vowel is not pronounced
correctly.
Fig 10: Simulation result of microcontroller-PC interfacing
Fig 11: Microcontroller board
Fig 10 shows the PROTEUS simulation result of the microcontroller interfacing with the PC. Fig 11
shows that vowel /e/ is pronounced correctly.

TABLE I. Recognition of vowels
Vowel | Number of samples | Samples recognized | Samples not recognized | Correct recognition (%)
/a/ | 135 | 109 | 26 | 80.70
/e/ | 130 | 107 | 23 | 82.30
/i/ | 135 | 105 | 30 | 77.77
/o/ | 135 | 113 | 22 | 83.70
/u/ | 150 | 124 | 26 | 82.66

TABLE II. Formant Ranges
Vowel | Formant range
/a/ | 450-700 Hz
/e/ | 290-400 Hz
/i/ | 500-900 Hz
/o/ | 400-550 Hz
/u/ | 350-470 Hz

Fig 12: Recognition of vowels (correct recognition per vowel: /a/ 80.7%, /e/ 82.3%, /i/ 77.77%,
/o/ 83.7%, /u/ 82.66%)

V. CONCLUSION
From the experimental results, it can be concluded that LPC can recognize the speech signal well.
The highest correct recognition achieved is 82.30%. For further work, in order to obtain better
recognition, another recognition method such as an ANN or a neuro-fuzzy method can be applied in
this system. The low cost and good performance of this system indicate that the developed system
will be useful in the vowel training of hearing impaired children as an assistive tool. As compared
to a common multimedia sound card, which adds significant noise, the above system has potential for
speech training at home for the hearing impaired.

ACKNOWLEDGMENT
It is my pleasure to get this opportunity to thank my respected guide, Dr. R. P. Mudhalwadkar, who
has imparted valuable knowledge for the development of this system.

REFERENCES
[1] Shahrul Azmi M. Y., "An improved feature extraction method for Malay vowel recognition based
on spectrum data," International Journal of Software Engineering and Its Applications, vol. 8,
no. 1, 2014, pp. 413-426.
[2] Y. A. Alotaibi and A. Hussain, "Comparative analysis of Arabic vowels using formants and an
automatic speech recognition system," International Journal of Signal Processing, Image Processing
and Pattern Recognition, vol. 3, 2010.
[3] L. Rabiner and B. Juang, Fundamentals of Speech Recognition, Prentice Hall, 1993.
[4] Ayaz Keerio, Bhargav Kumar Mitra, Philip Birch, Rupert Young, and Chris Chatwin, "On
Preprocessing of Speech Signals," International Journal of Signal Processing, vol. 5, 2009,
pp. 216-222.
[5] L. R. Rabiner and R. W. Schafer, Digital Processing of Speech Signals, First Edition,
Prentice-Hall.
[6] X. Huang, A. Acero and H. Hon, Spoken Language Processing: A Guide to Theory, Algorithm, and
System Development, Prentice Hall PTR, Upper Saddle River, NJ, USA, 2001.
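The formant ranges in TABLE II suggest the kind of decision rule the recognition stage can apply: check which vowel ranges contain a measured formant value. A hypothetical sketch (the helper name and test values are illustrative; note that the published ranges overlap, so several candidates can match a single measurement):

```python
# Formant ranges from TABLE II, in Hz.
FORMANT_RANGES = {
    "/a/": (450, 700),
    "/e/": (290, 400),
    "/i/": (500, 900),
    "/o/": (400, 550),
    "/u/": (350, 470),
}

def vowel_candidates(formant_hz):
    """Return the vowels whose TABLE II range contains the measured formant."""
    return [v for v, (lo, hi) in FORMANT_RANGES.items() if lo <= formant_hz <= hi]

print(vowel_candidates(300))  # → ['/e/']
print(vowel_candidates(500))  # → ['/a/', '/i/', '/o/']
```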
Artificial Neural Network Technique for Short Term Wind Power
Prediction
1Arpitha H B, 2Chandra Shekhar Reddy Atla, 3B Kanthraj, 4K R Mohan
1,3,4Dept of E&E, AIT, Chikmagalur, Karnataka, India
2PRDC Pvt. Ltd., Bangalore, Karnataka, India
Email: [email protected], [email protected], [email protected], [email protected]
Abstract - The installed capacity of wind power is increasing substantially in response to the
worldwide interest in low-emission power sources and a desire to decrease the dependence on
petroleum. Hence it is essential to integrate large amounts of wind energy into the power system. A
large-scale integration of wind power causes a number of challenges in both the planning and the
operation of a complex power system. Power system operators need to deal with the variability and
uncertainty in wind power generation when making their scheduling and dispatch decisions for
conventional generation. Wind Power Forecasting (WPF) is frequently identified as an important tool
to address this variability and uncertainty and to more efficiently operate power system scheduling
tools with large wind power penetrations. Several methods can be used to forecast wind: physical
methods can be used for medium term wind power forecasting, and statistical methods for short term
wind power forecasting. Some of the statistical methods are the Auto-Regressive Integrated Moving
Average model (ARIMA), the Auto-Regressive model (AR), and the Artificial Neural Network (ANN).
This paper adopts an ANN model because of its minimal execution time and acceptable accuracy
compared to other statistical methods.
Key words: Artificial Neural Network, Wind Power Forecasting, Feed Forward & Backward Propagation
Algorithm, Mean Absolute Error.
I. INTRODUCTION
The energy is a vital input for the socio-economic
development of any country. So the investment in
renewable energy is increasing in all countries
essentially due to mandatory environmental policies that
have been introduced recently. The wind power, as a
renewable energy source, raises great challenges to the
energy sector operation, namely due to the technical
difficulties of integrating this variable power source into
the power grid.
Wind power forecasting is required for day-ahead scheduling to efficiently address wind integration
challenges, and significant efforts have been invested in developing more accurate wind power
forecasts in the wind industry. Wind farm developers and system operators
also benefit from better wind power prediction to
support competitive participation in generation
scheduling against more stable and dispatchable energy
sources. In general, WPF can be used for a number of
purposes, such as: generation and transmission
maintenance planning, determination of operating
reserve requirements, unit commitment, economic
dispatch, energy storage optimization (e.g., pumped
hydro storage), and even for energy trading.
Definition of wind power forecasting: the forecast made at time instant t for look-ahead time
t + Δt, denoted p(t+Δt), is the average power which the wind farm is expected to generate during
the considered period of time (e.g., 1 hour) if it operated under an equivalent constant wind. It
is important to note that p(t+Δt) is called a point forecast because it is only a single value. A
probabilistic forecast instead generates a probability distribution for every look-ahead time.
A wind forecasting system is characterized by its time horizon, which is the future time period for
which the wind generation will be predicted. In order to understand the different issues involved
in wind energy forecasting, it is useful to divide the problem into three different time scales as
follows:
In short term wind power forecasting the time horizon is a few hours, but there is no unanimity on
the number of hours; a limit value of 12 to 16 hours has been proposed in the literature. In medium
term forecasting the time horizon ranges from the short-term limit up to 36 or 72 hours; the number
of hours in this time horizon can also diverge depending on the operational procedures of each
country. In long term forecasting the time horizon extends up to about 7 days. As the time horizon
increases, so do the forecast errors.
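The three time scales above can be captured in a small helper; the cut-offs below are taken from the ranges quoted in the text (12-16 h for short term, 36-72 h for medium term, up to about 7 days for long term), and the specific defaults are illustrative since the text notes there is no unanimity:

```python
def horizon_class(hours, short_limit=16, medium_limit=72):
    """Classify a forecast look-ahead time into the three scales."""
    if hours <= short_limit:
        return "short term"
    if hours <= medium_limit:
        return "medium term"
    return "long term"   # up to about 7 days; errors grow with the horizon

print(horizon_class(6))    # → short term
print(horizon_class(48))   # → medium term
print(horizon_class(120))  # → long term
```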
Wind forecast models can be categorized according to their approaches to producing the wind power
prediction. The advanced WPF methods are generally divided into two main approaches: the physical
approach and the statistical approach.
Physical method - The Numerical Weather Prediction (NWP) forecasts are provided by a global model
at several nodes of a grid covering an area. For a more detailed characterization of the weather
variables at the wind farm, an extrapolation of the forecasts is needed. The physical approach
consists of several sub-models, which together deliver the translation from the WPF at
a certain grid point and model level to a power forecast at the considered site and at turbine hub
height, as shown in Figure 1. Every sub-model contains the mathematical description of the physical
processes relevant to the translation. An NWP model is a commonly used physical method which
produces forecasts of weather elements, represented by equations of physics, through the use of
numerical methods. NWP models typically run two or four times a day using updated meteorological
information. These models are generally operated by national weather services due to the complex
nature of the work and the requirement of large resources. However, a few profit-making companies
like AWS Truewind have invested in and developed their own NWP models.
Figure 1: Physical approach structure (SCADA data, wind farm & terrain characteristics, and NWP
output feed the physical sub-models, which deliver the wind power forecast)
Statistical method – This method consists of direct
transformation of the input variables into wind
generation as presented in Figure 2. The statistical block
is able to combine inputs such as NWPs of the speed,
direction, temperature, etc., of various model levels,
together with on-line measurements, such as wind
power, speed, direction, and others.
Physical methods are vulnerable to forecast errors when the NWP data has high errors. Similarly,
the major shortcoming of the statistical method is that it needs a large amount of validated and
correct data to perform the modeling. Hence most wind forecasters prefer WPF systems that combine
the two approaches and thus improve the forecast accuracy [12].
Figure 2: Statistical approach structure (SCADA measurements and NWP output feed the statistical
model, which outputs the wind power forecast)
II. AVAILABLE FORECASTING
TECHNIQUES
Forecasting of wind power is complex due to the inherent nature of wind. Three main classes of
statistical techniques have been identified for short-term wind power forecasting: ANN methods,
autoregressive methods, and others. The artificial neural network method dominates the literature
and has been adopted by most wind forecasters in Europe, so this paper adopts an ANN to forecast
short term wind power. Very limited work has progressed in the field of wind power forecasting in
India as compared to European and American countries.
In the literature, many studies have been focused on
providing a forecasting tool in order to predict wind
power with good accuracy, Ahmed Ouammi, Hanane
Dagdougui [1] developed a neural network model to
assess the wind energy output of wind farms in Capo
Vado site in Italy, data are monitored for more than two
years. The results are shown for four weeks, considering
different information in the input patterns, sampled at
different time intervals (the lowest sampling period is ten
minutes), including pressure, temperature, date and
hour, and wind direction; the output pattern is always
the wind speed. G. Kariniotakis et al. [2] describe the
state of the art in wind power forecasting techniques,
their performance, and their value for the operational
management or trading of wind power. K. G.
Upadhayay [3] developed a feed-forward back-propagation
neural network for short-term wind speed forecasting;
the data set comprised the first five days (24 hours per
day) of January 2009 as the input and target (predicted)
variable. One fourth of the total data was selected for
training, one fourth for validation and the remaining half
for testing. Network performance was estimated by linear
regression between the actual and target wind speed after
post-processing; the maximum percentage error for
January 4, 2009 is 9.24%. Cameron Potter [4] applies an
Adaptive Neural Fuzzy Inference System (ANFIS) to
forecast wind power generation, with an error between
12 and 14%. P. Pinson and G. N. Kariniotakis [5]
developed a fuzzy neural network for wind power
forecasting with online prediction risk assessment; the
paper presents detailed one-year evaluation results of the
models on a case study of Ireland, where the output of
several wind farms is predicted using HIRLAM
meteorological forecasts as input, and online estimation
of forecasts is developed together with an appropriate
index for assessing online the risk due to inaccuracy of
the numerical weather predictions. M. Jabbari Ghadi [6]
proposes a new Imperialistic Competitive Algorithm-Neural
Network (ICA-NN) method to improve short-term wind
power forecasting accuracy at a wind farm, using
information from Numerical Weather Prediction (NWP)
and measured data from online SCADA; this paper built a
________________________________________________________________________________________________
International Conference on Engineering and Applied Science - ICEAS, 2014
ISBN: 978-3-643-24819-09, Bangalore, 13th July, 2014
7
Artificial Neural Network Technique for Short Term Wind Power Prediction
________________________________________________________________________________________________
Multi-Layer Perceptron (MLP) artificial neural network
considering environmental factors; the imperialist
competitive algorithm is then used to update the weights
of the neural network, and the method is applicable to
both wind speed and wind power forecasting (WPF).
J. P. S. Catalao [7] developed an artificial neural network
for short-term wind power forecasting in Portugal; the
MAPE (Mean Absolute Percentage Error) has an average
value of 7.26%, while the average computation time is
less than 5 seconds. Hence, the proposed approach
presents a good trade-off between forecasting accuracy
and computation time, outperforming the persistence
approach.
In this paper, an artificial neural network (ANN)
program has been developed based on the feed-forward
and backward-propagation algorithm. The developed
program has been applied to a practical power system:
Gujarat state, in India.
III. ARTIFICIAL NEURAL NETWORK
“Analogy of the brain” - The working of the human brain
seems like magic, yet the behavior of some neurons, or
cells, in the brain is known. These neurons are the only
part of the body that cannot be easily replaced, and it is
assumed that they account for the human abilities to
remember, think, and apply previous experience to our
every action. There are about 100 billion such cells,
known as neurons. Each of these neurons is interconnected
with up to 200,000 other neurons, and these
interconnections between neurons carry the synaptic
weights. The power of the human brain comes from the
strength of the neuron cells and the multiple connections
between neurons; it also comes from genetic
programming and learning.
The individual neuron is itself complicated. Neurons
have a myriad of parts, subsystems, and control
mechanisms, and they host electrochemical pathways to
convey information. Neurons can be classified into
hundreds of different classes, depending on the
classification method. The neurons and the
interconnections between them are not binary and not
synchronous. In short, the brain is nothing like the
currently available neural networks, which try to
replicate only the most basic elements of this
complicated, versatile, and powerful organism, and do so
in a primitive way.
An artificial neural network (ANN), or simply neural
network, is a physically cellular system which can
acquire, store, and utilize experiential knowledge. The
ANN is motivated by neuron activity in the human brain
(recognition, understanding, invention, and the thinking
abilities of the brain); there are billions of neurons in the
brain, with trillions of interconnections. Hence the ANN
tries to imitate some human activity and the performance
of the human brain by artificial means. Artificial
neuro-computing is done with a large number of neurons,
or cells, and their interconnections. They operate
selectively and simultaneously on all the data and inputs,
and the operation time of artificial neurons is faster than
that of the biological neurons they are modeled on.
Artificial neurons are based on self-learning mechanisms
which do not require conventional programming. An
ANN is an electronic model based on the neural structure
of the brain. The brain basically learns from experience;
this is natural proof that some problems that are beyond
the scope of current computers are indeed solvable by
small, energy-efficient packages. Brain modeling also
promises a less technical way to develop machine
solutions, and this approach to computing provides more
graceful degradation during system overload than its
more traditional counterparts. ANNs are synthetic
networks that emulate the biological neural networks
found in living organisms and are built on the biological
behavior of the brain. They are like machines for
performing cumbersome and tedious tasks, with great
potential to further improve the quality of our life.
The basic processing elements of an ANN are called
neurons, or simply nodes. They perform different
functions, such as summing points, nonlinear mapping
junctions, and threshold units. They usually operate in
parallel and are configured in a regular architecture,
organized in layers, with feedback connections both
within a layer and towards adjacent layers.
A neural network is a powerful modeling tool that is able
to capture and represent complex input/output
relationships. The motivation for the development of
neural network technology stemmed from the desire to
develop an artificial system that could perform
intelligent tasks similar to those performed by the human
brain. A neural network resembles the human brain in
the following two ways:
1. A neural network acquires knowledge through
learning.
2. A neural network's knowledge is stored within
inter-neuron connection strengths known as synaptic
weights.
The true power of neural networks lies in their ability to
represent both linear and nonlinear relationships directly
from the data being modeled. Traditional linear models
are simply inadequate when it comes to modeling data
that contains non-linear characteristics.
A multilayer perceptron neural network with a feed-forward
architecture and three layers of units is used, due
to its maturity and its capacity to represent a large class
of problems. The configuration of this ANN has three
layers: the first layer is the input layer; the second is the
hidden layer, in which the neurons play the major role
(any number of neurons can be placed in the hidden
layer, and there may be more than one hidden layer); and
the third is the output layer. These three layers are
interconnected, and the connections between them are
modified by the “synaptic weights”. In addition, each
neuron may be assumed to have an extra input; the
weight that modifies this extra input is called the bias.
The data propagating from
________________________________________________________________________________________________
International Conference on Engineering and Applied Science - ICEAS, 2014
ISBN: 978-3-643-24819-09, Bangalore, 13th July, 2014
8
Artificial Neural Network Technique for Short Term Wind Power Prediction
________________________________________________________________________________________________
the inputs to the neurons constitute “feed-forward
propagation” [3]. Commonly, neural networks are
adjusted, or trained, so that a particular input leads to a
specific target output. Neural networks are trained to
perform complex functions in various fields of
application, including pattern recognition, identification,
classification, speech, vision, and control systems.
Currently only a few of these neuron-based structures, or
paradigms, are actually being used commercially. One
particular structure, the feed-forward propagation
network, is by far the most popular. Most of the other
neural networks represent modes of “thinking” that are
still being evolved in laboratories. Yet all of these
neurons are simple, and as such the only real demand
they make is that the network architecture learn how to
use them.
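The basic unit just described, a weighted sum of inputs plus a bias passed through an activation, can be sketched as follows. The numbers are made up for illustration, and the sigmoid activation is a common choice (the algorithm presented later in this paper uses linear neurons instead):

```python
import math

def neuron_output(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Example with four inputs (e.g. time, humidity, temperature, wind speed,
# scaled to comparable ranges) and hypothetical synaptic weights.
y = neuron_output([0.5, 0.2, 0.8, 0.3], [0.4, -0.1, 0.6, 0.2], bias=0.05)
```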
Now, advances in biological research promise an initial
understanding of the neural thinking mechanism. This
research shows that the brain stores information as
patterns. Some of these patterns are very complicated and
give us the ability to recognize individual faces from
many different angles. This process of storing
information as patterns, utilizing those patterns and then
solving problems encompasses a new field in computing.
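Input selection in the next section relies on the correlation between meteorological variables and wind power. As a minimal sketch, the Pearson correlation coefficient can be computed as follows; the sample data here are hypothetical:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series:
    covariance divided by the product of the standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical sample: wind speed (km/h) against generated power (MW).
speed = [4.0, 6.5, 8.0, 10.2, 12.5]
power = [120, 310, 450, 700, 930]
r = pearson_r(speed, power)  # close to +1: strong positive correlation
```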
IV. THE PROPOSED FRAMEWORK:

The configuration of the ANN proposed in this paper is
shown in Figure 3: an input layer, a hidden layer and an
output layer, with the target value compared against the
output.

Figure 3: Development of ANN (input layer, hidden
layer, output layer and target).

The most important task in building an ANN forecasting
model is the selection of the input variables, since each
parameter plays a major role in the model. In this paper,
an analysis is carried out to find the degree of
dependency between each of the meteorological values
and to remove redundant values that might be present in
the data set, by applying the “feed-forward & back
propagation algorithm” method. The purpose of
obtaining the correlation is to measure and interpret the
strength of a linear or nonlinear relationship between
two continuous variables. Both correlation coefficients
take on values between -1 and +1, ranging from
negatively correlated (-1) through uncorrelated (0) to
positively correlated (+1). The sign of the correlation
coefficient (i.e., positive or negative) defines the
direction of the relationship, and the absolute value
indicates the strength of the correlation (from Figure 1).
As shown in Figure 3, the input layer has 4 units (time,
humidity, temperature and wind speed respectively), the
hidden layer consists of 3 neurons, and the output layer
has 1 unit; the target is simply the actual output.

A. Flowchart:

In this paper wind power forecasting is carried out by an
ANN; the detailed modeling of the ANN using the
forward & backward propagation algorithm for wind
power forecasting is presented by the flow chart shown
in Figure 4 (Start -> select ANN configuration -> read the
number of neurons, input and output units -> read inputs,
synaptic weights and target value -> calculate neuron
output, actual output and error -> if error <= 0.01, stop;
else modify the weights to reduce the error and repeat ->
draw the weight values of matched inputs -> read the
inputs to forecast, i.e. inputF -> if inputF matches a
stored input, calculate the forecasted output -> stop).

Figure 4: Flow chart for the proposed wind power
forecasting system using ANN.

B. Algorithm for wind power prediction by using the
proposed ANN model:

Step 1: Initialize the configuration of the ANN.
Step 2: Specify the number of inputs, the number of
hidden layers and the number of outputs.
Step 3: Enter the values of the inputs, the values of the
weights at the hidden layer, and the targeted output
value.
Step 4: Calculate the output from the hidden layer using:
N1 = X1*W11 + X2*W12 + X3*W13 + X4*W14
N2 = X1*W21 + X2*W22 + X3*W23 + X4*W24
N3 = X1*W31 + X2*W32 + X3*W33 + X4*W34
Step 5: Calculate the final output from the output layer:
Output(c) = N1*W1 + N2*W2 + N3*W3
Step 6: Calculate the error in the output layer using:
MAE = ((AP - FP) / AP) * 100
Step 7: Calculate the change in the weights at the output
layer:
W1+ = W1 - (δ * N1)
W2+ = W2 - (δ * N2)
W3+ = W3 - (δ * N3)
Step 8: Calculate the errors for the hidden layer:
δ1 = δ * W1+
δ2 = δ * W2+
δ3 = δ * W3+
Step 9: Calculate the new weights for the hidden layer
using δ1 to δ4:
W11+ = W11 - (δ1 * X1)
W12+ = W12 - (δ2 * X2)
W13+ = W13 - (δ3 * X3)
W14+ = W14 - (δ4 * X4)
Step 10: Go to Step 4, then Step 5 and Step 6, and obtain
the new error δ+.
Step 11: If the new error δ+ meets the stopping criterion
(i.e. it no longer decreases relative to the old error δ),
stop the iteration; else go to Step 6.
Step 12: Draw the second set of input patterns and the
values of inputF for forecasting.
Step 13: Compare the current inputs with the history.
Step 14: If the inputF value matches the history, then
draw the respective weights and calculate the outputs
(power); this gives the forecast values in terms of power
in MW.
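Steps 4 to 9 can be sketched in code as follows. The inputs, weights and target here are hypothetical, and the update rule is written in the conventional gradient-descent form (delta = target - output, with an explicit learning rate), which differs slightly in sign convention from the expressions above:

```python
# Sketch of Steps 4-9 with hypothetical numbers. X: four inputs,
# W: 3x4 hidden-layer weights, V: three output-layer weights.
X = [0.5, 0.3, 0.8, 0.2]          # time, humidity, temperature, wind speed (scaled)
W = [[0.1, 0.2, 0.3, 0.4],
     [0.5, 0.4, 0.3, 0.2],
     [0.2, 0.1, 0.4, 0.3]]
V = [0.6, 0.5, 0.4]
target = 1.0
lr = 0.1                          # learning rate (assumed)

# Step 4: hidden-layer outputs N_i = sum_j X_j * W_ij (linear, as in the paper)
N = [sum(x * w for x, w in zip(X, row)) for row in W]
# Step 5: final output from the output layer
out = sum(n * v for n, v in zip(N, V))
# Step 6: output error
delta = target - out
# Step 7: output-layer weight update
V = [v + lr * delta * n for v, n in zip(V, N)]
# Steps 8-9: hidden-layer errors and hidden-layer weight updates
d_hidden = [delta * v for v in V]
W = [[w + lr * d * x for w, x in zip(row, X)] for row, d in zip(W, d_hidden)]
```

One pass of this update moves the output toward the target; iterating Steps 4 to 10 until the error is small enough corresponds to Step 11.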
A summary of the forecast performance results is
presented below.
V. FORECASTING WIND POWER USING THE
PROPOSED ANN MODEL:

The selected input variables, together with the actual
power as the target value (wind power generated, xd, in
MW), are presented in Table 1. The ANN has been
trained based on the cross-correlation between time,
humidity, temperature, wind speed and wind power from
historical data.
The quantitative assessment of the short-term wind
power prediction is carried out using the input variables
shown in Table 1, and different experiments were
conducted to train and evaluate the proposed ANN
model using this set of input variables; these
experiments are represented in the 3 cases below. The
experiments were conducted to check the error between
the forecasted and actual wind power.
The case study refers to Gujarat state, situated in India,
where the installed capacity of wind power generation is
3093 MW as of 31st March 2014. One month of data has
been monitored for the system analysis.
Case 1:
In this case a particular day, i.e. 27th May 2014, is
considered for forecasting wind power. The result
obtained in this case is shown in Table 2, which
compares the actual power and the forecast power for
the same day over the time periods listed below.
TABLE 2: COMPARISON OF ACTUAL POWER AND
FORECASTED POWER ON 27TH MAY 2014.

Time in hr   Actual Power (AP) in MW   Forecast power (FP) in MW
2            815                       772
5            825                       1031
8            260                       130
11           120                       161
14           450                       344
17           1100                      957
20           900                       850
23           500                       641
The graphical representation of the forecasted power for
27th May 2014 is depicted in Figure 5.
Case 2:
In this case a particular day, i.e. 3rd June 2014, is
considered for forecasting wind power. The result
obtained in this case is shown in Table 3, which
compares the actual power and the forecast power for
that day.
TABLE 1: LIST OF INPUTS

Inputs             Specifications
Time - x1          0 to 24 hours
Humidity - x2      %
Temperature - x3   ºC
Wind speed - x4    Km/hr
Figure 5: Comparison of actual and forecasted power of
27th May 2014.

TABLE 3: COMPARISON OF ACTUAL POWER AND
FORECASTED POWER ON 3RD JUNE 2014.

Time in hr   Actual Power (AP) in MW   Forecast power (FP) in MW
2            340                       471
5            550                       571.11
8            330                       224.042
14           600                       398.301
17           1000                      1097
20           620                       738.861
23           600                       519.701

The graphical representation of the forecasted power for
3rd June 2014 is depicted in Figure 6.

Figure 6: Comparison of actual and forecasted power of
3rd June 2014.

Case 3:
In this case a particular day, i.e. 15th May 2014, is
considered for forecasting wind power. The result
obtained in this case is shown in Table 4, which
compares the actual power and the forecast power at
particular times in the day.

TABLE 4: COMPARISON OF ACTUAL POWER AND
FORECASTED POWER ON 15TH MAY 2014.

Time in hr   Actual Power (AP) in MW   Forecast power (FP) in MW
2            500                       417.604
5            370                       394.659
8            200                       456.59
11           375                       397.261
14           560                       423.281
20           1100                      1017.05
23           370                       630

The graphical representation of the forecasted power for
15th May 2014 is depicted in Figure 7.

Figure 7: Comparison of actual and forecasted power.

Observations:
The average MAE1 is calculated with respect to the
forecasted power:
MAE1 = Σ [ (FP - AP) / FP * 100 ] / (No. of observations)
The average MAE2 is calculated with respect to the
installed capacity of Gujarat state:
MAE2 = Σ [ (FP - AP) / Installed capacity * 100 ] / (No. of observations)
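As a sketch, the two error measures can be checked against the Case 1 values from Table 2. The absolute value is taken per observation so that positive and negative errors do not cancel; this is an assumption, since the paper's formulas do not show absolute-value bars:

```python
# MAE1 (relative to forecast) and MAE2 (relative to installed capacity),
# computed from the Case 1 values of Table 2.
ap = [815, 825, 260, 120, 450, 1100, 900, 500]   # actual power, MW
fp = [772, 1031, 130, 161, 344, 957, 850, 641]   # forecast power, MW
capacity = 3093                                  # installed capacity in Gujarat, MW

mae1 = sum(abs(f - a) / f * 100 for a, f in zip(ap, fp)) / len(ap)
mae2 = sum(abs(f - a) / capacity * 100 for a, f in zip(ap, fp)) / len(ap)
```

For this single day the forecast-referenced error is several times larger than the capacity-referenced error, consistent with the gap between MAE1 and MAE2 reported in Table 5 for the full month.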
TABLE 5: COMPARISON OF MAE1 AND MAE2

Number of days for analysis   MAE1 in %   MAE2 in %
24                            44          8.05
From Table 5, it is observed that the average Mean
Absolute Error (MAE) with reference to the forecasted
value is 44%, and the average MAE with reference to
the installed capacity is around 8%. The MAE with
reference to installed capacity is in line with
international practice in Europe and the USA, where the
MAE is reported as 10 to 15% [11]. Hence the proposed
ANN method provides wind power forecast results for
Indian conditions with acceptable accuracy.
From Table 5, it is also observed that the MAE with
reference to the forecasted value is on the higher side
compared with the
MAE with reference to installed capacity. International
practice is to use the MAE with reference to installed
capacity; however, the Indian Grid Code follows the
MAE with reference to the forecasted value. Hence it is
recommended to use the MAE with reference to
installed capacity to determine the accuracy of the
forecast, and the same can be used for scheduling of
conventional generation in power system operations.
VI. CONCLUSION:

High penetration and the intermittent behavior of wind
power in the electricity system provide a number of
challenges to the grid operator. This paper discusses the
theoretical methodologies underlying the physical and
statistical modeling approaches, and also discusses how
WPF efficiency can be increased by using appropriate
tools, focusing on the problem of wind power
uncertainty. The selected wind power plant should
provide data for the iterative training of the developed
model, and the forecasting results must be prepared and
compared with historical values. The quality of wind
power prediction impacts the operations of the power
system.
In this paper, an artificial neural network based on the
feed-forward & backward propagation model is
proposed to predict wind power on a short-term scale,
and the same is applied to Gujarat state, located in India.
The feed-forward & back-propagation learning
algorithm showed good accuracy for the short-term
forecasting of wind power in a practical scenario.

REFERENCES:

[1]. Ahmed Ouammi, Hanane Dagdougui, Roberto
Sacile, "Short Term Forecast of Wind Power by an
Artificial Neural Network Approach", IEEE
Transactions on Energy Conversion,
978-1-4673-0750-5/12, (c) 2012 IEEE.
[2]. G. Kariniotakis, P. Pinson, N. Siebert, "The State
of the Art in Short-Term Prediction of Wind Power
from an Offshore Perspective", in Proc. of 2004
Seatech Week, Brest, France, 20-21 Oct. 2004.
[3]. K. G. Upadhayay et al., "Short-Term Wind Speed
Forecast using Feed-Forward Back-Propagation
Neural Network", International Journal of
Engineering Science and Technology, vol. 3, no. 5,
2011, pp. 107-112.
[4]. Cameron Potter, Martin Ringrose, Michael
Negnevitsky (Sandy Bay, Tasmania, Australia),
"Short-Term Wind Forecasting Techniques for
Power Generation", Australasian Universities
Power Engineering Conference (AUPEC 2004),
26-29 September 2004, Brisbane, Australia.
[5]. P. Pinson and G. N. Kariniotakis, "Wind Power
Forecasting using Fuzzy Neural Networks
Enhanced with On-line Prediction Risk
Assessment", in Proc. of 2003 IEEE Bologna
Power Tech Conference, June 23-26, Bologna,
Italy.
[6]. M. Jabbari Ghadi, S. Hakimi Gilani, A. Sharifiyan,
H. Afrakhteh, "A New Method for Short-Term
Wind Power Forecasting", Article Code: dgr_3969,
University of Guilan, Rasht, Iran, 2011.
[7]. J. P. S. Catalao, H. M. I. Pousinho and V. M. F.
Mendes, "An Artificial Neural Network Approach
for Short-Term Wind Power Forecasting in
Portugal", in Proc. of 2009 IEEE, University of
Beira.
[8]. G. N. Kariniotakis, G. S. Stavrakakis, E. F.
Nogaret, "Wind Power Forecasting using Advanced
Neural Networks Models", IEEE Transactions on
Energy Conversion, vol. 11, no. 4, December 1996.
[9]. Makarand A. Kulkarni, Sunil Patil, G. V. Rama and
P. N. Sen, "Wind Speed Prediction using Statistical
Regression and Neural Network", Department of
Atmospheric and Space Sciences, University of
Pune, Pune 411 007, India.
[10]. M. Milligan, M. Schwartz, Y. Wan, "Statistical
Wind Power Forecasting Models: Results for U.S.
Wind Farms", NREL/CP-500-33956, May 2003.
[11]. C. Monteiro et al., "Wind Power Forecasting:
State-of-the-Art 2009", Institute for Systems and
Computer Engineering of Porto (INESC Porto),
Decision and Information Sciences Division,
Argonne National Laboratory, Argonne, Illinois,
November 6, 2009.
[12]. John Zack, "Overview of Wind Energy Generation
Forecasting", New York State Energy Research and
Development Authority, NY 12203-3656,
December 17, 2003.
[13]. Hannele Holttinen, Jari Miettinen, Samuli
Sillanpaa, "Wind Power Forecasting Accuracy and
Uncertainty in Finland", Espoo 2013, VTT
Technology 95, 60 p. + app. 8 p.
[14]. Sergio Ramos, "Short-Term Wind Forecasting for
Energy Resources Scheduling",
FCOMP–010.124-FEDER-Pest-OE/EEI/UI0760/2011.
[15]. http://posoco.in:83/docs/RRF/RRF_Procedures.pdf

Intelligent Baby Monitoring System
1Savita P. Patil, 2Manisha R. Mhetre
Instrumentation Dept.
VIT, Pune, Maharashtra, India
Email: [email protected], [email protected]
Abstract—This paper presents the design of a Baby
Monitoring System based on the GSM network. A
prototype is developed which gives a reliable and efficient
baby monitoring system that can play a vital role in
providing better infant care. The system monitors vital
parameters such as body temperature, pulse rate, moisture
condition and movement of an infant, and this information
is transferred to the parents using the GSM network.
Measurements of these vital parameters are made and,
under risk situations, conveyed to the parents with an
alarm-triggering system to initiate the proper control
actions. The system architecture consists of sensors for
monitoring vital parameters, an LCD screen, a GSM
interface and a sound buzzer, all controlled by a single
microcontroller core.

Keywords- Baby monitoring, vital parameters,
microcontroller, GSM network.

I. INTRODUCTION

In the past few decades, female participation in the
labour force in the industrialized nations has greatly
increased. Subsequently, infant care has become a
challenge to many families in their daily life. A mother
always worries about the well-being of her baby [1].
As seen in India, both parents often need to work and
look after their babies/infants, so there is more workload
and stress on such families, especially on the female
counterparts. If a system is developed which
continuously gives updates about their infants during
illness or during the normal routine, it will be of great
help to such family members, as they can work in a less
stressful environment and give more fruitful output.
Urgent situations can also be quickly noticed and
handled in less time. Usually, when a young baby cries,
the cause is one of the following: they are hungry, tired,
not feeling well, or need their diaper changed. So we
developed a prototype which can monitor the activities
of babies and/or infants, determine one of the above
causes, and give this information to their parents [2].
This proposed system gives peace of mind to loved ones
when they are away from their infant, as they can get an
updated status of the infant's wellbeing. Another
advantage is that the programmability of the alarm
conditions can alleviate any inaccuracy of a normal
sensor. Communication is done through a GSM
interface, in which the Short Messaging Service (SMS)
is a fundamental part of the original GSM system and its
successors. In this way, from just a few of an infant's
biomedical parameters, parents can get information
about their child's health.

II. LITERATURE SURVEY

Many home-care systems are available, but the majority
of these systems are specially designed for aged people
and patients. These systems can monitor their health
status, automatically send out emergency signals, and
have other functions. However, the caring methods for
infants are not the same. Children and adults require
different types of care, because infants are totally
dependent on someone else for their normal functions.
Infants cannot give any feedback about their discomfort
or health complaints and cannot express themselves like
older people; e.g., when an infant has a fever, he/she can
only express his/her discomfort by crying. Hence, a
home-care system specially designed for infants is
today's need, and it would substantially lighten parents',
especially mothers', burden. In support of this
requirement, many research papers and patents for
healthcare applications were studied with the intention
of finding possible solutions for taking care of the
infant. One author developed a system based on the
commercial GSM network. Vital parameters such as
body temperature (measured using the LM35 [1,6]),
heart rate (using an IR transmitter and receiver),
respiratory rate (using a piezo film sensor located on the
patient's chest) and blood pressure are sensed, amplified
with variable gain, filtered and given to a
microcontroller. A remote subsystem with a GSM
module receives the data, which is then sent to a server
through a USB port; the data are stored on the server
and remotely displayed on a web site. In an SMS-based
telemedicine system, the patient's temperature is
measured by the MLX90614 infrared temperature
sensor, and ECG signals are acquired with electrodes
interfaced with a PIC16F877 microcontroller [3]. A
wearable hardware gadget has been developed which
captures the biological status of the baby through
motion, temperature and heart rate sensors (both optical
and pressure), controlled by a microcontroller and
connected to a Bluetooth module to provide wireless
communication [5]. In paper [14], the temperature and
humidity parameters are monitored: a skin-temperature
probe and an air-temperature probe were used to
monitor the temperature
around the baby, and the humidity of the incubator was
monitored using a humidity sensor from the SYHS2XX
series. These signals are interfaced to a PIC18F4550
microcontroller, and a GSM modem is used for
communication.
Patents were also searched to find novelty in baby-care
monitoring systems. In one design (Patent No.
2002/0057202 A1) [16], a system is developed which
monitors the breathing, fever and volume of a baby
sleeping in the crib. A module with three sensors is
attached to the diaper; these signals are amplified and
transmitted, and at the remote station a receiver and
multiplexer apply the signal to an audible alarm to alert
the mother to take appropriate action. In U.S. Patent No.
6,043,747 (Altenhofen) [17], a parent unit can record
messages which may then be transmitted to the baby
unit to soothe or calm the baby. The baby unit includes a
microphone and can transmit sounds to the parent unit;
however, in order for the parent to detect a problem with
the child, the parent must constantly monitor the sounds
being transmitted from the baby unit. The next patent,
U.S. Patent No. 6,450,168 B1 [18], includes an infant's
sleep blanket/garment which is offered as either a sleep
sack or a sleep shirt, depending on the age of the infant:
a sack with no arm holes for newborns, and with arm
holes and sleeves for older infants. Here thermometers
are incorporated to monitor the infant's temperature as
he sleeps. In U.S. Patent No. 4,895,162 [19], a soft belt
containing a pair of electrodes is positioned around the
torso of an infant such that the electrodes are in position
to monitor vital signs such as respiration and pulse;
monitoring lead wires connect the electrodes to a
monitor unit proximate to the infant.
III. SYSTEM ARCHITECTURE

The architecture of the system consists of both hardware
and software. The block diagram is shown in Fig. 1, and
the hardware components were assembled according to
this block diagram. The code is written in embedded C
and is burnt into the microcontroller.

Fig. 1. Block Diagram of Proposed System

The following subsections provide more details of the
components used in our prototype:

A. Temperature Sensor

The human body needs a special type of sensor for
reliable readings, which led to the choice of the LM35
temperature sensor in our prototype [1,6]. It operates at
3 to 5 V and can measure temperature in the range of
-40 °C to +125 °C, which is sufficient for the targeted
body-temperature range. It has a linear response and
allows easy signal conditioning. The sensor's output is
an analog DC voltage signal which is read by the
microcontroller using an analog pin linked to an ADC.
The ADC used has a resolution of 10 bits (1024 levels),
with a sample rate of 9600 Hz and an input voltage
range depending on ground and Vcc. The output voltage
of the LM35 is analog, in the linear range of -1 V to 6 V
with an accuracy of ±0.5 °C, and can be converted from
volts to degrees Celsius and Fahrenheit.
The placement of sensors is also important for accurate
measurements. In our prototype the sensor is placed in
the socks of the infant, wrapped in cotton so that no
irritation is caused.
The temperature-sensor readings and the actual readings
are listed in Table I:

TABLE I

Serial No   Actual Temp (°C)   Practical Temp (°C)
1           32                 31
2           32.5               33
3           36.1               35.6
4           36.7               37.2

B. Pulse Rate Sensor

The components used are a 5 mm photodiode and a
5 mm light-emitting diode. The system consists of an IR
transmitter and receiver, a high-pass filter, an amplifier
and a comparator. Using these circuit components, the
biological signal of a few millivolts is converted to a
larger magnitude of about one to two volts and then sent
to the microcontroller.

Fig. 2. Pulse Rate Sensor Circuit
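The ADC-to-Celsius conversion for the LM35 described in the temperature-sensor subsection can be sketched as follows, assuming the 10-bit ADC mentioned above, a 5 V reference, and the LM35's nominal scale factor of 10 mV per degree Celsius:

```python
def lm35_celsius(adc_counts, vref=5.0, adc_bits=10):
    """Convert a raw ADC count from the LM35 to degrees Celsius.
    The LM35 nominally outputs 10 mV per degree C."""
    volts = adc_counts * vref / (2 ** adc_bits - 1)
    return volts / 0.010

def celsius_to_fahrenheit(c):
    """Standard Celsius-to-Fahrenheit conversion."""
    return c * 9.0 / 5.0 + 32.0

# Example: a hypothetical reading of 75 counts with a 5 V reference
# lands in the body-temperature range (roughly 36.7 degrees C).
t = lm35_celsius(75)
```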
The pulse rate is measured from the finger using optical
sensors and displayed on the LCD. The transmitter-sensor
pair is clipped on one of the fingers of the subject.
The pulse rate signal is applied to the non-inverting
input terminal as shown in Fig. 2. The voltage gain of
the non-inverting amplifier is given by 1 + Rf/R1; here
Gain = 1 + 180/1 = 181.
This amplified signal is given to a comparator circuit in
which a voltage divider is used. The voltage at the
non-inverting input is compared with the reference
voltage, and the resulting voltage is applied to the base
of a transistor through a 100 Ohm resistor which limits
the base current. As soon as the voltage across this
resistor rises beyond 0.7 V the transistor turns on, the
output goes to 0 V, and LED D2 glows.
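The gain calculation above (1 + Rf/R1, with Rf = 180 kOhm and R1 = 1 kOhm assumed from the 180/1 ratio, giving 181) can be checked with a small sketch; the 1.5 mV input level is hypothetical:

```python
def noninverting_gain(rf_ohms, r1_ohms):
    """Voltage gain of a non-inverting op-amp stage: 1 + Rf/R1."""
    return 1.0 + rf_ohms / r1_ohms

gain = noninverting_gain(180e3, 1e3)   # 181, as used for the pulse signal
mv_in = 1.5                            # hypothetical millivolt-level pulse signal
v_out = mv_in * gain / 1000.0          # amplified signal in volts
```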
The pulse-rate sensor readings and the actual readings are
listed in Table II.
TABLE II
PULSE RATE READINGS
Serial No | Actual pulse rate | Practical pulse rate
1         | 72                | 78
2         | 66                | 72
3         | 70                | 76
4         | 54                | 60
C. Moisture Detection Sensor
To determine the moisture condition, i.e. urine detection,
two pairs of copper electrodes are placed under the cloth
on which the baby sleeps, and the resulting signal is given
to the microcontroller. For urine detection, a transistor-as-switch
circuit is used, as shown in Fig. 3: when urine is present the
switch is closed and the transistor turns on; when urine is
absent the switch is open and the transistor turns off.
Fig. 3. Moisture Detection Circuit
D. Motion Sensor
An accelerometer is an electromechanical device that
measures acceleration forces. These forces may be
static, like the constant force of gravity pulling at our
feet, or dynamic, caused by moving or vibrating the
accelerometer. By measuring the static acceleration due
to gravity, one can find the angle at which the device is
tilted with respect to the earth; by sensing the dynamic
acceleration, one can analyze how the device is moving.
Accelerometers use the piezoelectric effect: they contain
microscopic crystal structures that are stressed by
accelerative forces, generating a voltage. A three-axis
accelerometer identifies movement along the three axes,
i.e. the x-, y- and z-axes. The accelerometer used in this
system is the ADXL335 [20], a small, low-profile package
that can measure a minimum full-scale range of +/-3 g, as
shown in Fig. 4. The infant's movement is monitored by
placing the accelerometer properly; it is positioned in the
infant's socks so that motion is detected accurately.
Fig. 4. ADXL335 Accelerometer
E. LCD Screen
A 16 x 2 LCD module is used in our prototype. It has 2
rows and 16 columns, so 32 characters in total can be
displayed. It has two operating modes, one using all 8
data pins and the other using only 4 of them; the 4-bit
mode was used to drive the LCD screen. All sensor
outputs are displayed continuously as they are measured.
F. GSM Module
GSM (Global System for Mobile communication) is
a digital mobile telephony system. With a GSM module
interfaced to the system, short text messages can be sent
to the required authorities as per the application. The GSM
module is fitted with a SIM from a mobile service provider
and sends SMS to the respective authorities as programmed.
This technology makes the system wireless, with no
specified range limit. Whenever the safe range of a vital
parameter of the infant is violated, the programmed
microcontroller produces an alarm and the GSM modem
interfaced with the microcontroller sends an alert SMS
to the parent's mobile number over the wireless link.
G. Controller
The PIC18F4520 is an 8-bit microcontroller with an
on-chip eight-channel 10-bit Analog-to-Digital
Converter (ADC). The amplified and conditioned sensor
signals are fed to the microcontroller.
IV. SOFTWARE DETAILS
The PIC18F4520 is used as the microcontroller of the
proposed system. The sensors, namely the pulse rate sensor,
accelerometer, temperature sensor, moisture sensor and
sound detector, are interfaced with the analog channels of
the microcontroller's ADC. The values taken from these
sensors are displayed after a delay of 2 ms. The power-on
reset function of the PIC microcontroller resets all
values. The microcontroller reads the ADC output every
2 seconds. The infant's temperature is read by the
microcontroller, and the software sets an upper
temperature limit; if that limit is crossed, the buzzer
turns on and an alert message is sent to the mother.
Similar conditions are implemented for the other sensors.
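The alert logic described above can be sketched as follows. This is an illustrative model only: the threshold values, the safe pulse band, and the `check_readings` function are assumptions, not taken from the paper, and the actual firmware runs in C on the PIC18F4520.

```python
# Assumed limits -- the paper states an upper temperature limit exists
# but does not give numeric values.
TEMP_UPPER_C = 38.0            # hypothetical upper temperature limit
PULSE_SAFE_BPM = (70, 160)     # hypothetical safe pulse-rate band

def check_readings(temp_c, pulse_bpm, moisture):
    """Return the alert messages that would trigger the buzzer and SMS."""
    alerts = []
    if temp_c > TEMP_UPPER_C:
        alerts.append("TEMPERATURE HIGH")
    lo, hi = PULSE_SAFE_BPM
    if not lo <= pulse_bpm <= hi:
        alerts.append("PULSE RATE ABNORMAL")
    if moisture:
        alerts.append("MOISTURE DETECTED")
    return alerts
```

In the real system each non-empty result would switch on the buzzer and hand an SMS to the GSM module.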
Fig. 6. Actual Implemented System
V. RESULTS
The system was tested carefully on an infant, and the
results were found to match those measured by standard
instruments. The parents' consent was obtained before
testing the system on the infant. Snapshots of the display
were taken during execution; the system is a complete
hardware design, and the data available on the cell phone
and the LCD display were captured. The test results given
below show successful implementation of the system. Fig. 5
and Fig. 6 show the hardware module and the actual
implemented system. Figs. 7, 8 and 9 show sample readings
from the infant on the LCD attached to the module at the
infant's side; these readings matched those taken by
standard instruments. Fig. 10 and Fig. 11 show the messages
received on the parent's cell phone when an abnormal
condition exists: the messages indicate that the temperature
is high and that a moisture condition exists.
Fig. 5. Hardware Module of the Implemented System
Fig. 7. LCD displaying Infant's Temperature
Fig. 8. LCD displaying Infant's Urine Detection Condition
Fig. 9. LCD displaying Infant's Pulse Rate Value
Fig. 10. SMS received on parent's cell phone
Fig. 11. Message received on parent's cell phone
VI. CONCLUSION
The proposed infant monitoring system is inexpensive
and simple to use, and it can improve the quality of
infant-parent communication while giving parents a
feeling of assurance. The constant capture of multiple
biological parameters of the baby, and the analysis of
its overall health, helps the mother understand the
baby's internal status. Because GSM technology is used,
users can communicate over long distances, making this
a convenient system for monitoring the baby's health
condition from any distance.
REFERENCES
[1]. J. E. Garcia and R. A. Torres, "Telehealth mobile system", IEEE Conference on Pan American Health Care Exchanges, May 4, 2013.
[2]. Nitin P. Jain, Preeti N. Jain, and Trupti P. Agarkar, "An Embedded, GSM based, Multiparameter, Realtime Patient Monitoring System and Control", IEEE World Congress on Information and Communication Technologies, Nov 2, 2013.
[3]. Ashraf A. Tahat, "Body Temperature and Electrocardiogram Monitoring Using SMS-Based Telemedicine System", IEEE International Conference on Wireless Pervasive Computing (ISWPC), 13 Feb 2009.
[4]. Jia-Ren Chang Chien, "Design of a Home Care Instrument Based on Embedded System", IEEE International Conference on Industrial Technology (ICIT), 24 April 2008.
[5]. Elham Saadatian, Shruti Priya Iyer, Chen Lihui, Owen Noel Newton Fernando, Nii Hideaki, Adrian David Cheok, Ajith Perakum Madurapperuma, Gopalakrishnakone Ponnampalam, and Zubair Amin, "Low Cost Infant Monitoring and Communication System", IEEE International Conference on Science and Engineering Research, 5-6 Dec. 2011.
[6]. Baker Mohammad, Hazem Elgabra, Reem Ashour, and Hani Saleh, "Portable Wireless Biomedical Temperature Monitoring System", IEEE International Conference on Innovations in Information Technology (IIT), 19 March 2013.
[7]. N. M. Z. Hashim, "Development of Optimal Photosensors Based Heart Pulse Detector", International Journal of Engineering and Technology (IJET), Aug-Sep 2013.
[8]. Nur Ilyani Ramli, Mansour Youseffi, and Peter Widdop, "Design and Fabrication of a Low Cost Heart Monitor Using Reflectance Photoplethysmogram", World Academy of Science, Engineering and Technology, 08 2011, pp. 417-418.
[9]. Carsten Linti, Hansjurgen Horter, Peter Osterreicher, and Heinrich Planck, "Sensory baby vest for the monitoring of infants", International Workshop on Wearable and Implantable Body Sensor Networks (BSN 2006), 3-5 April 2006.
[10]. Sharief F. Babiker, Liena Elrayah Abdel-Khair, and Samah M. Elbasheer, "Microcontroller Based Heart Rate Monitor Using Fingertip Sensors", UofKEJ, Vol. 1, Issue 2, pp. 47-51, October 2011.
[11]. K. Padmanabhan, "Microcontroller-Based Heart-Rate Meter", Electronics For You, www.efymag.com.
[12]. S. Deepika and V. Saravanan, "An Implementation of Embedded Multi Parameter Monitoring System for Biomedical Engineering", International Journal of Scientific & Engineering Research, Volume 4, Issue 5, May 2013.
[13]. Sowmyasudhan S and Manjunath S, "A Wireless Based Real-time Patient Monitoring System", International Journal of Scientific & Engineering Research, Volume 2, Issue 11, November 2011.
[14]. N. S. Joshi, R. K. Kamat, and P. K. Gaikwad, "Development of Wireless Monitoring System for Neonatal Intensive Care Unit", International Journal of Advanced Computer Research, Volume 3, Number 3, Issue 11, September 2013.
[15]. V. S. Kharote-Chavan and Prof. Satyanarayana Ramchandra Rao, "Multiparameter Measurement of ICU Patient Using GSM and Embedded Technology", International Journal of Science and Engineering, Volume 1, Number 2, 2013.
[16]. Ronen Luzon, "Infant Monitoring System", May 16, 2002, Patent No. US 2002/0057202 A1.
[17]. Cynthia L. Altenhofen, "Baby Monitor System", Mar. 28, 2000, Patent No. 6,043,747.
[18]. Kellie I. Nguyen, "Infant Sleeping Blanket/Garment for Use with Medical Devices", Sep. 17, 2002, Patent No. US 6,450,168 B1.
[19]. Maria Dolliver, "Apnea Monitor Belt", Jan. 23, 1990, Patent No. 4,895,162.
[20]. ADXL335 Accelerometer Datasheet.
[21]. LM35 Datasheet.

A Review of Factors and Data Mining Techniques for Employee
Attrition and Retention in Industries
1K. Mohammed Hussain, 2P. Sheik Abdul Kadher
1Research Scholar, 2Professor & Head of Department
Department of Computer Applications
B.S Abdur Rahman University, Chennai, Tamil Nadu, India
Email: [email protected]
Abstract — In an increasingly data-driven economy, it is
obvious that attracting and retaining the talent pool has
become a key competitive differentiator and a matter of
dominant importance. The primary objective of this study
is to provide background on attrition and to enlist the
various factors that make staff dissatisfied. The paper also
provides inputs and reasoning on employees' satisfaction
with their jobs and working conditions, and seeks out the
areas that are important for Indian IT industries. The
secondary objective of this paper is to provide a detailed
literature review of some of the data mining techniques
suitable for employee attrition prediction and retention.
Keywords: Data Mining, Employee Attrition, Prediction, HR
I. INTRODUCTION
Employee turnover refers to the proportion of employees
who leave an organization over a period of one or two
years, expressed as a percentage of total workforce
numbers. This term is used to encompass all leavers, both
voluntary and involuntary, including those who resign,
retire or are made redundant, in which case it may be
described as 'overall' or 'crude' employee turnover. It is
also possible to calculate more specific breakdowns of
turnover data, such as redundancy-related turnover or
resignation levels, with the latter particularly useful for
employers in assessing the effectiveness of people
management in their organizations. Retention relates to
the extent to which an employer retains its employees
and may be measured as the proportion of employees
with a specified length of service (typically one year or
more) expressed as a percentage of overall workforce
numbers.
Calculating your company's employee attrition rate
allows you to determine the percentage of employees who
left your business over a specified period of time, usually
one year. Attrition includes all employees who leave the
company, whether the departure was voluntary or
involuntary. An employee who chooses to leave a
company for another job is an example of voluntary
attrition; an employee fired by the company is an
example of involuntary attrition.
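The calculation described above can be written down directly. This is a minimal sketch; the function name and the sample figures are illustrative, not taken from the paper.

```python
def attrition_rate(leavers: int, average_headcount: float) -> float:
    """Attrition rate (%) over a period = leavers / average headcount * 100.

    `leavers` counts all departures, voluntary and involuntary.
    """
    return 100.0 * leavers / average_headcount

# Hypothetical example: 45 employees left against an average headcount of 500
# over one year, giving an annual attrition rate of 9.0%.
rate = attrition_rate(45, 500)
```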
Both academic and industrial researchers have started
focusing on employee retention and voluntary turnover.
Many researchers have examined the reasons for
voluntary turnover and retention of professional sales
force employees. Far fewer have examined the effects of
Human Resource Development (HRD) interventions on a
sales force over an extended period of time.
One case study, conducted for an industrial-sector
manufacturer headquartered in India, examined the
entire population of technical sales employees. The
number of observations was extensive: over 20,000
observations associated with the 1,675 subjects analyzed
for the study. The longitudinal period, the size of the
population, and the subject focus of this study distinguish
it from previously identified studies of employee
voluntary turnover. The unique aspect of this study,
however, lies in the number of variables and the variety
of statistical treatments of employee turnover through
the data-mining process.
One leading UK annual resourcing and talent planning
survey report gives a median 'crude' or 'overall' employee
turnover rate for the UK sample collected, as well as the
median turnover figure relating purely to those who 'left
voluntarily' (that is, resignations). While voluntary
turnover rates have decreased recently as a result of
challenging economic conditions, the flip side of this coin
is that redundancy-related turnover has become more
common. However, skills shortages persist for certain
occupational groupings even during troubled economic
times, so it is important to be aware of trends in turnover
rates for different groups rather than simply focusing on
'headline' figures.
Turnover levels can vary widely between occupations
and industries. The highest levels are typically found in
retailing, hotels, catering and leisure, call centres and
among other lower paid private sector services groups.
Levels also vary from region to region. The highest
turnover rates tend to be found where unemployment is
lowest and where it is relatively easy for people to secure
desirable alternative employment.
To understand employees' level of acceptance of their
jobs and working circumstances, the organization should
identify the factors, such as policies or norms, that cause
employee disappointment. Apart from this, the
organization should also find the areas where the
company is lagging and identify the reasons for
attrition in Indian industries. Organizations then work on
methodologies and techniques to reduce attrition.
The aim of this paper is to examine factors such as
remuneration, superior-subordinate relationships, growth
opportunities, facilities, policies and procedures,
recognition, appreciation, suggestions and co-workers,
which help in understanding the attrition level within
organizations and the factors involved in retaining
employees. The study additionally reviews multiple papers
on the factors and issues related to employee attrition and
provides a detailed literature review of research on
employee attrition prediction and the application of data
mining techniques.
II. LITERATURE REVIEW
Nagadevara et al. (2008) explored the relationship of
withdrawal behaviors such as lateness and absenteeism,
job content, tenure and demographics to employee
turnover in a rapidly growing sector, the Indian software
industry. A distinctive aspect of this research was the
application of predictive data mining techniques, namely
artificial neural networks, logistic regression,
classification and regression trees, C5.0 classification
trees and discriminant analysis. The authors worked on
sample data from 150 employees in a large software
organization. The results of the study clearly illustrate a
relationship between withdrawal behaviors and employee
turnover. The study also raised several issues for future
research: further work could explicitly collect data on
demographic variables across a large sample of
organizations to assess their relationship with turnover,
and more analysis is recommended on large-scale,
longitudinal data covering the variables that past
academic research has linked to turnover.
Hamidah et al. (2011), in their research paper, detail the
background of data mining, data mining in human
resource applications, and an overview of talent
management. Based on their findings, there should be
wider focus and research on different types of human
resource applications and data mining techniques.
Jayanthi et al. (2008) presented the role of data mining in
Human Resource Management Systems (HRMS). The
paper indicates that a deep understanding of the knowledge
concealed in Human Resource (HR) data is vital to a
firm's competitive position and organizational decision
making. Analyzing the patterns and relationships in
human resource data is quite rare; HR data is usually
used only to answer queries. Because human resource
data primarily concerns transactional processing
(getting data into the system and recording it for
reporting purposes), HRMS needs to become more
concerned with computable data. The authors show how
data mining discovers and extracts useful patterns from
this huge data set to find noticeable patterns in human
resources. The paper demonstrates the ability of data
mining to improve the quality of the decision-making
process in HRMS and discusses whether data-mining
capabilities should lead to increased performance that
sustains competitive advantage.
Wei-Chiang and Ruey-Ming (2007) explored the
feasibility of applying the logit and probit models, which
have been effectively applied to nonlinear classification
and regression problems, to estimating employee
voluntary turnover. A numerical example involving
voluntary turnover data for 150 professional employees
drawn from a motor marketing enterprise in central
Taiwan was used, with a serviceable sample size of 132.
The data set was divided into two portions, a modeling
set and a testing set, for both the logit and probit models.
The testing set was not used for model building or
selection; it was used to estimate model performance on
forthcoming data. The experimental results of their
investigation showed that the proposed models have high
forecasting capability and that the two (logit and probit)
models provide a promising alternative for predicting
employee turnover in human resource management. The
authors recommended that turnover research move in
new directions based on new expectations and
methodologies, which would surface new issues and
problems, and proposed that neural networks and support
vector machines could be used for the classification
problem of detecting employee continuity, i.e. identifying
who stays longer and who leaves sooner.
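The logit model discussed above scores the probability that an employee leaves by passing a linear combination of predictors through the logistic (sigmoid) function. The sketch below illustrates the mechanism only; the predictors and coefficient values are hypothetical, not the fitted values from the cited study.

```python
import math

def logit_probability(x, coefficients, intercept):
    """P(employee leaves) under a logit model: sigmoid of the linear predictor.

    x:            list of predictor values (e.g. standardized tenure, satisfaction)
    coefficients: fitted weights, one per predictor (illustrative here)
    """
    z = intercept + sum(b * xi for b, xi in zip(coefficients, x))
    return 1.0 / (1.0 + math.exp(-z))

# With all predictors at zero and zero intercept, the model is indifferent
# and returns a probability of 0.5.
p_neutral = logit_probability([0.0, 0.0], [-0.8, -1.2], 0.0)
```

A probit model differs only in using the standard normal CDF in place of the sigmoid.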
In a dissertation by Marjorie Laura Kane-Sellers (2007),
the researcher carried out a study to explore the
variables impacting employee voluntary turnover in the
North American professional sales force of a Fortune 500
industrial manufacturing firm. By studying VTO
(Voluntary Turnover), the intention was to gain a better
understanding of the HRD (Human Resource
Development) interventions that could improve
employee retention. The firm provided the employee
database for all members of the professional technical
sales force over a 14-year longitudinal period. The
original database contained 21,271 discrete observations
identified by unique employee clock number. The study
design combined descriptive, correlation, factor analysis,
multiple linear regression, and logistic regression
techniques to examine relationships, as well as provide
some predictive characteristics among the variables.
Initially, descriptive statistical techniques were used to
develop baseline turnover rates, retention rates, and years
of tenure. The mean tenure for the population, as well as
for each ethnic, gender, assignment location, supervisor,
educational level, and sales training participation group,
was calculated. Hierarchical descriptive techniques also
provided the mean salary by job title, ethnicity, gender,
educational level, and sales training participation. In this
study, data-mining analysis started with descriptive
analysis techniques.
The dynamic nature of Human Resource Management in
the ITES (BPO) sector has stimulated many researchers to
study the various issues related to high employee
attrition in the BPO industry. Anand et al. state that
employee attrition discloses a company's internal
strengths and weaknesses. Vijay and Sekar found that
research studies capturing IT employees' knowledge of
ideal computer workstation arrangements and optimal
posture while working on a computer are very limited in
the literature. Mohamed et al. observed that, from an
organizational perspective, the higher the
intra-organizational trust, the more satisfied and
productive employees tend to be. New employees need to
be continuously added, incurring further costs in training
them and getting them aligned to the company
environment. Gupta reports that attrition is a burning
problem for the promising BPO industry, especially
because it prevents full utilization of human resources
and wastes much time, money and assets [10]. Mike
observed that staff attrition (or turnover) represents
significant costs to technology and business process
outsourcing companies: high attrition rates drive up
training costs and increase human resources, recruiting,
and output costs [11]. Khanna gives an overview of the
BPO industry and analyzes how attrition is the
predominant challenge facing it [12]. Agarwal feels that
the challenge in the BPO industry is lack of discipline:
BPO employees belong to a generation that does not like
rules; they have had multiple choices from the time they
were born, and the minute their dignity and self-respect
are hurt they are bound to leave, which is probably why
the attrition rate is so high [13]. Kumar evaluated that the
present salary package in the BPO industry is not as
lucrative as in other industries. Radhika observes that
40% of attrition happens in the first 120 days of
hiring [14].
The importance of employee retention and the cost of
employee departure are well known in the literature.
An employee's resignation means the employee leaves
with his or her implicit knowledge, which is a loss of
social capital. Ongori (2007) and Amah (2009) indicated
that attrition increases operating costs and the cost of
induction and training. The literature indicates various
factors behind why employees quit their jobs, and there
is much discussion on the relationship between these
factors and attrition. For example, Mobley's (1977) study
focused on the relationship between job satisfaction and
attrition. Mohammad (2006) worked on the relationship
between organizational commitment and attrition.
Another study, on the connection between work
satisfaction, stress, and attrition in the Singapore
workplace, was conducted by Tan and Tiong (2006).
Steijn and Voet (2009) also presented the relationship
between supervisor and employee attitude in their study.
A study in China by Zhou, Long and Wang (2009)
examined the relationship between job satisfaction,
organizational commitment and career commitment.
The results of each study differed, as each was carried
out in a different country (with different socio-economics
and culture), in a different setting, for different
organizations, and with different independent variables.
III. IMPORTANT FACTORS
Review of various research studies indicates that
employees quit for a variety of reasons, which can be
classified as follows:
A. Demographic Factors
Various studies focus on demographic factors, examining
attrition across age, marital status, gender, number of
children, education, experience, and employment tenure.
B. Individual Factors
Individual factors such as health problems, family-related
issues, children's education and social prestige contribute
to attrition intentions. However, very little empirical
research is available on individual factors. Another
important variable, 'job-hopping', also contributes to
attrition intentions. Unrealistic employee expectations
are another important individual factor contributing to
attrition: several people hold impractical expectations of
the organization when they join, and when these
expectations are not realized the worker becomes
disappointed and quits. One individual feature that has
been overlooked in many research studies is the inability
of employees to follow the organization's timings, rules,
regulations, and requirements, as a result of which they
resign.
Masahudu (2008) identified another important variable,
employers' geographic location, that may determine
attrition. The proximity of employees to their families
and significant others may be a reason to look elsewhere
for opportunities or to stay with their current employers.
For instance, two families living and working across two
time zones may decide to look for opportunities closer to
each other.
C. Propel Factors
Propel factors are features that drive the employee
toward withdrawing from employment. In the literature
they are also called controlled factors, because they are
internal and can be controlled by organizations.
According to Loquercio (2006), it is relatively
uncommon for people to leave jobs in which they are
happy, even when offered higher pay elsewhere.
Most staff have a preference for stability. However,
employees are sometimes 'propelled' by disappointment
in their present jobs to seek alternative employment. On
the basis of the available literature, propel factors can be
classified as follows:
D. Organizational Factors
Many factors attached to an organization act as propel
factors for employees to resign. Among those derived
from various studies are: salary, benefits and services;
size of the organization (the number of staff); location of
the organization (small or big city); the nature and
understanding of the organization; stability of the
organization; the communication system in the
organization; management practices and policies; and
employee empowerment. Another propel variable is
organizational justice.
According to Folger & Greenberg (1985), organizational
justice means fairness in the workplace. There are two
forms of organizational justice: distributive justice, which
describes the fairness of the outcomes an employee
receives, and procedural justice, which describes the
fairness of the procedures used to determine those
outcomes.
For example, Beikzadeh and Delavari (2004) used data
mining techniques to suggest improvements in
higher-education schemes. Al-Radaideh et al. (2006)
also used data mining techniques to predict university
students' performance. Many medical researchers, on the
other hand, have used data mining techniques on
enormous files of patient data and histories; Lavrac
(1999) was one such researcher. Mullins et al. (2006)
also worked on patient data to extract illness association
rules using unsupervised methods.
Karatepe et al. (2006) defined the performance of a
frontline employee as his or her productivity compared
with peers. Schwab (1991), on the other hand, defined
the performance of the university teachers included in
his study as the number of papers cited or published.
Overall, performance is usually measured by the units
produced by the employee in his or her job within a
given period of time.
Researchers like Chein and Chen (2006) have worked on
improving employee selection by building a model, using
data mining techniques, to predict the performance of
new applicants based on characteristics selected from
their CVs, job applications and interviews. The predicted
performance could then be a basis for decision makers in
deciding whether or not to employ these applicants.
Previous studies identified several characteristics
affecting employee performance. Some of these attributes
are personal, others educational, and still others
professional. Chein and Chen (2006) used numerous
attributes to predict employee performance, specifying
age, gender, marital status, experience, education, major
subjects and school tier as possible factors that might
affect performance. They then excluded age, gender and
marital status, so that no discrimination would exist in
the selection procedure. As a result of their study, they
found that employee performance is strongly affected by
education degree, school tier, and job experience.
Kahya (2007) also investigated the factors that affect job
performance. The researcher reviewed earlier studies
describing the significance of experience, education,
salary, working conditions and job satisfaction for
performance. The research found that several features
affect an employee's performance. The position or grade
of the employee in the company had a strongly positive
effect on performance. Working circumstances and
environment, on the other hand, showed both positive
and negative relationships with performance: highly
educated and qualified employees showed disappointment
with bad working conditions, which damaged their
performance, while employees with low education showed
high performance in spite of the poor conditions. In
addition, experience showed a positive relationship in
most cases, while education did not yield a strong
relationship with performance.
In their study, Salleh et al. (2011) tested the influence of
motivation on job performance for state government
employees in Malaysia. The study showed a positive
connection between affiliation motivation and job
performance: people with higher affiliation motivation
and strong relationships with colleagues and managers
tend to perform much better in their jobs.
Jantan et al. (2010) discussed in their paper a Human
Resources (HR) system architecture to forecast an
applicant's talent based on information filled in the
human resource application and past experience, using
Data Mining (DM) techniques. The goal of the paper was
to find a way to predict talent in Malaysian higher
institutions. They specified certain features to be
considered as attributes of their system, such as
professional qualification, training and social
responsibility. Then, several data mining techniques
________________________________________________________________________________________________
International Conference on Engineering and Applied Science - ICEAS, 2014
ISBN: 978-3-643-24819-09, Bangalore, 13th July, 2014
22
A Review of Factors and Data Mining Techniques for Employee Attrition and Retention in Industries
________________________________________________________________________________________________
(hybrid) were applied to find the prediction rules; ANN,
Decision Tree and Rough Set Theory are examples of the
selected techniques.
industry", Expert Systems with Applications, In
Press.
[3]
Cho, S., Johanson, M.M., Guchait, P. (2009).
"Employees intent to leave: A comparison of
determinants of intent to leave versus intent to
stay", International Journal of Hospitality
Management, 28, pp374-381.
[4]
CRISP-DM, (2007). Cross Industry Standard
Process for Data Mining: Process Model.
http://www.crisp-dm.org/process/index.htm.
Accessed 10th May 2007.
[5]
Delavari, N., PHON-AMNUAISUK S., (2008).
Data Mining Application in Higher Learning
[6]
Dreher, G.F. (1982) “The Role of Performance in
the Turnover Process”, Academy of Management
Journal, 25(1), pp. 137-147.
[7]
Han, J., Kamber, M., Jian P. (2011). Data Mining
Concepts and Techniques. San Francisco, CA:
Morgan Kaufmann Publishers.
[8]
Ibrahim, M.E., Al-Sejini, S., Al-Qassimi, O.A.
(2004). “Job Satisfaction and Performance of
Government Employees in UAE”, Journal of
Management Research, 4(1), pp. 1-12.
[9]
Anand V.V., Saravanasudhan R. and Vijesh R.,
Employee attrition - A pragmatic study with
reference to BPO Industry, In Advanes in
Engineering,
Science
and
Management
(ICAESM), 2012 International Conference on
Advances in Engineering, 42-48 IEEE (2012)
[10]
Vijay A. and Sekar A., New Quality Rating
system for the Computer Workstation
arrangements of the Information Technology
Industries: A Six Sigma Model Approach, Res. J.
of Management Sciences, 2(7), 15-21 (2013).
[11]
Mohamed M.S., Kader M. A. and Anisa H.,
Relationship
among
Organizational
Commitment, Trust and Job Satisfaction: An
Empirical Study in Banking Industry, Res. J. of
Management Sciences, 1(2), 1-7 (2012).
[12]
Gupta S.S. Employee Attrition and Retention:
Exploring the Dimensions in the urban centric
BPO Industry”, unpublished Doctoral Thesis,
Retrieved
from
http://www.jiit.ac.in/uploads/Ph.D%20Santoshi%20Sen.pdf (2010)
[13]
Mike. Employee attrition in India [Online
Exclusive], Sourcing Line, Retrieved from
IV. CONCLUSION
Data Mining is an area full of exhilarating opportunities
for researchers and practitioners. This field assists in
industries with well-organized and effective ways to
improve industrial effectiveness and employee
efficiency. Data mining is a significant tool for helping
organizations improve the decision making and
analyzing new patterns and dealings among a huge
amount of data. A broad sense of the types of research
presently being lead in Data Mining was presented, from
smearing data mining for understanding employee
retention and attrition to finding new methods of making
personalized learning recommendations to each
individual employee. Many chances exist to study DM
from an industrial unit of analysis to individual courselevels of analysis. Some effort is strategic in nature and
some of the research is tremendously technical. A deep
understanding of the facts and data hidden in Human
Resource data is vital to a firm's competitive position and
organizational decision making. Analyzing the patterns
and relationships in Human Resource data is quite rare.
The HR data is usually treated to answer queries.
Because HR data primarily concerns transactional
processing getting data into the system, recording it for
reporting purposes it is necessary for Human Resource
Management Systems to become more concerned with
the quantifiable data. This paper discussed usefulness and
application different mining techniques and useful factors
for attrition prediction.
Multiple research avenues are available to improve the
data mining to discover and extract useful patterns from
the large data set to find observable patterns useful for
effective attrition prediction. The focus on multidimensional hybrid decision tree based methodology
would be helpful to improve the quality of the decisionmaking process in HRMS. More regression analysis and
propositions on data-mining capabilities should be
assessed to see if the mining methods can lead to high
performance and accurate prediction of attrition for
industries in India.
REFERENCES
[1]
Al-Radaideh, Q. A., Al-Shawakfa, E.M., AlNajjar, M.I. (2006). “Mining Student Data Using
Decision Trees”, International Arab Conference
on Information Technology (ACIT 2006), Dec
2006, Jordan.
[2]
Chein, C., Chen, L. (2006) "Data mining to
improve personnel selection and enhance human
capital: A case study in high technology
________________________________________________________________________________________________
International Conference on Engineering and Applied Science - ICEAS, 2014
ISBN: 978-3-643-24819-09, Bangalore, 13th July, 2014
23
A Review of Factors and Data Mining Techniques for Employee Attrition and Retention in Industries
________________________________________________________________________________________________
http://www.sourcingline.com /resources accessed
on January 7th, 2009 (2009)
[14]
Khanna R., How the BPO Industry has dealt with
its Biggest Challenge: Attrition, online article
accessed on November 2009. (2007).
[15]
Agarwal A., Challenge in the BPO industry is
lack
of
discipline,
Retrieved
from
www.docstoc.com/callcenter
kaleidoscope
events, accessed on 10th march 2009. (2008).
[16]
Kumar V., High Attrition rate attributed to pay
package, Online Article Retrieved from
http://outsourceportfolio.com /high-attrition-rateattributed-to-pay-package on September 7, 2009
(2008).
[17]
Radhika. 40 percent attrition happening in
120days. Online article Retrieved from
www.mckinsey.com/clients
service/bto/../pdf
accessed on 23 rd February 2009, (2008)
[18]
Amah, O.E. (2008); Job Satisfaction and Attrition
Intention Relationship: The Moderating Effect of
Job Role Centrality and Life Satisfaction, Human
Resources Institute & Curtin University of
Technology, Singapore.
[19]
[20]
[21]
Barnard, M.E. and Rodgers, R.A. (1998); What's
in the Package? Policies for the Internal
Cultivation of Human Resources and for High
Performance Operations, Asia Academy of
Management (Hong Kong).
Bockerman, P. and Ilmakunnas, P. (2007); Job
Disamenities, Job Satisfaction, Quit Intentions,
and Actual Separations: Putting the Pieces
Together, Discussion Paper No. 166, Helsinki
Center of Economic Research, Finland.
Debrah, Y. (1993); Strategies for Coping with
Employee Retention Problems in Small and
Medium Enterprises (SMEs) in Singapore.
Entrepreneurship, Innovation, and Change, 2, 2,
pp 143-172.
systems, Research in Personnel and Human
Resources Management, 3: 141 183.
[24]
Johns, G. (1996); Organizational Behavior, New
York: Harper Collins Publishing.
[25]
Loquercio, D. (2006); Attrition and Retention –
A Summary on Current Literature, downloaded
from People in Aid” http://www.peopleinaid.org/
accessed on February 9, 2010.
[26]
Mobley and William. H. (1977); Intermediate
Linkages in the Relationship between Job
Satisfaction and Employee Attrition, Journal of
Applied Psychology, Vol 62(2), April 1977, pp
237 - 240.
[27]
Mohammad et al, (2006); Affective Commitment
and Intent to Quit: the Impact of Work and NonWork Related Issues, Journal of Managerial
Issues.
[28]
Ongori, H. (2007); A Review of the Literature on
Employee Attrition, African Journal of Business
Management pp. 049-054, June 2007.
[29]
Rahman, A., Vaqvi Raza, S.M.M. and Ramay
Ismail, M. (2008), Measuring Attrition Intention:
A Study of IT Professionals in Pakistan,
International Review of Business Research
Papers, Vol. 4 No.3 June 2008 pp.45-55.
[30]
Siong Z.M.B, et al (2006); Predicting Intention to
Quit in the Call Center Industry: Does the Retail
Model Fit, Journal of Managerial Psychology,
Vol 21, No 3, pp 231 243.
[31]
Steijn, B. and Voet, J (2009); Supervisors in the
Dutch Public Sector and their Impact on
Employees, EGPA Annual Conference, Malta,
September 2-5 2009.
[32]
Tan, J., Tan, V and Tiong, T.N. (2006); Work
Attitude, Loyalty, and Employee Attrition,
Singapore Institute of Management, National
University of Singapore.
[33]
Zhou, H., Long Lirong, R. and Wang Yuqing, Q.
(2009); What is the Most Important Predictor of
Employees' Attrition Intention in Chinese Call
Centre:
Job
Satisfaction,
Organizational
Commitment
or
Career
Commitment?,
International Journal of Services Technology and
Management, Vol 12, No 2, 2009, pp 129-145.
[22]
Debrah, Y. (1994); Management of Operative
Staff in a Labour-Scarce Economy: the Views of
Human Resource Managers in the Hotel Industry
in Singapore. Asia Pacific Journal of Human
Resources, 32, 1, pp 41-60.
[23]
Folger, R. and Greenberg, J. (1985); Procedural
justice: An interpretative analysis of personnel

Distribution System Reliability Evaluation using Time Sequential
Monte Carlo Simulation
1Supriya M D, 2Chandra Shekhar Reddy Atla, 3K R Mohan, 4T M Vasanth Kumara
1,3,4Dept of E & E, AIT, Chikmagalur, Karnataka, India.
2Power Research & Development Consultant Pvt. Ltd., Bangalore, India.
Email: [email protected], [email protected], [email protected], [email protected]
Abstract-Reliability assessment is an important tool for
distribution system planning and operation. Distribution
system reliability assessment is able to predict the
interruption profile of a distribution system at the
customer end based on system topology and component
reliability data. The reliability indices can be evaluated
using analytical method or Monte Carlo simulation
method. The main objective of reliability analysis is to
quantify, predict, and compare reliability indexes for
various reliability improvement initiatives/network
configurations. Building on the distribution system
reliability indices obtained using the analytical method,
this paper further implements reliability models to
evaluate distribution system reliability using the
Monte-Carlo simulation method and describes an algorithm
for a computer program implementing these techniques in VC++. General
distribution system elements, operating models and radial
configurations are considered in the program. Overall
system and load point reliability indices and expected
energy unserved are computed using these techniques.
Reliability assessment estimates the performance at
customer load points considering the stochastic nature of
failure occurrences and outage duration. The basic indices
associated with load points are: failure rate, average outage
duration and annual unavailability. Furthermore, these
models can predict other indices such as System Average
Interruption Frequency Index (SAIFI), System Average
Interruption Duration Index (SAIDI), Customer Average
Interruption Frequency Index (CAIFI), Customer Average
Interruption Duration Index (CAIDI), Average Service
Availability /Unavailability Index (ASAI), Energy Not
Supplied (ENS) and Average Energy Not Supplied (AENS).
This information helps utility engineers and managers at
electric utility organizations to decide how to spend the
money to improve reliability of the system by identifying
the most effective actions/ reconfigurations.
Index Terms: Distribution system, Reliability evaluation,
Load point indices, Reliability indices, Random failures,
Time sequential Monte-Carlo simulation and Roy Billinton
Test System.
I. INTRODUCTION
The basic function of the power distribution system is to provide an adequate electrical supply to its customers as economically as possible and with a reasonable level of reliability and quality. The distribution system is the portion of an electric system that delivers electric energy from transformation points on the transmission system to the customer point. Reliability of a power distribution system is defined as "the ability to deliver uninterrupted service to the end customers". The techniques used in distribution system reliability evaluation can be divided into two basic categories: analytical and simulation methods. The difference between these methods lies in the way the methodology uses the input data from which the reliability indices are evaluated. Analytical techniques represent the system by simplified mathematical models derived from mathematical equations and evaluate the reliability indices using direct mathematical solutions. Simulation techniques estimate the reliability indices by simulating the actual process and the stochastic behavior of the system. The method therefore treats the problem as a series of real experiments conducted in simulated time, estimating the probability of events and other indices by counting the number of times an event occurs. In earlier days, reliability assessment was based on deterministic criteria for system failures, such as rules of thumb and fixed values based on experience in system operation. Nowadays, probabilistic methods are used to analyze more complex distribution systems.
Reliability assessment of a distribution system is usually concerned with system performance at the customer end, i.e. at the load points. The basic indices used to predict the reliability of a distribution system are: average load point failure rate, average load point outage duration, and average annual load point outage time or annual unavailability. The basic indices are important from an individual customer's point of view and also from the utility's point of view. However, they do not provide an overall picture of system performance. An additional set of indices can be calculated using these three basic indices and the number of customers/load connected at each load point in the system. Most of these additional indices are weighted averages of the basic load point indices. The most common additional or system indices are SAIFI, SAIDI, CAIDI, ASAI, ASUI, ENS and AENS. These indices are also calculated by a large number of utilities from system interruption data and provide valuable indications of historic system performance. The same basic system indices used to measure past performance can also be used to predict future performance.
Reliability indices of a distribution system are functions
of component failures, repairs and restoration times
which are random by nature. The calculated indices are
therefore random variables and can be described by
probability distributions. The main objective of
reliability analysis is to quantify, predict, and compare
reliability indexes for various reliability improvement
initiatives/network configurations. This information
helps engineers and managers at electric utility
organizations to decide how to spend reliability
improvement dollars by identifying the most effective
actions/ reconfigurations for improving reliability of
distribution feeders. There are many types of system
design and maintenance tasks that fall under the
reliability improvement umbrella.
Basic distribution system data for the Roy Billinton Test
System (RBTS) is presented in [1], which is used as a
reliability test system for educational purposes. This test
system includes all the main practical elements such as
circuit breakers, switches, distribution transformers, and
main and lateral sections. This data has been used to
understand reliability models and evaluation techniques,
and reliability indices such as SAIFI and SAIDI are
evaluated in [1] using the analytical method. A way of
gaining confidence in a reliability model is developed via
a validation method in [2], which automatically determines
appropriate default component reliability data so that
predicted reliability indices match historical values.
Reliability is improved by reconfiguring the feeder in
[3], [4] and [5], where a predictive reliability model is
used to compute reliability indices for the distribution
systems; in [3] a novel algorithm is used to adjust switch
positions until an optimal solution is identified. The
conventional FMEA technique is applied to complex radial
networks to develop a digital computer program using a
general technique for two small practical test systems in
[6]. By using the feeder branches and load branches
technique in [7], a computer program is developed, and
numerical results show that the proposed technique is
effective for the reliability evaluation and design of
distribution systems. In [8] a recursive search is used in
the algorithm, which reduces the amount of programming and
improves its efficiency. An algorithm for the Monte-Carlo
simulation technique is used in the evaluation of a
complex distribution system in [9]. In [10] the simulation
program is tested on Feeder 1 of Bus 2 of the Roy
Billinton Test System (RBTS) and a set of system related
indices is presented. In [11] reliability indices of
expected values such as System Average Interruption
Frequency Index (SAIFI) and System Average Interruption
Duration Index (SAIDI) are calculated, and results are
compared for two distribution systems using both
analytical and simulation methods.
This paper attempts to develop the sequential Monte-Carlo
simulation technique for distribution system reliability
analysis. The first section of this paper briefly
illustrates the basic concepts of the analytical method,
and the second section illustrates the basic concepts of
the time-sequential Monte-Carlo simulation. The
development of the algorithm and flowchart using the
Monte-Carlo simulation technique for distribution system
reliability evaluation is described in the third section.
The developed simulation programs are applied to the RBTS
test system and validated against published results
obtained using the analytical method. The presented
methodology can be used directly by a utility to observe
the reliability performance of the system at the customer
and utility ends, and can also be used to further improve
system reliability by network reconfiguration.
II. ANALYTICAL APPROACH
The analytical method looks at how the load points would be affected if a particular component fails. The three basic indices used to predict the load-point reliability of a distribution system are the failure rate (λp), outage time (rp) and annual unavailability (Up), which can be calculated using (1) to (3):

λp = Σ_{i=1..N} λi   (f/yr)   (1)

Up = Σ_{i=1..N} λi ri   (hr/yr)   (2)

rp = Up / λp   (hr)   (3)

Reliability indices such as SAIFI, SAIDI, CAIFI, CAIDI, ASAI, ASUI, ENS, AENS and ACCI can be calculated using (4) to (10).
A. Methodology of Monte-Carlo Simulation Technique
A power system is stochastic in nature and therefore
Monte-Carlo simulation technique can be applied for
reliability evaluation of a power system for more precise
results. There are primarily two types of Monte-Carlo
simulation: state sampling and time sequential
techniques. In this paper time sequential simulation
method is used for development.
B. Time Sequential Monte Carlo Simulation Technique
The time-sequential Monte-Carlo simulation technique
can be used on any system that is stochastic in nature.
This time sequential simulation process can be used to
examine and predict real behavior patterns in simulated
time, to obtain the probability distributions of the
various reliability parameters and to estimate the
expected or average value of these indices. In a time
sequential simulation, an artificial history that shows the
up and down times of the system elements is generated
in chronological order using random number generators
and the probability distributions of the element failure
and restoration parameters. The system reliability
indices and their probability distributions are obtained
from the artificial history of the system.
Distribution system elements include basic equipment such
as distribution lines/cables and transformers, and
protection elements such as disconnect switches, fuses,
breakers, and alternate supplies. Line sections and
transformers can generally be represented by the two-state
model shown in Figure 1, where the up state indicates that
the element is operating and the down state implies that
the element is inoperable due to failure.

Figure 1: State-space diagram of an element (up and down states linked by the failure and restoration processes)
The time during which the element remains in the up
state is called the time to failure (TTF) or failure time
(FT). The time during which the element is in the down
state is called restoration time that can be either the time
to repair (TTR) or the time to replace. The process of
transiting from the up state to down state is the failure
process. Transition from up state to down state can be
caused by the failure of an element or by the removal of
elements for maintenance. Figure 2 shows the simulated element operating/restoration history.

Figure 2: Element operating/repair history (alternating TTF and TTR intervals over time)

Parameters such as TTF and TTR are random variables and may have different probability distributions. A uniform random number generator produces uniformly distributed values directly, and these generated random numbers are converted into TTF or TTR using:

TTF = (−log U / λ) × 8760   (11)

where U is a uniformly distributed random variable in the interval [0, 1] and the resulting TTF is exponentially distributed.

C. Determination of Load Point Failures
The most difficult problem in the simulation is to find the load points affected by the failure of an element. A complex radial distribution system can be divided into the combination of a main feeder and sub feeders. The procedure for determining the failed load points and their operating/restoration histories is as follows [9]:
1) Determine the type and location of the failed element, the failed element number and the number of the failed feeder to which the failed element is connected.
2) Determine the affected load points connected to the failed feeder and the failure durations of these load points according to the configuration and protection scheme of the failed feeder.
3) Determine the sub feeders, i.e. the downstream feeders connected to the failed feeder, and the effects of the failed element on the load points connected to these sub feeders.
4) Repeat (2) and (3) for each failed sub feeder until all the sub feeders connected to the failed feeder are found and evaluated.
5) Determine the up feeder, i.e. the upstream feeder to which the failed feeder is connected, and the effects of the failed element on the load points in the up feeder.
6) Repeat (2) to (5) until the main feeder is reached and evaluated.

D. System Analysis
The distribution system is represented as a mathematical model so that analytical techniques can be applied. A failure in any component between the supply point and a load point will result in outages. Bus 6 of the RBTS is a distribution system consisting of 4 main feeders, 3 sub feeders, 42 main sections, 22 lateral sections and 40 load points comprising agricultural, small industrial, commercial and residential customers, shown in Figure 3. The total numbers of customers connected on feeders F1, F2, F3 and F4 are 764, 969, 22 and 1183 respectively.

Each system segment consists of a mixture of components. A main section can be a distribution line or a combination of line and disconnect switches, which can be installed at one end or both ends of the line. A lateral section usually consists of a line, transformer, fuse or their combination. Components that have not been taken into account are assumed to be 100% reliable. The basic data used in these studies is given in [1]. The failure rate of each element is assumed to be constant. The repair and switching times are assumed to be log-normally distributed. It is assumed that the standard deviations of the distribution line repair time, transformer replacement time and switching time of all elements are one hour, 10 hours and 0.4 hours respectively. The lines and cables have a failure rate which is approximately proportional to their length; therefore, the main feeder sections (L1-L64) have a failure rate of 0.065 f/km-yr.
Figure 3: Distribution system of RBTS Bus 6 [7]

System indices are as follows:
i. System average interruption frequency index, SAIFI = (total number of customer interruptions) / (total number of customers served):

SAIFI = Σ_{i=1..K} λi Ni / Σ_{i=1..K} Ni   (4)

ii. System average interruption duration index, SAIDI = (sum of customer interruption durations) / (total number of customers served):

SAIDI = Σ_{i=1..K} Ui Ni / Σ_{i=1..K} Ni   (5)

iii. Customer average interruption duration index, CAIDI = (sum of customer interruption durations) / (total number of customer interruptions):

CAIDI = Σ_{i=1..K} Ui Ni / Σ_{i=1..K} λi Ni   (6)

iv. Average service availability index, ASAI = (customer hours of available service) / (customer hours demanded):

ASAI = (Σ_{i=1..K} 8760 Ni − Σ_{i=1..K} Ui Ni) / Σ_{i=1..K} 8760 Ni   (7)

v. Average service unavailability index, ASUI:

ASUI = 1 − ASAI   (8)

vi. Energy not supplied by the system, ENS:

ENS = Σ_{i=1..K} Ui La(i)   (9)

vii. Average energy not supplied, AENS = (total energy not supplied) / (total number of customers served):

AENS = Σ_{i=1..K} Ui La(i) / Σ_{i=1..K} Ni   (10)

where λi is the failure rate, Ui is the annual outage time and Ni is the number of customers at load point i, La(i) is the average load connected to load point i, and 8760 is the number of hours in a calendar year.

E. Algorithm & Flowchart
The algorithm used to develop the computer program that determines the distribution system reliability indices using time sequential Monte Carlo simulation consists of the following steps:
Step 1: Define the system, i.e. input data such as location of components, failure rate, failure duration, load connected, etc.
Step 2: Input the number of sample years N and the simulation period T.
Step 3: Start the simulation: n = 1, t = 0.
Step 4: Generate a random number in [0, 1] for each element in the system and convert these random numbers into times to failure (TTF), based on the failure time distribution and the expected time to failure of each element. Using (11), TTF = (−log(U)/λ) × 8760, where U is a random number between 0 and 1.
Step 5: Find the element with the minimum TTF.
Step 6: Generate a random number and convert it into a repair time (RT) for this element according to the chosen probability distribution.
Step 7: Generate a random number and convert it into a switching time (ST) according to the probability distribution, if applicable. For this work, the switching time is a fixed value of 1 hour.
Step 8: Find the load points that are affected by the failure of this element, considering the configuration and status of breakers, disconnects, fuses and alternate supply, and record a failure for each of these load points.
Step 9: Determine the failure duration depending on the configuration and status of breakers, disconnects, fuses and alternate supply, and record the outage duration for each failed load point.
Step 10: Generate a random number and convert it into a new TTF for the failed element.
Step 11: Go back to Step 5 if the simulation time is less than the mission time; otherwise, go to Step 12.
Step 12: Calculate the average value of the load point failure rate and failure duration for the sample years.
Step 14: Calculate the system indices for the sample years.
Step 15: Return to Step 4 if the simulation time is less than the total simulation period; otherwise, output the results.
In the time sequential Monte Carlo simulation
technique, the effect of the events of each component on
the power system is chronologically analyzed. This
technique has been applied in this paper to evaluate the
reliability indices. The simulation program developed
evaluates the reliability indices for a general radial
distribution system.
Start
Define System, N, T and Assign n=1, t=0
Generate random number for each element and convert it into TTF,
according to probability distribution. TTF = (-log (U)/λ)
Find the element with minimum TTF
Generate random number and converted this into
repair time (TTR) for the failed element according to
probability distribution
The program in VC++ is developed to evaluate the
reliability indices. Feeders from practical test system
known as RBTS Bus 6 are considered for the sequential
analysis. The random numbers generated do not appear
in any table since they were inserted directly in the
calculation formulas when necessary. Flowchart for
above algorithm is given in Figure 4.
The following modeling assumptions are made to
simplify the program:
1.
Circuit breakers are assumed to work instantly and
without any failures or delay.
2.
Alternate supply is assumed to be available
whenever needed and can supply all necessary
power to the load. No transfer load restriction
exists.
3.
Fuses are assumed to work without failures.
4.
No common mode failures occur.
5.
No busbar failures occur.
6.
The same probability distributions are assigned to
the same type of components.
III. RESULTS
Find the load points that are affected by the failure of this
element and record a failure for each of these load points
Determine the failure duration and record the outage
duration for each failed load point
Generate a random number and convert this into TTF for the
failed element. t = t + TTR + new TTF
Yes
t <T
NO
Calculate the average value of the load point failure rate and failure
duration for the sample years
Compute n= n+1
Yes
n<N
NO
Calculate the system indices for the sample years
Stop
Figure 4: Flowchart for Monte Carlo Simulation.
A comparison of the load point indices and system indices for Bus 6 using both the analytical (A) and Monte Carlo simulation (S) techniques is presented in “Table 1”.
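The TTF and TTR conversions used in the simulation (Figure 4) are ordinary inverse-transform sampling of exponential distributions. A minimal sketch, where the element names and failure rates are illustrative rather than the RBTS Bus 6 data:

```python
import math
import random

def sample_exponential(rate, rng):
    """Inverse-transform sampling: T = -log(U)/rate, U ~ Uniform(0, 1).
    Used both for TTF (rate = failure rate) and TTR (rate = 1/MTTR)."""
    return -math.log(rng.random()) / rate

rng = random.Random(42)
# Illustrative failure rates in f/yr for three elements
rates = {"main section": 0.065, "lateral": 0.065, "transformer": 0.015}
ttf = {name: sample_exponential(lam, rng) for name, lam in rates.items()}
first_failure = min(ttf, key=ttf.get)  # element with the minimum TTF (step 3)
```

Over many samples the mean of `sample_exponential(lam, rng)` converges to 1/lam, which is what makes the chronological simulation statistically consistent with the component data.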
A. Description
The test system is shown in Fig. 3. The data are taken from paper [1] by L. Goel et al. (1991). The lines and cables have a failure rate approximately proportional to their length. The main feeder sections (L1-L64) have a failure rate of 0.065 f/km-yr and the transformers a failure rate of 0.015 f/yr; the repair time is taken as 5 hours for main and lateral sections and 200 hours for transformers, and the switching time considered in this network is 1 hour. Components that have not been taken into account are assumed to be 100% reliable.
B. Calculations for Feeder 1 and Feeder 2
To calculate a load point failure rate, all the main-section line failures and the particular load point's lateral and transformer section failures are considered, using “(1)”. To calculate the load point unavailability, the same failures are considered: the particular lateral and transformer section failure rates are multiplied by the load point repair time, while all main-section failure rates are multiplied by the switching time, using “(2)”. The average repair time is calculated using “(3)”. The tie between Feeder 1 and Feeder 2 is normally open. To calculate the Feeder 1 and Feeder 2 indices, the number of customers, the average load, the peak load and the calculated failure rate, unavailability and average repair time for each load point are used in “(4) to (10)”.
________________________________________________________________________________________________
International Conference on Engineering and Applied Science - ICEAS, 2014
ISBN: 978-3-643-24819-09, Bangalore, 13th July, 2014
29
Distribution System Reliability Evaluation using Time Sequential Monte Carlo Simulation
________________________________________________________________________________________________
C. Calculation for Feeder 3
The failure mode and effect analysis (FMEA) method used in analytical techniques is applied on Feeder 3. In calculating a load point failure rate, all the main-section line failures and the particular lateral and transformer section failures are considered. In the unavailability calculation, the particular lateral and transformer section failure rates are multiplied by the repair time. All the main sections are then considered: the main-section failures on the supply side of the load point are multiplied by the repair time, and the remaining ones by the switching time, because there is no alternate supply connected to this feeder. The average repair time is calculated using “(3)”. To calculate the Feeder 3 indices, the number of customers, average load, peak load and the calculated failure rate, unavailability and average repair time for each load point are used in “(4) to (10)”.
D. Calculation for Feeder 4
In feeder F4, three sub-feeders, F5, F6 and F7, are considered. To calculate a load point failure rate for feeder F4 using Equation 1, all the main-section line failures and the particular lateral and transformer section failures are considered; the sub-feeders need not be considered. For the unavailability, the main-section failure rates before the disconnect switch are multiplied by the repair time, and those after the switch by the switching time. To calculate the sub-feeder load point indices, the feeder F4 main, lateral and transformer failures and the particular sub-feeder failures are considered using “(1) to (3)”. To calculate the Feeder 4 indices, the number of customers, average load, peak load and the calculated failure rate, unavailability and average repair time for each load point are used in “(4) to (10)”. To calculate the system indices, all the load point failure rates, unavailabilities, average repair times, numbers of customers connected, average loads and peak loads are used in “(4) to (10)”.
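Equations (4) to (10) combine the load point results into customer- and load-weighted system indices. A sketch of the standard definitions (the two load points shown are illustrative, not the RBTS Bus 6 values):

```python
def system_indices(load_points):
    """load_points: list of dicts with keys lam (f/yr), U (hr/yr),
    N (customers served) and La (average load, kW)."""
    N = sum(lp["N"] for lp in load_points)
    saifi = sum(lp["lam"] * lp["N"] for lp in load_points) / N
    saidi = sum(lp["U"] * lp["N"] for lp in load_points) / N
    caidi = saidi / saifi
    asai = (N * 8760 - sum(lp["U"] * lp["N"] for lp in load_points)) / (N * 8760)
    asui = 1 - asai
    ens = sum(lp["La"] * lp["U"] for lp in load_points)  # kWh/yr
    aens = ens / N
    return {"SAIFI": saifi, "SAIDI": saidi, "CAIDI": caidi,
            "ASAI": asai, "ASUI": asui, "ENS": ens, "AENS": aens}

# Illustrative data for two load points
lps = [{"lam": 0.33, "U": 3.67, "N": 200, "La": 50.0},
       {"lam": 1.67, "U": 8.40, "N": 100, "La": 80.0}]
idx = system_indices(lps)
```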
TABLE 2: AVERAGE ANNUAL LOAD POINT OUTAGE TIME OR ANNUAL UNAVAILABILITY

Load     Unavailability (hr/yr)
point    Published analytical   Developed analytical   Developed Monte Carlo
LP1      3.67                   3.666                  3.6954
LP5      3.68                   3.676                  3.856
LP10     3.66                   3.656                  3.541
LP15     0.84                   0.835                  0.795
LP20     8.4                    8.40                   8.654
LP25     11.29                  11.287                 11.568
LP30     14.05                  14.05                  14.23
LP35     12.72                  12.724                 12.985
LP40     15.48                  15.48                  15.62
TABLE 3: AVERAGE LOAD POINT OUTAGE DURATION OR REPAIR TIME

Load     Average repair time (hr/f)
point    Published analytical   Developed analytical   Developed Monte Carlo
LP1      11.1                   11.101                 11.1409
LP5      10.81                  10.811                 11.015
LP10     10.17                  10.171                 9.7716
LP15     3.52                   3.52054                3.442
LP20     5.02                   5.0233                 5.170
LP25     6.75                   6.74887                6.910
LP30     6.31                   6.31460                5.478
LP35     5.02                   5.0153                 5.0240
LP40     6.16                   6.164                  5.9484
TABLE 1: AVERAGE LOAD POINT FAILURE RATES

Load     Failure rate (f/yr)
point    Published analytical   Developed analytical   Developed Monte Carlo
LP1      0.3303                 0.33025                0.3317
LP5      0.34                   0.34                   0.3349
LP10     0.3795                 0.3595                 0.3624
LP15     0.2373                 0.23725                0.2311
LP20     1.6725                 1.6725                 1.6738
LP25     1.6725                 1.6725                 1.674
LP30     2.225                  2.225                  2.5975
LP35     2.537                  2.537                  2.5845
LP40     2.511                  2.511                  2.6258

TABLE 4: SYSTEM INDICES FOR BUS 6

Indices   Method   Feeder 1   Feeder 2    Feeder 3   Feeder 4   System
SAIFI     (A)      0.33566    0.36739     0.242170   1.977813   1.00664
          (S)      0.33539    0.36916     0.233313   1.98777    1.0111
SAIDI     (A)      3.68285    3.713983    3.591034   1.043973   6.66878
          (S)      3.66304    3.605774    3.621233   11.35233   6.73997
CAIDI     (A)      10.971     10.10886    14.82853   0.527842   6.62473
          (S)      10.92167   9.767469    15.52088   5.711089   6.66595
ASAI      (A)      0.999579   0.999576    0.99959    0.998735   0.99923
          (S)      0.999581   0.999588    0.999586   0.998704   0.99923
ASUI      (A)      0.000420   0.000423    0.00041    0.001264   0.00076
          (S)      0.000418   0.000412    0.000413   0.001295   0.00076
ENS       (A)      4.23152    4.717013    5.902532   57.79038   72.6414
          (S)      4.225334   4.559431    5.904939   59.18776   73.8774
AENS      (A)      5.538650   4.8679191   268.2969   48.85070   24.72479
          (S)      5.530542   4.705295    492.0783   50.03192   25.14549
549
E. Average Value of Load Point and System Indices

The results published by the analytical method, and the developed-program results by the analytical and Monte Carlo methods, for the average load point failure rates, the average annual load point outage time (annual unavailability) and the average load point outage duration (repair time) are shown in Tables 1 to 3 for all feeders, obtained using the analytical (A) and simulation (S) techniques. The results obtained by the analytical method are compared with the results of the simulation method.

F. Comparison of Analytical and Monte Carlo Simulation Methods

The system indices for Bus 6 were calculated using both the analytical (A) and Monte Carlo simulation (S) techniques and are shown in “Table 4”. The system index results obtained by the analytical method are compared with the results of the simulation method. The development of distribution system reliability indices using analytical and Monte Carlo simulation methods can be further extended to find predictive reliability indices by changing the network configuration in a smart grid environment.

IV. CONCLUSION

This paper introduces a methodology for calculating distribution system reliability indices using both analytical and Monte Carlo simulation techniques. The failure mode and effect analysis (FMEA) method used in analytical techniques is applied on Feeder 3 of Bus 6 of the RBTS. To evaluate load point and system indices, the algorithm and flowchart are described and a computer program implementing the time-sequential Monte Carlo simulation technique is developed in VC++. A comparison of the load point and system indices for Bus 6 of the RBTS using both the analytical and Monte Carlo simulation techniques is also illustrated. The Monte Carlo simulation results are closely in line with the analytical method, and hence the Monte Carlo simulation technique can further be used to model renewable energy sources, which are very difficult to model with analytical methods.

REFERENCES

[1] R.N. Allan, R. Billinton, I. Sjarief, L. Goel, K.S. So, "A reliability test system for educational purposes - Basic distribution system data and results," IEEE Transactions on Power Systems, Vol. 6, No. 2, May 1991.
[2] R. Brown, J.R. Ochoa, "Distribution System Reliability: Default Data and Model Validation," IEEE Transactions on Power Systems, Vol. 13, No. 2, May 1998.
[3] Richard E. Brown, "Distribution Reliability Assessment and Reconfiguration Optimization," 2001 IEEE.
[4] Richard E. Brown, Andrew P. Hanson, H. Lee Willis, Frank A. Luedtke, Michael F. Born, "Assessing the reliability of distribution systems," 2001 IEEE.
[5] A. A. Chowdhury, "Distribution Reliability Assessment," 2005 IEEE.
[6] Roy Billinton, Peng Wang, "A Generalized Method for Distribution System Reliability Evaluation," 1995 IEEE.
[7] Weixing Li, Wei Zhao and Xiaoming Mou, "A Technique for Network Modeling and Reliability Evaluation of Complex Radial Distribution Systems," 2009 IEEE.
[8] JIN Yi-xiong, LI Hong-zhong, DUAN Jian-min, WANG Cheng-min, "Algorithm Improvement for Complex Distribution Network Reliability Evaluation and Its Programming," 2010 IEEE.
[9] Roy Billinton, Peng Wang, "Teaching Distribution System Reliability Evaluation Using Monte Carlo Simulation," IEEE Transactions on Power Systems, Vol. 14, No. 2, May 1999.
[10] Nisha R. Godha, Surekha R. Deshmukh, Rahul V. Dagade, "Time Sequential Monte Carlo Simulation for Evaluation of Reliability Indices of Power Distribution System," ISCI 2012.
[11] O. Shavuka, K.O. Awodele, S.P. Chowdhury, S. Chowdhury, "Reliability Analysis of Distribution Networks," 2010 IEEE.
[12] Satish Jonnavithula, "Cost/Benefit Assessment of Power System Reliability," Ph.D. thesis, Department of Electrical Engineering, University of Saskatchewan, 1997.
[13] Peng Wang, "Reliability Cost/Worth Considerations in Distribution System Evaluation," Ph.D. thesis, Department of Electrical Engineering, University of Saskatchewan, 1998.
[14] Binendra Shakya, "Repair duration effects on distribution system reliability indices and customer outage costs," M.Sc. thesis, Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Saskatchewan, Canada, February 2012.
An Enhanced Secured Approach To Voting System
1Ram Kumar.S, 2Gowshigapoorani.S
1UG student - final yr, Department of EEE, 2UG student - final yr, Department of IT,
NSIT, Salem - 636 305, Tamilnadu, India.
Email id: [email protected], [email protected]
Abstract- In this system, an online voting authentication
technique is proposed which provides biometric as well as
password security to voter accounts. The basic idea of the steganographic method is to merge the secret key and PIN number with a cover image, producing a stego image that looks the same as the cover image. The secret key and PIN number are extracted from the stego image at the server side to perform the voter authentication function.
This system greatly reduces the risks, as an attacker would have to find the secret key, PIN number, fingerprint and facial image, which makes the election procedure secure against a variety of fraudulent behaviors.
The purpose of the proposed work is to reduce manpower, provide a secure voting system, announce results quickly and allow voters to cast their vote easily and very quickly. This work gives a secure voting system through QR codes and biometric measures. The basic idea of the proposed system is that when a voter enters the polling station, his details are shown, and a fingerprint is taken to verify the particular person. If the information is valid, the ballot sheet is opened in the system. Overcoming the disadvantages of other voting systems, this system is designed to be user friendly and makes the election system simple and elegant. After voting, the information is stored in the database, which helps in quick and easy result announcement.
I. INTRODUCTION
1.1 ELECTRONIC VOTING
Electronic voting is a term that includes several different types of voting, embracing both electronic means of casting a vote and electronic means of counting votes. For many years, paper-based ballots have been used to vote on polling days. This makes the voting process inefficient, as people have to queue up to register their names before they can vote. Furthermore, the traditional way of voting is a long process and takes time. So the new electronic way of voting becomes the best solution to these matters, besides providing an easier way of voting.
The proposed work also eases electoral preparations, law and order management and candidates' expenditure, and allows easy and accurate counting without mischief at the counting centre. It is also eco-friendly.
II. TECHNIQUES
2.1 Authentication for Online Voting Using Steganography and Biometrics
Steganography method:
Steganography is the art and science of writing hidden messages in such a way that no one, apart from the sender and intended recipient, suspects the existence of the message; a form of security through obscurity.
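As a concrete illustration of this idea (a sketch, not the authors' implementation), a least-significant-bit scheme can merge a secret key and PIN with the cover image's pixel values while changing each pixel by at most one intensity level:

```python
def embed(cover_pixels, secret):
    """Hide secret bytes in the LSB of each cover pixel value.
    cover_pixels: list of ints 0-255; secret: bytes."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    if len(bits) > len(cover_pixels):
        raise ValueError("cover too small")
    stego = list(cover_pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite only the LSB
    return stego

def extract(stego_pixels, n_bytes):
    """Recover n_bytes of hidden data from the LSBs."""
    out = []
    for i in range(n_bytes):
        byte = 0
        for pixel in stego_pixels[8 * i: 8 * i + 8]:
            byte = (byte << 1) | (pixel & 1)
        out.append(byte)
    return bytes(out)

cover = list(range(64))               # toy "image"
stego = embed(cover, b"PIN:1234")
assert extract(stego, 8) == b"PIN:1234"
assert max(abs(a - b) for a, b in zip(cover, stego)) <= 1  # visually identical
```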
ADVANTAGES:
- Authenticity of an individual.
- Accuracy and reliability.
- Protection against hacking.
DISADVANTAGES:
- Limitation in searching performance.
- Needs more time.
2.2 Novel Design of Electronic Voting System Using Fingerprint Minutiae Based Matching:
Minutiae matching method balances the tradeoffs
between maximizing the number of matches and
minimizing total feature distance between query and
reference fingerprints. A two-hidden-layer fully
connected neural network is trained to generate the final
similarity score based on minutiae matched in the
overlapping areas.
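A toy version of the matching stage can make this concrete. Here greedy nearest-neighbour pairing of minutiae within a distance threshold stands in for the trained neural-network scoring described above, so this is only a sketch of the idea:

```python
import math

def match_score(query, reference, threshold=15.0):
    """query/reference: lists of (x, y) minutiae. Greedily pairs each
    query minutia with the nearest unused reference minutia within
    `threshold` pixels and returns the fraction matched."""
    used = set()
    matches = 0
    for qx, qy in query:
        best, best_d = None, threshold
        for i, (rx, ry) in enumerate(reference):
            if i in used:
                continue
            d = math.hypot(qx - rx, qy - ry)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            matches += 1
    return matches / max(len(query), len(reference))

ref = [(10, 10), (40, 35), (80, 20)]
noisy = [(12, 9), (41, 37), (79, 22)]   # same finger, slightly displaced
score = match_score(noisy, ref)          # high for the same finger
```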
Compared to the existing voting system, electronic voting has several advantages: it saves extensive printing stationery and the transport of large volumes of electoral material; it is easy to transport, store and maintain; and it completely rules out the chance of invalid votes.
The proposed work reduces polling time, resulting in fewer problems in electoral preparations.

Fig. 2.1: Minutiae Based Matching
ADVANTAGES:
- Fake persons are identified.
- Accuracy.
A direct-recording electronic (DRE) voting machine records votes by means of a ballot display provided with mechanical or electro-optical components that can be activated by the voter; it processes data by means of a computer program and records voting data and ballot images in memory components.
DISADVANTAGES:
- Complex distortions among the different impressions of the same finger.
2.3 Evaluating Electronic Voting Systems Equipped with Voter-Verified Paper Records
III. BIOMETRICS
- Biometric recognition means that by measuring an individual's suitable behavioral and biological characteristics in a recognition inquiry, and comparing these data with the biometric reference data stored during an enrolment process, the identity of a specific user is determined.
- Automatic fingerprint identification is one of the most reliable biometric technologies, because of the well-known fingerprint distinctiveness, persistence, ease of acquisition and high matching accuracy rates.
Cryptography and Steganography:
Cryptography and steganography are well-known and widely used techniques that manipulate information in order to cipher it or hide its existence, respectively. Steganography is the art and science of communicating in a way which hides the existence of the communication. Cryptography scrambles a message so it cannot be understood; steganography hides the message so it cannot be seen. Even though both methods provide security, a study is made to combine cryptography and steganography into one system for better confidentiality and security.
DRE Voting System:
ADVANTAGES:
- Able to change the cover coefficients randomly.
- Accuracy.
DISADVANTAGES:
- Complexity of elections.
2.4 Mobile Voting Using Global System for Mobile Communication (GSM) Technology and Authentication Using Fingerprint Biometrics and Wireless Networks

GSM Network:
GSM is a digital wireless network standard widely used in European and Asian countries. It provides a common set of compatible services and capabilities to all GSM mobile users. The services and security features offered to subscribers are subscriber identity confidentiality, subscriber identity authentication, user data confidentiality on physical connections, connectionless user data confidentiality and signaling information element confidentiality.
Fig. 3.1: Biometrics

ADVANTAGES:
- User convenience.
- Better security.
- Higher efficiency.
- More reliable.
- It cannot be easily misplaced, forged, or shared.
ADVANTAGE:
- Easy and accurate counting.
- Cheaper rates of services.
- Secured method of voting.

The Risk of Electronic Voting

Internet Voting:
The vote is cast as a secure and secret electronic ballot that is transmitted to election officials using the internet.

Fig. 3.2: Fingerprint
The use of QR codes results in a low-cost implementation of this system, and they can overcome the functionality limits of the existing system. The whole symbol of the code can be masked on a grid that is repeated to cover it.
Fig. 3.3: Fingerprint
3.1 OBJECTIVE OF BIOMETRICS
As the fingerprint of every individual is unique, it helps in maximizing accuracy. A database is created containing the fingerprints of all the voters in the electorate. Illegal votes and repetition of votes are checked for in this system. Hence, if this system is employed, the elections will be fair and free from rigging.
Fig.3.4 QR code
IV. SYSTEM OVERVIEW
Fingerprint recognition or fingerprint authentication refers to the automated method of verifying a match between two human fingerprints. Fingerprints are one of many forms of biometrics used to identify an individual and verify their identity. Extensive research has been done on fingerprints in humans. Two fundamentally important conclusions that have emerged from this research are: (i) a person's fingerprint does not naturally change structure after about one year after birth, and (ii) the fingerprints of individuals are unique. Even the fingerprints of twins are not the same. In practice, two humans with the same fingerprint have never been found [7].
3.3 QR Code:
A Quick Response code is a two-dimensional barcode: a machine-readable optical label that contains information about the item to which it is attached. A QR code uses the following standardized encoding modes to store data efficiently; extensions may also be used:
- Numeric
- Alphanumeric
- Byte/Binary
The format information records two things: the error correction level and the mask pattern used for the symbol. Masking is used to break up patterns in the data area that might confuse a scanner, such as large blank areas or misleading features that look like the locator marks. The mask patterns are defined on a grid that is repeated as necessary to cover the whole symbol. Modules corresponding to the dark areas of the mask are inverted. The format information is protected from errors with a BCH code, and two complete copies are included in each QR symbol.

4.1 PROBLEM DEFINITION
The objectives of biometric recognition are user convenience, better security and higher efficiency. These techniques make it possible to use the fingerprint of a person to authenticate him into a secure system, so the electronic voting system has to be improved based on the current technologies of biometric systems. A prerequisite for authentication is enrolment, in which the biometric features are saved.
The online voting system seems risky, and it is difficult to come up with a system which is perfect in all senses. So a Quick Response (QR) image helps to identify the right person, with biometrics used for authentication. It is useful to achieve confidential transmission over a public network. The main aim is to present a new voting system employing biometrics in order to avoid unauthorized access and to enhance the accuracy and speed of the process, so that one can cast his vote irrespective of his location.
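The masking rule described above for QR symbols can be made concrete. For mask pattern 0 of the QR specification, a data module at row i, column j is inverted whenever (i + j) mod 2 = 0; a toy sketch over a small module grid:

```python
def apply_mask0(modules):
    """Invert dark/light modules where (row + col) % 2 == 0
    (QR mask pattern 000); in a real symbol this is applied to
    data modules only, never to the locator or format areas."""
    return [[bit ^ 1 if (i + j) % 2 == 0 else bit
             for j, bit in enumerate(row)]
            for i, row in enumerate(modules)]

grid = [[0, 0, 0],
        [1, 1, 1],
        [0, 1, 0]]
masked = apply_mask0(grid)
# Applying the same mask twice restores the original data
assert apply_mask0(masked) == grid
```

Because masking is a simple XOR with a fixed pattern, the scanner recovers the data by reading the mask number from the format information and applying the same pattern again.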
Objective:
Biometrics is the automated recognition of individuals based on their behavioral and biological characteristics. By measuring an individual's suitable behavioral and biological characteristics in a recognition inquiry, and comparing these data with the biometric reference data stored during enrolment, the identity of a specific user is determined. Because biometric identifiers cannot be easily misplaced, forged, or shared, they are considered more reliable for person recognition than traditional token- or knowledge-based methods.
Fig. 4.1: System overview
A user who enters the system presents the QR code image, which is used to authorize him. To confirm the identity, the same user is authenticated by capturing his fingerprint with the fingerprint scanner; only if he is designated as an authenticated user is he allowed to view the ballot sheet and cast his vote. Once the vote is cast, the result is stored in a separate database. The process is repeated for all the constituent identities and the final results can easily be viewed.
Fig. 4.2: Flowchart

The fingerprint scanner is a unique way of capturing the identity of a person and confirming it against the rightful record. The processing includes core-point estimation, sectorization, Gabor filtering, feature extraction and then verification.

// Create the BitMatrix for the QR code that encodes the given String
Hashtable hintMap = new Hashtable();
hintMap.put(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.L);
QRCodeWriter qrCodeWriter = new QRCodeWriter();
BitMatrix byteMatrix = qrCodeWriter.encode(qrCodeText,
        BarcodeFormat.QR_CODE, size, size, hintMap);

4.2 VERIFY:
if (obj == miverify) {
    flag = Check.verify(v1, ipimg);
    System.out.println("flag=" + flag);
    if (flag == 1) {
        JOptionPane.showMessageDialog(null,
                "Verified. Please give your vote", "Information", 1);
        Vote vv = new Vote();
        display(vv);
    } else {
        JOptionPane.showMessageDialog(null,
                "You are not eligible to vote", "Information", 1);
    }
    miverify.setEnabled(false);
    // miresult.setEnabled(true);
}

V. SYSTEM TESTING

5.1 INTRODUCTION
The testing phase involves testing the developed system using various kinds of data. Elaborate test data are prepared and the system is tested using them. Testing is mainly used to improve quality and for verification and validation. While testing, errors are noted and corrections are made; the corrections are also noted for future use.

5.2 UNIT TESTING
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program input produces valid output. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
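As a minimal illustration of such a unit test (the verify function below is a hypothetical stand-in for the system's Check.verify, which matches images rather than names):

```python
import unittest

def verify(qr_owner, fingerprint_owner):
    """Hypothetical stand-in for Check.verify: a voter is accepted only
    when the QR code and the fingerprint identify the same person."""
    return 1 if qr_owner == fingerprint_owner else 0

class VerifyUnitTest(unittest.TestCase):
    def test_matching_credentials_accepted(self):
        self.assertEqual(verify("user1", "user1"), 1)

    def test_mismatched_credentials_rejected(self):
        self.assertEqual(verify("user1", "user2"), 0)

# Run the unit tests programmatically
suite = unittest.defaultTestLoader.loadTestsFromTestCase(VerifyUnitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```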
Each unit in the system is tested separately and verified to produce the expected output. These units are the separate modules used in the
system, each representing a process implemented in it. The functionality of the system is tested with the help of this testing method. All decision branches and internal code flow should be validated to produce valid output.
5.3 INTEGRATION TESTING
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Although the modules are tested separately, they are also tested for the integration between them: when the first module is executed, it must itself make its path to the next module. This behaviour is event driven and is verified by integration testing.
5.4 TEST CASE
The system is tested by providing invalid images, or images that are not present in the database. For a QR image present in the database, providing an inappropriate fingerprint image will also result in the disqualification of the user, thus validating the system.
The implementation is simple and achieves good accuracy. This system can also be used in any organisation, or even an association which conducts voting to select its respective presidents. In those areas, all the members can be given QR codes that are generated privately, specifically for use inside the organisation.
The use of a QR code is itself secure, with the biometrics serving as an additional security feature in the system. Though QR codes are mainly used for advertisement now, their use in a system for authentication would definitely bring a change in the future.
This can be implemented on phones if the emerging fingerprint scanners in smartphones such as the iPhone 5s and Samsung Galaxy S5 reach the hands of the entire society, thus making it online and easy.
INPUT A          INPUT B            RESULT
User1.QRimage    User1.Fingerprint  True
User1.QRimage    User2.Fingerprint  False
User2.QRimage    User1.Fingerprint  False

Fig. 5.1: Test case

REFERENCES

[1] Behrooz Parhami, "Voting Algorithms", IEEE Transactions on Reliability, December 1994.
[2] Chris Karlof, Naveen Sastry and David Wagner, "Cryptographic Protocols: A Systems Perspective", 2001.
[3] Feras A. Haziemeh, Gutaz Kh. Khazaaleh and Khairall M. Al-Talafha, "New Applied E-Voting System", Journal of Theoretical and Applied Science Technology, March 2011.
[4] Maltoni D., Maio D., Jain A.K. and Prabhakar S., "Handbook of Fingerprint Recognition", Springer, 2003.
[5] Mercuri R., "Electronic Vote Tabulation Checks and Balances", Ph.D. thesis, October 2003.
[6] Ravimaran S., Sagayamozhi G. and Saluk Mohamed M.A., "Reliable and Fault Tolerant Paradigm using Surrogate Object", International Journal of Future Computer and Communication, 2012.
[7] Salil Prabhakar, "Fingerprint Classification and Matching Using a Filterbank", Ph.D. thesis, 2001.
[8] Shan Ao, Weiyin Ren and Shoulain Tang, "Analysis and Reflection on the Security of Biometrics System", 2009.
[9] Thomas W. Lauer, "The Risk of E-Voting", Oakland University, USA, 2012.
VI. CONCLUSION
The proposed voting system benefits from user authentication through fingerprints, and the polling process is made easy with the use of QR codes. The main benefit is that it consumes comparatively less time than the older voting system. The system can be implemented easily in any area where voting needs to be done.
A future enhancement is to analyse compatible support over various distances in a wide-area manner.
Design of High Performance Single Precision Floating Point
Multiplier
Kusuma Keerthi
Department of Electronics
Sardar Patel Institute of Technology Mumbai, Maharashtra, India
Email: [email protected]
Abstract — The speed of an ALU depends greatly on the
speed of its multipliers and adders. The proposed work
deals with the implementation of a high performance,
single precision floating point multiplier using fast adders
and fast multipliers. Compared to the 32-bit floating point
multiplier which uses Wallace tree with Kogge-Stone adder
in the final stage for mantissa multiplication, the
implemented 32-bit floating point multiplier uses Vedic
multiplication for mantissa multiplication and was found
to have 25% improvement in speed and 33% reduction in
gate count, thereby reducing the total power consumption.
The floating point multiplier, which is based on IEEE
standard, has been implemented in Verilog HDL using
Xilinx 10.1 ISE simulation tool and targeted to a Virtex-4 FPGA with speed grade -12.
I. INTRODUCTION
With the advent of technology, the demand for high speed digital systems is on the rise. The multiplier is an important unit that affects the speed of almost every digital system. Compared to other operations in an
Arithmetic Logic Unit (ALU), the multiplier consumes
more time and power. Hence researchers have always
been trying to design multipliers which incorporate an
optimal combination in terms of speed, power and area.
Floating point describes a method of representing real
numbers in a way that can support a wide range of
values. Floating point units are widely used in a
dynamic range of engineering and technology
applications. This demands development of faster
floating point arithmetic circuits. The proposed work
deals with implementing an architecture for a fast
floating point multiplier compliant with the single
precision IEEE 754-2008 standard, which can be integrated into a single ALU along with a fast floating point adder and subtractor. The most common
representation is defined by the IEEE Standard for
Floating-Point Arithmetic (IEEE 754). It is a technical
standard established by the Institute of Electrical and
Electronics Engineers (IEEE) and the most widely used
standard for floating-point computation. Floating Point
numbers represented in IEEE 754 format are used in
most of the DSP Processors. It also specifies standards
for arithmetic operations and rounding algorithms.
Floating point arithmetic is useful in applications where
a large dynamic range is required or in rapid prototyping
applications where the required number range has not
been thoroughly investigated. Single precision
representation occupies 32 bits: a sign bit, 8 bits for
exponent and 23 bits for the mantissa. Double precision
representation occupies 64 bits: a sign bit, 11 bits for
exponent and 52 bits for the mantissa. Various
algorithms have been developed to improve the
performance of sequential multipliers and adders to
simplify their circuitry. The performance of some adders
and multipliers were analyzed using Xilinx ISE
simulation tool through which a fast multiplier and a fast
adder were chosen to implement a floating point
multiplier. This multiplier can be integrated into an ALU
along with a fast floating point adder and subtractor. A
Floating point multiplier is the most common element in
most digital applications such as digital filters, digital
signal processors, data processors and control units. In
most modern general purpose computer architectures,
one or more FPUs are integrated with the CPU.
II. IEEE STANDARD FOR BINARY
FLOATING POINT ARITHMETIC
The IEEE (Institute of Electrical and Electronics Engineers) has produced a standard to define floating-point representation and arithmetic, which came to be known as IEEE 754. The IEEE 754 Standard for Floating-Point Arithmetic is the most widely used standard for floating-point computation, and is followed by many hardware (CPU and FPU) and software implementations. Many computer languages allow or require that some or all arithmetic be carried out using IEEE 754 formats and operations. The standard specifies:
- Basic and extended floating-point number formats
- Add, subtract, multiply, divide, square root, remainder, and compare operations
- Conversions between integer and floating-point formats
- Conversions between different floating-point formats
- Conversions between basic format floating-point numbers and decimal strings
- Floating-point exceptions and their handling, including non-numbers (NaNs)
There are basically two binary floating-point formats: the 'Single Precision' and 'Double Precision' formats of IEEE 754. The single precision format is 32 bits wide.
A single precision number has 3 main fields, namely the sign, exponent and mantissa fields, as shown in Fig 1; thus a total of 32 bits is required for single precision number representation. To achieve a bias, 2^(n-1) - 1 is added to the actual exponent in order to obtain the stored exponent. This equals 127 for the eight-bit exponent of the single-precision format. The addition of the bias allows the use of exponents in the range from -127 to +128, corresponding to a stored range of 0-255 for a single precision number. The single-precision format offers a range from 2^-127 to 2^+127.
Sign: 1 bit wide, used to denote the sign of the number, i.e., 0 indicates a positive number and 1 a negative number.
Exponent: 8 bits wide, a signed exponent in excess-127 representation.
Mantissa: 23 bits wide, the fractional component.
Fig 1: Single Precision Floating point IEEE format
The double precision floating point number representation is shown in Fig 2. A double precision number is 64 bits wide and has the same three main fields: sign, exponent and mantissa. Thus, a total of 64 bits is needed for double-precision number representation. The bias 2^(n-1) - 1 equals 1023 for the 11-bit exponent of the double-precision format. The addition of the bias allows the use of exponents in the range from -1023 to +1024, corresponding to a stored range of 0-2047 for a double precision number. The double-precision format offers a range from 2^-1023 to 2^+1023.
Sign: 1 bit wide, used to denote the sign of the number, i.e., 0 indicates a positive number and 1 a negative number.
Exponent: 11 bits wide, a signed exponent in excess-1023 representation.
Mantissa: 52 bits wide, the fractional component.
Fig 2: Double Precision Floating point IEEE format
The exponent range for normalized numbers is [-126, 127] for single precision and [-1022, 1023] for double precision floating point numbers. IEEE reserves exponent field values of all 0s and all 1s to denote special values in the floating point scheme: signed zero, denormalized numbers, infinities and NaNs. The IEEE standard defines five types of exceptions that should be signalled through a one-bit status flag when encountered: Invalid Operation, Division by Zero, Inexact, Underflow and Overflow. Rounding is used when the exact result of a floating-point operation (or a conversion to floating-point format) would need more digits than there are in the significand. IEEE 754 requires correct rounding and specifies four rounding modes: Round to Nearest Even, Round to Zero, Round Up and Round Down.
III. FLOATING POINT MULTIPLIER DESIGN
The algorithm used for multiplying single-precision floating point numbers is shown in Fig 3. The floating point multiplication is carried out in four parts.
Fig 3: Flowchart of floating point multiplier
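As a quick check of the bias arithmetic described above, the three fields of a single-precision word can be unpacked with Python's standard struct module (an illustrative sketch only; the design itself is in Verilog):

```python
import struct

def fp32_fields(x: float):
    """Unpack sign, stored (biased) exponent and mantissa of an IEEE 754 single."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                # 1-bit sign
    exponent = (bits >> 23) & 0xFF   # 8-bit biased exponent (bias 127)
    mantissa = bits & 0x7FFFFF       # 23-bit fraction field
    return sign, exponent, mantissa

# -18.0 = -1.001b x 2^4, so the stored exponent is 127 + 4 = 131
```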
1. In the first part, the sign of the product is determined by performing an XOR operation on the sign bits of the two operands.
2. In the second part, the exponent bits of the operands are passed to an 8-bit adder stage and a bias of 127 is subtracted from the obtained output. The addition and bias subtraction operations are both implemented using the 8-bit Knowles fast adder, which was found to be the fastest among other
prefix tree adders.
3. In the third and most important part, the product of the mantissa bits is obtained. The 24-bit mantissa multiplication was done both by using the Wallace tree and by Vedic multiplication after pre-normalisation.
a. Modified Wallace tree multiplication:
Wallace tree multiplication was performed using 3:2 compressors and a final-stage 48-bit adder. The final-stage 48-bit addition was implemented using the Kogge-Stone prefix tree adder as well as the Knowles prefix tree adder. The Knowles prefix adder was found to be the fastest [2] compared to the Kogge-Stone prefix tree adder. The Vedic multiplication technique for mantissa multiplication was found to be the most efficient in terms of area, delay and power.
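The prefix-tree adders compared here all share the same generate/propagate prefix scheme, which can be sketched behaviourally in a few lines (Python used purely as an illustrative model; the actual adders are Verilog hardware):

```python
def prefix_add(a: int, b: int, width: int = 48) -> int:
    """Parallel-prefix (Kogge-Stone-style) addition: generate/propagate
    pairs are combined over log2(width) levels, then the carries are
    applied to produce the sum bits (modulo 2**width)."""
    g = [((a >> i) & (b >> i)) & 1 for i in range(width)]   # generate
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(width)]   # propagate
    dist = 1
    while dist < width:
        ng, np_ = g[:], p[:]
        for i in range(dist, width):
            ng[i] = g[i] | (p[i] & g[i - dist])   # group generate
            np_[i] = p[i] & p[i - dist]           # group propagate
        g, p = ng, np_
        dist <<= 1
    carry = [0] + g[:-1]                          # carry into each bit (cin = 0)
    s = 0
    for i in range(width):
        s |= ((((a >> i) ^ (b >> i)) & 1) ^ carry[i]) << i
    return s
```

The different prefix adders (Ladner-Fischer, Sklansky, Knowles, Han-Carlson, Kogge-Stone) differ only in which (i, i-dist) pairs are combined at each level, trading wiring and fan-out against depth.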
b. Vedic Multiplication Technique:
The word "Vedas", which literally means knowledge, carries the derivational meaning of a principal and limitless storehouse of all knowledge. The entire mechanics of Vedic mathematics is based on 16 sutras (formulas) and 13 upa-sutras (corollaries). The Urdhva-tiryagbhyam sutra is used for multiplication. To illustrate the multiplication algorithm, consider the multiplication of two binary numbers a3a2a1a0 and b3b2b1b0. The result of this multiplication needs more than 4 bits and is expressed as ...r3r2r1r0. The line diagram for multiplication of two 4-bit numbers is shown in Fig 4. For simplicity, each bit is represented by a circle. The least significant bit r0 is obtained by multiplying the least significant bits of the multiplicand and the multiplier.
Fig 4: Line diagram for Vedic multiplication
First, the least significant bits are multiplied, which gives the least significant bit of the product (vertical). Then, the LSB of the multiplicand is multiplied with the next higher bit of the multiplier and added to the product of the LSB of the multiplier and the next higher bit of the multiplicand (crosswise). The sum gives the second bit of the product, and the carry is added into the next-stage sum obtained by the crosswise and vertical multiplication and addition of three bits of the two numbers from the least significant position. Next, all four bits are processed with crosswise multiplication and addition to give a sum and carry. The sum is the corresponding bit of the product, and the carry is again added into the next-stage multiplication and addition of three bits, excluding the LSB. The same operation continues until the multiplication of the two MSBs gives the MSB of the product.
Thus the following expressions are obtained:
r0 = a0b0 (1)
c1r1 = a1b0 + a0b1 (2)
c2r2 = c1 + a2b0 + a1b1 + a0b2 (3)
c3r3 = c2 + a3b0 + a2b1 + a1b2 + a0b3 (4)
c4r4 = c3 + a3b1 + a2b2 + a1b3 (5)
c5r5 = c4 + a3b2 + a2b3 (6)
c6r6 = c5 + a3b3 (7)
with c6r6r5r4r3r2r1r0 being the final product. This is the general mathematical formula applicable to all cases of multiplication.
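Equations (1)-(7) can be checked directly in software. The sketch below (Python, for illustration only) performs the same column-wise crosswise-and-vertical accumulation:

```python
def vedic4(a: int, b: int) -> int:
    """Urdhva-tiryagbhyam multiply of two 4-bit numbers, accumulating each
    result column r_k exactly as in equations (1)-(7)."""
    abit = [(a >> i) & 1 for i in range(4)]
    bbit = [(b >> i) & 1 for i in range(4)]
    result, carry = 0, 0
    for k in range(7):                       # columns r0 .. r6
        col = carry + sum(abit[i] * bbit[k - i]
                          for i in range(4) if 0 <= k - i <= 3)
        result |= (col & 1) << k             # r_k is the LSB of the column sum
        carry = col >> 1                     # c_k propagates to the next column
    return result | (carry << 7)             # final carry c6 completes the product
```

Exhaustively comparing `vedic4(a, b)` against `a * b` for all 4-bit operands confirms the column equations.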
24-bit multiplication is performed using the Urdhva-tiryagbhyam sutra for the mantissa multiplication of the floating point multiplier. This method was found to be faster than the Wallace tree multiplier using the Kogge-Stone adder. However, it is not an efficient algorithm for the multiplication of very large numbers, as a lot of propagation delay is involved in such cases. To deal with this problem, the Nikhilam sutra presents an efficient method of multiplying two large numbers.
4. In the fourth part, the product of the mantissas is normalized and truncated. To do so, the leading one is detected and the exponent is adjusted accordingly. The leading one is the implied bit and hence dropped. The remaining bits are truncated to a 23-bit value using the round-to-zero technique to give the 23-bit mantissa of the product.
Example of Floating Point Multiplier
Consider two floating point numbers a = -18.0 and b = +9.5.
Expected floating point product = (-18.0) x (+9.5) = -171.0
a = -10010.0 = -00010010.0 = -1.00100000000000000000000 x 2^4
b = +1001.1 = +00001001.1 = +1.00110000000000000000000 x 2^3
sign of a = 1 = s_a
sign of b = 0 = s_b
biased exponent of a = 127 + 4 = 131 = 10000011 = e_a
biased exponent of b = 127 + 3 = 130 = 10000010 = e_b
mantissa of a = 00100000000000000000000 = mant_a
mantissa of b = 00110000000000000000000 = mant_b
fp_a = 1 10000011 00100000000000000000000 = C1900000h
fp_b = 0 10000010 00110000000000000000000
= 41180000h
Calculation of the sign of the product 's_out':
s_out = s_a xor s_b = 1 xor 0 = 1
Calculation of the exponent of the product 'e_out':
Step 1: Add e_a and e_b:
10000011 + 10000010 = 1 00000101
Step 2: The bias of 127 is subtracted from the sum to get the exponent of the output:
1 00000101 - 01111111 = 10000110 = e_out
The single precision Floating Point Multiplier unit has the following inputs:
- fp_a and fp_b are the two floating point 32-bit operands in IEEE format.
- expocin is the 1-bit carry-in input for exponent addition and is always assumed to be zero.
- bias is an 8-bit input which is always 01111111 in binary = 127 in decimal.
The single precision Floating Point Multiplier unit has the following output:
- fp_prod is the 32-bit floating point product output in IEEE format.
Calculation of the mantissa of the product 'm_out':
Step 1: Extract both mantissas, adding 1 as the MSB for normalization to form 24-bit mantissas:
24-bit mant_a = 100100000000000000000000
24-bit mant_b = 100110000000000000000000
Step 2: Multiply the 24-bit mant_a and mant_b to get the 48-bit product:
(100100000000000000000000) x (100110000000000000000000)
= 010101011000000000000000000000000000000000000000
Step 3: The leading 1 of the 48-bit product is found and the remaining bits are truncated to the 23-bit output mantissa:
m_out = 01010110000000000000000, e_out = 10000110
Floating Point Product (in binary) = 1 10000110 01010110000000000000000 = C32B0000h
biased exponent = 10000110 = 134
unbiased exponent = 134 - 127 = 7
Floating Point Product (in decimal) = -1.01010110000000000000000 x 2^7 = -10101011.0 = -171.0
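The four parts of the algorithm and the worked example above can be reproduced with a bit-level sketch (Python for illustration; the actual design is in Verilog, and the mantissa is truncated with round-to-zero as in the paper):

```python
def fp32_mul(a_bits: int, b_bits: int) -> int:
    """Single-precision multiply on raw 32-bit words, in four parts,
    with round-to-zero truncation of the mantissa."""
    # Part 1: sign is the XOR of the operand signs
    sign = (a_bits >> 31) ^ (b_bits >> 31)
    # Part 2: add biased exponents, subtract the bias of 127
    exp = ((a_bits >> 23) & 0xFF) + ((b_bits >> 23) & 0xFF) - 127
    # Part 3: 24-bit mantissas (implied leading 1 restored), 48-bit product
    ma = (a_bits & 0x7FFFFF) | 0x800000
    mb = (b_bits & 0x7FFFFF) | 0x800000
    prod = ma * mb
    # Part 4: normalise on the leading one and truncate to 23 bits
    if prod & (1 << 47):        # product in [2, 4): shift right, bump exponent
        exp += 1
        mant = (prod >> 24) & 0x7FFFFF
    else:                       # product in [1, 2): leading one at bit 46
        mant = (prod >> 23) & 0x7FFFFF
    return (sign << 31) | ((exp & 0xFF) << 23) | mant

# Reproduces the worked example: (-18.0) x (+9.5) = -171.0 (C32B0000h)
```

(Exception cases such as zeros, infinities and NaNs are not handled in this sketch.)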
IV. SNAPSHOTS OF SIMULATION
The RTL schematic and the simulation result of the 32-bit Floating Point Multiplier are shown in Fig 5 and Fig 6 respectively.
Fig 5: RTL Schematic of floating point multiplier
Fig 6: Simulation result of floating point multiplier
32-bit input fp_a = 11000001100100000000000000000000 = C1900000h = -18.0
32-bit input fp_b = 01000001000110000000000000000000 = 41180000h = +9.5
32-bit output fp_prod = 11000011001010110000000000000000 = C32B0000h = -171.0
V. SYNTHESIS RESULTS AND COMPARISONS
The performance analysis of prefix tree adders is summarized in Table 1, and the performance analysis of different architectures of the single precision floating point multiplier is summarized in Table 2.
TABLE 1: PERFORMANCE ANALYSIS OF 48-BIT PREFIX TREE ADDERS [2]

Prefix Tree Adder Type | Delay (ns) | LUTs | Slices | Gate Count
Ladner-Fischer         | 21.22      | 132  | 74     | 876
Sklansky               | 21.87      | 139  | 78     | 903
Knowles                | 9.4        | 302  | 173    | 1827
Han-Carlson            | 12.43      | 224  | 118    | 1416
Kogge-Stone            | 10.03      | 312  | 164    | 1890
________________________________________________________________________________________________
International Conference on Engineering and Applied Science - ICEAS, 2014
ISBN: 978-3-643-24819-09, Bangalore, 13th July, 2014
41
Design of High Performance Single Precision Floating Point Multiplier
________________________________________________________________________________________________
TABLE 2: PERFORMANCE ANALYSIS OF FLOATING POINT MULTIPLIER

Floating Point Multiplier Architecture | Delay (ns) | LUTs | Slices | Gate Count
FPM using Kogge-Stone [1]              | 18.78      | 2270 | 1269   | -
FPM using Knowles [proposed]           | 16.95      | 2036 | 1092   | 16215
FPM using Vedic [proposed]             | 14.17      | 1819 | 999    | 11199

VI. CONCLUSION AND FUTURE SCOPE
Conclusion:
The synthesis and simulation reports show some interesting results: minimization of delay and total gate count compared to the existing design. Fig 7 compares the delay of the three architectures of the floating point multiplier. The optimized architecture using the modified Wallace tree with the Knowles adder shows a 12% improvement in speed, and the one using the Vedic multiplication technique shows a 25% improvement in speed, compared to the existing architecture which uses a Wallace tree with a Kogge-Stone adder [1] for mantissa multiplication. Fig 8 compares the total gate count of the two optimized architectures. The optimized architecture using Vedic multiplication shows a 31% improvement in total gate count compared to the architecture which uses the Wallace tree with the Knowles adder for mantissa multiplication. The lower the gate count, the lower the total power consumed.
The Single Precision Floating Point Multiplier unit has been designed using fast adders and fast multipliers. The IEEE 754 standard floating point representation has been used. The unit has been coded in Verilog and synthesised for the Virtex-4 FPGA using the Xilinx 9.2 ISE tool. The Single Precision Floating Point Unit has
been built using power and area efficient fast adders and
multipliers to improve the performance. Knowles Prefix
Tree adder was found to be the fastest adder as
compared to Kogge Stone adder and other prefix tree
adders for large numbers. Therefore, Knowles Prefix
Tree adder is used in the design of final stage adder of
Wallace tree used for mantissa multiplication and in the
exponent addition. Vedic multiplier was found to be the
fastest multiplier as compared to Wallace tree multiplier
using 3:2 compressors. Therefore, Vedic multiplier is
used in the design of mantissa multiplication of floating
point multiplier instead of Modified Wallace Tree
multiplier to improve the efficiency of floating point
unit.
Future Scope:
The designed arithmetic unit operates on 32-bit
operands. It can be designed for 64-bit operands to
enhance precision. It can be extended to have more
mathematical operations like addition, subtraction,
division, square root, trigonometric, logarithmic and
exponential functions. Further, implementing higher
compressors for the Wallace tree used for mantissa
multiplication can further increase the efficiency of the
FPU in terms of speed. Exceptions like overflow, underflow, inexact, division by zero, infinity, zero, NaN etc. can be incorporated in the floating point unit. A few
researchers have shown that there is a considerable
improvement in the delay by using 4:2, 5:2, 6:2, 7:2
compressors for Wallace tree as compared to Vedic
multiplier. Further research is therefore required on the efficiency of the various Wallace tree design approaches for mantissa multiplication, based on issues such as area, delay and power consumption.
Fig 7: Propagation Delay of floating point multiplier
Fig 8: Total Gate Count
REFERENCES
[1] Anna Jain, Baisakhy Dash, Ajit Kumar Pande, Muchharla, "FPGA Design of a Fast 32-bit Floating Point Multiplier Unit", Proc. of 2012 International Conference on Devices, Circuits and Systems (ICDCS), 15-16 March 2012, pp. 545-547.
[2] Kusuma R, Kavitha V, "Performance Analysis of 48-bit Prefix Tree Adders", Proc. of 2013 ICECE, 24th April 2013, pp. 17-21.
[3] G. Ganesh Kumar, V. Charishma, "Design of High Speed Vedic Multiplier using Vedic Mathematics Techniques", International Journal of Scientific and Research Publications, Vol. 2, Issue 3, March 2012, ISSN 2250-3153.
[4] Neil H.E. Weste, David Harris, Ayan Banerjee, "CMOS VLSI Design", Third Edition.
[5] Peter J. Ashenden, "Digital Design: An Embedded Systems Approach Using Verilog", Elsevier.
Intelligent Fuel Fraudulence Detection Using Digital Indicator
Yashwanth K M, Nagesha S, Naveen H M, Ravi, Mamatha K R
Students, Assistant Professor,
Dept. of Electronics and Communication Engineering
BMS Institute of Technology, Bangalore, India
Email: [email protected], [email protected]
Abstract — In today's world, no actual record of fuel filled and fuel consumed in vehicles is maintained, and fuel (petrol/diesel) fraudulence at filling stations is increasing to a peak level. Most petrol bunks today have manipulated their pumps so that the displayed amount matches what is entered, while the quantity of fuel filled in the customer's tank is much less than the displayed value, i.e., the pumps are tampered with for the benefit of the petrol bunk owner. This results in huge profits for the petrol bunks while the customers are cheated. Also, the present analog fuel indicator in vehicles gives only an approximate measure of the fuel in the tank. To overcome these problems, a microcontroller based fuel monitoring and vehicle tracking system is proposed here. The AT89C51 microcontroller used in this system is an ultra-low power, 8-bit CISC architecture controller. A Real Time Clock (RTC) is also provided to keep track of time. GPS (Global Positioning System) and GSM (Global System for Mobile Communication) technology to track the vehicle is also proposed; it sends a message to the vehicle owner if the vehicle is stolen, and also reports both the quantity and quality of the fuel filled. The embedded control system can achieve many tasks of effective fleet management, such as fuel monitoring and vehicle tracking. In order to indicate the appropriate measure of fuel, a digital indicator is used which displays the related measure of fuel. A buzzer/voice message and LED display are used to give a warning before the tank runs empty.
Index Terms — Fuel fraudulence, AT89C51 microcontroller, Digital fuel indicator, fuel level sensor, fuel quality sensor, GSM, GPS
I. INTRODUCTION
In this modern and fast-moving world everything is being digitized so as to be easily understandable and to give exact measurements. Considering this idea, a digital fuel indicator is used which shows the exact amount of fuel remaining in the tank, as compared to the previously used gauge meter in which a needle moves to give a rough estimate of the fuel left. A fuel indicator is an instrument used to indicate the level of the fuel contained in the tank.
Commonly used in cars and bikes, fuel gauges may also be used in any tank, including underground storage tanks. As used in cars, the fuel gauge has two parts:
- The sender unit
- The indicator
The sending unit usually uses a float connected to a variable resistor. When the tank is full, the resistor is set to its low resistance value. As the tank becomes empty, the float drops and slides a moving contact along the resistor, increasing its resistance, finally reaching its highest value when the tank is empty. In addition, when the resistance reaches a certain point, it will also turn on a "low fuel" light on some vehicles. Meanwhile, the indicator unit (usually mounted on the instrument panel) measures and displays the amount of electrical current flowing through the sending unit. When the tank level is high and maximum current is flowing, the needle points to "F", indicating a full tank. When the tank is empty and the least current is flowing, the needle points to "E", indicating an empty tank.
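The float-and-resistor behaviour described above can be modelled as a simple linear sketch. The resistance and supply values below are illustrative assumptions, not figures from the paper:

```python
def fuel_gauge_current(level_fraction: float,
                       r_full: float = 10.0,    # ohms when full (assumed)
                       r_empty: float = 240.0,  # ohms when empty (assumed)
                       v_supply: float = 12.0) -> float:
    """Current through the sender's variable resistor for a given fill level.
    Resistance is lowest when full, so current (and the needle) is highest."""
    r = r_full + (1.0 - level_fraction) * (r_empty - r_full)
    return v_supply / r

# A full tank draws maximum current (needle at 'F'); empty draws least ('E').
```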
Finally, once fuel is filled at a bunk, the device sends an SMS to the vehicle owner indicating the amount, quantity, date, time etc. using GSM, and one can also find the exact location of the vehicle. An added feature of this fuel level indicator is that the reserve condition is pre-announced to the user with an alarm, which helps to switch to the reserve position before the
damage. The scope of this work is it




Gives exact amount of fuel in tank.
Indicates the emptiness of fuel during final stage
by giving beep sound or voice message.
Helps to find vehicle theft faster.
Can be implemented in all vehicles
II. EXISTING METHODS
There are many sensor-based techniques available in the market to measure liquid level; they give a rough idea of the quantity of liquid but fail to provide an exact measure, as with car fuel meters, where one only gets an idea of whether the tank is full, empty, half full, etc. The liquid level detector and optimizer play an important role in tanks to indicate the level of liquid
of a particular density. Rashmi R et al created a digital display of the exact amount of fuel contained in the vehicle tank, which also helps in cross-checking the quantity of fuel filled at a bunk. Once the fuel is filled at a bunk, the device also sends an SMS to the vehicle owner
indicating the amount, quantity, date, time etc., and also gives the exact location of the vehicle. The technologies used are the 8051 microcontroller, embedded C, the Keil compiler, GSM/GPS for mobile communication, a level sensor and an LCD [1]; however, the quality of the fuel is not checked.
Sachin S. Aher and Kokate R. D proposed a system [2]
in which microcontroller is the brain of system which
stores the status of fuel level in a fuel tank and position
of vehicle. The system is powered by DC power supply
with proper specifications. This supply can be provided
from batteries. Fuel Sensors 1 and 2 i.e. reed switches
will be used to sense the quantity of fuel filled and
quantity of fuel consumed and notify microcontroller
about the level of fuel in the fuel tank. Fuel sensor 1 is
placed at the inlet of fuel tank, as the disk of flow meter
rotates, due to the magnet present on the disk it will make
and break the reed switch, so square pulses will be
available as an input to the microcontroller. By counting
these pulses and multiplying it by a flow factor we will
get exact amount of fuel filled. Fuel sensor 2 is placed at
the outlet of fuel tank, as the disk of flow meter rotates,
due to the magnet present on the disk it will make and
break the reed switch, so square pulses will be available
as an input to the microcontroller. By counting these
pulses and multiplying it by a flow factor we will get
exact amount of fuel consumed. From this we can
exactly calculate the amount of fuel present inside a tank.
These different logs of fuel filling and consumption are
stored in the memory. The GSM module is interfaced to
the microcontroller. By sending different commands to the GSM module placed in the vehicle unit, the owner can get the information of the different logs and the location of the vehicle stored in the memory. Thus the owner can keep a record of fuel and track the vehicle accurately and continuously, which helps with effective fleet management.
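The pulse-counting arithmetic described above amounts to multiplying each reed-switch count by a calibration flow factor and differencing inlet and outlet volumes. A sketch (the flow factor value is an assumption for illustration):

```python
def fuel_volume_l(pulse_count: int, flow_factor_ml: float = 2.25) -> float:
    """Volume in litres from reed-switch pulses; flow_factor_ml is the
    millilitres-per-pulse calibration constant (value assumed here)."""
    return pulse_count * flow_factor_ml / 1000.0

def fuel_in_tank_l(inlet_pulses: int, outlet_pulses: int,
                   flow_factor_ml: float = 2.25) -> float:
    """Fuel filled (sensor 1, inlet) minus fuel consumed (sensor 2, outlet)."""
    return (fuel_volume_l(inlet_pulses, flow_factor_ml)
            - fuel_volume_l(outlet_pulses, flow_factor_ml))
```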
The mileage of any vehicle is affected by several factors, which Nitin Jade et al have considered, using economical, useful, intelligent and quick-responding sensors to calculate the effect of all the factors both directly and indirectly. Each sensor is situated at its own particular place to perform its operation. The sensors collect all the data in the running vehicle, and the collected information moves to the E.C.U. (Electronic Control Unit), a controlling unit that commands all the individual sensors, powers them, and forwards the collected data to the C.P.U. (Central Processing Unit). At this unit the data is finally computed into numeric form by means of programming. All the data from the sensors is converted into one mileage figure, i.e., how far the vehicle can run. The information, in coded form, moves towards the modem. The modem (GSM) modulates or demodulates the information, and finally the data is displayed on the digital fuel indicator in numeric form [3].
A continuous fuel level sensor using a side-emitting optical fiber is introduced in [4]. This sensor operates on the modulation of the light intensity in the fiber, caused by the change in the cladding's acceptance angle when it is immersed in fuel. The fiber is bent into a spiral shape to increase the sensor's sensitivity by increasing the attenuation coefficient and the fiber's submerged length relative to the liquid level. The attenuation coefficients of the fiber with different bend radii in air and water were acquired through experiments. The fiber is designed as a spiral with a steadily changing slope, and its response to water level is simulated. Experimental results taken in water and aviation kerosene demonstrate a performance of 0.9 m range and 10 mm resolution [4].
In [7], a technique is proposed to measure the amount of liquid available in a tank. The device digitally displays the level of liquid inside the tank using a load sensor, with a measured accuracy of 96.36%-98%. Thus it is an efficient device, made keeping in mind the petroleum thefts at various petrol bunks at the time of filling of tanks.
III. PROPOSED WORK
Fig 1.Block Diagram of proposed work
A. Microcontroller (AT89C51)
An 8051-architecture microcontroller (AT89C51) is used as the microcontroller unit. The 8051 is an 8-bit Complex Instruction Set Computer (CISC) with 4 KB of program memory and 128 bytes of RAM. The firmware resides in the microcontroller's program memory.
A Microcontroller has all of the essential blocks to read
from a keypad, write information to the display, control
the heating element and store data such as cooking time.
In addition to simple ON/OFF inputs and outputs, many
microcontrollers have abilities such as counting input
pulses, measuring analog signals, performing pulse-width modulated output, and many more.
B. GSM Modem
GSM (Global System for Mobile Communications) is the world's most widely used mobile platform. Mobile phones with SIM cards use GSM technology to help you communicate with your family, friends and business associates.
GSM systems have the following advantages over basic landline telephony systems:
- Mobility
- Easy availability
- High uptime
GSM technology is mostly used for talking to family, friends and business colleagues, while the communication features of telephone landlines are used for internet, e-mail, data connectivity, remote monitoring, computer-to-computer communication and security systems. In the same way we can use GSM technology and benefit from its advantages.
Now access control devices can also communicate with servers and security staff through SMS messaging. A complete log of transactions is available at the head-office server instantly, without any wiring involved, and the device can instantly alert security personnel on their mobile phone in case of any problem. BioEnable is introducing this technology in all its fingerprint access control and time attendance products, achieving high security and reliability.
C. Fuel Level Sensor (JK-CLFS-07)
Level sensors detect the level of substances to be filled in the tank, including liquids, oils, gas etc.; all such substances flow to become essentially level in their containers (or other physical boundaries) because of gravity. Continuous level sensors measure the level of a substance in a certain place, while point-level sensors only indicate whether the substance is above or below the sensing point. Generally the latter detect levels that are excessively high or low.
Fig 2: Fuel Level Sensor
D. Density Sensor (ULB6-A)
A unique fluid density sensor has been developed by ISSYS. This sensing approach uses a small, hollow silicon micro-tube that vibrates at a given frequency. The vibration frequency changes as the density or concentration of the liquid in the tube changes, so the density of the fluid can be measured from the vibrational frequency of the micro-tube. The density or API output can be used with petrochemicals and biofuels to indicate the type of fuel and its purity, and to blend fuels together.
E. LCD Display
A high quality 16-character by 2-line intelligent display module with backlighting is used; it works with almost any microcontroller.
Features:
- 16 characters x 2 lines
- 5x7 dot matrix character + cursor
- Equivalent LCD controller/driver built-in
- 4-bit or 8-bit MPU interface
- Standard type
- Works with almost any microcontroller
- Great value pricing
IV. SYSTEM DESIGN
The microcontroller and the GSM unit is interfaced with
the fuel level sensor of the vehicle. Every vehicle has a
separate number, which is given by the
corresponding authority. The GSM unit is fixed in the
vehicle. The amount of fuel is stored in memory of the
microcontroller. Using keil software and embedded C,
SMS can be sent through Modem to that particular
mobile number.
After the readings the controller will send data to the
modem. Modem, in turn sends data to the other end. On
other end the vehicle owner will receive the data in
the form of a fuel existing before refueling, fuel added
while refueling and the total amount of fuel in the tank.
Using GSM one can get the response very fast due to
which time is saved. After sending the readings to the
vehicle owner, the owner can request for the location
of the vehicle by sending an SMS to the SIM card used
in the GSM. The vehicle owner at any point of time can
request for the amount of fuel and the location of the
vehicle. After all this process the microcontroller will
The substance to be measured can be inside a container
or can be in its natural form (e.g. A river or a lake).
The level measurement can be either continuous or point
values. Continuous level sensors measure level within a
specified range and determine the exact amount of
________________________________________________________________________________________________
International Conference on Engineering and Applied Science - ICEAS, 2014
ISBN: 978-3-643-24819-09, Bangalore, 13th July, 2014
46
Intelligent Fuel Fraudulence Detection Using Digital Indicator
________________________________________________________________________________________________
reset the memory to get the fresh readings during the next
re-fuelling.
accurate, more reliable and allow for added feature the
benefit for the customer. In the near future different
vehicle company manufactures can implement this kind
of fuel system which also provides security for vehicle
owner. Not only the measurement is more accurate but
the customers are also not cheated for their hard earned
money, because of LCD display not only gives the
amount of fuel present in tank, it also gives the purity of
the fuel.
ACKNOWLEDGEMENT
The authors are thankful to the management of BMS
INSTITUTE OF TECHNOLOGY for providing all the
support for this work.
REFERENCES
Fig 3 Experimental setup
A. Results and Discussion
[1]
The practical output obtained is same as the expected
result, intelligent fuel fraudulence detection using digital
indicator is able to perform the following features
successfully.
Rashmi.R and Rukmini Durgale, “The novel
based embedded digital fuel gauge”, International
conference
on
computing
and
control
engineering(ICCCE 2012), 12 & 13 April 2012
[2]
Sachin S. Aher
and Kokate R. D, “Fuel
monitoring and vehicle tracking using GPS, GSM
and MSP430f149”, International journal of
advances in engineering and technology, July
2012, vol.4, issue 1, pp.571-578
[3]
Nitin Jade, Pranjal Shrimali, Asvin Patel, Sagar
Gupta, “Modified Type Intelligent Digital Fuel
Indicator System” , IOSR Journal of mechanical
and civil engineering.
[4]
Chengrui Zhao, Lin Ye, Xun Yu, and JunfengGe,
“Continuous Fuel Level Sensor Based on Spiral
Side-Emitting Optimizer”, Hindawi Publishing
Corporation, Journal of Control Science and
Engineering, Volume 2012, Article ID 267519, 8
pages doi:10.1155/2012/267519
[5]
Muhammad Ali Mazidi, Janice Mazidi, Rolin
McKinlay, "8051 Microcontroller and Embedded
Systems" ,The (2ndEdition), Publisher:Prentice
Hall, P 2005-10-06.
[6]
http://www.classictiger.com/mustang/OilPressure
Gauge/OilPressureGauge.htm




B.




Gives exact amount of fuel filled in tank in terms
of numeric.
Indicates the lowest level of the fuel through
buzzer indicator.
Sends information about quantity of fuel to
owner, while refilling the tank through GSM
modem.
On request of the owner, the location of the
vehicle can be tracked.
Future Scope
In case of theft of vehicle, it can be stopped i.e.
the engine can be shut down remotely using
additional software.
Location of the vehicle can be determined at any
point of time
It can be implemented in food and grains trucks.
It can be made with too lower cost and faster
performance.
V CONCLUSION
The digital fuel indicator design described above has
many new features added to enhance the monitoring and
tracking operation using recent technologies. This paper
attempts to design best prototype for the same. It is more
Image Enhancement Technique for Fingerprint Recognition Process
S. Gayathri, Dept of E&C, SJCE, Mysore, Karnataka, India
V. Sridhar, Principal, PESCE, Mandya, Karnataka, India
Email: [email protected], [email protected]
Abstract - Image enhancement is a preprocessing
technique used to reduce the noise that is generally
present in acquired fingerprint images. This noise
prevents accurate minutiae from being extracted, and the
reliability of the fingerprint recognition process depends
heavily on the minutiae. Hence it is essential to
preprocess the fingerprint image before extracting the
reliable minutiae used for matching two fingerprint
images. The image enhancement technique followed by
minutiae extraction completes the fingerprint
recognition process. The design and implementation of
an image enhancement technique for the fingerprint
recognition process using HDL coding on a Virtex-5
FPGA development board is proposed. Further, the result
obtained from the hardware design is compared with that
of software using the MatLab simulation tool.
Keywords: reliability, quality, minutiae, preprocessing,
enhancement, hardware
I. INTRODUCTION
The most critical step in automatic fingerprint
recognition system is to extract minutiae from the input
fingerprint image. However, the performance of a
minutiae extraction process relies heavily on the quality
of the input fingerprint image. In order to ensure the
success of an automatic fingerprint recognition system,
the quality of the input fingerprint image must be good.
In most cases, however, the acquired fingerprint images
are of poor quality. Hence it is essential to incorporate
an image enhancement technique to improve the quality
of the fingerprint image.
Image enhancement technique is a preprocessing
technique which improves the clarity of ridges against
valleys. This facilitates precise extraction of minutiae.
A well enhanced fingerprint image will provide
extraction of reliable minutiae by eliminating spurious
features which are created due to noise and possibly by
artifacts. The image enhancement technique followed by
a minutiae extraction completes the fingerprint
recognition process. Thus fingerprint recognition
process provides a set of minutiae, which is used for
matching two fingerprints which constitute fingerprint
Recognition system.
The image enhancement technique consists of five
blocks, namely normalization, orientation field
estimation, filtering, binarization, and thinning.
Normalization is the first step in Image enhancement
process to standardize the pixel intensity by adjusting
the range of gray level to a determined mean and
variance.
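The pixel-wise mapping behind this step can be sketched as follows; it follows the Hong-Jain normalization used later in the methodology, with the desired mean and variance values (100 each) chosen purely for illustration:

```python
def normalize_pixel(i: float, mean: float, var: float,
                    m0: float = 100.0, var0: float = 100.0) -> float:
    """Map one gray value so the image acquires mean m0 and variance var0.

    Pixels above the image mean are pushed above m0, pixels below it
    are pushed below m0, scaled by the variance ratio.
    """
    d = ((var0 * (i - mean) ** 2) / var) ** 0.5
    return m0 + d if i > mean else m0 - d
```

Applying this to every pixel standardizes the dynamic range without changing the ridge structure itself.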
The orientation field of a fingerprint image defines the
local orientation of the ridges in the fingerprint. The
gradient-based approach is employed in which the
orientation vector is orthogonal to the gradient. This
provides the orientation estimation of the fingerprint
image.
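The gradient-based estimate can be sketched per block as below; this is a simplified form of the least-mean-square estimator (flat lists of per-pixel gradients stand in for a block, and block splitting and smoothing are omitted):

```python
import math

def block_orientation(gx: list, gy: list) -> float:
    """Least-mean-square ridge orientation of one block, from its
    per-pixel x and y gradients (gradient-based estimator)."""
    vx = sum(2 * x * y for x, y in zip(gx, gy))
    vy = sum(x * x - y * y for x, y in zip(gx, gy))
    return 0.5 * math.atan2(vx, vy)  # ridge angle in radians
```

The doubled angle in the atan2 argument is what makes the estimate insensitive to the 180-degree ambiguity of ridge direction.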
The aim of filtering is to separate the foreground from
the background areas. The foreground is associated with
the region that contains information of interest with
ridges and valleys. The background area does not
contain valid information and corresponds to the region
outside the borders of fingerprint.
Binarization process compares each pixel to some
threshold and then changes its value to either pure white
or pure black. The threshold used is usually either the
ideal or adaptive. Here adaptive threshold is used.
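A minimal sketch of such a mean-based adaptive binarization follows; for brevity the whole-image mean stands in for a local window mean, and the flat pixel list is illustrative:

```python
def binarize(pixels: list, threshold: float = None) -> list:
    """Binarize a flat list of gray values.

    If no threshold is given, the image mean is used — a simple
    adaptive choice; passing threshold=128 gives the 'ideal' variant.
    """
    if threshold is None:
        threshold = sum(pixels) / len(pixels)
    return [1 if p >= threshold else 0 for p in pixels]
```

The adaptive form tracks illumination differences between images, which is why it is preferred here over a fixed mid-scale cut.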
Thinning process is used to skeletonize the binary image
by reducing all lines to a single pixel thickness.
Thinning is a morphological operation that successively
erodes away the foreground pixels until they are one
pixel wide. The results of thinning show that the
connectivity of the ridge structures is well preserved.
II. IMAGE ENHANCEMENT TECHNIQUE
Fingerprint recognition is one of the most widely used
biometric systems due to its ease of acquisition, high
distinctiveness, persistence and acceptance by the public
[1]. The performance of the recognition system depends
heavily on the extraction of minutiae from the
fingerprint image.
A. Related work
Adaptive normalization based on block processing is
suggested for the improvement of fingerprint images.
With the ridge direction, the ridge frequency is selected
by utilizing the directional projection. The local property
of the adaptive normalization process ensures a reliable
fingerprint texture region of the given fingerprint image,
even when the image is of poor quality [2].
A low-cost FPGA implementation of an image
normalization system, which is part of a fingerprint
enhancement algorithm, is discussed in [3]. The
fingerprint enhancement algorithm ensures better
performance of an automatic fingerprint identification
system. This system uses a fixed-point representation to
handle all the data processing.
A fast fingerprint enhancement algorithm, which
adaptively improves the clarity of ridge and valley
structures of the input fingerprint image based on the
estimated local ridge orientation and frequency, is
proposed in [4]. The performance of the image
enhancement algorithm is evaluated using the goodness
index of the extracted minutiae and the accuracy of an
online fingerprint verification system.
The orientation field of a fingerprint image defines the
local orientation of the ridges contained in the
fingerprint. The least mean square estimation method is
employed to estimate the orientation field from the
normalized fingerprint image [5].
A field programmable gate array (FPGA) is a good
choice for implementing a fingerprint recognition
application because it has large logic capacity and
memory resources [6].
Reconfigurable computing adds to the traditional
hardware/software design flow a new degree of freedom
in the development of electronic systems [7]. However,
the physical implementation of an automatic fingerprint
authentication system is still a challenging task; until
now, only the initial stages of the biometric recognition
algorithm have been tested.
Fingerprint image enhancement through reconfigurable
hardware accelerators using hardware time multiplexing
achieves a saving of two orders of magnitude in silicon
area compared with a general-purpose microcontroller
system [8].
Fingerprint image processing through reconfigurable
hardware [9], implemented on Virtex-4 by means of
hardware-software co-design techniques, aims at a
reduction in the execution time and improved
performance.
B. METHODOLOGY
The image enhancement technique for the fingerprint
recognition process involves different blocks:
Normalization, Orientation field estimation, Filtering,
Binarization, and Thinning.
NORMALIZATION
An adaptive normalization algorithm based on the local
property of the given fingerprint image is proposed.
Normalization maps the image to a pre-specified mean
and variance, which enhances the quality of the
fingerprint image.
The input fingerprint image is represented by I, defined
as an N x M matrix, where I(i, j) represents the intensity
of the pixel at the i-th row and j-th column. The mean M
and variance VAR of the given image are computed as
$M = \frac{1}{NM}\sum_{i=1}^{N}\sum_{j=1}^{M} I(i,j)$   (1)
$\mathrm{VAR} = \frac{1}{NM}\sum_{i=1}^{N}\sum_{j=1}^{M}\left(I(i,j)-M\right)^2$   (2)
M and VAR are the computed mean and variance of the
input fingerprint image. Hong and Jain [4] have
employed (3) for the normalization process. The
normalized image is represented by G(i, j) as follows:
$G(i,j) = \begin{cases} M_0 + \sqrt{\dfrac{\mathrm{VAR}_0\,(I(i,j)-M)^2}{\mathrm{VAR}}}, & \text{if } I(i,j) > M \\[1ex] M_0 - \sqrt{\dfrac{\mathrm{VAR}_0\,(I(i,j)-M)^2}{\mathrm{VAR}}}, & \text{otherwise} \end{cases}$   (3)
where $M_0$ and $\mathrm{VAR}_0$ are the desired mean and variance
values.
ORIENTATION FIELD ESTIMATION
The orientation field of a fingerprint image defines the
local orientation of the ridges contained in the
fingerprint image. Based on the least mean square
estimation method proposed by Hong [4], let $\theta$ be
defined as the orientation field of a fingerprint image,
with $\theta(i,j)$ representing the local ridge orientation at
pixel (i, j). The main steps of the algorithm are as
follows:
1. Divide the input image into blocks of size w x w.
2. Compute the gradients $\partial_x(i,j)$ and $\partial_y(i,j)$ at each
pixel.
3. Estimate the local orientation using the following
equations.
$V_x(i,j) = \sum_{u}\sum_{v} 2\,\partial_x(u,v)\,\partial_y(u,v)$   (4)
$V_y(i,j) = \sum_{u}\sum_{v}\left(\partial_x^2(u,v) - \partial_y^2(u,v)\right)$   (5)
$\theta(i,j) = \frac{1}{2}\tan^{-1}\!\left(\frac{V_x(i,j)}{V_y(i,j)}\right)$   (6)
where the sums run over the w x w block centered at
(i, j). Here $\theta(i,j)$ is the least mean squared estimate of the
local ridge orientation of the block centered at pixel
(i, j).
GABOR FILTER
The Gabor filter preserves the continuity of the ridge
flow pattern and enhances the clarity of the ridge and
valley structures. The general function of the Gabor
filter [4] can be represented as
$G(x,y;\theta,f_0) = \exp\!\left\{-\frac{1}{2}\left[\frac{x_\theta^2}{\sigma_x^2} + \frac{y_\theta^2}{\sigma_y^2}\right]\right\}\cos(2\pi f_0 x_\theta)$   (7)
where θ is the ridge orientation with respect to the
vertical axis, $f_0$ is the selected ridge frequency in the
$x_\theta$ direction, $\sigma_x$ and $\sigma_y$ are the standard deviations of
the Gaussian function along the $x_\theta$ and $y_\theta$ axes
respectively, and $[x_\theta, y_\theta]$ are the coordinates of [x, y]
after a clockwise rotation of the Cartesian axes by an
angle of (90 - θ).
Referring to (7), the function G(x, y, θ, $f_0$) can be
decomposed into two orthogonal parts, one parallel and
the other perpendicular to the orientation θ:
$G(x,y;\theta,f_0) = G_{BP}(x_\theta;\, f_0, \sigma_x)\; G_{LP}(y_\theta;\, \sigma_y)$   (8)
where $G_{BP}$ is a band-pass Gaussian function of the $x_\theta$
and $f_0$ parameters, while $G_{LP}$ is a low-pass Gaussian
filter of the $y_\theta$ parameter [4]. Since most local ridge
structures of fingerprints come with a well-defined local
frequency and orientation, $f_0$ can be set to the reciprocal
of the average inter-ridge distance K. The standard
deviation of the 2-D normal distribution (or Gaussian
envelope) is represented by σ.
BINARIZATION
The filtered output will be a gray-scale image. This is
binarized so that it is in black and white. The
binarization is done by carefully choosing the threshold
value: values of the image matrix over the threshold
become 1 (black) and values below the threshold become
0 (white).
For adaptive thresholding:
f(p) = 0, p < threshold
     = 1, p ≥ threshold   (9)
For ideal thresholding:
f(p) = 0, 0 ≤ p ≤ 127
     = 1, 128 ≤ p ≤ 255   (10)
THINNING
After the fingerprint image is converted into binary
form it is passed to the thinning algorithm, which
reduces the ridge thickness to one pixel. The referred
algorithm [10] consists of successive passes of two basic
steps applied to the contour points of the given region,
where a contour point is any pixel with value '1' having
at least one 8-neighbor valued '0'. With reference to the
8-neighborhood definition shown in Fig.1(a), the first
step flags a contour point P1 for deletion if the following
conditions are satisfied:
a. 2 ≤ N(P1) ≤ 6,
b. S(P1) = 1,
c. P2*P4*P6 = 0,
d. P4*P6*P8 = 0,
where N(P1) is the number of nonzero neighbors of P1:
N(P1) = P2 + P3 + ... + P9,
and S(P1) is the number of 0-1 transitions in the ordered
sequence P2, P3, ..., P8, P9. For example, N(P1) = 4 and
S(P1) = 3 in Fig.1(b).
(a) Definition   (b) Example
Fig.1 Neighborhood arrangement
In the second step, conditions (a) and (b) remain the
same, but conditions (c) and (d) are changed to:
c'. P2*P4*P8 = 0,
d'. P2*P6*P8 = 0.
Step 1 is applied to every border pixel in the binary
region under consideration. If one or more of conditions
(a) through (d) are violated, the value of the point in
question is not changed. If all conditions are satisfied,
the point is flagged for deletion. It is important to note
that the point is not deleted until all border points have
been processed. This prevents changing the structure of
the data during execution of the algorithm. After step 1
has been applied to all border points, those that were
flagged are deleted (changed to '0'). Then step 2 is
applied to the resulting data in exactly the same manner
as step 1.
This basic procedure is applied iteratively until no
further points are deleted, at which time the algorithm
terminates, yielding the skeleton of the region.
Condition (a) is violated when contour point P1 has only
one or seven 8-neighbors valued '1'. Having only one
such neighbor implies that P1 is the end point of a
skeleton stroke and obviously should not be deleted. If
P1 had seven such neighbors and it was deleted, this
would cause erosion into the region.
Condition (b) is violated when it is applied to points on a
stroke one pixel thick. Thus these conditions prevent
disconnection of segments of a skeleton during the
thinning operation. Conditions (c) and (d) are satisfied
simultaneously by the following minimum set of values:
p4 = '0', or p6 = '0', or (p2 = '0' and p8 = '0')
Thus, with reference to the neighborhood arrangement in
Fig.1, a point that satisfies these conditions, as well as
conditions (a) and (b), is an east or south boundary point
or a northwest corner point of the boundary. In either
case, P1 is not part of the skeleton and should be removed.
Similarly, conditions (c') and (d') are satisfied
simultaneously by the following minimum set of values:
p2 = '0', or p8 = '0', or ( p4 = '0' and p6 = '0')
These correspond to north or west boundary points, or a
southeast corner point. Note that northeast corner points
have p2 = '0' and p4 = '0' and thus satisfy conditions (c)
and (d), as well as (c') and (d'). This is also true for
southwest corner points, which have p6 = '0' and p8 =
'0'.
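Conditions (a)-(d) for one contour point can be written down directly. The sketch below covers step 1 of the thinning algorithm only, without the iteration loop; neighbours are passed as a list in the circular order P2..P9 of Fig.1(a):

```python
def flag_step1(n: list) -> bool:
    """n = [P2, P3, ..., P9], the 8 neighbours of P1 in circular order.
    Return True if step 1 of the thinning algorithm flags P1 for deletion."""
    count = sum(n)                       # N(P1): nonzero neighbours
    ring = n + n[:1]                     # close the circular sequence
    trans = sum(1 for a, b in zip(ring, ring[1:])
                if (a, b) == (0, 1))     # S(P1): 0-1 transitions
    p2, p3, p4, p5, p6, p7, p8, p9 = n
    return (2 <= count <= 6 and trans == 1
            and p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0)
```

Step 2 is identical except that the last two products become P2*P4*P8 and P2*P6*P8, per conditions (c') and (d').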
The database is generated using fingerprint images
acquired from the NFD HU 3.8 version optical scanner
(Hamster DX model HFDU06).
C. IMPLEMENTATION
MATLAB IMPLEMENTATION
The mean and variance of the input fingerprint image
are calculated before it is used for the normalization
process. The output image gray-scale values are adjusted
to a predefined threshold: the computed mean and
variance are compared with the standard values to
normalize the image.
The normalized fingerprint image is decomposed into
approximation and detail sub-images, and the orientation
of the approximation image is estimated. Finally, the
Gabor filter is used to enhance the fingerprint.
The gray-scale image is converted into a binary image
using adaptive thresholding. Each pixel value is
compared with the threshold: pixel values smaller than
the threshold are assigned zero, and those greater than
the threshold are assigned one.
Ridge thinning is used to remove the extra pixels of the
ridges until the ridges are just one pixel wide.
FPGA IMPLEMENTATION
The fingerprint image is converted into a text file and
stored in a ROM. The fingerprint image data is a
256 x 256 matrix in 256-level gray scale, so the width of
the data is 8 bits. These values are copied from the ROM
into the RAM for processing.
The mean of the image pixels is calculated, and the
difference between each image pixel value and the mean
of the block is obtained. These differences are squared,
accumulated, and stored as the variance in the RAM
unit. The mean and variance of the image are applied to
the processing block to get the normalized fingerprint
image.
The normalized image's gradients in both the x and y
directions are calculated and used to compute the
orientation angle of the image. The orientation process
provides the horizontal and vertical derivatives of the
normalized image.
The output of the orientation field estimation is stored in
memory. The control unit takes the data from the
memory location and sends it to the multiply-accumulate
(MAC) unit. In the MAC unit, a ROM stores the
coefficient values of the Gabor filter, which define the
ridge and valley regions of the fingerprint. The MAC
unit performs the convolution operation. The signal
convolved with the Gabor coefficients is transformed
into matrix format, which is the filtered image.
The filtered image is then binarized to obtain an image
with only two values. This reduces the complexity of
handling the gray-level image. The binarized output is
applied to the thinning block. The thinned image thus
obtained from the image enhancement process utilizes
less memory.
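The MAC operation amounts to sliding the stored Gabor coefficients over the image and multiply-accumulating at each position. A minimal software sketch of this (valid-region only, with a generic small kernel assumed in place of the actual Gabor coefficients):

```python
def conv2d_valid(img: list, kernel: list) -> list:
    """Valid-region 2-D sliding multiply-accumulate of kernel over img,
    mirroring what the MAC unit does with the stored filter coefficients."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]
```

In the hardware version this loop nest collapses into one MAC pipeline fed by the control unit from RAM, with the coefficients read from ROM.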
D. EXPERIMENTAL RESULTS
The outputs of each block are synchronized with the 100
MHz clock input. The simulation waveform shown in
Fig.2 depicts the values obtained for the input image
pixels and the various blocks of the image enhancement
technique. The outputs are in signed decimal values. The
input fingerprint image is stored in the ROM initially.
The outputs from the individual blocks are at a
high-impedance state until the image is copied to the
RAM block. At the rising edge of the clock, the image is
copied into RAM from ROM for processing.
Fig.2 Simulation waveform of Image enhancement
technique
The corresponding RTL schematic is generated after
synthesizing the image enhancement technique: Fig.3(a)
shows the top module and Fig.3(b) shows the discrete
schematic with the internal connections of the various
blocks of the image enhancement technique.
(a) Top module   (b) Discrete schematic
Fig.3 RTL schematic of Image enhancement technique
Results obtained from the simulation and
implementations are discussed in three different sections
considering different aspects: first the processing time,
followed by the hardware resources used, and finally the
performance.
(i) Processing time
The execution time of the image enhancement technique
is measured for 25 fingerprint images taken from the
database generated using the NFD scanner. It is found
that the FPGA implementation executes in nanoseconds
while the MatLab implementation takes microseconds.
(ii) Hardware resources
The estimated values of the resources consumed by the
image enhancement technique with the FPGA
implementation are shown in Table.1. The entries in
Table.1 show that less than 10% of the total available
logic is utilized in implementing the proposed image
enhancement technique.
Table.1 Hardware Resources
Device Utilization Summary (estimated values)
Logic Utilization                    Used   Available   Utilization (%)
Number of Slice Registers            3343   69120       4
Number of Slice LUTs                 5455   69120       7
Number of fully used LUT-FF pairs    1666   7132        23
Number of bonded IOBs                9      640         1
Number of BUFG/BUFGCTRLs             1      32          3
Number of DSP48Es                    3      64          4
(iii) Performance
The input fingerprint image and the thinned image
obtained with the MatLab simulation are depicted in
Fig.4. The corresponding images from the FPGA
implementation are shown in Fig.5.
(a) Input Image   (b) Thinned Image
Fig.4 Fingerprint Image (Matlab)
(a) Input Image   (b) Thinned Image
Fig.5 Fingerprint Image (FPGA)
From Fig.4(b) and Fig.5(b) it is clear that the resolution
of the thinned image obtained from the FPGA
implementation is better than that of the MatLab
simulation.
E. CONCLUSION AND FUTURE ENHANCEMENT
The proposed work is to design and implement the
image enhancement technique on an FPGA and compare
the results thus obtained with those of the MatLab
simulation. The FPGA implementation is carried out
using the Xilinx ISE Design tool 13.1 with Verilog
coding on a Virtex-5 development board. It is found that
the FPGA implementation has more benefits in terms of
less processing time, better clarity in the obtained
image, and minimal utilization of the available hardware
resources.
The future work related to the image enhancement
technique will be feature extraction. Finally, a system
will be built using the above techniques, consisting of
the image enhancement technique and feature extraction
followed by matching. This system can be validated for
commercial implementation and feasibility.
REFERENCES
[1] Raymond Thai, "Fingerprint image enhancement and minutiae extraction", a report on fingerprint image restoration.
[2] Byung-Gyu Kim, Han-Ju Kim and Dong-Jo Park, "New Enhancement Algorithm for Fingerprint Images", IEEE, 2002, pp 1051-4651.
[3] Chapa Martell Mario Alberto, "Fingerprint image enhancement algorithm implemented on an FPGA", University of Electro-Communications, Tokyo, Japan, August 1, 2009, pp 1-6.
[4] Lin Hong, Yifei Wan, and Anil Jain, "Fingerprint Image Enhancement: Algorithm and Performance Evaluation", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 8, August 1998, pp 777-789.
[5] Stephan Huckemann, Thomas Hotz, and Axel Munk, "Global models for the orientation field of fingerprints: An approach based on quadratic differentials", IEEE Transactions on Pattern Analysis and Machine Intelligence, September 2008, vol. 30, issue no. 9, pp 1507-1519.
[6] Ravi J. et al., "Fingerprint recognition using minutia score matching", International Journal of Engineering Science and Technology, vol. 1(2), 2009, pp 35-42.
[7] Mariano Fons, Francisco Fons, Enrique Canto, Mariano Lopez, "Flexible Hardware for Fingerprint Image Processing", 3rd Conference on Microelectronics and Electronics, July 2-5, 2007, pp 169-172.
[8] Mariano Fons, Francisco Fons, Enrique Canto, "Approaching Fingerprint Image Enhancement through Reconfigurable Hardware Accelerators", IEEE International Symposium on Intelligent Signal Processing, 2007, pp 1-6.
[9] M. Fons, F. Fons and E. Canto, "Fingerprint Image Processing Acceleration through Run-time Reconfigurable Hardware", IEEE Transactions on Circuits and Systems II: Express Briefs, December 2010, vol. 57, no. 12, pp 991-995.
[10] T. Zhang and C. Suen, "A fast parallel algorithm for thinning digital patterns", Communications of the ACM, vol. 27, pp. 236-239, Mar 1984.
An Exhaustive Study on the Authentication Techniques for Wireless
sensor networks
M.Lavanya, V.Natarajan
Department of Instrumentation engineering
MIT campus, Anna University, Chennai, Tamilnadu , India
Email: [email protected], [email protected]
Abstract — A wireless sensor network (WSN) consists of a
large number of sensor nodes with limitations in their
battery, processor and memory. Hence it is difficult to
incorporate security measures in such nodes. But since
WSNs are used in diverse applications and their
deployment areas are unmanned, security becomes a
basic requirement for preventing unauthorized access to
the nodes. This paper gives an intensive study of the
authentication techniques for WSNs.
I. INTRODUCTION
Wireless sensor networks are an emerging technology
with impending applications in environment control and
biodiversity mapping, machine surveillance, precision
agriculture, logistics, telematics, disaster detection,
medicine and health care, etc. A WSN is a network of
tiny, inexpensive autonomous nodes that can sense,
compute and communicate data in a wireless medium.
Because of its limitations a WSN has many security
vulnerabilities. There are seven security requirements:
availability, authorization, authentication, integrity,
freshness, confidentiality and non-repudiation. Besides
the natural loss of sensor nodes due to energy
constraints, a sensor network is also vulnerable to
malicious attacks in unattended and hostile
environments. In such a scenario, maintaining and
monitoring the sensor nodes and their communication
network becomes a major issue. We investigate various
types of threats and attacks against WSNs. This paper is
organized as follows: this section gives the protocol
stack of WSNs and their limitations, Section 2 gives the
possible attacks on a wireless sensor network and their
countermeasures, Section 3 describes the existing
authentication techniques, and Section 4 concludes the
paper.
A. Protocol stack of WSN
The communication model of a wireless sensor network contains five layers: the physical layer, link layer, network layer, transport layer and application layer. The physical layer is responsible for frequency selection, carrier frequency generation, signal detection, modulation and data encryption. The link layer handles multiplexing of data streams, data frame detection, medium access and error control; it ensures reliable point-to-point and point-to-multipoint connections in the communication network. The network and routing layer is usually designed according to the following principles: power efficiency is an important consideration; a sensor network is mostly data-centric; and an ideal sensor network has attribute-based addressing and location awareness. The transport layer is responsible for managing end-to-end connections.
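The layer responsibilities above can be summarized as a small lookup table. This is only an illustrative sketch built from the duties listed in the text (the application-layer entry is a placeholder, since the text does not enumerate its duties), not code from any real WSN stack.

```python
# Illustrative summary of the WSN protocol stack described above.
# Layer names and duties come from the text; the dict is just a
# convenient lookup structure, not an actual protocol implementation.
WSN_STACK = {
    "physical":    ["frequency selection", "carrier frequency generation",
                    "signal detection", "modulation", "data encryption"],
    "link":        ["multiplexing of data streams", "data frame detection",
                    "medium access", "error control"],
    "network":     ["power-efficient, data-centric routing",
                    "attribute-based addressing", "location awareness"],
    "transport":   ["managing end-to-end connections"],
    "application": ["application-level services"],  # placeholder entry
}

def layer_for(duty):
    """Return the layer responsible for a given duty, or None if unknown."""
    for layer, duties in WSN_STACK.items():
        if duty in duties:
            return layer
    return None

print(layer_for("error control"))  # link
```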
B. Limitations of WSN
A sensor network is subject to a unique set of resource constraints, such as finite on-board battery power, limited network communication bandwidth, limited processing capability and limited storage.
1) Energy limitations: The sensing, processing and communication activities of a sensor node all consume energy. The communication process consumes more energy than the processing and sensing modules; hence communication is more costly than computation. Cryptographic algorithms should therefore be designed with small key and message sizes so that the number of bits transmitted through the channel is low.
2) Computational limitations: The processors used in WSNs are low-end processors and thus cannot run very complex cryptographic algorithms.
3) Memory limitations: Memory in a sensor node usually includes flash memory and RAM. Flash memory is used for storing downloaded application code, and RAM is used for storing application programs, sensor data and intermediate computations. Therefore there is usually not enough space for running complicated algorithms.
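The energy argument above can be made concrete with a back-of-the-envelope sketch. All per-bit and per-instruction constants below are illustrative assumptions (chosen only so that radio energy dominates CPU energy, as the text states), not measurements from the paper.

```python
# Rough energy-budget sketch for the point that communication costs
# more than computation. All constants are illustrative assumptions.
TX_ENERGY_PER_BIT_UJ = 1.0       # microjoules to transmit one bit (assumed)
CPU_ENERGY_PER_INSTR_UJ = 0.001  # microjoules per instruction (assumed)

def message_cost_uj(payload_bits, mac_bits, instructions):
    """Total energy: radio cost of payload + MAC plus CPU cost of crypto."""
    radio = (payload_bits + mac_bits) * TX_ENERGY_PER_BIT_UJ
    cpu = instructions * CPU_ENERGY_PER_INSTR_UJ
    return radio + cpu

# Shrinking a 16-byte MAC to 4 bytes saves far more energy than the
# extra CPU instructions a leaner algorithm might need.
long_mac = message_cost_uj(payload_bits=256, mac_bits=128, instructions=1000)
short_mac = message_cost_uj(payload_bits=256, mac_bits=32, instructions=5000)
print(long_mac, short_mac)  # 385.0 293.0 -- the short-MAC variant is cheaper
```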
II. SECURITY THREATS AND COUNTER MEASURES
The layer-based security threats and their counter measures are given in Table 1, Table 2 and Table 3.
________________________________________________________________________________________________
International Conference on Engineering and Applied Science - ICEAS, 2014
ISBN: 978-3-643-24819-09, Bangalore, 13th July, 2014
54
An Exhaustive Study on the Authentication Techniques for Wireless sensor networks
________________________________________________________________________________________________
TABLE 1. PHYSICAL LAYER ATTACKS AND COUNTER MEASURES

  Attack        Counter measures
  Interference  Channel hopping and blacklisting
  Jamming       Channel hopping and blacklisting
  Sybil         Physical protection of devices
  Tampering     Protection and changing of keys

TABLE 2. DATA LINK LAYER ATTACKS AND COUNTER MEASURES

  Attack              Counter measures
  Collision           CRC and time diversity
  Exhaustion          Protection of the network ID and other information required to join a device
  Spoofing            Use a different path for resending the message
  Sybil               Regular changing of keys
  De-synchronization  Using different neighbors for time synchronization
  Traffic analysis    Sending dummy packets at regular intervals
  Eavesdropping       Session keys to protect against eavesdroppers
TABLE 3. NETWORK LAYER ATTACKS AND COUNTER MEASURES

  Attack                Counter measures
  DOS                   Protection of network-specific data
  Selective forwarding  Regular monitoring
  Sybil                 Changing session keys
  Traffic analysis      Regular monitoring
  Wormhole              Physical monitoring, regular monitoring; the monitoring system may use per-packet techniques

III. EXISTING AUTHENTICATION TECHNIQUES

There are a number of authentication techniques for WSNs. The network authenticates either the user who accesses the information at a node or a sensor node that requests the information. Generally, symmetric key or public key cryptography is used for authentication. Because of the serious constraints in WSNs, such as limited memory and limited energy, symmetric key cryptography is not a recommended technique; hence we go for public key cryptography. The following are some of the existing authentication techniques for WSNs.

A. User Authentication

1. Ismail et al. [1]
This scheme utilizes a certificate generated by the base station (BS) for user authentication. The disadvantage of this scheme is that it is vulnerable to DOS attacks, where the attacker can easily exhaust the energy of the node.

2. Le et al. [2]
Private information is distributed to a set of users by a trusted server. Later, each member of the group can compute a common secure group key using his private information and the identities of the other users in the group. The keys are secure against coalitions of up to k users.

3. K. H. M. Wong et al. [3]
The user can query sensor data at any of the sensor nodes in an ad-hoc manner. The computational load is very low since the scheme requires only simple operations. The drawbacks of this scheme are that it is vulnerable to replay and forgery attacks, passwords can be revealed by any node, and the user cannot change his or her password freely.

4. Vaidya et al. [4]
This scheme overcomes the replay attack in the login phase. The user supplies his ID along with a time stamp to the gateway node for authentication, thus avoiding replay. Mutual authentication between the user and the gateway node is provided.

5. Jiang et al.
This is a distributed user authentication scheme based on a self-certified keys cryptosystem (SCK). ECC is used in SCK to establish pairwise keys for authentication. The disadvantage of this scheme is that it is vulnerable to node capture attacks and also requires synchronization between nodes.
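The timestamp idea used by Vaidya et al. can be sketched generically: the user MACs his ID together with a fresh timestamp, and the gateway rejects stale or already-seen timestamps. This is a minimal illustration with an assumed pre-shared key and parameters, not the actual protocol messages of [4].

```python
import hmac, hashlib, time

SHARED_KEY = b"assumed-user-gateway-key"  # placeholder key material (assumed)
MAX_SKEW = 30            # seconds a login message stays valid (assumed)
seen_timestamps = set()  # gateway-side replay cache

def make_login(user_id: str, now: float) -> tuple:
    """User side: bind the ID to a fresh timestamp with an HMAC tag."""
    ts = str(int(now))
    tag = hmac.new(SHARED_KEY, f"{user_id}|{ts}".encode(), hashlib.sha256).hexdigest()
    return user_id, ts, tag

def gateway_verify(user_id: str, ts: str, tag: str, now: float) -> bool:
    """Gateway side: check the tag, freshness, and the replay cache."""
    expected = hmac.new(SHARED_KEY, f"{user_id}|{ts}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False                      # forged or corrupted message
    if abs(now - int(ts)) > MAX_SKEW:
        return False                      # stale timestamp
    if (user_id, ts) in seen_timestamps:
        return False                      # replayed message
    seen_timestamps.add((user_id, ts))
    return True

now = time.time()
msg = make_login("alice", now)
print(gateway_verify(*msg, now))  # True: fresh login accepted
print(gateway_verify(*msg, now))  # False: exact replay rejected
```

The replay cache here grows without bound; a real gateway would evict entries older than the skew window.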
B. Node Authentication

1. Jennifer L. Wong et al. [5]
In sensor networks, watermarking and other intellectual property protection techniques can be applied at a variety of levels. The design of the sensor nodes and the software used in the network can be protected using functional techniques. More complex watermarking protocols such as multiple watermarks, fragile watermarks, publicly detectable watermarks and software watermarks are also used for WSNs. Real-time watermarking aims to authenticate the data collected by the sensor network. The key idea behind watermarking is to impose additional constraints on the system during the sensing, data processing, or data acquisition phases. The first set of techniques embeds the signature into the process of sensing data, i.e. additional constraints are imposed on the parameters which define the sensor's relationship with the physical world; these parameters include location, orientation, frequency and phase of the intervals between consecutive data captures, resolution, and the intentional addition of obstacles.

2. Perrig et al. [6]
The security requirements are achieved by two building blocks in SPINS, called SNEP and µTESLA. SNEP is designed to provide data confidentiality, two-party data authentication, integrity and freshness. This protocol has low communication overhead. Like many cryptographic protocols it also uses a counter. SNEP achieves semantic security, a strong security property which prevents an eavesdropper from inferring the message content. It also provides authentication, replay protection and weak message freshness. Data confidentiality is achieved through randomization: a random bit string is appended before encryption using DES-CBC, and MACs are also used for data authentication and integrity.
µTESLA is used for authenticated broadcast. It addresses the problem faced by TESLA, which uses digital signatures for authentication that are very expensive for sensor networks. µTESLA operates in two different scenarios: 1. the base station broadcasts authenticated information to the nodes; 2. sensor nodes act as senders. The base station computes a MAC on an authenticated packet and sends it to the nodes; unaware of the key used for computing the MAC, the nodes store the packet in a buffer. When the key is disclosed, they can verify its correctness.

3. Karlof et al. [7]
TinySec also provides services such as authentication, integrity, confidentiality and replay protection. The difference between the above-mentioned scheme and TinySec is that there is no counter in TinySec. CBC mode is used for encryption and CBC-MAC for authentication. TinySec specifies two different packet formats: TinySec-Auth for authenticated messages and TinySec-AE for authenticated and encrypted messages. The security of CBC-MAC is directly related to the length of the MAC. TinySec uses a 4-byte MAC, so an adversary needs about 2^31 attempts for a blind forgery, i.e. approximately 2^31 packets must be sent to forge just one malicious packet. This is an adequate level of security for sensor networks.

4. Zhu et al. [8]
The proposed protocol is LEAP: Localized Encryption and Authentication Protocol. This is a key management protocol for sensor networks designed to support in-network processing. The protocol uses four different types of keys for sensor nodes: 1. an individual key shared with the base station (BS); 2. a pairwise key shared with other nodes; 3. a cluster key shared with multiple neighbors; 4. a group key shared by all network nodes. The overhead of this technique depends on the type of key used for implementation; not all four keys are always used.

5. Heo & Hong et al. [9]
A security manager issues the static domain parameters for a newly joining node. This technique uses elliptic curve cryptography (ECC). The security manager has the public keys of all nodes in the network. Based on the device power and the security policy, two levels of security are defined: high and medium. The overhead depends on the number of bits chosen for the elliptic curve system.

6. Abdullah et al. [10]
This scheme uses an identity-based signature (IBS) and an ECC-based digital signature algorithm (DSA). The base station is the private key generator: the BS sends its own public key to all nodes and generates the private key for each node. The nodes store their IDs privately along with the public system parameter P. The BS also generates private keys for the users. Authentication is done either by the base station or by a neighboring node. Before the authentication process, a node should register itself with the BS; after registration, the node sends an authentication request message signed with the signature generation algorithm of IBS, along with a time stamp to avoid replay attacks. The receiver checks for registration, uses the time stamp to verify freshness, and verifies the signature using the verification algorithm of IBS. After mutual authentication, a session key is established using a one-pass session key establishment technique. This protocol achieves security fundamentals such as mutual authentication, integrity, confidentiality, availability and session key agreement. The architecture of the proposed protocol consists of a network administrator, the BS, nodes and users.

7. Aydos et al. [11]
The advantages of this protocol are a low computational burden and lower communication bandwidth and storage requirements. It uses ECC, and there are two phases: 1. terminal and server initialization; 2. mutual authentication and key agreement. The drawback of this protocol is that it is vulnerable to man-in-the-middle attacks. The attacks can be categorized into two forms: attacks from a user within the system and attacks from any outside attacker. This protocol cannot provide entity authentication of servers to terminals. Public key certificate verification is not designed properly, i.e. there is no way to verify the association between the public key certificate and the public key.

8. Mangi et al. [12]
The authors proposed a user authentication protocol to overcome the difficulties of the previous protocol. There are two phases: the initialization phase, where user and server initializations are done separately, and the user authentication phase. This protocol is robust to man-in-the-middle attacks but vulnerable to a forged-certificate attack launched by an attacker after forging the certificate. Security analysis of this protocol identifies the following drawbacks: 1. the protocol does not provide entity authentication to the server; 2. no forward security is provided; 3. explicit key authentication is not provided.

9. Lin et al. [13]
This protocol takes the station-to-station protocol as its basic framework. To make the protocol more efficient, the Schnorr signature scheme is used for the DSA. The protocol provides entity authentication and definite key authentication to both communicating parties. The session key is calculated every time from random numbers generated by the two communicating parties. Security depends on the difficulty of calculating the ECC discrete logarithm. The protocol also maintains forward secrecy; since session keys are generated from random numbers, it provides resistance to key compromise impersonation and unknown key share, as well as key control and terminal anonymity.

10. Maha Sliti et al. [14]
This protocol uses ECC and threshold signatures. The proposed authentication technique is adapted to the WhoMoVes framework, which was introduced by the authors for military target tracking. The framework has the following characteristics: 1. energy-aware coverage control; 2. coverage-preserving mobility control; 3. high-quality data gathering. Fake warnings from the sensors are avoided by counting k valid alerts. The authentication framework has the following phases: node registration with a certificate authority (CA), intermediate signature verification and generation, and global message verification. An elliptic threshold signature algorithm is used for implementing k-security.

11. Qasim et al. [15]
A new layer called the security layer is added to the WSN architecture in this system. This addition protects the network from security threats. The BS cannot be accessed directly by the user; the user has to authenticate itself to the Kerberos server and then obtain a ticket to access the BS. This protocol improves the lifetime of the WSN.

12. Majid Bayat et al. [16]
The proposed protocol contains registration, login, authentication and key generation phases, and also supports mutual authentication. In the registration phase the sensor nodes and the users register themselves with the gateway node (GWN). The login phase is for user verification, where the login and password of the user are verified. The protocol satisfies security requirements such as mutual authentication, password protection, password changing/updating, identity protection, key agreement, and resilience to stolen smart card and replay attacks. The computation cost is comparable to the latest authentication protocol proposed by Xue et al., but disadvantages such as the dictionary attack and the stolen smart card attack are overcome in this protocol.

13. Nan et al.
Cross-layer design can share information among different protocol layers for adaptation purposes and increase the inter-layer interaction. The proposed security framework is an energy-efficient cross-layer framework for security (ECFS); since the nodes in a WSN are resource-constrained and the transmission range is limited, it is impossible to monitor the behavior of the entire network. The ECFS security framework is shown in Fig. 1.

Fig. 1. ECFS security framework

14. Liu et al. [17]
In the proposed work, network monitoring is done with energy efficiency taken into consideration. A subset of sensor nodes is selected as monitors in the network, and each sensor node is monitored by at least k nodes. A collective monitoring triggering scheme is proposed to improve the capability and reliability of the monitoring system. The SpyMon monitoring system was designed to achieve security, energy efficiency and reliability.

15. Dae et al.
The proposed work uses a public key cryptography (PKC) authentication scheme. Three protocols were proposed for public key authentication. The number of nodes that assist in performing the authentication voting is k, with an error range of e; each node requires (k-e)/2 faked keys to redirect the result of the authentication.
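The delayed key disclosure at the heart of µTESLA (item 2 above) can be sketched with a one-way hash chain: keys are used in reverse order of generation, so a key disclosed later can be checked against an earlier commitment by re-hashing. This is a toy illustration under assumed parameters (SHA-256, a placeholder seed, a 5-key chain), not the SPINS implementation.

```python
import hashlib, hmac

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# Sender: build a hash chain K_n -> ... -> K_0 and commit to K_0.
N = 5
chain = [b"assumed-chain-seed"]   # innermost key (placeholder seed)
for _ in range(N):
    chain.append(H(chain[-1]))
chain.reverse()                   # chain[0] is the commitment K_0
commitment = chain[0]             # distributed authentically beforehand

# Interval i: broadcast a packet MACed with K_i; disclose K_i one
# interval later. Receivers buffer the packet until the key arrives.
packet = b"sensor reading 42"
i = 3
mac = hmac.new(chain[i], packet, hashlib.sha256).digest()

def authentic(disclosed_key: bytes, i: int, commitment: bytes) -> bool:
    """Receiver check: hashing the disclosed key i times must reach K_0."""
    k = disclosed_key
    for _ in range(i):
        k = H(k)
    return k == commitment

ok = authentic(chain[i], i, commitment) and \
     hmac.compare_digest(mac, hmac.new(chain[i], packet, hashlib.sha256).digest())
print(ok)  # True
```

The real protocol also needs loose time synchronization so a receiver can tell that a packet arrived before its key could have been disclosed; that check is omitted here.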
16. Tien-Ho Chen et al. [18]
This protocol was proposed to overcome the difficulties in the Das protocol for authentication. The drawback of the Das protocol is that it uses a hash-based authentication protocol for WSNs which provides security against masquerade, stolen-verifier, replay and guessing attacks but does not provide mutual authentication. The proposed enhanced mutual authentication protocol has three phases: a registration phase, a login phase, and a verification and mutual authentication phase. The proposed protocol also resists parallel session attacks on WSNs.

17. Kalvinder et al. [19]
This protocol is used for distributing keys in WSNs. A modified PPK protocol uses elliptic curves instead of RSA. The most difficult part of this protocol is mapping A, B and KAB to a random point on the elliptic curve. The function f1 is used to generate a point on the elliptic curve, and the function f2 is used to generate a new key; the procedure involves calculating a hash of the key. This protocol has the advantages of a smaller number of communicated messages and fewer MAC calculations.

IV. CONCLUSION

Data is said to be transmitted securely if it maintains its secrecy. To achieve confidentiality, authentication of the participants is necessary. There are many techniques for authentication, some of which are discussed in this paper. Authentication is an effective method to repel replay and node tampering attacks. The compelling challenges for authentication techniques are how to increase the scalability of the network and the communication speed, and how to decrease the communication cost, in order to provide security in less time.

REFERENCES

[1]. Ismail Butun and Ravi Shankar, "Advanced Two Tier User Authentication Scheme for Heterogeneous WSN", 2nd IEEE CCNC Research Student Workshop, 2011.
[2]. X. H. Le, S. Lee, and Y. K. Lee, "Two-Tier User Authentication Scheme for Heterogeneous Sensor Networks", Proceedings of the 5th IEEE International Conference on Distributed Computing in Sensor Systems (DCOSS '09), Marina Del Rey, California, USA, June 8-10, 2009.
[3]. K. H. M. Wong, Y. Zheng, J. Cao, and S. Wang, "A Dynamic User Authentication Scheme for Wireless Sensor Networks", Proceedings of the IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing (SUTC '06), Taichung, Taiwan, 5-7 June 2006.
[4]. B. Vaidya, J. J. P. C. Rodrigues, and J. H. Park, "User authentication schemes with pseudonymity for ubiquitous sensor network in NGN", Int. J. Commun. Syst., Vol. 23, pp. 1201-1222, 2009.
[5]. Hui Kang and Jennifer L. Wong, "A Localized Multi-Hop Desynchronization Algorithm for Wireless Sensor Networks", INFOCOM 2009, pp. 2906-2910.
[6]. Adrian Perrig, Robert Szewczyk, J. D. Tygar, Victor Wen, and David E. Culler, "SPINS: Security Protocols for Sensor Networks", Wireless Networks, Vol. 8, pp. 521-534, September 2002.
[7]. C. Karlof, N. Sastry, and D. Wagner, "TinySec: A Security Architecture for Wireless Sensor Networks", Proceedings of the 2nd International Conference on Embedded Networked Sensors, Baltimore, MD, USA, 2004.
[8]. S. Zhu, S. Setia, and S. Jajodia, "LEAP: Efficient Security Mechanism for Large-Scale Distributed Sensor Networks", Proceedings of the 10th ACM Conference on Computer and Communication Security (CCS), Washington DC, USA, 2003.
[9]. J. Heo and C. S. Hong, "Efficient and Authenticated Key Agreement Mechanism in Low-Rate WPAN Environment", International Symposium on Wireless Pervasive Computing 2006, Phuket, Thailand, 16-18 January 2006, IEEE, pp. 1-5.
[10]. Abdullah Al-mahmud and Rumana Akhtar, "Identity-based Authentication and Access Control in Wireless Sensor Networks", International Journal of Computer Applications, 2012.
[11]. M. Aydos and B. Sunar, "An Elliptic Curve Cryptography based Authentication and Key Agreement Protocol for Wireless Communication", 2nd International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications, Dallas, Texas, October 30, 1998.
[12]. Kumar V. Mangipudi, Rajendra S. Katti, and Huirong Fu, "Authentication and Key Agreement Protocols Preserving Anonymity", I. J. Network Security, Vol. 3, No. 3, pp. 259-270, 2006.
[13]. C.-L. Lin, H.-M. Sun, and T. Hwang, "Three-party encrypted key exchange: attacks and a solution", SIGOPS Oper. Syst. Rev., Vol. 34, No. 4, pp. 12-20, 2000.
[14]. Maha Sliti, Mohamed Hamdi, and Noureddine Boudriga, "An elliptic threshold signature framework for k-security in wireless sensor networks", ICECS 2008, pp. 226-229.
[15]. Qasim Siddique, "Kerberos Authentication in Wireless Sensor Networks", Ann. Univ. Tibiscus Comp. Sci. Series, VIII/1 (2010), pp. 67-80.
[16]. Majid Bayat and Mohammad Reza Aref, "A secure and efficient elliptic curve based authentication and key agreement protocol suitable for WSN", IACR Cryptology ePrint Archive, 2013:374, 2013.
[17]. Liu Yongliang, Wen Gao, Hongxun Yao, and Xinghua Yu, "Elliptic Curve Cryptography Based Wireless Authentication Protocol", International Journal of Network Security, Vol. 5, No. 3, pp. 327-337, Nov. 2007.
[18]. Tien-Ho Chen and Wei-Kuan Shih, "A Robust Mutual Authentication Protocol for Wireless Sensor Networks", ETRI Journal, Vol. 32, No. 5, Oct. 2010, pp. 704-712.
[19]. Kalvinder Singh and V. Muthukkumarasamy, "A minimal protocol for authenticated key distribution in wireless sensor networks", Proceedings of the 4th International Conference on Intelligent Sensing and Information Processing (ICISIP '06), Bangalore, India, December 2006.
[20]. Muhammad Hammad Ahmed, Syed Wasi Alam, Nauman Qureshi, and Irum Baig, "Security for WSN based on Elliptic Curve Cryptography", IEEE, 2011.
[21]. Song Ju, "A Lightweight Key Establishment in Wireless Sensor Network Based on Elliptic Curve Cryptography", IEEE, 2012.
[22]. Ch. P. Antonopoulos, Ch. Petropoulos, K. Antonopoulos, V. Triantafyllou, and N. S. Voros, "The Effect of Symmetric Block Ciphers on WSN Performance and Behavior", Fifth International Workshop on Selected Topics in Mobile and Wireless Computing, IEEE, 2012.
[23]. Ravi Kishore Kodali, "Implementation of ECC with Hidden Generator Point in Wireless Sensor Networks", IEEE, 2014.
Support of Multi keyword Ranked Search by using Latent Semantic
Analysis over Encrypted Cloud Data
1Anoop M V, 2V Ravi
1Department of Computer Science, 2Professor, Department of Computer Science
Siddaganga Institute of Technology, Tumkur, Karnataka, India
Email: [email protected], [email protected]
Abstract— In recent years, both data owners and users have been motivated to deploy their data to public cloud servers for greater usage and lower cost in data management. For privacy reasons, sensitive data should be encrypted before being deployed, which makes traditional data utilization like keyword-based document retrieval difficult. Information search and document retrieval from a remote database (e.g. a cloud server) require submitting the search terms to the database holder. However, the search terms may contain sensitive information that must be kept secret from the database holder. Moreover, the privacy concerns apply to the relevant documents retrieved by the user in the later stage, since they may also contain sensitive data and reveal information about sensitive search terms. In this paper, we propose a semantic multi-keyword ranked search scheme over encrypted cloud data which simultaneously meets a set of strict privacy requirements. Firstly, we utilize Latent Semantic Analysis to reveal the relationship between terms and documents; the relationship between terms is captured automatically. Secondly, our scheme employs a secure k-nearest neighbour (k-NN) technique to achieve secure search functionality. The proposed scheme can return not only the exactly matching files but also files containing terms latent semantically associated with the query keyword. Finally, the experimental results demonstrate that our method is better than the original MRSE scheme.

Index Terms— Cloud Computing, Latent Semantic Analysis, Multi-Keyword Ranked Search
I. INTRODUCTION

Cloud computing is a system architecture model for Internet-based computing. It is the development and use of computer technology on the Internet. The cloud is a metaphor for the Internet, based on how the Internet is described in computer network diagrams, which means it is an abstraction hiding the complex infrastructure of the Internet. It is a style of computing in which IT-related capabilities are provided "as a service", allowing users to access technology-enabled services from the Internet ("in the cloud") without knowledge of, or control over, the technologies behind these servers [2]. The idea behind cloud computing is simple: the user can use storage, computing power, or specially crafted development environments without having to worry about how these work internally.

Due to the overwhelming merits of cloud computing, such as scalability, cost-effectiveness, and flexibility, more and more organizations are willing to outsource their data for storage in the cloud. The benefits of utilizing the cloud (lower operating costs, elasticity and so on) come with a trade-off: users have to entrust their data to a potentially untrustworthy cloud provider [1]. As a result, cloud security has become an important problem for both industry and academia. One important security problem is the potential privacy leakage that may occur when outsourcing data to the cloud.

Organizations use the cloud in a variety of different service models (SaaS, PaaS, and IaaS) and deployment models (private, public, and hybrid). There are a number of security issues and concerns associated with cloud computing, but these issues fall into two broad categories: security issues faced by cloud providers (organizations providing software-, platform-, or infrastructure-as-a-service via the cloud) and security issues faced by their customers. In most cases, the provider must ensure that its infrastructure is secure and that its clients' data and applications are protected, while the customer must ensure that the provider has taken the proper security measures to protect their information. Despite the tremendous business and technical advantages, privacy concern is one of the primary hurdles that prevent the widespread adoption of the cloud by potential users, especially if their sensitive data are to be outsourced to and computed in the cloud.

In recent years, both data owners and users have thus been motivated to deploy their data to public cloud servers for greater usage and lower cost in data management; for privacy reasons, sensitive data should be encrypted before being deployed, which makes traditional data utilization like keyword-based document retrieval difficult.
We aim to achieve an efficient system where any authorized user can perform a search on a remote database with multiple keywords, without revealing either the keywords he searches for or the contents of the documents he retrieves. Moreover, our proposed system is able to perform multiple keyword searches in a single query and ranks the results so the user can retrieve only the top matches. The contributions of this paper can be summarized as follows. We solve the problem of multi-keyword latent semantic ranked search over encrypted cloud data and retrieve the most relevant files. We define a new scheme, named Latent Semantic Analysis (LSA)-based multi-keyword ranked search, which supports multi-keyword latent semantic ranked search. By using LSA, the proposed scheme can return not only the exactly matching files but also files containing terms latent semantically associated with the query keyword.
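The LSA step described above can be sketched on a toy corpus: a term-document matrix is reduced to a low-rank "concept" space, so a query can score documents that share no literal keyword with it. The corpus, the rank-1 truncation, and the power-iteration SVD below are illustrative assumptions for a self-contained demo, not the paper's index construction (which additionally encrypts the vectors for the secure k-NN step).

```python
import math

# Tiny Latent Semantic Analysis sketch (illustrative corpus, assumed).
TERMS = ["cloud", "server", "encrypt"]
# Term-document matrix A: rows = terms, columns = documents d0..d2.
# "cloud" and "server" co-occur in d0, so LSA links the two terms.
A = [
    [1, 0, 0],   # "cloud"   appears in d0
    [1, 1, 0],   # "server"  appears in d0 and d1
    [0, 0, 1],   # "encrypt" appears in d2
]

def top_right_singular(A, iters=200):
    """Dominant right-singular vector of A, via power iteration on A^T A."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]   # A v
        w = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]   # A^T u
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def lsa_scores(query_term):
    """Rank-1 LSA relevance of each document to a one-word query."""
    q = [1.0 if t == query_term else 0.0 for t in TERMS]
    v = top_right_singular(A)
    u = [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]
    proj = sum(q[i] * u[i] for i in range(len(q)))   # query in concept space
    return [proj * vj for vj in v]

scores = lsa_scores("cloud")
# d1 contains only "server", so exact matching would score it 0,
# but LSA still ranks it above the unrelated d2.
print(scores[1] > scores[2])  # True
```

A real implementation would keep the top k singular directions rather than one, but the ranking effect shown here is the same mechanism the scheme relies on.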
Additionally, this work requires keyword fields in the
index. This means that the user must know a list of all
valid keywords and their positions as compulsory
information to generate a query. This assumption may
not be applicable in several cases. It is not efficient due
to matrix multiplication operations of square matrices
where the number of rows is in the order of several
thousands.
Wang et al. [7] propose a trapdoor less private keyword
search scheme, where their model requires a trusted
third party which they named as the Group Manager.
We adapt their indexing method to our scheme, but we
use a totally different encryption methodology to
increase the security and efficiency of the scheme.
III. PROPOSED SYETM
The problem that we consider is privacy-preserving
keyword search on the private database model, where
the documents are simply encrypted with the secret keys
unknown to the actual holder of the database (i.e. Cloud
Server). We consider three roles coherent with previous
works.
A.
System Model
The System Model consists of three different entities:
The Data Owner, The Cloud Server and The Data User.
The rest of this paper is organized as follows. In Section
2, we discuss the related previous works. Section 3 gives
about a proposed system in which there will be brief
description about system model, design goals, notations
and latent semantic analysis. Section 4 gives proposed
scheme whereas in Section 5 there will be performance
analysis, followed by conclusions and at last there is a
reference section.
II. RELATED WORK
The problem of Private Information Retrieval was first
introduced by Chor et al. [4]. Recently Groth et al. [5]
propose a multi-query PIR method with constant
communication rate. However, any PIR-based technique requires highly costly cryptographic operations in order to hide the access pattern. This is inefficient in a large-scale cloud system; as an alternative approach, privacy-preserving search is employed, which aims to hide the content of the retrieved data instead of which data is retrieved.
a) Data Owner: There can be any number of users (owners) in the cloud, and each owner can store any number of files. The owner first logs in using his username and password.
One of the methods closest to our solution is proposed by Cao et al. [6]. Similar to our approach presented here, it proposes a method that allows multi-keyword ranked search over an encrypted database. In this method, the data owner needs to distribute a symmetric key, which is used in trapdoor generation, to all authorized users.
o Upload: For secure data sharing, the owner encrypts the file before uploading it to the cloud, since encryption is the standard method for making a communication private. Before doing so, the owner defines an access structure for the file. After defining the access structure, the owner encrypts the file using the RSA algorithm,
Fig. 1: Architecture of MRSE
________________________________________________________________________________________________
International Conference on Engineering and Applied Science - ICEAS, 2014
ISBN: 978-3-643-24819-09, Bangalore, 13th July, 2014
61
Support of Multi keyword Ranked Search by using Latent Semantic Analysis over Encrypted Cloud Data
________________________________________________________________________________________________
that is named after the initials of its inventors: R for Rivest, S for Shamir, and A for Adleman. It is the most popular and secure public-key encryption method, where the file is encrypted using the public key and may only be decrypted with the corresponding private key. The encrypted file is then uploaded to the server, where it is stored in that particular owner's folder. The encryption time for each file is calculated in milliseconds as the difference between the encryption start and end times.
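The public-key/private-key flow described above can be sketched with textbook-sized toy parameters. The primes and exponent below are illustrative only and far too small for real security; the paper does not state its actual key sizes.

```python
# Toy RSA sketch (textbook parameters, NOT secure -- real systems use
# 2048-bit keys and padding such as OAEP).  Illustrates encrypting with
# the public key and decrypting with the corresponding private key.

p, q = 61, 53                 # small primes (for illustration only)
n = p * q                     # modulus, part of both keys
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: d*e = 1 (mod phi)

def encrypt(m: int) -> int:
    """Encrypt integer m < n with the public key (e, n)."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Decrypt ciphertext c with the private key (d, n)."""
    return pow(c, d, n)

cipher = encrypt(65)
assert decrypt(cipher) == 65   # round trip recovers the plaintext
```

In practice a library such as PyCryptodome would be used rather than raw modular exponentiation; the sketch only shows why the public key cannot decrypt what it encrypted.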
The user then gets the list of files to which he has access from the server, i.e. only those files whose access structures are satisfied by that particular user's attributes are listed.
o User Registration: The owner has the right to provide access to the files he owns. To do so, he first registers users by assigning each a username, a password, an eligible time period and attributes, and stores these in a database. Along with this, a random key is generated and stored. The eligible time period indicates that the user can access the file only within that time. If the owner tries to assign a username that already exists, an error is displayed; in other words, usernames must be unique. A single user can, however, be registered by any number of owners.
o Access Control List: The owner creates an access control list consisting of all the authorized users' details required by the server for further activities, and uploads the list to the cloud.
b) Cloud Service Provider: Whenever it receives a download request from a user, it re-encrypts that particular file. The re-encryption is keyword and information-retrieval based, therefore the AES algorithm is used. It is an efficient algorithm, as keys can be generated based on our inputs.
o Upload: The upload option is active only for users who have write permission. After making modifications to the downloaded file, such a user has to encrypt it before uploading it back to the server, using the public key issued by the owner. The encrypted file is then uploaded to the server.
B. Design Goals
The cloud server is assumed to follow the designated protocol specification, but at the same time it analyzes the data in its storage and the message flows received during the protocol so as to learn additional information.
The design goals of our system are as follows:
1. Latent Semantic Search: We use statistical techniques to estimate the latent semantic structure and get rid of obscuring “noise” [8].
2. Multi-keyword Ranked Search: The scheme supports both multi-keyword queries and result ranking.
3. Privacy-Preserving: Our scheme is designed to meet the privacy requirements and prevent the cloud server from learning additional information from the index and trapdoors.
The privacy requirements are as follows:
o Search Manager: Receives the keyword matrix for searching, searches the documents and sends the ranked document list to the user.
o Index Confidentiality: The trapdoor values of keywords are stored in the index. Thus, the index stored in the cloud server needs to be encrypted.
o Trapdoor Unlinkability: The cloud server should not be able to deduce relationships between trapdoors.
o Keyword Privacy: The cloud server should not be able to discern the keywords in the query or the index by analyzing statistical information such as term frequency.
o Storage Manager: Receives the encrypted data from the data owner, indexes the documents and makes them available for searching.
c) Client:
C. Notations and Preliminaries
o Login: Users who have access can log in using the username and password given by the owner after registration. If a wrong username or password is entered, an error message is displayed and the login fails. If successful, the user is asked to enter the random key for the next level of authentication. The key entered is checked against the key assigned to the user by the owner, using the username submitted at the first level. If there is a mismatch, a wrong-key message is displayed; otherwise the user is redirected to a new window where he can download/upload data.
• D -- the plaintext document collection, denoted as a set of m documents D = {d1, d2, ..., dm}.
• C -- the encrypted document collection stored in the cloud server, denoted C = {c1, c2, ..., cm}.
• W -- the dictionary, i.e. the keyword set composed of n keywords, denoted W = {w1, w2, ..., wn}.
• I -- the associated searchable index, denoted I = (I1, I2, ..., In).
o Download: To access the data, the user needs to decrypt it twice.
• Q -- the query vector indicating the keywords of interest.
Each bit Q[j] ∈ {0, 1} of Q represents the existence of the corresponding keyword in the query.
D. Latent Semantic Analysis
Latent semantic analysis (LSA) is a technique in natural language processing, in particular in vectorial semantics, for analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text. A matrix containing word counts per paragraph (rows represent unique words and columns represent each paragraph) is constructed from a large piece of text, and a mathematical technique called singular value decomposition (SVD) is used to reduce the number of columns while preserving the similarity structure among rows. Words are then compared by taking the cosine of the angle between the two vectors formed by any two rows. Values close to 1 represent very similar words, while values close to 0 represent very dissimilar words.
In information retrieval, latent semantic analysis is a solution for discovering the latent semantic relationship. It adopts singular value decomposition (SVD) to find the semantic structure between terms and documents. In this paper, the term-document matrix consists of columns, each of which represents the data vector for one file:

A' = (A'[1], A'[2], ..., A'[j], ..., A'[m])    (1)

as depicted in Eq. 1. We then take this large term-document matrix and decompose it into a set of orthogonal factors from which the original matrix can be approximated by linear combination. That is, the term-document matrix A' can be decomposed into the product of three other matrices:

A' = U' · S' · V'^T    (2)

such that U' and V' have orthonormal columns and S' is diagonal. We keep the first k columns of S' (the largest singular values) and delete the corresponding columns of U' and V' respectively. The result is a reduced model:

A = U'_k · S'_k · V'_k^T ≈ A'    (3)
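As a concrete sketch of Eqs. (1)-(3), the following builds a small illustrative term-document matrix (not the paper's dataset), truncates its SVD to rank k, and compares two terms by the cosine of the angle between their row vectors:

```python
import numpy as np

# Toy LSA: decompose a term-document matrix with SVD, keep the first k
# singular values, and compare terms by cosine similarity.  The matrix
# and vocabulary below are invented for illustration.

A = np.array([
    # d1 d2 d3 d4      (term counts per document)
    [2., 1., 0., 0.],  # "cloud"
    [1., 2., 0., 0.],  # "server"  (co-occurs with "cloud")
    [0., 0., 3., 1.],  # "parking" (occurs in different documents)
])

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U.S.V^T
k = 2                                              # reduced rank
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]        # best rank-k approximation

def cos_sim(u, v):
    """Cosine of the angle between two term vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Terms occurring in similar documents end up with similarity near 1,
# dissimilar terms near 0, as described in the text.
print(cos_sim(A_k[0], A_k[1]))   # "cloud" vs "server": high
print(cos_sim(A_k[0], A_k[2]))   # "cloud" vs "parking": low
```

Dropping the smallest singular value merges the nearly synonymous "cloud"/"server" rows, which is exactly the noise-removal effect LSA relies on.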
Secure k-NN: In order to compute the inner product in a privacy-preserving manner, we adapt the secure k-nearest neighbor scheme. This splitting technique is secure against the known-plaintext attack, which is roughly equal in security to a d-bit symmetric key [8].
IV. PROPOSED SCHEME
The data owner builds a term-document matrix A'. We reduce the dimensions of the original matrix to get a new matrix which is the best "reduced-dimension" approximation to the original term-document matrix. Specifically, A[j] denotes the j-th column of the matrix A.
• Setup: The data owner generates an (n+2)-bit vector X and two (n+2) × (n+2) invertible matrices {M1, M2}. The secret key SK is the 3-tuple {X, M1, M2}.
• BuildIndex(A', SK): The data owner extracts the term-document matrix A' and multiplies the three factor matrices of the SVD to get the reduced result matrix. Taking privacy into consideration, the matrix must be encrypted before outsourcing. After applying dimension extension, the original A[j] is extended to (n+2) dimensions instead of n, and the subindex Ij = {M1^T · A'[j], M2^T · A''[j]} is built.
• Trapdoor(W~): With the t keywords of interest in W~ as input, a binary vector Q~ is generated. The trapdoor T_W~ is generated as {M1^-1 · Q~, M2^-1 · Q~}.
• Query(T_W~, l, I): The inner product of Ij and T_W~ is calculated by the cloud server. After sorting all scores, the cloud server returns the top-l ranked id list to the data user.
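A minimal sketch of how the secure k-NN splitting lets the server compute the inner product blindly is shown below. The dimension, keys and vectors are toy values and the variable names are ours; the (n+2)-dimension extension from the scheme is omitted for brevity.

```python
import numpy as np

# Secure k-NN sketch: split each vector with a secret bit vector X, then
# encrypt the index halves with M1^T/M2^T and the query halves with
# M1^-1/M2^-1.  The server's score equals the plaintext inner product.

rng = np.random.default_rng(0)
n = 5
X  = rng.integers(0, 2, size=n)                    # secret splitting bits
M1 = rng.standard_normal((n, n)) + np.eye(n) * n   # invertible key matrices
M2 = rng.standard_normal((n, n)) + np.eye(n) * n

def split(vec, is_index):
    """Split vec into (v1, v2) with v1 + v2 = vec on the randomised bits."""
    v1, v2 = vec.astype(float), vec.astype(float).copy()
    for j in range(len(vec)):
        randomise = (X[j] == 1) if is_index else (X[j] == 0)
        if randomise:
            r = rng.standard_normal()
            v1[j], v2[j] = r, vec[j] - r
    return v1, v2

def build_index(p):                       # data vector -> encrypted subindex
    p1, p2 = split(p, is_index=True)
    return M1.T @ p1, M2.T @ p2

def trapdoor(q):                          # query vector -> trapdoor
    q1, q2 = split(q, is_index=False)
    return np.linalg.inv(M1) @ q1, np.linalg.inv(M2) @ q2

def score(index, trap):
    # (M1^T p1).(M1^-1 q1) + (M2^T p2).(M2^-1 q2) = p1.q1 + p2.q2 = p.q
    return index[0] @ trap[0] + index[1] @ trap[1]

p = np.array([1, 0, 1, 1, 0])             # keyword-presence vector of a file
q = np.array([1, 0, 1, 0, 0])             # query vector
print(score(build_index(p), trapdoor(q))) # equals p.q = 2, up to rounding
```

The server only ever sees the transformed pairs, yet sorting by this score ranks documents exactly as the plaintext inner products would.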
V. PERFORMANCE ANALYSIS
Since analyzing a document to find the keywords in it is out of the scope of this work, a synthetic database is created by assigning random keywords with random term frequencies to each document.
The F-measure, which combines precision and recall, is the harmonic mean of the two [9]. Here, we adopt the F-measure to weigh the results of our experiments. For a clear comparison, our proposed scheme attains a higher F-measure score than the original MRSE. Since the original scheme employs exact matching, it must miss some words that are similar to the query keywords. Our scheme makes up for this disadvantage and retrieves the most relevant files.
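As a worked illustration of the metric (with made-up file sets, not the paper's experimental results):

```python
# F-measure: harmonic mean of precision and recall over a retrieval result.

def f_measure(retrieved: set, relevant: set) -> float:
    true_pos = len(retrieved & relevant)     # relevant files actually returned
    precision = true_pos / len(retrieved)    # fraction of returned that matter
    recall = true_pos / len(relevant)        # fraction of relevant found
    return 2 * precision * recall / (precision + recall)

retrieved = {"f1", "f2", "f3", "f4"}   # files returned by a scheme
relevant  = {"f1", "f2", "f5"}         # files actually relevant
print(round(f_measure(retrieved, relevant), 3))   # precision 0.5, recall 2/3 -> 0.571
```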
VI. CONCLUSION
In this paper, support for latent semantic search alongside MRSE is proposed. We use vectors consisting of TF values as indexes to the documents. These vectors constitute a matrix, from which we analyze the
latent semantic association between terms and documents by LSA. Taking security and privacy into consideration, we employ a secure splitting k-NN technique to encrypt the index and the query vector, so that we can obtain accurate ranked results while protecting the confidentiality of the data.
VII. REFERENCES
[1] K. Ren, C. Wang and Q. Wang, "Security challenges for the public cloud," IEEE Internet Computing, vol. 16, no. 1, 2012, pp. 69-73.
[2] M. Armbrust et al., "A view of cloud computing," Communications of the ACM, vol. 53, no. 4, 2010, pp. 50-58.
[3] P. Mell and T. Grance, "The NIST definition of cloud computing (draft)," NIST Special Publication, 2011.
[4] S. C. Deerwester et al., "Indexing by latent semantic analysis," JASIS, vol. 41, no. 6, 1990, pp. 391-407.
[5] B. Chor, E. Kushilevitz, O. Goldreich, and M. Sudan, "Private information retrieval," J. ACM, vol. 45, pp. 965-981, November 1998.
[6] J. Groth, A. Kiayias, and H. Lipmaa, "Multi-query computationally-private information retrieval with constant communication rate," in PKC, 2010, pp. 107-123.
[7] N. Cao, C. Wang, M. Li, K. Ren, and W. Lou, "Privacy-preserving multi-keyword ranked search over encrypted cloud data," in IEEE INFOCOM, 2011.
[8] P. Wang, H. Wang, and J. Pieprzyk, "An efficient scheme of common secure indices for conjunctive keyword-based retrieval of encrypted data," in Information Security Applications.
[9] W. K. Wong et al., "Secure kNN computation on encrypted databases," in Proceedings of the 2009 ACM SIGMOD International Conference on Management of Data, ACM, 2009.
[10] D. M. Powers, "The problem with kappa," in Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, 2012.
Car Parking Management System
Chandra Prabha R.1, Vidya Devi M.2, Sharook Sayeed3, Sudarshan Garg4, Sunil K. R.5, Sushanth H. J.6
Department of Electronics and Communication, BMS Institute of Technology, Bangalore
Email: [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
ABSTRACT: With increasing traffic, systems that monitor parking spaces efficiently are necessary. Until now, sensors were used to keep a tab on traffic: they help in finding out which car slots are filled up. However, such sensors are not reliable when it comes to differentiating between cars and other objects. We have come up with an alternative method for monitoring parking spaces, using OpenCV libraries and Python code on a Raspberry Pi. With the help of image processing, we find the centroid of each car; the location of the centroid gives the exact location of the cars occupying the parking spaces, and can be found in real time. The number of cars present in the parking lot can also be found using the concept of contours, by counting the contours in the image. A threshold is set in both parts to differentiate between cars and other objects, and all of this happens over real-time coverage of the parking lot. With this system, the cost of monitoring is reduced considerably, as it uses resources that are cheap and available everywhere, and no human intervention is needed once the system is put in place.
Keywords: Raspberry Pi, Canny, OpenCV, Python, contours, centroid, car parking system
I. INTRODUCTION:
In this era of an increasing need to travel, the number of cars also increases, which results in an increase in the space required for parking them. These parking lots must be managed very efficiently using limited resources. The objective of this paper is to obtain information about the parking lot at any place and to provide that information to incoming vehicles. There are two parts to our car parking management system: the first tells the number of cars present in the parking lot, and the second tells the exact location of each car in the lot.
To find the number of cars, an image of the empty parking lot is taken, and then a series of images is captured, one at every instant. These two images, i.e. the image at that instant and the empty-parking-lot image, are subtracted; the subtracted image shows the cars. Using the concept of contours, the number of cars can be obtained. The contour area is compared against a threshold value to differentiate between cars and other objects. A counter then counts the number of cars present in the lot, and this is displayed on the seven-segment display.
In the second part, to find the exact location of a car, the images are again subtracted, i.e. the image of the parking layout at that instant and the image of the empty parking lot. This image gives the location of the car. The concepts of contours and movement detection are applied, and the contour threshold is set to differentiate between cars and other objects such as bikes, humans, etc. The centroid of the image is then found; by locating the centroid in real time, the exact location of the car can be found.
The main idea of this method is that it is an alternative way of implementing parking management using available resources that are inexpensive and easily obtainable everywhere.
II. HARDWARE DESCRIPTION:
The model uses the Raspberry Pi Model B, which has 512 MB RAM, 2 USB ports and an Ethernet port. It has a Broadcom BCM2835 system-on-a-chip, which includes an ARM1176JZF-S 700 MHz processor and a VideoCore IV GPU, and boots from an SD card. It has a fast 3D core accessed using the supplied OpenGL ES 2.0, OpenCV and OpenVG libraries. The chip specifically provides HDMI, and there is also VGA support. The Foundation provides Debian and Arch Linux ARM distributions, with Python as the main programming language.
We capture the image using a USB camera. This image is processed by the Raspberry Pi, and the available parking space is displayed on the screen. We use image subtraction to find the number of cars present, by subtracting the original image from the updated image.
Figure 1: Raspberry Pi
This shows which parking space is occupied. Some of the concepts used in the methodology are explained below.
CANNY EDGE DETECTION
The Canny edge detector is an edge detection [1] operator that uses a multi-stage algorithm to detect a wide range of edges in images.
The algorithm consists of five separate steps:
1. Smoothing: blurring of the image to remove noise.
2. Finding gradients: edges are marked where the gradients of the image have large magnitudes.
3. Non-maximum suppression: only local maxima are marked as edges.
4. Double thresholding: potential edges are determined by thresholding.
5. Edge tracking by hysteresis: final edges are determined by suppressing all edges that are not connected to a very certain (strong) edge.
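The first steps of this pipeline can be sketched in plain NumPy as below. This is a simplified illustration only: non-maximum suppression and hysteresis are omitted, the kernels and thresholds are standard textbook choices rather than values from this project, and a real system would simply call OpenCV's `cv2.Canny`.

```python
import numpy as np

# Simplified Canny sketch: smoothing (step 1), gradients (step 2) and
# double thresholding (step 4) on a toy image with one vertical edge.

def conv2d(img, k):
    """Valid 2-D cross-correlation of image img with kernel k."""
    kh, kw = k.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0   # smoothing kernel
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])     # gradient kernels
sobel_y = sobel_x.T

img = np.zeros((10, 10))
img[:, 5:] = 255.0                              # toy image: vertical edge

smooth = conv2d(img, gauss)                     # step 1: remove noise
gx, gy = conv2d(smooth, sobel_x), conv2d(smooth, sobel_y)
mag = np.hypot(gx, gy)                          # step 2: gradient magnitude

low, high = 50.0, 200.0                         # step 4: double thresholding
strong = mag >= high                            # certain (strong) edges
weak = (mag >= low) & (mag < high)              # kept only if linked to strong
print(strong.any())                             # the vertical edge is detected
```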
Figure 2: Block Diagram
III. SOFTWARE DESCRIPTION:
The Raspberry Pi can run many operating systems, such as NOOBS (New Out Of Box Software), Raspbian (Debian for Raspberry Pi), RISC OS, Arch Linux, OpenELEC, etc. In our study we have used Raspbian. We also use OpenCV (Open Source Computer Vision), a library of programming functions aimed at real-time computer vision. The Raspberry Pi supports many languages, such as C++, Python, Ruby, C#, etc. We have used Python because it is widely used and allows us to express ideas in fewer lines of code. Python supports object-oriented, functional and procedural programming styles, and provides automatic memory management.
IV. METHODOLOGY
The image of the empty parking lot is taken. The web
cam keeps taking images every instant and keeps
subtracting from the previous image. The result of this
subtraction will be the change in movement. These
changes are applied to the concept of contours. The
contours limit is set to certain pixel area. If the contour
in the subtracted image exceeds the threshold, the
counter is given plus one. If the contour size doesn’t
exceed the limit the counter value remains unchanged.
This gives the number of vehicles in the parking lot.
An image of the empty parking lot is taken. Then, at every instant, images of the parking lot are captured; the image at any instant could again be an empty parking lot, or it could be the parking lot with a few vehicles. Both images are converted from colour to grey scale, and the absolute difference of the two is taken using image subtraction. Image subtraction, or pixel subtraction, is a process whereby the digital numeric value of one pixel or of a whole image is subtracted from another image. This is primarily done for one of two reasons: levelling uneven sections of an image, such as half an image having a shadow on it, or detecting changes between two images.
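The subtraction-and-count pipeline described above can be sketched as follows. This is a NumPy stand-in for the OpenCV contour counting (a real system would use `cv2.absdiff` and `cv2.findContours`); the function name, thresholds and toy images are ours.

```python
import numpy as np

# Subtract the empty-lot image from the current frame, threshold the
# difference, and count connected blobs whose area exceeds a limit
# (the area limit filters out small objects such as bikes or people).

def count_vehicles(background, frame, diff_thresh=50, min_area=4):
    diff = np.abs(frame.astype(int) - background.astype(int))  # image subtraction
    mask = diff > diff_thresh                                  # changed pixels
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and not seen[i, j]:
                # flood-fill one connected blob and measure its area
                stack, area = [(i, j)], 0
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if area >= min_area:          # threshold: car vs other object
                    count += 1
    return count

background = np.zeros((12, 12), dtype=np.uint8)   # empty lot (grey scale)
frame = background.copy()
frame[1:4, 1:4] = 200                             # "car" 1
frame[7:11, 6:10] = 180                           # "car" 2
frame[5, 11] = 160                                # small object, ignored
print(count_vehicles(background, frame))          # -> 2
```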
Then, using the concept of contours, the centroid of each vehicle present in the parking lot is found. If the centroid for a vehicle is detected, it indicates that that particular spot is not free.
Edge detection enhances the accuracy of contours in the image, so edge detection is done before applying contours.
CONTOURS
Contours can be explained simply as a curve joining all the continuous points (along a boundary) having the same colour or intensity. Contours are a useful tool for shape analysis and for object detection and recognition. The points are selected such that contours can be drawn as straight lines joining them: if the object is a horizontal or vertical line, only the end points are stored; if the object is a rectangle, only its 4 vertices are stored. Figure 3 shows the contours of a rectangle.
Figure 3: Contours of a rectangle
Contours are represented in OpenCV by sequences in
which every entry in the sequence encodes information
about the location of the next point on the curve.
CENTROID:
To find the centroid of a contour, image moments are used. These help in calculating features like the centre of mass of the object, the area of the object, etc. A moment is a particular weighted average of the image pixels' intensities, or a function
of such moments, usually chosen to have some attractive property or interpretation. Contour approximation removes small curves, thereby approximating the contour closer to a straight line. The centroid of a random contour is shown in figure 4.
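A minimal sketch of the moment computation on a toy binary mask is shown below; `cv2.moments` would return the same m00, m10 and m01 for a real contour, so the mask and blob here are purely illustrative.

```python
import numpy as np

# Centroid from image moments: m00 is the blob's area (zeroth moment),
# and the centroid is (m10/m00, m01/m00).

def centroid(mask):
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                 # zeroth moment: area (pixel count)
    m10 = xs.sum()                # first moment about x
    m01 = ys.sum()                # first moment about y
    return float(m10 / m00), float(m01 / m00)

mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 3:7] = 1                # rectangular "car" blob
cx, cy = centroid(mask)
print(cx, cy)                     # -> 4.5 3.0, the centre of the rectangle
```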
Results of indicating the location of occupied spaces in the parking lot:
The image of the empty parking lot is as shown in figure 7. The dialog box named Live gives the real-time coverage of the parking lot, and the dialog named Background shows the empty parking lot. The image is divided into 4 quadrants. When a car occupies any one of the quadrants (parking spaces), the centroid of the car (contour) is found and displayed. When the car occupies the 3rd quadrant, it is displayed as "3rd NOT FREE", as in figure 8.
Figure 4: Centroid of a random contour
RESULTS:
Results of finding the number of cars in the parking lot:
Figure 5 shows the basic output layout of the system. The output is shown on an LCD display; as shown in the figure, the brighter dialog box indicates the empty parking lot, while the dialog box named Difference shows the real-time output. Once a car comes and parks at a location, the system counts the number of contours and displays the output on the seven-segment display, as shown in figure 6. Therefore, as the number of cars increases or decreases, the count on the seven-segment display changes accordingly.
Figure 7: Image of empty parking lot
Figure 8: Busy Location
CONCLUSION:
Figure 5: Basic output layout
Figure 6: Display of location on Seven Segment
In this modern scenario of increasing numbers of vehicles on the road, an efficient system is required to keep a tab on them. The places where vehicles are parked have to be managed as efficiently as possible, so that car parking does not become an issue for traffic flow in the streets due to limited space. Car parking using the Raspberry Pi is one such project: it indicates the number of vehicles present in the parking lot and also the exact position of each vehicle. This project has been completed using minimal resources at minimal cost.
The big advantage of this car parking management system is that it does not require any sensors at all. It just requires a display to indicate the exact position of the vehicles and a seven-segment display to indicate the
number of vehicles present in the parking lot. Moreover, this method can easily be implemented in a short period of time and does not require periodic maintenance.
REFERENCES:
[1] Jignesh K. Tank and Vandana Patel, "Edge Detection Using Different Algorithms in Raspberry Pi."
[2] Vanessa W. S. Tang, Yuan Zheng, and Jiannong Cao, "An intelligent car parking management system based on wireless sensor networks," Pervasive Computing and Applications, 2006.
[3] Michael Maire et al., "Using contours to detect and localize junctions in natural images," Computer Vision and Pattern Recognition (CVPR), IEEE, 2008.
[4] Jung-Ho Moon and Tae Kwon Ha, "A Car Parking Monitoring System Using Wireless Sensor Networks."
FUTURE SCOPE:
While working on the development of the car parking management system using the Raspberry Pi, we found that with little modification several new features could be added:
• A better-resolution camera can be used to obtain better edges in the images.
• A Haar classifier can be built to recognize cars more accurately; the type of car could also be recognized.
• The system can be integrated with a smartphone app, so that the owner is updated with the car's location every minute.
Spirometry air flow measurement using PVDF Film
Manisha R. Mhetre, H. K. Abhyankar
Department of Instrumentation Engineering,
Vishwakarma Institute of Technology, Pune - 411037, Maharashtra, India
Email: [email protected]
Abstract—Nowadays, due to air pollution (airborne pollutants), respiratory disorders such as asthma, chronic obstructive pulmonary disease (COPD) and lung cancer are increasing. Due to lack of awareness and the absence of routine check-up facilities in small clinics, the diseased condition often comes to light only when it has become risky. There is a need for a cost-effective and simple measuring device to be available for routine check-ups of the respiratory system. Among the different sensors used for exhaled-air flow measurement, PVDF (polyvinylidene fluoride) film is used for this experimentation, with the advantage of generating a voltage without a supply and with good accuracy. Experimentation was carried out to investigate the sensitivity and range of the voltage generated from exhalation using the piezoelectric sensor, through pipes of different diameters and with the film at different distances from the mouth. The PVDF film was also tested for its pyroelectric effect, the effect of CO2 change and air volume measurement, as the temperature and carbon dioxide level of an exhaled human breath are higher than atmospheric levels. A prototype was developed and tested for detection of the air flow rate and volume of different subjects. The results of these experiments are presented in this paper.
Keywords—Peak Expiratory Flow (PEF), asthma, piezoelectric sensor, PVDF film, exhalation flow measurement
I. INTRODUCTION
The human lung system is the purification centre of the body, where deoxygenated blood rich in CO2 from the cardiovascular system is purified in tiny air sacs called alveoli, the functional units of the bean-shaped lungs, which are abundant in number. The actual diffusion of oxygen and carbon dioxide is driven by the partial-pressure difference of these gases between the air sacs and the RBCs present in the blood. Resistance to the inhaled air flowing from the nasal cavity through the trachea and bronchioles to the air sacs increases the temperature of the exhaled air. The amount of air exhaled and the rate of exhalation indicate the health of the respiratory system.
The Peak Expiratory Flow (PEF) is a person's maximum speed of expiration. Peak flow readings are higher when patients are well and lower when the airways are constricted. Spirometry (meaning the measuring of breath) is the most common of the Pulmonary Function Tests (PFTs), in which the amount (volume) and/or speed (flow) of air that can be inhaled and exhaled is measured. Spirometry is an important tool for generating a spirogram, which is helpful in assessing conditions such as asthma, pulmonary fibrosis, cystic fibrosis and COPD (Chronic Obstructive Pulmonary Disease) and their severity. The spirometry test is performed using a device called a spirometer, which measures different lung volumes and the air flow rate. There are different methods of air flow measurement in different types of spirometer, viz. turbine type, differential-pressure type, bellows type, ultrasonic, etc., each having some advantages and disadvantages.
There are some challenges in exhaled-air flow measurement using a spirometer: (i) only a very low air force and pressure (in mbar) is exerted from the mouth for measurement; (ii) complex signal conditioning is required, as only very low amplitude signals (in the milli- or microvolt range) are available; (iii) the span of the exhalation blow is short (4 to 5 s only) for the sensor to capture the signal. Many sensors have been tested in spirometers to obtain proper air flow measurement under the above limitations.
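As an arithmetic sketch only, the two quantities of interest, PEF and exhaled volume, could be derived from a sampled flow signal as below. The flow waveform and sampling rate are synthetic, and the conversion from sensor voltage to flow (L/s) is assumed to have been calibrated already; neither is taken from this paper's measurements.

```python
import numpy as np

# Synthetic exhalation: a fast rise followed by an exponential decay,
# lasting about 4 s, consistent with the 4-5 s blow mentioned above.
fs = 100.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 4.0, 1.0 / fs)
flow = 6.0 * np.exp(-t / 0.8) * (1 - np.exp(-t / 0.05))   # flow in L/s

pef = float(flow.max())                      # Peak Expiratory Flow: max flow
# Exhaled volume = integral of flow over time (trapezoidal rule).
volume = float(np.sum(0.5 * (flow[1:] + flow[:-1])) / fs)

print(f"PEF = {pef:.2f} L/s, volume = {volume:.2f} L")
```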
In this paper, a new approach to human exhaled-air flow measurement using polyvinylidene fluoride (PVDF) is reported and tested [1]. Piezoelectricity is the ability of a material to produce a voltage whenever it is mechanically strained or stressed. PVDF is used in many biomedical applications because of its piezoelectric and pyroelectric properties [2]. The pyroelectric property of PVDF has been used to detect sleep apnea [5] and to monitor respiration rate [6]. Experiments to investigate the voltage generated by human exhalation using the piezoelectric sensor were undertaken. The piezoelectric sensor was tested for its maximum voltage output under human exhalation, and its properties were checked in the lab to establish its suitability for exhaled-air flow measurement. After this testing, a prototype with proper signal conditioning was developed and tested for measuring the exhalation of different subjects. The results of these experiments are presented. The aim of our present work is to investigate the use of the PVDF-based air flow sensor as a diagnostic tool to evaluate exhaled air flow.
II. PIEZOELECTRIC SENSOR
The Greek word 'piezo' means to press, so piezoelectricity is literally 'pressure electricity': the creation of an electric charge in a
material when subjected to an applied stress. This electric charge results from the crystal structure or atomic arrangement of the material: the charge is created by a slight deformation of the material under an external stress, which causes a slight variation in the bond lengths between cations and anions. Certain crystals show the piezoelectric effect, as do composites such as polycrystalline lead zirconate titanate based ferroelectric ceramic materials after being subjected to a certain process that makes them piezoelectric. When they are subjected to a mechanical strain they become electrically polarized, and the degree of polarization is proportional to the applied strain. The opposite effect is also possible: when they are subjected to an external electric field, they are deformed [3].
Piezoelectric materials such as PZT can be used as a medium to convert mechanical energy, usually forces, into electrical energy that can be stored and used to generate power. This is a technology of great interest where available power is limited [4].
Voltage generation due to stress in the piezo film is represented by equations (1) and (2):
S = d·E + s·T    (1)
D = d·T + ε·E    (2)
When a strip of piezoelectric film is stretched, it generates an electrical signal (a charge or voltage between the upper and lower electrode surfaces) proportional to the amount of elongation. This is the quasi-static condition of the material, and its detailed mathematical expression is given by equations (3) and (4):
S = (d31/t)·V + (1/(Y11·w·t))·F    (3)
Q = (d31·l/t)·F + C·V    (4)
where S is the effective strain of the device, Q is the electrical charge on the electrodes of the device, F is the force exerted on the device, V is the voltage across the electrodes, Y is Young's modulus under constant voltage, d is the general piezoelectric coefficient, C is the capacitance under constant force, l, w and t stand for effective length, width and thickness respectively, and the indices denote direction.
The voltage developed by a piezoelectric material depends on the piezoelectric strain constant d, the electro-mechanical coupling coefficient k, the piezoelectric voltage constant g, and the permittivity of the material ε.
Piezo sensors from different manufacturers are available on the market; Fig. 1 shows an image of the piezo film (PVDF) by Measurement Specialties used in our study [7].
Fig 1: DT Series elements with lead attachment
PVDF is a non-reactive, flexible, lightweight and biocompatible polymer, available in various thicknesses and sizes, with a strong piezoelectric property. Because piezo film is active in nature, it produces a voltage upon the application of force. This unique property enables the measurement of the very low exhaled air force in the 31 mode. It is also extremely durable, capable of withstanding hundreds of millions of flexing cycles, and shock resistant. Table 1 shows the various parameters of the PVDF film, with emphasis on its large stress constant (g31) for conversion of low air force.
In this experiment, mode 31 is used: the force (the exhaled air flow) is applied in the 3 direction on the piezo sensor placed in a pipe, and the electrodes are attached in the 1 direction on the sensor to obtain the voltage output.
Table 1: Specification sheet for PVDF film by Measurement Specialties
The DT Series piezo film sensor with lead attachment, 28 µm thick and operated in mode 31, is used in this experiment, together with an instrumentation amplifier (AD620, Analog Devices, USA) whose high input impedance matches the high impedance of the PVDF film. A Schottky diode, chosen for its very low forward voltage drop of only about 0.2 V, is used to rectify the AC signal generated by the film during testing.
III. EXPERIMENTAL SETUP
First, the usefulness of the PVDF film was tested by observing its output change due to exhalation. Fig. 2 shows the AC output response of the PVDF film, in d31 mode, on a DSO when a person exhaled into a spirometry pipe. The subject is asked to exhale from one end of the pipe while the piezo film is attached to the other end; the force of the blow moves the film, which generates a voltage due to the piezoelectric effect.
For mounting the PVDF film in a pipe, different pipe materials were studied, as the internal roughness of a pipe plays an important role in keeping frictional losses low for the fluid (air) moving through it.
According to the standard, the material should be lightweight and have a low friction coefficient. Among the different pipe materials, polyvinyl chloride was selected because it is rigid, easily available, inexpensive and easy to disinfect. Experiments were carried out by taking human exhalations three times for each of several persons. Different positions of the film in the pipe relative to the mouth, with different pipe diameters (24 mm, 32 mm, 40 mm) and the same length (18 cm), were tested for maximum voltage generation with fast response (following the American Thoracic Society (ATS) standard for peak flow measurement and spirometry). Human exhaled air blow was also measured with an anemometer during each measurement with the PVDF film, giving a range of exhaled air flow rates from 0.1 m/s to 8 m/s depending on height, weight, age and sex. As a result of this experimentation, a 40 mm diameter pipe with the film mounted above the centre position, at a distance of 3.5 cm from the pipe inlet, was selected as giving good responses from the film.
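Since the anemometer reports air speed while spirometry works with volumetric flow, the conversion Q = v·A for the selected 40 mm pipe can be sketched as below. This is illustrative arithmetic only, not a calculation from the paper.

```python
# Hedged sketch: converting the anemometer's air speed (m/s) into a
# volumetric flow rate (L/s) through the selected 40 mm pipe via
# Q = v * A, where A is the pipe cross-sectional area.

import math

def flow_lps(speed_mps: float, diameter_m: float) -> float:
    area = math.pi * (diameter_m / 2) ** 2   # pipe cross-section, m^2
    return speed_mps * area * 1000.0         # m^3/s -> L/s

d = 0.040                                    # 40 mm pipe diameter
for v in (0.1, 8.0):                         # measured speed range, m/s
    print(f"{v} m/s -> {flow_lps(v, d):.3f} L/s")
```

The measured 0.1 to 8 m/s speed range thus corresponds to roughly 0.13 to 10 L/s of volumetric flow in this pipe.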
The next section presents statistics of the participating persons and the experimentation carried out on them, which was approved by the ethics committee; consent was taken from all participants.
Fig 2: Response of human blow on PVDF film
IV. STATISTICS OF PARTICIPATING
SUBJECTS
A number of experiments were conducted with the participation of 12 subjects of varying age (22 to 40 years), weight (42 to 78 kg) and height (4.75 to 5.75 ft), in order to record the response of the piezoelectric film sensor. The details of the 12 subjects (3 female and 9 male) who participated in these experiments are given in Fig. 3.
Fig. 3: Statistics of participating subjects
Fig. 4: Set-up of the amplification circuit for the piezo film sensor
As the change in piezo film output with air force is in the µV range, amplification is necessary for recording and analysis. Fig. 4 shows the experimental set-up with the charge amplifier used with the piezo film. The output of the charge amplifier is given by Q/C, where Q is the charge developed on the piezo film and C is the feedback capacitance of the charge amplifier. The output voltage of the charge amplifier depends on the feedback capacitance, not the input capacitance, which means it is independent of the cable capacitance. The major advantage of a charge amplifier is therefore realised when a long cable is used between the piezo film sensor and the electronics. In addition, it minimises charge leakage through the stray capacitance around the sensor. Thus the long cables used in the experiment do not disturb the small voltage generated by the human exhaled blow.
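The Q/C relation above can be sketched as follows; the charge and feedback capacitance values are assumptions for illustration, and the loop simply demonstrates that the ideal output does not involve the cable capacitance at all.

```python
# Hedged sketch: ideal charge-amplifier output, V_out = Q / C_f, showing
# that the output is set by the feedback capacitance only, not by the
# cable (input) capacitance. Values are illustrative assumptions.

def charge_amp_output(q_coulombs: float, c_feedback_farads: float) -> float:
    """Ideal charge amplifier: output voltage depends on Q and C_f alone."""
    return q_coulombs / c_feedback_farads

Q = 1.2e-9          # charge from the piezo film, C (assumed)
C_f = 10e-9         # feedback capacitance, F (assumed)

# Adding cable capacitance changes nothing in the ideal model:
for c_cable in (0.0, 100e-12, 1e-9):   # 0, 100 pF, 1 nF of cable capacitance
    v_out = charge_amp_output(Q, C_f)  # note: c_cable does not appear
    print(f"cable C = {c_cable:.1e} F -> V_out = {v_out:.3f} V")
```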
The response of the piezoelectric sensor to an exhalation blow, recorded with the participation of different subjects, is shown in Figures 5 and 6 for different exhalation conditions, viz. forceful exhalation and normal breathing, which is important for spirometry development.
V. TEMPERATURE EFFECT ON
PIEZOELECTRIC SENSOR
Fig 7 : Experimental setup for temperature effect
Fig. 5: Response of the piezoelectric sensor to an exhalation blow
Exhaled air is 2 to 3 degrees warmer than inhaled air (air drawn from the surroundings through the nose warms as it travels through the respiratory tract). While using the PVDF sensor for air flow measurement, the effect of this temperature change needs to be tested, because it may affect the final prototype reading. Fig. 7 shows the set-up for measuring this effect. A thermometer is used as the calibrating temperature device, and a light bulb whose intensity is varied with a calibrated variac changes the temperature. The set-up comprises the lamp bank and the spirometry pipe in which the PVDF film is placed, with the voltage observed on a calibrated Agilent microvoltmeter. The result of this measurement is shown in Fig. 8.
Fig. 6: Response of PVDF film
In spirometry it is necessary to have a forceful exhalation for the measurement of lung volumes. Following the ATS standard for spirometry testing, experimentation was carried out on the effect of forceful initial exhalation and normal breathing on the PVDF film, as shown in Figs. 5 and 6. The results demonstrate that exhalation at the start of the blow (the first 1 to 2 s of the entire spirometry blow) gives the highest peak voltage compared with normal breathing, as measured with the Agilent microvoltmeter. The output voltage ranges from 0.2 to 3.0 V and depends on the respiration rate of the subject, which in turn depends on height, weight, age and sex. It is higher for male participants (up to 3 V) than for their female counterparts. Also, the greater the age, weight and height of a person, the greater the force of the exhalation blow and hence the output.
Fig. 8: Temperature effect on the piezoelectric sensor
The output voltage varied from 35.77 to 96.89 mV for temperatures ranging from 31 to 50 °C, each reading taken over 5 s, since the spirometry blow of an average person lasts 4 to 6 s. The maximum output voltage of 96.89 mV was recorded at 50 °C.
However, as the human body temperature range is very small (36 °C to 40 °C), the temperature-induced change over this range is negligible and can be ignored, as
it will not contribute to the voltage generated by the air flow.
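A first-order check of this claim can be sketched from the two endpoint readings quoted above, assuming (purely for illustration) a linear thermal response in between:

```python
# Hedged sketch: estimate the thermal contribution across the body
# temperature range (36-40 C) from the endpoint readings quoted in the
# text (35.77 mV at 31 C, 96.89 mV at 50 C), assuming linearity.

v1, t1 = 35.77, 31.0   # mV, deg C (from the text)
v2, t2 = 96.89, 50.0   # mV, deg C (from the text)

slope = (v2 - v1) / (t2 - t1)          # mV per deg C under the assumption
delta_v = slope * (40.0 - 36.0)        # swing across the 36-40 C range

print(f"~{slope:.2f} mV/C -> ~{delta_v:.1f} mV over 36-40 C")
```

Under this linear assumption the swing over 36 to 40 °C is on the order of 13 mV, small compared with the 0.2 to 3.0 V exhalation signal reported above.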
VI. CALIBRATION OF PIEZOELECTRIC
SENSOR
Calibration of the PVDF sensor for air volume measurement is important for spirometry and peak flow measurement. Calibration was carried out with a 2.5 L calibration syringe developed in the lab (designed and built using PVC pipe) and shown in Fig. 9. Different air volumes, with different strokes and over different time periods, were passed over the film placed in the tube to obtain the film output. The syringe volume is calculated from its dimensions; known volumes of 0.5 L, 1 L, 1.5 L and 2.5 L were passed with stroke times varying from 1 s to 8 s, giving flow rates in L/s as required by the ATS standard, and the response of the PVDF film was measured to carry out the calibration. The resulting graph is shown in Figure 10. The output voltage increased with increasing airflow volume; a maximum output of 97.98 mV was recorded for a 2500 ml airflow volume delivered over 8 seconds.
From this graph (Fig. 10) we can obtain an equation relating voltage output to flow rate. This equation will help in the design of the prototype, i.e. in interpreting the sensor responses to the varying exhalation volumes of different subjects.
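Such a calibration equation can be derived with an ordinary least-squares straight-line fit, as sketched below. The (flow rate, voltage) pairs are made-up placeholders, not the paper's measured data; only the fitting and inversion procedure is illustrated.

```python
# Hedged sketch: fitting V = a*flow + b to calibration points, then
# inverting the fit to estimate flow from a measured voltage. The data
# points below are illustrative assumptions, not values from Fig. 10.

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

flow_lps = [0.3125, 0.5, 1.0, 2.5]       # flow rate, L/s (assumed)
v_mv     = [40.0, 55.0, 75.0, 97.98]     # film output, mV (assumed)

a, b = linear_fit(flow_lps, v_mv)
print(f"V(mV) ~= {a:.2f} * flow(L/s) + {b:.2f}")

# Invert the fit to estimate flow from a measured voltage:
v_meas = 80.0
flow_est = (v_meas - b) / a
print(f"{v_meas} mV -> ~{flow_est:.2f} L/s")
```

In the prototype, this inverted relation is what would turn a sensor reading into a flow-rate estimate.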
VII. CONCLUSION
The wide and varied experimentation carried out on the PVDF film shows that the film gives an appreciable change in output which, after amplification and proper calibration, can be used for the detection of lung volumes and capacities. A spirometer and a peak flow measuring device can be built using the film. The results of the above experiments will inform the design of the prototype.
Fig. 9: PVDF film calibration set-up, with the calibration syringe and the piezo film spirometry pipe
Fig. 10: Calibration of the piezoelectric sensor with constant airflow volume in a fixed time interval (8 seconds)
REFERENCES
[1] R. H. Brown, "The Piezo Solution for Vital Signs Monitoring," Medical Design Technology, March 2008, pp. 36-40.
[2] G. R. Manjunatha, K. Rajanna, D. R. Mahapatra, "Polyvinylidene fluoride film based nasal sensor to monitor human respiration pattern: An initial clinical study," Journal of Clinical, Springer, 2013.
[3] H. Kawai, "The piezoelectricity of poly(vinylidene fluoride)," Jpn. J. Appl. Phys., vol. 8, pp. 975-976, 1969.
[4] M. R. Mhetre, S. N. Nagdeo, H. K. Abhyankar, "Micro energy harvesting for biomedical applications: review," Proc. IEEE 3rd International Conference on Electronics Computer Technology (ICECT 2011), Kanyakumari, India, 8-10 Apr. 2011.
[5] R. B. Berry, G. L. Koch, S. Trautz, M. H. Wagner, "Comparison of respiratory event detection by polyvinylidene fluoride film and a pneumotachograph in sleep apnea patients," Chest, vol. 128, pp. 1331-1338, 2005.
[6] D. Dodds, J. Purdy, C. Moulton, "The PEP transducer: a new way of measuring respiratory rate in the non-intubated patient," J. Accid. Emerg. Med., vol. 16, pp. 26-28, 1999.
[7] http://www.meas-spec.co
Distributed Critical Node Detection of malicious flooding in Adhoc
Network
1Malvika Bahl, 2Rajni Bhoomarker, 3Sameena Zafar
RGPV, Bhopal, MP, India.
Email: bahl.malvika@gmail.com
Abstract -- Nodes in mobile ad hoc wireless networks have a limited transmission range; they depend on their adjacent nodes to relay packets meant for destinations out of their scope. Nodes can rely on their neighboring nodes based on past records of successful packet transfer, and nodes which interrupt this relaying of packets and act maliciously need to be tackled. An intrusion detection scheme (IDS) to detect and defend against malicious nodes' attacks in a wireless network is therefore required. Critical nodes are the ideal junctions and can be considered most suitable for monitoring the behavior of the nearby nodes connected to them. Whenever congestion occurs, senders should lower their transmission rate; if certain senders do not, the destination can find them by comparing the current sending rate with the previous one. When the two rates are equal, the corresponding sender is considered an attacker and has to be removed from the existing path. A node of this type continuously sends control and data packets into the network, hindering connection establishment.
Keywords—Wireless Adhoc Network, malicious nodes,
Critical Nodes
I. INTRODUCTION
A mobile ad-hoc network is a group of devices connected without a prior infrastructure set-up such as access points or independent base stations. Such networks are suitable on a battlefield with no existing infrastructure, for emergency workers at an earthquake that has destroyed the infrastructure, and in similar situations. In all such cases each node consists of a router and a host, usually on the same computer/node. In these environments, however, the topology may change all the time, causing the desirability and validity of paths to change spontaneously. Needless to say, these circumstances make routing in ad hoc networks quite different from its wired counterpart. Security is a key feature of any network, and its implementation here too differs from that in fixed wired networks. For this reason several research studies have focused on ad-hoc security, covering both intrusion prevention and intrusion detection systems. Prevention should stop unauthorized access to the network; however, this is not always possible, and the remaining risk enforces the implementation of a second line of defense: intrusion detection. Traditional intrusion detection systems (IDS) in wired networks analyze the behavior of the elements in the network, trying to identify anomalies produced by intruders and, once identified, to start a response against the intruders. These detection systems are usually placed in the elements with the most confluent traffic, such as routers, gateways and switches. Unfortunately, in ad-hoc networks those elements are not used, and it is not possible to guess which nodes will route more traffic from their neighbors and to install IDS systems only in those nodes. This justifies the proposal of a distributed intrusion detection system in which every host in the network investigates possible misbehavior of its neighbors. One of the most important things to secure in ad hoc networks is the routing system, since attacks against this part of the network can result in misbehavior of mobile nodes.
An intrusion detection system (IDS) is a device or software application that monitors network or system activities for malicious activities or policy violations and produces reports to a management station.
II. LITERATURE SURVEY
An ad hoc wireless network is a self-organized, autonomous network consisting of mobile nodes, each equipped with a transmitter and a receiver, which communicate with each other over wireless links. The wireless channel used by such networks is considered highly vulnerable to malicious attacks because of the lack of fixed infrastructure, limited bandwidth, dynamic topology, resource constraints and, especially, limited battery lifetime and memory. Communication is difficult to organize due to frequent network topology changes. Routing and network management are done cooperatively by the nodes, forming a multi-hop architecture in which each node works as a host as well as a router, forwarding packets for other nodes that may not be within direct communication range [6, 2]. During packet forwarding, valuable packets belonging to one node are at the discretion of another node, which can act maliciously or selfishly and harm the packets in transit.
Nodes with malicious intent can easily set up various kinds of attack. A black hole attack is initiated by a type of malicious node that participates in the route discovery mechanism and tries to become part of an active route. A gray hole attack is initiated by a type of malicious node that does not participate in the route discovery mechanism initiated by other nodes and thus is not part of an active route [2]. A black hole is a
node which drops all the packets which are supposed to be forwarded by it [1].
A jellyfish (JF) attack is a type of selective black hole attack: when a JF node gets hold of a packet to forward, it delays or drops data packets for a certain amount of time before forwarding normally [4].
The ad hoc on-demand distance vector (AODV) routing protocol is a reactive, demand-driven protocol. In AODV, nodes do not maintain whole routing paths or share routing tables; they only maintain a routing table with the information for a particular route to a certain destination. When a node wants to send data to another, it checks its own table. If there is a route to the destination, data is transferred using it. If there is no route to the destination, the route discovery process takes place: a route request (RREQ) packet is sent to the node's neighbours. Upon receiving the RREQ packet, neighbours check whether they are the destination and then check their routing tables for the destination. If they are the destination, a route reply (RREP) packet is sent back. If they are not, they look up their routing table to check whether they have a "fresh enough" route to the destination. If they do not have such a route either, they forward the packet to their neighbours. A route error (RERR) message is used to notify other nodes when a node finds a link failure. In the routing table, each route has a timer; if the route is not used for a particular amount of time, it is deleted [1].
Various types of attack have been identified on mobile ad hoc networks (MANETs) [5]:
1) Denial of Service (DoS) attack
The denial of service (DoS) attack is launched by an intruder inserting packets into the network to devour network resources, for example a doubtful node flooding the MANET by generating route request packets and seizing the bandwidth.
a) Flooding attack
The flooding attack is a denial-of-service attack in which a malicious node sends futile packets to devour the precious network resources.
2) Routing table overflow
Nodes misbehave by assailing the routing tables of other nodes, sending route request packets that search for nonexistent nodes. Owing to memory size restrictions, the routing tables of the attacked nodes eventually overflow.
3) Impersonation
A node may disguise itself as another node and send forged routing information masqueraded as some other normal node.
4) Power consumption
In mobile ad hoc networks, the power consumption of mobile nodes is a decisive constraint. A misbehaving node with an ample power supply can send many packets to assail other nodes. Once mobile nodes receive these packets, they may have to relay them or record route entries, and these attacking packets thus drain the power of the mobile hosts.
5) Resource consumption attacks [4]
Packet injection attacks and control packet floods are resource consumption attacks.
Selfish behavior (type 1) is a misbehaving attack in which the misbehaving node does not participate in the route discovery mechanism. It is similar to a gray hole attack, but the intention behind the misbehaviour is to conserve energy and stop cooperating with other nodes; the act is aimed at conserving energy but results in disrupting overall network performance. Under type 2 selfish behaviour, the misbehaving nodes participate in the route discovery mechanism and try to become part of an active route; once part of an active route, they start dropping data packets. This misbehaviour is similar to a black hole attack, but the objective of these misbehaving nodes is to conserve the energy/effort required to forward data packets belonging to other nodes [2].
According to the authors of [1], simulation results show that using an individual reputation system, alerting on finding a black hole node, and exchanging neighbour information messages on meeting a new neighbour help in detecting and eliminating malicious or black hole nodes from the network.
Simulation results showed an enormous decrease in packet delivery ratio and extensive packet dropping by these malicious and misbehaving nodes. That study could be a valuable asset for researchers working to propose secure routing protocols that mitigate such malicious or misbehaving attacks [2].
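The AODV route discovery surveyed earlier can be caricatured as a breadth-first flood of RREQ packets. The sketch below is a much-simplified illustration with a made-up topology; real AODV also uses sequence numbers, route timers and RERR handling, which are omitted here.

```python
# Hedged, much-simplified sketch of AODV-style route discovery: a node
# floods RREQ to its neighbours; the destination answers with RREP along
# the reverse path. No sequence numbers, timers or RERR handling.

from collections import deque

def discover_route(graph, src, dst):
    """BFS flood of RREQ; returns the reverse-path route to dst, or None."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:                      # destination: send RREP back
            route, n = [], node
            while n is not None:             # unwind the recorded reverse path
                route.append(n)
                n = parent[n]
            return list(reversed(route))
        for nb in graph.get(node, []):       # forward RREQ to new neighbours
            if nb not in parent:
                parent[nb] = node
                queue.append(nb)
    return None                              # no route found (RERR case)

net = {"S": ["A", "B"], "A": ["C"], "B": ["C"], "C": ["D"], "D": []}
print(discover_route(net, "S", "D"))
```

A black hole node in this picture would answer the RREQ immediately and then drop every data packet routed through it.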
A novel approach to detecting malicious attacks based on neighbour information is presented by the authors of [3]. In this scheme they show that the right place to validate a route reply, and to prevent the propagation of forged information in the network, is the first node in the reverse path [3].
In the AODV protocol, both black and gray holes advertise themselves as having the freshest route to the destination, with the intention of becoming part of the route from source to destination. In this way the source node can easily be exploited by attackers who send the RREP message first. The source node then sends packets over the route on which the attacker nodes are present: a black hole drops all the packets, while a gray hole works honestly at the beginning, forwarding packets, and later starts dropping them. Because the source can easily be exploited into always sending data packets on the shortest path, these two attacks can easily be launched against AODV.
The goal of the solution proposed by the authors of [7] is the avoidance of black/gray hole attacks by discarding the first and selecting the second shortest path for data packet transmission. In this way it becomes difficult for the malicious node to be the second to send an RREP message: to become part of the second shortest route, a malicious node would have to monitor the entire network, which is obviously not an easy task in a MANET.
Previous work on misbehaving nodes has not taken into account flooding attacks, which can block and congest the network and disrupt connection establishment between communicating nodes. Here we study such misbehaviour; the nodes which identify such behaviour and act to counter its effect are designated critical nodes.
Our proposed method is based on the Family Acknowledgement Tree (FAT) protocol, which supports a reliable multicast service for mobile ad hoc networks. In every reliable multicast protocol, a recovery scheme is used to ensure end-to-end delivery of unreliable multicast packets to all group members; FAT is based on a tree-based recovery mechanism.
To cope with node movements, FAT constructs an ACK tree on which each node maintains reachability information for three generations of nodes on the tree. When the tree is fragmented by a moving node, the fragments are combined back into the tree using the underlying multicast routing protocol. FAT then adopts an adaptive scheme to recover missed packets that were multicast to the group during fragmentation and were not repaired by the new reliability agent.
III. PROPOSED METHOD
In a distributed intrusion detection system, every host in the network investigates possible misbehaviour of its neighbours. Inherent constraints such as battery life, limited resources and the maintenance of long routing tables under a changing topology can cause large overheads. Hence, analogously to the wired case, certain nodes can be assigned for extensive monitoring of the possible misbehaviour of their neighbouring nodes. Such nodes are critical, as they are expected to find possible faulty and misbehaving nodes in the network.
Whenever there is a packet flooding attack on a network by malicious node(s), two connected nodes cannot make a reliable connection and communicate with each other, as the path between them is congested by the attack; to escape such a situation, the nodes follow an alternative path to establish a successful link and communicate.
In this scenario, to ensure that every connected pair has alternative paths available, a FAT (Family Acknowledgement Tree) topology can be considered and emulated in its wireless counterpart, i.e. ad hoc networks.
Fig 1. A four-pod fat topology
A p-pod fat topology has p pods in the horizontal direction. It uses 5p^2/4 p-port switches and supports non-blocking communication among p^3/4 end hosts. A pair of end hosts in different pods has p^2/4 equal-cost paths connecting them; once the two end hosts choose a core switch as the intermediate node, the path between them is uniquely determined. The topology has three vertical layers: top of rack (ToR), aggregation and core. A pod is a management unit, a replicable building block with the same power and management infrastructure.
This topology can be implemented in ad hoc networks, ensuring the reliable establishment of alternative paths between any communicating nodes.
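The fat-topology arithmetic quoted above (5p^2/4 switches, p^3/4 hosts, p^2/4 equal-cost paths) can be checked with a short sketch; the helper function and its name are illustrative, not from the paper.

```python
# Hedged sketch of the p-pod fat topology counts described above.

def fat_tree_counts(p: int):
    """Return (switches, hosts, equal-cost paths) for a p-pod fat topology."""
    assert p % 2 == 0, "pod count p must be even"
    switches = 5 * p * p // 4          # 5p^2/4 p-port switches
    hosts = p ** 3 // 4                # p^3/4 end hosts
    equal_cost_paths = p * p // 4      # p^2/4 paths: one per core switch
    return switches, hosts, equal_cost_paths

for p in (4, 8):
    s, h, paths = fat_tree_counts(p)
    print(f"p={p}: {s} switches, {h} hosts, {paths} equal-cost paths")
```

The p^2/4 equal-cost paths are exactly what gives each communicating pair the alternative routes the proposed method relies on.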
IV. PROPOSED WORK
The method of handling packet flooding attacks by malicious nodes involves taking another path to the destination node. Whenever congestion becomes likely in a network, senders should reduce their sending rate. If the channel continues to be congested because some sender nodes do not reduce their sending rate, this can be detected by the destination: it compares the previous sending rate of a flow with its current sending rate, and when the two rates are the same, the sender of that flow is considered an attacker. To handle this situation, an alternative path is selected so as to complete the message transfer, and the attacker is removed from the network.
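The destination-side check described above can be sketched as follows. The flow records and the equality tolerance are illustrative assumptions; the point is simply that a flow whose rate did not drop despite congestion is flagged.

```python
# Hedged sketch: under congestion, senders are expected to lower their
# sending rate; a flow whose current rate equals its previous rate is
# flagged as a suspected attacker, as described in the text.

def detect_attackers(prev_rates, curr_rates, tol=1e-9):
    """Return the senders whose rate did not change despite congestion."""
    attackers = []
    for sender, prev in prev_rates.items():
        curr = curr_rates.get(sender, 0.0)
        if abs(curr - prev) <= tol:   # rate unchanged -> suspected attacker
            attackers.append(sender)
    return attackers

prev = {"A": 100.0, "B": 80.0, "C": 120.0}   # packets/s before congestion
curr = {"A": 60.0,  "B": 80.0, "C": 120.0}   # packets/s after congestion signal

print(detect_attackers(prev, curr))   # A slowed down; B and C did not
```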
This selection of an alternative path between communicating nodes is based on a selfish path selection algorithm. First, it uses a lightweight, distributed, end-system-based path selection algorithm to move flows from overloaded paths to underloaded paths, improving efficiency and preventing hot spots. Second, it uses hierarchical addressing to facilitate efficient path selection: each end system can use a pair of source and destination addresses to represent an end-to-end path, and can vary paths by varying addresses.
An overview of the selfish path selection system: multiple paths connect each source and destination pair, and every node has three functional components. The Elephant Flow Detector detects elephant flows. The Path
State Monitor monitors the traffic load on each path by periodically querying the intermediate nodes. The Path Selector moves flows from overloaded paths to underloaded paths.
Fig 2. Selfish Path Selection
A possible drawback of this approach is path oscillation. The reason for path oscillation is that different sources move flows to under-utilized paths in a synchronized manner, as shown in Figure 3. In this approach the interval between two adjacent flow movements of the same end node is a fixed span of time; adding randomness to the control interval can prevent path oscillation.
Fig 3. Path oscillations
V. CONCLUSION
Once the attackers are identified, the attack traffic is discarded and the attacking nodes are removed to make the network intruder-free. The aim of an intrusion detection system is to detect attacks on mobile nodes or intrusions into the network. Well-designed intrusion detection systems can effectively identify misbehaving activities and help to offer adequate protection. Therefore, an IDS has become a vital component of the defense mechanisms needed in the presence of critical nodes in a mobile ad hoc network.
REFERENCES
[1] Htoo Maung Nyo, Piboonlit Viriyaphol, "Detecting and Eliminating Black Hole in AODV Routing," IEEE, 2011.
[2] Mohammed Saeed Alkatheiri, Jianwei Liu, Abdur Rashid Sangi, "AODV Routing Protocol Under Several Routing Attacks in MANETs," IEEE, 2011.
[3] Mohammad Taqi Soleimani, "Secure AODV against Maliciously Packet Dropping," IEEE.
[4] Nidhi Purohit, Richa Sinha and Khushbu, "Simulation study of Black hole and Jellyfish attack on MANET using NS3," Institute of Technology, Nirma University, 2011.
[5] Meenakshi Patel, "Detection of Malicious Attack in MANET: A Behavioral Approach," 2012.
[6] Ankur Mishra, Ranjeet Jaiswal, "A Novel Approach for Detecting and Eliminating Cooperative Black Hole Attack Using Advanced DRI Table in Ad Hoc Network," IEEE, 2012.
[7] Hizbullah Khattak, Nizamuddin, Fahad Khurshid, Noor ul Amin, "Preventing Black and Gray Hole Attacks in AODV Using Optimal Path Routing and Hash," IEEE, 2013.
[8] Tsung-Chuan Huang, Sheng-Chieh Chen, Lung Tang, "Energy-Aware Gossip Routing for Mobile Ad Hoc Networks," 2011 IEEE International Conference on High Performance Computing and Communications.
[9] J. Premalatha, "Enhancing Quality of Service in MANETS by Effective Routing," IEEE, 2010.
[10] Arvind Sankar and Zhen Liu, "Maximum Lifetime Routing in Wireless Ad-hoc Networks," IEEE, 2004.
“Transmission Line Fault Detection & Indication through GSM”
Chandra shekar. P
Electronics and communication engineering
SDIT – Mangalore, Karnataka-India
Email: [email protected]
ABSTRACT- One of the 8051's many powerful features is its integrated UART, otherwise known as a serial port. Because the 8051 has an integrated serial port, values can be read from and written to the serial port very easily. Without the integrated serial port, writing a byte to a serial line would be a rather tedious process, requiring the I/O lines to be turned on and off in rapid succession to properly "clock out" each individual bit, including start bits, stop bits and parity bits. Instead, we simply configure the serial port's operation mode and baud rate. Once configured, we write a value to an SFR to send it over the serial port, or read the same SFR to receive a value. The 8051 automatically lets us know when it has finished sending the character we wrote, and signals whenever it has received a byte so that we can process it. We do not have to worry about transmission at the bit level, which saves us quite a bit of coding and processing time. In this project, we use a temperature sensor, an ADC, a microcontroller (8051), an LCD for displaying the faults and parameters, and a GSM board to send the fault message to the electricity board. Using this system, multiple faults on three-phase transmission lines can be detected, and the temperature, voltage and current can be monitored by means of messages sent over a GSM modem.
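The baud-rate configuration mentioned in the abstract reduces to one reload value. As a numeric check of the standard 8051 relation for timer 1 in 8-bit auto-reload mode with SMOD = 0 (the 11.0592 MHz crystal is a common choice assumed here, not stated in the paper):

```python
# 8051 UART baud-rate arithmetic (timer 1, mode 2, SMOD = 0):
#   baud = f_osc / (12 * 32 * (256 - TH1))
# so the reload value written to TH1 follows by rearranging.

def th1_reload(f_osc_hz, baud):
    """Timer 1 reload value for the requested baud rate (SMOD = 0)."""
    return 256 - round(f_osc_hz / (12 * 32 * baud))

def actual_baud(f_osc_hz, th1):
    """Baud rate actually produced by a given TH1 reload value."""
    return f_osc_hz / (12 * 32 * (256 - th1))

print(hex(th1_reload(11_059_200, 9600)))  # 0xfd
```

With an 11.0592 MHz crystal the division is exact, which is precisely why that crystal frequency is popular for serial work.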
EXISTING SYSTEMS
Generally, when a fault occurs in a transmission line, it goes unseen unless it is severe. Gradually, however, these minor faults can damage the transformer, endanger human life, and even start fires.

At present in India we do not have a system that reports a fault in real time. Because no such real-time system exists, faults damage the connected equipment and become a threat to the people around.

To avoid such incidents as far as possible, maintenance and checking of the transmission lines are generally carried out on a frequent basis, which increases the manpower requirement. Even so, the real intention is often not met, because line failures are frequently caused by unpredictable events such as rain or the toppling of trees. Examples are the Western Ghats, where transmission lines are usually drawn through forest, and places like Cherrapunji, where massive rainfall brings almost everything to a standstill.

It is necessary to understand the gravity and after-effects of a line failure. To overcome these problems, we propose a GSM-based transmission line fault detection system. Whenever a preset threshold is crossed, the microcontroller instantly initiates a message to the area lineman and the control station stating the exact pole-to-pole location. This realizes an almost real-time system.

The real intention of detecting a fault in real time and protecting the transformer at the earliest is thus realized. Transformers are very costly; an 11 kV transformer costs about 3,000 US$ on average. So we are designing a cost-effective, fast-response system that improves safety.
FAULTS
In an electric power system, a fault is any abnormal flow of electric current. For example, a short circuit is a fault in which the current flow bypasses the normal load. An open circuit fault occurs if a circuit is interrupted by some failure. In three-phase systems, a fault may involve one or more phases and ground, or may occur only between phases. In a "ground fault" or "earth fault", current flows into the earth. The prospective short-circuit current of a fault can be calculated for power systems. In power systems, protective devices detect fault conditions and operate circuit breakers and other devices to limit the loss of service due to a failure.

In a polyphase system, a fault that affects all phases equally is a "symmetrical fault". If only some phases are affected, the resulting "asymmetrical fault" requires methods such as symmetrical components for analysis, since the simplifying assumption of equal current magnitude in all phases is no longer applicable.
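The symmetrical-component analysis mentioned above is the standard Fortescue transform: three unbalanced phasors are decomposed into zero-, positive- and negative-sequence sets using the operator a = 1∠120°. A minimal sketch with made-up example phasors:

```python
import cmath

# Fortescue transform: decompose phase currents into sequence components.
a = cmath.exp(2j * cmath.pi / 3)  # unit phasor at 120 degrees

def sequence_components(ia, ib, ic):
    """Return (zero, positive, negative) sequence components of phase a."""
    i0 = (ia + ib + ic) / 3
    i1 = (ia + a * ib + a**2 * ic) / 3
    i2 = (ia + a**2 * ib + a * ic) / 3
    return i0, i1, i2

# A balanced set has only a positive-sequence component:
ia = 1 + 0j
ib = a**2 * ia   # phase b lags a by 120 degrees
ic = a * ia      # phase c lags a by 240 degrees
i0, i1, i2 = sequence_components(ia, ib, ic)
```

For a balanced system the zero- and negative-sequence terms vanish, which is exactly the "equal current magnitude in all phases" assumption; an asymmetrical fault makes them nonzero.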
BLOCK DIAGRAM OF MULTIPLE FAULT DETECTION

CIRCUIT DIAGRAM WITH PIN DETAILS

TECHNICAL DETAILS

Global System for Mobile communications (GSM) is the most popular standard for mobile phones in the world. Its promoter, the GSM Association, estimates that 82% of the global mobile market uses the standard. GSM is used by over 2 billion people across more than 212 countries and territories. Its ubiquity makes international roaming very common between mobile phone operators, enabling subscribers to use their phones in many parts of the world.

GSM has used a variety of voice codecs to squeeze 3.1 kHz audio into between 5.6 and 13 kbit/s. Originally, two codecs, named after the types of data channel they were allocated, were used: Half Rate (5.6 kbit/s) and Full Rate (13 kbit/s). These used a system based upon linear predictive coding (LPC). In addition to being efficient with bit rates, these codecs also made it easier to identify the more important parts of the audio, allowing the air interface layer to prioritize and better protect these parts of the signal.

There are five different cell sizes in a GSM network: macro, micro, pico, femto and umbrella cells. The coverage area of each cell varies according to the implementation environment. Macro cells can be regarded as cells where the base station antenna is installed on a mast or on a building above average rooftop level. Micro cells are cells whose antenna height is below average rooftop level; they are typically used in urban areas. Pico cells are small cells whose coverage diameter is a few dozen metres; they are mainly used indoors. Femto cells are designed for use in residential or small business environments and connect to the service provider's network via a broadband internet connection. Umbrella cells are used to cover shadowed regions of smaller cells and fill in gaps in coverage between those cells.

The cell's horizontal radius varies from a couple of hundred metres to several tens of kilometres, depending on antenna height, antenna gain and propagation conditions. The longest distance the GSM specification supports in practical use is 35 kilometres. Indoor coverage is also supported by GSM and may be achieved by using an indoor pico cell base station, or an indoor repeater with distributed indoor antennas fed through power splitters, which deliver the radio signals from an outdoor antenna to a separate indoor distributed antenna system. These are typically deployed where a lot of call capacity is needed indoors, for example in shopping centres or airports.

SUBSCRIBER IDENTITY MODULE

One of the key features of GSM is the Subscriber Identity Module (SIM), commonly known as a SIM card. The SIM is a detachable smart card containing the user's subscription information and phonebook. This allows the user to retain his or her information after switching handsets. Alternatively, the user can change operators while retaining the handset simply by changing the SIM. Some operators block this by allowing the phone to use only a single SIM, or only a SIM issued by them; this practice is known as SIM locking, and is illegal in some countries.

A subscriber can usually contact the provider to remove the lock for a fee, use private services to remove the lock, or make use of the ample software and websites available on the Internet to unlock the handset themselves. While most websites offer the unlocking for a fee, some do it for free.
modem transmits data through a wireless network
whereas a dial-up modem transmits data through a
copper telephone line. Most mobile phones can be used
as a wireless modem.
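Such a modem is driven over the serial link with the standard GSM "AT" command set for SMS (text mode via AT+CMGF, message submission via AT+CMGS, body terminated with Ctrl-Z). The sketch below only builds the byte sequences that would be written to the modem; the phone number is a made-up placeholder and no serial port is opened.

```python
# Standard GSM SMS AT-command framing (no hardware access in this sketch).
CTRL_Z = b"\x1a"  # terminates the message body in AT+CMGS

def sms_command_sequence(number, text):
    """Return the ordered list of byte strings for sending one text SMS."""
    return [
        b"AT\r",                                   # attention / liveness check
        b"AT+CMGF=1\r",                            # select SMS text mode
        f'AT+CMGS="{number}"\r'.encode("ascii"),   # start message to number
        text.encode("ascii") + CTRL_Z,             # body, closed with Ctrl-Z
    ]

seq = sms_command_sequence("+910000000000", "FAULT: phase R overvoltage")
```

In the actual system these strings would be written over the RS 232 link one at a time, waiting for the modem's prompt or "OK" response between steps.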
To send SMS messages, first place a valid SIM card into the GSM modem, which is connected to the microcontroller by an RS 232 cable. After connecting the GSM modem to the microcontroller, you can control the modem by sending instructions to it.

CONCLUSION

Here, in this project, we have designed a GSM-based transmission line monitoring and indication system that sends information about the line to the electricity board via SMS. The project is designed to send an alert message as soon as a fault occurs; in this model, we predict the place of the fault using the distance from pole to pole.

FUTURE SCOPE

In future, a GPS module can be attached that would send the exact location in terms of longitude and latitude.

REFERENCES

[1] B. R. Gupta, Power System Analysis and Design.
[2] Muhammad Ali Mazidi, Janice Gillispie Mazidi, Rolin D. McKinlay, The 8051 Microcontroller and Embedded Systems Using Assembly and C.
[3] www.google.com
[4] www.wikipedia.com
[5] www.nskelectro
Remote health monitoring in ambulance and traffic control using
GSM and Zigbee
Deepak. C. Pardeshi
Department of Instrumentation Engineering,
Vishwakarma Institute of Technology, Pune, India
Email: [email protected]
Abstract — Improving present-day safety measures and transport facilities for patients, and further ensuring their well-being with the help of technology, is the aim of this work.

This paper presents a networking system for an ambulance that can interact and communicate with the traffic signals (or rather, order and alter the status of the traffic signals), collect the crucial parameters of the patient's health, and broadcast them to the dedicated hospital via GSM and Zigbee modules. The live status of the traffic en route can also be checked via the signals, informing the ambulance driver about alternative routes and further reducing the route time.

The system thus installed helps the ambulance command the traffic signal and change its status so as to reach the destination (hospital or site) as soon as possible. It also informs the hospital about the patient's health status via the GSM and Zigbee modules, so that the hospital can be prepared with the prerequisites for saving a life.

Keywords: Ambulance, GSM, Zigbee module, emergency healthcare, traffic density switches, wireless communication.

I. INTRODUCTION

Today, with changes in lifestyle and in the perception of how to live, human health faces many issues. Infants are born with tumors, and due to the inevitable stress of life people are prone to many cardiovascular diseases. Apart from this, daily deaths due to accidents are ever increasing. These facts alert us to how unpredictable situations are and how safety systems should be ready to meet an emergency [1][2][3][4][7].

People with a hemorrhage cannot survive for more than an hour unless given proper medication, but the safety carriers, the ambulances, require more time to reach the site and return to the hospital, which can lead to death. The existing traffic control system is not aware of emergency vehicles, so traffic police are needed to handle traffic control for emergency vehicles, which is inappropriate given the availability of recent wireless communication technology. An intelligent system is therefore needed that would safely and rapidly direct ambulances to hospitals, helping to save lives.

So harnessing the power of technology to overcome the problems stated above is the main aim of this paper.

II. SYSTEM DESIGN

The purpose of this system is to transmit the health parameters of the patient from the ambulance to the monitoring system in the hospital, while controlling the traffic signal and indicating traffic density using GSM and Zigbee modules, so that the ambulance reaches the hospital as early as possible and the patient receives proper medical treatment in time.

It consists of three units: the ambulance unit, the traffic signal unit and the hospital unit.

Fig. 1. Block diagram of the ambulance unit (temperature sensor, pulse rate sensor and traffic density switches feeding an ARM7 processing unit, with LCD, Zigbee and GSM outputs).

A. Temperature sensor:

This is the most important basic unit of the networking system. It consists of an LM35, a precision integrated temperature sensor whose output voltage is linearly proportional to temperature.

B. Pulse rate sensor:

This is used to measure the pulse rate of the patient. In this case, an IR-based obstacle sensor (IR LED) is used.
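A small numeric sketch of how these two sensor readings could map to displayed values. The ADC resolution (10-bit), reference voltage (5 V) and counting window are assumptions, not taken from the paper; the LM35's 10 mV/°C scale factor is from its datasheet.

```python
# Assumed conversion chain for the two sensors described above.

def lm35_celsius(adc_count, vref_mv=5000.0, full_scale=1023):
    """LM35 outputs 10 mV per degree Celsius, linearly."""
    millivolts = adc_count * vref_mv / full_scale
    return millivolts / 10.0

def pulse_rate_bpm(pulse_count, window_s):
    """Pulses counted by the processor's counter over a fixed window."""
    return pulse_count * 60.0 / window_s

temp = lm35_celsius(75)          # roughly 36.7 deg C
bpm = pulse_rate_bpm(12, 10.0)   # 72 beats per minute
```

The same arithmetic would run on the ARM7 before the values reach the LCD and the GSM/Zigbee links.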
Fig. 2. Circuit diagram of pulse rate unit.

Measurement of the pulse rate is achieved by placing the finger between the IR transmitter (D7) and the IR receiver (D1).

Fig. 3. IR transmission method.

The radial artery of the human finger reflects light at an intensity proportional to the change in blood volume. The part of the light which is not reflected is refracted to the IR receiver. The output of the IR receiver is a very small pulse, so it requires amplification.

A 100 Ω resistor is used for current limiting, and a capacitor is used to block the DC component. The current through the LED is 5 V/100 Ω = 50 mA, which is high for an LED but increases the range of the obstacle sensor. The IR receiver is connected in the reverse-biased condition.

Here a two-stage non-inverting amplifier is used for amplification. The voltage gain of the first stage amplifier is (R3 + R4)/R3 = (1 kΩ + 330 kΩ)/1 kΩ = 331. The voltage output of the second stage amplifier is Vout = Vin x 65K/(65K + 3K) = Vin x 65K/68K.

This provides high gain to give a proper square pulse irrespective of the change in the volume of blood flow, which is given to the transistor. Due to the switching action of the transistor, diode D2 turns on or off, indicating pulses. These pulses are fed to the counter mode of the ARM processor, which gives the pulse rate measurement.

Zigbee is the communication module. The sensor outputs are given to the ARM processor, which processes them; the result is given to the Zigbee and GSM modules. Zigbee operates within the 2.4 GHz ISM frequency band and can transmit data up to 100 m at a rate of 25 kbps. The 16x2 LCD displays the pulse rate and temperature of the patient in the ambulance unit.

C. Ambulance unit:

This unit is responsible for transmitting the health parameters of the patient to the hospital. In this unit, biosensors are attached to the body of the patient to pick up the health signals (for example body temperature, pulse rate, ECG, etc.). These signals are processed by the ARM processor and transmitted to the doctor's mobile and the hospital server using the GSM and Zigbee modules, where they are analyzed by a physician or expert doctor. In an emergency, the physician can guide the co-doctor present in the ambulance, so that pre-treatment can be provided before the patient reaches the hospital, possibly saving the patient's life. The ambulance unit is also responsible for controlling the traffic signal, and it learns the traffic conditions along the route. The driver or co-driver can change the traffic signal condition, as well as increase the green signal time as required, using the Zigbee module. Switches operated by the driver or co-driver indicate the traffic condition as HIGH, MEDIUM or LOW density traffic on the LCD of the control panel in the ambulance, so the route can be changed if required. This unit knows the traffic condition because it communicates with the traffic signal unit via the Zigbee module.

Fig. 4. Block diagram of the traffic signal unit (traffic density switches and Zigbee feeding an ARM7 processing unit driving four green and four red LEDs).
D. Traffic signal unit:

This unit is responsible for giving the ambulance first priority to cross the traffic signal, so as to avoid delay in the ambulance reaching the hospital to provide the necessary treatment to the patient in an emergency. It coordinates with the ambulance unit to control the traffic signal using the Zigbee module and the ARM7. When the ambulance reaches the traffic signal and comes within range of the traffic signal's Zigbee module, the green signal time can be increased if required. To give the ambulance priority, the signal sequence can also be altered, but only after the present green signal completes, to avoid any unwanted condition or accident.
Fig. 5. Traffic road side with location of the sensors.

E. Traffic density indication:

Two sensors or switches are placed on each side of the road at set distances from the traffic signal, with corresponding traffic switches on the driver's control panel in the ambulance, providing information about the traffic condition as shown in the figure above. Traffic switches T1 (north), T2 (east), T3 (south) and T4 (west) are provided on the driver's control panel in the ambulance; sensors or switches S1, S3, S5 and S7 (traffic density switches) are on the road side 50 m from the traffic signal, and sensors or switches S2, S4, S6 and S8 are on the road side 100 m from the traffic signal. As traffic builds up on the roads, these switches are actuated, and the resulting traffic condition is displayed on the display unit of the ambulance control panel.

If no switch or sensor on the road is actuated, the display unit in the ambulance indicates LOW traffic when the traffic switches are pressed by the driver or co-driver to check the traffic condition. Consider one side of the road: the T1 switch is pressed to check the traffic condition; if S2 is actuated, HIGH density traffic is indicated, and if S1 is actuated, MEDIUM density traffic is indicated on the display unit of the ambulance. The same holds for all the switches placed on the road sides. This information is used by the ambulance driver to change the route if required. These signals are communicated using the Zigbee module.

F. Hospital unit:

Fig. 6. Block diagram of the hospital unit (GSM and Zigbee modules connected via RS 232 to the hospital PC and the doctor's mobile).

This is the consultation unit. It receives the patient's parameters on the doctor's mobile using the GSM module when the ambulance is far from the hospital; online data is also observed on the server once the Zigbee module of the ambulance comes within range of the Zigbee module of the hospital unit. This data is analyzed by the expert doctor, who can then consult with the doctor in the ambulance to provide pre-treatment to the patient, so that the patient's life can be saved.

III. RESULTS

Using GSM technology, the patient's parameters are transmitted to the doctor's mobile so that the patient can be analysed. With the Zigbee module, the traffic density is observed on the driver's control panel, as shown in the following table.

TABLE I. TRAFFIC DENSITY INDICATION.

Traffic density switch condition    Indication on display panel
S1 ON                               MEDIUM
S3 ON                               MEDIUM
S5 ON                               MEDIUM
S7 ON                               MEDIUM
S2 ON                               HIGH
S4 ON                               HIGH
S6 ON                               HIGH
S8 ON                               HIGH
All switches OFF                    LOW

Also, using the Zigbee module, the patient's health parameters are monitored in real time and displayed on the hospital server using Visual Studio 6.0, with a graph showing body temperature varying with time.

Fig. 7. Screenshot of the hospital monitoring system.
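The switch-to-density mapping of Table I can be sketched for one road direction: the far sensor (100 m from the signal, e.g. S2) indicates HIGH density, the near sensor (50 m, e.g. S1) MEDIUM, and no actuation LOW. Function and parameter names are illustrative.

```python
# Sketch of the Table I logic for one road direction.

def traffic_density(near_on, far_on):
    """Density shown on the ambulance panel for one road direction."""
    if far_on:        # 100 m sensor actuated: queue reaches far sensor
        return "HIGH"
    if near_on:       # only the 50 m sensor actuated
        return "MEDIUM"
    return "LOW"      # neither sensor actuated

# e.g. pressing T1 (north) with only S1 actuated:
print(traffic_density(near_on=True, far_on=False))  # MEDIUM
```

The far sensor takes precedence because a queue long enough to reach 100 m necessarily also covers the 50 m sensor.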
IV. CONCLUSION
In this work, we proposed a network system for health monitoring of the patient in the ambulance using GSM and Zigbee. The results were formulated and validated for the successful reception of data at the hospital from the patient in the ambulance. From the experiments and the results obtained, we point out the unique advantage of this technique: traffic control and density indication achieved in an accurate and timely manner, which traditional health monitoring systems lack.
REFERENCES

[1] Veeramuthu Venkatesh, M. Prashanth Kumar, V. Vaithayanathan, Pethuru Raj, "An ambient health monitor for the new generation healthcare", Journal of Theoretical and Applied Information Technology, vol. 31, no. 2, pp. 91-99, Sep. 2011.
[2] Ruihua Zhang, Dongfeng Yuan, "A health monitoring system for wireless sensor networks", in Proc. 2nd IEEE Conference on Industrial Electronics and Applications (ICIEA), pp. 1648-1658, Harbin, China, May 2007.
[3] S. Pavlopoulos, "A novel emergency telemedicine system based on wireless communication technology: AMBULANCE", IEEE Eng. Med. Biol. Mag., vol. 18, no. 4, pp. 32-44, 1999.
[4] P. Giovas, "Transmission of electrocardiogram from a moving ambulance", J. Telemed. Telecare, vol. 4, pp. 5-7, 1998.
[5] Texas Instruments, "LM35 Precision Centigrade Temperature Sensors", Texas Instruments Inc., Dallas, Texas, USA, www.ti.com/lit/ds/symlink/lm35.pdf, Oct. 2013.
[6] MaxStream, "XBee/XBee-PRO OEM RF Modules, Product Manual v1.06", Digi International Inc., Minnetonka, Minnesota, USA, www.picaxe.com/docs/xbe001.pdf, Oct. 2005.
[7] S. Pavlopoulos, Dembeyiotis, G. Konnis, D. Koutsouris, "AMBULANCE: mobile unit for health care provision via telematics support", in Proc. IEEE Eng. Med. Biol., Amsterdam, The Netherlands, Oct. 1996.
Estimation of the Level of Indoor Radon in Sokoto Metropolis
1Yusuf Ahijjo Musa, 2Adamu Nchama Baba-Kutigi
1,2Department of Physics, Usmanu Danfodiyo University, Sokoto, Sokoto State, Nigeria
2Federal University, Dutsin-Ma, Katsina State, Nigeria
Email: [email protected], [email protected]
Abstract — Indoor radon-222 concentration was estimated in thirty randomly selected homes and workplaces within Sokoto metropolis (a semi-arid area in the extreme northwest of Nigeria). The study may help in understanding the potential danger posed by radon-222 activity concentration, which is known for its lung cancer potency. The city's metropolis was divided into thirty grids, over which thirty samples were collected by statistical random sampling with the aid of Activated Charcoal Detectors (ACDs). Gamma-ray spectrometric analysis with a sodium iodide (NaI(Tl)) detector revealed that the indoor radon concentration ranges from 358.81 to 542.30 Bq/m3, with a mean value of 448.98 Bq/m3 for homes and workplaces.
I. INTRODUCTION
Radon-222 is a naturally occurring radionuclide, a chemically inert gas with a half-life of 3.82 days [1]. Recently, high levels of 222Rn have been reported in dwellings in many countries around the world. It is estimated that indoor 222Rn exposure may be responsible for more than 10% of lung cancer incidence [2]. Inhalation of the radioactive decay products of radon (222Rn), a naturally occurring gaseous decay product of radium present in all soil, has been linked to an increased risk of lung cancer [3]. Every square mile of surface soil, to a depth of 6 inches (2.6 km2 to a depth of 15 cm), contains approximately 1 gram of radium, which releases radon in small amounts to the atmosphere. On a global scale, it is estimated that 2,400 million curies (about 90 EBq) of radon are released from soil annually [4]. Nearly 50% of the annual radiation dose absorbed by humans is due to radon, which is one of the main causes of cancer of the respiratory and digestive systems [5]. Its concentration varies greatly with season and atmospheric conditions; for instance, it has been shown to accumulate in the air when there is a meteorological inversion and little wind [6]. To decide whether remedial measures have to be taken, and where they should be taken, a wide indoor radon survey has to be carried out. A 222Rn detector of the passive alpha track type was used to measure indoor radon in Indian dwellings; the estimated uncertainties of the measured activities from an individual detector for a month-long exposure were 18% at 500 Bq.m-3 and 13% at 1000 Bq.m-3 respectively [7].
II. THE STUDY AREA
Sokoto metropolis is the study area. Sokoto lies at latitude 13.0833333° N, longitude 5.25° E, and an altitude of 895 feet. The time zone in Sokoto is Africa/Lagos, with sunrise at 06:27 and sunset at 18:46. It is located in the extreme northwest of Nigeria, bordering the Niger and Benin Republics in West Africa, and has an annual average temperature of 33.3 °C. Sokoto state is highly endowed with limestone, which attracted the siting of an existing cement company, and this limestone also contains a fairly significant amount of radon-222 [8]. It is no longer in doubt that even low concentrations of 222Rn can deliver a radiation dose that causes internal hazards to humans [9].
A. Materials and Methods
This research made use of commercially purchased activated charcoal detectors (ACDs). ACDs are passive devices deployed for 1-7 days to collect an indoor radon sample before laboratory analysis. The principle of detection is radon adsorption on the active sites of the activated carbon [10]. An electronic chemical balance from Shimadzu Corporation, assembled by SPM Japan and capable of measuring between 0.1 mg and 320 g, was used to measure the 40 g of activated charcoal needed in each canister. A plastic can was purchased in Sokoto market (Kasuwan Kara) and improvised to the required dimensions, so that the smaller cylinder (canister) fits inside the larger one for sample collection. The dimensions were carefully determined, as shown in figures 1 and 2 below, so that the canisters fit conveniently into the sodium iodide detector geometry (7.62 cm x 7.62 cm) for better resolution. An electronic chassis model (GP 214) manufactured by Graffin, England, was used to perforate the lid of the plastic cylinder to allow radon gas adsorption into the canister. The side of the perforated lid was sealed with candle wax and Vaseline jelly to provide a barrier against unwanted cross-ventilation, since radon concentration is easily affected by air flux [10]. Since the same ACDs were used throughout this research, the probable
variation in the results of the radon activities is a function of possible variation in the structures of the sampled points.
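Given the 3.82-day half-life of radon-222 quoted in the introduction and the 1-7 day deployment window, the activity collected on the charcoal decays noticeably between retrieval and counting. A standard radioactive-decay correction can be sketched as follows (the delay value and function names are illustrative, not from the paper):

```python
import math

# Standard exponential decay correction for Rn-222.
HALF_LIFE_D = 3.82
LAMBDA = math.log(2) / HALF_LIFE_D  # decay constant, per day

def decay_corrected(activity_counted_bq, delay_days):
    """Activity at retrieval time, correcting for decay before counting."""
    return activity_counted_bq * math.exp(LAMBDA * delay_days)
```

After a delay of exactly one half-life (3.82 days) the correction factor is 2, i.e. the counted activity is doubled to recover the activity at retrieval.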
Fig-1 Schematic Illustration of a Constructed,
Cylinder (Canister).
from that medium in immediate vicinity of the surface
for example if freshly heated charcoal is placed in an
enclosure with ordinary air, a condensation of certain
grasses occurs upon it, resulting in a reduction of
pressure; or if it is placed in a solution of unrefined
sugar, some of the impurities are likewise adsorbed, and
thus removed from the solution [12]. Charcoal when
activated (i.e. freed from adsorbed matter by heating) is
especially effective in adsorption, due to it great surface
area presented by it porous structure. The adsorption of
dirt on one's hand results from the unequal distribution
of dirt between the skin of the hand and the air or solid
with which the skin comes in contact. Water is
frequently ineffective in removing the dirt. The efficacy
of soap in accomplishing its removal is due to the
unequal distributing of dirt between skin and soap
solution, this time favouring the soap and leaving the
hands clean. At a given fixed temperature, there is a
definite relation between the number of molecules
adsorbed upon a surface and the pressure (if a gas) or the
concentration (of a solution) which may be represented
by an equation or graphically by a curve called the
adsorption isotherm.
The freundlich or classical adsorption isotherm is of the
form.
1
x
k n
m
Where x is the mass of gas adsorbed
m is the mass of adsorbent
 is the gas pressure

(1)
k , n are constant for the temperature and system.
Fig-2 Schematic Illustration of a Constructed ACDs
Canister with Accumulated Radon.
In certain system it is necessary to express this
relationship as
x
k
m
B. Theory of Adsorption and Absorption
Adsorption, which is often confused with absorption,
refers to the adhesion of molecules of gases and liquids
to the surface of porous solid. Adsorption is a surface
phenomenon; while absorption is an intermingling or
interpenetration of two substances [11]. The relatively
large surface area of the absorbent allows absorbate
atoms, ions or molecule to be taken up. In some cases
the atoms of the absorbate share electrons with atoms of
the absorbent surface, forming a thin layer of chemical
compound. Absorption occurs when the molecules of
the absorbate penetrate the bulk of the solid or liquid
absorbent. Adsorption denotes absorption of a gas or a
solute by a surface or an interface. Adsorption implies
action at the surface .It is a spontaneous process
accompanied by reduction of surface free energy of the
adsorbing surface. Adsorption is a type of adhesion
which takes place at the surface of a solid or a liquid in
contact with another medium, resulting in an
accumulation or increased concentration of molecules
 h 
1
n
(2)
Where h is the relationship of the partial pressure of the
vapour to it saturation value and r is the surface tension.
Numerous isotherm equations have been proposed. The
lagmuir adsorption isotherm is of the form stated below.
x  1 2

m 1  1 
The
viz:





(3)
degree of adsorption depends on following factors,
The composition of the adsorbing material
The condition of surface of the adsorbing material
The material to be adsorbed
The temperature
The pressure (of a gas) [13].
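The two isotherms above can be sketched as small functions. This is an illustrative sketch only; the constants k, n, k1 and k2 below are made-up values, not fitted to the paper's data.

```python
# Sketch of the adsorption isotherms in Eqs. (1) and (3).
# All constants here are illustrative, not fitted values.

def freundlich(p, k=1.2, n=2.0):
    """Freundlich isotherm: mass adsorbed per unit adsorbent, x/m = k * p**(1/n)."""
    return k * p ** (1.0 / n)

def langmuir(p, k1=0.5, k2=3.0):
    """Langmuir isotherm: x/m = k1*k2*p / (1 + k1*p); saturates at k2 for large p."""
    return (k1 * k2 * p) / (1.0 + k1 * p)

if __name__ == "__main__":
    for p in (0.1, 1.0, 10.0, 100.0):
        print(f"p={p:7.1f}  Freundlich={freundlich(p):8.4f}  Langmuir={langmuir(p):8.4f}")
```

Note the qualitative difference: the Freundlich form grows without bound as pressure increases, while the Langmuir form saturates at a monolayer capacity (k2 here).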
C. Gamma ray Spectrometry
The concentration of radon in air is measured in units of
becquerels per cubic metre (Bq/m3). One Bq corresponds to one
disintegration per second. One pCi/L
is equivalent to 37 Bq/m3 [14]. The gamma ray spectrometry
technique was used for the spectral collection of the 30
prepared samples after the equilibration period. Background
measurements were performed by measuring an unexposed canister,
which gave an average concentration of 1.5 Bq/m3; this is
because radon-222, as a natural radionuclide, is present
virtually everywhere. Each measurement is corrected by
subtracting this background from the gamma decay of the
short-lived Rn-222 decay products once equilibrium has been
reached [15]. In this experiment no manual conversion was
needed, the task being embedded in the MAESTRO-23 software. The
principle of detection in a NaI(Tl) detector is that the output
pulse amplitude from the detector is proportional to the energy
deposited by the source. The pulse-height spectrum from such a
detector therefore contains a series of full-energy peaks
superimposed on a continuous background; the spectrum can be
quite complicated and difficult to analyze, but it contains much
useful information about the energies and relative intensities
of the radioactive sources [15].
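The unit conversion and background correction described above can be sketched as follows; the 37 Bq/m3 per pCi/L factor and the 1.5 Bq/m3 background are taken from the text, while the example values passed in are illustrative.

```python
# Unit conversion and background correction as described above.
# 1 pCi/L = 37 Bq/m3; the unexposed-canister background averaged 1.5 Bq/m3.

BQ_PER_M3_PER_PCI_PER_L = 37.0
BACKGROUND_BQ_M3 = 1.5  # mean concentration measured from unexposed canisters

def pci_per_l_to_bq_m3(pci_l):
    """Convert a radon concentration from pCi/L to Bq/m3."""
    return pci_l * BQ_PER_M3_PER_PCI_PER_L

def background_corrected(gross_bq_m3):
    """Subtract the unexposed-canister background from a gross reading."""
    return gross_bq_m3 - BACKGROUND_BQ_M3

print(pci_per_l_to_bq_m3(4.0))        # 4 pCi/L corresponds to 148 Bq/m3
print(background_corrected(448.98))
```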
D. Result

The analysed results of the thirty (30) samples within the
gridded map of Sokoto metropolis, obtained from CERT Zaria, are
shown in Tables 1 and 2 below. The spectra of the analysed
samples range over Potassium-40, Radium-226, Thorium-232 and
Radon-222, but the activity concentration of Radon-222, the
radioactive isotope of interest, is tabulated in Table 2 below.
The significance of this research lies in estimating indoor
radon gas concentration, since there is strong epidemiological
evidence that ionizing radiation increases cancer risks [16].

TABLE: 1. RESULT OF RN-222 RADIOACTIVE DECAY OF SOKOTO
METROPOLIS.

Number of Samples | Sample Points Identity | Count Rate of Rn-222 (CPS) | Conc. of Rn-222 (Bq/m3)
1.  | 1Sok  | 0.0510 | 358.8100
2.  | 2Sok  | 0.0641 | 446.2301
3.  | 3Sok  | 0.0640 | 410.0911
4.  | 4Sok  | 0.0721 | 479.3100
5.  | 5Sok  | 0.0721 | 479.3100
6.  | 6Sok  | 0.0721 | 479.3100
7.  | 7Sok  | 0.0641 | 445.1001
8.  | 8Sok  | 0.0641 | 446.7114
9.  | 9Sok  | 0.0830 | 512.3992
10. | 10Sok | 0.0721 | 479.3100
11. | 11Sok | 0.0771 | 443.3551
12. | 12Sok | 0.0641 | 446.7114
13. | 13Sok | 0.0641 | 446.7114
14. | 14Sok | 0.0641 | 446.7114
15. | 15Sok | 0.0641 | 445.1001

TABLE: 2. RESULT OF RN-222 RADIOACTIVE DECAY OF SOKOTO
METROPOLIS (CONTINUED).

Number of Samples | Sample Points Identity | Count Rate of Rn-222 (CPS) | Conc. of Rn-222 (Bq/m3)
16. | 16Sok | 0.0982 | 446.7114
17. | 17Sok | 0.0641 | 542.3030
18. | 18Sok | 0.0611 | 446.7114
19. | 19Sok | 0.0640 | 410.0911
20. | 20Sok | 0.0640 | 410.0911
21. | 21Sok | 0.0641 | 410.0911
22. | 22Sok | 0.0600 | 409.8105
23. | 23Sok | 0.0641 | 446.7114
24. | 24Sok | 0.0641 | 446.7114
25. | 25Sok | 0.0600 | 407.1901
26. | 26Sok | 0.0730 | 480.6742
27. | 27Sok | 0.0641 | 446.7114
28. | 28Sok | 0.0831 | 511.0072
29. | 29Sok | 0.0600 | 410.0911
30. | 30Sok | 0.0720 | 479.3100

Fig-3 Histogram of Indoor Rn-222 Concentration from Different
Sample Points within Sokoto Metropolis.

Fig-4 Histogram of Indoor Count Rate from Different
Sample Points within Sokoto Metropolis.
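The mean concentration quoted in the conclusion can be cross-checked directly from the tabulated values; a short sketch:

```python
# Cross-check of the mean Rn-222 concentration reported in the
# conclusion, using the thirty values from Tables 1 and 2.

conc_bq_m3 = [
    358.8100, 446.2301, 410.0911, 479.3100, 479.3100, 479.3100,
    445.1001, 446.7114, 512.3992, 479.3100, 443.3551, 446.7114,
    446.7114, 446.7114, 445.1001, 446.7114, 542.3030, 446.7114,
    410.0911, 410.0911, 410.0911, 409.8105, 446.7114, 446.7114,
    407.1901, 480.6742, 446.7114, 511.0072, 410.0911, 479.3100,
]

mean = sum(conc_bq_m3) / len(conc_bq_m3)
print(f"mean = {mean:.2f} Bq/m3")  # agrees with the 448.98 Bq/m3 in the conclusion
```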
E. Conclusion

It has been shown from the results of this work that, with the
aid of ACDs, the concentration of Rn-222 has been
determined, and following careful observation this research has
been able to unveil two ionizing radiation parameters, namely
the activity concentration and the dose rate of Rn-222 in homes
and workplaces within Sokoto metropolis. The radon level in most
of the houses was found to be appreciably above the levels
reported by other work in southern Nigeria, with a mean value of
448.98 Bq/m3 for this method, owing to the vast difference in
weather conditions. Hence, research should be intensified in
this direction employing other methods. The hot, dry weather of
Sokoto could account for the significantly different results
compared with other research in this direction.
ACKNOWLEDGEMENT

This research was successfully achieved due to the painstaking
effort of the following persons:

Dr. Mitshelia, Head, Department of Pharmacy, Usmanu Danfodiyo
University Sokoto

Mallam A.A Musa, Chief Academic Technologist (Chief Co-ordinator,
Physics Laboratories), Usmanu Danfodiyo University Sokoto

Mallam Awalu Ibrahim, Principal Academic Technologist (Head of
Electronics Unit of Physics Laboratory), Usmanu Danfodiyo
University Sokoto

Mallam Adam S. Sa'idu, Academic Technologist, Centre for Energy
Research and Training, Zaria

REFERENCES

[1] Dorr, H., Kromer, B., Levin, I., Münnich, K.O., Volpp, H.J.,
CO2 and Radon-222 as tracers for atmospheric transport. J.
Geophys. 1983, pp. 1309-1313.

[2] Dr. Maria Neira of WHO. Handbook on Indoor Radon: A Public
Health Perspective. 2009, pp. 12, 50-51.

[3] Fabricant, J.I., Radon and lung cancer: the BEIR IV report.
Health Phys. 1990, pp. 59, 89-97.

[4] Agency for Toxic Substances and Disease Registry, U.S.
Public Health Service, in collaboration with U.S. Environmental
Protection Agency. 2000, pp. 40-49.

[5] Li X, Zheng B, Wang Y, Wang X. A study of daily and seasonal
variations of radon concentrations in underground buildings. J.
Environ. Radioactivity. 2006, pp. 101-106.

[6] Steck DJ, Field RW, Smith BJ, Brus CP, Fisher EL, Neuberger
JS, Platz CE, Robinson RA, Woolson RF, Lynch CF: Residential
Radon Gas Exposure and Lung Cancer: The Iowa Radon Lung Cancer
Study, American Journal of Epidemiology 2000, pp. 1091-1102.

[7] A.J. Khan. A study of indoor radon levels in Indian
dwellings: influencing factors and lung cancer risks, Radiation
Measurements. 2000, pp. 87-92.

[8] Adediran JA, Oguntoyinbo FI, Omonode R and RA Sobulo.
Agronomic evaluation of phosphorus fertilizers. 1998, pp. 12-16.

[9] Field RW, Krewski D, Lubin JH, Zielinski JM, Alavanja M,
Catalan VS, Klotz JB, Letourneau EG, Lynch CF, Lyon JI, Sandler
DP, Schoenberg JB, Steck DJ, Stolwijk JA, Weinberg C, Wilcox HB,
Residential Radon and Risk of Lung Cancer: A Combined Analysis
of 7 North American Case-Control Studies, Epidemiology 16(2):
2005, pp. 37-45.

[10] Oikawa S, Kanno N, Sanada T, Abukawa J, Higuchi H. J.
Environ. Radioact. 87(3): 2006, pp. 239-245.

[11] Peter, B. N. Encyclopedia Vol. 9. Encyclopedia Britannica
Inc. London 77(20): 1994, pp. 4280.

[12] Bernard L. Cohen and Richard Nason. A Diffusion Barrier
Adsorption Collector for Measuring Radon Concentrations in
Indoor Air, Health Physics vol. 50(14): 1986, pp. 30.

[13] Duncan. Advanced Physics: Fields, Waves and Atoms. John
Murray (Ltd) London. 1981, pp. 890.

[14] International Commission on Radiological Protection (ICRP).
The Recommendations of the International Commission on
Radiological Protection. 2007, pp. 67:20-22.

[15] Centre for Energy Research and Training (CERT) Zaria,
Kaduna, Nigeria. Two Weeks Documentation Trip to CERT, 21st
January to 2nd February, 2013.

[16] Preston DL, Brenner DJ, Doll R, Goodhead DT, Hall EJ, Land
CE, Little JB, Lubin JH, Preston RJ, Puskin JS, Ron E, Sachs RK,
Samet JM, Setlow RB, Zaider M. Cancer risks attributable to low
doses of ionizing radiation: assessing what we really know. Proc
Natl Acad Sci USA. 2003, 100: pp. 13761-13766.
Determination and Classification of Blood Types using Image
Processing Techniques
Tejaswini H V, M S Mallikarjuna swamy
Department of Instrumentation Technology
S. J. College of Engineering Mysore, Karnataka, India
Email: [email protected], [email protected]
Abstract — Determination of blood types is very important in
emergency situations, before administering a blood transfusion.
Presently these tests are performed manually by technicians,
which can lead to human errors, so determining the blood type
quickly and without human error is essential. A method is
developed based on processing of images acquired during the
slide test, using image processing techniques such as
thresholding and morphological operations. The images of the
slide test obtained from the hospital/pathological laboratory
are processed and the occurrence of agglutination is evaluated.
The developed automated method thus determines the blood type
using image processing techniques and is useful in emergency
situations for determining the blood group without human error.
I. INTRODUCTION
Before a blood transfusion it is necessary to perform certain
tests. One of these is the determination of blood type, which is
essential for a safe transfusion, so that a blood type
compatible with that of the receiver is administered [1]. There
are emergency situations in which, due to the risk to the
patient's life, blood must be administered immediately. Since
the tests currently available require moving to the laboratory,
there may not be enough time to determine the blood type, and
type O negative blood, considered the universal donor type and
therefore presenting the least risk of incompatibility, is
administered. However, even though the risk of incompatibility
is lower, transfusion reactions that cause the death of the
patient sometimes occur, and it is essential to avoid them by
administering blood based on the principle of the universal
donor only in emergencies [1]. Ideally, the blood type of the
patient would be determined even in emergency situations, and a
compatible blood type administered from the first unit
transfused. Secondly, the pre-transfusion tests are performed
manually by technicians and analysts, which sometimes leads to
human errors in procedures and in the reading and interpretation
of results. Since these human errors can have fatal consequences
for the patient, being one of the most significant causes of
fatal blood transfusions, it is extremely important to automate
the procedure of these tests and the reading and interpretation
of the results.
This work is based on the slide test for determining blood types
and on software developed using image processing techniques. The
slide test consists of mixing one drop of blood with one drop of
each reagent, anti-A, anti-B and anti-D, the result being
interpreted according to whether or not agglutination occurs.
Agglutination means that a reaction between the antibody and the
antigen has occurred, indicating the presence of the
corresponding antigen. The combination of occurrence or
non-occurrence of agglutination determines the blood type of the
patient [2]. Thus, the software developed, based on image
processing techniques, allows the occurrence of agglutination,
and consequently the blood type of the patient, to be detected
from an image captured after the slide-test procedure.
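The interpretation rule above (the combination of agglutination with anti-A, anti-B and anti-D) can be sketched as a small decision function. This is an illustrative sketch of the decision logic only, not the authors' implementation:

```python
# Slide-test interpretation: agglutination with anti-A, anti-B and
# anti-D reagents determines the ABO group and the Rh factor.

def blood_type(anti_a: bool, anti_b: bool, anti_d: bool) -> str:
    """Map agglutination results (True = agglutination occurred) to a blood type."""
    if anti_a and anti_b:
        abo = "AB"
    elif anti_a:
        abo = "A"
    elif anti_b:
        abo = "B"
    else:
        abo = "O"
    rh = "+" if anti_d else "-"
    return abo + rh

print(blood_type(anti_a=True, anti_b=False, anti_d=True))   # A+
print(blood_type(anti_a=False, anti_b=False, anti_d=False)) # O-
```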
Blood group is a classification of blood based on the presence
or absence of inherited antigenic substances on the surface of
red blood cells. These antigens may be proteins, carbohydrates,
glycoproteins or glycolipids, depending on the blood group
system. The ABO system is the most important blood group system
in human blood transfusion; the Rh blood group system, currently
with 50 antigens, is the second most significant.

Blood transfusion is the process of receiving blood products
into one's circulation intravenously. Transfusions are used in
various medical conditions to replace lost components of the
blood. Early transfusions used whole blood, but modern medical
practice commonly uses only components of the blood such as
RBCs, WBCs, plasma, clotting factors and platelets. India faces
a blood deficit of approximately 30-35% annually: the country
needs around 8 to 10 million units of blood every year but
manages a measly 5.5 million units, and on top of that 94% of
blood donations in the country are made by men, while women
contribute only 6%.
II. LITERATURE REVIEW
Blood phenotyping has been performed based on the slide test and
on image processing techniques such as thresholding,
morphological operations, and the secondary operations dilation,
erosion, opening and closing, to determine the occurrence of
agglutination [3]. Errors have occurred in blood transfusions
since the technique began to be used; one requirement introduced
was the mandatory reporting of all fatalities linked to blood
transfusion and donation.
Humans will inevitably make errors, so the system design must be
such that it decreases errors and detects residual errors that
evade corrective procedures [4]. The use of automated techniques
reduces the impact of human errors in laboratories and improves
the standardization and quality of the achieved results [5].

Thresholding plays a major role in the binarization of images
and can be categorized into global thresholding and local
thresholding. In images with a uniform contrast distribution of
background and foreground, such as document images, global
thresholding is more appropriate. In degraded document images,
where considerable background noise or variation in contrast and
illumination exists, many pixels cannot be easily classified as
foreground or background; in such cases, local thresholding is
more appropriate [6].

Segmentation of an image is the process of dividing it into
non-overlapping regions, which are homogeneous groups of
connected pixels consistent with some specified criteria. There
are many ways to define the homogeneity of a region in the
segmentation process; for example, it may be based on color,
depth of layers, grey levels, textures, etc.
III. METHODOLOGY

The digital images of blood samples obtained from the
hospital/laboratory consist of a color image composed of three
samples of blood and reagent. These images are processed using
image processing techniques, namely color plane extraction,
thresholding and morphological operations. The steps involved in
the image processing are shown in Fig. 1.
Input Image → Color Plane Extraction → Thresholding →
Morphological Operations → Quantification → Determination of
Blood Group

Fig.1. Steps of determination of blood types using image
processing

A. Thresholding

It is the simplest method of image segmentation. A thresholding
operation is used to create binary images from a grayscale
image; the gray-scale samples are clustered into two parts,
background and object [7]. It may be
viewed as an operation that involves a test against a function T
of the form

T = T[x, y, p(x,y), f(x,y)]                                  (1)

where f(x,y) is the gray level at the point (x,y) and p(x,y)
denotes some local property of the point. A thresholded image is
defined as

g(x,y) = 1 if f(x,y) > T
g(x,y) = 0 if f(x,y) ≤ T                                     (2)

Thus pixels labeled 1 correspond to objects and pixels labeled 0
correspond to the background. If T depends only on f(x,y) the
threshold is global; if T depends on both f(x,y) and p(x,y) the
threshold is called local; if T depends on the spatial
coordinates x and y the threshold is called dynamic/adaptive.
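Global thresholding per Eq. (2) reduces to one comparison per pixel. A minimal NumPy sketch (not the authors' MATLAB code), with an illustrative threshold of 128:

```python
import numpy as np

# Global thresholding, Eq. (2): pixels above T become 1 (object),
# all other pixels become 0 (background).
def global_threshold(img: np.ndarray, T: float) -> np.ndarray:
    return (img > T).astype(np.uint8)

img = np.array([[10, 200],
                [90, 150]], dtype=np.uint8)
print(global_threshold(img, T=128))
# [[0 1]
#  [0 1]]
```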
B. Niblack function

Niblack's algorithm calculates a pixel-wise threshold by sliding
a rectangular window over the gray level image [8]. The
computation of the threshold is based on the local mean m and
the standard deviation s of all the pixels in the window, and is
given by the equation

Tniblack = m + k*s                                           (3)

where m is the average value of the pixels in the window, s is
their standard deviation, and k is fixed to -0.2.
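Eq. (3) can be sketched as a straightforward (unoptimized) per-pixel implementation; the window size of 15 is an assumed default, not taken from the paper:

```python
import numpy as np

# Niblack local threshold, Eq. (3): T = m + k*s, computed over a
# sliding window centered on each pixel. Naive version for clarity.
def niblack_threshold(img: np.ndarray, window: int = 15, k: float = -0.2) -> np.ndarray:
    h, w = img.shape
    r = window // 2
    out = np.zeros((h, w), dtype=np.uint8)
    f = img.astype(np.float64)
    for y in range(h):
        for x in range(w):
            patch = f[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            T = patch.mean() + k * patch.std()   # Tniblack = m + k*s
            out[y, x] = 1 if f[y, x] > T else 0
    return out
```

Production code would compute the windowed mean and standard deviation with integral images instead of recomputing each patch.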
C. Morphology

Morphology includes pre- and post-processing operations such as
dilation, erosion, morphological filtering and granulometry. The
fundamental operations are dilation and erosion: erosion
uniformly reduces the size of the objects in relation to their
background, while dilation expands the size of the objects. By
combining dilation and erosion, the secondary operations opening
and closing can be performed. Opening is used to smooth the
contours of cells and parasites; closing is used to fill holes
and gaps. Morphological operations are used to eliminate noise
spikes and ragged edges [9].
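The four operations can be sketched with a 3x3 square structuring element on a binary image; this is an illustrative NumPy implementation, not the toolbox functions the authors used:

```python
import numpy as np

# Binary erosion/dilation with a 3x3 square structuring element,
# plus opening and closing built from them.

def erode(img: np.ndarray) -> np.ndarray:
    """A pixel survives only if all 9 neighbours (incl. itself) are 1."""
    p = np.pad(img, 1, constant_values=0)
    out = np.ones_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
    return out

def dilate(img: np.ndarray) -> np.ndarray:
    """A pixel becomes 1 if any of its 9 neighbours is 1."""
    p = np.pad(img, 1, constant_values=0)
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
    return out

def opening(img):  # erosion then dilation: removes small objects
    return dilate(erode(img))

def closing(img):  # dilation then erosion: fills small holes and gaps
    return erode(dilate(img))
```

Opening deletes any object smaller than the structuring element (e.g. an isolated pixel), while objects at least 3x3 survive with their shape restored.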
D. HSL Luminance Plane

HSL stands for hue, saturation and lightness. It is the most
common cylindrical-coordinate representation of points in an RGB
color model.
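Extracting the lightness (L) plane from an RGB pixel uses the standard HSL definition, L = (max(R,G,B) + min(R,G,B)) / 2; a minimal sketch:

```python
# Lightness (L) component of the HSL representation for an RGB pixel,
# using 0-255 channel values: L = (max + min) / 2.

def hsl_lightness(r: int, g: int, b: int) -> float:
    return (max(r, g, b) + min(r, g, b)) / 2.0

print(hsl_lightness(255, 0, 0))     # pure red -> 127.5
print(hsl_lightness(100, 150, 200))
```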
IV. RESULTS
The images of the slide test captured by a camera consist of a
color image composed of three samples of blood and reagent. The
image processing method was experimented on the several images
acquired; one of the captured input images is shown in Fig.
2(a). These images are processed using MATLAB software, applying
color plane extraction, thresholding and morphological
operations. Fig. 2(b) shows the image obtained after color plane
extraction, containing only the green color component.
(a)                              (b)
Fig.2 (a) Input image (b) Color plane extracted image

The image obtained after applying the auto-thresholding
clustering function is shown in Fig. 3; it can be observed that
the object and background are separated.

Fig.3 Auto threshold image

In the next step, a local threshold operation using the Niblack
function calculates a pixel-wise threshold; only the border of
the segmented image remains, and the result is shown in Fig. 4.

Fig.4 Local threshold

The image obtained by the application of advanced morphology is
shown in Fig. 5; it can be observed that the segmented image is
filled using the closing operation.

Fig.5 Fill holes

The advanced morphological operation opening is performed next;
it can be noticed that it smoothens the contours of cells by
removing small objects, as shown in Fig. 6.

Fig.6 Remove small objects

The image obtained by applying the color plane extraction (HSL
luminance plane) function is shown in Fig. 7.

Fig.7 HSL Plane

The image obtained by the application of the quantify function
is shown in Fig. 8.

Fig.8 Quantify function
V. CONCLUSION

The developed method proves effective and efficient in detecting
agglutination and determines the blood type of the patient
accurately. The use of image processing techniques enables
automatic detection of agglutination and determination of the
patient's blood type in a short interval of time (less than 5
minutes). The method is suitable and helpful in
emergency situations.
REFERENCES
[1]. M. R. Brown, P. Crim, "Organizing the antibody
identification process," Clin Lab Sci, vol. 20, 2007, pp.
122-126.

[2]. Datasheet of Diamed Diaclon Anti-A, Diaclon Anti-B, Diaclon
Anti-AB. Cressier s/Morat, 2008.

[3]. Ana Ferraz, Filomena Soares, "A Prototype for Blood Typing
Based on Image Processing," The Fourth International Conference
on Sensor Device Technologies and Applications, IARIA, 2013.

[4]. B. A. Myhre, D. McRuer, "Human error - a significant cause
of transfusion mortality," Transfusion, vol. 40, Jul. 2000, pp.
879-885.

[5]. A. Dada, D. Beck, G. Schmitz, "Automation and Data
Processing in Blood Banking Using the Ortho AutoVue Innova
System," Transfusion Medicine Hemotherapy, vol. 34, pp. 341-346.
Available: www.karger.com/tmh.

[6]. T. Romen Singh, Sudipta Roy, O. Imocha Singh, Tejmani
Sinam, and Kh. Manglem Singh, "A New Local Adaptive Thresholding
Technique in Binarization," IJCSI International Journal of
Computer Science Issues, vol. 8, issue 6, no. 2, November 2011.

[7]. Stefano Ferrari, "Image segmentation," Elaborazione di
immagini (Image processing), 2012.

[8]. Khurram Khurshid, Imran Siddiqi, Claudie Faure, Nicole
Vincent, "Comparison of Niblack inspired binarization methods
for ancient documents," DRR, volume 7247 of SPIE, page 110,
SPIE, 2009.

[9]. Miss. Madhuri G. Bhamare, "Automatic Blood Cell Analysis By
Using Digital Image Processing: A Preliminary Study," vol. 2,
issue 9, September 2013.
IRIS Authentication in Automatic Teller Machine
1Chaithrashree.A, 2Rohitha U.M
Dept. of Electronics & Communication, BITM Bellary.
Email: [email protected], [email protected]
Abstract—The ATM has gained popularity within a short period
compared with the conventional banking system, as it provides
the customer with quick fund withdrawal, balance enquiry and,
most importantly, any-time access. At the same time, providing
security for access control is a major concern: smart card
access to the ATM does not guarantee that the person using it is
the authorized user, which invites fraud. In order to reduce
access by unauthorized users and easy withdrawal of money by
them, this paper proposes a biometric authentication security
system, which uses physical or behavioral characteristics to
identify the person. Voice, iris, fingerprint, etc. are
important physical characteristics used to identify a person.

The proposed work presents iris-based biometric authentication
in the ATM in order to improve the security of the customer's
funds by providing access only to the authorized person. The
authenticated user is then allowed to carry out the transaction
using voice-based commands.
Keywords— Automatic teller machine (ATM), Iris based
access control, Canny Edge Detection, Normalization, Local
Binary Pattern (LBP), Hamming Distance (HD), Chinese
Academy of Sciences Institute of Automation (CASIA).
I. INTRODUCTION

Iris-based biometric recognition is the most accurate security
approach for recognizing a person on the basis of the iris.
Because of its accuracy it is widely applicable in many areas,
such as law enforcement, forensic work, research analysis,
security systems, etc. [1].
ATMs were invented in order to provide an improved banking
system, quick access, a reduction in manual operations and an
any-time money withdrawal service. At the same time, the
security of the customer's funds must be provided by allowing
access only to the authorized user and making sure that the
transaction is done by that same authentic user; thus the
important tool for secure transactions in the ATM is access
control.

The conventional smart card access control for the ATM [9] does
not guarantee secure transactions, as the card can be stolen
[7], the password can be used by an unauthorized person, the
information stored on the magnetic stripe may be lost due to
improper usage of the card, and the card can also be duplicated.
Iris recognition is a rapidly expanding method of biometric
authentication that uses pattern-recognition techniques on
images of irises to uniquely identify an individual. The
IrisCode has been extensively deployed in commercial iris
recognition systems for various security applications, and more
than 50 million persons have been enrolled using it. Iris-based
recognition is the most promising of the various biometric
techniques (face, fingerprint, palm vein, signature, palm print,
iris, etc.) for high-security environments because of its
unique, stable and non-invasive characteristics. The iris code
is a set of bits, each one of which indicates whether a given
bandpass texture filter applied at a given point on the iris
image has a negative or non-negative result.
Unlike other biometrics such as fingerprints and the face, the
distinct aspect of the iris comes from its randomly distributed
features. The iris patterns of the two eyes of an individual, or
those of identical twins, are completely independent and
uncorrelated: irises differ not only between identical twins but
also between the left and right eye. Another characteristic
which makes the iris difficult to fake is its responsive nature.
Iris detection is one of the most accurate, robust and secure
means of biometric identification, while also being one of the
least invasive. The iris has the unique characteristic of very
little variation over a lifetime yet a multitude of variation
between individuals.
An iris recognition system can be used either to prevent
unauthorized access or to identify individuals using a facility.
When installed, it requires users to register their irises with
the system: a distinct iris code is generated for every enrolled
iris image and saved within the system. Once registered, a user
can present his iris to the system and be identified. Iris
recognition technology provides accurate identity authentication
without PIN numbers, passwords or cards. Enrolment takes less
than 2 minutes; authentication takes less than 2 seconds.
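Matching an enrolled iris code against a probe is conventionally done with a normalized Hamming distance: the fraction of disagreeing bits. The sketch below is illustrative only; the short bit strings and the 0.32 decision threshold are assumed example values, not taken from a deployed system:

```python
# Iris-code matching sketch: two codes are compared with the
# normalized Hamming distance; below a threshold they are
# declared the same eye.

def hamming_distance(code_a: str, code_b: str) -> float:
    """Fraction of disagreeing bits between two equal-length bit strings."""
    assert len(code_a) == len(code_b)
    diff = sum(a != b for a, b in zip(code_a, code_b))
    return diff / len(code_a)

enrolled = "1011001110100101"
probe    = "1011001010100111"   # same eye, with a few noisy bits
THRESHOLD = 0.32                # illustrative decision threshold

hd = hamming_distance(enrolled, probe)
print(f"HD = {hd:.3f} ->", "match" if hd < THRESHOLD else "no match")
```

Real systems compare codes of ~2048 bits, mask out occluded regions (eyelids, reflections), and test several rotations of the probe code.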
II. EXISTING SYSTEM
The automatic teller machine is online with the bank; each
transaction is authorized by the bank on demand, using a
real-time online processing technique which
directly updates the account on which the transaction takes
place.

The ATM model in the figure below works as follows. The bank
customer inserts the smartcard into the ATM machine. The machine
then requests a personal identification number (PIN); if the
supplied PIN is correct, access is authorized and the
transaction continues. The customer then enters the amount to
withdraw, and if the customer has enough money in the account
the amount is paid.
The whole process is monitored by the controller class. In
principle this is not necessary, but for working with a secure
model the controller class is needed as a dispatcher of actions,
and it keeps a log file with the trace of every transaction
carried out by the ATM.

The class card_input has the methods for reading the code of the
client's card and for ejecting the card from the machine. It
interacts, through the controller, with the class terminal,
where the methods reg_PIN and reg_amount are defined.
III. RESEARCH FRAMEWORK
An automatic teller machine requires a user to pass an identity
test before any transaction can be granted. The current method
available for access control in the ATM is based on the
smartcard. Interviews with structured questions were conducted
among ATM users, and the results proved that many problems are
associated with the ATM smartcard for access control. Among the
problems: it is very difficult to prevent another person from
obtaining and using a legitimate person's card, and a
conventional smartcard can be lost, duplicated, stolen or
impersonated with accuracy.

To overcome the problems of smartcard access control in the ATM,
the use of the iris as the biometric characteristic offers
advantages: it is well accepted by the user, the iris can be
captured, the hardware costs are reduced, etc.
The research framework and methodology are based on a survey
that covered a sample of one thousand ATM users in Lagos state.
The choice of location rests on the fact that Lagos state is the
economic nerve centre of Nigeria and has more bank branches and
ATM locations than any other state in Nigeria. The following
questionnaire was used to obtain the information that prompted
us to propose iris-based access control. The results obtained
from the questionnaire show that there is a need for better
security approaches to ATM access control. The results are
analysed as follows:

Question 1: Banking would have been better if the ATM was never
invented.

81.7% of the responses to this question implied that the
invention of the ATM is a welcome innovation in the banking
sector.
Figure 1: ATM Model Network
In order to verify whether the PIN of a particular user is correct, the card class holds the cardholder's information, i.e. card_number, PIN and account number. The controller interacts with the bank, using the cardholder's information, in order to get authorization to pay (or not) the requested amount. The bank interface sends the request to the accounting class, which belongs to the bank package, in order to call the debit method of the accounting class [5].
The accounting class has the methods rollback, authorization and debit, which operate directly on the account. Rollback reverses a transaction in case anything goes wrong, leaving the account and the teller machine in their original state; authorization approves or rejects an operation; and debit extracts the requested amount of money from the account if the operation is authorized [8].
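The card/controller/accounting interaction described above can be sketched as follows. This is an illustrative Python sketch (the paper gives no code): the method names follow the text's description, while the constructor, field names and return conventions are assumptions.

```python
class Account:
    """Sketch of the accounting class described in the text:
    authorization, debit and rollback. Details are assumed."""

    def __init__(self, number, balance):
        self.number = number
        self.balance = balance
        self._last = None            # remembered state, used by rollback

    def authorize(self, amount):
        """Authorize an operation only if the funds are sufficient."""
        return 0 < amount <= self.balance

    def debit(self, amount):
        """Extract the requested amount if the operation is authorized."""
        if not self.authorize(amount):
            return False
        self._last = self.balance    # save state so rollback can restore it
        self.balance -= amount
        return True

    def rollback(self):
        """Undo the last debit, leaving the account in its original state."""
        if self._last is not None:
            self.balance = self._last
            self._last = None
```

A controller object would call `authorize` before dispensing cash, `debit` on success, and `rollback` if anything goes wrong mid-transaction.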
Question 2: Withdrawal of money using an ATM is quicker than normal banking.
The majority of respondents prefer using the ATM because it enables quick withdrawal of money.
Question 3: Withdrawal of money from an ATM is more secure.
The majority of respondents strongly agreed that fund withdrawal through an ATM is not secure compared to dealing face-to-face with a cashier.
________________________________________________________________________________________________
International Conference on Engineering and Applied Science - ICEAS, 2014
ISBN: 978-3-643-24819-09, Bangalore, 13th July, 2014
94
IRIS Authentication in Automatic Teller Machine
________________________________________________________________________________________________
Question 4: There is a need for better security for access.
A large percentage of respondents strongly agreed that there is a need for better security that restricts access to the one legitimate user.
Question 5: A biometric approach to access control in ATMs would provide better security.
A large percentage of respondents strongly agreed that a biometric approach to ATM access control would provide better security in the ATM.
Based on the result analysis, it was discovered that smartcard access control has the following drawbacks:
1. The card used for access control in an ATM may become useless, as the chip or magnetic stripe that stores the information needed for its functionality can be destroyed by continuous or improper use.
2. The card used for access control in the conventional system may be misplaced.
3. The card, even with its password, could be stolen by another person. There have been cases of burglars forcefully collecting the ATM card and password from the legitimate owner, and even following the person to the ATM location to confirm that the PIN given to them is correct.
4. There have recently been reported cases of card fraud. Fraudsters use various methods to perpetrate this crime; among others: the easiest, low-tech form is simply to steal an ATM card, and a later variant of this is to trap the card inside the ATM's card reader [3]. Advanced ATM fraud involves installing a magnetic card reader over the real ATM's card slot and using a wireless surveillance camera or a modified digital camera to observe the user's PIN. The card data is then cloned onto a second card and the criminal attempts a standard cash withdrawal.
Consequent to the identified drawbacks, we propose the design of iris-based access control in the Automatic Teller Machine.
IV. PROPOSED IRIS BASED SYSTEM IN ATM
The proposed system consists of the following stages:
1. Iris Registration
2. Iris Recognition
3. Authentication of the authorized user
4. Password Verification and Transaction
1. Iris Registration
In order to access the ATM through the iris, a user has to register his or her iris by giving the required details.
2. Iris Recognition
A customer who has registered his/her iris and wants to make a transaction from his/her account has to select his/her iris image file from the database; then pre-processing, normalization and feature extraction of the image take place. If the extracted pattern of the loaded input image matches one of the existing patterns in the database, the customer is authenticated. The principle can be explained with the following block diagram.
Figure 2: Block diagram of Iris Recognition
1. Image Acquisition
2. Iris Pre-processing
3. Iris Normalization
4. Feature Extraction
5. Pattern Matching
Iris Preprocessing: Pre-processing of the iris includes localization and segmentation. After acquiring the input image, the next step is to localize the circular edges in the region of interest. The Canny edge detection operator uses a multi-stage algorithm to detect a wide range of edges in images; it is an optimal edge detector with good detection, good localization and minimal response. This detector is used in localization to approximate the inner and outer circles of the iris, where the inner circle corresponds to the iris/pupil boundary and the outer circle to the iris/sclera boundary. The two circles are usually not concentric. Compared with other parts of the eye, the pupil is much darker, so the inner boundary between the pupil and the iris is detected first. The outer boundary of the iris is more difficult to detect because of the low contrast between the two sides of the boundary, so we detect it by maximizing the change of intensity along the perimeter of a candidate circle. Iris segmentation is an essential process which localizes the correct iris region in the eye image. A circular edge detection function is used to detect the iris, as its boundary is circular and darker than the surroundings.
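The boundary search described above (maximizing the change of perimeter intensity along candidate circles) can be sketched as follows. This is a minimal NumPy illustration assuming a known center and a grayscale image array; it is not the paper's actual implementation.

```python
import numpy as np

def circle_mean_intensity(img, cx, cy, r, n=360):
    """Mean intensity sampled along a circle of radius r centered at (cx, cy)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def locate_circular_boundary(img, cx, cy, r_min, r_max):
    """Pick the radius with the largest change in perimeter intensity,
    i.e. the strongest circular edge (pupil/iris or iris/sclera boundary)."""
    radii = np.arange(r_min, r_max)
    means = np.array([circle_mean_intensity(img, cx, cy, r) for r in radii])
    jumps = np.abs(np.diff(means))           # change between consecutive radii
    return int(radii[np.argmax(jumps) + 1])
```

In practice the center is not known in advance, so the same score is evaluated over candidate centers as well (the integro-differential idea); the Canny map can be used to restrict the search.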
Iris Normalization: The obtained iris region is transformed to fixed dimensions for the purpose of comparison. A Gabor filter, a linear filter used for edge detection, is applied here to obtain a good representation of the iris region. The size of the pupil may change due to variation in illumination, and the associated elastic deformations in the iris texture may interfere with the result of pattern matching. So, for accurate texture analysis, it is necessary to compensate for this deformation. Since both the inner and outer boundaries of the iris have been detected, it is easy to map the iris ring to a rectangular block of texture of a fixed size. A convolution filter is also employed for enhancement: the original image has low contrast and may have non-uniform illumination caused by the position of the light source, which can impair the texture analysis. We therefore enhance the iris image in order to reduce the effect of non-uniform illumination.
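The mapping of the iris ring to a fixed-size rectangular block can be sketched as below (the Daugman "rubber-sheet" idea). For simplicity the sketch assumes concentric circular boundaries with known radii, although the text notes the two circles are usually not concentric.

```python
import numpy as np

def normalize_iris(img, cx, cy, r_pupil, r_iris, n_radial=20, n_angular=240):
    """Sample the iris ring between the pupil and iris boundaries onto a
    fixed-size rectangle, so templates are comparable despite pupil dilation."""
    thetas = np.linspace(0, 2 * np.pi, n_angular, endpoint=False)
    radii = np.linspace(0, 1, n_radial)
    out = np.zeros((n_radial, n_angular))
    for i, rho in enumerate(radii):
        # interpolate between the inner and outer boundary at this ring
        r = r_pupil + rho * (r_iris - r_pupil)
        xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, img.shape[0] - 1)
        out[i] = img[ys, xs]
    return out
```

With non-concentric boundaries, the inner and outer radii become functions of the angle and the same interpolation is done per angular sample.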
Feature Extraction: The Local Binary Pattern (LBP) is a type of feature used for classification in computer vision. LBP was first described in 1994 and has since been found to be a powerful feature for texture classification; it has further been determined that combining LBP with the Histogram of Oriented Gradients improves detection performance considerably on some datasets.
The LBP operator forms labels for the image pixels by thresholding the neighbourhood of each pixel and considering the result as a binary number. LBP provides fast feature extraction and texture classification. Due to its discriminative power and computational simplicity, the LBP texture operator has become a popular approach in applications such as image retrieval, remote sensing, biomedical image analysis and motion analysis. Here, LBP is used to extract the features of the normalized iris image.
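The basic LBP labeling can be sketched as follows. This is a minimal NumPy sketch, not the paper's implementation; it follows the text's convention that a center pixel brighter than a neighbor contributes a "1" bit (many libraries use the opposite convention).

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbor LBP: threshold each pixel's 3x3 neighborhood against
    the center pixel and read the eight results as an 8-bit number."""
    img = np.asarray(img)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbor offsets in clockwise order, starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (center > neigh).astype(np.uint8) << bit   # "1" if center > neighbor
    return out

def lbp_histogram(img, cell=16):
    """Concatenate normalized per-cell LBP histograms into one feature vector."""
    codes = lbp_image(img)
    feats = []
    for y in range(0, codes.shape[0] - cell + 1, cell):
        for x in range(0, codes.shape[1] - cell + 1, cell):
            hist, _ = np.histogram(codes[y:y + cell, x:x + cell],
                                   bins=256, range=(0, 256))
            feats.append(hist / hist.sum())   # normalize each cell histogram
    return np.concatenate(feats)
```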
Pattern Matching: Matching of two iris codes is performed using the Hamming distance, which measures how many bits differ between two bit patterns. Using the Hamming distance of two bit patterns, a decision can be made as to whether the two patterns were generated from different irises or from the same one. In comparing bit patterns X and Y, the Hamming distance HD is defined as the sum of disagreeing bits (the exclusive OR of X and Y) over N, the total number of bits in the bit pattern:
HD = (1/N) * sum_{j=1..N} (Xj XOR Yj)
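The Hamming-distance comparison can be sketched as (the optional mask, a common refinement for ignoring eyelid/eyelash bits, is an assumption, not something the text specifies):

```python
import numpy as np

def hamming_distance(x, y, mask=None):
    """Fraction of disagreeing bits between two binary iris codes:
    HD = (1/N) * sum(XOR(X, Y)), optionally restricted to unmasked bits."""
    x, y = np.asarray(x, bool), np.asarray(y, bool)
    d = np.logical_xor(x, y)
    if mask is not None:
        d = d[np.asarray(mask, bool)]   # keep only the valid bits
    return float(d.mean())
```

A distance near 0 suggests the same iris; a distance near 0.5 is what two independent random codes would produce, so a decision threshold is placed between the two.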
Figure 3: Three neighborhood examples used to define a texture and to calculate LBP
The LBP feature vector, in its simplest form, is created in the following manner:
1. Divide the examined window into cells (e.g. 16x16 pixels per cell).
2. For each pixel in a cell, compare the pixel to each of its 8 neighbors (on its left-top, left-middle, left-bottom, right-top, etc.). Follow the pixels along a circle, i.e. clockwise or counter-clockwise.
3. Where the center pixel's value is greater than the neighbor's, write "1"; otherwise, write "0". This gives an 8-digit binary number (which is usually converted to decimal for convenience).
4. Compute the histogram, over the cell, of the frequency of each "number" occurring (i.e., each combination of which pixels are smaller and which are greater than the center).
5. Optionally normalize the histogram.
6. Concatenate the normalized histograms of all cells. This gives the feature vector for the window.
Figure 4: Iris Recognition Process
(a) The original eye image taken from the CASIA iris database, (b) region-of-interest extracted image, (c) filtered iris image and (d) edge-detected portion of the iris textures.
The above figures are the results of the iris recognition process. Figure (a) is the original eye image taken from the CASIA iris database. The eye image is processed to segment the region of interest, as shown in figure (b). The extracted image is then filtered to obtain clear iris texture patterns, as shown in figure (c). Figure (d) shows the canny edge-detected portion of the filtered iris textures.
3. Authentication of the authorized user
If the pattern extracted from the loaded input image and the pattern of the existing image in the database match, the user is authenticated.
4. Password Verification and Transaction
The authorized person then enters the password; if the entered password matches, the system allows the transaction, and the transaction process begins through voice commands. A microphone of the kind commonly used in computer systems serves as the voice sensor to record the ATM user's voice. The recorded voice is then sent to the system, which identifies the command given by the user based on his/her voice.
Implementation: Implementation of any software is always preceded by important decisions regarding the selection of the platform, the language used, etc. These decisions are often influenced by several factors, such as the real environment in which the system works, the required speed and the security concerns. The implementation decisions made for this project are as follows:
1. Selection of the platform (operating system)
2. Selection of the programming language for the development of the application
3. Coding guidelines to be followed
The MATLAB high-level language is used to implement the project. GUIs created with the MATLAB GUI tool handle user interaction; they can also read and write data files and even communicate with other GUIs.
Figure 5: User Registration
Figure 6: Iris Registration
Figure 7: Input Iris for Recognition
Figure 8: Iris Authentication to Authorized user
Figure 9: Transaction
The above figures show the database creation, which includes loading an input eye image from the database, extracting the region of interest, filtering the extracted image, and applying canny edge detection. The details of the user are also registered for storage and recognition. Figure 7 is the captured input iris of a user, which is compared with the existing patterns in the database. Matching of two iris codes is performed using the Hamming distance of the two bit patterns. If a pattern matches, the system displays "Iris Found"; otherwise it displays "No Iris Found". The recognized person then gets access to the ATM.
V. MERITS OF IRIS AUTHENTICATION IN ATM
1. Compared to smartcard-based access, the iris-based access system has a low false acceptance rate.
2. Iris recognition is reliable in the sense that no two people have the same irises.
3. A smartcard used to access the ATM might be lost, misplaced or even duplicated, but authentication through the iris reduces these problems.
4. An iris recognition system is economical, as it saves the banks the cost of producing smartcards.
VI. CONCLUSION
This paper proposes and describes the design and evaluation of biometric-based authentication for access to an Automatic Teller Machine. ATM users with smartcard access have encountered several problems, such as card misplacement, chip distortion and card fraud. To overcome these problems it is advisable to implement iris-based access control in ATMs, as it will eliminate the problems associated with smartcard access control. Iris-based recognition enables highly secure authentication for access to the Automatic Teller Machine (ATM) because of the iris's stable, unique and non-invasive characteristics.
REFERENCES
[1] Atkins, W., 2001. A testing for face recognition technology. Biometric Technology Today, vol. 147, pp. 195-197.
[2] Campbell, J. P., 1997. Speaker recognition: a tutorial. In Proc. IEEE, pp. 1437-1462.
[3] Dade, L. A., et al., 1998. Human brain function encoding and recognition. Annals of the New York Academy of Sciences, 355, 572-574.
[4] Kung, S. Y., M. W. Mack, and S. H. Lin, 2004. Biometric Authentication: A Machine Learning Approach. Prentice Hall.
[5] Njemanze, P. C., 2007. Cerebral lateralization for processing. Laterality, 12, 31-49.
[6] Schoon, G. A. A., and deBurn, J. C., 1994. Forensic Science International.
[7] Wahyudi, et al., 2007. Speaker recognition identifies people by their voices. In Proc. of Conference on Security in Computer Applications (2007).
[8] Yekini, N. A., and Lawal, O. N., 2000. ICT for Accountants and Bankers. Hasfem Publication.
[9] Zhang, D. D., 2000. Automated Biometric Technologies and Systems. Kluwer Academic Publishers.
[10] Adams W. K. Kong, David Zhang, and Mohamed S. Kamel, "An Analysis of IrisCode", IEEE Transactions on Image Processing, 19(2), 2010.
[11] Amol D. Rahulkar and Raghunath S. Holambe, "Half-Iris Feature Extraction and Recognition".

Spectrum Sensing Using CSMA Technique
1Rajeev Shukla, 2Deepak Sharma
Electronics and Communication Engineering, CSIT, Durg, Chhattisgarh, INDIA
Email: [email protected], [email protected]
Abstract — Cognitive Radio is an emerging frontier for tackling the ever-increasing demand for spectrum. The most important function of a CR is to search for spectrum holes, or white spaces, in the spectrum. Many techniques have been introduced and researched to increase the efficiency and accuracy of spectrum sensing, but with the introduction of more complex methods the cost of the whole process also increases. This paper suggests a new idea of employing the CSMA technique for spectrum sensing. It gives an insight into the working of CR and the CSMA technique, and then puts forward the criterion for spectrum sensing and shows how CSMA can be used to investigate the presence of a primary user in the spectrum vicinity.
I. INTRODUCTION
Joseph Mitola III first coined the term "Cognitive Radio". According to him, "Cognitive Radio (CR) is a type of Software Defined Radio which continuously monitors its RF environment for spectrum holes and provides this unused frequency band to another user" [1].
The original licensed users are called primary users (PU), whereas the users to whom the spectrum holes are provided are termed secondary users. The CR uses various spectrum sensing methods to detect spectrum holes in the RF spectrum. It then estimates the time for which the spectrum can be allotted, and uses dynamic spectrum management techniques to allocate the unused frequency to a secondary user, applying power control methods so that its users can communicate undisturbed. A spectrum hole may be defined as "a band of frequencies assigned to a primary user which, at a particular time and specific geographic location, is not being utilized by that user". Because of its high awareness of its environment, the CR uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind:
• trustworthy communication whenever and wherever needed;
• efficient utilization of the radio spectrum.
II. SPECTRUM SENSING
The main objective of a CR network is to provide highly reliable communication whenever and wherever needed, and to exploit the radio spectrum efficiently. To utilize the radio spectrum, the CR needs to search for spectrum holes within the spectrum and provide them to secondary users. Here, the term "spectrum holes" stands for those sub-bands of the radio spectrum that are not used by the PU at a particular instant of time and specific geographic location (Fig. 1). Spectrum sensing is defined as the process of searching for spectrum holes in the radio spectrum in the local neighborhood of the cognitive radio receiver. It enables the cognitive radio to continuously monitor a licensed frequency band and transmit smartly whenever it does not detect a primary signal. With the ability to detect and react to spectrum usage in parallel, such secondary users can be considered basic forms of cognitive radio. The basic prerequisites for spectrum sensing are full awareness of the radio environment and knowledge of the geographic location. The responsibilities executed by the spectrum-sensing unit involve [2]:
1) recognition of possible spectrum holes;
2) spectral resolution of each spectrum hole;
3) estimation of the spatial directions of incoming interferers;
4) signal classification.
Fig.1. Spectrum Holes [2]
III. CSMA
Multiple access methods are techniques that allow multiple users to share a transmission medium or channel for transmission and reception. Media Access Control (MAC) provides the addressing and channel access control mechanisms that make such multiple access possible.
Carrier Sense Multiple Access (CSMA) is a MAC protocol in which a user first checks whether the channel is occupied by another user before transmitting. It is based on the principle "Sense before Transmit" or "Listen before Talk" [4].
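The "listen before talk" principle can be sketched as follows. This is a toy Python illustration; the binary-exponential backoff and the attempt limit are common CSMA conventions assumed here, not details from the text.

```python
import random

def csma_transmit(channel_busy, max_attempts=5, slot=1):
    """'Listen before talk': sense the channel and transmit only when idle;
    otherwise back off a random number of slots and retry (sketch)."""
    wait = 0
    for attempt in range(max_attempts):
        if not channel_busy(wait):                      # carrier sense
            return ("transmit", wait)
        # binary exponential backoff: wait a random number of slots
        wait += slot * random.randint(1, 2 ** attempt)
    return ("give up", wait)
```

`channel_busy` stands in for the physical carrier-sensing hardware; in the cognitive-radio setting of the next section, it would be answered by the spectrum-sensing unit.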
IV. CSMA WITH COGNITIVE RADIO
A CR makes use of spectrum sensing techniques to search for spectrum holes, which are nothing but free channels in the dedicated spectrum. CSMA techniques likewise verify whether a channel is free for use or pre-occupied; thus we can combine CR technology with CSMA. The simplest way to perform spectrum sensing is the Energy Detection (ED) technique, which is based on the principle that if a primary user is present in the spectrum, there will be a finite amount of energy in the associated channel. If we analyze the whole spectrum, measuring the energy level in each channel, we can estimate the presence of a primary user. But the ED technique has a major flaw: when the signal is continuous in nature, its energy becomes infinite. For this reason we measure the power content of the channel instead of the energy, using the Power Spectral Density (PSD) graph to estimate the power in the different channels.
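The PSD-based detection described above can be sketched in Python/NumPy (the paper itself uses MATLAB). The carrier frequencies match the experiment in Section V; the sampling rate, signal duration, occupancy pattern and threshold below are illustrative assumptions.

```python
import numpy as np

fs = 20_000                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.1, 1 / fs)                 # 0.1 s observation window
carriers = [1000, 2000, 3000, 4000, 5000]     # channel carrier frequencies (Hz)
occupied = [True, False, True, True, False]   # assumed PU activity per channel

# combined received signal: a carrier is present only where a PU is active
sig = sum(np.cos(2 * np.pi * f * t) for f, on in zip(carriers, occupied) if on)

# PSD estimate from the FFT of the combined signal
psd = np.abs(np.fft.rfft(sig)) ** 2 / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def channel_power(f):
    """Power of the PSD bin nearest the carrier frequency f."""
    return psd[np.argmin(np.abs(freqs - f))]

# channels whose power stays below the threshold are spectrum holes
threshold = 1.0                               # assumed; in practice set from the noise floor
holes = [f for f in carriers if channel_power(f) < threshold]
```

With the occupancy assumed above, the missing peaks identify the holes (here the 2 kHz and 5 kHz channels), which a secondary user could then claim after a CSMA-style check.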
Fig.2. PSD graph when all channels are occupied
So we can conclude that the system will detect a primary user at these carrier frequencies.
When some channels are unoccupied, zero power is measured in those channels, as no signal flows through them. Some peaks of the PSD graph of fig.2 will then be absent, as shown in fig.3; comparing with fig.2, we see that there are no peaks for channels 1 and 4, indicating the absence of PUs in those channels.
Fig.3. PSD graph when 2 channels are unoccupied
The information thus generated by the receiver can be fed back to the transmitter to determine whether another transmission is in progress before initiating any transmission.
V. EXPERIMENTAL
For the simulation of the above idea we use the MATLAB simulation software. We take an exponential test signal as input and five channels through which the test signal is transmitted after modulation. The carrier frequency of each channel is different: 1 kHz for channel 1, 2 kHz for channel 2, 3 kHz for channel 3, 4 kHz for channel 4 and 5 kHz for channel 5. At the receiver end, the signals from the different channels are combined and their FFT is taken. From the FFT result we obtain the PSD of the combined signal.
VI. RESULT AND DISCUSSION
When simulated, the PSD graph obtained when all channels are occupied is shown in fig.2. Here we can see that the PSD curve has some positive finite power as it approaches the carrier frequencies, decreasing thereafter.
VII. CONCLUSION AND FUTURE WORK
In this paper, we have proposed a method for spectrum sensing with the help of the CSMA MAC protocol and energy detection using the PSD. We have simulated the method using MATLAB to obtain the desired result. For further improvement of the process, the presence of noise and attenuation could be taken into consideration. Also, the problems of 'hidden terminals' and 'exposed terminals' have to be dealt with to improve the performance of the system.
REFERENCES
[1] Joseph Mitola III and Gerald Q. Maguire, Jr., "Cognitive Radio: Making Software Radios More Personal", IEEE Personal Communications, August 1999.
[2] S. Haykin, "Cognitive Radio: Brain-empowered Wireless Communications", IEEE Journal on Selected Areas in Communications, Special Issue on Cognitive Networks, vol. 23, pp. 201-220, February 2005.
[3] Rajeev Shukla and Deepak Sharma, "Estimation of Spectrum Holes in Cognitive Radio using PSD", IJICT, Vol. 3, No. 7, October 2013.
[4] A. Nasipuri and S. R. Das, "Multichannel CSMA with signal power-based channel selection for multihop wireless networks", Proc. of IEEE Fall Vehicular Technology Conference (VTC 2000), Sept. 2000.
[5] Ghalib A. Shah and Ozgur B. Akan, "CSMA-based Bandwidth Estimation for Cognitive Radio Sensor Networks", Proc. of IEEE NTMS 2012, May 2012.
[6] Rong-Rong Chen and Xin Liu, "Coexisting with CSMA-based Reactive Primary Users", Proc. of IEEE DySPAN (New Frontiers in Dynamic Spectrum), April 2010.
