Proceeding of
Recent Trends in Computations and Mathematical Analysis
in Engineering and Sciences-2015
"CRCMAS 2015"
Edited by
Dr. B. B. Chattopadhyay
Principal, Govt. Polytechnic Silli
Samarjit Roy
Assistant Professor, Dept. of Computer Sc. & Engg., Techno India, Silli
Organized by
Govt. Polytechnic Silli, Ranchi, Jharkhand
2015
International E-Publication
427, Palhar Nagar, RAPTC, VIP-Road, Indore-452005 (MP) INDIA
Phone: +91-731-2616100, Mobile: +91-80570-83382
E-mail: [email protected], Website: www.isca.me, www.isca.co.in
© Copyright Reserved
2015
All rights reserved. No part of this publication may be reproduced, stored in a retrieval
system, or transmitted in any form or by any means, electronic, mechanical, photocopying,
recording or otherwise, without the prior permission of the publisher.
ISBN: 978-93-84659-04-2
Proceeding of Recent Trends in Computations and Mathematical Analysis in Engineering and Sciences-2015
Ranchi, Jharkhand, India, 20th-21st Nov 2015
CONFERENCE ORGANIZATION
President
Sri A.K. Singh, IAS, Hon'ble Secretary, Dept. of Higher & Tech. Edu,
Govt. of Jharkhand.
Chief Patrons
Mr. G. Roy Choudhury, TIG
Mr. Mohit Chattopadhyay, Director, Jharkhand Project, TIG
Organizing Chair
Dr. B. B. Chattopadhyay, Principal, Govt. Polytechnic Silli
(+91-8405000384)
Email: [email protected]
Convener
Samarjit Roy
(+91-8405058525/ 8902041490)
Email: [email protected]/ [email protected]
Co-Convener
Nilanjan Sil
(+91-9905156745/ 9431355883)
International Science Congress Association
Advisory Committee
Prof. (Dr. Engg.) K.P. Ghatak, UEM, Kolkata
Mr. B.M. Kumar, JCST, Jharkhand
Dr. Sudipta Chattopadhyay, Jadavpur University, Kolkata
Dr. Partha Pratim Roy, GGU, Bilaspur
Ms. Sujata Ghatak, IEM, Kolkata
Program Committee
Dr. Debashis De, WBUT, Kolkata
Dr. Smita Dey, Ranchi University, Ranchi
Prof. (Dr.) S.R. Kumar, NIFFT, Ranchi
Prof. (Dr.) Bani Mukherjee, ISM Dhanbad, Jharkhand
Prof. (Dr.) M.K. Singh, Magadh University, Gaya
Dr. Chandan Koner, B.C. Roy Engineering College, Durgapur
Dr. B. B. Sarkar, Techno India Salt Lake
Mr. Sudipta Chakrabarty, Techno India Salt Lake
Dr. Kunal Das, BPPIMT, Kolkata
Mr. Soumik Das, Techno India Salt Lake
Mr. Bishaljit Paul, Govt. Polytechnic Silli
Mr. Souvik Paul, Techno India, Salt Lake
Organizing Committee
Dr. Arindam Sarkar
Rajiv Ranjan Sah
Utpal Kumar Pal
Aanabick Kolay
Somnath Nag
Chayan Chakraborty
Suprakash Jana
Samir Hazam
Sudip Mondal
Debashish Roy
Rajesh Guria
Sumit Kumar
Finance
Arun Kanti Manna
Sayan Kundu
Sayantan Ghorai
Suraj Das
Sangeet Panda (Student)
Members
Diwakar Kumar, Student
Uttam Anand, Student
Babli Kumari, Student
INDEX

1. People Counting From an Image Using Image Processing Technique (Pijush Kanti Kumar, Saurabh Singha, Premjit Sen) — removed; see below for dash policy
People Counting From an Image Using Image Processing Technique
Pijush Kanti Kumar1, Saurabh Singha2, Premjit Sen3
1,3IT, Government College of Engineering & Textile Technology, Serampore, West Bengal, INDIA
2Computer Science and Technology, Indian Institute of Engineering Science and Technology, Shibpur
Abstract
Today, people counting is a very useful task: in many cases we need to measure the number of people present in an open environment.
This paper outlines a model for counting people on a ground plane using image processing techniques. The literature shows that the
most reliable and accurate sensor is the video camera, but it is also the most expensive and is prone to over-counting. Several papers
have been published over the last ten years on counting people by video processing with different methods; most of them take video
frames as input for counting people in an open environment. In contrast, we introduce a density-based people-counting technique
whose input is a single image. Although many papers have been published on this topic, our contribution is to crop the input image
into several blocks of the same size, count the number of people in each block, and add the results to obtain the total number of
people in the given input image more accurately.
Keywords: MATLAB; Image processing; People counting; Density estimation
Introduction
In recent years, the application of image processing techniques to people counting has been investigated in many ways by several
researchers. This is not a simple task: some situations are difficult to handle even with today's computing power (the algorithm has
to operate in real time, which limits the complexity of the detection and tracking methods), and one of the most difficult is occlusion
between people. Real-time people counts are useful for several applications, such as security and people management, for example
pedestrian traffic management or tourist flow estimation; people counting systems are also important for marketing research. When
people enter or exit the field of view in a group, it is very hard to distinguish all the individuals in that group. Building on this
research, many organizations offer people counting based on video cameras. Their systems and models are accurate and reliable,
but they are also expensive. The goal of this project is therefore to build a very cheap model that can count people accurately from
an image, and which may in future grow into a reliable people-counting system.
Experimental Environment
Matlab: MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and
programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation.
MATLAB is widely used in all areas of applied mathematics, in education and research at universities, and in industry. MATLAB
stands for MATRIX LABORATORY, and the software is built around vectors and matrices. This makes it particularly useful for
linear algebra, but MATLAB is also a great tool for solving algebraic and differential equations and for numerical integration.
MATLAB has powerful graphics tools and can produce nice pictures in both 2D and 3D. It is also a programming language, and
one of the easiest programming languages for writing mathematical programs. MATLAB also has toolboxes useful for signal
processing, image processing, optimization, etc. You can certainly write data-evaluation programs in other programming languages
such as Visual Basic, C++, or Java, but MATLAB is a language designed especially for processing, evaluating, and graphically
displaying numerical data.
Image processing: Image processing is a method of converting an image into digital form and performing operations on it in order
to obtain an enhanced image or to extract useful information from it. It is a form of signal processing in which the input is an image,
such as a video frame or photograph, and the output is either an image or a set of characteristics associated with that image. An
image processing system typically treats images as two-dimensional signals and applies established signal processing methods to
them. Image processing is among the most rapidly growing technologies today, with applications in many aspects of business, and
it also forms a core research area within the engineering and computer science disciplines.
Procedure to Count People: We take an input image of any size, convert it to a fixed size, and then crop it into 16 blocks. First,
resize the input image to a fixed size (800 px x 800 px). Resizing is necessary because without it the uploaded image cannot be
cropped into 16 patches of equal size. After resizing, crop the image into blocks, then convert each cropped image from RGB to
L*a*b* to perform the transformation and apply a threshold value. Use Otsu's method, which chooses the threshold that minimizes
the within-class variance of the black and white pixels. Then convert each block to a binary image, which replaces all pixels in the
input image with luminance greater than the level with the value 1 (white) and all other pixels with the value 0 (black). Specify the
level in the range [0, 1]. This range is relative to the signal levels possible for the image's class; therefore, a level of 0.5 is midway
between black and white regardless of class. Finally, draw bounding boxes using blob measurement and determine the density and
number of people for each block.
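The thresholding step can be sketched in code. The following is a minimal pure-Python illustration of Otsu's method, under my own naming assumptions; it is not the authors' MATLAB code, which would rely on graythresh and im2bw. It picks the gray level that maximizes the between-class variance, which is equivalent to minimizing the within-class variance of the black and white pixel populations; dividing the result by 255 gives the normalized level in [0, 1] described above.

```python
def otsu_threshold(gray):
    """gray: flat list of integer pixel values in 0..255; returns a threshold in 0..255."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    weighted_sum = sum(i * h for i, h in enumerate(hist))

    best_t, best_between = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(256):
        w_bg += hist[t]                      # background: pixels at or below t
        if w_bg == 0:
            continue
        w_fg = total - w_bg                  # foreground: pixels above t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (weighted_sum - sum_bg) / w_fg
        between = w_bg * w_fg * (mu_bg - mu_fg) ** 2   # between-class variance
        if between > best_between:
            best_between, best_t = between, t
    return best_t

# Dividing by 255 gives the normalized level in [0, 1] used for binarization.
level = otsu_threshold([10] * 50 + [200] * 50) / 255.0
```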
Density is estimated from the equation given below:

Density = cc.NumObjects / (size(bw,1) * size(bw,2));

Here cc is the connected-component structure, which is used to find the density of people, and bw is the binary image.
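Putting the steps together, the per-block counting and the density formula can be illustrated with a small self-contained sketch. This is a hypothetical Python rendering of the procedure (the paper's implementation is in MATLAB; the function names and the BFS blob labeling here are my assumptions): it splits a binary image into a 4 x 4 grid of equal blocks, counts connected components ("blobs") per block, computes the per-block density, and sums the counts.

```python
from collections import deque

def count_blobs(block):
    """Count 4-connected components ("blobs") of 1-pixels in a 2D 0/1 grid."""
    h, w = len(block), len(block[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for r in range(h):
        for c in range(w):
            if block[r][c] == 1 and not seen[r][c]:
                blobs += 1                      # new blob found; flood-fill it
                seen[r][c] = True
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and block[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return blobs

def count_people(bw, grid=4):
    """Split binary image `bw` into grid x grid equal blocks and sum per-block blob counts.

    Image sides must be divisible by `grid` (the paper resizes to 800 x 800 first).
    """
    cell_h, cell_w = len(bw) // grid, len(bw[0]) // grid
    total = 0
    for i in range(grid):
        for j in range(grid):
            block = [row[j * cell_w:(j + 1) * cell_w]
                     for row in bw[i * cell_h:(i + 1) * cell_h]]
            n = count_blobs(block)
            density = n / (cell_h * cell_w)     # Density = cc.NumObjects / (size(bw,1)*size(bw,2))
            total += n
    return total
```

One limitation worth noting in such a block-wise scheme: a blob that straddles a block border is counted once in each block it touches, which can inflate the total.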
Experimental Results
We have done many experiments on this project using various crowd images. Some of those results are shown below.
Figure-1
Results of people counting for cropped images
Figure-2
Results of people counting for the overall images
Actual Result: 18, 16, 12, 14, 12, 13, 7, 10, 11, 9, 9, 5, 11, 8, 11, 9
Estimated Result after Cropping: 15, 19, 16, 21, 18, 19, 14, 13, 17, 11, 14, 15, 16, 18, 13, 17
Conclusion
In this paper an image processing algorithm suitable for counting people from an image has been suggested and analysed. In this
model we have built a graphical user interface (GUI) that gives a value which is approximately 80% accurate. The model can be
used to count people in an image of any public rally, procession, airport, railway station, cultural program, etc. The application is
easy to operate and also cheap.
References
1. Hsiang-Chieh Chen, Ya-Ching Chang, Nai-Jen Li, Cheng-Feng Weng and Wen-June Wan, "Real-time people counting method with surveillance cameras implemented on embedded system", WCECS 2013, 23-25 October 2013, San Francisco, USA.
2. Djamel Merad, Kheir-Eddine Aziz and Nicolas Thome, "Fast people counting using head detection from skeleton graph", AVSS 2010, IEEE, DOI 10.1109/AVSS.2010.77 (2010).
3. Muhammad Arif, Sultan Daud and Saleh Basalamah, "Counting of People in the Extremely Dense Crowd", IAES International Journal of Artificial Intelligence (IJ-AI), 2(2), 51-58, ISSN 2252-8938 (June 2013).
4. Monali P. Patil and Varsha R. Ratnaparkhe, "Object Counting Based on Image Processing: FPGA Approach", IOSR Journal of VLSI and Signal Processing (IOSR-JVSP), 4(2), Ver. I (Mar-Apr 2014).
Books: "Digital Image Processing Using MATLAB" by Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, Gatesmark Publishing.
Websites: http://www.mathworks.com/ and http://imageprocessingblog.com
Big Data Analytics in Banking Sector
Yuvika Priyadarshini
Jharkhand Rai University, Ranchi, Jharkhand, INDIA
Abstract
In the current business world, data and its applications are of great importance to organizations seeking competitive advantage;
data management is a strategic approach to innovation and a potential element for creating larger market share. Understanding the
Data Management (DM) process in the banking sector highlights how it influences organizational performance. Even in developing
countries, banking shows signs of competition and improved performance through DM. In response to this need, this research
explores the key DM processes and technologies used in banks, in order to give bankers and strategists an insight into its
importance.
Keywords:
Introduction
In the information era, data is becoming a crucial organizational resource that provides competitive advantage and gives rise to DM
initiatives. Many organizations have collected and stored vast amounts of data. The computerization of financial operations,
connectivity through the World Wide Web, and the support of automated software have entirely changed the fundamental concept
of business and the way business operations are carried out. The banking sector is no exception: it has also witnessed a tremendous
transformation in the way banking operations are carried out. Since the 1990s, banking all over the world has moved to centralized
databases, online transactions and ATMs, which has made banking systems technically strong and more customer-oriented. In
today's environment, banks around the globe maintain massive amounts of electronic data. The huge size of these databases makes
it impossible for organizations to analyze them and to retrieve useful information as required by the decision makers3,5. Since the
1980s the banking sector has incorporated the concept of the Management Information System (MIS), through which banks
generate various kinds of reports that are then presented and analyzed for decision making within the organization; however, these
reports, available only in summarized form, can mainly be used by the governing authorities2. Dealing with the banking sector,
which is itself an information-intensive industry, is a cumbersome assignment. At present, banks generate reports from periodic
paper reports and statements submitted by various constituent units. Such reports have a high degree of error, because the data are
recorded and interpreted by various parties at various levels2. Moreover, the Total Branch Computerization (TBC) software
packages used at branch level are transaction-oriented, as they were designed with day-to-day transactions in mind. Designing a
new MIS, or restructuring the existing one, is not possible by merely replacing the existing TBC packages. The solution seems to
lie in incorporating the concepts of data warehousing and data mining. Owing to the vast expansion of the horizons of data and its
multivariate uses, organizations and individuals feel the need for a centralized data management and retrieval system. The
centralization of data is required basically for better processing, which in turn facilitates user access and analysis.
Today the bank is focusing on big data, but with an emphasis on an integrated approach to customers and an integrated
organizational structure. It thinks of big data in three different "buckets": big transactional data, data about customers, and
unstructured data. The primary emphasis is on the first two categories. With a very large amount of customer data across multiple
channels and relationships, the bank historically was unable to analyze all of its customers at once and relied on systematic
samples. With big data technology, it can increasingly process and analyze data from its full customer set.

Other than some experiments with the analysis of unstructured data, the primary focus of the bank's big data efforts is on
understanding the customer across all channels and interactions, and on presenting consistent, appealing offers to well-defined
customer segments. For example, the bank uses transaction and propensity models to determine which of its primary relationship
customers may have a credit card, or a mortgage loan that could benefit from refinancing, at a competitor. When the customer
comes online, calls a call center, or visits a branch, that information is available to the online app or the sales associate so the offer
can be presented. The various sales channels can also communicate with each other, so a customer who starts an application online
but does not complete it can get a follow-up offer in the mail, or an email to set up an appointment at a physical branch location.
Data Mining for Analysis of Credit Card Transactions
With the advent of new technologies, people are increasingly showing their inclination towards electronic means of payment.
Credit card margins continue to be squeezed by a combination of high charge-offs and rising account acquisition costs. Record
levels of delinquencies, personal bankruptcies and the resulting charge-offs coexist in a saturated market where offers are quickly
becoming commodities. In this environment, accurate risk prediction is of the utmost importance. In order to remain competitive,
credit card issuers are turning to data mining to uncover information in their massive databases. This application deals with credit
card data from a nationalized bank. Mining the credit card data not only discovers customer segments but also helps extract
additional hidden information that may guide the bank in developing models tailored to specific business goals, such as detecting
fraud or accurately targeting customers.

Credit card issuers are engaged in improving response rates while also identifying the best candidates in terms of profitability and
risk. Issuers that most accurately match customer risk profiles and behavioral attributes with differentiable products will seize a
competitive advantage.
The drivers for data mining the card transactions are:
i. Solicitation – determine whom to solicit as a client, possibly with pre-approved credit limits.
ii. Customer retention – find and analyze which customer characteristics will help to offer services that keep the best credit card customers for the long term.
iii. Customer attrition – discover which customers are most likely to leave for lower-interest-rate cards.
iv. Fraud detection – find purchase patterns and trends to detect fraudulent behavior at the time of credit card purchases.
v. Payment or default analysis – identify specific patterns that help predict when and why cardholders default on their monthly payments.
vi. Market segmentation – correctly segment cardholders into groups for promotional and evaluation purposes.
Data mining makes the above possible by organizing the bank's credit card holders into related groups and then examining, for
each group, the past credit history, the purchasing profile, the payment profile, merchant details, etc. It uncovers vital knowledge
hidden in the database so that issuers can improve the marketing of card products and related services, retain and attract good
customers, increase market share, reduce cost, and increase return on investment. Applications of big data analytics include risk
management, campaign analysis, customer profile analysis, loyalty analysis, customer care analysis, business performance analysis,
sales analysis and profitability analysis.
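As a purely illustrative sketch of the segmentation step, grouping cardholders into related groups can be done with a clustering algorithm. The paper does not prescribe a specific method; the following minimal one-dimensional k-means, written in plain Python with hypothetical names and data, is one common choice, here segmenting customers by average monthly spend.

```python
def kmeans_1d(values, k=2, iters=25):
    """Minimal Lloyd's k-means on scalar values; returns (centers, clusters)."""
    ordered = sorted(values)
    centers = [ordered[i * len(ordered) // k] for i in range(k)]   # spread initial centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:                                           # assign to nearest center
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]            # recompute cluster means
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical average monthly spends for eight cardholders:
spend = [120, 100, 110, 950, 1000, 900, 130, 980]
centers, groups = kmeans_1d(spend, k=2)
```

In practice a bank would cluster on many attributes at once (credit history, purchasing profile, payment profile), but the grouping principle is the same.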
Data Warehousing
The development of management support systems is characterized by the cyclic rise and fall of buzzwords. Model-based decision
support and executive information systems were always restricted by the lack of consistent data. Nowadays the data warehouse
tries to close this gap by providing current, decision-relevant information that allows control of critical success factors. A data
warehouse integrates large amounts of enterprise data from multiple independent data sources, including operational databases,
into a common repository for querying and analysis. Data warehousing gains critical importance in the presence of data mining,
generating several types of analytical reports that are usually not available in the original transaction processing systems.
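To make the contrast with transaction processing concrete, here is a small, purely illustrative Python sketch (the branch names and figures are invented) of the kind of historical roll-up a warehouse supports but an operational system rarely does: aggregating raw transactions by branch and month, a basic OLAP-style slice.

```python
from collections import defaultdict

# Hypothetical raw transactions: (branch, date 'YYYY-MM-DD', amount)
transactions = [
    ("Ranchi", "2015-01-12", 5000.0),
    ("Ranchi", "2015-01-28", 1500.0),
    ("Silli",  "2015-01-03",  700.0),
    ("Ranchi", "2015-02-05", 2500.0),
]

def monthly_rollup(rows):
    """Aggregate transaction amounts by (branch, month) for analytical reporting."""
    cube = defaultdict(float)
    for branch, date, amount in rows:
        month = date[:7]                 # 'YYYY-MM'
        cube[(branch, month)] += amount
    return dict(cube)

summary = monthly_rollup(transactions)
```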
Banking, being an information-intensive industry, makes building a Management Information System a gigantic task. It is more so
for the public sector banks, which have wide networks of branches spread all over the country, and it becomes more difficult due to
the prevalence of varying degrees of computerization. At present, banks generate MIS reports largely from periodic paper
reports/statements submitted by the branches and regional/zonal offices. Except for a few banks, which have been using technology
in a big way, MIS reports are available only with a substantial time lag. Reports so generated also have a high margin of error,
because data entry is done at various levels and interpretations are likely to vary between levels.

Though the computerization of bank branches has been proceeding at a good pace, MIS requirements have not been fully
addressed. This is because most Total Branch Computerization (TBC) software packages are oriented to transaction processing. In
most banks, large databases are in operation for normal daily transactions. In most cases, these operational databases have not been
designed to store historical data or to respond to queries, but simply to support the applications for day-to-day transactions. The
present information systems evolved from the legacy of the old: they exist as collections of separate islands of information that
developed in response to particular operational needs. They were not designed to meet, on a real-time basis, the information
requirements of decision-makers cutting across departments. Owing to the contingent nature of this developmental process, the
hardware and software platforms used in these operational information systems lack compatibility. As a result, whenever decision-
makers' information requirements have to be met by pulling data out of various operational databases, special efforts must be made
to collate these data. Another important consequence of the disparate nature of the existing systems is their lack of subject
orientation, which in turn reduces their utility to decision makers. A further shortcoming of the present system is its inability to
provide consistent data for different variables over a reasonably long duration. Above all, the most critical deficiency of the present
information system is the lack of information about the availability of data. In this connection, an application of data warehousing
along with Online Analytical Processing (OLAP) and data mining techniques appears to be the appropriate solution. Furthermore,
data warehouses provide a central repository for large amounts of diverse and valuable information.
Conclusion
Banks can use technology to improve their performance and gain a sustainable competitive advantage. It is important to remember
that the primary value of big data comes not from the data in its raw form, but from its processing and analysis, and from the
insights, products, and services that emerge from that analysis. The sweeping changes in big data technologies and management
approaches need to be accompanied by similarly dramatic shifts in how data supports decisions and product/service innovation.
This integration will not only facilitate the capturing and coding of knowledge but also enhance the retrieval and sharing of
knowledge across the bank, helping it gain strategic advantage and survive in a competitive market.
References
1. Basak S. and Shapiro A., Value-at-Risk-Based Risk Management: Optimal Policies and Asset Prices, The Review of Financial Studies, doi: 10.1093/rfs/14.2.371 (2001); Bernstein P.L., Against The Gods: The Remarkable Story of Risk, John Wiley and Sons (1998)
2. Vasudevan A., Report of the Committee on Technology Upgradation in the Banking Sector, Constituted by Reserve Bank of India, Chairman of the Committee (1999)
3. Mittal S.R., Report of the Committee on Internet Banking, Constituted by Reserve Bank of India, Chairman of the Committee (2001)
4. Madan Lal Bhasin, Data Mining: A Competitive Tool in the Banking and Retail Industries, The Chartered Accountant, October (2006)
5. Rajanish Dass, Data Mining in Banking and Finance: A Note for Bankers, Indian Institute of Management Ahmedabad
6. Alavi M. and Leidner D.E., Review: Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues, MIS Quarterly, 25(1), 107-136 (2001)
7. Apostolou D. and Mentzas G., Managing Corporate Knowledge: Comparative Analysis of Experiences in Consulting Firms, Knowledge and Process Management, 6(3), 129-138 (1999)
8. Aarabi S.M. and Saeid Mousavi, Strategic KM Model for Research Centers Performance Promotion, Journal of Research and Planning in Higher Education, 15(51), 1-26 (2009)
9. Akhavan Peyman, Towards Knowledge Management: An Exploratory Study for Developing a KM Framework in Iran, International Journal of Industrial Engineering and Production Research, 20(3), 99-106 (2009)
10. Deshpande S.P. and Thakare V.M., Data Mining System and Applications: A Review
11. Kadayam S., New Business Intelligence: The Promise of Knowledge Management, the ROI of Business Intelligence (2002)
12. Clifton C. and Marks D., Security and Privacy Implications of Data Mining, Proceedings of the ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery, Montreal (1996)
13. Morgenstern M., Security and Inference in Multilevel Database and Knowledge Base Systems, Proceedings of the ACM SIGMOD Conference, San Francisco, CA (1987)
14. Spooner D.L., Demurjian S.A. and Dobson J.E. (eds.), Database Security IX: Status and Prospects, ISBN 0 412 72920 2, 391-399 (1996)
15. Lin T.Y., Anomaly Detection: A Soft Computing Approach, Proceedings of the ACM SIGSAC New Security Paradigm Workshop, Aug 3-5 1994, 44-53; reprinted in the Proceedings of the 1994 National Computer Security Center Conference under the title "Fuzzy Patterns in Data" (1994)
16. Scott W. Ambler, Challenges with Legacy Data: Knowing Your Data Enemy is the First Step in Overcoming It, Practice Leader, Agile Development, Rational Methods Group, IBM (2001)
17. Agrawal R. and Srikant R., Privacy-Preserving Data Mining, Proceedings of the ACM SIGMOD Conference, Dallas, TX (2000)
18. Clifton C., Kantarcioglu M. and Vaidya J., Defining Privacy for Data Mining, Purdue University (2002); see also Next Generation Data Mining Workshop, Baltimore, MD, November 2002
19. Evfimievski A., Srikant R., Agrawal R. and Gehrke J., Privacy Preserving Mining of Association Rules, Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Edmonton, Alberta, Canada (2002)
20. Kubheka N.S.P., How to Leverage Information to Improve Business Performance in a Financial Services Company, Research Report (2007)
21. Lee James Sr., Knowledge Management: The Intellectual Revolution, IIE Solutions, 32, 34-37 (2000)
22. Lingling Zhang, Jun Li and Yong Shi, Foundations of Intelligent Knowledge Management, Human Systems Management, 28(4), 145-161 (2009)
23. Marco D., The Key to Knowledge Management, http://www.adtmag.com/article.asp?id=6525 (2002)
24. Maryam B., Rosmini O. and Wan K., Knowledge Management and Organizational Innovativeness in Iranian Banking Industry, Proceedings of the International Conference on Intellectual Capital, Knowledge Management and Organizational Learning, 47-60 (2010)
25. Nonaka I., The Knowledge-Creating Company, Harvard Business Review, 85(7/8), 162-171 (2007)
26. Dalkir K., Knowledge Management in Theory and Practice, Boston: Butterworth-Heinemann (2005)
27. Dawei J., The Application of Data Mining in Knowledge Management, 2011 International Conference on Management of e-Commerce and e-Government, IEEE Computer Society, 7-9, doi: 10.1109/ICMeCG.2011.58 (2011)
28. Porter M. and Stern S., Innovation: Location Matters, Sloan Management Review, 28-37 (2001)
29. Devedzic V., Knowledge Discovery and Data Mining, School of Business Administration, University of Belgrade, Yugoslavia, 1-24 (1998)
30. Stoneman Paul, Financial Factors and the Inter-Firm Diffusion of New Technology: A Real Option Model, University of Warwick EIFC Working Paper No. 8 (2001)
31. Dixit Avinash and Robert Pindyck, Investment Under Uncertainty, Princeton, New Jersey: Princeton University Press (1994)
32. Hall Bronwyn H. and Khan Beethika, Adoption of New Technology, University of California, Berkeley, Department of Economics, Working Paper No. E03-330, 1-16 (2003)
33. Vitria Technology, Inc., Maximizing the Value of Customer Information Across Your Financial Services Enterprise, White Paper, 1-10 (2002)
Hydrodynamic Natural Convection Slip Flow of A Nanofluid in the Presence
of Newtonian Heating and Non-Linear Thermal Radiation
Goutam Kumar Mahato
Department of Mathematics, Centurion University of Technology and Management (CUTM), Bhubaneswar, Odisha, INDIA
Abstract
Natural convection flow of a viscous, incompressible, and electrically conducting nanofluid in the presence of nonlinear radiative heat transfer, hydrodynamic slip and Newtonian heating is studied. Brownian diffusion and thermophoresis effects are taken into consideration to describe the nanofluid model. The governing nonlinear partial differential equations are transformed into a set of nonlinear ordinary differential equations, which are then solved using the spectral local linearization method (SLLM). Numerical values of fluid velocity, fluid temperature and species concentration are displayed graphically versus the boundary layer coordinate for various values of the pertinent flow parameters, whereas those of skin friction, rate of heat transfer and rate of mass transfer at the plate are presented in tabular form. Such fluid flows find applications in many engineering devices, including geothermal heat-source pumps and the cooling of electronic devices and stretched wires.
Keywords: Nanofluid flow; natural convection; Newtonian heating; non-linear thermal radiation; velocity slip.
A Global Prospective of Cloud Computing in Governance
Sudhanshu Maurya
Department of Computer Science and Engineering, Jharkhand Rai University, Ranchi, Jharkhand, INDIA
Abstract
Cloud computing is a widely accepted term in the modern computing world for IT solutions provided as a service over the web, instead of the customer owning and buying the solution outright. It rests on a large, cooperating collection of computers and builds on over a decade of research in virtualization, distributed computing, utility computing and networking. It creates a service-oriented architecture by providing software and platforms as services, and it reduces information technology costs for end users through on-demand services and many related offerings. Technologies such as grid, cluster and cloud computing all aim to provide access to large numbers of computers in a virtualized, seemingly invisible manner by pooling resources and offering a single system view; beyond that, one of the chief aims of these technologies is delivering computing as a utility.
Keywords: Cloud; Computing; Virtualization; Networking; Software; Utility.
Introduction
Cloud computing is the future of information technology. It embodies all the big trends in the design and use of computer architectures, and it ties closely to other trends such as big data and the "Internet of things". It is an aggregation of technologies and trends that are making IT infrastructures and applications more dynamic, more modular, and more consumable. It lets organizations ramp up new services and reallocate computing resources rapidly, based on business needs. It gives users self-service access to computing resources, while maintaining appropriate levels of control. And, done right, it provides the means to manage across hybrid computing environments, both on- and off-premises, based on cost, capacity requirements, and other factors.
When you store your photographs online rather than on your home machine, or use webmail or a social networking site, you are using a "cloud computing" service. If you are an organization and you want to use, for instance, an online invoicing service instead of updating the in-house one you have been using for many years, that online invoicing service is a "cloud computing" service.
Cloud computing refers to the delivery of computing resources over the Internet. Rather than keeping data on your own hard drive or upgrading applications for your needs, you use a service over the Internet, at another location, to store your information or use its applications. Doing so may give rise to certain privacy implications.
Figure-1
Cloud (A graphical view)
Today computers are used by government sectors, industries, the military, the railways, everyone. A group of computers works as a single computer to deliver data and applications to users over the internet. A system already available in the cloud of computers works through the IP address of the server that connects the several systems. Such systems provide vast storage capability and large-scale collaboration, and help in solving problems such as analyzing risk in medical devices and in the banking sector; even in computer games, users can make requests through the web. The large networked group of servers typically uses inexpensive consumer PC technology, with specialized data connections that process the traffic between them. A major responsibility of any organization is making sure that all its employees have the correct and appropriate software and hardware for their jobs. Anybody can buy a computer, but that is not enough: whenever a new requirement arises you have to buy software, which comes in different versions, or make sure your current software licence covers additional users.
Figure-2
Advances of cloud computing
A web-based service can host all the programs that a user needs for his job. This could be called cloud computing, and it can change the entire computer industry. Local computers have to do very heavy work when it comes to running applications; instead, the network can handle both the hardware and the software for users, whose client can be as simple as a web browser, while the server takes care of everything by running all the programs. For security reasons, the software and storage do not reside on your computer; they live on the services of the cloud.
Deployment Models of Cloud
There are different models of cloud, i.e. public cloud, community cloud, private cloud and hybrid cloud1. A public cloud is a cloud computing model in which resources such as applications and storage are available to the general public over the internet. A community cloud shares infrastructure between different organizations from a specific community with common concerns such as compliance, jurisdiction, security, etc. A private cloud is basically an enterprise computing architecture, also called an internal cloud, which provides service to a limited number of users. A hybrid cloud is a combination of more than one cloud; it manages a heterogeneous set of resources wherever they are located2.
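The four deployment models can be captured in a small sketch; the decision rule below simply restates the definitions above and is illustrative only, not part of any real cloud API.

```python
from enum import Enum

class DeploymentModel(Enum):
    PUBLIC = "public"        # resources open to the general public over the internet
    COMMUNITY = "community"  # infrastructure shared by organizations with common concerns
    PRIVATE = "private"      # internal cloud serving a limited number of users
    HYBRID = "hybrid"        # combination of more than one cloud

def suggest_model(shared_with_public: bool, shared_community: bool,
                  combines_clouds: bool) -> DeploymentModel:
    """Toy decision rule following the definitions in the text."""
    if combines_clouds:
        return DeploymentModel.HYBRID
    if shared_with_public:
        return DeploymentModel.PUBLIC
    if shared_community:
        return DeploymentModel.COMMUNITY
    return DeploymentModel.PRIVATE

print(suggest_model(False, True, False))  # DeploymentModel.COMMUNITY
```

The rule checks the most specific condition (combining clouds) first, mirroring the order in which the definitions narrow.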
Cloud Computing in Government
Developments in cloud computing are leading many, both outside and inside the public sector, to ask, "If it works for business, why not for government?"3. In an era of virtualisation, any-time-anywhere services and on-demand networks, the phenomenon of cloud computing is gaining traction across governments, industries and consumers. Cloud computing helps to lower the cost and environmental impact of government operations, create a more secure computing environment, and drive innovation within the government by pooling IT resources across organizational boundaries. IT services and infrastructure are shared by multiple customers, with different physical and virtual resources dynamically assigned and reassigned in real time according to customer demand (e.g., storage, processing, network throughput, and virtual machines).
With its accelerated adoption of IT, government is now uniquely positioned to gain from this growing technology. There is an opportunity for the government and industry to partner, to drive adoption of cloud in India and build India as a major hub for delivering cloud solutions. Cloud computing has also been identified as one of the thrust areas in the national IT policy4.
Figure-3
Deployment Models of Cloud
Figure-5
Cloud based E-governance
Figure-5 shows how the cloud concept can be used to integrate the functioning among various government agencies and
departments.
The Government of India is actively advancing cloud computing through the development of different test beds and the launch of various cloud services, for example cloud grids, e-governance and so on. The adoption of cloud computing services, worth approximately one billion US dollars in 2014, was driven by government initiatives such as the Unique Identification Authority of India (UIDAI) project and e-governance.
The National Association of Software and Services Companies (NASSCOM) envisioned and launched the eGovReach portal (http://egovreach.in), a solutions-exchange gateway to foster closer collaboration between the Government and industry. Mr. R. Chandrasekhar, the then Secretary, Information Technology, Government of India, launched the portal in August 2010. It was developed by a start-up member company of NASSCOM and is hosted on the cloud platform.
Figure-4
Architecture of Meghraj
Source : http://www.nasscom.in/government-india%E2%80%99s-cloud-initiative?fg=248518
The portal has built a rich registry of service providers in the e-Governance ecosystem. It now carries daily reports on tenders and opportunities from the Central and State Governments, districts, local bodies, banks and a few public sector undertakings, and also provides the latest stories on e-Governance at both the Central and State levels9.
In order to utilise and harness the benefits of cloud computing, the Indian government in a major move has launched an important initiative, "GI Cloud", which has been named 'Meghraj'. A Task Force was constituted by the Department of Electronics and Information Technology (DeitY) with a focus on bringing out the strategic direction and implementation roadmap of GI Cloud, leveraging existing or new infrastructure5.
Meghraj, the national cloud initiative, aims to accelerate delivery of e-services provided by the government and to optimise the government's ICT spending. In the first phase of implementation, the National Informatics Centre (NIC) cloud service was launched in Delhi in December 20136. MeghRaj encompasses a set of discrete cloud computing environments spread across multiple locations, built on existing or new (augmented) infrastructure. It will follow a set of common protocols, guidelines and standards issued by the Government of India.
The National Cloud helps departments procure ICT services on demand under an OPEX model rather than investing upfront as CAPEX. The cloud services available are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS) and Storage as a Service (STaaS).
Some of the features of the National Cloud include a self-service portal, multiple cloud solutions, secured VPN access and multi-location cloud7.
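The four service models can be summarized in a small illustrative mapping; the descriptions below are generic statements of what each model provides, not an official National Cloud interface.

```python
# Illustrative mapping of service models to what the provider supplies.
SERVICE_MODELS = {
    "IaaS":  "virtualized compute, network and storage infrastructure",
    "PaaS":  "a runtime platform for deploying applications",
    "SaaS":  "complete applications delivered over the web",
    "STaaS": "storage capacity on demand",
}

def describe(model: str) -> str:
    """Return a one-line description of a service model."""
    return f"{model}: provider supplies {SERVICE_MODELS[model]}"

for m in SERVICE_MODELS:
    print(describe(m))
```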
International Platform
Cloud computing is not just an Indian phenomenon; cloud initiatives are under way in governments around the world. For instance, in the European Union the European Commission and several member states are presently taking actions perceived by many as leading toward the creation of a cloud-based, common IT infrastructure for member states8. Significant cloud models are already being used around the world. In Singapore, the government started its journey to the cloud as early as 2005 with the implementation of a whole-of-government shared infrastructure that provides computing resources to government agencies on an 'as a service' subscription model10,11. James Kang, Assistant Chief Executive of the Infocomm Development Authority of Singapore (IDA), says that "from here, it is the role of IDA to conceptualize, define, and execute a central government cloud to facilitate government agencies' adoption of cloud computing". This central government cloud, called "G-Cloud", says Kang, will become the core of the whole-of-government infrastructure11,16. In the UK, the nation published its Digital Britain report in 2009, a document laying out the country's roadmap for establishing and maintaining a government role in an increasingly digital global environment12. Migration to the cloud has reduced costs (by up to 90%) and brought system flexibility, improved capabilities and complete process automation, so customer queries and requests are handled in real time and users can access data integrated with other online solutions such as websites and blogs13. With the cloud-based solution, upgrades to the site now take only a day, where they previously took up to nine months to complete14. As a result, the availability of online solutions such as websites and blogs has increased to 99.99%, i.e. almost zero downtime per month, and the budget assigned to www.usa.gov has been reduced to only $650,000 per year15. In Canada, Shared Services is a government organization concentrating on identifying and realizing savings and efficiencies across the Canadian Federal Government17. Announced in August 2011, the initiative aims to cut the total number of government data centres from more than 300 to 20, while paring the number of email services down from 100 to just one. Cloud-based processes and technologies, says KPMG's Cochrane, "will essentially play a prime part". In July 2011, the United States Office of Management and Budget added considerable substance, accountability, and transparency to its November 2010 Cloud First policy declaration, which obliges agencies to give priority to web-based applications and services. In a speech by OMB's Chief Performance Officer, it was officially declared that, as of budget year 2012, all new federal government IT solutions must adopt cloud technologies "wherever a secure, cost-effective, reliable cloud alternative exists".
In Japan, the government is undertaking a cloud computing initiative named the "Kasumigaseki Cloud"14. According to the Ministry of Internal Affairs and Communications (MIC) of Japan, the Kasumigaseki Cloud18 will provide greater information and resource sharing and promote more standardization and consolidation of government IT resources19. This cloud is part of the "Digital Japan Creation Project", a governmental effort aimed at using IT investments (valued at just under 100 trillion yen) to help spur economic recovery by producing thousands of new IT-based jobs in the coming years and doubling the size of Japan's IT market by 202020. In Thailand, the Government Information Technology Service (GITS) is building a private cloud for use by Thai government organizations. GITS has already established a cloud-based email service and is planning to add Software as a Service (SaaS) offerings very soon. GITS expects that such consolidation will improve service offerings for government organizations while at the same time cutting their overall IT costs "considerably"21.
In South Africa, the nation "faces an immense challenge in that the state of readiness of its computing infrastructure, of its citizens, and of its government is not very cloud-ready", says Isaac Mophatlane, Chief Executive at systems integrator Business Connexion Group Ltd; even so, he believes that state organizations are now moving ahead on developing measures that will help catalyse adoption. Consumer adoption and the telecoms infrastructure will also play a part. "South Africa is one of the fastest-growing markets for BlackBerry and for Apple", notes Mophatlane. As citizen demand for mobile technologies expands, infrastructure will tend to develop in lockstep, so conditions for cloud in government are progressing8.
Conclusion
In this paper we have attempted to point out the many advantages that cloud computing provides, such as cost effectiveness, adaptability, proper security and integration, which make it an appropriate choice for use in e-government. It can be concluded that developed and even developing nations have a basic need to adopt e-Government, both to reduce costs and to pursue sustainable development in the present economic circumstances, and the best way to accomplish this is to use green and inexpensive technology, namely cloud computing. Cooperation between nations on technical and legal issues is key to achieving e-government based on cloud computing as soon as possible. Cloud computing is the best choice for implementing or enhancing government services in healthcare, education and the social upliftment of citizens.
References
1. Buyya R., Yeo C.S., Venugopal S., Broberg J. and Brandic I., Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility, Future Generation Computer Systems, 25, 599-616 (2009)
2. Cloud Computing, Wikipedia
3. Condon S., Is Washington ready for cloud computing?, CNet News, February 25, 2009. Retrieved March 2, 2009, from news.cnet.com/8301-13578_3-10172259-38.html
4. https://cloud.gov.in/aboutus.php
5. http://www.iamwire.com/2013/10/indian-government-launches-national-cloud-initiative-meghraj/21503
6. http://www.nasscom.in/government-india%E2%80%99s-cloud-initiative?fg=248518
7. http://www.nextbigwhat.com/indian-government-national-cloud-meghraj-service-297/
8. Herhalt J. and Cochrane K., Exploring the Cloud: A Global Study of Governments' Adoption of Cloud, https://www.kpmg.com/ES/es/ActualidadyNovedades/ArticulosyPublicaciones/Documents/Exploring-the-Cloud.pdf
9. NASSCOM Annual Report 2010-2011, http://www.nasscom.in/sites/default/files/NASSCOM_Annual_Report_201011.pdf
10. Cloud Computing for Singapore Government, http://www.egov.gov.sg/egov-programmes/programmes-by-government/cloudcomputing-forgovernment
11. Malini Nathan, Cloud Computing for Singapore Government, IDA Singapore, https://www.ida.gov.sg/~/media/Files/Archive/News%20and%20Events/News_and_Events_Level2/20120508123036/CloudComputingFactsheet.pdf
12. Wyld D.C., Moving to the cloud: An introduction to cloud computing in government, Washington, DC: IBM Center for the Business of Government, November (2009)
13. Velte T., Velte A., Velte T.J. and Elsenpeter R.C., Cloud Computing: A Practical Approach, New York: McGraw Hill Professional, 274 (2010)
14. Wyld D.C., The Cloudy Future of Government IT: Cloud Computing and the Public Sector around the World, International Journal of Web and Semantic Technology (IJWesT), 1(1), 1-20 (2010)
15. Kundra V., State of public sector cloud computing, Federal Chief Information Officers Council, 2009, http://www.cio.gov/pages.cfm/page/State-of-Public-Sector-Cloud-Computing
16. Glick B., Digital Britain commits government to cloud computing, Computing, 2009, http://www.computing.co.uk/computing/news/2244229/digital-britain-commits
17. Charmaine F., E-Government: The Canadian Experience, DJIM, 4, Spring 2009, http://djim.management.dal.ca
18. Hicks R., The future of government in the cloud, FutureGov, 6(3), 58-62
19. Rosenberg D., Supercloud looms for Japanese government, CNet News, May 14, 2009, http://news.cnet.com/8301-13846_3-10241081-62.html
20. Hoover J.N., Japan hopes IT investment, private cloud will spur economic recovery: The Kasumigaseki Cloud is part of a larger government project that's expected to create 300,000 to 400,000 new jobs within three years, InformationWeek, May 15, 2009, http://www.informationweek.com/shared/printableArticle.jhtml?articleID=217500403
21. Hicks R., Thailand hatches plan for private cloud, FutureGov, May 25, 2009, http://www.futuregov.net/articles/2009/may/25/thailand-plansprivate-cloud-e-gov/ (2009)
22. Mvelase P.S. et al., Towards a Government Public Cloud Model: The Case of South Africa, Second International Conference "Cluster Computing" CC 2013 (2013)
A Foremost Survey on State-of-The-Art Computational Music Research
Sudipta Chakrabarty1, Samarjit Roy2 and Debashis De3
1Department of Master of Computer Application, Techno India, EM 4/1, Salt Lake City, Sector – V, Kolkata – 700091, West Bengal, INDIA
2Department of Computer Science and Engineering, Techno India Silli, Ranchi-835102, Jharkhand, INDIA
3Department of Computer Science and Engineering, West Bengal University of Technology, BF – 142, Sector – 1, Salt Lake City, Kolkata – 700064, West Bengal, INDIA
Abstract
The primary aim of this paper is to highlight different computational music research issues and their implementation. Computational music is one of the largest fields of social science driven by computer science. Computational musicology is carried out with computers, with the help of different kinds of computational modelling such as mathematical modelling, statistical modelling, music modelling with music elements, genetic algorithms, object-oriented modelling, etc., realized as purpose-built programs. This paper surveys areas such as raga-based music identification, music databases, analysis of music, artificial production of music, historical change of music, and different music modelling techniques in music research.
Introduction
Musicological research has a long history stretching back to ancient times. The present state of science and technology provides ample scope to investigate swaras, intervals, octaves (saptak), thaats and ragas. The raga is a basic building block of a song. The number of sounds that the human ear can hear in an octave is infinite, but the number of sounds that it can discern, differentiate, or grasp is 22; these are called shrutis. The sound of reference is called the tonic, or key; in Indian musical terminology it is known as shadja, "Sa" for short, and is represented by the symbol Sa. Out of the 22 shrutis, 7 are selected to form a musical scale: the tonic is fixed first, followed by 6 more shrutis to form a 7-step scale. These 7 sounds, or tones, are called swaras (or notes). The first and the fifth notes, namely Sa and Pa, are regarded as immutable ("achala"). The remaining 5 notes have two states each; thus we have 12 notes in an octave.
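The counting above (2 immutable notes plus 5 notes with two states each) can be spelled out in a tiny sketch; the lowercase/uppercase labelling follows the swara naming convention described in the definitions later in the paper.

```python
# The twelve notes of the octave: Sa and Pa are immutable (achala); the other
# five note names each occur in two states (e.g. komal "re" vs shuddha "Re",
# shuddha "ma" vs tivra "MA"), giving 2 + 5*2 = 12 notes in all.
OCTAVE = ["Sa", "re", "Re", "ga", "Ga", "ma", "MA", "Pa", "dha", "Dha", "ni", "Ni"]

immutable = [n for n in OCTAVE if n in ("Sa", "Pa")]   # achala notes
variable = [n for n in OCTAVE if n not in immutable]   # notes with two states

print(len(OCTAVE), len(immutable), len(variable))  # 12 2 10
```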
The combination of several notes woven into a composition in a way that is pleasing to the ear is called a raga. Each raga creates an atmosphere associated with particular feelings and sentiments; a stray combination of notes cannot be called a raga. The raga is the basis of classical music. It is based on the principle of a combination of notes selected out of the 22 note intervals of the octave. Only a performer with sufficient training and knowledge can create the desired emotions through the combination of shrutis and notes.
The aim of the paper is to standardise musical building blocks such as swara, thaat and raga using different methods, such as object-oriented methodology, digital signal processing and genetic algorithms, so as to be able to represent different musical patterns from a predefined training set.
Some definitions that are useful for understanding this work are given below.
Raga: Ragas are compositions built from different melodious combinations of the notes belonging to a thhat. A raga can be identified by its aroha and aboroha, which are merged into the aalap of the raga; more precisely, the raga can be identified from the aalap portion at the start of a performance.
Thhat: Different distributions of notes forming different note structures are called thhats. These thhats depend upon the aroha (ascending note sequence) of the raga. In Indian classical music there are 10 thhats, from each of which many ragas are created: Kalyan, Bhairav, Kafi, Asavari, Bilabal, Khamaj, Bhairavi, Purbi, Torhi and Marwa.
Aroha and Aboroha: The sequence of notes of a particular raga or thhat in ascending order of frequency, starting from the tonic of the scale of performance, is called the aroha; the sequence of notes starting from double the frequency of the tonic and descending in frequency back to the tonic is called the aboroha. By these two properties a raga can be assigned to a thhat.
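The thhat-membership idea above can be sketched as a set-containment check; the note sets and sequences below are illustrative placeholders, not authoritative raga data.

```python
# Toy check of whether a raga fits a thhat: every note in its aroha and
# aboroha must be drawn from that thhat's seven notes. The note sets here
# are invented examples only.
THHAT_NOTES = {
    "Bilabal": {"Sa", "Re", "Ga", "ma", "Pa", "Dha", "Ni"},
    "Kafi":    {"Sa", "Re", "ga", "ma", "Pa", "Dha", "ni"},
}

def belongs_to_thhat(aroha, aboroha, thhat):
    """True if all notes used in the ascent and descent lie in the thhat."""
    used = set(aroha) | set(aboroha)
    return used <= THHAT_NOTES[thhat]

# A hypothetical pentatonic ascent/descent using only Kafi notes:
print(belongs_to_thhat(["Sa", "Re", "ma", "Pa", "ni"],
                       ["ni", "Pa", "ma", "Re", "Sa"], "Kafi"))  # True
```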
Aalap: The aalap of a raga is a rendition in which the possible legal combinations of the notes used are performed without any fixed rhythm. In the opening portion the performer starts from the tonic, reaches higher and higher frequencies according to ability and expertise, and then gradually comes back down to the tonic.
Shrutis: The links between notes are called shrutis, or semitones. Expert musicians use shrutis to obtain more pleasant and melodious music. Shrutis are interlinked and merged with the notes, and are present in the transition from one note to another.
Swara (note): The swaras or notes used in Indian music and Western music are: Sa (Sadaj) = Do, Re (Rishava) = Re, Ga (Gandhara) = Mi, ma (Madhyama) = Fa, Pa (Panchama) = So, Dha (Dhaivata) = La, Ni (Nishad) = Ti, SA (Sadaj) = Do. SA has double the frequency of Sa; for example, if Sa is at 240 Hz then SA is at 480 Hz. The swaras used explicitly and uniquely in Indian music are re (komal Rishava), ga (komal Gandhara), MA (tivra Madhyama), dha (komal Dhaivata) and ni (komal Nishada). They are called vikrita swaras, or altered notes.
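The octave-doubling relationship above can be computed directly. The sketch below assumes equal-tempered semitone ratios (2**(k/12)) purely for illustration; classical practice actually uses just-intonation shruti ratios.

```python
# Frequencies of the 12 notes from a tonic, under the simplifying assumption
# of equal-tempered semitone ratios. With Sa = 240 Hz, the upper SA is 480 Hz.
NOTES = ["Sa", "re", "Re", "ga", "Ga", "ma", "MA", "Pa", "dha", "Dha", "ni", "Ni"]

def octave_frequencies(tonic_hz):
    freqs = {name: tonic_hz * 2 ** (k / 12) for k, name in enumerate(NOTES)}
    freqs["SA"] = tonic_hz * 2  # the octave note doubles the tonic frequency
    return freqs

f = octave_frequencies(240.0)
print(round(f["Sa"]), round(f["SA"]))  # 240 480
```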
Music Research Issues
Several previous research works inspired this study. In the last few years many scientists and engineers have explored research related to musical automation, and these works contribute a great deal to automated music research.
A remarkable work addresses the automatic extraction of pure and altered notes from the aalap portion of kheyal renderings of a few ragas sung by experts. The basic problem was the detection of the tonic; the paper also presents an algorithm for tonic detection using error analysis1.
A paper named "A Multipitch Approach to Tonic Identification in Indian Classical Music" proposes a method based on a multipitch analysis of the audio, unlike other approaches that identify the tonic from a single predominant pitch track. The authors use a multipitch representation to construct a pitch histogram of the audio, out of which the tonic is identified. Rather than manually defining a template, a classification approach is followed to automatically learn a set of rules for selecting the tonic. The proposed method returns not only the pitch class of the tonic but also the precise octave in which it is played. The approach was evaluated on a large collection of Carnatic and Hindustani music, obtaining an identification accuracy of 93%27.
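As a loose illustration of the pitch-histogram idea (not the cited work's learned-rule method), the sketch below folds a pitch track into pitch classes and takes the most populated bin as a tonic candidate; the bin resolution and reference frequency are arbitrary choices.

```python
from collections import Counter
from math import log2

def tonic_pitch_class(pitches_hz, ref_hz=55.0, bins_per_octave=120):
    """Strongest pitch class of a pitch track, in cents above ref_hz
    (modulo one octave)."""
    width = 1200.0 / bins_per_octave  # bin width in cents
    bins = [round(1200.0 * log2(f / ref_hz) / width) % bins_per_octave
            for f in pitches_hz]
    best, _ = Counter(bins).most_common(1)[0]
    return best * width

# Synthetic pitch track dominated by 240 Hz, with some time on its fifth:
track = [240.0] * 50 + [360.0] * 20
print(tonic_pitch_class(track))  # 150.0 (the pitch class of 240 Hz vs 55 Hz)
```

A real system would identify the octave separately, since pitch-class folding discards it.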
Another experiment studied whether short pitch segments of taan (fast rendering of notes) in a raga contain objective information for identifying the raga. Taan portions were selected from kheyal-style renderings of some ragas, and correlation coefficients together with a modified correlation were used for both identification and classification of the raga2.
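The correlation-based comparison can be illustrated with Pearson correlation between an observed pitch distribution and stored raga profiles; the profiles below are made-up placeholders, not the cited study's data.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def closest_raga(observed, profiles):
    """Pick the stored profile most correlated with the observed one."""
    return max(profiles, key=lambda r: pearson(observed, profiles[r]))

profiles = {  # hypothetical 12-bin pitch profiles
    "raga_A": [5, 0, 3, 0, 2, 4, 0, 5, 0, 2, 0, 3],
    "raga_B": [5, 2, 0, 3, 0, 4, 0, 5, 2, 0, 3, 0],
}
observed = [4, 0, 3, 1, 2, 4, 0, 5, 0, 2, 0, 2]
print(closest_raga(observed, profiles))  # raga_A
```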
Another paper, "A Methodology of Note Extraction from the Song Signals", presents an approach to annotating aalap in North Indian classical vocal singing without using any musicological information except the ratios representing notes. As the aalap portions are usually non-metrical, no analysis for determining meter is undertaken; the annotation merely consists of detecting notes and their corresponding durations. Musicological information is used to verify the notations, and 96% of notes are correctly identified26.
Another automatic raga classification approach uses spectrally derived tone profiles. The system classifies segments from raga performances starting at the signal level, based on a spectrally derived tone profile, which is essentially a histogram of note values weighted by duration. The method is shown to be highly accurate (100% accuracy) even when ragas share the same scalar material and when segments are drawn from different instruments and different points in the performance22.
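The duration-weighted tone profile described here can be sketched directly; the input is assumed to be (note, duration) pairs produced by some earlier transcription step.

```python
from collections import defaultdict

def tone_profile(note_events):
    """Build a normalized histogram of notes weighted by duration."""
    profile = defaultdict(float)
    for note, duration in note_events:
        profile[note] += duration
    total = sum(profile.values())
    return {n: d / total for n, d in profile.items()}

events = [("Sa", 2.0), ("Re", 0.5), ("Sa", 1.0), ("Pa", 1.5)]
print(tone_profile(events)["Sa"])  # 0.6
```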
An article published in April 2011 discusses tonal modulation in the context of graph theory, with the aim of applying its fundamental ideas and theorems to musically interesting problems. Mathematical ideas such as connectivity of graphs, group structure, graph colouring, metrics, Hamiltonian paths and Euler tours are used to prove the existence of special sequences of modulations and chord progressions, as well as to investigate the possibilities and limitations of tonal modulation6.
Another research work develops a system that automatically mines the raga of a piece of Indian classical music. As a first step, note transcription is applied to a given audio file to generate the sequence of notes used to play the song. In the next step, features related to the aroha and avaroha are extracted, and these features are given to an ANN for training and testing the system24.
An automated system named TANSEN has been built to solve the problem of automatic identification of ragas from audio samples. TANSEN is based on a Hidden Markov Model enhanced with a string-matching algorithm, and the whole system is built on top of an automatic note transcriptor. Experiments with TANSEN show that this approach is highly effective in solving the problem25.
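TANSEN itself uses a full HMM with string matching; as a much-simplified illustration of the same idea, a first-order Markov (bigram) model over transcribed pitch classes can score a note sequence against per-raga transition statistics and pick the best-scoring raga. The training sequences, smoothing choice and names below are invented for illustration only.

```python
import math
from collections import Counter

def train_bigram(note_seqs, alpha=1.0, n_notes=12):
    """Estimate pitch-class transition probabilities for one raga from
    transcribed note sequences, with add-alpha smoothing."""
    counts = Counter()
    for seq in note_seqs:
        for a, b in zip(seq, seq[1:]):
            counts[(a % 12, b % 12)] += 1
    def prob(a, b):
        row = sum(counts[(a, x)] for x in range(n_notes))
        return (counts[(a, b)] + alpha) / (row + alpha * n_notes)
    return prob

def log_likelihood(seq, prob):
    """Score a note sequence under a raga's transition model."""
    return sum(math.log(prob(a % 12, b % 12)) for a, b in zip(seq, seq[1:]))

# Toy training data: ascending lines loosely evoking two ragas.
yaman = train_bigram([[0, 2, 4, 6, 7, 9, 11, 0]])
bhupali = train_bigram([[0, 2, 4, 7, 9, 0]])

test = [0, 2, 4, 6, 7]  # the tritone step (4 -> 6) favours the first model
best = max([("yaman", log_likelihood(test, yaman)),
            ("bhupali", log_likelihood(test, bhupali))], key=lambda t: t[1])
```

Classification reduces to evaluating the test sequence under each raga's model and choosing the maximum-likelihood raga, which is the core of any HMM-based recognizer once transcription is done.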
Another system recognizes ragas based on pitch-class distributions (PCDs) and pitch-class dyad distributions (PCDDs) calculated directly from the audio signal. Classification was performed using support vector machines, a maximum a posteriori (MAP) rule with a multivariate likelihood model (MVN), and Random Forests. This work clearly demonstrates the effectiveness of PCDs and PCDDs in discriminating ragas, even when musical differences are subtle23.
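The two feature types can be sketched compactly: a PCD is a normalized 12-bin histogram of pitch classes, and a PCDD is a normalized 144-bin histogram of ordered pairs (dyads) of successive pitch classes. The symbolic input (pitch classes already transcribed from audio) and function name are assumptions for illustration.

```python
import numpy as np

def pcd_pcdd(pitch_classes):
    """Compute a 12-bin pitch-class distribution (PCD) and a flattened
    144-bin pitch-class dyad distribution (PCDD) from a note sequence.

    `pitch_classes` is a list of integers 0-11, an assumed symbolic
    representation of the transcribed melody.
    """
    pcd = np.zeros(12)
    for pc in pitch_classes:
        pcd[pc] += 1.0
    pcdd = np.zeros((12, 12))
    for a, b in zip(pitch_classes, pitch_classes[1:]):
        pcdd[a, b] += 1.0  # ordered pair of successive notes (a dyad)
    pcd /= max(pcd.sum(), 1.0)
    pcdd /= max(pcdd.sum(), 1.0)
    return pcd, pcdd.ravel()

# The concatenated 156-dimensional vector can then be fed to any
# standard classifier (SVM, MAP/MVN rule, Random Forests).
features = np.concatenate(pcd_pcdd([0, 2, 4, 5, 7, 9, 11, 0]))
```

The dyad distribution captures melodic direction and characteristic note transitions that a plain PCD discards, which is why the combination discriminates ragas sharing the same scale.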
Krumhansl and Shepard28 as well as Castellano et al.29 have shown that stable pitch distributions give rise to mental schemas that structure expectations and facilitate the processing of musical information. Using the well-known probe-tone method, Krumhansl30 showed that listeners’ ratings of the appropriateness of a test tone in relation to a tonal context are directly related to the relative prevalence of that pitch-class in a given key. Huron31 has shown that affective adjectives used to describe a tone are highly correlated with that tone’s frequency in a relevant corpus of music. Further, certain expectancies seem to be due to higher-order statistics, such as note-to-note transition probabilities. These experiments show that listeners are sensitive to PCDs and internalize them in ways that affect their experience of music. The fact that PCDs are relatively stable in large corpora of tonal Western music led to the development of key- and mode-finding algorithms based on correlating the PCD of a given excerpt with empirical PCDs calculated on a large sample of related music32,33.
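The correlational key-finding idea can be sketched in a few lines: the excerpt's PCD is correlated against the empirical key profile rotated into all twelve keys, and the best-correlating rotation wins. The profile values below are the published Krumhansl-Kessler major-key probe-tone ratings; the function name and the restriction to major keys are simplifying assumptions.

```python
import numpy as np

# Krumhansl-Kessler major-key probe-tone profile (published values).
KK_MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                     2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

def find_major_key(pcd):
    """Return the major key (0 = C major ... 11 = B major) whose rotated
    profile has the highest Pearson correlation with the excerpt's PCD."""
    pcd = np.asarray(pcd, dtype=float)
    scores = [np.corrcoef(pcd, np.roll(KK_MAJOR, k))[0, 1] for k in range(12)]
    return int(np.argmax(scores))
```

A full key-finder would also correlate against the rotated minor-key profile and take the overall maximum; the mechanism is identical.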
Raag classification has been a central topic in Indian music theory for centuries, motivating rich debate on the essential features of raags and on what makes two raags similar or dissimilar34. Pandey developed a system to automatically recognize raags Yaman and Bhupali using a Markov model. A success rate of 77% was reported on thirty-one samples in a two-target test, although the procedure was not well documented; an additional stage that searched for specific pitch sequences improved performance to 87%. In a preliminary experiment, Chordia35 classified one hundred thirty segments of sixty seconds each, drawn from thirteen raags, using the harmonic pitch-class profile (HPCP) of each segment as the feature vector. Perfect results were obtained using a K-NN classifier with a 60/40% train/test split. This was further developed in36, where PCDs and PCDDs were used as features with more sophisticated learning algorithms: in a 17-target experiment with 142 segments, a classification accuracy of 94% was attained using 10-fold cross-validation. In both cases, however, the significance of the results was limited by the size of the database.
The next project proposes a unique approach to musical score recognition, a particular case of high-level document analysis. The authors aim to solve the problem completely but with simple means, i.e. a regular personal computer and a standard 300 dpi scanner, without heavy pre-processing; they make up for these real-world constraints with more intelligence, exploiting as much domain knowledge as possible together with modern artificial intelligence techniques.
Recently, a system for classifying Iranian traditional music by “Dastgah” was presented. Persian music is based upon a set of seven major Dastgahs; the Dastgah in Persian music is similar to Western musical scales and to the Maqams of Turkish and Arabic music. Type-2 fuzzy logic is used as the core of the system to model the uncertainty in the tuning of the scale steps of each Dastgah. The technique treats each performed note as a fuzzy set (FS), so each musical piece is a set of FSs; the maximum similarity between this set and theoretical data indicates the most likely Dastgah. The study also contributes a small dataset for Persian music, and the results indicate that the system works precisely on it40.
In another paper, “Perceptual Issues in Music Pattern Recognition: Complexity of Rhythm and Key Finding”, the authors consider several perceptual issues in the context of machine recognition of music patterns. They claim that a successful implementation of a music recognition system must integrate perceptual information and error criteria. Several measures of rhythm complexity are discussed, which are used for determining the relative weights of pitch and rhythm errors. A new method for determining a localized tonal context, based on empirically derived key distances, is then proposed; the generated key assignments are used to construct a perceptual pitch-error criterion based on note-relatedness ratings obtained from experiments with human listeners41.
Another project compares the performance of recognition of short sentences of speech using Hidden Markov Models (HMMs), Artificial Neural Networks (ANNs) and fuzzy logic. The data sets used are sentences from the DARPA TIMIT Acoustic-Phonetic Continuous Speech Corpus. Currently, most speech recognition systems are based on Hidden Markov Models, a statistical framework that supports both acoustic and temporal modelling. Despite their state-of-the-art performance, HMMs make a number of sub-optimal modelling assumptions that limit their potential effectiveness; neural networks avoid many of these assumptions, while they can also learn complex functions, generalize effectively, tolerate noise, and support parallelism. The recognition process consists of a training phase and a testing (recognition) phase. The audio files from the speech corpus are pre-processed, and features such as short-time average zero-crossing rate, pitch period, Mel-frequency cepstral coefficients (MFCCs), formants and modulation index are extracted. The model database is created from the feature vectors using HMMs and is trained with the Radial Basis Function Neural Network (RBFNN) algorithm; during recognition, the test-set model is obtained and compared with the database model. The same sets of audio files are trained for speech recognition using HMM/fuzzy, the fuzzy knowledge base being created using a fuzzy controller; during the recognition phase, the feature vector is compared with the knowledge base and the recognition is made. From the recognized outputs, the recognition accuracies are compared and the best-performing model is identified: recognition accuracy using Radial Basis Function Neural Networks was found to be superior to recognition using fuzzy logic42.
The increasing amount of online music content has opened new opportunities for effective information-access services, commonly known as music recommender systems, that support music navigation, discovery, sharing, and the formation of user communities. In recent years a new research area of contextual (or situational) music recommendation and retrieval has emerged. The basic idea is to retrieve and suggest music depending on the user’s actual situation, for instance their emotional state, or any other contextual condition that might influence the user’s perception of music. Despite the high potential of this idea, the development of real-world applications that retrieve or recommend music depending on the user’s context is still in its early stages. One survey illustrates various tools and techniques that can be used for addressing the research challenges posed by context-aware music retrieval and recommendation, covering a broad range of topics, from classical music information retrieval (MIR) and recommender system (RS) techniques to context-aware music applications and the newer trends of affective and social computing applied to the music domain.
A new method has been proposed for cataloguing melodious audio streams into specific featured classes based on object-oriented modelling. The classes have unique features to be specified and characterized; depending on these properties, a particular piece of music can be classified into different classes and subclasses. The concept is developed in view of the non-trivial categorization problem posed by the vastness of Indian classical music43.
Another interesting paper surveys the infrastructure being developed in the MUSITECH project, whose aim is to conceptualize and implement a computational environment for navigation and interaction in internet-based musical applications. This comprises the development of data models, exchange formats, interface modules and a software framework. Different kinds of information and media types, such as MIDI, audio, text-based codes and metadata, are integrated together with their relations, especially to provide means of describing arbitrary musical structure. The authors attempt to connect different musical domains to support cooperation and synergies. To establish platform independence, Java, the Extensible Markup Language (XML) and other open standards are used. The object model, a framework and various components for visualization, playback and other common tasks, as well as the technical infrastructure, are being developed and will be evaluated within the project44.
An audio beat tracking system, IBT, for multiple applications has been proposed recently. The system integrates an automatic monitoring and state-recovery mechanism that applies (re-)inductions of tempo and beats on a multi-agent-based beat tracking architecture. It sequentially processes a continuous onset-detection function while propagating parallel hypotheses of tempo and beats. Beats can be predicted in a causal or a non-causal usage mode, which makes the system suitable for diverse applications. The authors evaluate the performance of the system in both modes on two application scenarios: standard (using a relatively large database of audio clips) and streaming (using long audio streams made up of concatenated clips). Experimental evidence of the usefulness of the automatic monitoring and state-recovery mechanism in the streaming scenario is provided (i.e., improvements in beat tracking accuracy and reaction time), and the system is shown to perform efficiently, at a level comparable to state-of-the-art algorithms, in the standard scenario. IBT is multi-platform, open-source and freely available, and it includes plugins for several popular audio analysis, synthesis and visualization platforms45.
Motivated by the explosion of digital music on the Web and the increasing popularity of music recommender systems, one paper presents a relational query framework for flexible music retrieval and effective playlist manipulation. A generic song representation model is introduced, which captures heterogeneous categories of musical information and serves as a foundation for query operators that offer a practical solution to playlist management. A formal definition of the proposed query operators is provided, together with real usage scenarios and a prototype implementation46. Other research focuses primarily on music similarity. Treating music similarity from different angles, various approaches to playlist generation have been proposed: some are purely audio-based47, others employ a hybrid (combination of audio-content and social) music similarity48. The creation of playlists that meet given constraints has been addressed49,50, and approaches that incorporate user feedback have also been considered51,52. Some work has been done on data and query models for music and playlist manipulation53-57. However, the existing work either disregards similarity queries and playlists56,57, or addresses specific scenarios of playlist manipulation53-55. The existing query models are limited when seen in the broad context of music recommenders that manage music and playlists in various ways.
Rubenstein56 introduces a music data model that extends the entity-relationship model and implements the notion of hierarchical ordering found in musical data. Wang et al.57 propose a music data model and query language, exemplifying their use on musical instrument retrieval. However, both models lack an adequate framework for performing similarity searches and playlist operations. Jensen et al.55 propose a data and query model for dynamic playlist generation that supports arbitrary similarity measures; however, the retrieval operators are limited to the case of continuous playlists, where songs are retrieved one at a time taking into account the user’s skipping behaviour. Another line of work was recently proposed by Deliege and Pedersen53,54: a music warehouse prototype capable of performing arbitrary similarity searches is described53, but only nearest-neighbour searches are covered.
A query algebra manipulating playlists, seen as fuzzy lists, is introduced54; however, that query model is applicable only when modelling user feedback on the music. Perhaps the most closely related work is the list-based relational algebra proposed by Slivinskas et al.58, which introduces the notion of ordered lists into the relational model. While playlists are conceptually ordered lists of songs, the operators in58 are the same as the standard ones except that they account for order; thus, they do not cover playlist manipulation. The query framework46 has several advantages over this previous work. All categories of musical information are captured: metadata, audio content, and social data. Compared, for example, to using dimension hierarchies54,55, its data model is more flexible, as it abstracts over database design and can be accommodated in any warehouse schema, including those mentioned above. While it is natural to consider order when dealing with playlists, the ordered relational algebra58 does not target playlist manipulation, and the fuzzy algebra54 addresses specific scenarios; query operators extending the algebra of58 are therefore proposed to capture generic usage of playlists.
Another review, based on an introduction by Douglas Hofstadter, examines an automated music composition system59 designed by David Cope of UC Santa Cruz. The system takes a series of a composer’s scores and develops new works in that style. After a brief introduction to the system, the reader is encouraged to visit Cope’s website and listen to the musical compositions. The discussion is epistemological in nature, with emphasis on the imprecise use of the term Artificial Intelligence where a more precise term is available and applicable to the described system60.
There is much ongoing development in computer science that falls under the umbrella of Artificial Intelligence (AI). However, much of this work focuses on specific application domains rather than on foundations that could lead to powerful and possibly intelligent systems. Few attempts have apparently been made to provide an operational definition of intelligence; the original work of Turing is often cited as a test for intelligence, and no attempt at the complexity of such a definition is made in that work.
With increasing amounts of music available in digital form, music information retrieval has become a dominant research field supporting the organization of, and easy access to, large collections of music. Yet most research has traditionally focused on Western music, mostly in the form of mastered studio recordings. This raises the question of whether current music information retrieval approaches can also be applied to collections of non-Western, and in particular ethnic, music with completely different characteristics and requirements.
In an ongoing project, the performance of a range of automatic audio description algorithms is analysed on three music databases with distinct characteristics: a Western music collection used previously in research benchmarks, a collection of Latin American music with roots in Latin American culture but following Western tonality principles, and a collection of field recordings of ethnic African music. The study quantitatively shows the advantages of this comparison and presents an approach to visualize, access and interact with ethnic music collections in a structured way61.
In Western music, as opposed to what has been said about ethnic music in the previously mentioned work, the meta-data fields most frequently used (and searched for) are song title, name of artist, performer or band, composer, album, etc., plus a very popular additional one: the “genre”62. However, the concept of a genre is quite subjective in nature and there is no clear way to define how a musical genre should be assigned63,64. Nevertheless, its popularity has led to its usage not only in traditional music stores, but also in the digital world, where large music catalogues are currently labelled manually by genre. However, assigning (possibly multiple) genre labels by hand to thousands of songs is very time-consuming and, moreover, to a certain degree dependent on the annotating person. Research in music IR has therefore tackled this problem in a variety of ways. A brief analysis of the state of the art shows different approaches in music IR for the semi-automatic description of the content of music. In content-based approaches, the content of music files is analysed and descriptive features are extracted from it: in the case of audio files, representative features are extracted from the digital audio signal65; in the case of symbolic data formats (e.g. MIDI or MusicXML), features are derived from notation-based representations66. Additionally, semantic analyses of the lyrics can help in categorizing music pieces into categories that are not predominantly related to acoustic characteristics67. Community meta-data have also been used for such tasks, for instance collaborative filtering68, co-occurrence analysis (e.g. on blogs and other music-related texts on the web69,70), or analysis of meta-information provided by users on dedicated third-party sources (e.g. social tags on last.fm71). Where manpower is available, expert analyses are an alternative and can provide powerful representations of music collections that are extremely useful for automatic categorization (as in the case of Pandora and the Music Genome Project, or AMG Tapestry). Hybrid alternatives also exist; they combine several of the previous approaches, e.g. combining audio and symbolic analyses72, audio
features, symbolic features and community meta-data73, or combining audio content features and lyrics74. Although hybrid approaches have usually proved better than single approaches, their use beyond traditional Western music has some implications. First of all, there is naturally a lack of publicly available meta-data for non-Western and ethnic music that could be used as a resource for hybrid approaches. Moreover, both community meta-data and lyrics-based approaches are largely developed for English rather than other languages, and, as shown in75, the adaptation of an NLP method from one language to another is far from trivial. This is especially true for ethnic music, where the NLP resources might not even exist. While music IR research has resulted in a wide range of methods and (also commercial) applications, non-Western music was rarely the scope of this research, and only little research has focused on ethnic music. Although ethnomusicology is a very traditional field of study with many institutions, archival and academic, involved, research at the signal level has rarely been performed. Charles Seeger was one of the first researchers to objectively measure, analyze and transcribe sound, using his Melograph76. Later, pitch analysis from monophonic audio to score was also performed by Nesbit et al. on Aboriginal music77. Krishnaswamy focused on pitch contours, enhancing annotations by assigning typologies of melodic atoms to musical motives from Carnatic music78, a technique also employed by Chordia et al. on Indian music79. Moelants et al. point out the problems and opportunities of pitch analysis of ethnic music concerning specific tuning systems that differ from the Western well-tempered system80. Duggan et al.81 analyzed pitch-extraction results, achieving segregation of several parts of Irish songs. Pikrakis et al. and Antonopoulos et al. performed meter annotation and tempo tracking on Greek music, and later also on African music. Wright focuses on the micro-timing of rumba music, visualizing the smallest deviations of a performance from the transcription given by the traditional theoretical musical framework82; similar work has been done on samba music83. Only very few authors have presented work related to timbre and its usefulness in genre classification of ethnic music. The term Computational Ethnomusicology was emphasized by Tzanetakis, capturing some historical, but mostly recent, research on the design, development and usage of computer tools in the context of ethnic music84.
A Genetic Algorithm has been applied to automatic versatile music rhythm generation, with fitness values computed through a Roulette Wheel Selection mechanism85. An object-oriented methodology for ICM has developed inheritance and polymorphism models for musical pattern recognition and pattern analysis86-87, and one approach has been introduced to identify the ‘Thhat’ of ICM88. The object-oriented methodology for Indian classical music has also developed Petri net models for musical pattern recognition and pattern analysis; these two papers illustrate that Petri nets are an appropriate tool for computational musicology89-90. A new and intelligent mechanism has been introduced that efficiently selects parent rhythms for creating offspring rhythms using Genetic Algorithm optimization in pervasive education; the main objective of this contribution is to select parent rhythms from a set of initial rhythms to produce offspring rhythms for practical implementation in world music, in context-aware pervasive music-rhythm-learning education91. Finally, a system has been introduced that identifies the raga of a song automatically: the notes are identified by mapping the fundamental frequencies and pitch-contour data values associated with the particular song, and the resulting notation is then matched against a raga knowledgebase92.
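The note-mapping step in such systems can be illustrated with a hypothetical helper that snaps a fundamental frequency to the nearest equal-tempered semitone relative to the tonic (Sa). The swara names, the tonic value and the 12-TET approximation (which ignores shruti-level detail) are all simplifying assumptions.

```python
import math

# One common 12-swara naming scheme (lowercase = komal, MA = tivra Ma);
# an assumed convention for illustration.
SWARAS = ["Sa", "re", "Re", "ga", "Ga", "Ma",
          "MA", "Pa", "dha", "Dha", "ni", "Ni"]

def to_swara(f0_hz, tonic_hz):
    """Map a fundamental frequency to the nearest semitone above the
    tonic and return the corresponding swara name."""
    semitones = round(12 * math.log2(f0_hz / tonic_hz)) % 12
    return SWARAS[semitones]

# With Sa at 240 Hz, a tone at 360 Hz lies a fifth above the tonic.
note = to_swara(360.0, 240.0)
```

Applying this mapping frame-by-frame to pitch-contour data yields the note sequence that is then matched against the raga knowledgebase.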
Conclusion
Computational music covers all topics dealing with the essential use of mathematics for the formal conceptualization, modelling, theory, computation, and technology of music. Computational music has been guided by the computational thinking of composers through different kinds of music modelling. In Indian music, raga-based music modelling is one of the most promising techniques in the field of music research as well as in Music Information Retrieval (MIR). Statistical modelling can also be applied to music classification, and the features of the object-oriented paradigm can be applied to music modelling. Genetic Algorithms and other bio-inspired techniques are used for versatile music production depending on environment, mood, behaviour, gesture, etc.
Acknowledgement
The authors are grateful to the Department of Science and Technology (DST) for sanctioning a research project under the Fast Track Young Scientist scheme (reference no. SERB/F/5044/2012-2013), under which this paper has been completed.
References
1. R. Sengupta, N. Dey, Dipali Nag and A.K. Datta, Automatic Extraction of Swaras and Srutis from Kheyal Rendition, J. Acoust. Soc. India, 30, (2002)
2. A.K. Datta, R. Sengupta, N. Dey and Dipali Nag, Studies on Identification of Raga using Short Pieces of Taan: A Signal Processing Approach, Proc. 6th Int. Workshop on Recent Trends in Speech, Music and Allied Signal Processing, 19-21 (2001)
3. Sayanti Chakraborty and Debasish De, Object Oriented Classification and Pattern Recognition of Indian Classical Ragas, 1st Int'l Conf. on Recent Advances in Information Technology
4. Parag Chordia and Alex Rae, Raag Recognition Using Pitch-Class and Pitch-Class Dyad Distributions, Georgia Institute of Technology, Department of Music, 840 McMillan St., Atlanta, GA 30332
5. R. Sengupta, N. Dey, D. Nag and A.K. Datta, Extraction and Relevance of Transitory Pitch Movements in Hindusthani Music, Scientific Research Department, ITC Sangeet Research Academy, 1, N S C Bose Road, Tollygunge, Kolkata 700 040, India
6. Adrian Walton, A Graph Theoretic Approach to Tonal Modulation, Journal of Mathematics and Music, online publication date: 23 June 2010
7. Shreyas Belle, Rushikesh Joshi and Preeti Rao, Raga Identification by Using Swara Intonation, Journal of ITC Sangeet Research Academy, 23, (2009)
8. Clark and Milner, Dependence of Timbre on the Tonal Loudness Produced by Musical Instruments, J. Audio Eng. Soc., 12, 28-31
9. Eagleson H.W. and Eagleson O.W., Identification of Musical Instruments When Heard Directly and Over a Public-Address System, J. Acoust. Soc. Am., 19, 338-342
10. Strong and Clark, Perturbations of Synthetic Orchestral Wind Instrument Tones, J. Acoust. Soc. Am., 41, 277-285
A. Prasad et al., Gender Based Emotion Recognition System for Telugu Rural Dialects Using Hidden Markov Models, Journal of Computing: An International Journal, 2(6), NY, USA, ISSN: 2151-9617 (2010)
11. S. Dixon, Multiphonic Note Identification, Proc. 19th Australasian Computer Science Conference, (2003)
12. W. Chai and B. Vercoe, Folk Music Classification Using Hidden Markov Models, Proc. International Conference on Artificial Intelligence, (2001)
13. Tarakeswara Rao B. et al., A Novel Process for Melakartha Raaga Recognition using Hidden Markov Models (HMM), International Journal of Research and Reviews in Computer Science (IJRRCS), 2(2), ISSN: 2079-2557 (2011)
14. A. Ghias, J. Logan, D. Chamberlin and B.C. Smith, Query by Humming: Musical Information Retrieval in an Audio Database, Proc. ACM Multimedia, 231-236 (1995)
15. H. Deshpande, U. Nam and R. Singh, MUGEC: Automatic Music Genre Classification, Technical Report, Stanford University, (2001)
16. L.E. Baum and T. Petrie, Statistical Inference for Probabilistic Functions of Finite State Markov Chains, Ann. Math. Stat., 37, 1554-1563 (1966)
17. Parag Chordia, Automatic Rag Classification Using Spectrally Derived Tone Profiles, 2004 International Computer Music Conference, Miami: University of Florida
18. Xavier Serra, Opportunities for a Cultural Specific Approach in the Computational Description of Music, 2nd CompMusic Workshop, (2012)
19. Suvarnalata Rao, Culture Specific Music Information Processing: A Perspective from Hindustani Music, 2nd CompMusic Workshop, July 12th-13th
20. Justin Salamon, Sankalp Gulati and Xavier Serra, A Multipitch Approach to Tonic Identification in Indian Classical Music
21. Parag Chordia, Automatic rag classification using spectrally derived tone profiles, 2004 International Computer Music Conference, Miami: University of Florida
22. Parag Chordia and Alex Rae, Raag Recognition Using Pitch-Class and Pitch-Class Dyad Distributions, seminar at Georgia Institute of Technology, Department of Music
23. Surendra Shetty and K.K. Achary, Raga Mining of Indian Music by Extracting Arohana-Avarohana Pattern, International Journal of Recent Trends in Engineering, 1(1), 1-5 (2009)
24. Gaurav Pandey, Chaitanya Mishra and Paul, TANSEN: A System for Automatic Raga Identification, seminar at Department of Computer Science and Engineering, Indian Institute of Technology, Kanpur, India
25. A.K. Datta, R. Sengupta, N. Dey, D. Nag and A. Mukerjee, A Methodology of Note Extraction from the Song Signals, Scientific Research Department, ITC Sangeet Research Academy
26. Salamon J., Gulati S. and Serra X., A Multipitch Approach to Tonic Identification in Indian Classical Music, 13th International Society for Music Information Retrieval Conference (ISMIR 2012)
27. C. Krumhansl and R. Shepard, Quantification of the hierarchy of tonal functions within a diatonic context, Journal of Experimental Psychology: Human Perception and Performance, 5(4), 579-594, (1979)
28. M.A. Castellano, J.J. Bharucha and C.L. Krumhansl, Tonal hierarchies in the music of North India, Journal of Experimental Psychology, (1984)
29. Carol Krumhansl, Cognitive Foundations of Musical Pitch, Oxford University Press, (1990)
30. David Huron, Sweet Anticipation: Music and the Psychology of Expectation, MIT Press, (2006)
31. Ching-Hua Chuan and Elaine Chew, Audio key finding using the spiral array CEG algorithm, In Proceedings of International Conference on Multimedia and Expo, (2005)
32. E. Gomez and P. Herrera, Estimating the tonality of polyphonic audio files: Cognitive versus machine learning modelling strategies, In Proceedings of International Conference on Music Information Retrieval, (2004)
33. V.N. Bhatkande, Hindusthani Sangeet Paddhati, Sangeet Karyalaya, (1934)
34. Parag Chordia, Automatic rag classification using spectrally derived tone profiles, In Proceedings of the International Computer Music Conference, (2004)
35. Parag Chordia, Automatic raag classification of pitch-tracked performances using pitch-class and pitch-class dyad distributions, In Proceedings of the International Computer Music Conference; Leo Breiman, Random Forests, Machine Learning, 45(1), (2001)
36. Joe Cheri Ross and Preeti Rao, Detection of Raga-Characteristic Phrases from Hindustani Classical Music Audio, Proc. of the 2nd CompMusic Workshop (Istanbul, Turkey, July 12-13, 2012), (2012)
37. P.K. Srimani and Y.G. Parimala, Artificial Neural Network (ANN) Approach for an Intelligent System: A Case Study in Carnatic Classical Music (CCM), International Conference on Intelligent Computational Systems (ICICS'2012), Jan. 7-8, 2012, Dubai, (2012)
38-39. Surendra Shetty and K.K. Achary, Raga Mining of Indian Music by Extracting Arohana-Avarohana Pattern, International Journal of Recent Trends in Engineering, 1(1), May 2009
40. Sajjad Abdoli, Iranian Traditional Music Dastgah Classification, 12th International Society for Music Information Retrieval Conference (ISMIR 2011), (2011)
41. Ilya Shmulevich, Olli Yli-Harja, Edward Coyle, Dirk-Jan Povel and Kjell Lemström, Perceptual Issues in Music Pattern Recognition: Complexity of Rhythm and Key Finding, Computers and the Humanities, 35, 23-35, (2001), Kluwer Academic Publishers
42. Performance of Speech Recognition using Artificial Neural Network and Fuzzy Logic, European Journal of Scientific Research, ISSN 1450-216X, 66(1), 41-47, (2011), Euro Journals Publishing, Inc., http://www.europeanjournalofscientificresearch.com
43. Marius Kaminskas and Francesco Ricci, Contextual Music Information Retrieval and Recommendation: State of the Art and Challenges, Computer Science Review, 6, 89-119, (2012)
44. Martin Gieseking and Tillman Weyde, Concepts of the MUSITECH Infrastructure for Internet-Based Interactive Musical Applications, Proceedings of the Second International Conference on WEB Delivering of Music (WEDELMUSIC'02)
45. João Lobato Oliveira, Matthew E.P. Davies, Fabien Gouyon and Luís Paulo Reis, Beat Tracking for Multiple Applications: A Multi-Agent System Architecture with State Recovery, IEEE Transactions on Audio, Speech, and Language Processing, 20(10), (2012)
46. Maria M. Ruxanda and Christian S. Jensen, A Flexible Query Framework for Music Data and Playlist Manipulation, 19th International Conference on Database and Expert Systems Applications, IEEE, (2008)
International Science Congress Association
23
Proceeding of Recent Trends in Computations and Mathematical Analysis in Engineering and Sciences-2015
Ranchi, Jharkhand, India 20th -21st Nov 2015
47. T. Pohle, E. Pampalk and G. Widmer, Generating Similarity Based Playlists Using Traveling Salesman Algorithms.” In Proc.
DAFx, 220–225, (2005)
48. P. Knees, T. Pohle, M. Schedl, and G. Widmer, Combining Audio-based Similarity with Web-based Data to Accelerate
Automatic Music Playlist Generation”.: In Proc. ACM Int. Workshop on MIR, (2006)
49. M. Alghoniemy and A. Tewfik, A Network Flow Model for Playlist Generation, In Proc. ICME, pages 84–95, (2001)
50. J. Aucouturier and F. Pachet, Scaling up Music Playlist Generation, In Proc. ICME, 105–108, (2002)
51. E. Pampalk, T. Pohle and G. Widmer, Dynamic Playlist Generation Based on Skipping Behavior, In Proc. ISMIR, 634–637,
(2005)
52. S. Pauws and B. Eggen. PATS, Realization and User Evaluation of Automatic Playlist Generator, In Proc. ISMIR, (2002)
53. F. Deliege and T.B. Pedersen, Fuzzy Song Sets for Music Warehouses, In Proc. ISMIR, 21–26, (2007)
54. F. Deliege and T.B. Pedersen. Using Fuzzy Lists for Playlist Management, In Proc. MMM, 198–209, (2007)
55. C.A. Jensen, E.M. Mungure, T.B. Pedersen and K.I. Sørensen, A Data and Query Model for Dynamic Playlist Generation.” In
Proc. ICDE IEEE Workshop, 65–74, (2007)
56. W. Rubenstein, A Database Design for Musical Information, In Proc. ACM SIGMOD, 479–490, (1987)
57. C. Wang, L.J. and S.S., A Music Data Model and its Application, In Proc. MMM, 79–85 (2004)
58. G. Slivinskas, C. Jensen and R. Snodgrass, Bringing Order to Query Optimization, SIGMOD Record, 31(2), 5–14, (2002)
59. Hofstadter D., The surprising prowess of an automated music composer. The Invisible Future: The seamless integration of
technology into everyday life.”: Denning, P.J. ed., McGraw-Hill, 65-86, (2002)
60. John A. Dion, Automated Music Composition : An Expert Systems Approach: 10 th Science and Music seminar (2010)
61. Thomas Lidy, CarlosN.Silla Jr., Olmo Cornelis, Fabien Gouyon, Andreas Rauber, Celso A.A. Kaestner and
AlessandroL.Koerich : “On the suitability of state-of-the-art music information retrieval methods for analyzing, categorizing
and accessing non-Western and ethnic music collections”: Accepted17 September 2009, Availableonline 23 September2009
62. J.S. Downie and S.J. Cunningham, Toward a theory of music information retrieval queries : system design implications” ,in
:Proceedings of the International Conference on Music Information Retrieval, Paris, France, 299–300 (2002)
63. J.J. Aucouturier and F. Pachet, Representing musical genre: a state of the art”, Journal of New Music Research, 32(1), 83–93
(2003)
64. C.McKay, I. Fujinaga, Musical genre classification, Is it worth pursuing and how can it be” in: Proceedings of the
International Conference on Music Information Retrieval, Victoria, Canada, 101–106 (2006)
65.
G.Tzanetakis and P. Cook, Musical genre classification of audio signals”, IEEE Transactions on Speech and Audio
Processing, 10(5), 293–302 (2002)
66. C. McKay and I. Fujinaga, Automatic genre classification using large high level musical feature sets, in: Proceedings of the
International Conference on Music Information Retrieval, Barcelona, Spain, 525–530 (2004)
67. R. Neumayer and A. Rauber, Multimodal analysis of text and audio features for music information retrieval, in: Multimodal
Processing and Interaction: Audio, Video, Text, Springer, Berlin, Heidelberg, (2008)
68. J.L. Herlocker, J.A. Konstan, L.G. Terveen and J.T. Riedl, Evaluating collaborative filtering recommender systems, ACM
Transactions on Information Systems, 22(1), 5–53 (2004)
69. X.Hu, J.S. Downie, K. West, A. Ehmann, Mining music reviews: promising preliminary results in Proceedings of the International Conference on Music Information Retrieval, London, UK, 536–539 (2005)
70. M. Schedl, P. Knees, T. Pohle and G. Widmer, Towards an automatically generated music information system via web content
mining: in European Conference on Information Retrieval, Glasgow, Scotland, 585–590 (2008)
71. P. Lamere, Social tagging and music information retrieval”: Journal of New Music Research, 37(2), 101–114 (2008)
72. T. Lidy, A. Rauber, A. Pertusa and J.M. Inesta, Improving genre classification by combination of audio and symbolic
descriptors using a transcription system : Proceedings of the International Conference on Music Information Retrieval,
Vienna, Austria, 23–27 (2007)
73. C. McKay and I. Fujinaga, Combining features extracted from audio, symbolic and cultural sources : Proceedings of the
International Conference on Music Information Retrieval, Philadelphia, PA,USA, 597–602 (2008)
74. R. Mayer, R. Neumayer and A. Rauber, Combination of audio and lyrics features for genre classification in digital audio
collections: Proceedings of the ACM International Conference on Multimedia, Vancouver,Canada,159–168 (2008)
75. I.A. Bolshakov and A. Gelbukh, Computational linguistics: models, resources, applications, IPN-UNAM-FCE, (2004)
76. C. Seeger, An instantaneous music notator, Journal of the International Folk Music Council, 3, 103–106 (1951)
77. A. Krishnaswamy, Melodic atoms for transcribing carnatic music: Proceedings of the International Conference on Music
Information Retrieval, Barcelona, Spain, (2004)
78.
P. Chordia and A. Rae, Raag recognition using pitch-class and pitch- class dyad distributions : Proceedings of the
International Conference on Music Information Retrieval, Vienna, Austria, 431–436 (2007)
79. D.Moelants, O.Cornelis, M. Leman, J. Gansemans, R. De Caluwe, G. De Tre´, T. Matthe and A. Hallez, Problems and
opportunities of applying data- and audio-mining techniques to ethnic music: Proceedings of the International Conference on
Music Information Retrieval, Victoria, Canada, (2006)
80.
B. Duggan, B.O. Shea, M. Gainza, P. Cunningham, Machine annotation of set s of traditional Irish dance tunes : Proceedings
of the International Conference on Music Information Retrieval, Philadelphia, PA,USA, 401–406 (2008)
81. A. Pikrakis, I. Antonopoulos, S. Theodoridis, Music meter and tempo tracking from raw polyphonic audio”: in Proceedings of
the International Conference on Music Information Retrieval, Barcelona, Spain, (2004)
82. M. Wright, A. Schloss and G. Tzanetakis, Analyzing Afro-Cuban rhythm using rotation-aware Clave template matching with
dynamic programming” : Proceedings of the International Conference on Music Information Retrieval, Philadelphia,PA,USA,
647–652 (2008)
83. F. Gouyon, Micro-timing in Samba de Roda’’—preliminary experiments with polyphonic audio”: Proceedings of the Brazilian
Symposium on Computer Music, 197–203 (2007)
84. G. Tzanetakis, A. Kapur, A. Schloss and M. Wright, Computational ethno musicology”, Journal of Inter disciplinary Music
Studies, 1(2), 1–24 (2006)
85. Sudipta Chakrabarty, Debashis De, Quality Measure Model of Music Rhythm Using genetic Algorithm, In Proceedings of the
International Conference on RADAR, Communications and Computing (ICRCC), IEEE, 203-208 (2012)
86. Debashis De, Samarjit Roy, Inheritance in Indian Classical Music: An object-oriented analysis and pattern recognition
approach”, In Proceedings of International Conference on RADAR, Communications and Computing (ICRCC), IEEE, 296301, (2012)
87. Debashis De, Samarjit Roy, Polymorphism in Indian Classical Music: A pattern recognition approach”, In Proceedings of
International Conference on communications, Devices and Intelligence System (CODIS), IEEE, 632-635 (2012)
88. Madhuchhanda Bhattacharyya, Debashis De, An approach to identify thhat of Indian Classical Music, In Proceedings of
International Conference of Communications, Devices and Intelligence System (CODIS), IEEE, 592–595 (2012)
89. Samarjit Roy, Sudipta Chakrabarty, Pradipta Bhakta and Debashis De, Modelling High Performing Music Computing using
Petri Nets, Accepted In: International Conference on Control, Instrumentation, Energy and Communication (CIEC), IEEE,
757-761, (2013)
90. Samarjit Roy, Sudipta Chakrabarty and Debashis De, A Framework of Musical Pattern Recognition using Petri Nets, In
Proceedings of Emerging Trends in Computing and Communication (ETCC), Springer-Link Digital Library, 245-252, (2013)
91. Sudipta Chakrabarty, Samarjit Roy and Debashis De, Pervasive Diary in Music Rhythm Education: A Context-Aware
Learning Tool using Genetic Algorithm, In Proceedings of International Conference on Advanced Computing, Networking,
and Informatics (ICACNI), Springer- Verlag, Smart Innovation, Systems and Technologies, 669-677, (2014)
92. Sudipta Chakrabarty, Samarjit Roy and Debashis De, Automatic Raga Recognition using Fundamental Frequency Range of
Extracted Musical Notes”, Accepted in: Eighth International Conference on Image and Signal Processing (ICIP), Elsevier
indexed by DBLP, (2014)
The Relationship between Climatic Factors and the Ebola Virus Disease
Outbreak in Guinea, Liberia and Sierra Leone, 2014-2015
Roshan Kumar and Smita Dey
University Department of Mathematics, Ranchi University, Ranchi, India-834008
Abstract
The recent outbreak of Ebola virus disease (EVD), the largest and deadliest recorded in history, has underlined the impact of the virus as a major threat to the health of humans and other primates. Because of the high biosafety classification of EBOV (level 4), basic research is very limited; a simple mathematical way to represent the propagation of EVD is therefore Ebola data analysis (EDA). In this paper we aim to investigate the EDA and the association of climatic temperature and wind speed with the outbreak in the three most affected countries, Guinea (10°23'34.60''N, 3°51'26.42''W), Liberia (6°25'41.00''N, 9°25'46.20''W) and Sierra Leone (8°27'38.00''N, 11°46'47.60''W), from March 2014 to October 2015.
Keywords: EVD, EDA, WCN, Temperature, Wind speed
Introduction
The 2014 EVD outbreak in Guinea is the first reported in West Africa1. Initial confirmed and probable cases in Liberia and Sierra Leone are reported to have travelled to Guinea2. The Ebola virus causes a hazardous haemorrhagic fever in humans and in non-human primates and other animals such as monkeys, fruit bats and rodents. It is extremely communicable, leading to a death rate of up to approximately 87%3. Ebola in humans is caused by four of the five viruses of the genus Ebolavirus: Bundibugyo virus (BDBV), Sudan virus (SUDV), Taï Forest virus (TAFV) and one simply called Ebola virus (EBOV, formerly Zaire Ebola virus). EBOV is the most dangerous of the known disease-causing Ebola viruses and is responsible for the largest number of outbreaks.
Notably, Ebola is introduced into the human population through physical contact with the blood, secretions, organs or other bodily fluids of infected animals such as chimpanzees, gorillas, fruit bats, monkeys, forest antelope and porcupines found ill or dead in the rain forest. It then spreads through human-to-human transmission via direct contact (through broken skin or mucous membranes) with the blood, secretions, organs or other bodily fluids of infected people, and with surfaces and materials contaminated with these fluids. Ebola is characterized by initial flu-like symptoms including sudden onset of fever, fatigue, muscle pain, headache and sore throat. This then rapidly progresses to vomiting, rash, symptoms of impaired kidney and liver function, and in some cases both internal and external bleeding4.
Most infected persons die within days of their initial infection (80%-90% mortality)5. Lessons learnt from other outbreaks, including cholera, H7N9 and H1N1 avian influenza, severe acute respiratory syndrome (SARS), Lassa fever, the Middle East respiratory syndrome (MERS) and the dengue pandemic, and from the human-animal-environment-climate interface in Africa and elsewhere, can assist in setting benchmarks for monitoring epicentre/focal early-warning alerts, incidence and prevalence, as well as effective surveillance-response intervention measures6, 10.
Climatic factors such as temperature and wind speed act as catalysts for the spread of EVD. "[Travelers] from West Africa arriving at five large airports in the U.S. will have their temperature taken and face questions about their health in an effort to prevent the spread of Ebola," said CDC Director Thomas R. Frieden11. Other analyses, however, say that "climate change is not causing West Africa's Ebola outbreak"12.
Prior to the 2014 Ebola epidemic, the WHO had already warned that contagious diseases appeared to be on the rise, and that climate change could be a factor. Scientists have warned that Ebola outbreaks may become more frequent because of climate change, as the deadly disease ravages four countries across West Africa13. The virus is lethal to humans and other primates, and has no cure. In addition, it is unclear where the disease, which causes fever, vomiting and internal or external bleeding, comes from, though scientists suspect fruit bats. What is clear is that outbreaks tend to follow unusual downpours or droughts in central Africa, a likely result of climate change14.
According to the World Health Organization, there has been a recent global increase in infectious diseases that seems to correspond with rising global temperatures, but determining whether there is a direct causative relation between the two is a hazy business15. Seasonal and cyclical patterns of Ebola virus infections have been observed, suggesting that seasonal changes in factors such as climate may be useful
predictors of EVD outbreaks16,17. Examination of these factors may also provide some insight into why EVD had been limited to central parts of Africa in the past and why it has now started to appear in West Africa.
The objective of this study was to investigate the association between climatic conditions and the EVD outbreaks in Africa that occurred between 2014 and 2015, and to discuss potential mechanisms through which climate may have an influence on Ebola virus infection in the natural host, intermediate hosts and humans.
Study Site and Available Data: Our study sites are Guinea (10°23'34.60''N, 3°51'26.42''W), Liberia (6°25'41.00''N, 9°25'46.20''W) and Sierra Leone (8°27'38.00''N, 11°46'47.60''W), which are located on the west coast of Africa. The climate of Guinea, Liberia and Sierra Leone is tropical, and their borders touch one another (Figure 1).
For research purposes, data have been taken from WU (Weather Underground) at 00 GMT and from the CDC (Centers for Disease Control and Prevention).
Figure-1
An overall view of location of the experimental site
Methodology
Time series data were taken from Q1-2014 to Q2-2015 for a better understanding of the weekly case number (WCN) for Guinea, Liberia and Sierra Leone. From Table 1 and Figure 2, it is found that the WCN is low in Q1-2014 and Q2-2015. In Q1-2014 only Guinea recorded any cases. The slope is constant from Q1-2014 to Q2-2014, but in Q3-2014 it increases polynomially and reaches its peak during Q4-2014. In Q1-2015 the slope decreases and becomes constant during Q2-2015. We have identified that in Q1-2014 the WCN of Liberia and Sierra Leone is zero, and that in Q4-2014 the WCN of Liberia is very low compared to Q3-2014. Also, compared to Liberia and Sierra Leone, the WCN of Guinea is low throughout each quarter.
Table-1
Weekly Case Number (WCN) by Quarter (Q)

Quarter            Guinea   Liberia   Sierra Leone
Q1-2014 (start)         4         0              0
Q2-2014 (start)        34         3              0
Q2-2014 (end)          48        80            108
Q3-2014 (end)         512      1685           1509
Q4-2014 (end)         472       159           1732
Q1-2015 (end)           2       226            133
Q2-2015 (end)          45         0             30
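The quarterly WCN readings can be checked programmatically. A small Python sketch, with the figures transcribed from Table 1 (the column-to-country assignment is as listed in the table header), locates each country's peak reading:

```python
# Weekly case numbers (WCN) transcribed from Table 1, one value per
# quarter boundary from Q1-2014 (start) to Q2-2015 (end).
wcn = {
    "Guinea":       [4, 34, 48, 512, 472, 2, 45],
    "Liberia":      [0, 3, 80, 1685, 159, 226, 0],
    "Sierra Leone": [0, 0, 108, 1509, 1732, 133, 30],
}
quarters = ["Q1-2014 (start)", "Q2-2014 (start)", "Q2-2014 (end)",
            "Q3-2014 (end)", "Q4-2014 (end)", "Q1-2015 (end)", "Q2-2015 (end)"]

for country, counts in wcn.items():
    peak = max(range(len(counts)), key=counts.__getitem__)
    print(f"{country}: peak WCN {counts[peak]} at {quarters[peak]}")
```

This reproduces the observations in the text: only Guinea has a nonzero reading at the start of Q1-2014, and Guinea's readings stay well below those of Liberia and Sierra Leone at the late-2014 peak.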
Figure-2
Graphical representation of weekly case numbers (WCN) for each quarter
For Liberia, Figure 3 indicates that the total case number is higher than the total death number, and that both total cases and deaths increase from March 2014 to October 2015.
From Figure 4, we can see that mean temperature (Eq. 1) and mean wind speed (Eq. 2) decrease linearly with negative slope:
Mean temperature: Y = -0.0008x + 59.625   (1)
Mean wind speed:  Y = -0.0008x + 38.101   (2)
This indicates that EVD is inversely related to temperature and wind speed.
Figure-3
Graphical representation of total cases and deaths for Liberia
Figure-4
Representation of the negative slope of temperature and wind speed for Liberia
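Trend lines of the form Y = ax + b, such as Eqs. (1) and (2), are typically obtained by ordinary least squares. A minimal sketch of how such a fit could be reproduced; the day index x and the noise level are illustrative assumptions, not the paper's actual Weather Underground series:

```python
import numpy as np

# Illustrative stand-in for a daily mean-temperature series: x is a day
# index from March 2014 onward, and the coefficients of Eq. (1) are used
# only to generate plausible synthetic readings around the trend line.
x = np.arange(0, 600, 30, dtype=float)
rng = np.random.default_rng(0)
y = 59.625 - 0.0008 * x + rng.normal(0.0, 0.05, x.size)

# Ordinary least-squares fit of Y = slope*x + intercept (degree-1 polynomial).
slope, intercept = np.polyfit(x, y, deg=1)
print(f"Y = {slope:.4f}x + {intercept:.3f}")
```

A negative fitted slope, as reported in Eqs. (1) to (6), means the series declines over the study window.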
For Guinea, Figure 5 likewise indicates that the total case number is higher than the total death number, and that both increase from March 2014 to October 2015.
From Figure 6, we can see that mean temperature (Eq. 3) and mean wind speed (Eq. 4) also decrease linearly with negative slope:
Mean temperature: Y = -0.0008x + 60.631   (3)
Mean wind speed:  Y = -0.0017x + 82.079   (4)
Figure-5
Graphical representation of total cases and deaths for Guinea
Figure-6
Representation of the negative slope of temperature and wind speed for Guinea
For Sierra Leone, Figure 7 likewise indicates that the total case number is higher than the total death number, and that both increase from March 2014 to October 2015.
From Figure 8, we can see that mean temperature (Eq. 5) and mean wind speed (Eq. 6) also decrease linearly with negative slope:
Mean temperature: Y = -0.0004x + 42.984   (5)
Mean wind speed:  Y = -0.001x + 55.684   (6)
This indicates that EVD is inversely related to temperature and wind speed.
Figure-7
Graphical representation of total cases and deaths for Sierra Leone
Figure-8
Representation of the negative slope of temperature and wind speed for Sierra Leone
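The decline implied by each of the six fitted lines can be computed directly from the reported coefficients. A small sketch; the study window is taken as roughly 600 days (March 2014 to October 2015), and the x units are assumed to be days, which the paper does not state explicitly:

```python
# (slope, intercept) pairs as reported in Eqs. (1)-(6).
trends = {
    "Liberia temperature":      (-0.0008, 59.625),
    "Liberia wind speed":       (-0.0008, 38.101),
    "Guinea temperature":       (-0.0008, 60.631),
    "Guinea wind speed":        (-0.0017, 82.079),
    "Sierra Leone temperature": (-0.0004, 42.984),
    "Sierra Leone wind speed":  (-0.0010, 55.684),
}

span = 600  # assumed length of the study window in days
for name, (a, b) in trends.items():
    drop = a * span  # change in Y over the window: Y(span) - Y(0)
    print(f"{name}: {b:.3f} -> {b + drop:.3f} (change {drop:+.2f})")
```

Every slope is negative, so each series declines; under these assumptions the steepest decline belongs to Guinea's wind speed (-0.0017 x 600, about -1.02 units).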
Conclusion
From 2014 to 2015, negative slopes of mean temperature and mean wind speed were identified in Guinea, Liberia and Sierra Leone. During this period the total number of deaths due to Ebola virus disease increased. It is therefore concluded that mean temperature and mean wind speed are inversely proportional to the total number of deaths due to Ebola virus disease. The weekly case number was highest during the third and fourth quarters of 2014.
Acknowledgment
Authors are thankful to Mathematics Department of Ranchi University for support and motivation.
References
1. World Health Organization (WHO) Regional Office for Africa, Ebola virus disease, Liberia (Situation as of 30 March 2014), Brazzaville: WHO Regional Office for Africa, Updated 30 Mar 2014 [Accessed 31 Mar 2014], Available from: http://www.afro.who.int/en/clusters-a-programmes/dpc/epidemic-a-pandemic-alert-and-response/outbreak-news/4072-ebola-haemorrhagic-fever-liberia.html
2. Bagcchi S., Ebola haemorrhagic fever in West Africa, Lancet Infect Dis., 14(5), 375 (2014)
3. Chowell G. and Nishiura H., Transmission dynamics and control of Ebola virus disease (EVD): a review
4. Ebola: Symptoms, Causes and Treatments, http://www.medicalnewstoday.com/articles/
5. Tulchinsky T.H. and Varavikova E.A., The New Public Health: An Introduction for the 21st Century (Chapter 4, page 209)
6. WHO, Ebola virus disease, West Africa - update (31st July), Disease Outbreak News, 2014, http://www.who.int/csr/don/2014_07_31_ebola/en/
7. WHO/AFRO, Ebola virus outbreaks in Africa update (7th May 2014), Disease Outbreak News, 2013, http://www.afro.who.int/en/clusters-a-programmes/dpc/epidemic-a-pandemic-alert-and-response/outbreak-news/4240-ebola-virus-disease-west-africa-6-august-2014.html
8. Zhou X.N., Bergquist R. and Tanner M., Elimination of tropical disease through surveillance and response, Infect Dis Poverty, 2, 1 (2013)
9. Tambo E., Lin A., Xia Z., Jun-Hu C., Wei H., Robert B., Jia-Gang G., Jürg U., Marcel T. and Xiao-Nong Z., Surveillance-response systems: the key to elimination of tropical diseases, Infect Dis Poverty, 3, 17 (2014), http://www.idpjournal.com/content/3/1/17
10. Zhang H., Lai S., Wang L., Zhao D., Zhou D., Lan Y., Buckeridge D.L., Li Z. and Yang W., Improving the performance of outbreak detection algorithms by classifying the levels of disease incidence, PLoS One, 8, e71803 (2013)
11. U.S. airports to enhance screenings for Ebola, http://www.usatoday.com/story/news/nation/2014/10/08/ebola-travel-airport-screenings/16914661/
12. Stop saying global warming caused Ebola!, http://www.salon.com/2014/08/27/global_warming_and_ebola_when_is_it_okay_to_make_the_climate_connection/
13. Deadly by the Dozen: 12 Diseases Climate Change May Worsen, http://www.scientificamerican.com/article/twelve-diseases-climate-change-may-make-worse/
14. Climate change and infectious diseases, http://www.who.int/globalchange/climate/summary/en/index5.html
15. Pinzon J.E., Wilson J.M., Tucker C.J., Arthur R., Jahrling P.B. and Formenty P., Trigger events: enviroclimatic coupling of Ebola hemorrhagic fever outbreaks, Am J Trop Med Hyg., 71(5), 664-74 (2004)
16. Bausch D.G. and Schwarz L., Outbreak of ebola virus disease in Guinea: where ecology meets economy, PLoS Negl Trop Dis., 8(7), e305 (2014)
17. Is climate change key to the spread of Ebola?, http://www.cnbc.com/2014/08/15/is-climate-change-key-to-the-spread-of-ebola.html
Developing Local-Area Networks Using Pervasive Theory
Jyoti Kumari, Nisha Kumari, Priyanka Kumari and Arun Kanti Manna
Department of Computer Science & Engineering, Govt. Polytechnic Silli, Ranchi-835102, Jharkhand, India
Abstract
Flexible technology and online algorithms have garnered profound interest from both analysts and security experts in the last
several years. In this position paper, we disconfirm the investigation of kernels. In order to realize this purpose, we prove that
although IPv6 and virtual machines can connect to accomplish this ambition, the Turing machine and consistent hashing can
cooperate to solve this obstacle.
Keywords: AcredCogman; steganography; QoS; Coyotos.
Introduction
Journaling file systems must work. Though prior solutions to this issue are useful, none have taken the atomic method we
propose in our research. For example, many frameworks observe the producer-consumer problem. Therefore, homogeneous
communication and Internet QoS offer a viable alternative to the construction of Boolean logic.
We explore a novel heuristic for the emulation of the World Wide Web, which we call AcredCogman. For example, many
methodologies evaluate the synthesis of kernels. We emphasize that AcredCogman observes electronic communication. Certainly,
for example, many methodologies analyze the lookaside buffer. Clearly, we see no reason not to use perfect algorithms to harness
optimal information.
Another confusing aim in this area is the analysis of SMPs. We emphasize that AcredCogman caches the analysis of IPv4.
Although conventional wisdom states that this challenge is continuously surmounted by the improvement of local-area networks,
we believe that a different solution is necessary. This combination of properties has not yet been harnessed in existing work.
In this position paper we describe the following contributions in detail. We understand how Web services can be applied to the
refinement of hash tables. We motivate new stable archetypes (AcredCogman), demonstrating that DNS2 and sensor networks are
generally incompatible.
We proceed as follows. Primarily, we motivate the need for A* search. On a similar note, we verify the understanding of cache
coherence. On a similar note, we disprove the technical unification of the producer-consumer problem and e-business. As a result,
we conclude.
Related Works
Authentication: We now compare our method to related constant-time technology solutions25. In this position paper, we addressed all of the challenges inherent in the existing work. On a similar note, we had our solution in mind before D. Q. Thomas et al. published the recent much-touted work on e-business25. Further, our heuristic is broadly related to work in the field of machine learning by Robinson et al.27, but we view it from a new perspective: wide-area networks11. This is arguably fair. Maruyama8,15,10,1 suggested a scheme for constructing IPv6, but did not fully realize the implications of the simulation of the Ethernet at the time. Davis and Johnson developed a similar framework; however, we disconfirmed that our framework runs in Ω(n + (log log log n)/n!) time. This solution is even cheaper than ours. Though we have nothing against the prior approach by Nehru et al.16, we do not believe that approach is applicable to theory.
AcredCogman builds on existing work in Bayesian epistemologies and steganography5,14,23. Clearly, if throughput is a concern,
AcredCogman has a clear advantage. J. Dongarra22 suggested a scheme for emulating event-driven symmetries, but did not fully
realize the implications of semantic communication at the time20, 7, 18. Davis and Nehru proposed several game-theoretic solutions12,
and reported that they have improbable effect on the partition table 13, 17. A methodology for scatter/gather I/O4 proposed by Miller
et al. fails to address several key issues that AcredCogman does address24. Thus, the class of frameworks enabled by AcredCogman
is fundamentally different from existing methods.
The concept of self-learning communication has been improved upon before in the literature. Our system represents a significant advance over this work. Unlike many previous methods19, we do not attempt to request or observe pseudorandom symmetries. Recent work suggests a heuristic for controlling randomized algorithms, but does not offer an implementation. These applications typically require that the foremost authenticated algorithm for the simulation of thin clients is maximally efficient, and we argued here that this, indeed, is the case.
Model
AcredCogman relies on the significant model outlined in the recent well-known work by Scott Shenker in the field of pseudorandom machine learning. This may or may not actually hold in reality. We show a novel framework for the structured unification of linked lists and redundancy in Figure 1. Though physicists generally assume the exact opposite, our system depends on this property for correct behavior. Any robust study of write-ahead logging will clearly require that the location-identity split can be made "smart", interactive, and concurrent; our algorithm is no different. This seems to hold in most cases. Any natural deployment of wide-area networks will clearly require that the seminal wearable algorithm for the exploration of checksums by R. Tarjan et al. runs in Ω(log n) time; AcredCogman is no different. This is a natural property of AcredCogman. Continuing with this rationale, we assume that random communication can create vacuum tubes without needing to request extreme programming. Despite the fact that information theorists always hypothesize the exact opposite, our methodology depends on this property for correct behavior. Further, the design for AcredCogman consists of four independent components: large-scale theory, Boolean logic, linked lists, and IPv4. While steganographers mostly hypothesize the exact opposite, AcredCogman depends on this property for correct behavior. We ran a month-long trace showing that our methodology is not feasible. See our existing technical report21 for details.
Figure-1
A framework showing the relationship between AcredCogman and distributed communication.
AcredCogman relies on the confusing design outlined in the recent acclaimed work by Kobayashi et al. in the field of software
engineering. This may or may not actually hold in reality. Despite the results by P. Sato et al., we can confirm that expert systems
can be made metamorphic, pervasive, and wearable. Even though scholars largely assume the exact opposite, our framework
depends on this property for correct behavior. On a similar note, we assume that the producer-consumer problem and the transistor
can collaborate to realize this purpose. This seems to hold in most cases. Continuing with this rationale, we assume that linked lists
and voice-over-IP can connect to fix this grand challenge. We use our previously simulated results as a basis for all of these
assumptions.
Figure-2
The relationship between our methodology and rasterization
Implementation
In this section, we describe version 1.9, Service Pack 2 of AcredCogman, the culmination of minutes of hacking. Our methodology requires root access in order to cache cacheable archetypes. Physicists have complete control over the hacked operating system, which of course is necessary so that the famous low-energy algorithm for the exploration of spreadsheets by A. Garcia6 is recursively enumerable. Furthermore, the hand-optimized compiler contains about 1265 semicolons of Python. One cannot imagine other approaches to the implementation that would have made designing it much simpler.
Figure-3
These results were obtained by L. Suzuki26; we reproduce them here for clarity11
Evaluation
We now discuss our evaluation strategy. Our overall evaluation seeks to prove three hypotheses: (1) that simulated annealing no longer impacts system design; (2) that complexity stayed constant across successive generations of UNIVACs; and finally (3) that we can do little to influence a system's floppy disk space. Our work in this regard is a novel contribution, in and of itself.
Hardware and Software Configuration: We modified our standard hardware as follows: we executed a prototype on our XBox network to quantify lazily linear-time modalities' influence on Dennis Ritchie's visualization of IPv4 in 2001. We removed some FPUs from our concurrent testbed to examine algorithms. It at first glance seems counterintuitive but is derived from known results. Similarly, researchers added a 10MB optical drive to our mobile telephones. We removed more CISC processors from DARPA's mobile telephones.
Figure-4
These results were obtained by Alan Turing et al. [3]; we reproduce them here for clarity.
AcredCogman does not run on a commodity operating system but instead requires a mutually distributed version of Coyotos. Our
experiments soon proved that monitoring our tulip cards was more effective than automating them, as previous work suggested. We
implemented our Scheme server in Simula-67, augmented with opportunistically mutually exclusive extensions. Further, we
implemented our memory-bus server in Python, augmented with topologically mutually exclusive extensions. This concludes
our discussion of software modifications.
Experimental Results
We have taken great pains to describe our evaluation setup; now, the payoff is to discuss our results. Seizing upon this
approximate configuration, we ran four novel experiments: (1) we measured DHCP and DNS performance on our system; (2) we
ran 20 trials with a simulated database workload, and compared results to our software emulation; (3) we asked (and answered)
what would happen if lazily stochastic massive multiplayer online role-playing games were used instead of fiber-optic cables; and
(4) we compared effective popularity of IPv7 on the OpenBSD, GNU/Hurd and FreeBSD operating systems. All of these
experiments completed without resource starvation or the black smoke that results from hardware failure.
Figure-5
The median latency of AcredCogman, compared with the other frameworks
Now for the climactic analysis of experiments (3) and (4) enumerated above. Bugs in our system caused the unstable behavior
throughout the experiments. On a similar note, we scarcely anticipated how inaccurate our results were in this phase of the
evaluation method.
Shown in Figure 6, the second half of our experiments call attention to AcredCogman’s bandwidth. Note that thin clients have less
jagged effective ROM space curves than do reprogrammed I/O automata. Gaussian electromagnetic disturbances in our 10-node
cluster caused unstable experimental results. Note how deploying wide-area networks rather than simulating them in bioware
produces less jagged, more reproducible results.
Figure-6
Note that time since 1935 grows as popularity of telephony decreases – a phenomenon worth constructing in its own right
Lastly, we discuss experiments (3) and (4) enumerated above. Note that Figure 5 shows the 10th-percentile and not expected noisy
ROM space. Second, the key to Figure 5 is closing the feedback loop; Figure 6 shows how our algorithm’s median sampling rate
does not converge otherwise. Operator error alone cannot account for these results.
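For clarity on the summary statistics quoted above (10th-percentile and median sampling rate), the following sketch shows how such figures are derived from a set of trial measurements. The latency samples are hypothetical, not the paper's data:

```python
import statistics

def percentile(samples, p):
    # Nearest-rank percentile: smallest value with at least p% of
    # the samples at or below it.
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * p // 100))  # ceil(n * p / 100)
    return ordered[rank - 1]

# Hypothetical latency samples (ms) from repeated trials.
latencies = [12.1, 9.8, 10.4, 11.7, 35.2, 10.9, 9.5, 10.2, 11.1, 10.6]

median_latency = statistics.median(latencies)   # 10.75
tenth_percentile = percentile(latencies, 10)    # 9.5
```

Note that the median (10.75 ms) is barely affected by the 35.2 ms outlier, which is why median latency is often preferred over the mean in such evaluations.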
Conclusion
We showed in this position paper that the infamous wearable algorithm for the emulation of flip-flop gates by Johnson runs in Θ(n)
time, and AcredCogman is no exception to that rule. We concentrated our efforts on disconfirming that checksums and digital-to-analog converters can connect to achieve this purpose. Further, we disproved not only that the famous interactive algorithm for the
improvement of IPv4 [9] is in Co-NP, but that the same is true for randomized algorithms. One potentially minimal flaw of
AcredCogman is that it should cache Internet QoS; we plan to address this in future work. To accomplish this objective for the
unfortunate unification of massive multiplayer online role-playing games and e-business, we described an analysis of congestion
control. We expect to see many physicists move to investigating AcredCogman in the very near future.
References
1. Adleman L., Shastri X., Levy H. and Wilson T., Towards the confusing unification of the memory bus and spreadsheets. Journal of Efficient Models, 1, 75–88 (2004)
2. Blum M. and Zhao A., A methodology for the deployment of interrupts. In Proceedings of JAIR (1991)
3. Bose F., Wu N., Martin G. and Bachman C., JOLT: Empathic, cooperative communication. In Proceedings of the Conference on Reliable, Virtual Archetypes (2001)
4. Clark D. and Clarke E., Deploying information retrieval systems using psychoacoustic symmetries. In Proceedings of the Workshop on Electronic, Knowledge-Based Archetypes (1994)
5. Clark D. and Wilkes M.V., The lookaside buffer considered harmful. Tech. Rep. 341, UCSD (2003)
6. Clarke E. and Einstein A., ERGAL: Reliable, extensible epistemologies. Journal of Low-Energy, Game-Theoretic Symmetries, 23, 1–15 (1990)
7. Culler D., Feigenbaum E., Milner R., Sasaki Q. and Lamport L., Deconstructing multi-processors. In Proceedings of the Workshop on Classical, Real-Time Symmetries (2001)
8. Einstein A. and Brooks R., Deconstructing Markov models with OOLERT. In Proceedings of PLDI (1995)
9. Garcia-Molina H., Architecting scatter/gather I/O and hash tables using NONE. In Proceedings of VLDB (1991)
10. Gupta X.N., Ito K. and Yao A., Decoupling vacuum tubes from interrupts in checksums. Journal of Constant-Time, Real-Time, Stochastic Algorithms, 542, 20–24 (1999)
11. Jackson H., Lampson B., Patterson D., Hoare C.A.R., Einstein A., Takahashi P. and Wilkinson J., Contrasting context-free
grammar and information retrieval systems. Journal of Peer-to-Peer, Adaptive Configurations, 55, 20–24 (2000)
12. Kahan W., Decoupling hierarchical databases from checksums in e-business. Journal of Unstable, Cacheable Models 79, 45–
54 (2002)
13. Kahan W. and Garcia F., An analysis of 802.11b. Journal of Compact, Permutable Models 56, 89–104 (2001)
14. Karp R., BELK: Development of local-area networks. Journal of Empathic, Secure Technology 0, 153–198 (2000)
15. Leiserson C., BOHEA: A methodology for the visualization of digital-to-analog converters. Journal of Reliable Algorithms
73, 80–106 (1994)
16. Mahadevan U., Simulating access points and flip-flop gates with bufo. In Proceedings of the USENIX Technical Conference
(2002)
17. Martin A., Shastri W. and Koner C., The relationship between telephony and Web services with PipyMain. In Proceedings of SIGCOMM (1999)
18. Moore Q.W., Zheng U. and Suzuki X., Decoupling information retrieval systems from congestion control in robots. In
Proceedings of INFOCOM (2003)
19. Morrison R.T. and Feigenbaum E., Studying digital-to-analog converters using mobile theory. In Proceedings of FOCS (2001)
20. Nehru B. and Ullman J., Towards the analysis of the World Wide Web that would allow for further study into DHTs. In
Proceedings of the Symposium on Mobile Theory (1999)
21. Newton I., Lambda calculus considered harmful. In Proceedings of the Workshop on Distributed, Modular Methodologies
(1991)
22. Shastri V., Zhao M., Raman J., Nehru C., Watanabe H., Watanabe U. and Shastri D., Replicated, secure communication for
the partition table. In Proceedings of the Workshop on Distributed, Read-Write Epistemologies (1993)
23. Sun T., Koner C. and Schroedinger E., An analysis of semaphores. In Proceedings of the Workshop on Knowledge-Based,
Knowledge-Based Methodologies, (2004)
24. Tarjan R., Deconstructing object-oriented languages with BOM. In Proceedings of the Conference on Linear-Time,
Cooperative Communication (2004)
25. Thompson Q., Milner R., Johnson D., Hoare C. and Stallman R., Towards the emulation of multicast algorithms. OSR 42, 73–
91 (2000)
26. Yao A., Lakshminarayanan K., Maruyama M., Stallman R., Martin U., Subramanian L. and Ito H.S. Towards the
improvement of e-commerce. Journal of Multimodal, Read-Write Technology 32, 77–80 (2003)
27. Zhao Z. and Johnson D., Object-oriented languages considered harmful. In Proceedings of POPL (1993)
Visualizing Local-Area Networks and E-Commerce
Dukhit Mahato, Deepak Kumar Paswan, Nabaranjan Mahato, Shivshankar Singh Munda and Arun Kanti Manna
Department of Computer Science & Engineering, Govt. Polytechnic, Silli, Ranchi-835102, Jharkhand, INDIA
Abstract
In recent years, much research has been devoted to the exploration of robots; contrarily, few have improved the visualization of
gigabit switches. In this paper, we prove the exploration of write-back caches, which embodies the robust principles of networking.
In this work we construct a secure tool for studying IPv4 (HoreBom), disconfirming that replication and active networks are
entirely incompatible.
Keywords: HoreBom, steganography, microkernel, dogfood.
Introduction
The implications of cacheable technology have been far-reaching and pervasive. The notion that steganographers connect with
atomic communication is entirely significant. The notion that statisticians cooperate with the study of kernels is continuously well-received. The evaluation of the Turing machine would minimally improve wearable methodologies.
We use atomic methodologies to disconfirm that B-trees and forward-error correction can interact to overcome this problem [6]. Two
properties make this solution distinct: our heuristic constructs the lookaside buffer, and also HoreBom emulates the emulation of e-commerce, without caching RAID. Nevertheless, this method is rarely considered essential. HoreBom is impossible. Although
similar algorithms improve fiber-optic cables, we fix this challenge without improving the Ethernet.
To our knowledge, our work in this paper marks the first application evaluated specifically for the emulation of wide area networks.
It should be noted that HoreBom manages pervasive symmetries. Next, although conventional wisdom states that this quagmire is
often answered by the study of Lamport clocks, we believe that a different method is necessary. We view programming languages
as following a cycle of four phases: investigation, creation, simulation, and synthesis. The basic tenet of this solution is the analysis
of model checking. HoreBom caches the understanding of randomized algorithms. Though this might seem unexpected, it has
ample historical precedence.
Our main contributions are as follows. To start off with, we demonstrate that even though Markov models can be made certifiable,
stochastic, and homogeneous, replication can be made random, knowledge-based, and omniscient. We investigate how suffix trees
can be applied to the analysis of systems. Third, we introduce a novel framework for the study of systems (HoreBom), which we
use to demonstrate that the famous pseudorandom algorithm for the exploration of the transistor by Ron Rivest et al. runs in Θ(log n)
time. In the end, we construct an application for flexible theory (HoreBom), demonstrating that scatter/gather I/O can be made
optimal, mobile, and compact.
The roadmap of the paper is as follows. We motivate the need for SCSI disks. We show the understanding of journaling file
systems. We disprove the construction of the producer-consumer problem. Next, to overcome this quagmire, we show that the
acclaimed extensible algorithm for the synthesis of public-private key pairs by H. R. Raman [6] runs in O(n²) time. As a result, we
conclude.
Related Works
Our solution builds on previous work in relational communication and disjoint software engineering. Further, while I. Daubechies
also constructed this solution, we analyzed it independently and simultaneously [20]. In this work, we solved all of the obstacles
inherent in the related work. We had our approach in mind before Thomas et al. published the recent famous work on online
algorithms [6]. Unlike many prior approaches, we do not attempt to request or study neural networks [19]. This is arguably ill-conceived.
Along these same lines, Sato et al. suggested a scheme for investigating knowledge-based epistemologies, but did not fully realize
the implications of collaborative epistemologies at the time [3]. The original approach to this question by G. Ranganathan et al. [13] was
adamantly opposed; contrarily, it did not completely answer this quagmire. Performance aside, our system studies even more
accurately.
While we know of no other studies on multimodal information, several efforts have been made to harness Smalltalk [8, 1]. Brown and
Miller motivated several atomic methods [19], and reported that they have improbable effect on wearable configurations [5]. These
systems typically require that the transistor and hash tables are rarely incompatible [14, 20], and we demonstrated in our research that
this, indeed, is the case.
A number of existing methodologies have visualized atomic models, either for the emulation of linked lists [11, 10, 21] or for the
analysis of Scheme [18]. Davis and Thompson [2, 7] suggested a scheme for improving DNS, but did not fully realize the implications of
courseware at the time [22]. Further, Li et al. developed a similar algorithm; unfortunately, we disproved that our framework follows a
Zipf-like distribution [13]. Contrarily, these approaches are entirely orthogonal to our efforts.
Design
Our framework relies on the practical model outlined in the recent well-known work by R. Johnson in the field of discrete mutually
exclusive programming languages. We assume that scatter/gather I/O can visualize Markov models without needing to study DNS.
We estimate that link-level acknowledgements and Internet QoS can synchronize to achieve this intent. Furthermore,
we consider a system consisting of n sensor networks. See our prior technical report [15] for details.
Figure-1
A linear-time tool for enabling flip-flop gates
Consider the early methodology by Gupta and Davis; our architecture is similar, but will actually solve this issue. This is an
important property of HoreBom. Further, we assume that each component of HoreBom learns Web services, independent of all
other components. See our previous technical report [4] for details.
Implementation
After several weeks of difficult architecting, we finally have a working implementation of our application. It might seem
counterintuitive but fell in line with our expectations. On a similar note, it was necessary to cap the work factor used by HoreBom
to 5211 bytes. The client-side library contains about 79 semicolons of Lisp. Along these same lines, the virtual machine monitor
contains about 63 instructions of Smalltalk [16]. Our method requires root access in order to provide the Internet. One cannot
imagine other solutions to the implementation that would have made coding it much simpler.
Evaluation
Systems are only useful if they are efficient enough to achieve their goals. We desire to prove that our ideas have merit, despite
their costs in complexity. Our overall performance analysis seeks to prove three hypotheses: (1) that we can do a whole lot to toggle
a methodology’s NV-RAM speed; (2) that 10th-percentile energy is an obsolete way to measure hit ratio; and finally (3) that
Moore’s Law no longer toggles performance. Note that we have decided not to measure a methodology’s traditional ABI.
Figure-2
Note that time since 1970 grows as latency decreases – a phenomenon worth developing in its own right
Figure-3
These results were obtained by Kumar [17]; we reproduce them here for clarity [9]
Further, we are grateful for exhaustive red-black trees; without them, we could not optimize for scalability simultaneously with response time. We hope that this
section proves to the reader Butler Lampson's simulation of the Ethernet in 1977.
Hardware and Software Configuration: Our detailed evaluation necessitated many hardware modifications. We performed a
packet-level simulation on our 2-node overlay network to prove mutually pseudorandom communication's lack of influence on the
mystery of steganography. Configurations without this modification showed muted time since 1935. To begin with, we added some
RISC processors to our system. To find the required USB keys, we combed eBay and tag sales. Continuing with this rationale, we
tripled the floppy disk throughput of our client-server testbed to consider our mobile telephones. On a similar note, we doubled the
effective RAM speed of MIT's system to investigate our desktop machines. We struggled to amass the necessary CISC processors.
When Mark Gayson made DOS's code complexity autonomous in 1935, he could not have anticipated the impact; our work here inherits
from this previous work. Our experiments soon proved that reprogramming our SoundBlaster 8-bit sound cards was more effective
than automating them, as previous work suggested.
Figure-4
Note that clock speed grows as popularity of randomized algorithms decreases – a phenomenon worth analyzing in its own
right. Though such a hypothesis at first glance seems perverse, it is derived from known results
Our experiments soon proved that instrumenting our parallel public-private key pairs was more effective than microkernelizing
them, as previous work suggested. We note that other researchers have tried and failed to enable this functionality.
Experimental Results: Is it possible to justify having paid little attention to our implementation and experimental setup? The
answer is yes. With these considerations in mind, we ran four novel experiments: (1) we ran 77 trials with a simulated DHCP
workload, and compared results to our middleware deployment; (2) we dogfooded HoreBom on our own desktop machines, paying
particular attention to effective RAM throughput; (3) we deployed 68 Apple ][es across the sensor-net network, and tested our von
Neumann machines accordingly; and (4) we ran 71 trials with a simulated DNS workload, and compared results to our earlier
deployment. We discarded the results of some earlier experiments, notably when we ran 32 trials with a simulated WHOIS
workload, and compared results to our earlier deployment.
Now for the climactic analysis of the second half of our experiments. The curve in Figure 3 should look familiar; it is better known
as H(n) = log n [12]. Note the heavy tail on the CDF in Figure 4, exhibiting improved interrupt rate. Note that Figure 4 shows the
expected and not average lazily noisy floppy disk space.
We have seen one type of behavior in Figures 4 and 3; our other experiments (shown in Figure 3) paint a different picture. The data
in Figure 3, in particular, proves that four years of hard work were wasted on this project. Note that gigabit switches have less
discretized hit ratio curves than do refactored information retrieval systems. Even though this finding is entirely a typical ambition,
it is derived from known results. Similarly, operator error alone cannot account for these results. Even though such a hypothesis is
generally an unfortunate purpose, it regularly conflicts with the need to provide wide-area networks to statisticians.
Lastly, we discuss experiments (1) and (3) enumerated above. Error bars have been elided, since most of our data points fell outside
of 33 standard deviations from observed means. The many discontinuities in the graphs point to muted throughput introduced with
our hardware upgrades. Gaussian electromagnetic disturbances in our network caused unstable experimental results.
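As an aside, screening data points against a standard-deviation threshold, as the error-bar discussion above implies, can be sketched as follows. The readings are hypothetical, not the paper's measurements:

```python
import statistics

def within_k_sigma(samples, k=3.0):
    # Keep only samples within k sample standard deviations of the mean.
    mean = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return [x for x in samples if abs(x - mean) <= k * sigma]

readings = [10.0, 10.2, 9.9, 10.1, 10.3, 9.8, 42.0]  # one wild outlier
filtered = within_k_sigma(readings, k=2.0)  # the 42.0 reading is dropped
```

Note that a single large outlier inflates both the mean and the standard deviation, which is why very wide thresholds (such as the 33 sigma mentioned above) reject almost nothing.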
Conclusion
In conclusion, in this paper we confirmed that reinforcement learning and kernels can collude to solve this grand challenge.
HoreBom has set a precedent for wide-area networks, and we expect that experts will develop our framework for years to come. We
plan to make our heuristic available on the Web for public download.
References
1. Brown O. and Cocke J., Stochastic technology for replication. In Proceedings of the Conference on Metamorphic Information (1990)
2. Cocke J., Johnson D.C., Mccarthy J., Adleman L. and Newell A., Hug: Development of Moore's Law. Journal of Lossless, Efficient Methodologies, 7, 20–24 (2001)
3. Dongarra J., Kumar H., Kaashoek M.F., Kundu S. and Sasaki J.F., A deployment of neural networks. In Proceedings of the USENIX Technical Conference (2002)
4. Estrin D., Gray J., Anderson A., Hennessy J., Brown O., Nehru P. and Martin I.L., Contrasting digital-to-analog converters and architecture. In Proceedings of the Workshop on Signed Models (2002)
5. Garcia-Molina H., Decoupling suffix trees from Internet QoS in multi-processors. Journal of Concurrent Symmetries, 51, 75–80 (1995)
6. Gray J., Scott D.S. and Thompson K., Psychoacoustic, cooperative methodologies. Journal of Robust, Ambimorphic Epistemologies, 26, 55–64 (2003)
7. Ito J., Bose W., Estrin D., Qian H., Sasaki H., Aravind S., Jayanth K., Morrison R.T. and Qian G., Sepal: A methodology for the refinement of write-back caches. Tech. Rep. 149-180-3021, MIT CSAIL (1995)
8. Johnson E., Decoupling thin clients from Markov models in replication. In Proceedings of MOBICOM (2005)
9. Moore R., Decoupling the producer-consumer problem from multicast methods in replication. In Proceedings of the Workshop on Embedded Configurations (2002)
10. Narasimhan I., Nehru L. and Martin L., Analysis of Voice-over-IP. TOCS, 685, 44–54 (2002)
11. Newell A. and Nygaard K., Deconstructing symmetric encryption. Tech. Rep. 893/9144, University of Washington (1991)
12. Pnueli A., Wang Q., Davis F. and Sato R., The effect of electronic methodologies on artificial intelligence. Journal of Client-Server, Reliable Information, 1, 59–62 (1995)
13. Quinlan J., Miller R., Hawking S. and Lee W., Understanding of Markov models. Journal of Lossless, Autonomous Models, 68, 77–95 (1994)
14. Raman S. and Rivest R., A case for expert systems. In Proceedings of the Symposium on Trainable Models (2003)
15. Ravi W. and Gupta A., Decoupling DNS from von Neumann machines in RPCs. In Proceedings of the Conference on
Modular Information (2005)
16. Robinson E. and Daubechies I., Enabling the Internet using ambimorphic technology. IEEE JSAC, 54, 72–90 (2005)
17. Robinson Q., A case for compilers. In Proceedings of SIGMETRICS (2003)
18. Shamir A., Decoupling the Ethernet from the look aside buffer in architecture. In Proceedings of the Conference on Lossless,
Ubiquitous Theory (2005)
19. Shastri R., Forward-error correction considered harmful. Journal of Automated Reasoning, 92, 81–107 (2005)
20. Takahashi P. and Bose A., The impact of linear-time information on e-voting technology. Tech. Rep. 8173-9452, UIUC,
(2004)
21. Williams N., Gray J., Wu U.Q., Pnueli A. and Mccarthy J., Deconstructing DHCP. NTT Technical Review, 25, 43–55 (2001)
22. Wu M. and Zheng H., On the deployment of telephony. Journal of Large-Scale, Introspective Archetypes, 7, 71–90 (2001)
Modeling in GIS with Spatial Data
Swagata Ghosh1 and A.K. Upadhaya2
1University Department of Mathematics, Ranchi University, Ranchi
2University Department of Geology, Kolhan University, Chaibasa
Abstract
The spatial data are obtained from the images produced by satellites. The data are in vector and raster formats. The vector data
are discrete while the raster data are continuous; the continuous data include spatial surfaces and maps. A map is a model which is used
for geographically referenced data. The contents of the satellite maps include soil, water, land cover, climatic conditions,
atmospheric phenomena, and the distribution of living animal and plant species. Tracking the spread of an epidemic, the effects of war, natural
hazards, and meteorological prediction have become easier with these satellite images. A Geographical Information System (GIS)
is a computer system used for storing, analyzing and displaying geospatial data. The spatial data are used in layers, and different layers
are used to develop models of some specific domain. The models may be linear or quadratic, and are also classified as
static or dynamic, and deterministic or stochastic in nature. The present paper is on the water bodies of Jharkhand state. The Digital Elevation
Model (DEM) is used for geospatial data analysis and spatial modeling of elevated areas.
Keywords: GIS, DEM, Spatial data
Introduction
A Geographical Information System (GIS) is a computer system used for capturing, storing, querying, analyzing and displaying
geospatial data [1]. The geospatial data, or geographically referenced data, comprise spatial features like roads, water bodies etc.,
whereas the attribute data describe the characteristics of the spatial features. The maps obtained from satellite images are spatial
data; these are location specific, with definite latitude and longitude values. The geographic data are basically of two
formats: the first is the object-based model and the second is the field-based model [2]. The objects are discrete and definite, with identifiable
boundaries or spatial extent. These are described with some characteristic features in the form of a point (a well), a line (a railway
line) or an area (forest cover). At the data model level, the object models are represented as the vector model, expressed in x-,
y-coordinates. The data are shown as water bodies in Jharkhand stored in shape files (Figure-1).
The field-based models obtained from satellite images are continuous, like a land use map, wet land etc. At the data model level these
data are of raster form (Figure-2). The raster data model uses a simple data structure with fixed rows and columns (Figure-3).
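The fixed row-and-column structure of a raster layer can be illustrated as a simple 2-D grid of cell values. This is a generic sketch with hypothetical elevations, not data from the study area:

```python
# A raster layer as a fixed grid of rows x columns of cell values.
# Cell values here are hypothetical elevations in metres.
raster = [
    [420, 435, 450],
    [410, 425, 440],
    [400, 415, 430],
]

n_rows = len(raster)
n_cols = len(raster[0])

def cell(raster, row, col):
    """Look up a cell value by its row/column index."""
    return raster[row][col]

print(n_rows, n_cols, cell(raster, 1, 2))  # → 3 3 440
```

Every cell covers the same ground area, so a cell's geographic position follows directly from its row and column indices plus the grid's origin and cell size.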
Figure-1
Vector Layer
Figure-2
Raster Layer
Raster maps are available from the satellites of different countries, e.g. Landsat 7 of the U.S. and SPOT of France. The map of the
study area is obtained as the input to the raster data model.
Figure-3
Pixel Images in Raster Layer
Figure-4
Raster Layer of the study area
Objectives
A model is a simplified representation of an object. In GIS, a map is treated as a model. The main objective of GIS modeling is to
understand location-based problems in the distribution of a watershed region and predict the problems related to it. GIS models
are well suited as predictive models of real-world phenomena.
The steps followed in GIS modeling are: i. state the problem; ii. decompose the problem into sub-problems;
iii. search for relevant data; iv. decide on one or more spatial analytical tools; v. decide whether a raster or vector data model is
suitable; vi. implement the model in the GIS environment.
In the present work, the watershed area is to be located and demarcated. The elevation model (DEM) to be generated will help to
estimate the gradient and flow direction of water.
Methodology
The raster map of the study area, Pithauriya in Kanke block of Ranchi district, is obtained from the multidate satellite map on the
official website of the Jharkhand Space Application Centre portal [3]. The raster image of the map is saved in TIFF
format. The file is then opened as a raster data layer. The raster map is treated as an input for the Digital Elevation Model (DEM). This is
processed through the geo-relational algorithms and the DEM map is obtained. The DEM is an array of uniformly spaced elevation
data. The DEM map helps in producing the slope map (Figure-4) and the aspect map (Figure-5). The data models of raster images
are processed through the geo-processing algorithms. The contour map is also generated, which shows the lines joining the places of
equal elevation (Figure-6).
These maps are helpful in finding the trend of the flow of water [4].
The properties of the map are studied. Histograms are generated from the slope, aspect and contour maps. The histograms in
grey scale show the variation in the graduation of the data. This is helpful in estimating the topography of the place
from a two-dimensional figure. The contour map at a 100-meter interval is generated from the DEM map. The highest point in the map is
above 500 meters, situated in the north of the area. The contour lines join areas of equal elevation. From the contour
map it is ascertained whether the topography of the area is smooth or rugged. The model produced is a predictive one for the planning of
development activities and conservation of resources.
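The slope derivation from the DEM described above can be illustrated with central differences over an elevation grid. This is a generic sketch (hypothetical elevations and a 100 m cell size), not the exact algorithm used by the GIS software:

```python
import math

def slope_degrees(dem, row, col, cell_size=100.0):
    """Approximate slope at an interior cell from central differences
    of the elevation grid (cell_size in metres)."""
    dz_dx = (dem[row][col + 1] - dem[row][col - 1]) / (2 * cell_size)
    dz_dy = (dem[row + 1][col] - dem[row - 1][col]) / (2 * cell_size)
    # Steepest-descent gradient magnitude, converted to an angle.
    return math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))

# Hypothetical 3x3 DEM (elevations in metres) around one cell.
dem = [
    [500, 510, 520],
    [480, 490, 500],
    [460, 470, 480],
]
print(round(slope_degrees(dem, 1, 1), 2))
```

The aspect map follows from the same two partial derivatives: the compass direction of steepest descent is `atan2(dz_dy, -dz_dx)` (up to the chosen angular convention), which is why slope and aspect layers are typically produced together from one DEM pass.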
Result
The spatial data provide the raster data model. The raster layer, processed through the geo-relational algorithm, produces the Digital Elevation Model [5].
The DEM is taken as input data to produce the slope, aspect and elevation maps and the respective criterion weights.
The black portion of the slope map represents the area not to be considered. The white bands with the patches represent the slope.
Histograms are generated for all the maps produced. The graduations in the histogram are indicative of the gradients.
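The histogram generation mentioned above amounts to binning cell values. A minimal sketch with hypothetical slope values (not the study-area data):

```python
from collections import Counter

def histogram(values, bin_width):
    # Count values into fixed-width bins keyed by each bin's lower edge.
    return Counter((v // bin_width) * bin_width for v in values)

# Hypothetical slope values (degrees) flattened from a raster layer.
slopes = [2, 3, 7, 8, 11, 12, 12, 14, 21, 22]
hist = histogram(slopes, bin_width=5)
# four cells fall in the 10-15 degree bin
```

A grey-scale histogram of a raster layer is the same computation applied to the flattened grid of pixel values.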
Figure-3
Slope Map
Figure-4
Aspect Map
Figure-5
Contour Map
Conclusion
The raster data model obtained directly from the satellite images is processed through the geo-relational algorithms. The DEM map
is obtained from the raster map. The models help in the visualization of the map. The aspect and slope maps are generated
from the DEM layer. They help in the prediction of the dependent variables from the independent variables, yielding a linear model. The
contour map at an interval of 100 meters is obtained from the DEM map. These attributes are used in the prediction of the flow
direction of water and the storage capacity of water in a watershed.
References
1. K. Chang, Introduction to Geographic Information Systems, TMH, 302-325 (2008)
2. C.P. Lo and A.K.W. Yeung, Concepts and Techniques of Geographic Information Systems, 2nd ed., PHI, 393-400 (2012)
3. Development group of Jharkhand Space Application Centre, http://210.212.20.94:8082
4. S. Ghosh, Application of Geographical Information System in Watershed Management Models for sustainable water resources management with special reference to Jharkhand, JJMDS, 13(3), 6699-6707 (2015)
5. Mallikarjuna K.R.K. Prasad, P. Udaya Bhaskar and M. Sailakshmi, Watershed modeling of Krishna delta, Andhra Pradesh using GIS and Remote Sensing Techniques, International Journal of Engineering Science and Technology, 4(11), 4539 (2012)
Structure, Microstructure and Dielectric Properties of (1-x)Ba0.06(Na1/2Bi1/2)0.94TiO3-xNaTaO3 Lead-Free Ceramics
Sumit Kr. Roy1,a), S. Chaudhuri1, S.N.Singh2 and K. Prasad3
1Department of Physics, St. Xavier’s College, Ranchi 834001, India
2University Department of Physics, Ranchi University, Ranchi 834008, India
3Aryabhatta Centre for Nanoscience and Nanotechnology, Aryabhatta Knowledge University, Patna 800001, India
Abstract
Lead-free solid solutions (1-x)Ba0.06(Na1/2Bi1/2)0.94TiO3-xNaTaO3 (0 ≤ x ≤ 1.0) were prepared by the conventional ceramic fabrication
technique. X-ray diffraction (XRD) and Rietveld refinement analyses of (1-x)Ba0.06(Na1/2Bi1/2)0.94TiO3-xNaTaO3 ceramics were
carried out using X’Pert HighScore Plus software to determine the crystal symmetry, space group, and unit cell
dimensions. Rietveld refinement revealed that the orthorhombic NaTaO3 structure was completely diffused into the
Ba0.06(Na1/2Bi1/2)0.94TiO3 lattice with rhombohedral-tetragonal symmetry. SEM images showed a change in grain shape with the
increase of NT in the BNBT matrix. The temperature-dependent dielectric study showed that with increasing NT concentration the
maximum value of ε (εmax) decreases, while the dielectric peak (Tm) shifts towards lower temperature up to x = 0.75 and then
starts shifting towards higher temperature, and the ε–T curve sharpens, i.e. the phase transition becomes less diffuse.
Keywords: Lead- free; Rietveld refinement
Introduction
Ceramics with perovskite ABO3-type structures have received considerable attention due to their excellent functional properties and
technological relevance. They are widely used in various electronic and microelectronic devices such as capacitors, piezoelectric
transducers, pyroelectric detectors/sensors, memory devices, SAW substrates and MEMS1,2. Recently they have also been employed in high-power applications such as defibrillators, detonators and power electronics3, and in intravascular imaging via intravascular
ultrasound4. Materials used for the fabrication of such devices were mostly lead-based, but there is nowadays a global concern
to develop environment-friendly lead-free materials. The literature suggests that Bi-based compounds are among the most likely
replacements for the lead-based materials. Among the Bi-based systems, (1-x)(Bi1/2Na1/2)TiO3-xBaTiO3 is considered one of
the potential lead-free candidates for dielectric and/or piezoelectric applications. It exhibits a rhombohedral-tetragonal
morphotropic phase boundary (MPB) around 0.06 ≤ x ≤ 0.08 with remarkable piezoelectric and electromechanical properties. Sodium
tantalate (NaTaO3) is a perovskite-type dielectric material having an orthorhombic structure with space group Pbnm. It possesses a
negative temperature coefficient of permittivity but does not exhibit the room-temperature ferroelectricity shown by similar
materials such as NaNbO3 and BiInO3.
In the present work, we have doped Ba0.06(Bi1/2Na1/2)0.94TiO3 with NaTaO3 and systematically synthesized and characterized different
solid solutions having the general formula (1-x)Ba0.06(Na1/2Bi1/2)0.94TiO3-xNaTaO3 (0 ≤ x ≤ 1). Considering the tolerance factor, the
ionic radii of the A and B sites, the coulombic and strain interactions, and the charge balance, the possible B-site substitution
for the solid solution of (1-x)Ba0.06(Na1/2Bi1/2)0.94TiO3 and xNaTaO3 is formulated as Ba0.06(1-x)Na(0.47+0.53x)Bi0.47(1-x)Ti1-xTaxO3.
Structural and electrical characterizations of these samples were then conducted.
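The charge balance invoked above can be checked directly: the cation charges in Ba0.06(1-x)Na(0.47+0.53x)Bi0.47(1-x)Ti1-xTaxO3 must sum to +6 against the three O2− anions for every x. A short Python verification (an illustrative check, not part of the paper):

```python
# Charge balance for Ba0.06(1-x) Na(0.47+0.53x) Bi0.47(1-x) Ti(1-x) Ta(x) O3:
# the total cation charge must equal +6 to offset three O2- per formula unit.
def cation_charge(x):
    return (2 * 0.06 * (1 - x)        # Ba2+
            + 1 * (0.47 + 0.53 * x)   # Na+
            + 3 * 0.47 * (1 - x)      # Bi3+
            + 4 * (1 - x)             # Ti4+
            + 5 * x)                  # Ta5+

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(cation_charge(x) - 6.0) < 1e-9
```

The sum is identically 6 for all x, confirming that the formulated composition is charge-neutral across the whole series.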
Experimental Details
The polycrystalline samples Ba0.06(Na1/2Bi1/2)0.94TiO3 (BNBT) and NaTaO3 (NT) were prepared separately by the solid-state reaction
technique using high-purity (> 99.9%) carbonates/oxides: BaCO3, Na2CO3, Bi2O3, TiO2 and Ta2O5. The following chemical reactions
were carried out in air at 1140 °C and 1080 °C respectively:
0.06 BaCO3 + (0.94/4) Na2CO3 + (0.94/4) Bi2O3 + TiO2 → Ba0.06(Na1/2Bi1/2)0.94TiO3 + 0.295 CO2
Ta2O5 + Na2CO3 → 2 NaTaO3 + CO2
Completion of the reactions and the formation of the desired compounds were checked by the X-ray diffraction technique. BNBT was then
doped with varying percentages of NT. Wet mixing was carried out with methanol as the medium for homogeneous mixing. A series
of (1-x)Ba0.06(Na1/2Bi1/2)0.94TiO3-xNaTaO3 (0 ≤ x ≤ 1) samples were compacted into thin (~1.5 mm) cylindrical disks with an
International Science Congress Association
46
Proceeding of Recent Trends in Computations and Mathematical Analysis in Engineering and Sciences-2015
Ranchi, Jharkhand, India 20th -21st Nov 2015
applied uniaxial pressure of 5 tons. The samples were finally sintered between 1100 °C and 1160 °C for 4 h. The sintered pellets were
ground carefully to ensure parallel surfaces. The circular surfaces of the disks were covered with thin silver-paste layers and
fired at 500 °C for 30 min; these act as the electrodes for the electrical measurements. The XRD spectra were recorded on
sintered pellets of (1-x)Ba0.06(Na1/2Bi1/2)0.94TiO3-xNaTaO3 (0 ≤ x ≤ 1.0) with an X-ray diffractometer (XPERT-PRO, PANalytical)
at room temperature, using CuKα radiation (λ = 1.5406 Å), over a wide range of Bragg angles (10° ≤ 2θ ≤ 80°).
The grain morphology and grain sizes were characterized by scanning electron microscopy (SEM; JEOL JSM7600, Japan). The
electrical measurements were carried out on a symmetrical cell of type Ag|Ceramic|Ag, where Ag is a conductive paint coated on
either side of the pellets. Electrical impedance (Z), phase angle (θ), loss tangent (tan δ) and capacitance (C) were measured at 100
kHz at different temperatures (35 °C–500 °C) using a computer-controlled Alpha high-resolution dielectric analyser
(NOVOCONTROL Technologies GmbH & Co. KG, Germany) at a heating rate of 2 °C/min.
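The dielectric constant discussed later is conventionally obtained from the measured capacitance of the electroded disk via εr = C·t / (ε0·A). The sketch below assumes a hypothetical 10 mm pellet diameter and 1 nF reading purely for illustration; neither value is given in the paper.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def rel_permittivity(cap_f, thickness_m, diameter_m):
    """Relative permittivity of a parallel-plate (disk) sample from measured C."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return cap_f * thickness_m / (EPS0 * area)

# Hypothetical reading: 1 nF on a 1.5 mm thick, 10 mm diameter pellet
eps_r = rel_permittivity(1e-9, 1.5e-3, 10e-3)
```

The same conversion, applied at each temperature step, yields the ε–T curves of Figure 3.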
Results and Discussions
XRD patterns and Rietveld refinement analyses
Figure 1(a) shows the XRD patterns of the (1-x)Ba0.06(Na1/2Bi1/2)0.94TiO3-xNaTaO3 (0 ≤ x ≤ 1.0) ceramics. All the compositions are
monophasic and pure, indicating complete diffusion of NT into the BNBT matrix, with Ta5+ occupying the Ti4+ sites and forming a
homogeneous solid solution. The change in the shape and intensity of the peaks and the appearance of a new peak (near 52.5°) are
clearly evident and point towards a change in lattice parameter and crystal symmetry with the inclusion of NT. The crystal structure of
the samples at room temperature gradually changed from rhombohedral-tetragonal to orthorhombic symmetry. On increasing the
NT content, the peaks shift slightly towards lower Bragg angle, suggesting a slight increase in lattice parameters. This may be
attributed to the fact that the ionic radius of Ta5+ (0.69 Å) is slightly larger than that of Ti4+ (0.605 Å).
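The link between a shift towards lower Bragg angle and a larger lattice parameter follows from Bragg's law, λ = 2d sin θ: a smaller θ implies a larger interplanar spacing d. A small sketch with hypothetical peak positions (the paper does not list exact 2θ values):

```python
import math

WAVELENGTH = 1.5406   # Cu K-alpha wavelength in angstrom, as quoted in the text

def d_spacing(two_theta_deg):
    """Interplanar spacing from Bragg's law, lambda = 2 d sin(theta), with n = 1."""
    return WAVELENGTH / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))

# Hypothetical peak positions: a shift from 32.5 to 32.3 degrees 2-theta
d_before = d_spacing(32.5)
d_after = d_spacing(32.3)
assert d_after > d_before   # lower Bragg angle -> larger d -> larger lattice parameter
```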
Figures 1(b) and 1(c) show the Rietveld refinement plots for the Ba0.06(Na1/2Bi1/2)0.94TiO3 [BNBT] and NaTaO3 [NT] ceramics. The
rhombohedral-tetragonal structure of BNBT and the orthorhombic structure of NT were refined using X’Pert HighScore
Plus software, selecting the space groups R3c (161) for BNBT [5] and Pbnm (62) for NT [6]. In the Rietveld refinement, the measured
diffraction patterns were fitted to the ICDD reference numbers 98-010-6243 for BNBT and 98-010-2750 for NT. The
results obtained from the Rietveld refinement show good agreement between the measured XRD patterns and the theoretical line
profiles.
Figure-1
Rietveld refined XRD plots of: (a) (1-x)Ba0.06(Na1/2Bi1/2)0.94TiO3, (b) Ba0.06(Na1/2Bi1/2)0.94TiO3 and (c) NaTaO3 ceramics
It is also observed that the experimentally observed and theoretically calculated XRD profiles display only a small
difference, as illustrated by the line Yobs − Ycal. The profile-fitting procedure adopted was the minimization of the function χ2. The value of χ2
comes out to be 2.56 for BNBT and 10.57 for NaTaO3, which may be considered good for these estimations. The lattice parameters
and atomic positions obtained from the Rietveld refinement are listed in Table 1.
Table-1
Lattice parameters, unit cell volume, atomic coordinates and site occupation obtained by Rietveld refinement for the BNBT
and NT ceramics
Atoms   Wyckoff   s.o.f      x          y          z
O       18b       1.000000   0.329670   0.140330   0.141630
Ti      6a        1.000000   0.000000   0.000000   0.000000
Ba      6a        0.055000   0.000000   0.000000   0.000000
Bi      6a        0.472500   0.000000   0.000000   0.000000
Na      6a        0.472500   0.000000   0.000000   0.000000
R3c (161) - rhombohedral (a = b = 5.518(2) Å; c = 13.513(8) Å; V = 356.36 Å3; Rp = 8.31 %; Rwp = 10.28 %; Rexp = 6.41 % and χ2 = 2.56)

Atoms   Wyckoff   s.o.f      x          y          z
Na      4c        1.000000   0.018000   0.250000   0.497700
O1      4c        1.000000   0.490000   0.250000   0.561000
Ta      4a        1.000000   0.000000   0.000000   0.000000
O2      8d        1.000000   0.282000   0.030000   0.214000
Pbnm (62) - orthorhombic (a = 5.52(2) Å; b = 7.79(3) Å; c = 5.48(2) Å; V = 236.02 Å3; Rp = 13.64 %; Rwp = 17.3 %; Rexp = 5.32 % and χ2 = 10.57)
The fitting parameters (Rp, Rwp, Rexp and χ2) suggest that the refinement results are reliable.
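The quoted goodness of fit is related to the R-factors by χ2 = (Rwp/Rexp)2, which can be verified directly against the values listed in Table 1:

```python
# Rietveld goodness of fit: chi^2 = (Rwp / Rexp)^2.
def chi_squared(rwp, rexp):
    return (rwp / rexp) ** 2

# The R-factors from Table 1 reproduce the quoted chi^2 within rounding
assert abs(chi_squared(10.28, 6.41) - 2.56) < 0.02    # BNBT
assert abs(chi_squared(17.3, 5.32) - 10.57) < 0.02    # NT
```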
Figure-2
SEM images of (1-x)Ba0.06(Na1/2Bi1/2)0.94TiO3-xNaTaO3 ceramics
Figure 2 shows the SEM micrographs of the sintered (1-x)Ba0.06(Na1/2Bi1/2)0.94TiO3-xNaTaO3 ceramics (0 ≤ x ≤ 1.0). A non-uniform
distribution of grains is observed in all the compositions. For the pure BNBT ceramic the grains are well developed and have a dense
structure, with rectangular grain faces. The average grain size for pure BNBT is nearly 3 μm. With increasing NT
concentration, the grain size as well as the grain morphology of the (1-x)Ba0.06(Na1/2Bi1/2)0.94TiO3-xNaTaO3 ceramics changes
considerably. The addition of NT to the BNBT matrix promotes a reduction in average grain size, which becomes nearly 1.8 μm for x =
0.50. This reduction in grain size is due to the rise of symmetry in the unit cells, as confirmed by the tolerance
factor, which increases from 0.95 to 0.96 as the doping percentage increases from 0 to 50. The average grain size of the ceramics is minimum at
x = 0.75; finally, when the NT matrix predominates, the grain size increases again. Doping modifies the grain shape from rectangular-like to granular-like grains.
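The Goldschmidt tolerance factor invoked above is t = (rA + rO)/[√2 (rB + rO)]. The sketch below mixes the two B-site radii quoted in the text; the A-site and oxygen radii are assumed Shannon values, and because the A-site composition also changes with x (which a fixed rA ignores), the absolute values differ somewhat from the 0.95–0.96 quoted in the paper.

```python
import math

# Goldschmidt tolerance factor: t = (rA + rO) / (sqrt(2) * (rB + rO)).
# Only rTi4+ = 0.605 A and rTa5+ = 0.69 A are quoted in the text; the A-site
# and oxygen radii below are assumed Shannon values, for illustration only.
R_O = 1.40    # O2-
R_A = 1.39    # effective 12-coordinate A-site radius (assumed)

def tolerance(r_a, r_b):
    return (r_a + R_O) / (math.sqrt(2.0) * (r_b + R_O))

def r_b_mixed(x):
    """Linear mixing of Ti4+ and Ta5+ on the B site at doping level x."""
    return (1.0 - x) * 0.605 + x * 0.69

t_undoped = tolerance(R_A, r_b_mixed(0.0))
t_half = tolerance(R_A, r_b_mixed(0.5))
```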
Figure 3 shows the dynamic response of the dielectric constant of the (1-x)Ba0.06(Na1/2Bi1/2)0.94TiO3-xNaTaO3 ceramics with 0 ≤ x ≤ 1.0,
under time-varying electric fields, as a function of temperature at 100 kHz. As is typical of normal ferroelectrics, ε increases
gradually with temperature up to the transition temperature (Tm) and then decreases. It is also seen that with increasing
NT concentration the maximum value of ε (εmax) decreases, while the dielectric peak (Tm) shifts towards lower temperature
up to x = 0.75 and then starts shifting towards higher temperature, and the ε–T curve sharpens, i.e. the phase transition
becomes less diffuse. This result is consistent with the XRD analysis, as the orthorhombic phase comes into play. The decrease in εmax
implies that the substitution of NT reduces the dipole moment of the lattice and lowers the peak dielectric constant.
Figure-3
Temperature dependence of dielectric constant of (1-x)Ba0.06(Na1/2Bi1/2)0.94TiO3-xNaTaO3 ceramics at 100 kHz
Conclusion
Lead-free ceramics, Ba0.06(1-x)Na0.47+0.53xBi0.47(1-x)Ti1-xTaxO3 (0 ≤ x ≤ 1), were synthesized by the solid-state reaction method. X-ray
diffraction (XRD) analysis and Rietveld refinement revealed that BNBT has a rhombohedral-tetragonal structure with space group
R3c and NT has an orthorhombic structure with space group Pbnm. The inclusion of tantalum in Ba0.06(Na1/2Bi1/2)0.94TiO3 was confirmed
by the formation of a new peak near 52.5°. SEM images showed a change in grain morphology from rectangular in BNBT to granular-like
grains in NT ceramics. The temperature-dependent dielectric study showed that with increasing NT concentration the maximum
value of ε (εmax) decreases, while the dielectric peak (Tm) shifts towards lower temperature up to x = 0.75 and then starts shifting
towards higher temperature.
References
1. L.E. Cross, Lead-free at last, Nature, 432, 24–25 (2004)
2. A.K. Tagantsev et al., Ferroelectric materials for microwave tunable applications, J. Electroceram., 11, 5–66 (2003)
3. J.P. Dougherty, Cardiac Defibrillator with High Energy Storage Antiferroelectric Capacitor, US Patent 5 545 184 (1996)
4. Xingwei Yan et al., Lead-free intravascular ultrasound transducer using BZT-50BCT ceramics, IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 60(6)
5. Rajeev Ranjan and Akansha Dviwedi, Solid State Communications, 135, 394–399 (2005)
6. Vishnu Shanker et al., Nanocrystalline NaNbO3 and NaTaO3: Rietveld studies, Raman.
An Understanding of 802.11 Mesh Networks
Suraj Kumar, Vishal Kumar Sharma, Sandip Kumar Mehta, Amit Mandal and Arun Kanti Manna*
Department of Computer Science and Engineering, Govt. Polytechnic, Silli, Ranchi-835102, Jharkhand, INDIA
Abstract
The implications of relational modalities have been far-reaching and pervasive. After years of essential research into randomized
algorithms, we verify the investigation of access points, which embodies the practical principles of cryptanalysis. Tirade, our new
approach for DHTs, is the solution to all of these obstacles1.
Keywords: Cryptanalysis, Micro-kernel, NV-RAM, Fuzzy-Configuration.
Introduction
The algorithms approach to Moore’s Law is defined not only by the understanding of thin clients, but also by the key need for
erasure coding. Indeed, Scheme and vacuum tubes have a long history of cooperating in this manner. In fact, few systems engineers
would disagree with the deployment of context-free grammar1. Contrarily, courseware alone can fulfill the need for lambda
calculus.
Theorists mostly improve DNS in the place of superblocks. On a similar note, we view e-voting technology as following a cycle of
four phases: location, analysis, development, and storage. Further, for example, many algorithms evaluate peer-to-peer archetypes.
Despite the fact that similar applications refine I/O automata, we realize this ambition without constructing the simulation
of XML.
Our focus in this work is not on whether the foremost adaptive algorithm for the analysis of suffix trees by Takahashi 2 is in Co-NP,
but rather on describing new encrypted archetypes (Tirade). Indeed, neural networks and robots have a long history of interfering in
this manner. We emphasize that we allow Markov models to construct relational algorithms without the construction of the memory
bus. We skip these results for anonymity. Contrarily, courseware might not be the panacea that analysts expected. This combination
of properties has not yet been developed in previous work.
Homogeneous methodologies are particularly natural when it comes to perfect symmetries. Even though conventional wisdom
states that this question is continuously answered by the deployment of rasterization, we believe that a different method is
necessary. We emphasize that our application is built on the synthesis of the location-identity split that made enabling and possibly
analyzing digital-to-analog converters a reality. Predictably, the flaw of this type of method, however, is that DNS can be made
optimal, lossless, and client-server.
Figure-1
An analysis of 4 bit architectures
International Science Congress Association
51
Proceeding of Recent Trends in Computations and Mathematical Analysis in Engineering and Sciences-2015
Ranchi, Jharkhand, India 20th -21st Nov 2015
The rest of this paper is organized as follows. We motivate the need for reinforcement learning. Similarly, we place our work in
context with the previous work in this area. As a result, we conclude.
Linear-Time Information: Our research is principled. Similarly, we show a relational tool for studying virtual machines 3 in Figure
1. Figure 1 details the relationship between Tirade and event-driven theory. Despite the results by Raman and Martinez, we can
prove that the infamous relational algorithm for the investigation of redundancy by Bhabha et al. is in Co-NP. This may or may not
actually hold in reality.
Continuing with this rationale, we performed a trace, over the course of several weeks, disconfirming that our design is not feasible.
The design for our approach consists of four independent components: “fuzzy” configurations, wireless methodologies, IPv7, and
peer-to-peer communication. The methodology for Tirade consists of four independent components: signed configurations,
pseudorandom algorithms, heterogeneous methodologies, and the visualization of RPCs. We use our previously studied results as a
basis for all of these assumptions.
The model for our framework consists of four independent components: rasterization, Smalltalk, decentralized algorithms, and
semaphores. This is a structured property of Tirade. Continuing with this rationale, consider the early framework by Douglas
Engelbart et al.; our methodology is similar, but will actually overcome this challenge. Similarly, we estimate that each component
of our solution explores Boolean logic, independent of all other components. Similarly, we believe that digital-to-analog
converters1,4,5,6,7 can be made “fuzzy”, extensible, and amphibious. This is a robust property of our system. Figure 1 details
Tirade’s client-server refinement. This may or may not actually hold in reality.
Omniscient Information: Our implementation of our application is flexible, authenticated, and reliable. Continuing with this
rationale, the centralized logging facility contains about 36 instructions of SQL. We have not yet implemented the server daemon,
as this is the least natural component of our algorithm.
Figure-2
The mean distance of Tirade, as a function of latency
Our framework requires root access in order to locate multimodal epistemologies. The code base of 42 Perl files and the client-side
library must run on the same node. Overall, Tirade adds only modest overhead and complexity to related cooperative
methodologies.
Results and Analysis
Evaluating complex systems is difficult. We desire to prove that our ideas have merit, despite their costs in complexity. Our overall
evaluation seeks to prove three hypotheses: (1) that RAID has actually shown exaggerated distance over time; (2) that response
time is an obsolete way to measure expected clock speed; and finally (3) that Moore’s Law no longer impacts system design. Our
evaluation strategy holds surprising results for the patient reader.
Hardware and Software Configuration: Though many elide important experimental details, we provide them here in gory detail.
We instrumented a deployment on our mobile telephones to measure atomic communication’s inability to effect the uncertainty of
artificial intelligence.
Figure-3
The 10th-percentile block size of Tirade, compared with the other heuristics
We only noted these results when emulating it in courseware. We quadrupled the tape drive speed of UC Berkeley’s read-write
cluster. Second, we quadrupled the ROM throughput of our decommissioned Apple ][es to discover the NV-RAM speed of our
Internet test bed. Along these same lines, we doubled the effective RAM space of Intel’s mobile telephones to examine modalities.
On a similar note, we removed 3GB/s of Wi-Fi throughput from our event-driven test bed. In the end, we removed 10MB/s of Wi-Fi throughput from our mobile telephones. Tirade does not run on a commodity operating system but instead requires a provably
micro-kernelized version of Sprite Version 1c. All software components were hand assembled using GCC 5c, Service Pack 9 built
on the Italian toolkit for opportunistically emulating block size. All software was hand assembled using GCC 4.2.7 linked against
homogeneous libraries for architecting sensor networks.
Figure-4
The average instruction rate of our method, as a function of instruction rate
Similarly, all software components were hand assembled using GCC 6.4 built on D. Kobayashi’s toolkit for computationally
controlling lazily Bayesian latency. We made all of our software available under an Old Plan 9 License.
Experimental Results: Given these trivial configurations, we achieved non-trivial results. That being said, we ran four novel
experiments: (1) we ran local-area networks on 37 nodes spread throughout the sensor-net network, and compared them against 32-bit architectures running locally; (2) we ran 70 trials with a simulated instant-messenger workload, and compared results to our
software deployment; (3) we measured RAM speed as a function of tape drive space on an Apple ][e; and (4) we ran Web services
on 19 nodes spread throughout the 100-node network, and compared them against superblocks running locally. All of these
experiments completed without unusual heat dissipation or paging.
Now for the climactic analysis of the first two experiments. Note the heavy tail on the CDF in Figure 4, exhibiting duplicated median instruction
rate. Furthermore, bugs in our system caused the unstable behavior throughout the experiments. Note that Figure 4 shows the mean
and not the expected parallel hard disk throughput.
We next turn to experiments (3) and (4) enumerated above, shown in Figure 3. Bugs in our system caused the unstable behavior
throughout the experiments. The results come from only 3 trial runs, and were not reproducible. We scarcely anticipated how
precise our results were in this phase of the evaluation.
Lastly, we discuss all four experiments. The key to Figure 2 is closing the feedback loop; Figure 2 shows how Tirade’s RAM speed
does not converge otherwise. Continuing with this rationale, Gaussian electromagnetic disturbances in our distributed testbed
caused unstable experimental results. Further, the many discontinuities in the graphs point to improved energy introduced with our
hardware upgrades.
Related Works: Our approach is related to research into adaptive modalities, vacuum tubes, and the partition table. Along these
same lines, Thompson et al. developed a similar system; nevertheless, we disconfirmed that Tirade runs in O(log n) time.
Furthermore, Gupta et al. originally articulated the need for encrypted archetypes8. While we have nothing against the existing
approach9, we do not believe that approach is applicable to software engineering10,4,9,11. Several constant-time and “fuzzy” frameworks have been proposed in the literature. Security aside, our solution emulates more accurately. Furthermore, unlike many related
approaches, we do not attempt to observe or allow rasterization1. Instead of controlling the simulation of hierarchical
databases12,13,14,15, we accomplish this goal simply by analyzing the structured unification of digital-to-analog converters and
congestion control. In the end, note that our system improves cacheable technology; as a result, Tirade runs in O(2n) time16. In our
research, we fixed all of the grand challenges inherent in the existing work.
We now compare our method to prior event-driven symmetry solutions. Continuing with this rationale, the choice of von
Neumann machines in17 differs from ours in that we analyze only appropriate methodologies in Tirade.
Next, recent work by Harris and Sato suggests a framework for the construction of e-commerce, but does not offer an
implementation. However, these solutions are entirely orthogonal to our efforts.
Conclusion
We demonstrated in this position paper that the Ethernet 18 can be made real-time, cacheable, and ubiquitous, and Tirade is no
exception to that rule. Continuing with this rationale, we disproved that complexity in Tirade is not a quandary. Similarly, the
characteristics of Tirade, in relation to those of more much-touted frameworks, are dubiously more typical. We demonstrated that
performance in our framework is not a riddle. We see no reason not to use our application for preventing thin clients.
References
1. E. Schroedinger, A. Shamir and M. Raman, Boaster: A methodology for the evaluation of fiber-optic cables, in Proceedings of SIGGRAPH, (2001)
2. R. Brooks, The impact of unstable epistemologies on programming languages, in Proceedings of INFOCOM, Oct. (1996)
3. Utpal K. Lakshminarayanan, D. Clark, W. Kobayashi and P. Erdős, A methodology for the construction of spreadsheets, in Proceedings of the Conference on Electronic, Unstable Symmetries, (1995)
4. D. Clark, Deploying DNS using authenticated configurations, in Proceedings of the Symposium on Compact, Encrypted Epistemologies, (2005)
5. E. Taylor, J. Wilkinson, L. Adleman, A. Tanenbaum and C. Darwin, Homogeneous, adaptive symmetries for interrupts, Journal of Permutable, Scalable, Game-Theoretic Modalities, 1, 84–108, (1996)
6. I. Newton and K. Thompson, Towards the emulation of robots, in Proceedings of the USENIX Technical Conference, (1998)
7. X.I. Davis, Efficient, interactive modalities for erasure coding, Journal of Peer-to-Peer Methodologies, 1, 82–103, (1995)
8. A. Turing, B. Lampson and R. Tarjan, Deploying Markov models and a* search with Fluke, Journal of Scalable, Efficient Symmetries, 9, 77–84, (1999)
9. M. Sato, R.M. Wilson, J. Fredrick P. Brooks, J. Sivashankar, E. Harris, J. Taylor, O.C. Harris and K. Nygaard, Visualization of SCSI disks, in Proceedings of the Workshop on Self-Learning, Lossless Archetypes, (2000)
10. Manna and R. Hamming, The effect of autonomous communication on artificial intelligence, Journal of Unstable, Modular Models, 80, 77–91, (2005)
11. C. Darwin, R. Brown and A. Turing, Exploring replication using constant-time information, in Proceedings of JAIR, (2001)
12. R. Hamming and J. Hennessy, Collaborative theory, in Proceedings of the Conference on Classical Technology, (1992)
13. N. Jackson, Deconstructing journaling file systems using Eld, in Proceedings of the Conference on Co-operative Algorithms, (2005)
14. M. Welsh, F. Corbato and J. Smith, Decoupling B-Trees from consistent hashing in congestion control, in Proceedings of the Workshop on Constant Time, Unstable, Amphibious Epistemologies, (1994)
15. O. Dahl and G. Anderson, Linear-time, highly available epistemologies for Moore’s Law, UCSD, Tech. Rep. 9642/717, (1998)
16. S. Floyd, The producer-consumer problem no longer considered harmful, in Proceedings of the Workshop on Distributed, Peer-to-Peer Theory, (1993)
17. F. Corbato and A. Pnueli, Harnessing architecture using trainable models, Journal of Ambimorphic, Random Algorithms, 37, 1–14, (2003)
18. D. Ritchie, S. Vikram and H. Wu, Comparing interrupts and Markov models using Laud, Journal of Robust, Efficient Models, 0, 71–90, (2004)
Approaches to Implement Authentication and Encryption Techniques in
Cloud Computing
Arun Kanti Manna1 and Chandan Koner2
1Department of Computer Science and Engineering, Govt. Polytechnic, Silli, Jharkhand, INDIA
2Department of Computer Science and Engineering, Dr. B. C. Roy Engineering College, Durgapur, W.B., INDIA
Abstract
Cloud computing is the delivery of computing services over the network of networks, i.e. the Internet. Cloud services permit an
individual subscriber or an organization as a whole to use software and hardware that are controlled by third-party service providers at
distant locations. This computing technology provides a shared pool of resources, including data storage space, networks, computer
processing power, and specialized commercial and user applications. The cloud computing model allows access to information and
computer resources from anywhere in the network. With the application of this technology, the cost of computation, application
hosting, data storage and delivery is reduced significantly. Information security in cloud computing is a challenging issue for
future scientists and technologists owing to increasing security threats and attacks with the emerging growth of cloud computing
applications. In recent years, several authentication and encryption techniques have been developed and deployed, but it is found that
no single technique is suitable for all applications, nor is any technique invariably suitable. In this paper, we propose several
authentication and encryption techniques for cloud computing.
Keywords: Cloud Computing; Information Security; Mutual Authentication; TVA; Key Refreshment
Introduction
Today, cloud computing1-6 is the most prevalent term among enterprises and bulletins. It is observed that the revenue generated
for IT companies by public cloud computing is increasing rapidly every year. The growth of cloud infrastructure
continues to outperform the overall IT infrastructure market. According to the latest forecast by Allied Market Research5, the
global personal cloud market is expected to top almost $90 billion (£58.5bn) in revenue by 2020. Cloud computing customers or
subscribers do not own the physical infrastructure; rather, they rent the usage from a third-party provider. This helps them to avoid
huge capital expenditure: they consume resources as a service and pay only for the resources that they use. Most cloud computing infrastructures consist
of services delivered through common centers and built on servers. Cloud computing rests on two essential perceptions. One is that it
abstracts the details of system implementation from users and developers: applications run on physical systems that are not
specified, data is stored in locations that are unknown, administration of systems is outsourced to others, and access by users is
ubiquitous. The other is that it virtualizes systems by pooling and sharing resources: systems and storage can be provisioned as needed
from a centralized infrastructure, costs are assessed on a metered basis, multi-tenancy is enabled, and resources are accessible with
agility. Information security8-12 deals with the protection of data and/or information against intentional and/or unintentional
modification, loss or damage, fabrication of data, and/or deliberate disclosure of data to unauthorized persons or miscreants.
Usually the core concepts of information security have dealt with providing confidentiality, integrity and availability.
Afterwards, some other elements such as possession, authenticity and utility were proposed. Furthermore, the techniques to achieve
information security include cryptography, especially when transferring data/information. Information would be encoded
(encrypted) in such a way that it would be usable only to the authorized ones.
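As an illustration of the encryption idea described above (and not of any specific scheme cited in this paper), the toy sketch below derives a key from a subscriber passphrase and encrypts with a SHA-256-based keystream. All names and parameters here are hypothetical; production systems should use a vetted AEAD cipher such as AES-GCM from a maintained library.

```python
import hashlib
import secrets

# Toy stream cipher for illustration only: the keystream is SHA-256 in
# counter mode over a shared key and a random nonce.
def keystream(key, nonce, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct

def decrypt(key, nonce, ct):
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

# Key derived from a subscriber passphrase; only holders of the key recover data
key = hashlib.pbkdf2_hmac("sha256", b"subscriber passphrase", b"per-user salt", 100_000)
nonce, ct = encrypt(key, b"confidential cloud record")
assert decrypt(key, nonce, ct) == b"confidential cloud record"
```

Note that, as the review below observes for some published models, keeping this key in the same database as the ciphertext would defeat the purpose; key and data should be stored separately.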
Review of Related Works
Authentication is a process by which a cloud server (service provider) gains confidence about the identity of its communication
partner. It ensures that a legitimate subscriber is accessing the services provided by a cloud server. In recent times, identification-based cloud computing security models and frameworks have been developed by Li et al.13 in 2009. But only identifying the actual
user does not always prevent data hacking or intrusion in the database of the cloud environment. Yao’s Garbled Circuit is an
identification-based work which is used for secure data saving in cloud servers14, but this work does not ensure security across the whole
cloud computing platform. Tang et al. and Vaquero et al. used AES-based file encryption systems in some cloud computing
models15,16, but these models keep both the encryption key and the encrypted file in one database server. In 2009 and 2010, some other
models and secured architectures were developed for ensuring security in cloud computing17,18 by Wang et al. and Nguyen et al.
respectively. Although these models ensure secured communication between users and servers, the uploaded information is not
encrypted. For the best security, the uploaded information needs to be encrypted so that no one can learn about the
information and its location. Recently, some other data security models for the cloud computing environment were also proposed19,20
in 2009 and 2011, but these models also fail to ensure all criteria of cloud computing security21. In 2010, Chow et al.22
proposed a framework of authentication for mobile users in the cloud. This approach is based on a flexible framework for supporting
authentication decisions and on a behavioral authentication approach referred to as implicit authentication. Another user
authentication framework for cloud computing was researched by Chaudhury et al. in 201123. In this framework, user
legitimacy is strongly verified before entry into the cloud, and it provides identity management, mutual authentication, and session key
establishment between the users and the cloud server. A user can also change his password whenever he wants. A password-based two-factor authentication scheme24 was developed by Yassin et al. in 2012; here, authentication is verified by a Schnorr digital
signature and the subscriber's fingerprint. In the same year, Zhang et al. proposed an identity-based authentication scheme25 for
many e-business application scenarios based on cloud computing.
Figure-1
An example of cloud computing
Multi-factor authentication (MFA) is an approach to user validation that requires the presentation of two or more authentication
factors. In 2014, Liu et al. researched a multi-factor cloud authentication system (MACA)26 utilizing big data. In MACA, the first
factor is a password while the second factor is a hybrid profile of user behaviour. The hybrid profile is based on users' integrated
behaviour, which includes both host-based characteristics and network-flow-based features. This is the first MFA scheme that considers
both user privacy and usability while combining big data features. The authors adopt fuzzy hashing and fully homomorphic encryption (FHE)
to protect users' sensitive profiles and to handle the varying nature of user profiles. Most recently27, a shared authority based
privacy-preserving authentication protocol (SAPA) was proposed by Liu et al. for cloud storage. In SAPA, shared access authority
is achieved by an anonymous access-request matching mechanism with security and privacy considerations; attribute-based access
control is adopted so that a user can only access its own data fields; and proxy re-encryption is applied to provide data sharing
among multiple users.
Proposed Authentication and Encryption Techniques for Cloud Computing
In this thesis work, we propose to develop several new authentication and encryption techniques/algorithms. Each technique will be
a collection of different phases, namely an Enrollment Phase, Authentication Phase, Network Authentication Phase, Encryption Phase,
etc.
The proposed techniques will be addressed in the following directions of investigation:
Two-way or Mutual Authentication: Both communicating parties authenticate each other. Existing authentication
techniques and their improvements provide only one-way authentication, i.e. only the server in the cloud computing environment can check the
authenticity of a subscriber via the subscriber's entities (e.g. user id, password, etc.). The server can check the authenticity of a
subscriber, but the subscriber cannot check whether he is communicating with the correct server. This is a vital gap through which a
potential adversary can spoof the server and obtain valuable user information. This motivates the construction of an authentication technique
for the cloud computing environment that provides both user and server authentication: the server in the cloud examines the
authenticity of the subscriber, and the subscriber likewise verifies whether he is connecting to the correct server in the cloud.
Figure-2
Two-way or Mutual Authentication Technique
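As an illustration only, the two-way idea can be sketched as a challenge-response exchange. The sketch below assumes a pre-shared secret entity between subscriber and server (the `SHARED_SECRET` value and the HMAC construction are assumptions for illustration; the proposed technique is not specified at this level of detail):

```python
import hashlib
import hmac
import os

# Hypothetical pre-shared entity (e.g. derived from user id and password).
SHARED_SECRET = b"subscriber-server-shared-entity"

def prove(challenge: bytes, secret: bytes = SHARED_SECRET) -> bytes:
    """Prove knowledge of the shared secret without transmitting it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

# Direction 1: server challenges the subscriber; subscriber responds.
server_nonce = os.urandom(16)
subscriber_response = prove(server_nonce)            # computed on the subscriber side
server_accepts = hmac.compare_digest(subscriber_response, prove(server_nonce))

# Direction 2: subscriber challenges the server; server responds.
subscriber_nonce = os.urandom(16)
server_response = prove(subscriber_nonce)            # computed on the server side
subscriber_accepts = hmac.compare_digest(server_response, prove(subscriber_nonce))

# Mutual authentication succeeds only when both directions verify.
assert server_accepts and subscriber_accepts
```

Because each side issues its own fresh challenge, a party that does not know the shared entity can answer neither direction, which closes the one-way gap described above.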
Time Variant Authentication (TVA): When a subscriber wants to access a server, the subscriber sends a login request to the
server. The server then checks the authenticity of the subscriber via the subscriber's entities (e.g. user id, password, etc.). If the
subscriber is authentic, the server grants access, and the user starts to access the resources of
the server. During the access, the authenticity of the subscriber is not checked again by the server; today's processes check
only once. Thus the subscriber submits his entities at login time and can access the server for unlimited time.
TVA is a technique in which the server checks the authenticity of the user periodically throughout the access. This gives
better confidence in security, as per Shannon. The user or user system has to re-enter his entity/s at regular intervals throughout
the communication with the server, and the server has to verify the authenticity of the user at every interval. At any instant, if the server
senses that the entity/s are wrong, the server stops the communication.
In any authentication technique, the challenge of the designer is to make the entity/s unbreakable while the attacker tries to
break them. The entity/s would be practically impossible to break if they are made automatically variable. Automatically variable
entities can be implemented by changing the entity/s from session to session.
Automatically variable entity/s can be implemented in TVA by entering different (modified) entity/s at every regular interval. So,
whenever the user or user system enters entity/s, the entity/s are first modified by a selected logical operation and the
modified entity/s are submitted to the server. After receiving the modified entity/s, the server applies the reverse logical operation to
recover the original entity/s. The superiority of the automatic variable entity approach is that the user's entity/s (entered at login time)
are changed from time to time by an intelligent technique, making the entity/s unbreakable.
Figure-3
Time Variant Authentication
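A minimal sketch of the automatic variable entity idea, assuming XOR as the "selective logical operation": since XOR is its own inverse, the server's "reverse logical operation" is the same XOR with the same per-interval mask. The mask derivation below is an assumption chosen for illustration:

```python
import hashlib

def interval_mask(interval_index: int, length: int) -> bytes:
    # Hypothetical mask derivation: a public per-interval value hashed to bytes.
    return hashlib.sha256(f"interval-{interval_index}".encode()).digest()[:length]

def xor_bytes(data: bytes, mask: bytes) -> bytes:
    # XOR is self-inverse: applying the same mask twice recovers the original.
    return bytes(d ^ m for d, m in zip(data, mask))

entity = b"user-password"

# User side: modify the entity differently at every check interval.
submissions = [xor_bytes(entity, interval_mask(k, len(entity))) for k in range(3)]

# Server side: the same (reverse) operation recovers the original entity.
for k, submission in enumerate(submissions):
    assert xor_bytes(submission, interval_mask(k, len(entity))) == entity

# Every interval transmits a different value, even though the entity is fixed.
assert len(set(submissions)) == 3
```

Each periodic re-check thus sees a fresh value on the wire, so a replayed capture from one interval fails verification at the next.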
Key Refreshment: Key-based encryption systems ensure message integrity with key refreshment techniques (using a different
key in each and every session). In every cryptosystem, the challenge of the designer is to make the key unbreakable while the
attacker tries to break it. Vernam proposed that a key would be impossible to break if the key is made time variant. In this
approach, keys are changed in each and every session by producing new keys from time to time.
Figure-4
Key Refreshment
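One common way to realize key refreshment is to derive every session key from a long-term master key and a session counter, so both ends produce the same fresh key without transmitting it. This is a minimal sketch under that assumption; the `MASTER_KEY` value and derivation labels are hypothetical:

```python
import hashlib
import hmac

MASTER_KEY = b"long-term master secret"  # hypothetical long-term shared key

def refresh_key(session_id: int) -> bytes:
    """Derive a fresh key per session: no two sessions encrypt under the same key."""
    return hmac.new(MASTER_KEY, f"session-{session_id}".encode(), hashlib.sha256).digest()

session_keys = [refresh_key(i) for i in range(5)]
assert len(set(session_keys)) == 5        # every session gets a distinct key
assert refresh_key(0) == refresh_key(0)   # both ends derive the same key independently
```

Compromise of one session key then exposes only that session's traffic, which is the security benefit time-variant keys aim for.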
Conclusion
The cloud computing network has been described above together with the present authentication and encryption schemes. Our future work
is to devise new, efficient authentication and encryption techniques for the cloud computing network.
References
1. A. Doss and R. Nanda, Cloud Computing: A Practitioner's Guide, McGraw Hill Education, (2013)
2. Kumar Saurabh, Cloud Computing, Wiley Publication, (2012)
3. Thomas Erl, Ricardo Puttini and Zaigham Mahmood, Cloud Computing: Concepts, Technology and Architecture, Pearson, (2013)
4. Buyya, Vecchiola and Selvi, Mastering Cloud Computing, McGraw Hill Education, (2013)
5. James Bourne, Personal cloud market to hit $90bn by 2020, research study claims
6. Forrester Research, EGEE '08, Istanbul, September, (2008)
7. NIST cloud definition, version 15, http://csrc.nist.gov/groups/SNS/cloud-computing/
8. Yashpal Kadam, Security Issues in Cloud Computing A Transparent View, International Journal of Computer Science Emerging Technology, 2(5), 316-322, (2011)
9. Z. Wang, Security and Privacy Issues within Cloud Computing, IEEE Int. Conference on Computational Information Sciences, Chengdu, China, (2011)
10. Mathisen, Security Challenges and Solutions in Cloud Computing, 5th IEEE International Conference on Digital Ecosystems and Technologies (IEEE DEST 2011), Daejeon, Korea, (2011)
11. U. Greveler, B. Justus et al., A Privacy Preserving System for Cloud Computing, 11th IEEE International Conference on Computer and Information Technology, 648-653
12. John Harauz, Lori M. Kaufman and Bruce Potter, Data Security in the World of Cloud Computing, IEEE Security and Privacy, co-published by the IEEE Computer and Reliability Societies, July/August, (2009)
13. Hongwei Li, Yuanshun Dai, Ling Tian and Haomiao Yang, Identity-Based Authentication for Cloud Computing, CloudCom 2009, LNCS 5931, 157-166, (2009)
14. Sven Bugiel, Stefan Nurnberger, Ahmad-Reza Sadeghi and Thomas Schneider, Twin Clouds: Secure Cloud Computing with Low Latency, CASED, Germany, (2011)
15. Yang Tang, Patrick P.C. Lee, John C.S. Lui and Radia Perlman, FADE: Secure Overlay Cloud Storage with File Assured Deletion, (2010)
16. Luis M. Vaquero, Luis Rodero-Merino and Daniel Morán, Locking the sky: a survey on IaaS cloud security, Computing, 91, 93-118, (2011)
17. Cong Wang, Qian Wang, Kui Ren and Wenjing Lou, Ensuring Data Storage Security in Cloud Computing, US National Science Foundation grants CNS-0831963, CNS-0626601, CNS-0716306 and CNS-0831628, (2009)
18. Thuy D. Nguyen, Mark A. Gondree, David J. Shifflett, Jean Khosalim, Timothy E. Levin and Cynthia E. Irvine, A Cloud-Oriented Cross-Domain Security Architecture, The 2010 Military Communications Conference, U.S. Govt., (2010)
19. Vaibhav Khadilkar, Anuj Gupta, Murat Kantarcioglu, Latifur Khan and Bhavani Thuraisingham, Secure Data Storage and Retrieval in the Cloud, University of Texas, (2011)
20. John Harauz, Lori M. Kaufman and Bruce Potter, Data Security in the World of Cloud Computing, The IEEE Computer Societies, August, (2009)
21. Kevin Hamlen, Murat Kantarcioglu, Latifur Khan and Bhavani Thuraisingham, Security Issues for Cloud Computing, International Journal of Information Security and Privacy, 4(2), 39-51, (2010)
22. R. Chow, FatSkunk, R. Masuoka, J. Molina, Y. Niu, E. Shi and Z. Song, Authentication in the Clouds: A Framework and its Application to Mobile Users, CCSW'10, Chicago, Illinois, USA, October 8, (2010)
23. A.J. Choudhury, P. Kumar, M. Sain, Hyotaek Lim and Hoon Jae-Lee, A Strong User Authentication Framework for Cloud Computing, IEEE Asia-Pacific Services Computing Conference (APSCC), 110-115, (2011)
24. A.A. Yassin, Hai Jin, A. Ibrahim and Deqing Zou, Anonymous Password Authentication Scheme by Using Digital Signature and Fingerprint in Cloud Computing, Second International Conference on Cloud and Green Computing (CGC), 282-289, (2012)
25. Zhi-Hua Zhang, Xue-Feng Jiang, Jian-Jun Li and Wei Jiang, An Identity-Based Authentication Scheme in Cloud Computing, International Conference on Industrial Control and Electronics Engineering (ICICEE), 984-986, (2012)
26. Wenyi Liu, A.S. Uluagac and R. Beyah, MACA: A privacy-preserving multi-factor cloud authentication system utilizing big data, IEEE Conference on Computer Communications Workshops (INFOCOM WORKSHOPS), 518-523, (2014)
27. H. Liu, H. Ning and Q. Xiong, Shared Authority Based Privacy-Preserving Authentication Protocol in Cloud Computing, IEEE Transactions on Parallel and Distributed Systems, 26(1), 241-251, (2014)
Proposed Artificial Intelligence Based Authentication of User in Remote
System
Biswajit Mondal1, Priyanka Roy2 and Chandan Koner1
1Department of Computer Science and Engineering, Dr. B. C. Roy Engineering College, Durgapur, West Bengal
2Department of Information Technology, Dr. B. C. Roy Engineering College, Durgapur, West Bengal
Abstract
Authentication of remote users' messages is a research challenge for future scientists and researchers, owing to increasing security
threats and attacks with the growing volume of wireless traffic. Next-generation remote systems have introduced
several new messaging systems carrying increased volumes of data. In the popular remote user authentication schemes, the
authenticity of a user is checked by the server only at the start of communication. These authentication techniques apply
cryptographic algorithms and functions for user authentication but do not provide any message authentication
method. In this paper, we propose an artificial intelligence based user message authentication scheme. This paper also reports how
human-like intelligence can be efficiently introduced to a message server for checking the authenticity of users.
Introduction
Remote system authentication is a process by which a remote system gains confidence about the identity of its communication
partner. Remote user authentication ensures that a legitimate user is accessing the services provided by a remote server. The remote
server checks the authenticity of a remote user whenever the user wants to access its resources. Over the last few decades,
several remote user authentication schemes have been developed and deployed in response to increasing security threats and attacks with
the emerging growth of wired and wireless traffic. But communications are still violated by security threats and attacks.
The basic remote user password authentication was first designed by Lamport in 19811. However, Lamport's scheme suffers from
high hash overhead, and the necessity for password resetting decreases its suitability for practical use. In addition, the
scheme is vulnerable to the replay attack. Haller2 proposed a secret-key based one-time password scheme (a modified version of
Lamport's scheme) that eliminates hash function chaining, but this scheme is also vulnerable to the replay attack. Furthermore,
both schemes require a user password database to be maintained on the remote server, so an attacker can hack and change the
passwords of users. To overcome this drawback, many researchers have proposed cryptographic mechanisms to
prevent an intruder from acquiring the secret password. But all of these mechanisms need the involvement of the remote server for a
change of user password. To solve this problem, an authentication token in the form of a smart card was introduced for storing the user
information (password, identity, biometric property, secret key, etc.). A remote user password authentication scheme with a smart card
was developed by Chang and Wu in 1993. Since then, smart-card based remote user password authentication
technology has advanced, and several new schemes have been planned and developed.
In 1995, a new remote login authentication scheme proposed by Wu3 was based on the simple geometric Euclidean plane; his
scheme allows users to freely choose passwords themselves. However, Hwang4 showed that the weakness of Wu's scheme lies
in its security. Wang and Chang invented a smart-card based password authentication scheme5 to eliminate the security problems of
the traditional password authentication scheme. The cryptographic technique used in their scheme is a combined application of
ElGamal's6 public-key scheme and Shamir's ID-based signature scheme7. But in 2001, Chan and Cheng8 pointed out that Wang and
Chang's scheme is vulnerable to a replay attack. Remote user authentication using a smart card, introduced by Hwang and
Li in 20009, is an application of ElGamal's6 scheme to authenticate a user. In that year10, Chan and Cheng pointed out that the
Hwang-Li scheme is vulnerable to the masquerade attack. In 2003, Shen, Lin, and Hwang11 presented a different masquerade attack on the
Hwang-Li scheme and an enhanced scheme to solve its problems.
In the same year, Chang and Hwang12 proposed an extended attack addressing Chan and Cheng's attack on the Hwang-Li
authentication scheme. Das et al. developed a dynamic-ID13 based remote user authentication scheme in 2004 which allows users to
choose and change their passwords freely. Subsequently, a few public-key based authentication schemes were invented
and improved. But all of these schemes check only the authenticity of the user and cannot check the authenticity of the server. In 2006, Das et
al.14 invented a flexible remote user authentication scheme using smart cards that authenticates the user as well as the remote server.
More recently, in 200815, Das and Narasimhan planned a two-factor entity authentication scheme for remote systems, which provides
strong authentication and greater flexibility and requires less computational cost. Remote server authentication is necessary so
that the user can check whether he is communicating with the intended server. A user fingerprint-based authentication
scheme, first proposed by Lee et al. in 200216, uses biometrics to check the validity of a user. In 2006, Khan et al.17 introduced a
modified version of Lee's scheme that requires only a secret key, but in 2008, Xu et al.18 showed that it is vulnerable to the parallel
session attack and the impersonation attack.
Artificial Intelligence (AI) can reason under uncertainty. Fuzzy operations19 can be performed for decision making in AI-based
applications. AI can be applied in applications involving uncertainties such as vagueness, ambiguity and imprecision.
A user's message-writing characteristics are also uncertain in this sense, which makes them a research challenge for applying AI to user
authentication. We propose to use the parameters of a user's writing habits: which sentences, idioms and phrasal verbs are used
most frequently, more frequently and less frequently by the user. Following the theory of AI, different relative grades can be assigned to the most
frequently, more frequently and less frequently used word groups or sentences according to the frequency of those word groups or
sentences in messages. Fuzzy sets may then be derived from the relative grades and the number of occurrences of those sentences, idioms and
phrasal verbs in a message. By applying fuzzy operations on these fuzzy sets, the authenticity of a user can be verified.
Proposed Technique
The proposed Artificial Intelligence (AI) based user message authentication technique checks the authenticity of a user by fuzzy operations
on fuzzy sets derived from the user's earlier messages. The earlier or past messages of the user are stored in a database on the
remote server, which forms the basis of this authentication technique. The remote server performs a feasibility study of the
user's writing characteristics, i.e. writing habit or style, from the stored messages. It assigns different relative grades according to the
appearance in past messages, i.e. the frequency with which sentences, idioms (with salutations) and phrasal verbs appear, classifying them as most
frequently, more frequently and less frequently used. It then
derives fuzzy sets from the relative grades together with the number of occurrences of those sentences, idioms and phrasal verbs
in a message. By applying fuzzy operations on these fuzzy sets, the server validates the authenticity of the user.
User Message Authentication Technique
The proposed AI based user message authentication technique is a collection of two phases, namely the User Enrollment Phase
and the User Authentication Phase. These two phases are explained below.
User Enrollment Phase
In the user enrollment phase, the user is enrolled with a remote server. This phase is executed only once per user.
UEP 1: The user sends an application request to the authority concerned for a new smart card.
UEP 2: After receiving the request, the authority asks the user to submit twenty different past messages.
UEP 3: The user sends twenty different past messages to the authority.
UEP 4: After receiving the messages, the authority examines them thoroughly and performs a feasibility study of the user's writing
habits. The authority records the following:
(i) Which sentences (including proverbs) are most frequently, more frequently and less frequently used by the user when writing a
message?
(ii) Which idioms (including salutation and subscription) are most frequently, more frequently and less frequently used by the
user when writing a message?
(iii) Which phrasal verbs are most frequently, more frequently and less frequently used by the user when writing a message?
UEP 5: The authority uses three databases on the server to store the above writing habits. The first database, DS, stores the user's
most frequently, more frequently and less frequently used sentences (including proverbs) and their corresponding relative grades.
The first row, DSR1 of DS, stores the most frequently used sentences with relative grade 0.99. The second
row, DSR2 of DS, stores the more frequently used sentences with relative grade 0.66. The third row, DSR3 of
DS, stores the less frequently used sentences with relative grade 0.33.
The second database, DI, stores the most frequently, more frequently and less frequently used idioms (including salutation and
subscription) and their corresponding relative grades. The first row, DIR1 of DI, stores the most frequently used idioms with
relative grade 0.99. The second row, DIR2 of DI, stores the more frequently used idioms with relative grade 0.66. The third row,
DIR3 of DI, stores the less frequently used idioms with relative grade 0.33.
The third database, DP, stores the most frequently, more frequently and less frequently used phrasal verbs and their corresponding
relative grades. The first row, DPR1 of DP, stores the most frequently used phrasal verbs with relative grade
0.99. The second row, DPR2 of DP, stores the more frequently used phrasal verbs with relative grade 0.66.
The third row, DPR3 of DP, stores the less frequently used phrasal verbs with relative grade 0.33.
UEP 6: If the authority does not obtain sufficient information, it sends a request to the user for additional past messages.
The authority then executes the above steps again to build the databases.
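The enrollment databases above might be populated as sketched below. The paper only requires a most/more/less-frequent classification; splitting the frequency-ranked items into thirds is an assumption made here for illustration, as are the example sentences:

```python
from collections import Counter

GRADES = (0.99, 0.66, 0.33)  # most / more / less frequently used

def build_database(items_per_message):
    """Split items from past messages into three grade rows by frequency rank.
    The thirds-by-rank split is an assumption, not part of the original scheme."""
    counts = Counter(item for message in items_per_message for item in message)
    ranked = [item for item, _ in counts.most_common()]
    third = max(1, len(ranked) // 3)
    rows = (ranked[:third], ranked[third:2 * third], ranked[2 * third:])
    return dict(zip(GRADES, rows))

# Hypothetical sentences extracted from a user's past messages.
past_sentences = [
    ["Dear Sir", "Thanking you"],
    ["Dear Sir", "Kindly do the needful"],
    ["Dear Sir", "Thanking you", "With regards"],
]
DS = build_database(past_sentences)   # same shape serves DI (idioms) and DP (phrasal verbs)
assert "Dear Sir" in DS[0.99]         # the user's most frequent sentence, graded 0.99
```

The same routine would be run three times, once per item type, to fill DS, DI and DP.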
User Authentication Phase
When a user requests to send a message M, the server receives M and counts the number of sentences (including salutation and
subscription), n, in M. The server then scans M and executes the following operations:
UAP 1: Finds the sentences of M matched within the rows DSR1, DSR2, DSR3 of DS. Let the numbers of matched sentences in DSR1,
DSR2 and DSR3 be x1, y1 and z1 respectively.
UAP 1.1: Calculates a1 = (0.99 × x1)/n, b1 = (0.66 × y1)/n and c1 = (0.33 × z1)/n.
The membership functions of a fuzzy set F1 are defined as:
µF1(a1) = (0.99 × x1)/n, µF1(b1) = (0.66 × y1)/n, µF1(c1) = (0.33 × z1)/n
Hence, F1 = {(a1, (0.99 × x1)/n), (b1, (0.66 × y1)/n), (c1, (0.33 × z1)/n)}
UAP 2: Finds the idioms of M matched within the rows DIR1, DIR2, DIR3 of DI. Let the numbers of matched idioms in DIR1, DIR2 and
DIR3 be x2, y2 and z2 respectively.
UAP 2.1: Calculates a2 = (0.99 × x2)/n, b2 = (0.66 × y2)/n and c2 = (0.33 × z2)/n.
The membership functions of a fuzzy set F2 are defined as:
µF2(a2) = (0.99 × x2)/n, µF2(b2) = (0.66 × y2)/n, µF2(c2) = (0.33 × z2)/n
Hence, F2 = {(a2, (0.99 × x2)/n), (b2, (0.66 × y2)/n), (c2, (0.33 × z2)/n)}
UAP 3: Finds the phrasal verbs of M matched within the rows DPR1, DPR2, DPR3 of DP. Let the numbers of matched phrasal verbs in
DPR1, DPR2 and DPR3 be x3, y3 and z3 respectively.
UAP 3.1: Calculates a3 = (0.99 × x3)/n, b3 = (0.66 × y3)/n and c3 = (0.33 × z3)/n.
The membership functions of a fuzzy set F3 are defined as:
µF3(a3) = (0.99 × x3)/n, µF3(b3) = (0.66 × y3)/n, µF3(c3) = (0.33 × z3)/n
Hence, F3 = {(a3, (0.99 × x3)/n), (b3, (0.66 × y3)/n), (c3, (0.33 × z3)/n)}
UAP 4: Computes,
UAP 4.1: µF1∩F2∩F3(a) = min{µF1(a1), µF2(a2), µF3(a3)}
UAP 4.2: µF1∪F2∪F3(a) = max{µF1(a1), µF2(a2), µF3(a3)}
UAP 4.3: If µF1∩F2∩F3(a) ≥ 0.09 and µF1∪F2∪F3(a) ≥ 0.18, the server ensures that the user is authentic. Else it executes the next steps.
UAP 4.4: µF1∩F2∩F3(b) = min{µF1(b1), µF2(b2), µF3(b3)}
UAP 4.5: µF1∪F2∪F3(b) = max{µF1(b1), µF2(b2), µF3(b3)}
UAP 4.6: If µF1∩F2∩F3(b) ≥ 0.045 and µF1∪F2∪F3(b) ≥ 0.09, the server ensures that the user is authentic. Else it executes the next steps.
UAP 4.7: µF1∩F2∩F3(c) = min{µF1(c1), µF2(c2), µF3(c3)}
UAP 4.8: µF1∪F2∪F3(c) = max{µF1(c1), µF2(c2), µF3(c3)}
UAP 4.9: If µF1∩F2∩F3(c) ≥ 0.03 and µF1∪F2∪F3(c) ≥ 0.06, the server ensures that the user is authentic.
If no condition is satisfied, the remote server concludes that the user is unauthentic, ignores the message, and sends an
authentication failure message to the user.
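The authentication phase can be sketched end-to-end given the per-row match counts. The thresholds are those of UAP 4.3, 4.6 and 4.9; the match counts in the example are hypothetical:

```python
def memberships(matches, n):
    """UAP x.1: grade-weighted match fractions (mu(a), mu(b), mu(c)) for one fuzzy set."""
    x, y, z = matches
    return (0.99 * x / n, 0.66 * y / n, 0.33 * z / n)

def authenticate(sentence_matches, idiom_matches, phrasal_matches, n):
    """UAP 4: intersection (min) and union (max) checks for a, then b, then c."""
    F1 = memberships(sentence_matches, n)
    F2 = memberships(idiom_matches, n)
    F3 = memberships(phrasal_matches, n)
    thresholds = ((0.09, 0.18), (0.045, 0.09), (0.03, 0.06))  # for a, b, c
    for i, (t_inter, t_union) in enumerate(thresholds):
        inter = min(F1[i], F2[i], F3[i])  # membership over F1 ∩ F2 ∩ F3
        union = max(F1[i], F2[i], F3[i])  # membership over F1 ∪ F2 ∪ F3
        if inter >= t_inter and union >= t_union:
            return True                   # user accepted at this level
    return False                          # no condition satisfied: reject

# A hypothetical 10-sentence message matching all three databases reasonably well:
assert authenticate((2, 1, 1), (1, 1, 0), (1, 0, 1), n=10)
# A message with no matches at all fails every check:
assert not authenticate((0, 0, 0), (0, 0, 0), (0, 0, 0), n=10)
```

Note the short-circuit order: the stricter a-level check is tried first, and the b- and c-level checks act as progressively weaker fallbacks.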
Conclusion
In our proposed artificial intelligence based technique, a message authentication scheme is developed for the user of any remote
system. A novel form of artificial intelligence is introduced at the server for this authentication purpose. The technique can work
on a real-time basis for present as well as next-generation remote system networks.
References
1. L. Lamport, Password authentication with insecure communication, Communications of the ACM, 24(11), 770-772, (1981)
2. N.M. Haller, A one-time password system, RFC 1704, (1994)
3. T.C. Wu, Remote login authentication scheme based on a geometric approach, Computer Communications, 18(12), 959-963, (1995)
4. M.S. Hwang, Cryptanalysis of remote login authentication scheme, Computer Communications, 22(8), 742-744, (1999)
5. S.J. Wang and J.F. Chan, Smart card based secure password authentication scheme, Computers and Security, 15(3), 231-237, (1996)
6. T. ElGamal, A public key cryptosystem and a signature scheme based on discrete logarithms, IEEE Transactions on Information Theory, 31(4), 469-472, (1985)
7. A. Shamir, Identity-based cryptosystems and signature schemes, Proc. CRYPTO '84, Lecture Notes in Computer Science, vol. 196, Springer, Berlin, 47-53, (1985)
8. C.K. Chan and L.M. Cheng, Remarks on Wang-Chang's password authentication scheme, Electronics Letters, 37(1), 22-23, (2001)
9. M.S. Hwang and L.H. Li, A new remote user authentication scheme using smart cards, IEEE Transactions on Consumer Electronics, 46(1), 28-30, (2000)
10. C.K. Chan and L.M. Cheng, Cryptanalysis of a remote user authentication scheme using smart cards, IEEE Transactions on Consumer Electronics, 46(4), 992-993, (2000)
11. J.J. Shen, C.W. Lin and M.S. Hwang, A modified remote user authentication scheme using smart cards, IEEE Transactions on Consumer Electronics, 49(2), 414-416, (2003)
12. C.C. Chang and K.F. Hwang, Some forgery attacks on a remote user authentication scheme using smart cards, Informatics, 14(3), 289-294, (2003)
13. M.L. Das, A. Saxena and V.P. Gulati, A dynamic ID-based remote user authentication scheme, IEEE Transactions on Consumer Electronics, 50(2), 629-631, (2004)
14. M.L. Das, Flexible and Secure Remote Systems Authentication Scheme Using Smart Cards, HIT Transactions on ECCN, 1(2), 78-82, (2006)
15. M.L. Das and V.L. Narasimhan, EARS: Efficient Entity Authentication in Remote Systems, Proc. ITNG 08, USA, 603-608, (2008)
16. J.K. Lee, S.R. Ryu and K.Y. Yoo, Fingerprint-based remote user authentication scheme using smart cards, Electronics Letters, 38(2), 597-600, (2002)
17. M.K. Khan and J. Zhang, An efficient and practical fingerprint-based remote user authentication scheme with smart cards, ISPEC 2006, Lecture Notes in Computer Science 3903, 260-269, (2006)
18. Jing Xu, Wen-Tao Zhu and Deng-Guo Feng, Improvement of a Fingerprint-Based Remote User Authentication Scheme, ISA 2008, 87-92, (2008)
Transmission Congestion Management, Pricing and Locational Marginal
Pricing in the Deregulated Power System
Bishaljit Paul
Department of Electrical Engineering, Techno India, Silli, Ranchi, Jharkhand, INDIA
Introduction
Around the world, the deregulation and privatization of electricity markets have had a very large impact on power transmission in
almost all power systems. Bottlenecks in transmission lines are an obstacle to perfect competition among market participants in a
competitive market environment. Planning should therefore be done appropriately for the operation of the
transmission systems. The competitive electricity market, made up of many participants who buy and sell electricity, is very
complex in nature. As supply and demand must be balanced at all times, the complexity arises from the limitations of the
available transmission systems. A system is said to be congested when the producers and consumers of electric energy desire to
produce and consume amounts that would push the transmission system beyond its transfer capability. The 'Locational
Marginal Pricing' approach is chosen to locate the spots of congestion in the utility system. The results are found to be efficient in
minimizing congestion due to transmission line outages, load increases and generation failures.
Under deregulated market operation, electric power utilities are undergoing a major restructuring process1-3. Power system
deregulation brings the benefits of lower electricity cost, better consumer service and improved system efficiency. The Electric Supply
Industry (ESI) throughout the world is undergoing restructuring for better utilization of resources and to provide quality services to consumers at competitive prices. By introducing competition at various levels, the monopoly in the generation and trading
sectors is being abolished through the restructuring of the power industry. As different parties compete with each other to win
market share and remain in business, which promotes technical growth, improves customer satisfaction and increases efficiency, this
restructuring of the electricity sector is popularly known as deregulation.
Electricity markets throughout the world need competitive forces, which make the market more efficient, with the price determined by the supply and demand functions. Due to the increased volatility of the electricity market, a market participant can make trading contracts with other parties to discard possible risks and obtain better returns. Congestion, which is due to overloading of transmission lines or transformers, prevents the system operators from dispatching additional power from a specific generator. When congestion occurs on a bulk power grid, Locational Marginal Pricing (LMP), a market pricing approach, is used to manage the efficient use of the transmission system. Congestion arises when one or more restrictions on the transmission system prevent the economic, or least expensive, supply of energy from serving the demand. A transmission constraint arises when the transmission lines do not have enough capacity to carry all the electricity needed to meet the demand at a certain location. LMP includes the cost of congestion, i.e. the cost of supplying the more expensive electricity in those locations, thus providing a precise, market-based method for pricing energy. At every location on the grid, the LMP provides market participants with a clear and accurate signal of the price of electricity.
When LMPs are used for the settlement of transactions, consumers are charged more than the average cost of production of electricity because of the nonlinear nature of the power flow and the constraints imposed by the Optimal Power Flow (OPF). The difference is accumulated as network rental by the Independent System Operator (ISO). Rental has two components, loss rental and constraint rental; the loss rental is the difference between average losses and marginal losses, which arises from the nonlinear nature of the losses.
Transmission Congestion: Congestion management in a multi-buyer/multi-seller system is one of the most involved tasks if a market-based solution with economic efficiency is to be achieved. In a vertically integrated utility structure, generation, transmission and distribution are within the direct control of a central agency or a single utility, and generation is dispatched so as to achieve the system least cost [10]. The optimal dispatch solution, using security constrained economic dispatch, eliminates the possible occurrence of congestion: the generators are dispatched in such a way that the power flow limits on the transmission lines are not exceeded. Congestion management is a mechanism in which the transactions are prioritized and committed to a schedule that does not overload the network. In a deregulated power environment things are not as simple. Irrespective of the relative geographical locations of buyer and seller, every buyer wants to buy power from the cheapest generator available. As a result, if all such transactions were approved, the transmission corridors evacuating the power of the cheaper generators would get overloaded. Congestion thus occurs, and the system operator finds that not all transactions can be allowed, since the transmission network would be overloaded. The system operator handles this situation by means of real-time congestion
management, which involves precautionary as well as remedial action on the system operator's part, as follows:
(i) Precautionary action: allow only that set of transactions which, taken together, keeps the transmission system within its limits.
(ii) Remedial action in real time: transmission corridors may still get overloaded due to unscheduled flows.
A set of rules has to be enforced in transmission congestion management to ensure control over generation and loads and so maintain an acceptable level of system security and reliability. Under an open market structure the rules must ensure market efficiency maximization, since a set of players will always be looking for loopholes in the mechanism to exploit.
In a deregulated environment the market must be modelled so that the participants (buyers and sellers) [16] engage freely in transactions in a manner that does not threaten the security of the power system. Congestion management has become an important activity of power system operators; its objective is to minimize the interference of the transmission network in the market and so ensure the secure operation of the power system.
Whenever physical or operational constraints in a transmission network become active, the system is said to be in a state of congestion. The limits that may be hit in case of congestion are: (i) line thermal limits, (ii) transformer emergency ratings, (iii) bus voltage limits and (iv) transient or oscillatory stability limits.
Effects of Transmission Congestion
(i) Market inefficiency - The effect of transmission congestion is to create market inefficiency. Market efficiency refers to a market outcome that maximizes social welfare, which is the sum of producer surplus and consumer surplus. With respect to generation, market efficiency results when the most cost-effective generation resources are used to serve the load. The difference in social welfare between a perfect market and a real market is a measure of the efficiency of the real market.
(ii) Market power - If a generator can successfully increase its profits by strategic bidding [19], or by any means other than lowering its costs, market power exists. In a two-area system with cheaper generation in area 1 and relatively costlier generation in area 2, buyers in both areas will prefer generation in area 1, and the tie-lines between the two areas will start operating at full capacity, so that no further power transfer from area 1 to area 2 is possible. The sellers in area 2 are then said to possess market power, since these sellers can charge a higher price to buyers if the loads are inelastic. So congestion leads to market power, which results in market inefficiency.
In a centralized dispatch structure, the system operator changes the schedules of generators by raising the generation of some while decreasing that of others. The operator compensates the parties who were asked to generate more by paying them for their additional power production, and gives lost opportunity payments to the parties who were ordered to step down.
Congestion Management in Electricity Markets: In a competitive electricity market, congestion occurs in the transmission system due to overloading of lines or transformers during market settlement. In a deregulated market the chance of congestion is quite high, since the customers would like to purchase electricity from the cheapest available sources. Congestion is undesirable and should be alleviated for secure operation of the power system. Congestion management uses optimal power flow techniques [11] for rescheduling the output of sources, compensating devices and curtailment of loads. In a restructured electricity market, the transmission network operates at or beyond its transfer limits when the producers and consumers of electric energy desire to produce and consume in amounts that exceed those limits. If congestion in the system persists for a long time, it can cause a sudden rise in the electricity price and threaten system security and reliability. Congestion management is one of the most challenging tasks of the system operator (SO) in a deregulated environment. There are three different ways to overcome network congestion: (i) Price Area Congestion Management, (ii) ATC based Congestion Management and (iii) OPF based Congestion Management.
(i) Price Area Congestion Management - In Norway, Sweden, Finland and Denmark, when congestion is predicted the system operator declares it, and the system is split into price areas at the congestion boundaries. Spot market bidders submit separate bids for each price area in which they have generation and loads. In case of no congestion, the market settles at one price; in case of congestion, the price areas are separated and settled at prices that satisfy the transmission constraints. Lower prices result in areas of excess generation and higher prices in areas of excess load.
(ii) ATC based Congestion Management - The Federal Energy Regulatory Commission (FERC) established a system in which each SO is responsible for monitoring its own regional transmission system and calculating its ATC for congested paths entering, leaving and inside its network. The ATC values [12] for the next hour are placed on a website, the Open Access Same-time Information System (OASIS), operated by the SO.
(iii) Optimal Power Flow based Congestion Management - OPF is performed to minimize the generators' operating cost subject to a set of constraints that represent the transmission system within which the generators operate. Customers willing to purchase power send a bid function to the SO. The OPF solution gives the cost per MW at each node of the system. A zonal pricing method is followed, in which the system is divided into various zones on a geographical basis. The zone prices obtained from the OPF are used in such a manner that the generators are paid the zone price of energy and the loads pay the zone price of energy.
Electricity markets have been established as part of this industry restructuring. Two schemes of electricity pricing have been at the core of currently operating electricity markets: nodal pricing and zonal pricing. Nodal pricing is quite prevalent in the US electricity markets (PJM, New York ISO); zonal pricing is used in some European countries. The nodal pricing scheme is more complex and computationally intensive than zonal pricing because it uses the pricing scheme known as the Locational Marginal Price (LMP), under which the electricity price is determined on a marginal basis at the bus level. Under nodal pricing, the LMP is composed of three components: the System Marginal energy Price (SMP), the Marginal Congestion Component (MCC) and the Marginal Loss Component (MLC).
Locational Marginal Price
There are many different methods for congestion management, with varying strengths of economic signal. Locational Marginal Pricing (LMP) is the most effective mechanism, as it provides the strongest economic signal to market participants. Locational Marginal Prices [4] are the incremental prices of energy at each node of the power system. On a lossless system, the LMP comprises the marginal price for generation and a transmission congestion component. In the absence of congestion, the LMPs are the same everywhere, since an incremental load at any node can be met by incremental generation from the marginal unit. However, when a transmission constraint becomes binding, the system is said to be congested and the marginal prices will, in general, be different at each node. The deregulation of the electricity market [27] gives rise to competitive markets, which are complex systems with many participants who buy and sell electricity. The complexity arises from the limitations of the transmission systems and the fact that supply and demand must be in balance at all times. The LMP solution is obtained by a linear programming technique, with loss factors for the network losses set from historical operational information. LMP reflects not only the marginal cost of energy production but also that of its delivery, because of the effects of transmission losses and transmission system congestion. LMPs [6] vary significantly from one location to another. The decomposition of LMP into three components, Marginal Energy Price (MEP), Marginal Loss Price (MLP) and Marginal Congestion Price (MCP), is not unique, and there is a large degree of arbitrariness in any decomposition.
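The price separation described above can be sketched numerically. The following is a hypothetical two-bus illustration of my own (not from this paper): a cheap generator at bus 1, an expensive generator at bus 2, the load at bus 2, and a single tie-line with a flow limit. Merit-order dispatch subject to the line limit reproduces the behaviour described: equal LMPs without congestion, separated LMPs once the line limit binds. All names and numbers are assumptions for illustration only.

```python
def two_bus_lmp(load_mw, line_limit_mw, cost1=20.0, cost2=50.0):
    """Return (dispatch_g1, dispatch_g2, lmp_bus1, lmp_bus2) for a 2-bus sketch.

    cost1/cost2 are the (assumed) offer prices in $/MWh of the generators
    at bus 1 and bus 2; the load sits at bus 2.
    """
    if load_mw <= line_limit_mw:
        # No congestion: the cheap unit serves everything and is marginal
        # at both buses, so the LMPs are equal.
        return load_mw, 0.0, cost1, cost1
    # Congestion: the tie-line carries its limit, the expensive local unit
    # serves the remainder, and the nodal prices separate.
    g1 = line_limit_mw
    g2 = load_mw - line_limit_mw
    return g1, g2, cost1, cost2

g1, g2, p1, p2 = two_bus_lmp(load_mw=120, line_limit_mw=100)
print(g1, g2, p1, p2)       # dispatch 100 MW and 20 MW; LMPs 20 and 50 $/MWh
congestion_cost = p2 - p1   # the LMP difference across the congested line
print(congestion_cost)      # 30 $/MWh
```

With the load below the line limit the function returns the same price at both buses, which is the uncongested case described in the text.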
An LMP based clearance scheme [23] is used to calculate the amount of money earned from the ISO by the energy sellers and paid to the ISO by the energy consumers. Linearized DC OPF problems are usually applied to approximate the nonlinear AC OPF problem in order to find real power solutions for restructured wholesale power markets. LMP is defined as the change in production cost to optimally deliver an increment of load at a location while satisfying all the constraints. The cost of transmission services [20] is accounted for together with the LMP, which represents the energy price, the cost of network losses and the transmission congestion cost.
When the producers and consumers of electric energy [19] desire to produce and consume energy in amounts that would cause the transmission system to operate beyond one or more transfer limits, the LMP approach is preferred to locate the spots of congestion under various critical conditions. The incidence matrix approach [18] for calculating LMP is an effective tool for short-term and long-term economic analysis of a restructured power system. LMP components [17] are evaluated for the important role played by nodes with generators having free capacity. The LMPs of nodes without generation, or with generation at its limits, are a function of the LMPs at the marginal nodes and of the impacts of the network constraints, including losses in the network.
Loss modeling of the LMP components [5, 28], by introducing loss distribution factors to balance the consumed losses in the lossless DC power system model, achieves dependable and predictable market-clearing results. The LMP can be higher than the highest generation bidding price, and it can be lower than the lowest generation bidding price. LMP sensitivities [8] are calculated with respect to changes in demand throughout an electric power network. Not only the prices but also their sensitivities with respect to demands constitute fundamental information nowadays in mature electricity markets. The changes in LMP [9] as parameters vary provide insight into the functioning and behaviour of the electric energy system. To assess the degree of competitiveness in electricity markets, producers and consumers establish their bidding strategies using this sensitivity information. For short-term [15] market-based operation and planning, the information on congestion and on price versus load is obtained; it helps the planners to identify possible congestion as the system load grows. Generation companies get useful information on possible congestion and price changes as the system load grows, as most of them use an OPF model for congestion and price forecasting to achieve better economic benefits.
An AC OPF based formulation for procuring, pricing and settling energy and ancillary services by integrated market systems
provides LMP for energy and Ancillary Service Marginal Prices (ASMP) where the characteristics of the prices are analyzed when
economic substitution among ancillary services is required.
Accordingly, LMP is stated as follows: LMP = LMPenergy + LMPloss + LMPcongestion
That is, LMP is the sum of the costs of marginal energy, marginal loss and congestion. Two general methods are applied for calculating LMP. One is to determine the three components separately and then sum them up. A second method is to first calculate the LMPs based on the full AC network model and then identify the individual components as necessary. The LMP difference between any two locations represents the cost of transmission from the injection point to the withdrawal point, including congestion and losses. When congestion exists in the transmission system, this method first uses the loss factor (LF) to determine the portion of the LMP that represents losses. Then, by subtracting the sum of the marginal energy and marginal loss costs from the LMP at the location of interest, we obtain the transmission congestion cost.
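The subtraction described above can be written out directly. The numbers below are illustrative assumptions of my own, not data from the paper: given a nodal LMP, the system marginal energy price and a loss factor (LF), the loss portion is estimated and the congestion component is whatever remains.

```python
def lmp_components(lmp, energy_price, loss_factor):
    """Split a nodal LMP ($/MWh) into energy, loss and congestion parts.

    loss_factor is the marginal loss factor at the node (assumed known);
    the congestion component is obtained by subtraction, as in the text.
    """
    loss_component = energy_price * loss_factor            # marginal-loss portion
    congestion_component = lmp - energy_price - loss_component
    return energy_price, loss_component, congestion_component

# Hypothetical node: LMP of 34 $/MWh, energy price 30 $/MWh, 2% loss factor.
energy, loss, congestion = lmp_components(lmp=34.0, energy_price=30.0, loss_factor=0.02)
print(energy, loss, congestion)   # energy, loss and congestion parts in $/MWh
```

At a node with no congestion the third value would be (approximately) zero, matching the uncongested case discussed earlier.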
Conclusion
The fundamental concepts of LMP have been discussed. The LMP difference between two adjacent buses is the congestion cost that arises when energy is transferred from one location to the other. Transmission losses may affect LMP differences as well. The LMP at a certain bus can be higher or lower than the highest offer price, or even negative: it is possible that the output of cheaper units has to be reduced and the output of more expensive units increased in order to supply an additional MW of load at a specific bus. DC load flow is the linearized model of AC load flow and is capable of giving acceptable results in many systems, so it is preferred for LMP calculations in power market operations. As LMP implementation is very efficient for bidding strategies in restructured power markets, electricity markets throughout the world employ LMP as one of the most popular approaches, and more research is required.
Future Work
Since the price can be calculated at all locations, the LMP for the IEEE 30-bus system is to be calculated and compared with the LMP values under cost minimization and loss minimization in a security constrained power system, as these data are helpful for bidding strategy. LMP values for the unit-commitment case, for hourly load profiles, are to be calculated and published on the website.
References
1. M. Shahidehpour, H. Yamin and Z.Y. Li, Market Operations in Electric Power Systems, John Wiley and Sons, Inc., New York, (2002)
2. Website http://www.pjm.com: Sponsored by Pennsylvania-New Jersey-Maryland Interconnection, (2001)
3. Z. Li and H. Daneshi, Some observations on market clearing price and locational marginal price, IEEE Power Engineering Society General Meeting 2005, 2702-2709, (2005)
4. T.J. Overbye, X. Cheng and Y. Sun, A comparison of the AC and DC power flow models for LMP calculations, Proceedings of the 37th Hawaii International Conference on System Sciences, (2004)
5. E. Litvinov, T. Zheng, G. Rosenwald and P. Shamsollahi, Marginal loss modeling in LMP calculation, IEEE Transactions on Power Systems, 19(2), 880-888, (2004)
6. Y. Fu and Z. Li, Different Models and Properties on LMP Calculations.
7. N. Steffan and G.T. Heydt, Quadratic Programming and Related Techniques for the Calculation of Locational Marginal Prices in Distribution Systems.
8. A.J. Conejo, E. Castillo, R. Minguez and F. Milano, Locational Marginal Price Sensitivities.
9. L. Yang and C. Deng, A United Method for Sensitivity Analysis of the Locational Marginal Price Based on the Optimal Power Flow.
10. R.D. Christie, B.F. Wollenberg and I. Wangensteen, Transmission Management in the Deregulated Environment.
11. A. Kumar and S.C. Srivastava, AC Power Transfer Distribution Factors for Allocating Power Transactions in a Deregulated Market, IIT Kanpur, India.
12. S.C. Srivastava, Transmission System Management in Restructured Electricity Markets, IIT Kanpur, India.
13. A. Kumar and P. Kumar, Locational Marginal Prices with SVC in Indian Electricity Market, NIT Kurukshetra.
14. F. Li and R. Bo, DCOPF-Based LMP Simulation: Algorithm, Comparison with ACOPF, and Sensitivity.
15. I.J. Perez-Arriaga, L. Olmos and M. Rivier, Transmission Pricing.
16. F. Li and R. Bo, Congestion and Price Prediction under Load Variation.
17. T. Orfanogianni and G. Gross, A General Formulation for LMP Evaluation.
18. M.S. Javadi, Incidence Matrix-Based LMP Calculations: Algorithm and Applications, Islamic Azad University, Iran.
19. P. Ramachandran and R. Senthil, Locational Marginal Pricing Approach to Minimize Congestion in Restructured Power Markets, Anna University, Chennai, India.
20. M.B. Nappu, Locational Marginal Prices Scheme Considering Transmission Congestion and Network Losses, Hasanuddin University, Indonesia.
21. A. Swami, Transmission Congestion Impacts on Electricity Market: An Overview.
22. A.J. Wood and B.F. Wollenberg, Power Generation, Operation and Control, John Wiley and Sons, New York, (1996)
23. Md. I. Ahmed and S. Saurabh, DC-OPF for LMP Calculations in Wholesale Electricity Market, NIT Patna.
24. T. Wu, M. Rothleder, Z. Alaywan and A.D. Papalexopoulos, Pricing Energy and Ancillary Services in Integrated Market Systems by an Optimal Power Flow.
25. B.B. Chakrabarti, D. Godwin and N.K.C. Nair, Power System Congestion - In Search of an Index from LP Basis Matrices.
26. S.M.H. Nabavi, A. Kazemi and M.A.S. Masoum, Congestion Management Using Genetic Algorithm in Deregulated Power Environments.
27. B.K. Panigrahi, LMP in Deregulated Electricity Markets, IIT Roorkee.
28. A. Bhuiya and N. Chowdhury, Determination of Loss Factors in a Deregulated System.
29. K. Kamaldass, P. Kankaraj and M. Prabavathi, Locational Marginal Pricing in Restructured Power Market, Madurai, Tamil Nadu.
30. J. Momoh and L. Mili, Economic Market Design and Planning for Electric Power Systems.
31. D.S. Kirschen and G. Strbac, Power System Economics.
A Survey on the Generalizations of Association Scheme
Pankaj Kumar Manjhi and Arjun Kumar
University Department of Mathematics, Vinoba Bhave University, Hazaribag – 825301, INDIA
Abstract
In this paper a survey on the generalizations of the Association Scheme, obtained by weakening its conditions, is given. These generalizations are the Coherent Configuration, the Generalized Directed Association Scheme and the Frobenius Generalized Association Scheme. We study only the relationship of these generalizations with the Association Scheme on the basis of its defining conditions.
Keywords: Association Scheme; Coherent Configuration. AMS Subject Classification (2010): 05E30, 05CXX.
Association Scheme (AS)
Bose and Shimamoto [5] defined an Association Scheme as a set of non-empty relations C = {C0, C1, C2, …, Cm} on a finite set X which satisfies the following conditions:
(i) C is a partition of X × X, that is, C0 ∪ C1 ∪ … ∪ Cm = X × X;
(ii) C0 = Diag(X) = {(x, x) : x ∈ X}, the diagonal relation on X;
(iii) Ci is symmetric for i = 1, 2, …, m;
(iv) For all i, j, k in {0, 1, 2, …, m} there is an integer p^k_ij such that, for all (x, y) in Ck,
|{z ∈ X : (x, z) ∈ Ci and (z, y) ∈ Cj}| = p^k_ij.
Association schemes are also defined by the adjacency matrices A0, A1, …, Am of their associate classes, which have the following properties:
(i) None of the Ai is equal to the zero matrix OX, and A0 + A1 + … + Am = JX (the all-ones matrix);
(ii) A0 = IX;
(iii) Ai is symmetric for i = 1, 2, …, m;
(iv) For all i, j in {1, 2, …, m} the product AiAj is a linear combination of A0, A1, …, Am.
(Vide [2], [3], [4] and [9])
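As a concrete sanity check of these axioms, the script below (my own example, not taken from the paper) builds the cyclic association scheme on X = Z_5 with classes C1 = {(x, y) : y - x ≡ ±1 (mod 5)} and C2 = {(x, y) : y - x ≡ ±2 (mod 5)}, forms the adjacency matrices A0, A1, A2 and verifies each axiom, including the constancy of the intersection numbers p^k_ij.

```python
n = 5

def adjacency(diffs):
    """Adjacency matrix of the relation {(x, y): (y - x) mod n in diffs}."""
    return [[1 if (y - x) % n in diffs else 0 for y in range(n)] for x in range(n)]

A = [adjacency({0}), adjacency({1, n - 1}), adjacency({2, n - 2})]  # A0, A1, A2

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# (i) the matrices are nonzero and sum entrywise to the all-ones matrix J
assert all(any(any(row) for row in Ai) for Ai in A)
assert all(sum(Ai[i][j] for Ai in A) == 1 for i in range(n) for j in range(n))
# (ii) A0 is the identity;  (iii) each Ai is symmetric
assert A[0] == [[1 if i == j else 0 for j in range(n)] for i in range(n)]
assert all(Ai[i][j] == Ai[j][i] for Ai in A for i in range(n) for j in range(n))
# (iv) each product AiAj is a linear combination of A0, A1, A2: equivalently,
# the (x, y) entry of AiAj depends only on the class containing (x, y)
for Ai in A:
    for Aj in A:
        P = matmul(Ai, Aj)
        for Ak in A:
            vals = {P[x][y] for x in range(n) for y in range(n) if Ak[x][y] == 1}
            assert len(vals) == 1   # a single intersection number p^k_ij
print("all association scheme axioms verified for the cycle scheme on Z_5")
```

For instance, the check on A1A1 recovers the relation A1^2 = 2A0 + A2 for the pentagon, exactly the kind of linear combination axiom (iv) requires.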
Coherent Configuration (CC)
In 1967 D.G. Higman defined a Coherent Configuration as a set of non-empty relations C = {C1, C2, …, Cm} on a finite set X which satisfies the following conditions:
(i) C is a partition of X × X, that is, C1 ∪ C2 ∪ … ∪ Cm = X × X;
(ii) There exists a subset C0 of C which is a partition of the diagonal relation Δ = {(x, x) : x ∈ X};
(iii) For every relation Ci ∈ C, its converse C'i = C*i = {(x, y) : (y, x) ∈ Ci} is in C;
(iv) There exist integers p^k_ij for 1 ≤ i, j, k ≤ m such that for any (x, y) ∈ Ck the number of points z ∈ X with (x, z) ∈ Ci and (z, y) ∈ Cj is equal to p^k_ij (and, in particular, is independent of the choice of (x, y) ∈ Ck).
A CC is also defined by the adjacency matrices of the classes of C. If A1, A2, …, Am are the adjacency matrices of C1, C2, …, Cm respectively, then the axioms take the following form:
(i) A1 + A2 + … + Am = J;
(ii) There exists a subset of {A1, A2, …, Am} with sum I, the identity matrix;
(iii) The set {A1, A2, …, Am} is closed under transposition;
(iv) AiAj = Σ (k = 1 to m) p^k_ij Ak, where the p^k_ij are non-negative integers.
(Vide [1], [7] and [8])
Generalized Directed Association Scheme (GDAS)
In 2012 Singh and Manjhi [10] introduced the Generalized Directed Association Scheme (GDAS) as a set C = {C0, C1, C2, …, Cm} of binary relations on a finite set X (subsets of X × X) satisfying the following two conditions:
(i) C is a partition of X × X, that is, C0 ∪ C1 ∪ … ∪ Cm = X × X;
(ii) For all i, j, k in {0, 1, 2, …, m} there is an integer p^k_ij such that, for all (x, y) in Ck,
|{z ∈ X : (x, z) ∈ Ci and (z, y) ∈ Cj}| = p^k_ij.
A GDAS is also defined by the adjacency matrices of the classes of C. If A0, A1, …, Am are the adjacency matrices of C0, C1, …, Cm respectively, then the axioms take the following form:
(i) A0 + A1 + A2 + … + Am = J;
(ii) AiAj = Σ (k = 0 to m) p^k_ij Ak, where the p^k_ij are non-negative integers.
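A minimal sketch (my own example, not from the paper) of a GDAS that is not an association scheme: on X = Z_3 take C0 the diagonal, C1 = {(x, y) : y - x ≡ 1 (mod 3)} and C2 = {(x, y) : y - x ≡ 2 (mod 3)}. The classes partition X × X and have constant intersection numbers, so both GDAS conditions hold, but C1 is not symmetric (its converse is C2), so the AS symmetry condition fails.

```python
n = 3
# C0, C1, C2: pairs whose difference mod 3 is 0, 1, 2 respectively
C = [{(x, (x + d) % n) for x in range(n)} for d in range(n)]

# condition (i): the classes partition X x X
assert set.union(*C) == {(x, y) for x in range(n) for y in range(n)}
assert sum(len(c) for c in C) == n * n

# condition (ii): for each (i, j, k) the count of intermediate points z is
# the same for every (x, y) in Ck, i.e. a well-defined p^k_ij exists
for Ci in C:
    for Cj in C:
        for Ck in C:
            counts = {sum(1 for z in range(n) if (x, z) in Ci and (z, y) in Cj)
                      for (x, y) in Ck}
            assert len(counts) == 1

# the AS symmetry condition fails: the converse of C1 is C2, not C1 itself
assert {(y, x) for (x, y) in C[1]} == C[2] != C[1]
print("GDAS conditions hold on Z_3, but C1 is not symmetric")
```

This is exactly the sense in which a GDAS drops conditions (ii) and (iii) of the association scheme while keeping the partition and intersection-number conditions.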
Frobenius Generalized Association Scheme
In 2012 Singh and Manjhi [11] introduced the Frobenius Generalized Association Scheme (FGAS) as follows:
Let G be a Frobenius group of order n+1 with permutational representation Gp = {A0, A1, A2, …, An}, where the Ai (i = 0, 1, 2, …, n) are permutation matrices. Since AiAj = Ak for some k ∈ {0, 1, 2, …, n}, Gp constitutes a GAS, called a Frobenius Generalized Association Scheme (FGAS).
Example: G = S3 = {I,(12), (23), (13), (123), (132)}
Then A0 = I3 and (writing each 3 × 3 matrix row by row, rows separated by semicolons)
A1 = [0 1 0; 1 0 0; 0 0 1], A2 = [1 0 0; 0 0 1; 0 1 0], A3 = [0 0 1; 0 1 0; 1 0 0],
A4 = [0 0 1; 1 0 0; 0 1 0], A5 = [0 1 0; 0 0 1; 1 0 0],
where A1, A2, A3 are the permutation matrices of the transpositions (12), (23), (13) and A4, A5 those of the 3-cycles (123), (132).
We see that:
(i) A0Ai = AiA0 = Ai for all i = 1, 2, 3, 4, 5, and A0A0 = A0;
(ii) A1A1 = A0, A1A2 = A4, A2A1 = A5, A1A3 = A5, A3A1 = A4, A1A4 = A2, A4A1 = A3, A1A5 = A3, A5A1 = A2;
(iii) A2A2 = A0, A2A3 = A4, A3A2 = A5, A2A4 = A3, A4A2 = A1, A2A5 = A1, A5A2 = A3;
(iv) A3A3 = A0, A3A4 = A1, A4A3 = A2, A3A5 = A2, A5A3 = A1;
(v) A4A4 = A5, A4A5 = A0, A5A4 = A0;
(vi) A5A5 = A4.
Therefore {A0, A1, A2, A3, A4, A5} constitutes a FGAS.
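The closure property AiAj = Ak used above can be double-checked by machine. The sketch below (added for illustration) generates all six permutation matrices of S3 and verifies that the product of any two of them is again a member of the set.

```python
from itertools import permutations

n = 3

def perm_matrix(p):
    # p maps position j to p[j]; column j has its single 1 in row p[j]
    return tuple(tuple(1 if p[j] == i else 0 for j in range(n)) for i in range(n))

def matmul(P, Q):
    return tuple(tuple(sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

mats = {perm_matrix(p) for p in permutations(range(n))}   # the six matrices of S3
assert len(mats) == 6

# closure: every product of two permutation matrices is again in the set,
# which is exactly the property A_i A_j = A_k in the FGAS definition
for P in mats:
    for Q in mats:
        assert matmul(P, Q) in mats
print("S3 permutation matrices are closed under multiplication")
```

The same loop, run with the labelling A1 = (12), A2 = (23), A3 = (13), A4 = (123), A5 = (132), reproduces the multiplication table listed in (i)-(vi).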
Relationship of Generalizations With Association Scheme
We see that a Coherent Configuration is obtained from an Association Scheme by weakening conditions (ii) and (iii), a Generalized Directed Association Scheme is obtained by removing conditions (ii) and (iii) of the Association Scheme, and a Frobenius Generalized Association Scheme is obtained from an Association Scheme by removing conditions (i), (ii) and (iii). In future we will look for more generalizations of Association Schemes and their applications.
References
1. P.P. Alejandro, R.A. Bailey and P.J. Cameron, Association schemes and permutation groups, Discrete Mathematics, 266, 47-67 (2003)
2. R.A. Bailey, Generalised wreath product of association schemes, preprint, (2003)
3. R.A. Bailey, Association Schemes: Designed Experiments, Algebra and Combinatorics, Cambridge University Press, (2004)
4. R.A. Bailey and P.J. Cameron, Crested product of association schemes, submitted exclusively to the London Mathematical Society.
5. R.C. Bose and T. Shimamoto, Classification and analysis of partially balanced incomplete block designs with two associate classes, Journal of the American Statistical Association, 47, 151-184 (1952)
6. D.G. Higman, Intersection matrices for finite permutation groups, Journal of Algebra, 6, 22-42 (1967)
7. D.G. Higman, Coherent Configurations I, Geometriae Dedicata, 4, 1-32 (1975)
8. D.G. Higman, Coherent Configurations II, Geometriae Dedicata, 5, 413-426 (1976)
9. J.A. Nelder, The analysis of randomized experiments with orthogonal block structure I. Block structure and the null analysis of variance, Proceedings of the Royal Society of London, Series A, 283, 147-162 (1965)
10. M.K. Singh and P.K. Manjhi, Generalized Directed Association Scheme and its Applications, International J. of Math. Sci. and Engg. Appls., 6(III), 99-113 (2012)
11. M.K. Singh and P.K. Manjhi, Algebra of Generalized Association Scheme and its Applications, Ph.D. thesis, Ranchi University, Ranchi, India.
A Fixed Point Theorem Satisfying Compatibility
Dhruva Narayan Singh
Dept. of Mathematics, Chas College, Chas, Jharkhand, INDIA
Abstract
In this paper we establish a fixed point theorem using the notion of compatibility introduced by Naidu and Prasad (1986), under Gregus type and ψ type contractive conditions.
Introduction
In this paper we shall prove a common fixed point theorem in a 2-metric space using the notion of compatibility introduced by Naidu and Prasad. Our theorems extend and improve the results of Fisher and Murthy (1987), Naidu and Prasad (1986) and Singh and Singh (2010).
Fisher and Murthy (1987) proved the following result on metric space.
Theorem: Let f be a self-map on a complete metric space (M, ρ) such that
ρ²(fx, fy) ≤ α ρ(x, fx)·ρ(y, fy) + β ρ(x, fy)·ρ(y, fx)
for all x, y in M and some non-negative constants α, β with α < 1. Then f has a fixed point. Further, if β < 1, then f has a unique fixed point in M.
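A small numerical illustration (my own example, not from the papers cited): the map f(x) = x/2 on the complete metric space (R, |·|) satisfies the displayed condition with α = β = 0.9, which the script checks on a sample grid, and Picard iteration drives any starting point to the fixed point 0.

```python
def f(x):
    return x / 2.0

alpha, beta = 0.9, 0.9
grid = [i / 10.0 for i in range(-20, 21)]   # sample points in [-2, 2]
for x in grid:
    for y in grid:
        lhs = (f(x) - f(y)) ** 2
        rhs = (alpha * abs(x - f(x)) * abs(y - f(y))
               + beta * abs(x - f(y)) * abs(y - f(x)))
        assert lhs <= rhs + 1e-12   # the contractive condition holds

# Picard iteration converges to the unique fixed point 0
x = 1.0
for _ in range(60):
    x = f(x)
print(abs(x) < 1e-12)   # True
```

The grid check is of course not a proof; it only illustrates, on sample points, the kind of inequality the theorem assumes.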
Naidu and Prasad (1986) also proved the following result in a complete 2-metric space.
Theorem: Suppose (X, d) is a complete 2-metric space and f is a self-map on X such that
d²(fx, fy, a) ≤ α d(x, fx, a)·d(y, fy, a) + β d(x, fy, a)·d(fx, y, a)
for all x, y, a in X and for some non-negative constants α and β with α < 1. Then f has a fixed point in X.
Preliminaries
Now we give some basic definitions and well known results that are needed in the sequel.
Definition (2.1): Let X be a non-empty set and d : X × X × X → R+. If for all x, y, z and u in X we have:
(d1) d(x, y, z) = 0 if at least two of x, y, z are equal;
(d2) for all x ≠ y, there exists a point z in X such that d(x, y, z) ≠ 0;
(d3) d(x, y, z) = d(x, z, y) = d(y, z, x) = … and so on;
(d4) d(x, y, z) ≤ d(x, y, u) + d(x, u, z) + d(u, y, z);
then d is called a 2-metric on X and the pair (X, d) is called a 2-metric space.
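The standard concrete example (added here for illustration; it is not in the paper) takes X = R² and d(x, y, z) = the area of the triangle with vertices x, y, z. The script below verifies (d1)-(d4) on a few sample points.

```python
import itertools

def d(x, y, z):
    # half the absolute cross product of the edge vectors = triangle area
    return abs((y[0] - x[0]) * (z[1] - x[1]) - (z[0] - x[0]) * (y[1] - x[1])) / 2.0

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 3.0)]

# (d1) zero whenever two arguments coincide (a degenerate triangle)
assert d((1, 2), (1, 2), (3, 4)) == 0
# (d2) for x != y there is some z with d(x, y, z) != 0
assert d((0, 0), (1, 0), (0, 1)) != 0
# (d3) symmetry in all three arguments
for x, y, z in itertools.permutations(pts[:3]):
    assert d(x, y, z) == d(*pts[:3])
# (d4) the tetrahedral inequality d(x,y,z) <= d(x,y,u) + d(x,u,z) + d(u,y,z)
for x, y, z, u in itertools.permutations(pts):
    assert d(x, y, z) <= d(x, y, u) + d(x, u, z) + d(u, y, z) + 1e-12
print("triangle-area function satisfies the 2-metric axioms on the samples")
```

The checks run over sample points only; the area function is in fact a 2-metric on the whole plane, and this is the example usually kept in mind for the definitions that follow.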
Definition (2.2): A sequence {xn}, n ∈ N, in a 2-metric space (X, d) is said to be a Cauchy sequence if lim (m, n → ∞) d(xm, xn, a) = 0 for all a ∈ X.
Definition (2.3): A sequence {xn}, n ∈ N, in a 2-metric space (X, d) is said to be convergent to a point x if lim (n → ∞) d(xn, x, a) = 0 for all a ∈ X. The point x is called the limit of the sequence.
Definition (2.4): A 2-metric space (X, d) is said to be complete if every Cauchy sequence in X is convergent.
Definition (2.5): A pair {f1, f2} of self-maps on a 2-metric space (X, d) is said to be a compatible pair if lim (n → ∞) d(f1f2xn, f2f1xn, a) = 0 for all a ∈ X, whenever {xn}, n ∈ N, is a sequence in X such that lim (n → ∞) f1(xn) = lim (n → ∞) f2(xn) = t for some t ∈ X.
Results
Theorem (3.1): Let E, F and T be three self-maps of a complete 2-metric space (X, d) such that:
(i) T is continuous;
(ii) {E, T} and {F, T} are compatible pairs;
(iii)
E(X)⊆T(X) and F(X)⊆T(X)
(iv)
d2(Ex,Fy,a) ≤ αd(Tx,Ex,a).d(Ty,Fy,a) + βd(Tx,Fy,a).d(Ex,Ty,a) + γd(Tx,Ty,a).d(Ex,Fy,a)
for all x, y, a in X and for some non-negative constants α, β, γ with α+γ < 1.
Then E, F and T have a common fixed point in X. Further, if β+γ < 1, then E, F and T have a unique common fixed point in X.
Proof: Let x0 be an arbitrary point of X. Since E(X)⊆T(X) and F(X)⊆T(X), we can choose a point x1 in X such that Tx1=Ex0 and a point x2 in X such that Tx2=Fx1. In general,
Tx2p+1=Ex2p and Tx2p+2=Fx2p+1 for p=0,1,2,……….
Now we first prove that d(Tx2p,Tx2p+1,Tx2p+2)=0.
d2(Tx2p,Tx2p+1,Tx2p+2)
= d2(Ex2p,Fx2p+1,Tx2p)
≤ αd(Tx2p,Ex2p,Tx2p).d(Tx2p+1,Fx2p+1,Tx2p)
+ βd(Tx2p,Fx2p+1,Tx2p).d(Ex2p,Tx2p+1,Tx2p)
+ γd(Tx2p,Tx2p+1,Tx2p).d(Ex2p,Fx2p+1,Tx2p)
= 0,
since in each term one factor has two equal arguments and so vanishes by (d1).
i.e. d(Tx2p,Tx2p+1,Tx2p+2) = 0
Now we consider
d2(Tx2p,Tx2p+1,a)
= d2(Fx2p-1,Ex2p,a)
= d2(Ex2p,Fx2p-1,a)
≤ αd(Tx2p,Ex2p,a).d(Tx2p-1,Fx2p-1,a)
+ βd(Tx2p,Fx2p-1,a).d(Ex2p,Tx2p-1,a)
+ γd(Tx2p,Tx2p-1,a).d(Ex2p,Fx2p-1,a)
= αd(Tx2p,Tx2p+1,a).d(Tx2p-1,Tx2p,a)
+ βd(Tx2p,Tx2p,a).d(Tx2p+1,Tx2p-1,a)
+ γd(Tx2p,Tx2p-1,a).d(Tx2p+1,Tx2p,a)
= (α+γ) d(Tx2p,Tx2p+1,a).d(Tx2p-1,Tx2p,a)
i.e. d(Tx2p,Tx2p+1,a)≤ (α+γ)d(Tx2p-1,Tx2p,a)
Again,
d2(Tx2p+1,Tx2p+2,a)
= d2(Ex2p,Fx2p+1,a)
≤ αd(Tx2p,Ex2p,a).d(Tx2p+1,Fx2p+1,a)
+ βd(Tx2p,Fx2p+1,a).d(Ex2p,Tx2p+1,a)
+ γd(Tx2p,Tx2p+1,a).d(Ex2p,Fx2p+1,a)
= αd(Tx2p,Tx2p+1,a).d(Tx2p+1,Tx2p+2,a)
+ βd(Tx2p,Tx2p+2,a).d(Tx2p+1,Tx2p+1,a)
+ γd(Tx2p,Tx2p+1,a).d(Tx2p+1,Tx2p+2,a)
= (α+γ) d(Tx2p,Tx2p+1,a).d(Tx2p+1,Tx2p+2,a)
i.e. d(Tx2p+1,Tx2p+2,a)
≤ (α+γ) d(Tx2p,Tx2p+1,a)
= h d(Tx2p,Tx2p+1,a), where h = (α+γ) < 1
≤ h^2 d(Tx2p-1,Tx2p,a)
⋮
≤ h^(2p+1) d(Tx0,Tx1,a)
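The chain above is the standard geometric-decay argument: each consecutive distance is at most h times the previous one, so d(Txn,Txn+1,a) shrinks like h^n and {Txn} is a Cauchy sequence. A toy numerical illustration of such decay, using an assumed contraction on R rather than the maps E, F, T of the theorem:

```python
# Toy illustration of the decay d(t_n, t_{n+1}) <= h * d(t_{n-1}, t_n):
# Picard iteration of the contraction f(x) = x/2 + 1 on R, whose
# fixed point is x = 2.
def f(x):
    return x / 2 + 1

h = 0.5           # contraction factor
x = 10.0          # arbitrary starting point
gaps = []         # successive distances |t_{n+1} - t_n|
for _ in range(20):
    nx = f(x)
    gaps.append(abs(nx - x))
    x = nx

# each gap is exactly h times the previous one ...
assert all(abs(g1 - h * g0) < 1e-12 for g0, g1 in zip(gaps, gaps[1:]))
# ... so the iterates form a Cauchy sequence converging to the fixed point
assert abs(x - 2.0) < 1e-4
```

The same bound h^n d(Tx0,Tx1,a) is what makes the tail sums in the theorem's sequence vanish, giving the Cauchy property.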
Hence {Txn} is a Cauchy sequence and, since X is complete, it converges to some point u in X; consequently the subsequences {Ex2p}, {Fx2p+1} and {Tx2p} all converge to u. As {E,T} and {F,T} are compatible pairs, d(ETx2p,TEx2p,a)→0 and d(FTx2p+1,TFx2p+1,a)→0 for all a in X. Since {Ex2p} and {Tx2p} converge to the same limit u, compatibility and the continuity of T give:
lim_{p→∞} ETx2p = lim_{p→∞} TEx2p
i.e. E(lim_{p→∞} Tx2p) = T(lim_{p→∞} Ex2p)
so that Eu = Tu. Similarly, Fu = Tu.
We now show that Eu = u, as follows:
d2(Eu,u,a)
≤ [d(Eu,u,Tx2p+2)+d(Eu,Tx2p+2,a)+d(Tx2p+2,u,a)]2
= d2(Eu,u,Tx2p+2)+d2(Tx2p+2,u,a)+d2(Eu,Fx2p+1,a)+.......
≤ d2(Eu,u,Tx2p+2)+d2(Tx2p+2,u,a)
+αd(Tu,Eu,a)d(Tx2p+1,Fx2p+1,a)
+βd(Tu,Fx2p+1,a)d(Eu,Tx2p+1,a)
+γd(Tu,Tx2p+1,a)d(Eu,Fx2p+1,a)
When p→∞, Tx2p+2→u, Tx2p+1→u and Fx2p+1→u; since Eu=Tu, we have:
d2(Eu,u,a)
≤ βd(Tu,u,a).d(Eu,u,a) + γd(Tu,u,a).d(Eu,u,a)
= (β+γ)d(Tu,u,a).d(Eu,u,a)
= (β+γ)d2(Eu,u,a), since Eu = Tu
i.e. d(Eu,u,a) ≤ (β+γ)d(Eu,u,a), which, as β+γ < 1, forces d(Eu,u,a) = 0 for all a.
Hence Eu = u; similarly we can show that Fu = u.
Since Eu = Tu and Eu = u, we also have Tu = u.
Thus Eu = Fu = Tu = u, i.e. u is a common fixed point of E, F and T.
Uniqueness: If possible, let v be another common fixed point. Then:
d2(u,v,a)
= d2(Eu,Fv,a)
≤ αd(Tu,Eu,a).d(Tv,Fv,a)
+ βd(Tu,Fv,a).d(Eu,Tv,a)
+ γd(Tu,Tv,a).d(Eu,Fv,a)
= αd(u,u,a).d(v,v,a) + βd(u,v,a).d(u,v,a) + γd(u,v,a).d(u,v,a)
= (β+γ)d2(u,v,a)
i.e. d(u,v,a) ≤ (β+γ)d(u,v,a), which, as β+γ < 1, forces d(u,v,a) = 0 for all a.
Hence u = v by (d2), i.e. u is the unique common fixed point of E, F and T.
References
1. S. Gahler, 2-metrische Räume und ihre topologische Struktur, Math. Nachr., 26, 115-148 (1963)
2. L. Gajic and M. Stojakovic, On compatible mappings in fixed point theory, Univ. u Novom Sadu Zb. Rad. Prirod.-Mat. Fak. Ser. Mat., 24(2), 39-51 (1994)
3. L. Shambhu Singh and L. Sharmeshwar Singh, Some fixed point theorems in 2-metric space, International Transactions in Mathematical Sciences and Computer, 3(1), 121-129 (2010)
4. P.P. Murthy, S.S. Chang, Y.J. Cho and B.K. Sharma, Compatible mappings of type (A) and common fixed point theorems, Kyungpook Math. J., 32, 203-216 (1992)
5. M.A. Ahmed, Some common fixed point theorems for weakly compatible mappings in metric spaces, Fixed Point Theory and Applications, Volume 2009, Article ID 804734 (2009)
6. S.V.R. Naidu and Rajendra Prasad, Fixed point theorems in 2-metric spaces, Indian J. Pure Appl. Math., 17(8), 974-993 (1986)
Dispersal of Arsenic into Damodar River: A Mathematical Model
Shafique Ahmad1 and Shibajee Singha Deo2
1Dept. of Mathematics, B.D.A. College, Pichari, Bokaro, Jharkhand, INDIA
2Dept. of Mathematics, A.M. College, Jhalda, West Bengal, INDIA
Abstract
The Damodar river, in its course through a highly populated and highly industrialized coalfield area, has become a repository of many types of waste produced by various human activities: industrial, agricultural and domestic. The polluted plight of the river worsens with passing time due to the continuous and uncontrolled discharge of toxic and hazardous effluents into it by over 46 industries located on its banks or in its vicinity, particularly in the Bokaro-Dhanbad areas of Jharkhand. A mathematical model is employed as it captures the essential physics of arsenic deposition. It gives a detailed distribution of arsenic contamination, especially in the high-dosage region near the point of dispersal.
Keywords: Damodar river; wedge model; arsenic; arsenolized; downstream; Dhanbad.
Introduction
Historically, arsenic is known as a poison. It is not often present in its elemental state but is more common in sulfides and sulfosalts such as arsenopyrite. In some countries, arsenic is the most important chemical pollutant in groundwater and drinking water. The Bengal delta region is particularly affected, as an estimated population has been drinking arsenic-rich water for the past 20-30 years. According to Chowdhury et al.1, examination for arsenical dermatologic symptoms in 29 thousand people showed that 15% had skin lesions. Arsenic removal from groundwater by household sand filters was studied by Berg et al.2. Buschmann et al.3 describe arsenic and manganese pollution in the upper Mekong delta. Newman et al.4 also indicated that microorganisms play critical roles in both the reduction and removal of As(V) from groundwater. Polya et al.5 described that natural contamination of groundwater by arsenic is also an emerging issue in some countries of Southeast Asia, including Vietnam, Thailand, Cambodia and Myanmar. Vulnerable areas for arsenic contamination are typically young Quaternary deltaic and alluvial sediments comprising highly reducing aquifers. The development of symptoms of chronic arsenic poisoning is strongly dependent on exposure time and the resulting accumulation in the body. The various stages of arsenicosis are characterized by skin pigmentation, keratosis, skin cancer, effects on the cardiovascular and nervous systems, and increased risk of lung, kidney and bladder cancer. The European Union allows a maximum arsenic concentration of 10 µg/L in drinking water, and the World Health Organization (WHO) recommends the same value. In contrast, developing countries are struggling to establish and implement measures to reach standards of 50 µg/L in arsenic-affected areas. Drinking water supplies in Cambodia and Vietnam are dependent on groundwater resources.
The Mekong and the Red River deltas are the most productive agricultural regions of South East Asia. Both deltas have young sedimentary deposits of Holocene and Pleistocene age. The ground waters are usually strongly reducing, with high concentrations of iron, manganese and ammonium. The Mekong and Red River deltas are currently exploited for drinking water supply using installations of various sizes. In the last 7-10 years a rapidly growing rural population has stopped using surface water or water from shallow dug wells, because these are prone to contamination by harmful bacteria. Instead, it has become popular to pump groundwater, which is relatively free of pathogens, using individual private tube-wells. The Vietnamese capital Hanoi is situated in the upper part of the 11,000 km² Red River delta, which is inhabited by 11 million people and is one of the most populous areas in the world. Due to naturally anoxic conditions in the aquifers, the ground waters contain large amounts of iron and manganese, which are removed in Hanoi's drinking water plants by aeration and sand filtration, as discussed by Duong et al.6. Understanding the mobility of arsenic in subsurface environments is important for evaluating its possible environmental and economic effects, as studied by Williams et al.7. Islam et al.8 suggested that arsenic adsorbed onto sediment surfaces could be mobilized into groundwater by anaerobic respiration of Fe(III)-reducing bacteria. According to Smedley and Kinniburgh9, arsenic is carcinogenic and can also cause other human health effects, such as blackfoot disease and diabetes. According to Feldman and Rosenboom10, the upper and lower Quaternary aquifers were investigated by analyzing ground waters from small-scale tube-wells in the Cambodian Mekong delta area in 2000, and the problem has since been investigated and addressed through close collaboration of local authorities and NGOs. In this paper, the arsenic levels in the Damodar river of the Dhanbad region are presented, which are reported for the first time. In addition to an overview of the magnitude of arsenic poisoning in this region, the limited information available in the literature on the geology and genesis of the Damodar river is summarised.
Nomenclature
α : wedge angle at the point of disposal
i : average distance travelled by an arsenic particle before it gets fully deposited
t : time
u : water velocity
r : distance = u·t
v : deposition velocity of arsenic
d(r) : radial spread (thickness)
C(r) : arsenic concentration
b ( /s) : water-intake rate
M0 : total mass of arsenic that becomes arsenolized into intake form
m(r) : amount of arsenic present in the plume as a dispersal (i.e. after time t = r/u)
h : depth of mixing layer of river
Let the point of dispersal of arsenic be at the point C in the Damodar river at the city of Dhanbad. All of the arsenic is oxidized into arsenic oxide. The arsenic oxide mixes with water and spreads, and the flow of the river transports it a considerable distance. Upon disposal, the arsenic oxide rapidly mixes up; this forms the mixing layer of the water. Let h (= CD) be the depth of the mixing layer. We assume that this depth remains constant as the flow moves at a constant speed u.
Let the velocity of water in the river be u; the plume will then be centered, after time t, at a distance r (= ut), with radial (downstream) spread d(r). The cross-flow spread (that is, perpendicular to the flow direction) is taken to be an arc. Let this arc subtend a wedge angle α at the disposal point C. The annular segment gives the horizontal section of the flow at some instant of time t.
Figure-1
Schematic representation of arsenic disposal at point C of Damodar River.
Development of Mathematical Model
Let the arsenic concentration C(r) be uniform at any time t. Let M(r) be the amount of arsenic present in the flow at distance r (= DE). Assume that the amount deposited on the ground is proportional to the amount present in the river. So we have
M(r) =  (1)
where M0 is the total mass of arsenic that became arsenolized into drinking form and i is the average distance traversed by an arsenic particle before it gets deposited. So
 =  (2)
where v is the deposition velocity of the arsenic. If the amount deposited per unit area is denoted by , this gives
 = ─  (3)
The volume of water at a specific point, said to be the plume after arsenic deposition, is .
If the concentration is C(r, α, z), then
C(r, α, z) =  (4)
Now consider a person immersed in the river. Since the plume has thickness d(r), it will pass the person in time
 (5)
If the water-intake rate is b  /s, the person immersed in the river will take in water of volume b , containing an amount of arsenic (in mg) given by
 =  =  (6)
This shows that  is independent of the thickness d(r).
Let the population density per unit area, within the specified area of the Damodar river where arsenic deposition occurs, be P(r, ). If the total amount of arsenic taken (through water) by the population is denoted by , then
 =  =  (7)
If the population density is uniform, say , then this reduces to
 =  =  from (2) (8)
Figure-2
Schematic representation of intake of arsenic (through river water) by the population in the city Dhanbad
Assume that the city Dhanbad, D, of width  and length , lies downstream of the flow of the Damodar river within the wedge angle , at a distance  from the place C where the deposition of arsenic occurs (Figure 2). Further assume that the city is an annular piece at a distance  from the disposal point and subtends an angle  = .
Over the city D we can assume the population density to be uniform, equal to . Thus from equation (6) we have
 =  =  dr =  (8)
Results from equations (7) and (8) can be applied to the arsenic intake by the population from the original contaminated plume as it moves downstream. For numerical calculations we take the following admissible values of the parameters:
 mg,  = population density /sq m
b = water-intake rate of arsenic = 3.3×
Range of arsenic deposition velocity: 0.001 ≤ v ≤  m/sec
u = downstream velocity =  m/sec
h = mixing depth = 500 cm
 = 50 km
 = 10 km
 =  = 50
For these values (7) becomes
 =  (9)
and equation (8) gives
 =  (10)
where 0.1 ≤ α ≤ 0.3.
Taking these values, the estimates for  and  are made for different values of v, u and α. The results of the calculations are entered in Tables 1 to 4.
Table-1
Values of  for different values of v and , using equation (10)

v ↓        .5×       .75×      1×        1.5×      2×
0.001      1650      2475      3300      4950      6600
0.005      330       495       660       990       1320
0.01       165       247.5     330       495       660
0.05       33        49.5      66        99        132
0.1        16.5      24.75     33        49.5      66
Table-2
Variation of  against α for u = 0.01 m/s

α ↓        .75×      1.5×      2×
0.1        33.0091   66.018    88.0240
0.2        16.5045   33.009    44.0120
0.3        11.0030   22.006    29.3413
Table-3
Variation of  against α for u = 0.05 m/s

α ↓        .75×      1.5×      2×
0.1        6.6018    13.3036   17.6048
0.2        3.3040    6.6018    8.8024
0.3        2.2006    4.4012    5.8682
Table-4
Variation of  against α for u = 0.1 m/s

α ↓        .75×      1.5×      2×
0.1        3.3009    6.6018    8.8024
0.2        1.6504    3.3009    4.4012
0.3        1.1003    2.2006    2.9341
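The entries of Tables 2 to 4 are mutually consistent with a single scaling law, value ≈ k·s/(α·u), where s is the population multiplier in the column header and k is one constant. This closed form is inferred here from the tabulated numbers, since the defining equations were not reproduced above, so treat it as a consistency check rather than the authors' formula. A sketch calibrating k from one entry of Table-4 and predicting the rest:

```python
# Hypothesis inferred from Tables 2-4: A ≈ k * s / (alpha * u),
# where s is the population multiplier in the column header.
table4 = {  # (s, alpha) -> tabulated value, for u = 0.1 m/s
    (0.75, 0.1): 3.3009, (0.75, 0.2): 1.6504, (0.75, 0.3): 1.1003,
    (1.5, 0.1): 6.6018,  (1.5, 0.2): 3.3009,  (1.5, 0.3): 2.2006,
    (2.0, 0.1): 8.8024,  (2.0, 0.2): 4.4012,  (2.0, 0.3): 2.9341,
}
u = 0.1
# calibrate k from a single entry, then predict every other entry
k = table4[(0.75, 0.1)] * 0.1 * u / 0.75
for (s, alpha), val in table4.items():
    pred = k * s / (alpha * u)
    assert abs(pred - val) / val < 5e-3, (s, alpha, pred, val)
# the same k also matches Tables 2 and 3 up to apparent misprints
```

The fit supports the qualitative conclusions below: the intake scales linearly with the population factor and inversely with both the wedge angle α and the downstream velocity u.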
Conclusion
The mathematical model is employed as it captures the essential physics of arsenic deposition. It gives a detailed distribution of arsenic contamination, especially in the high-dosage region near the point of dispersal. Calculations for  (i.e. the total amount of arsenic taken through water when the population density is uniform) and  (the arsenic intake by the city's population during passage of the plume) have been made by taking empirical data for the parameters involved.
By numerical analysis of the entries made in Tables 1 to 4 we arrive at the following conclusions:
(i) The effect of arsenic intake ( ) during the plume increases as the population size increases for a given value of the arsenic deposition velocity v; but for a fixed population size, the effect of arsenic intake during the passage of the plume decreases as the downstream velocity u increases (Table 1).
(ii) Arsenic absorption by the people in the city where the deposition from the factory occurs, and by the people in the adjoining region, is obtained by computing . For a fixed population size,  decreases as the wedge angle α increases; for a fixed value of the wedge angle,  increases as the population size increases (see Tables 2 to 4).
(iii) As the deposition velocity v of arsenic increases,  decreases for all values of α and u (see Tables 2 to 4).
References
1. U.K. Chowdhury, B.K. Biswas, G. Samanta and B.K. Mandal, Groundwater arsenic contamination in Bangladesh and West Bengal, India, Environ. Health Perspect., 108(5), 393-397 (2000)
2. M. Berg, S. Trang, P.K.T. Luzi and S. Stuben, Arsenic removal from groundwater by household sand filters: comparative field study, model calculations, and health benefits, Environ. Sci. Technol., 40, 5567-5573 (2006)
3. J. Buschmann, M. Berg, C. Stengel and M.L. Sampson, Arsenic and manganese pollution in the upper Mekong delta, Cambodia: comprehensive groundwater survey, Environ. Sci. Technol., 37(212), 26-34 (2011)
4. D.K. Newman, D. Ahmann and F.M.M. Morel, A brief review of microbial arsenate respiration, Geomicrobiol. J., 15, 255-268 (1998)
5. D.A. Polya, A.G. Gault and N. Diebe, Arsenic hazard in shallow Cambodian groundwaters, Mineral. Mag., 69(5), 807-823 (2005)
6. H.A. Duong, M.H. Hoang and W. Giger, Trihalomethane formation by chlorination of ammonium- and bromide-containing groundwater in water supplies of Hanoi, Vietnam, Water Res., 37(42), 4-52 (2003)
7. L.E. Williams, M.O. Barnett, T.A. Kramer and J.G. Melville, Adsorption and transport of arsenic(V) in experimental subsurface systems, J. Environ. Qual., 32, 841-850 (2003)
8. F. Islam, A. Gault, C. Boothman, D. Polya, J. Charnock, D. Chatterjee and J. Lloyd, Role of metal-reducing bacteria in arsenic release from Bengal delta sediments, Nature, 430, 68-71 (2004)
9. P.L. Smedley and D.G. Kinniburgh, A review of the source, behaviour and distribution of arsenic in natural waters, Appl. Geochem., 17, 517-568 (2002)
10. P.R. Feldman and J.W. Rosenboom, Cambodia Drinking Water Quality Assessment, Phnom Penh, Cambodian Ministry of Industry, Mines and Energy (2001)
A Survey of Vertical Handoff Schemes in Vehicular Ad-Hoc Networks
Sadip Midya, Koushik Majumder and Asmita Roy
Department of CSE, WBUT, Kolkata, India
Abstract
With the growth in population in recent times, the number of vehicles on the road has also increased drastically, and with this increase in vehicle numbers, accident rates and traffic congestion have increased proportionately. Having a way of establishing communication between vehicles can reduce accident rates and traffic congestion significantly. VANET (vehicular ad-hoc networks) is the study of connection and communication between vehicles on the road and the infrastructure that supports them. Establishing a connection between two static vehicles or nodes is a concern, but establishing a connection between two moving nodes is a challenge. A vehicle at some point can leave its home network and move to a new network; it is then required to perform a handoff procedure between the two networks. Handoff mechanisms can be divided into two classes: horizontal handoff and vertical handoff. Here, a survey is done on various vertical handoff schemes, and a detailed comparison is made between them based on the different layers involved in handoff, the various technologies used by the different handoff mechanisms, etc. Some improvements over the present handoff mechanisms are proposed, leading to a wide area of research in this field.
Keywords: VANET, Vertical Handoff, MAG, MHVA, GVMM, VHDS, NEMO, DHCP, CoA
Introduction
The drastic development of technologies involving vehicles requires the safety of vehicles in traffic. So, nowadays, car manufacturers are working with government agencies to increase on-road safety and ease of traffic1. In Vehicular Ad-hoc Networks (VANET), maintaining a continuous connection is very challenging, as a VANET needs to maintain two types of communication: V2V (Vehicle-to-Vehicle communication) and V2I (Vehicle-to-Infrastructure communication). Some of the technologies that are useful in vehicle-to-vehicle communication are DSRC (Dedicated Short Range Communication) and WAVE (Wireless Access for Vehicular Environments), while vehicle-to-infrastructure communication uses GPRS, WiFi or WiMAX. Any Internet application needs continuous connectivity to function properly. For stationary objects, maintaining a continuous connection may not be a major concern, but in a VANET, where a vehicle moves between various networking environments, it needs to switch between different networks in different areas. A frequent switch between networks may cause degradation in the performance of the system. This switch between networks is known as handoff, or sometimes handover. Handoff can be defined as a process which is triggered when a vehicle switches network areas without interruption or loss of service2.
Handoff Classification: Handoffs can be classified in two ways according to the network technology involved:
Horizontal Handoff: This is the traditional handoff mechanism, also known as intra-system handoff. Horizontal handoff occurs when the MS (Mobile Station) switches between different BSs (Base Stations) or APs (Access Points) of the same radio access network.
Vertical Handoff: In modern times, where there is a variety of network technologies, we deal with networks of great diversity and heterogeneity. So, we require a handoff mechanism that deals not only with switching between networks, but also with heterogeneous networks having different wireless access technologies. The vertical handoff, or inter-network handoff, scheme supports handoff between two heterogeneous networks.
The rest of the paper is organized as follows. Section II provides a review and survey of various vertical handoff schemes in VANET. Section III presents a comparison table of the schemes reviewed in Section II. Finally, the conclusion and future scope are given in Section IV.
Survey on Various Vertical Handoff Schemes
A Mobility Handover Scheme for IPv6-Based Vehicular Ad Hoc Networks[3]
A new scheme, MHVA, is described in this section. In MHVA, each vehicle is uniquely identified by its home address; hence no CoA (Care-of Address) is required during mobility, so the mobility HO (handoff) cost is reduced and the HO delay is shortened. Here the handover in the network layer is completed before the HO in the link layer; thus a vehicle can keep its connection with its AP (Access Point) in the link layer and continue receiving packets from the AP.
In MHVA, when a vehicle joins the VANET it acquires a home address (IPv6) and is identified by this home address throughout its lifetime. The vehicle's Home Agent stores the local vehicle table. A local vehicle table has two entries, namely: i) the IPv6 address of a vehicle; ii) the IPv6 address of an AR (Access Router). Between the Home Agents, the ARs identify the subnet where the vehicle is located. The AR
stores the vehicle routing table, which also has two entries, namely: i) the IPv6 address of a vehicle; ii) the IPv6 address of an AP, where the AP is the associated AP of the vehicle. Each AP stores the neighbor AP table, also having two entries: i) the IPv6 address of a neighbor AP; ii) the relative orientation of that neighbor AP. Even if an omni-directional antenna is used, it is possible to obtain the angle between two APs by sending beacon frames and using the AoA method.
An AP can get the relative orientation with respect to a neighboring AP using beacon frames sent from that neighboring AP and the AoA (Angle of Arrival)4 method. In the same way, an AP can determine whether a vehicle is leaving its communication area by measuring the RSS of the vehicle. It can also get the relative orientation of vehicles with the help of beacon frames and the AoA method.
When the AP of a vehicle detects that the vehicle is going to leave its communication area, the AP determines the next associated AP (NAAP) of the vehicle according to the information collected on the relative orientation of the vehicle and the relative orientations of the neighbor APs: it selects the neighbor AP whose relative orientation is equal to the relative orientation of the vehicle.
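The NAAP selection described above amounts to an orientation match: the current AP compares the vehicle's AoA-derived relative orientation against its neighbor-AP table and picks the closest entry. A hypothetical sketch of this step (the table layout and the closest-match rule are illustrative assumptions, not specified by the scheme):

```python
# Hypothetical sketch of MHVA next-associated-AP (NAAP) selection.
# neighbor_table: list of (neighbor AP IPv6 address, relative orientation
# in degrees), mirroring the two-entry neighbor AP table of the scheme.

def angle_diff(a, b):
    """Smallest absolute difference between two bearings, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def select_naap(vehicle_orientation, neighbor_table):
    """Pick the neighbor AP whose relative orientation best matches
    the departing vehicle's relative orientation."""
    return min(neighbor_table, key=lambda e: angle_diff(e[1], vehicle_orientation))

neighbors = [
    ("2001:db8::ap1", 30.0),   # north-east neighbor
    ("2001:db8::ap2", 170.0),  # southern neighbor
    ("2001:db8::ap3", 300.0),  # north-west neighbor
]
naap, _ = select_naap(320.0, neighbors)  # vehicle leaving on bearing ~320 deg
assert naap == "2001:db8::ap3"
```

The `angle_diff` wrap-around handling matters here: a vehicle heading 350 deg is only 40 deg away from an AP at 30 deg, not 320 deg.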
This scheme is applicable in two scenarios: a) handoff within a subnet; b) handoff between subnets. When a vehicle is inside a subnet, the AP within its one-hop scope is responsible for the mobility HO process, while when a vehicle is moving between subnets, the neighbor vehicle within its one-hop scope is responsible for the mobility HO process. The respective vehicle does not itself take part in the HO process.
Advantages and Disadvantages: The MHVA scheme is advantageous because it completes its handover in the network layer before completing the handover in the link layer, so the devices are still connected in the link layer while the handover takes place in the network layer. It does not use a CoA to identify each vehicle but identifies each vehicle by its home address, which removes the cost of calculating a CoA for each vehicle entering a new network area. The only disadvantage of this scheme is that if a large number of vehicles enters the subnet, it increases the load on the VANET and the computation overhead.
(a)
(b)
Figure-1
(a) Mobility Handoff Within a Subnet (b) Message Flow diagram
(a)
(b)
Figure-2
(a) Mobility Handoff in Between Subnets (b) Message Flow Diagram
A Cross Layer Fast Handover Scheme in VANET [5]
This scheme introduces a fast handover scheme for VANET known as VFHS (Vehicular Fast Handoff Scheme). The most powerful and efficient way to access the Internet in a transportation environment is the multi-hop technique; an example is IEEE 802.16j Worldwide Interoperability for Microwave Access Mobile Multi-hop Relay (WiMAX MMR).
VFHS improves the L2 (link-layer) handover process to solve the dis-connectivity problem between sedans and coaches or access points. In [7], VFHS improves layer-2 handover performance by utilizing topology information broadcast by oncoming small-sized vehicles.
The vehicles on the freeway may move at any speed, as long as their relative speeds are supported by MMR WiMAX. There are mainly three kinds of vehicles in this model: RV (Relay Vehicle), BV (Broken Vehicle) and OSV (Oncoming Small-sized Vehicle).
RV (Relay Vehicle) – The relay vehicle is a large vehicular coach which has the capabilities of relaying and of mobility management of the MMR WiMAX network for its neighboring vehicles. An RV, along with the vehicles in its transmission range, forms a cluster.
Broken Vehicle (BV) – A small vehicle which is outside the transmission range of any RV and is willing to connect to an RV is known as a broken vehicle.
Oncoming small-size vehicle (OSV) – An OSV is a vehicle that is moving in the direction opposite to the RV and BV and has no packets to transmit. An OSV collects physical-layer information of the RV and provides this information to the BV in the opposite lane with the help of a cross-layer network topology message, the NTM (Network Topology Message).
VFHS adopts a dynamic approach to handoff, using a cross-layer design to send messages across devices. It sends an NTM, which is a cross-layer message comprising physical- and MAC-layer information: the physical-layer part contains information such as position and channel, whereas the MAC-layer part contains information on WiMAX. The position and channel information of an RV are accumulated and abstracted by an OSV. The OSV, along with the information of the RV, also inserts its own position information and broadcasts the NTM to oncoming BVs. On receiving the NTM, a BV adjusts the channel frequency of its WiMAX adapter to the channel frequency of the RV in front of it, by comparing locations. So the BV, instead of searching all channel frequencies in the physical layer, just searches for the RV in front.
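The BV's channel adjustment described above can be sketched as follows: from the NTMs it has received, the BV picks the nearest RV ahead of it and tunes directly to that RV's channel instead of scanning every frequency. A hypothetical sketch (the NTM field names and the nearest-ahead rule are illustrative assumptions based on the description above):

```python
# Hypothetical sketch of VFHS channel selection performed by a
# broken vehicle (BV) on receiving NTMs relayed by oncoming OSVs.
from dataclasses import dataclass

@dataclass
class NTM:                 # cross-layer Network Topology Message
    rv_id: str             # relay vehicle identifier (MAC-layer info)
    rv_position: float     # RV position along the road, metres (PHY info)
    rv_channel: int        # RV's WiMAX channel (PHY info)

def tune_to_front_rv(bv_position, ntms):
    """Pick the nearest RV ahead of the BV and return its channel,
    instead of scanning all channel frequencies in the physical layer."""
    ahead = [m for m in ntms if m.rv_position > bv_position]
    if not ahead:
        return None        # no RV in front: the BV stays disconnected
    nearest = min(ahead, key=lambda m: m.rv_position - bv_position)
    return nearest.rv_channel

msgs = [NTM("rv-1", 850.0, 3), NTM("rv-2", 1200.0, 7), NTM("rv-3", 400.0, 5)]
assert tune_to_front_rv(600.0, msgs) == 3      # rv-1 is the closest RV ahead
assert tune_to_front_rv(1500.0, msgs) is None  # no RV ahead at all
```

The `None` branch mirrors the disadvantage noted below: with no relay vehicle around, a smaller vehicle has no provision to connect.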
Advantages and Disadvantages: The advantage of the VFHS scheme is that it transmits a cross-layer message, the NTM, which contains topology information such as position and channel; this is physical-layer as well as MAC-layer information. By receiving physical-layer information along with MAC-layer information, a vehicle can perform the L2 handoff very smoothly: on receiving an NTM it can set its frequency to the desired channel and perform the handoff. Secondly, there is the concept of relay vehicles, large vehicles with the capability of relaying and transmitting packets to and from smaller vehicles; a base station then only needs a direct connection with the relay vehicles instead of all vehicles, which lowers the handoff latency.
The disadvantage of this scheme is that if no relay vehicle is around, the smaller vehicles have no provision to connect to the base station. Also, the handoff procedure of the relay vehicles themselves is not clearly demonstrated.
Figure-3
VFHS architecture
A New Scheme of Global Mobility Management for Inter VANETs Handover of Vehicles in V2V/V2I Network
Environments[6]
The architecture of GMM is shown in Figure 4. It consists of the GVMM (Global Vehicle Mobility Management) and the LVMM (Local Vehicle Mobility Management). The GVMM stores the MAC address, care-of address (CoA), permanent IP address (PoA), local VANET ID (VID), ID of the V2V group (GID) and the IP of the LVMM. The MAC addresses of the vehicles that are entering the network area are forwarded to the LVMM by the local AP.
When a vehicle VC#1 enters a new AP area, the AP retrieves the vehicle's MAC address and sends it to the new LVMM in the form of an AR (Association Report) message. The new LVMM then makes an entry for the new vehicle in its L-ABT (Local Address Binding Table). It then sends an LR (Location Reply) message to the GVMM, and the GVMM updates its C-ABT (Central Address Binding Table) with the entry of the new vehicle. The GVMM then sends a Location Update message to the old LVMM, and sends a Location Reply Acknowledge message to the new LVMM, which is currently associated with the vehicle. The old LVMM then sends a Location Update Acknowledge to the GVMM. This completes the handover procedure.
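The handover bookkeeping above amounts to updating two binding tables, the new LVMM's L-ABT and the GVMM's C-ABT, and purging the stale entry at the old LVMM. A hypothetical sketch of that message sequence as plain dictionary updates (the table and field names are illustrative assumptions):

```python
# Hypothetical sketch of GMM handover bookkeeping:
# AR -> LR -> Location Update / Location Reply Ack -> Location Update Ack.

def gmm_handover(mac, new_lvmm, old_lvmm, gvmm):
    # 1. The local AP forwards the vehicle's MAC in an AR message;
    #    the new LVMM records the vehicle in its L-ABT.
    new_lvmm["l_abt"][mac] = {"associated": True}
    # 2. The new LVMM sends an LR message; the GVMM updates its C-ABT
    #    so the vehicle's global entry now points at the new LVMM.
    gvmm["c_abt"][mac] = {"lvmm": new_lvmm["id"]}
    # 3. The GVMM sends a Location Update to the old LVMM, which drops
    #    its stale entry and replies with a Location Update Acknowledge.
    old_lvmm["l_abt"].pop(mac, None)
    # 4. The GVMM sends a Location Reply Acknowledge to the new LVMM,
    #    completing the handover.
    return gvmm["c_abt"][mac]

gvmm = {"c_abt": {"aa:bb:cc:dd:ee:ff": {"lvmm": "lvmm-old"}}}
old = {"id": "lvmm-old", "l_abt": {"aa:bb:cc:dd:ee:ff": {"associated": True}}}
new = {"id": "lvmm-new", "l_abt": {}}

entry = gmm_handover("aa:bb:cc:dd:ee:ff", new, old, gvmm)
assert entry == {"lvmm": "lvmm-new"}
assert "aa:bb:cc:dd:ee:ff" not in old["l_abt"]
```

Keeping the C-ABT authoritative is the design point: only the GVMM needs to know which LVMM currently serves each vehicle.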
Advantages and Disadvantages: Its advantage is that it performs a fast handoff using L2 triggering. Its disadvantage is that it requires CoA configuration, which increases the handoff latency.
(a)
(b)
Figure-4
(a) GMM architecture (b) Message Flow Diagram showing handoff
A Proxy MIPv6 Handover Scheme for Vehicular Ad-hoc Networks [7]
In this scheme, the road has two lanes in each direction. The technologies used at the points of attachment are WiMAX and 3G/4G on RSUs. The LMA (Local Mobility Anchor) stores the vehicle's new location, and the Correspondent Node acts as an FTP server. For providing seamless mobility, MIPv6, FMIPv6 and HMIPv6 can be considered as host-based mobility protocols; PMIPv6 is the protocol used for network-based mobility management.
Here there are two entities. The first is the MAG (Mobile Access Gateway), whose job is to create a tunnel with the LMA; the MAG does the mobility-management signaling with the LMA for the MN (Mobile Node) attached to the network. The second, the LMA, acts as an anchor for the MN by providing a home network prefix (HNP) when the mobile node is outside its home network.
The scheme provides an Early Binding Registration prior to the handoff procedure. Each vehicle is first provided with a GPS to forward its current coordinates to its respective point of attachment. This helps in detecting whether or not the vehicle is leaving its coverage area. Its current position is compared with a pre-configured threshold, which varies with the velocity of the vehicle, i.e., it decreases as the velocity increases. So, by using the GPS coordinates, it is easier to detect in which direction the car is moving and also the next point of attachment.
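This trigger can be modelled as a remaining-distance threshold that shrinks as speed grows. The functional form and the constant k below are illustrative assumptions; the paper only states that the threshold decreases with increasing velocity:

```python
def handoff_threshold(base_m, velocity_mps, k=0.1):
    # Pre-configured base threshold, scaled down as velocity increases.
    # The 1/(1 + k*v) form is an assumed, illustrative choice.
    return base_m / (1.0 + k * velocity_mps)

def should_prepare_handoff(dist_to_edge_m, base_m, velocity_mps):
    # Compare the current position (distance to the coverage edge,
    # obtained from GPS) with the velocity-dependent threshold.
    return dist_to_edge_m <= handoff_threshold(base_m, velocity_mps)
```

For example, with a 100 m base threshold, a vehicle at 30 m/s triggers preparation within 25 m of the edge, while a slower vehicle at 10 m/s triggers within 50 m.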
An Information Request (IR) message is sent by the current PoA (Point of Attachment) to the P-MAG (Previous Mobile Access Gateway) to retrieve the information of the next PoA and the vehicle's home address. Each MAG maintains information about its neighboring MAGs in a specific table. It also maintains the pool of IP addresses, within a specific range, assigned to each MAG.
This helps the MAGs to send the IR message directly to the specific N-MAG (New Mobile Access Gateway). After receiving the IR message, the N-MAG selects an IP address from the IP address pool and sends an IR acknowledgement with the newly selected CoA of the vehicle back to the P-MAG. At the same time it also sends the LMA a Request for Binding Cache Entry (RBCE) message. On receiving the RBCE, the LMA updates its BCE table and replies to the N-MAG with a Proxy Binding Acknowledgement (PBA) containing the HNP of the vehicle, and then establishes a bi-directional tunnel with the N-MAG. In the meantime, the IRA message arrives at the vehicle via the currently associated point of attachment.
(a)
(b)
Figure-5
(a) EBR-PMIPv6 Scheme (b) Message Flow Diagram
So, the information relevant to the handoff is configured in advance, while the vehicle is still connected to its current PoA.
Advantages and Disadvantages: The advantage of this algorithm is that it uses a gateway to create a tunnel with the home agent, which allows packets to be forwarded from the P-MAG to the N-MAG, i.e., from the network with which the vehicle was previously associated to the network with which it is associated at present.
Comparison Table
Table-1
Handoff Execution Algorithms

Scheme: A Mobility Handover Scheme (MHVA)
Addressing: Vehicles are identified by their home IPv6 address instead of a CoA.
Layers involved in handover: The handover takes place in the network layer while the vehicle is still connected in the link layer.
Subnet/network-area detection: Uses the AoA method to detect the target subnet to which the vehicle is to be handed over.
Technology used: AoA to determine the NAAP (next associated AP); access routers (AR); access points (AP).

Scheme: Cross Layer Fast Handover Scheme (VFHS)
Addressing: Does not use any CoA; vehicles are identified by their physical (MAC) addresses.
Layers involved in handover: Uses only the physical and MAC layers to perform the handover procedure.
Subnet/network-area detection: Uses message broadcasts from the OSV (other side vehicle) to detect which RV the BV is to be associated with.
Technology used: NTM (network topology message), a cross-layer message comprising physical-layer as well as MAC-layer information; WiMAX MMR (Mobile Multi-Hop Relay); three types of vehicles: RV (relay vehicles), BV (broken vehicles), OSV (other side vehicles).

Scheme: Global Mobility Management for Inter-VANET Handover (GMM)
Addressing: Uses a CoA (Care-of Address) to identify vehicles in a subnet; when a vehicle enters a new subnet it is identified by a new CoA assigned to it.
Layers involved in handover: Uses Mobile IPv6, so the handover takes place in the network layer; the handover is performed after the vehicle enters the subnet.
Subnet/network-area detection: Handover takes place when a vehicle enters a network area covered by a PoA, which periodically broadcasts link-up triggers; the handover process starts when a vehicle picks up these triggers.
Technology used: GVMM (Global Vehicular Mobility Management); LVMM (Local Vehicular Mobility Management).

Scheme: A Proxy MIPv6 Handover Scheme for Vehicular Networks
Addressing: Uses a CoA to identify vehicles in a network area; the IP address is obtained from vehicles in the opposite direction.
Layers involved in handover: Involves IP addressing, so the handover takes place in layer 2 (data link layer) and layer 3 (network layer).
Technology used: MAGs (Mobile Access Gateways); LMA (Local Mobility Anchor); Correspondent Node (CN) acting as an FTP server.
Conclusion and Future Scope
The algorithms studied above show four different handoff execution techniques, where some schemes use a care-of address and some use the vehicle's own home address. The problem with a CoA is that it requires a DAD (Duplicate Address Detection) phase to detect whether two vehicles have the same IP address. The DAD phase causes almost 70% of the delay in handoff, so to obtain a time-efficient handoff process the DAD phase must be omitted.
One technique that does not use any CoA is MHVA. In addition, detecting the vehicle's movement is necessary, as is predicting the next associated access point, in order to initiate the handoff. A method mentioned in MHVA uses AoA (Angle of Arrival) to detect the orientation of the vehicle and of the next associated access point, which makes handoffs far more efficient. Also, in MHVA the handoff occurs in the network layer while the data link layer remains connected; this two-layer handoff proves much more efficient for seamless handoff. Its disadvantage is that when the number of vehicles increases, the handoff load becomes
quite high. A solution to this problem is to decrease the number of vehicles by allowing only a selected number of vehicles to connect to the base station.
The technique VFHS lacks a proper stationary infrastructure. VFHS uses relay vehicles to which all the small-scale vehicles are connected; but if a vehicle gets detached from a relay vehicle, or there is no relay vehicle around, there is no provision for those small vehicles to connect to the VANET. This can be overcome by introducing a proper stationary infrastructure, such as a base station, to which the smaller vehicles can connect when no relay vehicles are around. A new architecture can be formed using the infrastructure of MHVA [3] and the concept of relay vehicles from VFHS [5]: the access points, AR and AoA techniques are taken from MHVA, while relay vehicles are introduced so that smaller vehicles connect to them instead of to the access points. QoS can also be introduced into the handoff decision, with some parameters included or discarded based on the handoff scenario. This reduces the packet drop probability and thus conserves the power of mobile nodes.
References
1. Kang J., Chen Y., Yu R., Zhang X., Chen H. and Zhang L., Vertical handoff in vehicular heterogeneous networks using optimal stopping approach, In Communications and Networking in China (CHINACOM), 2013 8th International ICST Conference on, 534-539, IEEE (2013)
2. Kumaran U. and Shaji R.S., Vertical handover in vehicular ad-hoc network using multiple parameters, In Control, Instrumentation, Communication and Computational Technologies (ICCICCT), 2014 International Conference on, 1059-1064, IEEE (2014)
3. Wang X. and Qian H., A mobility handover scheme for IPv6-based vehicular ad hoc networks, Wireless Personal Communications, 70(4), 1841-1857 (2013)
4. Seow C.K. and Tan S.Y., Localization of omni-directional mobile device in multipath environments, Progress In Electromagnetics Research, 85, 323-348 (2008)
5. Chiu K.L., Hwang R.H. and Chen Y.S., A cross layer fast handover scheme in VANET, In Communications, 2009, ICC'09, IEEE International Conference on, 1-5, IEEE (2009)
6. Lee J.M., Yu M.J., Yoo Y.H. and Choi S.G., A new scheme of global mobility management for inter-VANETs handover of vehicles in V2V/V2I network environments, In Networked Computing and Advanced Information Management, 2008, NCM'08, Fourth International Conference on, 2, 114-119, IEEE (2008)
7. Moravejosharieh A. and Modares H., A Proxy MIPv6 handover scheme for vehicular ad-hoc networks, Wireless Personal Communications, 75(1), 609-626 (2014)
Energy Gain of Signal Wave and That of Idler Wave Due to Nonlinear
Parametric Interaction in Piezosemiconducting Medium: A Numerical
Approach
Pravat Kumar Mandal
Dept. of Maths., A.M. College, Purulia, W.B., INDIA
Abstract
The present paper is an attempt to investigate the nonlinear parametric interaction of longitudinal acoustic waves in n-type degenerate piezosemiconducting media. The nonlinearity arises from the nonlinear transport property of electrons in these media. The equations of mechanical motion and of electricity, supplemented by the basic equations of piezosemiconducting media, are used to solve the problem. The numerical analysis presented in this paper is facilitated by the assumption that the drift velocity equals the sound velocity. The gain of energy of the signal wave and that of the idler wave have been computed by a numerical approach for different propagation distances.
Keywords: Parametric amplification, Piezosemiconductor, Acoustic Wave.
Introduction
Since the discovery of helicon wave propagation in semiconductors, the interaction between helicons and the other modes of the propagating medium, acoustic as well as electromagnetic, has received considerable attention. There exists a vast literature on acoustic wave propagation, including review articles and monographs on the propagation characteristics, experimental properties and applications of these waves1-6. Most of the above-mentioned works deal with the linear behavior of these waves. However, in the last few decades, the nonlinear propagation characteristics of helicons and other electromagnetic as well as acoustic modes have become a lively subject of investigation in semiconductors as well as piezosemiconductors. One of the most important areas of study in acoustics is wave propagation in piezosemiconductors, mainly on account of the electromechanical coupling in these media. The possibility of acoustic wave amplification in both magnetised and unmagnetised piezoelectric semiconductors has been studied during the last few decades7-11. The cause of nonlinearity is attributed to the heating of the carriers by the pump. Kinetic theory of nonlinear wave interaction has been studied by M. Lazar et al.12. Modified interactions of longitudinal phonon-plasmon in magnetized piezoelectric semiconductors have been studied by M. Salimullah et al.13. Shukla et al.14 and Brodin et al.15 have studied nonlinear wave interactions in quantum magnetoplasma media. Parametric interactions in ion-implanted magnetized piezoelectric semiconductor plasmas have been analytically investigated by using the hydrodynamic model of semiconductor plasmas and a coupled-mode theory of interacting waves16. In recent years different aspects of linear and nonlinear propagation of electron plasma waves in quantum plasmas have been studied17. Mandal18 has studied the amplification of acoustohelicon waves in magnetised piezosemiconducting media. Ultrasonic wave instability in an n-type degenerate thermopiezo-semiconducting medium has been studied by P. Mandal19.
Most of these studies are analytical in nature. Motivated by them, and by the intense interest in the energy gain or loss of nonlinear acoustic waves in piezosemiconductors, in the present paper we attempt to study the three-wave nonlinear parametric interaction in these media by numerical simulation. The gain of energy of the signal wave and that of the idler wave have been computed for different propagation distances in an n-type degenerate piezosemiconducting medium.
Statement of the Problem and Basic Equations
We consider longitudinal wave propagation through an n-type degenerate piezosemiconducting medium subjected to a d.c. electric field. Our problem is to investigate, by numerical computation, the gain or loss of energy of waves in the case of three-wave nonlinear parametric interaction of acoustic waves. The basic equations of the problem are the equations of mechanical motion and of electricity, supplemented by the constitutive equations, vide Sinha and Gupta9.
The constitutive relations and basic equations used in this analysis are:
T = C₁S − e_pz E        (2.1)
D = εE + e_pz S        (2.2)
∂D/∂x = e n_s        (2.3)
e ∂n_s/∂t + ∂j/∂x = 0        (2.4)
j = |e|(n₀ + f n_s)μE − e D_n ∂(n₀ + f n_s)/∂x        (2.5)
ρ ∂²ξ/∂t² = ∂T/∂x        (2.6)
The notations used are as follows:
E – electric field
T – mechanical stress
D – electric displacement
S – strain
e_pz – piezoelectric constant
ε – dielectric permittivity at constant strain
C₁ – elastic constant at constant electric field
ρ – material density
ξ – mechanical displacement
n₀ – mean carrier density
n_s – perturbed carrier density
f – fraction of the space charge e n_s contributing to the conduction process
Using equations (2.2) to (2.5) we get

ε ∂²Ẽ/∂x∂t + e_pz ∂³ξ/∂x²∂t + σ ∂Ẽ/∂x + μf (∂/∂x)[{ε ∂Ẽ/∂x + e_pz ∂²ξ/∂x²}(E₀ + Ẽ)] − D_n f {ε ∂³Ẽ/∂x³ + e_pz ∂⁴ξ/∂x⁴} = 0        (2.7)
where E = E₀ + Ẽ, E₀ being the applied electric field, such that the drift velocity v_d (≡ −μE₀) is equal to the sound velocity u_s. Again, substituting (2.1) in (2.6), we get

ρ ∂²ξ/∂t² = C₁ ∂²ξ/∂x² − e_pz ∂Ẽ/∂x        (2.8)
Numerical Solution and Discussion
As usual, for the method of parametric interaction, we treat the interaction of waves by taking the displacement ξ and the perturbed electric field Ẽ for three-wave interaction as in9

ξ = u₁(x)e^{i(ω₁t − k₁x)} + u₂(x)e^{i(ω₂t − k₂x)} + u₃(x)e^{i(ω₃t − k₃x)} + c.c.
Ẽ = E₁(x)e^{i(ω₁t − k₁x)} + E₂(x)e^{i(ω₂t − k₂x)} + E₃(x)e^{i(ω₃t − k₃x)} + c.c.        (3.1)

where c.c. denotes the complex conjugate parts and ω₁, ω₂ and ω₃ are the frequencies of the signal wave, pump wave and idler wave respectively; they are related by the equation

ω₃ = ω₁ + ω₂        (3.2)

and the phase matching condition is

k₃ = k₁ + k₂        (3.3)
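With the wave parameters quoted in the paper's list of physical constants (ω₁ = 10⁴ s⁻¹, ω₂ = 1.1 × 10⁴ s⁻¹, ω₃ = 2.1 × 10⁴ s⁻¹; k₁ = 8.33333 m⁻¹, k₂ = 9.1666663 m⁻¹, k₃ = 17.499999 m⁻¹), conditions (3.2) and (3.3) can be checked numerically; the tolerance choices below are assumptions matching the precision quoted:

```python
# Numerical check of the frequency-matching (3.2) and phase-matching (3.3)
# conditions, using the wave parameters quoted in the paper.
w1, w2, w3 = 1.0e4, 1.1e4, 2.1e4            # signal, pump, idler (s^-1)
k1, k2, k3 = 8.33333, 9.1666663, 17.499999  # wavenumbers (m^-1)

freq_ok = abs(w3 - (w1 + w2)) <= 1e-9 * w3   # exact to rounding
phase_ok = abs(k3 - (k1 + k2)) <= 1e-4 * k3  # holds to the quoted precision
```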
Substituting (3.1) in (2.7) and (2.8) and equating the terms of equal frequencies, we get the following amplitude equations:

∂u₁/∂x = (e_pz/2C₁) E₁ + i (e_pz/2C₁k₁) ∂E₁/∂x        (3.4)
∂u₂/∂x = (e_pz/2C₁) E₂ + i (e_pz/2C₁k₂) ∂E₂/∂x        (3.5)
∂u₃/∂x = (e_pz/2C₁) E₃ + i (e_pz/2C₁k₃) ∂E₃/∂x        (3.6)
L₁ ∂E₁/∂x = M₁u₁ + N₁ ∂u₁/∂x + P₁E₁ + Q₁u₂*E₃ + R₁u₃E₂* + S₁E₃E₂*        (3.7)
L₂ ∂E₂/∂x = M₂u₂ + N₂ ∂u₂/∂x + P₂E₂ + Q₂u₁*E₃ + R₂u₃E₁* + S₂E₃E₁*        (3.8)
L₃ ∂E₃/∂x + M₃u₃ + N₃ ∂u₃/∂x + P₃E₃ = 0        (3.9)
where, with f ≈ 1,

L₁ = e_pz ω₁k₁ + 4D_n ε e_pz k₁³ i,  M₁ = μD_n e_pz k₁⁴,
N₁ = 3D_n ε k₁³ + σ − ε i ω₁,  P₁ = −σ i k₁ − D_n ε k₁³ i,
Q₁ = με e_pz i (k₁k₂² − k₂³),  R₁ = −με e_pz i (k₂k₃² − k₃³),  S₁ = 2με(k₂k₃ − k₂² − k₃²),
L₂ = e_pz ω₂k₂ + 4D_n ε e_pz k₂³ i,  M₂ = μD_n e_pz k₂⁴,
N₂ = 3D_n ε k₂³ + σ − ε i ω₂,  P₂ = −σ i k₂ − D_n ε k₂³ i,
Q₂ = με e_pz i k₁³,  R₂ = −με e_pz i (k₁k₃² − k₃³),  S₂ = 2με(k₁k₃ − k₁² − k₃²),
L₃ = e_pz ω₃k₃,  M₃ = iε e_pz ω₃k₃ + iω₃ e_pz k₃²,  N₃ = σ − ε i ω₃,  P₃ = −σ i k₃.
In deducing the amplitude equations, we have assumed that the amplitudes vary slowly with x, so that second- and higher-order terms could be neglected. We have also assumed the amplitude of the initial pump wave to be very large compared with that of the signal wave, so that the pump energy loss due to the nonlinear interaction is negligible and the shift in energy is caused solely by the linear interaction with the medium. Again, since the drift velocity is exactly equal to the sound velocity, and since there is no linear gain or loss when the drift velocity equals the sound velocity (vide Sinha and Gupta9), we can take the amplitude of the pump wave u₃(x) as

u₃(x) = u₃(0) e^{iνx}        (3.10)

where ν (a real quantity) can easily be obtained, in terms of K², k₃, ω₃, ω_c and ω_D, in the form (3.11). Now, by equation (3.6), we can express E₃(x) in terms of u₃(0) as

E₃(x) = [2iC₁k₃ν / {2e_pz(k₃ − ν)}] u₃(0) e^{iνx}        (3.12)
To facilitate the solution we express the amplitudes u₁, E₁, u₂, E₂ through their real and imaginary parts a, b, c, d, p, q, r, s,        (3.13)
where a, b, c, d, p, q, r, s are functions of x only.
Substituting (3.10), (3.12) and (3.13) in (3.4), (3.5), (3.7) and (3.8) and equating real and imaginary parts, we ultimately get eight first-order nonlinear differential equations involving a, b, c, d, p, q, r, s:

∂a/∂x = a − b − c + (−P cos νx + Q sin νx)p − (P sin νx + Q cos νx)q + (−R sin νx + S cos νx)r + (S cos νx + R sin νx)s        (3.14a)
∂b/∂x = a + b + c + (P cos νx + Q sin νx)q − (P sin νx + Q cos νx)p + (S sin νx + R cos νx)r + (−S cos νx + R sin νx)s        (3.14b)
∂c/∂x = a + b + c + (P₁ cos νx + Q₁ sin νx)p − (P₁ sin νx + Q₁ cos νx)q + (R₁ sin νx + S₁ cos νx)r + (−R₁ cos νx + S₁ sin νx)s        (3.14c)
∂d/∂x = a + b + c + (P₁ sin νx + Q₁ cos νx)p + (P₁ cos νx + Q₁ sin νx)q + (R₁ cos νx + S₁ sin νx)r + (S₁ cos νx + R₁ sin νx)s        (3.14d)
∂p/∂x = (A₂ cos νx − B₂ sin νx)a − (A₂ sin νx + B₂ cos νx)b + (C₂ sin νx − D₂ cos νx)c + (C₂ cos νx − D₂ sin νx)d − P₂p − Q₂q − R₂r        (3.14e)
∂q/∂x = −(A₂ sin νx − B₂ cos νx)a + (A₂ cos νx − B₂ sin νx)b + (C₂ cos νx − D₂ sin νx)c − (C₂ sin νx − D₂ cos νx)d − Q₂p − P₂q − R₂s        (3.14f)
∂r/∂x = −(A₃ sin νx − B₃ cos νx)a + (A₃ cos νx − B₃ sin νx)b + (C₃ cos νx − D₃ sin νx)c − (C₃ sin νx − D₃ cos νx)d − P₃p + Q₃q − R₃s        (3.14g)
∂s/∂x = −(A₃ cos νx − B₃ sin νx)a − (A₃ sin νx + B₃ cos νx)b − (C₃ sin νx − D₃ cos νx)c + (C₃ cos νx − D₃ sin νx)d + Q₃p + P₃q + R₃r        (3.14h)

where
= 0.870418, = 0.3207161, = 0.890922,
P = 0.281840, Q = 0.1796191, R = 0.78493401, S = 0.5031092,
= 0.5087721, = 0.6047290, = 0.4850473, = 0.463357,
P₁ = 0.302290, Q₁ = 0.890466, R₁ = 0.514904, S₁ = 0.8701092,
B₂ = 0.7911076, C₂ = 0.18717948, D₂ = 0.5460331,
P₂ = 0.890466, Q₂ = 0.8479031, R₂ = 0.467033,
A₃ = 0.2119437, B₃ = 0.6560930, C₃ = 0.8710622, D₃ = 0.5330725,
Q₃ = 0.6747302, P₃ = 0.2112307, R₃ = 0.890741.
The values of the physical constants (for CdS material) used to deduce the above system of equations (3.14a) to (3.14h) are given below:
μ = 3 × 10⁻² m²/V·s, C₁ = 9.36 × 10¹⁰ N/m², ε = 4.8 × 10⁻¹¹ C/V·m, n₀ = 10¹⁹ m⁻³, u_s = 1.1992 × 10³ m/s, σ = 4.8 × 10⁻² C/(m·V·s), ω₁ = 1 × 10⁴ s⁻¹, ω₂ = 1.1 × 10⁴ s⁻¹, ω₃ = 2.1 × 10⁴ s⁻¹, k₁ = 8.33333 m⁻¹, k₂ = 9.1666663 m⁻¹, k₃ = 17.499999 m⁻¹, ω_D = 1.8580645 × 10⁹ s⁻¹, T = 300 K, e_pz = 0.44 A·s/m², D_n = 0.775 × 10⁻³ m²/s, ω_c = 10⁹ s⁻¹, K² = 0.4309116 × 10⁻¹, ν = 0.3856223 × 10⁻¹¹ m⁻¹.
To solve the above-mentioned system of differential equations (3.14a) to (3.14h) numerically, we use the Runge-Kutta method; the program is developed in Fortran 77 and run on a PC, taking
a(0) = 0.00001, b(0) = 0.0, c(0) = 0.0, d(0) = 0.0, p(0) = 0.0, q(0) = 0.0, r(0) = 0.0, s(0) = 0.0, u₃(0) = 0.001.
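The Fortran 77 program itself is not reproduced in the paper; a minimal sketch of the same classical fourth-order Runge-Kutta scheme, applied to a generic system y' = f(x, y) such as (3.14a) to (3.14h), is:

```python
def rk4_step(f, x, y, h):
    # One classical 4th-order Runge-Kutta step for the system y' = f(x, y),
    # with y a list of dependent variables (here a, b, c, d, p, q, r, s).
    k1 = f(x, y)
    k2 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def integrate(f, y0, x0, x_end, h):
    # March from x0 to x_end with a fixed step h (0.001 in the paper).
    x, y = x0, list(y0)
    n = round((x_end - x0) / h)
    for i in range(n):
        y = rk4_step(f, x0 + i * h, y, h)
    return y
```

With f built from the right-hand sides of (3.14a) to (3.14h) and the initial values above, `integrate(f, y0, 0.0, 0.05, 0.001)` reproduces the marching scheme used in the paper.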
The energy of the wave component at each frequency ωᵢ is expressed in terms of the strain amplitude of that wave. From these expressions the relative change in energy of the signal wave over a propagation path x, the energy of the idler wave, and the energy of the signal wave can all be computed; the ratio of the energy of the idler wave to that of the signal wave carries the constant factor 1.21.
The energy of the signal wave, that of the idler wave and the ratio of the energies have been computed over a propagation path of 0.05 m with a step length of 0.001 m; the values are shown at intervals of 0.005 m in the following table.

Table-1
Propagation path (x) in metre | Relative change in energy of the signal wave | Energy of the idler wave | Ratio of energy of the idler wave to that of the signal wave
1.00E-03 | 1.69E-12 | 1.25E-14 | 9.56E-10
5.00E-03 | 2.39E-11 | 3.01E-09 | 2.32E-08
1.00E-02 | 1.32E-09 | 1.24E-08 | 9.54E-08
1.50E-02 | 1.46E-08 | 2.79E-08 | 2.09E-07
2.00E-02 | 8.76E-08 | 4.98E-08 | 3.69E-07
2.50E-02 | 3.27E-07 | 7.46E-08 | 5.98E-07
3.00E-02 | 1.12E-06 | 1.18E-07 | 8.79E-07
3.50E-02 | 2.65E-06 | 1.49E-07 | 1.09E-06
4.00E-02 | 6.13E-06 | 1.98E-07 | 1.37E-06
4.50E-02 | 1.21E-05 | 2.42E-07 | 1.89E-06
5.00E-02 | 2.38E-05 | 3.05E-07 | 2.38E-06
Figure-1
Relative change in energy of the signal wave
Figure-2
Energy of the idler wave
Figure-3
Ratio of energy of the idler wave to that of signal wave
Conclusion
In the present analysis, the gain of energy of the signal wave and that of the idler wave due to the nonlinear parametric interaction of acoustic waves in a piezosemiconducting medium have been studied by numerical computation in the presence of a strong pump wave. For the numerical simulation we reduce the coupled-mode equations to eight simultaneous first-order ordinary differential equations, (3.14a) to (3.14h), and then solve them by the Runge-Kutta algorithm; the results are shown in Table-1. We use MATLAB software to construct Figures 1, 2 and 3. From Figures 1 and 2 it is clear that the energy of the signal wave as well as that of the idler wave increases with increasing propagation distance. It is also evident from Figures 1 to 3 that the increase is more rapid in the later stage of propagation in both cases.
Sometimes it is very difficult to study nonlinear interaction through an analytical approach, owing to the complicated form of the nonlinear differential equations, and analytical results can only be obtained after imposing some conditions. This sort of stipulation is not required when the problem is studied by a numerical approach.
References
1. Pozhela J., Plasma and Current Instabilities in Semiconductors, Pergamon Press, Oxford/London (1981)
2. Ghosh S. and Saxena R.B., Raman instability in n-type piezoelectric semiconducting plasmas, J. Appl. Phys., 58, 3133-3140 (1985)
3. Ghosh S. and Khan S., Parametric instability in n-type piezoelectric semiconductor plasmas, Ultrasonics, 24, 93-99 (1986)
4. Misra K.D. and Pandey M.K., Acoustohelicon amplification in a piezoelectric semiconducting plasma, Phys. Status Solidi (b), 182, 153 (1994)
5. Huang W. and Ding W.X., Estimation of Lyapunov-exponent spectrum of plasma chaos, Phys. Rev. E, 50, 1062 (1994)
6. Shah H.A., Durrani I.U.R. and Abdullah T., Nonlinear helicon-wave propagation in a layered medium, Phys. Rev. B, 47, 1980 (1993)
7. Ghosh S. and Sinha D.K., Amplification of acoustic waves in piezoelectric semiconductors: effect of nonlinearity in electron effective mass and collision frequency, J. Appl. Phys., 60, 267 (1986)
8. Ghosh K.K. and Paul S.N., Acousto-helicon interaction in narrow-gap semiconductors, Phys. Status Solidi (b), 197, 441 (1996)
9. Sinha D.K. and Gupta M., Nonlinear parametric interaction in a piezosemiconducting medium, Phys. Status Solidi (b), 107, 469 (1981)
10. Ghosh S. and Khan S., Parametric instability in n-type piezoelectric semiconductor plasmas, Ultrasonics, 24, 63 (1986)
11. Jat K.L., Neogi A. and Ghosh S., Parametric amplification in a magnetized non-degenerate plasma, Acta Phys. Polonica A, 79 (1991)
12. Lazar M. and Merches I., Kinetic theory of nonlinear waves interaction in relativistic plasmas, Phys. Letters A, 313 (2003)
13. Salimullah M. et al., Modified interactions of longitudinal phonon-plasmon in magnetized piezoelectric semiconductor plasmas, Physica B, 351, 163-170 (2004)
14. Shukla P.K., Ali S. and Stenflo M., Nonlinear wave interactions in quantum magnetoplasmas, Phys. of Plasmas, 13, 11 (2006)
15. Brodin G., Marklund M., Stenflo L. and Shukla P.K., Dispersion relation for electromagnetic wave propagation in a strongly magnetized plasma, New J. of Physics, 8 (2006)
16. Jadav N., Ghosh S., Thakur P., Jamil M. and Salimullah M., Parametric interactions in ion-implanted piezoelectric semiconductor plasmas, A.J.S.E., 231-240 (2010)
17. Chandra S., Paul S.N. and Ghosh B., Linear and non-linear propagation of electron plasma waves in quantum plasma, Ind. J. of Pure and Appl. Phys., 50, 314-319 (2012)
18. Mandal P.K., Amplification of acousto-helicon waves in magnetised nondegenerate piezoelectric semiconductor: a numerical approach, Int. J. of Inf. and Comp. Sc. (IJICS), ISSN 0972-1347, 16(2), 13-20 (2013)
19. Mandal P.K., Ultrasonic wave instability in a n-type degenerate thermopiezo-semiconducting medium: a numerical approach, Acta Ciencia Indica, Mathematics, ISSN 0970-0455, XL.M(3), 299-307 (2014)
Skin Lesion Analysis and Treatment Monitoring Using Image Processing
Technique: A Review
Ishita Bhakta¹ and Santanu Phadikar²
¹Department of Information Technology, West Bengal University of Technology
²Department of Computer Science, West Bengal University of Technology
Abstract
An important application of digital image processing technology is found in the medical field. Nowadays a new technology called medical imaging is gaining popularity as a combination of digital image processing and its medical applications. Medical imaging helps doctors in disease diagnosis by seeing inside the human body without surgery. It also provides medical treatment to patients in remote areas where doctors are not available. It combines biology and image processing with enhanced technology and improves healthcare monitoring. Skin disease is very common in today's polluted world. With the help of medical imaging technology, automated skin disease detection systems can be designed to help patients and doctors obtain quality service. This paper provides an overview of the different image processing techniques which have already been used to diagnose specific skin diseases automatically.
Keywords: Image segmentation, hypopigmented skin lesion, hyperpigmented skin lesion, Feature extraction.
Introduction
Medical imaging is a technique and process for creating visual representations of the internal structures of a body for clinical analysis and medical intervention, to diagnose and treat disease. The technical field in which an epiluminescence microscope is used to view skin lesions under magnification in vivo is called dermoscopy. It is mainly useful in the early detection of melanomas. Dermoscopic images can be taken with a digital camera and examined to extract information, which is a part of digital imaging and tele-dermatology. Automation of skin disease diagnosis is an interesting field of medical imaging technology, because individuals face many difficulties in accessing health care: shortage of specialists, uneven geographical distribution of doctors and long waiting times. An automated treatment procedure helps health professionals and patients to take early steps against disease. However, noise, the variability of biological tissues, imaging-system anisotropy, etc., create difficulties in the automated analysis of medical images. So far, medical imaging has contributed immensely towards advancing medical procedures. The fact that the interpretation and analysis of medical imaging results are still heavily dependent on medical experts (whose availability is low or non-existent) is a serious concern for developing and underserved regions (especially rural settings). An approach is needed to minimize this dependency and also to limit the probable bias of medical personnel in the analysis of a medical image result.
This paper presents a review of research work done in the field of computerized analysis of dermatological images with computer-aided systems. The diagnostic accuracy of dermoscopy may be lower in the hands of inexperienced dermatologists; this subjective variation of visual interpretation can be minimized by computerized image analysis techniques. The steps involved in an automated skin disease recognition system are described in Figure-1.
Image Acquisition → Image Enhancement → Image Segmentation → Feature Extraction → Classification
Figure-1
Steps of automated skin disease recognition system
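The five stages of Figure-1 can be composed as a simple pipeline; the function names below are placeholders, since the concrete algorithm at each stage varies across the surveyed papers:

```python
def recognize(image, enhance, segment, extract_features, classify):
    # Chain the five stages of Figure-1; each stage is passed in as a
    # callable so that different surveyed methods can be plugged in.
    enhanced = enhance(image)            # image enhancement
    lesion = segment(enhanced)           # image segmentation
    features = extract_features(lesion)  # feature extraction
    return classify(features)            # classification
```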
There is a strong possibility of a certain degree of ambiguity regarding terminology in an interdisciplinary subject like dermatology and computer vision. To overcome these ambiguities, detailed guidance in the relevant medical material and computer vision is required. This paper provides a detailed study, necessary information and relevant references on the specific parts of the image processing techniques used to detect skin disease.
This paper is structured as follows. Section 1 illustrates normal human skin anatomy and skin diseases. Section 2 gives an overview of methods for skin disease detection using medical imaging techniques. Finally, Section 3 concludes the paper.
Normal Human Skin Anatomy and Skin Diseases
The human skin is the outer covering of the body. It is composed of multiple layers of ectodermic tissue which guard the internal organs.
Figure-2
Anatomy of the skin, showing the epidermis, the dermis, and subcutaneous (hypodermic) tissue
(copyright 2008 by Terese Winslow)
There are two main layers in human skin. These are epidermis and dermis (Figure-2). Epidermis provides protection against any
external aggressions like injuries, ultraviolet radiation, infections and water loss. It is a layered stratified squamous epithelium like
tissue. It consists of four different types of cells1. These are –
Keratinocytes –In epidermis major portion (95%) of cells are represented by Keratinocytes. These are responsible for continuous
renewal of human skin. They divide and differentiate basal layer to the stratum corneum (the horny layer). Keratinocytes are
produced by division in the basal layer and move to the next layers transforming their morphology and biochemistry called
differentiation. The outer most layer of the epidermis is called Corneocytes. This layer is created as result of this differentiation and
transformation. These are the flattened cells filled with keratin without nuclei. Atthe end of this differentiation process, the
corneocytes lose its cohesion. As a result they separate from the inner surface by the process called desquamation.
Melanocytes: It is the dendritic cells which found in the basal layer of the epidermis. They are filled with melanin pigment,
surrounding keratinocytes and it gives the color to skin and hair.
Langerhans cells: Its function is to identify foreign antigens that have entered into the epidermis and destroy it by phagocytosis.
Merkel cells: These probably originate from keratinocytes and act as touch receptors.
The dermis is the inner skin layer, composed of collagen and elastic tissue. It has two sub-layers: the reticular dermis (thick
layer) and the papillary dermis (thin layer). The papillary dermis serves as “glue” that holds the dermis and the epidermis together.
The reticular dermis contains blood vessels, lymphatic channels, nerve endings, hair follicles and sweat glands. It supplies nutrition
and energy to the epidermis, and it also plays an important role in thermoregulation, healing, and the sensation of touch, pain,
pressure and temperature.
International Science Congress Association
100
Proceeding of Recent Trends in Computations and Mathematical Analysis in Engineering and Sciences-2015
Ranchi, Jharkhand, India 20th -21st Nov 2015
Figure-3
Hypo-pigmented skin
Figure-4
Hyper-pigmented skin lesion
There are two broad categories of skin diseases: hypo-pigmented and hyper-pigmented. In pathology the term
skin lesion is used instead of skin disease. A hypo-pigmented skin lesion means loss of skin color, and a hyper-pigmented skin lesion
means darkening of an area of skin. Some examples of hypo-pigmented skin lesions are given in Figure-3, and some examples of
hyper-pigmented skin lesions are given in Figure-4.
Methods for Skin Disease Detection
This section gives an overview of the different medical imaging techniques that have been used so far to detect skin diseases.
Image acquisition: This is the process of transforming illumination energy into a voltage waveform through a combination of input
electrical power and sensor material. Digitization is then performed by sampling and quantizing the voltage waveform to generate a
digital image. Image acquisition can be done in the following three ways:
1. Using a single sensor – used in photodiodes.
2. Using sensor strips or line sensors – used in scanners.
3. Using an array sensor – used in digital cameras.
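The sampling-and-quantization step above can be illustrated with a short NumPy sketch. This is only an illustration: the number of gray levels and the sensor's voltage range are assumed parameters, not tied to any particular acquisition device.

```python
import numpy as np

def quantize(voltage, levels=256, v_min=0.0, v_max=1.0):
    """Quantize a sampled voltage waveform into discrete gray levels."""
    # Clip to the sensor's dynamic range, then map to codes 0..levels-1.
    v = np.clip(voltage, v_min, v_max)
    codes = np.round((v - v_min) / (v_max - v_min) * (levels - 1))
    return codes.astype(np.uint8)

# A sampled 1-D "scan line": continuous voltages become 8-bit pixel values.
samples = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(quantize(samples).tolist())  # [0, 64, 128, 191, 255]
```

Sampling fixes where the waveform is measured; quantization (the rounding above) fixes how finely each measurement is represented.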
Image Enhancement: Images are acquired with different illumination sources and devices, which creates difficulties during
segmentation; low-contrast images make accurate border detection difficult. Some pre-processing steps are therefore required to
enhance the color information and contrast of the image. This enhancement improves the performance of lesion
segmentation algorithms. The most important image enhancement operation for lesion diagnosis is color correction or calibration,
which recovers the real colors of a photographed lesion and is a reliable way to extract color information in both manual and
automated systems. Recent studies give special emphasis to color correction of JPEG images obtained with low-cost digital
cameras. Other image enhancement operations are illumination correction, contrast enhancement and edge enhancement, as well as
the Karhunen-Loève Transform (KLT), also known as the Hotelling Transform or Principal Component
Analysis (PCA). An adaptive image enhancement method for contrast stretching of dermatological images in the wavelet domain is
proposed in2. An automatic color equalization method is used for low-contrast images in3. A segmentation accuracy of 86.07% is
achieved for RGB images when the images are normalized with an adaptive light compensation technique4.
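As a minimal illustration of contrast enhancement, the sketch below performs percentile-based contrast stretching. It is a generic example, not the wavelet-domain or color-calibration methods cited above, and the percentile cut-offs are assumed parameters.

```python
import numpy as np

def stretch_contrast(img, lo_pct=2, hi_pct=98):
    """Percentile-based contrast stretching: map [p_lo, p_hi] to [0, 255]."""
    p_lo, p_hi = np.percentile(img, (lo_pct, hi_pct))
    stretched = (img.astype(float) - p_lo) / max(p_hi - p_lo, 1e-9)
    return np.clip(stretched * 255, 0, 255).astype(np.uint8)

# A low-contrast image occupying only gray levels 100-150 is spread
# across the full 0-255 range, making lesion borders easier to detect.
low_contrast = np.repeat(np.arange(100, 151), 10).astype(np.uint8)
enhanced = stretch_contrast(low_contrast)
```

Clipping at the 2nd and 98th percentiles rather than the raw minimum and maximum makes the stretch robust to a few outlier pixels.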
Image Segmentation: Image segmentation plays an important role in an automated skin disease detection system. Segmentation is
required to detect the skin lesion border accurately and separate the lesion area from the normal skin area; feature extraction is done
only after segmentation. There are mainly three types of segmentation methods in image processing.
Edge-based segmentation – boundary detection based on local discontinuities in intensity, performed with edge
detection algorithms. The gradient vector, based on the first-order derivative, and the Laplacian detector, based on the second-order
derivative, are both used for edge detection.
Gradient vector flow (GVF) and the adaptive snake (AS) are two edge-based segmentation techniques used for melanoma
detection; of the two, AS provides the better performance5 for melanoma skin disease analysis.
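The first-order (gradient) edge response mentioned above can be sketched with the classical 3x3 Sobel kernels. This is an illustrative implementation, not the GVF or AS methods from the cited work; the toy step-edge image is an assumption for demonstration.

```python
import numpy as np

def sobel_gradient_magnitude(img):
    """First-order edge response: gradient magnitude from 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # d/dx
    ky = kx.T                                                   # d/dy
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):               # sliding 3x3 window, no padding
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)              # per-pixel gradient magnitude

# A vertical step edge: the magnitude peaks along the boundary columns.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_gradient_magnitude(img)
```

A Laplacian detector would instead respond to the second derivative, giving zero-crossings at the edge rather than a magnitude peak.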
Region-based segmentation – partitioning an image into regions that are similar according to a set of predefined criteria. Region
growing and region split-and-merge are two region-based procedures.
Three region-based algorithms – the level set method of Chan et al. (C-LS), the expectation-maximization level set (EM-LS), and a
fuzzy-based split-and-merge algorithm (FBSM) – are compared on a data set of 100 melanoma images5.
Celebi et al. used a modified unsupervised region-based segmentation algorithm (JSEG) for melanoma skin disease6. The method
requires less than one minute on a Pentium IV 1.8 GHz computer to segment an image of 768×512 pixels. This border detection
method may not perform well on images with a significant amount of hair, and it does not detect regions with significant
coloring inside the lesion.
K-means clustering is used for segmentation of cancerous skin diseases7. Fuzzy C-Means (FCM) and Improved Fuzzy C-Means
(IFCM) are also used for segmentation of brain tumor and breast cancer cells. FCM is very sensitive to noise; IFCM was proposed
to overcome this drawback.
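The K-means idea can be sketched on pixel intensities as follows. This is a toy 1-D example under assumed values; real lesion segmentation would cluster color vectors and post-process the resulting label map.

```python
import numpy as np

def kmeans_segment(pixels, k=2, iters=20, seed=0):
    """Cluster 1-D pixel intensities with k-means; returns a label per pixel."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(pixels, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center.
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # Recompute each center as the mean of its assigned pixels.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return labels, centers

# Dark "lesion" pixels (~40) vs bright "skin" pixels (~200).
pixels = np.array([35, 40, 45, 38, 195, 200, 205, 198], float)
labels, centers = kmeans_segment(pixels)
```

FCM generalizes this by giving every pixel a fractional membership in each cluster instead of a hard label, which is what makes it sensitive to noisy pixels.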
In8 an evolutionary strategy (ES) based segmentation algorithm was used to identify the lesion area; the method is useful for
detecting malignant melanomas. The lesion was segmented by an ellipsoid optimized by the ES algorithm with respect to a
defined objective function.
Segmentation using thresholding – dividing an image based on a threshold. There are various thresholding methods: Otsu
thresholding, global thresholding, optimum thresholding, entropy-based thresholding, etc.
Segmentation based on the Laplacian and Otsu thresholding is used to segment cell nuclei in skin tumor images9 in order to
differentiate malignant and benign skin tumors. This segmentation method does not work well for indistinct or
low-chromatin nuclei edges.
Another threshold-based segmentation technique used for melanoma diagnosis is adaptive thresholding (AT)5.
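Otsu's method, the first of the thresholding techniques listed above, can be sketched as follows. This is a didactic implementation on an assumed toy bimodal image, not the cited nucleus-segmentation pipeline.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = np.histogram(img, bins=bins, range=(0, bins))[0]
    p = hist / hist.sum()                 # gray-level probabilities
    mids = np.arange(bins)
    best_t, best_var = 0, 0.0
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum() # class weights below/above t
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (mids[:t] * p[:t]).sum() / w0
        mu1 = (mids[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Bimodal toy image: dark "lesion" pixels (~50) on bright "skin" (~200).
img = np.concatenate([np.full(100, 50), np.full(100, 200)])
t = otsu_threshold(img)   # threshold falls between the two modes
```

Because the method only inspects the histogram, it is fast, but it assumes the lesion and skin populations form reasonably separated modes.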
Feature extraction: Feature extraction is a crucial step in an automated skin disease analysis system. Most research
work concentrates on segmentation techniques to detect melanoma; although the accuracy of an automated skin disease detection
system ultimately depends on feature extraction and classification, there is very little work on feature extraction techniques for skin
disease detection. Some of the feature extraction techniques used to extract specific global and local features of skin diseases are
discussed below.
In10 color and texture features are extracted from the lesion area. Four color features – mean, variance, standard deviation and
skewness – are extracted from the RGB, HSV and YCbCr color spaces. Texture features are extracted from the gray-level
co-occurrence matrix (GLCM); GLCM is an effective method to evaluate the similarity of rock textures but is not effective when the
texture is heterogeneous. Principal Component Analysis (PCA), commonly used as a cluster-analysis tool in microarray
research, is used to extract RGB color features of psoriasis image lesions12. PCA can successfully discriminate plaque psoriasis
from other psoriasis-group diseases. Independent Component Analysis (ICA), a more recent topic in medical signal processing, is
recommended for future work on psoriasis-group disease analysis. Border irregularity of a skin lesion is measured using the
compactness index, edge abruptness and fractal dimension13; border irregularity is used for melanoma skin disease detection. An
ellipse-fitting algorithm is used to extract and measure the
characteristics of melanoma according to the ABCDE rule14. Psoriasis is an immune-mediated skin disease; it cannot be
cured completely, but its growth can be controlled under medication. In15 psoriasis is analyzed by extracting color
features. There are mainly four types of psoriasis – guttate, nail, plaque and pustular. The paper diagnoses the type of
psoriasis lesion based on color histogram features: first the RGB color space of an image is converted to HSV color space, then the
feature extraction process is applied.
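The four color moments named above (mean, variance, standard deviation, skewness) can be computed per channel as in this sketch. The patch contents and the channel-last layout are illustrative assumptions, not data from the cited studies.

```python
import numpy as np

def color_moments(img):
    """Per-channel mean, variance, std and skewness of an H x W x C image."""
    feats = []
    for c in range(img.shape[2]):
        ch = img[..., c].astype(float).ravel()
        mu = ch.mean()
        var = ch.var()
        std = ch.std()
        # Skewness: third standardized moment (0 for a symmetric channel).
        skew = 0.0 if std == 0 else ((ch - mu) ** 3).mean() / std ** 3
        feats.extend([mu, var, std, skew])
    return np.array(feats)

# A 4x4 RGB patch yields 12 features (4 moments x 3 channels) per region;
# the same computation can be repeated in HSV or YCbCr space.
rng = np.random.default_rng(1)
patch = rng.integers(0, 256, size=(4, 4, 3))
features = color_moments(patch)
```

These low-order moments summarize the color distribution compactly, which is why they appear alongside histogram features in the cited work.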
Classification: The final step of an automated skin disease recognition system is classification. The output of lesion classification
can be binary, ternary or n-ary, depending on how many skin diseases the system identifies. After feature extraction, skin
lesions are classified according to the feature descriptors. Performance depends on the extracted feature
descriptors, the data set used and the chosen classifier, so comparisons of classification approaches must be performed on the same
dataset with the same set of descriptors. The most widely used classifiers are ANN, SVM, decision trees, k-NN, Bayesian classifiers
and fuzzy logic. Among these, decision trees are not suitable for an automated system since they require user
intervention.
In10 the performance of SVM and k-NN classifiers is compared on a dataset of 726 samples collected from 141 images with 5 different
types of diseases; about 46.71% accuracy is achieved with the SVM classifier and 34% with k-NN. A multilayer
perceptron classifier tested on 180 images of different types of dermatitis, eczema and urticaria11 achieves 96.6%
accuracy.
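A minimal k-NN classifier of the kind compared in10 can be sketched as follows. The 2-D feature vectors shown are hypothetical toy values, not samples from the cited dataset.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Label each test vector by majority vote among its k nearest
    training feature vectors (Euclidean distance)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)       # distances to all
        nearest = y_train[np.argsort(d)[:k]]          # labels of k nearest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])         # majority vote
    return np.array(preds)

# Toy 2-D feature vectors (e.g. a color moment and a border-irregularity
# score) for two lesion classes; new samples are labeled by neighbors.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.15],
              [0.9, 0.8], [0.8, 0.9], [0.85, 0.85]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([[0.2, 0.2], [0.8, 0.8]])))  # [0 1]
```

Unlike decision trees, k-NN needs no user intervention at prediction time, though its accuracy is sensitive to feature scaling and the choice of k.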
Conclusion
Currently there is no system for detecting and monitoring all types of hypo- and hyper-pigmented skin lesions. Such a system would
be very useful to patients in remote areas and to health professionals, and would make skin disease treatment easier. Only border
detection, segmentation and feature extraction for benign and malignant tumors have been done in the literature, and very few works
address feature extraction for other skin diseases. This paper carried out a detailed study of the different medical imaging techniques
used in the literature for automated skin disease systems.
References
1. Korotkov Konstantin and Rafael Garcia, Computerized analysis of pigmented skin lesions: a review, Artificial Intelligence in Medicine, 56.2, 69-90 (2012)
2. Jung Cláudio R. and Jacob Scharcanski, Sharpening dermatological color images in the wavelet domain, IEEE Journal of Selected Topics in Signal Processing, 3.1, 4-13 (2009)
3. Schaefer Gerald, et al., Colour and contrast enhancement for improved skin lesion segmentation, Computerized Medical Imaging and Graphics, 35.2, 99-104 (2011)
4. Ch'ng Yau Kwang, et al., A two level k-means segmentation technique for eczema skin lesion segmentation using class specific criteria, Biomedical Engineering and Sciences (IECBES), 2014 IEEE Conference on, IEEE (2014)
5. Silveira Margarida, et al., Comparison of segmentation methods for melanoma diagnosis in dermoscopy images, IEEE Journal of Selected Topics in Signal Processing, 3.1, 35-45 (2009)
6. Celebi M. Emre, Y. Alp Aslandogan and Paul R. Bergstresser, Unsupervised border detection of skin lesion images, Information Technology: Coding and Computing, 2005. ITCC 2005. International Conference on, 2, IEEE (2005)
7. Trabelsi Olfa, et al., Skin disease analysis and tracking based on image segmentation, Electrical Engineering and Software Applications (ICEESA), 2013 International Conference on, IEEE (2013)
8. Situ Ning, et al., Automatic segmentation of skin lesion images using evolutionary strategy, Image Processing, 2007. ICIP 2007. IEEE International Conference on, 6, IEEE (2007)
9. Tanaka Toshiyuki, Tomoo Joke and Teruaki Oka, Cell nucleus segmentation of skin tumor using image processing, Engineering in Medicine and Biology Society, 2001. Proceedings of the 23rd Annual International Conference of the IEEE, 3, IEEE (2001)
10. Sumithra R., Mahamad Suhil and D.S. Guru, Segmentation and Classification of Skin Lesions for Disease Diagnosis, Procedia Computer Science, 45, 76-85 (2015)
11. Mittra Anal Kumar and R. Parekh, Automated detection of skin diseases using texture features, International Journal of Engineering Science and Technology (IJEST), 3.6 (2011)
12. Hashim Hadzli, et al., A study on RGB color extraction of psoriasis lesion using principle component analysis (PCA), Research and Development, 2006. SCOReD 2006. 4th Student Conference on, IEEE (2006)
13. Clawson K., Morrow P., Scotney B., McKenna J. and Dolan O., Analysis of pigmented skin lesion border irregularity using the harmonic wavelet transform, Machine Vision and Image Processing Conference, 2009. IMVIP'09. 13th International, 18-23, IEEE (2009)
14. Ganzeli H.S., et al., SKAN: Skin Scanner-System for Skin Cancer Detection Using Adaptive Techniques, IEEE Latin America Transactions (Revista IEEE America Latina), 9.2, 206-212 (2011)
15. Dhandra B.V., et al., Color Histogram Approach for Analysis of Psoriasis Skin Disease (2013)
An Understanding of Local-Area Networks Using Catalanelbow
Kumari Asima Mahato, Baby Kumari, Sunita Kumari, Sushma Kumari and
Arun Kanti Manna
Department of Computer Science and Engineering,Govt. Polytechnic Silli, Ranchi-835102, Jharkhand, INDIA
Abstract
Scholars agree that virtual modalities are an interesting new topic in the field of robotics, and end-users concur. In fact, few
biologists would disagree with the development of redundancy. We introduce a Bayesian tool for visualizing fiber-optic cables,
which we call Catalan Elbow.
Keywords: Catalan Elbow, Dogfooding, Fiber-optic cable, Robotics
Introduction
The programming languages solution to active networks22 is defined not only by the evaluation of rasterization, but also by the
essential need for forward-error correction. However, a confirmed challenge in hardware and architecture is the theoretical
unification of forward-error correction and the analysis of B-trees. On a similar note, contrarily, an important question in
networking is the exploration of Scheme. However, replication alone is not able to fulfill the need for the analysis of reinforcement
learning. CatalanElbow, our new application for atomic archetypes, is the solution to all of these obstacles. Without a doubt, the
shortcoming of this type of method, however, is that the foremost replicated algorithm for the improvement of massive multiplayer
online role-playing games by V. Moore et al.14 is in Co-NP. Our method is in Co-NP. This combination of properties has not yet
been analyzed in prior work.
In this paper, we make two main contributions. We use large-scale technology to argue that extreme programming can be made loss
less, ubiquitous, and interposable. We verify not only that extreme programming and DHCP are continuously incompatible, but that
the same is true for interrupts.
The rest of this paper is organized as follows. We motivate the need for replication. Continuing with this rationale, to fix this
question, we demonstrate that though IPv6 and write-ahead logging can agree to accomplish this ambition, RPCs can be made
ubiquitous, metamorphic, and authenticated. We place our work in context with the prior work in this area. Next, to achieve this
intent, we verify that while journaling file systems can be made lossless, client-server, and wireless, simulated annealing and
802.11b are regularly incompatible. As a result, we conclude.
Framework
Our research is principled. We executed a 9-year-long trace arguing that our framework is not feasible. This is an appropriate
property of our algorithm. We show CatalanElbow’s efficient creation in Figure 1. Similarly, despite the results by Takahashi, we
can validate that multi-processors and architecture can interfere to realize this objective. The question is, will CatalanElbow satisfy
all of these assumptions? Exactly so14.
We consider a system consisting of n virtual machines. Though it at first glance seems perverse, it has ample historical precedence.
Furthermore, Figure 1 details the relationship between CatalanElbow and classical algorithms. We scripted a minute-long trace
validating that our design holds for most cases. This seems to hold in most cases.
We believe that the seminal empathic algorithm for the emulation of courseware that would allow for further study into thin clients
by Kobayashi et al. is optimal. Furthermore, we assume that vacuum tubes can be made low-energy, stable, and real-time. We
consider a methodology consisting of n online algorithms. Next, any robust refinement of homogeneous methodologies will clearly
require that 802.11b18 and systems can synchronize to surmount this problem; Catalan Elbow is no different.
Figure-1
Our methodology develops hash tables in the manner detailed above.
Figure-2
CatalanElbow’s semantic improvement.
Implementation
After several minutes of difficult architecting, we finally have a working implementation of CatalanElbow. On a similar note,
despite the fact that we have not yet optimized for security, this should be simple once we finish implementing the homegrown
database. Even though this result is rarely a practical mission, it is derived from known results. Hackers worldwide have complete
control over the centralized logging facility, which of course is necessary so that Boolean logic and replication can collaborate to
address this obstacle. CatalanElbow is composed of a server daemon, a server daemon, and a virtual machine monitor.
Results
As we will soon see, the goals of this section are manifold. Our overall evaluation seeks to prove three hypotheses: (1) that ROM
throughput behaves fundamentally differently on our desktop machines; (2) that NV-RAM throughput behaves fundamentally
differently on our planetary-scale overlay network; and finally (3) that popularity of the memory bus is more important than an
application’s historical ABI when minimizing 10th-percentile time since 2004. Our logic follows a new model: performance matters
only as long as performance takes a back seat to usability22. Note that we have decided not to emulate a heuristic’s wireless user-kernel boundary. Despite the fact that such a claim at first glance seems unexpected, it is supported by prior work in the field.
Third, only with the benefit of our system’s flash-memory throughput might we optimize for complexity at the cost of complexity
constraints. Our evaluation will show that doubling the USB key throughput of opportunistically cooperative models is crucial to
our results.
Figure-3
The average hit ratio of Catalan Elbow, compared with the other heuristics.
Hardware and Software Configuration: Our detailed evaluation mandated many hardware modifications. We performed an
emulation on our system to disprove the computationally scalable behavior of mutually exclusive models. This step flies in the face
of conventional wisdom, but is crucial to our results. To begin with, we added 200Gb/s of Wi-Fi through-put to our
decommissioned Atari 2600s to better understand the effective ROM throughput of our introspective overlay network. On a similar
note, we added more 10GHz Pentium IIs to our linear-time overlay network to probe the effective floppy disk space of our XBox
network. Further, we tripled the effective ROM throughput of our embedded cluster to discover algorithms 13. Along these same
lines, we removed some tape drive space from our network to measure the mutually ubiquitous behavior of partitioned models.
Similarly, we added 8MB of flash-memory to our decommissioned Motorola bag telephones to disprove the work of British gifted
hacker I. V. Davis. Lastly, we removed 300MB/s of Ethernet access from our mobile telephones.
CatalanElbow does not run on a commodity operating system but instead requires a mutually hardened version of FreeBSD. All
software was hand hex-editted using a standard toolchain built on the American toolkit for independently improving wireless
Nintendo Gameboys. We implemented our simulated annealing server in Smalltalk, augmented with mutually wired extensions 10,22.
Next, we note that other researchers have tried and failed to enable this functionality.
Figure-4
The expected block size of CatalanElbow, as a function of popularity of hierarchical databases.
Dogfooding Methodology
Is it possible to justify having paid little attention to our implementation and experimental setup? No. Seizing upon this
approximate configuration, we ran four novel experiments: (1) we measured WHOIS and DNS performance on our mobile
telephones; (2) we asked (and answered) what would happen if independently wireless superblocks were used instead of
superpages; (3) we dogfooded CatalanElbow on our own desktop machines, paying particular attention to ROM throughput; and (4)
we ran 85 trials with a simulated RAID array workload, and compared results to our earlier deployment. Now for the climactic
analysis of all four experiments. Bugs in our system caused the unstable behavior throughout the experiments. On a similar note,
the many discontinuities in the graphs point to exaggerated interrupt rate introduced with our hardware upgrades. On a similar note,
bugs in our system caused the unstable behavior throughout the experiments. Shown in Figure-7, the first two experiments call
attention to our system’s response time. The results come from only 0 trial runs, and were not reproducible. Second, note that
Figure-7 shows the average and not expected parallel expected response time. Next, note how deploying semaphores rather than
deploying them in a chaotic spatiotemporal environment produce smoother, more reproducible results. Our objective here is to set
the record straight.
Figure-5
The mean signal-to-noise ratio of our algorithm, compared with the other solutions.
Lastly, we discuss all four experiments. Gaussian electromagnetic disturbances in our Internet-2 overlay network caused unstable
experimental results. Next, note how simulating access points rather than simulating them in software produce more jagged, more
reproducible results. Error bars have been elided, since most of our data points fell outside of 25 standard deviations from observed
means.
Related Works
While we know of no other studies on efficient communication, several efforts have been made to deploy hierarchical databases 12.
The foremost framework does not store the refinement of 802.11 mesh networks as well as our solution.
Further, although Sato and Kumar also explored this method, we emulated it independently and simultaneously4. Although we
have nothing against the prior approach by Johnson et al.19, we do not believe that method is applicable to homogeneous robotics11,7.
Figure-6
Note that distance grows as distance decreases – a phenomenon worth improving in its own right.
The improvement of perfect models has been widely studied 5. We had our solution in mind before Sasaki et al. published the recent
acclaimed work on the synthesis of von Neumann machines. Charles Darwin et al. suggested a scheme for synthesizing multiprocessors, but did not fully realize the implications of congestion control at the time 3. Contrarily, these solutions are entirely
orthogonal to our efforts. Although we are the first to explore the study of red-black trees in this light, much related work has been
devoted to the deployment of architecture20. A recent unpublished undergraduate dissertation16,5,1 proposed a similar idea for XML6.
Unlike many previous approaches23, we do not attempt to harness or locate flexible technology15. On a similar note, instead of
visualizing psychoacoustic algorithms8, we solve this grand challenge simply by exploring the investigation of XML 9,17. As a result,
the framework of Moore et al.2 is a technical choice for “fuzzy” models. Here, we fixed all of the challenges inherent in the
previous work.
Figure-7
These results were obtained by Harris et al.21; we reproduce them here for clarity.
Conclusion
Our algorithm will answer many of the problems faced by today’s hackers worldwide. Further, our framework for analyzing
reinforcement learning is famously useful. On a similar note, to surmount this problem for the construction of link-level
acknowledgements, we proposed new embedded symmetries. The characteristics of our methodology, in relation to those of more
much-touted solutions, are predictably more appropriate. The exploration of evolutionary programming is more compelling than
ever, and our heuristic helps physicists do just that.
References
1. Agarwal R. and Harris G., Exploring IPv6 and cache coherence with SixBattel, In Proceedings of the Symposium on Atomic, Reliable Information (2001)
2. Anand O., Wu B. and Wilkinson J., Investigating IPv4 and thin clients, In Proceedings of FOCS (1999)
3. Anderson P.J. and Martin G.L., A deployment of Voice-over-IP, Journal of Automated Reasoning, 1-11 (1999)
4. Bhabha V., The effect of cacheable symmetries on cryptoanalysis, In Proceedings of FPCA (2004)
5. Daubechies I. and Jackson F., A methodology for the construction of erasure coding, Journal of Secure, Autonomous Symmetries, 34, 75–80 (1999)
6. Dongarra J., Qian H. and Zhao G., A case for virtual machines, OSR, 28, 1–11 (1999)
7. Erdős P. and Sasaki D., A case for the producer-consumer problem, In Proceedings of SIGCOMM (1993)
8. Garcia E., Towards the evaluation of Internet QoS, In Proceedings of MICRO (2004)
9. Garcia V. and Shenker S., Introspective, autonomous archetypes for multi-processors, In Proceedings of WMSCI (1997)
10. Harris V. and Jackson K., Deploying reinforcement learning using modular technology, OSR, 71, 86–103 (2001)
11. Kundu S., Smith M. and Kundu S., Deconstructing the Turing machine with yufts, Journal of Cacheable, Introspective Communication, 3, 157–190 (2002)
12. Minsky M. and Dijkstra E., Emulation of superblocks, In Proceedings of the Symposium on Probabilistic Models (2003)
13. Moore T.V. and Cook S., A case for IPv6, In Proceedings of INFOCOM (1999)
14. Nehru L.G., An improvement of IPv4, Tech. Rep. 4603, IBM Research (2003)
15. Perlis A., Johnson R. and Adleman L., A case for wide-area networks, TOCS, 5, 70–85 (2002)
16. Quinlan J., Evaluation of RPCs, Journal of Robust, Knowledge-Based Technology, 1, 81–109 (2002)
17. Robinson F. and Leiserson C., Sensor networks considered harmful, In Proceedings of the Symposium on Replicated, Highly-Available Modalities (1991)
18. Seshagopalan Z., Reddy R., Leiserson C., Corbato F., Estrin D. and Knuth D., Vast: A methodology for the refinement of write-ahead logging, Journal of Atomic, Heterogeneous Theory, 20, 57–63 (2003)
19. Smith J., Anderson J., Corbato F. and Sasaki F., ECLAT: Improvement of extreme programming, Journal of Probabilistic Methodologies, 10, 85–107 (1997)
20. Thomas O. and Sun D., A methodology for the analysis of red-black trees, In Proceedings of the WWW Conference (1990)
21. Wilkes M.V., Decoupling expert systems from web browsers in web browsers, OSR, 55, 1–16 (1998)
22. Williams F., The influence of authenticated information on artificial intelligence, Journal of Secure, Pseudorandom Archetypes, 9, 153–196 (1999)
23. Zhao Q., Operating systems no longer considered harmful, Journal of Virtual Models, 4, 156–197 (1999)
Affect detection from facial expression: A review
Aritra Ghosh and Saikat Basu
1Department of Software Engineering, West Bengal University of Technology, Kolkata, INDIA
2Department of Computer Science and Engineering, West Bengal University of Technology, Kolkata, INDIA
Abstract
In our daily life, the face is the primary focus of attention; any person's emotion is first reflected in the face. This paper first gives a
basic idea of human emotion, and various dimensional models of emotion are described to present the different types of
emotions. Our main focus is on emotion recognition, or affect detection, from facial expression. The stages of emotion recognition
from facial expression are described step by step: various facial features are extracted from the main facial components using
different feature extraction algorithms, after which classification algorithms are trained on the relevant features to recognize
emotions.
Keywords: HCI; SVM; PCA; SVD; Stimuli; LOSO; 10-fold cross validation.
Introduction
Nowadays, recognizing a person's emotional state is very useful in fields such as Human Computer Interaction (HCI)1-3,
physiological health services, medical science, and education (counselling, e-learning applications, etc.). Recently, emotion
recognition has been used widely in HCI: the ability of computers to understand and analyze human emotions and perform
appropriate actions accordingly is one of the key focus areas of HCI. If computers and robots can recognize or understand the
state of a user's mind, they can interact with the user accordingly; this helps the user operate the system with greater ease and
makes the interaction more meaningful. For example, if a psychologist or psychiatrist knows the mental state of a patient, it is
more convenient for the doctor to treat the patient. Distance learning and e-learning would be more meaningful and effective if the
system could analyze the mental state of the user and provide study materials accordingly2, and accidents may be avoided by
recognizing a driver's mental state4. Facial expression, color, sound, speech, physiological signals, and gesture (movement of
different body parts) are the different modalities used for human emotion recognition. In this paper our main focus is the recognition
of emotion from facial expression. Facial expression is one of the most commonly used and simplest approaches in this field, as it
changes with a person's mental state and emotion; it is also the most common way to express emotion and very easy to capture and
analyze.
Models of Emotion
Emotion is a mental state or spontaneous reaction that affects human consciousness. Emotions are of many types, such as anger,
happiness, disgust, sorrow, fear and surprise.
Various models of emotion have been proposed by different researchers.
Discrete Emotional Model: This is one of the most widely applied models of emotion, proposed by Paul Ekman. According to this
model there is a set of basic emotions common to all cultures: happiness, sadness, surprise, disgust, anger and fear2.
Two Dimensional Valence-Arousal Model: In this model, proposed by Lang, emotions are characterized by their valence and
arousal, which represent pleasantness and activation level respectively. Valence ranges from positive to negative and arousal ranges
from low to high, so emotions are categorized on a two-dimensional scale.
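As an illustration, a coarse mapping from a (valence, arousal) point to a quadrant emotion can be sketched as below. The specific quadrant labels chosen here are an assumption for demonstration, not part of Lang's original model:

```python
# Illustrative sketch only: the quadrant-to-emotion labels below are an
# assumption for demonstration, not taken from Lang's model.

def quadrant_emotion(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) point to a coarse emotion quadrant."""
    if valence >= 0 and arousal >= 0:
        return "happy"      # pleasant, activated
    if valence < 0 and arousal >= 0:
        return "angry"      # unpleasant, activated
    if valence < 0:
        return "sad"        # unpleasant, deactivated
    return "relaxed"        # pleasant, deactivated

print(quadrant_emotion(0.7, 0.8))  # -> happy
```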
Plutchik's model
Plutchik's model is commonly used, in different forms or versions, in HCI (Human Computer Interaction) and sentiment analysis.
Robert Plutchik proposed a hybrid of the basic-complex category theories and the dimensional theories. It is basically a three-
dimensional model which arranges emotions in concentric circles, with more basic emotions in the inner circles and more complex
ones in the outer circles; the outer-circle emotions are formed by blending the inner-circle ones. Plutchik's model emanates from a
circumplex representation, in which emotional words are plotted based on similarity.
International Science Congress Association
111
Proceeding of Recent Trends in Computations and Mathematical Analysis in Engineering and Sciences-2015
Ranchi, Jharkhand, India 20th -21st Nov 2015
Figure-1
Two-dimensional valence-arousal model.
Figure-2
Plutchik’s model [5]
Steps Of Emotion Recognition From Facial Expression
Emotion Elicitation Stimuli: In order to recognize emotion accurately, gathering high-quality facial data is important, and the
elicited emotions must be natural. Different emotion elicitation techniques are used, such as the International Affective Picture
System, audio-visual clips and other multimodal approaches2,7. To obtain the target emotion, the chance of inducing multiple
emotions in the subject must be eliminated.
Pre-Processing: Raw data are always noisy and contain external artefacts, so these noises and artefacts must be eliminated before
processing. For facial data, various filtering techniques such as low-pass filters (Butterworth, adaptive filters etc.), smoothing
techniques and histogram-based methods are used to pre-process the raw data.
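The smoothing step can be illustrated with a simple 3x3 mean filter in NumPy; this is a minimal stand-in for the filtering techniques listed above, not the specific filters used in the cited works:

```python
import numpy as np

def mean_filter3(img: np.ndarray) -> np.ndarray:
    """Smooth a grayscale image with a 3x3 mean filter (edges replicated)."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    # Sum the nine shifted copies of the image, then divide by nine.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

noisy = np.array([[10, 10, 10], [10, 100, 10], [10, 10, 10]], dtype=float)
print(mean_filter3(noisy)[1, 1])  # the spike is averaged down: 20.0
```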
Feature Extraction: After the raw data are pre-processed, emotion is recognized from different facial features, which are extracted
from the images. A number of different feature extraction techniques are used, such as geometrical feature extraction, template
matching algorithms and multimodal information6.
Geometrical Feature Extraction
Eye and Eye-brow Features: Thickness of an eyebrow, height of an eyebrow, gap between an eye and an eyebrow, height of an eye,
eyebrow width, horizontal difference between the left-most edge of an eyebrow and that of an eye, distance between the centre of an
eye and the left-most edge of the eye, distance between the centre of an eye and the right-most edge of the eye, eye width, and the
farthest height between an eye and an eyebrow6.
Mouth Features: The closest distance between the upper lip and the bottom of the nose, thickness of the upper lip, height of the
widened mouth, height of the whole mouth, the left-steepest region of the lower lip, distance between the centre and the left-most
point of the mouth, the width of the whole mouth, the right-steepest region of the lower lip, and the width of the widened mouth6.
Nose Features: Height of the nose, distance between the eyebrow and the bottom of the nose, distance between the eye and the
bottom of the nose, distance between the peak and the bottom of the nose, and nose width6.
Chin Features: Height between the upper lip and the edge of the chin, height to the lower lip, and the horizontal half-width of the
chin6.
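The distance-based features above can be computed directly from facial landmark coordinates. The landmark names and coordinates below are hypothetical, for illustration only; a real system would obtain them from a facial landmark detector:

```python
from math import dist

# Hypothetical landmark names and (x, y) coordinates, for illustration only.
landmarks = {
    "left_eye_top": (30, 40), "left_eye_bottom": (30, 46),
    "left_brow_center": (30, 32),
    "mouth_left": (35, 70), "mouth_right": (55, 70),
}

def geometric_features(pts: dict) -> dict:
    """Euclidean-distance features of the kind listed above."""
    return {
        "eye_height": dist(pts["left_eye_top"], pts["left_eye_bottom"]),
        "eye_brow_gap": dist(pts["left_brow_center"], pts["left_eye_top"]),
        "mouth_width": dist(pts["mouth_left"], pts["mouth_right"]),
    }

print(geometric_features(landmarks))
```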
Template Matching: Here the face is represented by a template. Images are represented as two-dimensional arrays of intensity
values, and comparison is done using the Euclidean distance.
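A minimal sketch of this template matching scheme, using toy 2x2 intensity arrays as stand-in face templates (illustrative data, not a real face database):

```python
import numpy as np

def best_template(image: np.ndarray, templates: dict) -> str:
    """Return the label of the template nearest in Euclidean distance."""
    flat = image.ravel().astype(float)
    return min(templates,
               key=lambda k: np.linalg.norm(flat - templates[k].ravel()))

# Toy intensity arrays standing in for expression templates (illustrative).
templates = {
    "happy": np.array([[200, 200], [50, 50]]),
    "sad": np.array([[50, 50], [200, 200]]),
}
probe = np.array([[190, 210], [60, 40]])
print(best_template(probe, templates))  # -> happy
```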
Feature Reduction: In feature extraction, many features are extracted, some of which may not be related to emotion. Feature
reduction is therefore performed to eliminate the unnecessary features, as they can degrade the performance and accuracy of the
system. Different searching algorithms, such as sequential forward search, sequential backward search and Fisher projection2,7, are
implemented to identify the relevant features; the remaining unnecessary features are eliminated.
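Sequential forward search can be sketched as a greedy loop over candidate features. The feature names and scoring callback below are illustrative; in practice the score would typically be cross-validated classifier accuracy on the candidate subset:

```python
def sequential_forward_search(features, score, k):
    """Greedily grow a feature subset, adding whichever remaining feature
    improves the score callback the most, until k features are selected."""
    selected = []
    while len(selected) < k:
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected

# Toy relevance scores (hypothetical feature names, for illustration).
relevance = {"eye_height": 0.9, "mouth_width": 0.7, "nose_len": 0.1}
score = lambda subset: sum(relevance[f] for f in subset)
print(sequential_forward_search(list(relevance), score, 2))
```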
Classification: Once the relevant features are selected, the system is trained to recognize and classify different emotional states
with the help of the available features. For this purpose various classifiers such as K-Nearest Neighbour (KNN), Support Vector
Machines (SVM), Linear Discriminant Analysis (LDA) and Artificial Neural Networks (ANN)2,7 are used.
Figure-3
Process flow of Facial Emotion Recognition.
Related Works
Facial expression is the most commonly used modality for recognizing human emotion, and researchers have used different
approaches and techniques for extracting facial features and recognizing the mental state of a person from facial expression.
Tables 1 and 2 below show the recognition of different emotions using Principal Component Analysis (PCA) and Singular Value
Decomposition (SVD). Two approaches are compared: in the first only PCA is used, and in the second PCA is used together with
SVD; in both cases the main statistical measurements are the recognition rate, precision and accuracy (Tables 1-3)1. It is noticed
that the overall accuracy is greater when PCA is used with SVD (Table 3).
Figure-4
System Architecture.
Table-1
Accuracy Rates of Various Facial Expressions1

Facial Expression   Accuracy Rate PCA+SVD (%)   Accuracy Rate PCA (%)
Happy                       92.85                       90
Disgust                     92.85                       90
Natural                     95.71                       94.29
Sad                         90                          82.86
Anger                       95.71                       97.14
Surprise                    97.14                       91.43
Fear                        95.71                       95.71

Table-2
Recognition Rates of Various Facial Expressions1

Facial Expression   Recognition Rate PCA+SVD (%)   Recognition Rate PCA (%)
Happy                       57.14                       42.85
Disgust                     89.47                       68.42
Natural                     85                          70
Sad                         76.19                       71.43
Anger                       80                          75
Surprise                    65                          50
Fear                        63.63                       63.63
Another approach is used in [7], where audio-visual affective information for six emotions (happiness, sadness, surprise, anger,
disgust and fear) is analyzed. This technique improves recognition by modelling the cross-relation of the data, which is treated as
asynchronous streams [7]. Binary-class classification is performed using a Support Vector Machine (SVM), with either randomized
10-fold cross validation, in which the database is divided into 10 folds containing approximately equal amounts of data and 9 folds
are used to train a model tested on the remaining fold (for subject-dependent analysis), or the Leave-One-Subject-Out (LOSO)
cross validation strategy (for subject-independent analysis). Analysis of Variance (ANOVA) is employed for the statistical results.
Multi-class classification is then performed, again using an SVM with LOSO cross validation (for subject-independent analysis),
and its result is compared with the binary-class classification. The un-weighted accuracy is greater when using LOSO than with
10-fold cross validation (Tables 3 and 4).
Table-3
Multi-Class Classification Accuracy (in %): Confusion Table for Randomized 10-Fold Cross Validation

REFERENCE     RECOGNISED EMOTION
EMOTION       ANGER   DISGUST   FEAR   HAPPY   SAD   SURPRISE
ANGER          65.2     7.0      9.3    2.3     7.4     8.8
DISGUST         6.5    72.1      7.0    6.5     6.0     1.9
FEAR            7.9     8.4     48.8    6.5    14.4    14.0
HAPPY           0.5     3.8      5.7   82.5     2.3     5.2
SAD             9.7     4.2     16.3    2.8    53.5    13.5
SURPRISE       10.2     0.9     15.3    4.2    14.0    55.3
Table-4
Multi-Class Classification Accuracy (in %): Confusion Matrix for Leave-One-Subject-Out Cross Validation

REFERENCE     RECOGNISED EMOTION
EMOTION       ANGER   DISGUST   FEAR   HAPPY   SAD   SURPRISE
ANGER          65.2     7.0      9.3    2.3     7.4     8.8
DISGUST         6.5    72.1      7.0    6.5     6.0     1.9
FEAR            7.9     8.4     48.8    6.5    14.4    14.0
HAPPY           0.5     3.8      5.7   82.5     2.3     5.2
SAD             9.7     4.2     16.3    2.8    53.5    13.5
SURPRISE       10.2     0.9     15.3    4.2    14.0    55.3
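The two cross-validation strategies above can be sketched as plain index-splitting generators; this is an illustration of the protocols, not the cited authors' code:

```python
def ten_fold_splits(n_samples: int, n_folds: int = 10):
    """Yield (train_idx, test_idx) pairs: each fold is held out once
    while the remaining folds are used for training."""
    idx = list(range(n_samples))
    folds = [idx[i::n_folds] for i in range(n_folds)]
    for i, test in enumerate(folds):
        train = [j for k, f in enumerate(folds) if k != i for j in f]
        yield train, test

def loso_splits(subject_of: list):
    """Leave-One-Subject-Out: hold out all samples of one subject."""
    for s in sorted(set(subject_of)):
        test = [i for i, subj in enumerate(subject_of) if subj == s]
        train = [i for i, subj in enumerate(subject_of) if subj != s]
        yield train, test

splits = list(ten_fold_splits(20))
print(len(splits))  # 10 folds
```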
The paper in9 mainly concentrates on determining the state of a student's mind during a class, which can be very useful in
enhancing the education system: a student may be confused during a class or may have difficulty understanding a certain topic,
and the proposed system can analyze the student's mental state and help improve the learning process. The mental state is analyzed
through the student's facial features and their relationship with the mental state. The eyes, eyebrows, forehead, mouth and the two
inner eye corners are identified as the most relevant features for reflecting the mental state of students. Raised eyebrows, widely
opened eyes, stretched lips, and inner eye corners with little wrinkle beyond their natural state are recognized as a positive state;
naturally opened eyes, a naturally closed mouth and natural inner eye corners are signs of a neutral state; and dropped eyebrows,
shrunk eyes and an unnaturally open or closed mouth indicate a negative state. The image information is calculated and extracted
using a texture description operator11 derived from the Local Binary Pattern (LBP) proposed in10, and an SVM is used as the
classifier, so the complexity of the training depends entirely on the sample size.
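The basic LBP operator from10 can be sketched as follows for interior pixels; this minimal version omits the multiresolution and rotation-invariant extensions that the full method adds:

```python
import numpy as np

def lbp_8(img: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour Local Binary Pattern for interior pixels:
    each neighbour >= centre contributes one bit to an 8-bit code."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Neighbour offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= centre).astype(np.uint8) << bit
    return out

img = np.array([[9, 9, 9],
                [0, 5, 0],
                [0, 0, 0]], dtype=np.uint8)
print(lbp_8(img))  # bits 0-2 set (the three top neighbours): [[7]]
```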
Table-5
Observations9

NO    TRAIN SAMPLES   TEST SAMPLES   RIGHT NOS.   RECOGNITION RATE (%)
1          190             23            17              73.91
2          191             22            15              68.18
3          191             22            16              72.73
4          193             20            13              65.00
5          192             21            14              66.67
6          192             21            16              76.19
7          193             21            16              80.00
8          192             21            15              71.4
9          192             21            14              66.67
10         191             22            16              73.73
Avg. Recognition Rate                                    71.35%

Table-6
Observations9

EXPRESSION   TOTAL NO.   RIGHT RECOGNIZED NO.   WRONG RECOGNIZED NO.   RECOGNITION RATE (%)
ANGER            30              20                     10                    66.67
DISGUST          29              20                      9                    68.97
FEAR             32              22                     10                    68.75
HAPPINESS        31              25                      6                    80.65
NEUTRAL          30              22                      8                    73.33
SADNESS          31              22                      9                    70.97
SURPRISE         30              21                      9                    70.00

Table-7
Comparison of Different Facial Expression Emotion Recognition Techniques

Ref No.   Stimuli Used               Database             No. of Subjects   Classification Used                     Avg. Accuracy (%)
[1]       Facial Expression Images   JAFFE                70                PCA                                     67.14
[1]       Facial Expression Images   JAFFE                70                PCA+SVD                                 78.57
[7]       Audio-Visual               eNTERFACE'05         42                SVM, 10-fold cross validation, ANOVA    47.6
[7]       Audio-Visual               eNTERFACE'05         42                SVM, 10-fold cross validation, ANOVA    62.9
[9]       Facial Images              Student volunteers   190               SVM+LBP                                 71.35
[15]      Images                     Unknown              42                SVM+AAM                                 84.55
Algorithms: Different types of algorithms are used when recognizing human emotions: machine learning algorithms, searching
algorithms, feature reduction algorithms and statistical algorithms. All of these are employed to design a system that can recognize
emotion. Machine learning algorithms are used to train the system, searching algorithms can be used to separate the features
relevant to emotion from the irrelevant ones, and statistical algorithms are employed to recognize the emotion as accurately as
possible.
Principal Component Analysis (PCA): PCA is a feature extraction algorithm basically used for representing data. When the data
set to be processed is very large, PCA is implemented to construct a low-dimensional feature space12 from the large-dimensional
data space12. In PCA, the vectors best suited to the distribution of facial expressions within the entire image space are found12,13.
First a principal component is detected which captures the largest possible variance in the database; each succeeding component
then has the largest possible variance after the previous component. As the principal components are the eigenvectors of the
symmetric covariance matrix, they are orthogonal. PCA is the easiest of the eigenvector-based analyses, but it only extracts
features and reduces a large data set to a smaller one.
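A minimal PCA sketch via eigendecomposition of the covariance matrix, assuming samples are rows of the data matrix:

```python
import numpy as np

def pca(X: np.ndarray, k: int) -> np.ndarray:
    """Project the rows of X onto the k principal components, i.e. the
    eigenvectors of the covariance matrix with the largest eigenvalues."""
    Xc = X - X.mean(axis=0)                   # centre the data
    cov = np.cov(Xc, rowvar=False)            # symmetric covariance matrix
    vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    components = vecs[:, ::-1][:, :k]         # top-k, descending variance
    return Xc @ components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca(X, 2)
print(Z.shape)  # (100, 2)
```

The first projected column carries at least as much variance as the second, by construction.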
Support Vector Machine (SVM): An SVM is a non-probabilistic binary linear classifier14 used for classification and regression
analysis. It is a supervised learning model: it analyses the features and assigns them to different categories15. An SVM classifier is
trained to recognize different emotional features, classify them into different categories and recognize emotions from there. SVM
is very accurate and produces robust classification results even when the data are not linearly separable; however, it can be a bit
slow during the test phase, and its algorithmic complexity is high.
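The principle behind SVM training can be illustrated with a minimal primal sub-gradient (Pegasos-style) solver on toy separable data; this is a sketch of the idea, not the solvers used in the cited works:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Minimal sub-gradient descent on the regularized hinge loss
    (Pegasos-style); a didactic sketch, not a production solver."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, epochs * n + 1):
        i = (t - 1) % n
        eta = 1.0 / (lam * t)                 # decaying step size
        if y[i] * (X[i] @ w) < 1:             # point violates the margin
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                                 # only shrink (regularization)
            w = (1 - eta * lam) * w
    return w

# Linearly separable toy data: class +1 and its mirror image as class -1.
X = np.array([[2.0, 1.0], [3.0, 2.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1, 1, -1, -1])
w = train_linear_svm(X, y)
print(np.sign(X @ w))  # all four points classified correctly
```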
Analysis of Variance (ANOVA): ANOVA is a statistical model in which the differences among group means and their associated
procedures are analyzed to determine differences between the means of several groups. The total variance present in the data set is
divided into non-negative components attributed to the factors of variation. Its advantages are that it is very simple and that
experimental error can be reduced; however, ANOVA is not suitable for non-homogeneous data.
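The one-way ANOVA F statistic can be computed directly from its definition; the three groups below are illustrative (e.g., feature values grouped by emotion class):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square, computed directly from the definitions."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Three hypothetical groups of feature values, one per emotion class.
print(one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # 3.0
```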
K-Nearest Neighbours (K-NN): K-NN is a non-parametric method basically used for classification and regression. The output of
K-NN classification is a class membership. It is one of the simplest machine learning algorithms: functions are approximated
locally, and all computation is postponed until classification is performed. The major drawback of K-NN is that it simply uses the
training data for classification rather than learning anything from them, which is known as lazy learning.
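A minimal K-NN classifier with Euclidean distance and majority voting; the toy feature vectors and emotion labels are illustrative:

```python
from collections import Counter
from math import dist

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points (Euclidean distance); ties are broken by insertion order."""
    nearest = sorted(train, key=lambda fv: dist(fv[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D feature vectors labelled with emotions (illustrative data).
train = [((1, 1), "happy"), ((1, 2), "happy"),
         ((8, 8), "sad"), ((8, 9), "sad"), ((9, 8), "sad")]
print(knn_predict(train, (2, 1)))  # -> happy
```

Note that all the distance computations happen at query time, which is exactly the "lazy learning" behaviour described above.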
Future Scope: Emotion recognition from facial expression is the simplest of all the available techniques. In future, an Android-
based emotion recognition system could be built using a mobile camera. Facial-expression-based emotion recognition may be the
simplest approach, but it does not always give accurate results, so there is much scope for improvement.
Conclusion
In this paper, the main focus is affect detection from facial expression. First, a brief idea of the different types of emotion is given.
Emotion recognition from facial expression has several steps: different facial features are recognized, and when those features are
analyzed, the mental state of a person can be inferred, since the features change with emotion. Classification algorithms such as
SVM, K-NN and LDA play a large part in emotion recognition, as they are used to train the system and classify the emotional
features into different categories. There are also feature reduction algorithms, such as PCA, for extracting key features from a large
data set. At present, emotions are recognized from physiological signals, speech, gesture etc., but the most widely used technique
is emotion recognition from facial expression because of its simplicity, although it is not as accurate as the other techniques since
facial expressions are very easy to fake.
References
1. Gosavi Ajit P. and S.R. Khot, Emotion recognition using Principal Component Analysis with Singular Value Decomposition, In
   Electronics and Communication Systems (ICECS), 2014 International Conference on, 1-5, IEEE, (2014)
2. Jerritta S., M. Murugappan, R. Nagarajan and Khairunizam Wan, Physiological signals based human emotion recognition: a
   review, In Signal Processing and its Applications (CSPA), 2011 IEEE 7th International Colloquium on, 410-415, IEEE, (2011)
3. W.R. Picard, Affective computing: challenges, International Journal of Human-Computer Studies - Applications of affective
   computing in human-computer interaction, 59, 55-64, (2003)
4. Baker S., Real-time non-rigid driver head tracking for driver mental state estimation, (2004)
5. Kamińska Dorota and Adam Pelikant, Recognition of human emotion from a speech signal based on Plutchik's model,
   International Journal of Electronics and Telecommunications, 58(2), 165-170, (2012)
6. Byun Kwang-Sub, Chang-Hyun Park and Kwee-Bo Sim, Emotion recognition from facial expression using hybrid-feature
   extraction, In SICE Annual Conference, 2483-2487, (2004)
7. Pantic Maja and Léon J.M. Rothkrantz, An expert system for multiple emotional classification of facial expressions, In Tools
   with Artificial Intelligence, 1999, 11th IEEE International Conference on, 113-120, IEEE, (1999)
8. Tawari Ashish and Mohan Manubhai Trivedi, Face expression recognition by cross modal data association, Multimedia, IEEE
   Transactions on, 15(7), 1543-1552, (2013)
9. Changjun Zhou, Peipei Shen and Xiong Chen, Research on algorithm of state recognition of students based on facial
   expression, In Electronic and Mechanical Engineering and Information Technology (EMEIT), 2011 International Conference
   on, 2, 626-630, (2011)
10. T. Ojala, M. Pietikainen and T. Maenpaa, Multiresolution Gray Scale and Rotation Invariant Texture Classification with Local
    Binary Patterns, IEEE Transactions on PAMI, 24(7), 971-987, (2002)
11. A. Hadid, M. Pietikainen and T. Ahonen, A discriminative feature space for detecting and recognizing faces, In CVPR,
    797-804, Washington, DC, USA, (2004)
12. Meher Sukanya Sagarika and Pallavi Maben, Face recognition and facial expression identification using PCA, In Advance
    Computing Conference (IACC), 2014 IEEE International, 1093-1098, IEEE, (2014)
13. Kim Kwang In, Keechul Jung and Hang Joon Kim, Face recognition using kernel principal component analysis, Signal
    Processing Letters, IEEE, 9(2), 40-42, (2002)
14. Sun Jian-ming, Xue-sheng Pei and Shi-sheng Zhou, Facial emotion recognition in modern distant education system using
    SVM, In Machine Learning and Cybernetics, 2008 International Conference on, 6, 3545-3548, IEEE, (2008)
15. Sun Jian-ming, Xue-sheng Pei and Shi-sheng Zhou, Facial emotion recognition in modern distant education system using
    SVM, In Machine Learning and Cybernetics, 2008 International Conference on, 6, 3545-3548, IEEE, (2008)
A Review on Facial Emotion Recognition System
Zahir Abbas Rahaman and Saikat Basu
1Department of Information Technology, West Bengal University of Technology, Kolkata, West Bengal, India
2Department of Computer Science and Engineering, West Bengal University of Technology
Introduction
The present review relates generally to the field of facial expression recognition, and specifically to a method and apparatus for
recognizing the emotion of an individual using facial actions. In real life people often express emotions through facial expressions,
which are among the most powerful, natural and immediate ways for humans to communicate their emotions and intentions. The
face can express an emotion sooner than people can verbalize or even realize their feelings. Different emotions are expressed using
various facial regions, mainly the mouth, the eyes and the eyebrows. More often, emotional expression is communicated by
changes in one or a few discrete facial features, such as a tightening of the lips in anger or a certain way of opening the lip corners
in sadness. Many computer systems are configured to recognize a small set of emotional expressions, e.g., joy, surprise, anger,
sadness, fear and disgust. A scheme of Facial Units (FUs) has been developed for describing facial expressions: the changes of
facial features caused by the facial muscles are recorded in a database. Some changes are anatomically related to contractions of
specific facial muscles (12 different muscle changes for the upper face and 18 for the lower face) and can occur either singly or in
combination. When they occur in combination, they may be additive, in which case the combination does not change the
appearance of the constituents, or non-additive, in which case the appearance of the constituents does change.
Summary of Emotion Recognition
Accordingly, the present invention is made to address at least the above-described problems and to provide at least the advantages
described below.
In accordance with an aspect of the present invention, a method is provided for recognizing the emotion of an individual based on
FUs. The method includes receiving, from an FU detector, an input FU string including one or more FUs that represents a facial
expression of an individual; matching the input FU string with each of a plurality of FU strings, wherein each of the plurality of
FU strings includes a set of highly discriminative FUs, each representing an emotion; identifying the FU string from the plurality
of FU strings that best matches the input FU string; and outputting an emotion label, corresponding to the best matching FU string,
that indicates the emotion of the individual.
In accordance with another aspect of the present invention, an apparatus is provided for recognizing the emotion of an individual
based on FUs. The apparatus includes a processor and a memory coupled to the processor. The memory includes instructions
stored therein that, when executed by the processor, cause the processor to: receive an input FU string including one or more FUs
that represents a facial expression of the individual; match the input FU string with each of a plurality of FU strings, wherein each
of the plurality of FU strings includes a set of highly discriminative FUs, each representing an emotion; identify the FU string
from the plurality of FU strings that best matches the input FU string; and output an emotion label, corresponding to the best
matching FU string, that indicates the emotion of the individual.
Emotion Recognition System
The various methodologies present in this review will now be described in detail with reference to the accompanying drawings. In
the following description, specific details such as detailed configurations and components are merely provided to assist the overall
understanding of these embodiments of the present invention. It should therefore be apparent to those skilled in the art that various
changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the
present invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
Herein, the terms "facial units (FUs)" and "FUs" are used interchangeably. To map the FUs detected from an individual's face to
target emotions, a relation matrix is formed based on the discriminative power of the FUs with respect to each of the target
emotions: a value that helps determine the statistical relationship between each action unit and one or more emotions. For example,
a high discriminative power indicates that the action unit belongs to an emotion more strongly than an action unit with a low
discriminative power. According to an embodiment of the present invention, the relation matrix is used for matching an input FU
string with a number of template FU strings selected from the relation matrix, in order to recognize the emotion of an individual.
Figure-1
Flowchart
Figure-1 is a flowchart illustrating a method of forming a relation matrix indicating the statistical relationship between a set of FUs
and an emotion, according to an embodiment of the present invention. Referring to Figure-1, a discriminative power is computed
for each FU associated with a facial expression. Discriminative power is a value whose magnitude quantifies how discriminative
the FUs associated with facial actions are for an emotion; it thus enables the identification of highly discriminative facial actions
for various emotions using statistical data. The statistical data relate to probabilities/statistics of correlations between various
emotions and FUs, derived from a large set of facial expressions. In accordance with an embodiment of the present invention, the
discriminative power for each FU is computed based on Equation (1):

H = [A(Yj | Xi) - A(Yj | ~Xi)] / normalization factor    ... (1)

In Equation (1), A(Yj | Xi) is the probability of action unit Yj given that the emotion Xi has occurred, and A(Yj | ~Xi) is the
probability of action unit Yj given that the emotion Xi has not occurred. Using these values, a relation matrix is formed to
represent the statistical relationship between each FU and the six emotions, as illustrated in Figure-2. The values in the relation
matrix are then normalized to suppress the effect of the learning sample size for each emotion. Herein, the discriminative power is
computed for action units that are based on the Facial System.
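Equation (1) can be evaluated directly once the two conditional probabilities have been estimated from the statistical data; the probability values below are hypothetical, for illustration only:

```python
def discriminative_power(p_given: float, p_given_not: float,
                         norm: float = 1.0) -> float:
    """Equation (1): the difference between the probability of action unit
    Yj when emotion Xi occurs and when it does not, normalized."""
    return (p_given - p_given_not) / norm

# Hypothetical probabilities for one FU/emotion pair (illustrative only).
h = discriminative_power(p_given=0.9, p_given_not=0.2)
print(round(h, 3))  # 0.7
```

A large positive H means the action unit strongly indicates the emotion; a large negative H means it strongly indicates its absence.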
Figure-2
Intensity of action units
Referring to Figure-2, positive intensity (a lighter colour) indicates a high probability of an action unit belonging to a particular
emotion, while negative intensity (a darker colour) indicates a high probability of an action unit not belonging to a particular
emotion. For example, the emotion "happy" has FU12, FU6 and FU26 as positive discriminative FUs, and FU1, FU2 and FU5 as
negative discriminative FUs. A matrix is derived from the relation matrix based on the identified set of highly discriminative
action units for each of the target emotions. An example of a matrix including five highly discriminative action units selected for
six emotions, i.e., angry, fear, sad, happy, surprise and disgust, is shown in Table-1 below.
Table-1
Example of Facial Expression
In Table-1, for example, the emotion "angry" has FU20, FU10, FU17, FU7 and FU10 as highly discriminative FUs, and "happy"
has FU22, FU5, FU21, FU2 and FU27 as highly discriminative action units. The matrix in Table-1 helps efficiently map an input
FU string to one or more FU strings corresponding to the six emotions, for recognizing the emotion of an individual based on
detected facial expressions.
The input FU string is matched against the template FU strings in the matrix, and the template FU string that best matches the
input FU string is determined using a longest common subsequence technique. The longest common subsequence technique is an
approximate string matching technique indicating the greatest amount of similarity between the input FU string and one of the
template FU strings. It thus helps determine the template FU string having minimal matching cost against the input FU string,
compared with the remaining template FU strings.
In accordance with an embodiment of the present invention, a common subsequence is determined by matching the input FU
string with each of the template FU strings. The common subsequence is indicative of a distance measure of the amount of
similarity between the input FU string and the template FU strings. From the determined common subsequences, the longest
common subsequence, associated with the FU string best matching the input FU string, is identified. Here a subsequence is a
sequence in which FUs appear in the same relative order but are not necessarily contiguous. Additionally, the longest common
subsequence technique allows insertion and deletion of FUs in the input FU string, but not substitution of FUs.
An emotion label corresponding to the determined template FU string is output as the emotion associated with the individual. For
example, if the input FU string is {4 6 12 17 26}, it best matches the template FU string {12 6 26 10 23}, which corresponds to the
emotion label "happy", as shown in Table-1. In this example, the input FU string includes erroneous FUs such as 4 and 17.
However, because the longest common subsequence technique allows for the insertion of erroneous action units in the input FU
string, the erroneous input FU string can still be accurately mapped to the template FU string to recognize that the individual is
happy. Likewise, deletion is important for mapping input FU strings such as {6}, {6, 10} and {6, 10, 17} to happy, as all of these
FUs indicate happy.
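The longest-common-subsequence matching described above can be sketched with the standard dynamic-programming table, reusing the example FU strings from the text (the "angry" template values are taken from Table-1 as printed):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence (order preserved, gaps
    allowed, no substitution) via the standard DP table."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n]

def best_emotion(input_fus, templates):
    """Pick the emotion whose template FU string shares the longest
    common subsequence with the input FU string."""
    return max(templates, key=lambda e: lcs_length(input_fus, templates[e]))

templates = {"happy": [12, 6, 26, 10, 23], "angry": [20, 10, 17, 7, 10]}
print(best_emotion([4, 6, 12, 17, 26], templates))  # -> happy
```

The erroneous FUs 4 and 17 simply fail to extend the common subsequence, so the match still lands on "happy", exactly as described above.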
Compression
A method for recognizing an emotion of an individual based on facial Units (FUs), the method comprising: receiving an input FU
string including one or more FUs that represents a facial expression of an individual from an FU detector; matching the input FU
string With each of a plurality of FU strings, Wherein each of the plurality of FU strings includes a set of highly discriminative
FUs, each representing an emotion; identifying an FU string from the plurality of FU strings that best matches the input FU string;
and outputting an emotion label corresponding to the best matching FU string that indicates the emotion of the individual. The
method of claim 1, Wherein identifying the FU string from the plurality of FU strings that best matches the input FU string
International Science Congress Association
121
Proceeding of Recent Trends in Computations and Mathematical Analysis in Engineering and Sciences-2015
Ranchi, Jharkhand, India 20th -21st Nov 2015
comprises: determining a common subsequence between the input FU string and each of the plurality of FU strings; and identifying a longest common subsequence from the determined common subsequences, wherein the longest common subsequence is indicative of a greatest amount of similarity between the input FU string and one of the plurality of FU strings, and wherein each common subsequence is indicative of a distance measure of the amount of similarity between the input FU string and each of the plurality of FU strings. The method further comprises: determining a discriminative power for each of a multiplicity of FUs based on statistical data; selecting a set of FUs representing each of a plurality of emotion labels, based on the discriminative power of each FU; and storing the selected set of FUs associated with each of the plurality of emotion labels as FU strings. The selected set of FUs associated with each of the plurality of emotion labels is stored as the FU strings in a matrix, and the discriminative power of each FU indicates the probability of that FU belonging to one of the emotion labels. An apparatus for recognizing an emotion of an individual using facial units (FUs), the apparatus
comprising: a processor; and a memory coupled to the processor, wherein the memory includes instructions stored therein that, when executed by the processor, cause the processor to: receive an input FU string including one or more FUs that represents a facial expression of the individual; match the input FU string with each of a plurality of FU strings, wherein each of the plurality of FU strings includes a set of highly discriminative FUs, each representing an emotion; identify an FU string from the plurality of FU strings that best matches the input FU string; and output an emotion label corresponding to the best matching FU string that indicates the emotion of the individual.
Conclusion
This paper reviews several works on facial expression recognition based on facial feature extraction. The extracted features are known as facial units (FUs). An input FU string is compared with templates stored in a database, each corresponding to a particular emotion. This document is an overview of the methods and methodologies used in various papers by various authors, published by reputed publishers such as IEEE and Springer.
References
1. Y. Amit, D. Geman and K. Wilder, Joint induction of shape features and tree classifiers (1997)
2. Y. Freund and R. Schapire, A decision-theoretic generalization of on-line learning and an application to boosting, In Eurocolt '95, 23-37, Springer-Verlag (1995)
3. C. Papageorgiou, M. Oren and T. Poggio, A general framework for object detection, In ICCV (1998)
4. D. Roth, M. Yang and N. Ahuja, A SNoW-based face detector, In NIPS 12 (2000); H. Rowley, S. Baluja and T. Kanade, Neural network-based face detection, In IEEE PAMI, volume 20 (1998)
5. H. Schneiderman and T. Kanade, A statistical method for 3D object detection applied to faces and cars, In ICCV (2000)
6. K. Sung and T. Poggio, Example-based learning for view-based face detection, In IEEE PAMI, volume 20, pages 39-51 (1998)
7. K. Tieu and P. Viola, Boosting image retrieval, In ICCV (2000)
Prediction in Stock Market through Mathematical Modelling
Mrinalini Smita
Department of Mathematics, St. Xavier's College, Ranchi, Jharkhand, INDIA
Abstract
"Prediction is a very difficult art, especially when it involves the future" (Niels Bohr, Nobel Laureate physicist). "Forecasting is the process of making statements about events whose actual outcomes (typically) have not yet been observed" (Wikipedia). Words such as "predicting" are also used to refer to forecasting. Forecasting the future is a vital exercise for many stakeholders in diverse industries. Just as farmers would like to know the future rainfall pattern in order to sow their seeds properly and at the right time, so do financial analysts expect to know the future performance of various market stocks to guide the investment options available to them.
It is not possible to forecast the future with complete accuracy; every forecast comes with a margin of error, and that margin widens the deeper into the future one forecasts. When predicting far ahead, variables and their expected influence may change (with social, economic and political change) and new variables may emerge. Errors also arise from the level of inaccuracy of the base information used and from the method used to forecast. This makes the choice of forecasting method pivotal. In many cases forecasting uses quantitative data rather than qualitative data, which depends on the judgment of experts.
There are several forecasting models and methods in practice. The popularity of one forecasting model over another rests largely on its error (risk) metrics. Forecasting of stocks is generally believed to be a very difficult task. Common time-series forecasting models include the Box–Jenkins methods, Holt-Winters exponential smoothing and simple linear regression. The results always favour one of the methods, and this can be ascertained by the use of error metrics.
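As a concrete illustration of comparing forecasting methods through an error metric, the sketch below applies simple (single) exponential smoothing, the simplest member of the Holt-Winters family mentioned above, at several smoothing constants and scores each with the mean absolute percentage error (MAPE). The price series is invented for illustration.

```python
def exp_smooth(series, alpha):
    """One-step-ahead forecasts: f[t+1] = alpha*y[t] + (1-alpha)*f[t]."""
    forecasts = [series[0]]          # seed the first forecast with the first value
    for y in series[:-1]:
        forecasts.append(alpha * y + (1 - alpha) * forecasts[-1])
    return forecasts

def mape(actual, predicted):
    """Mean absolute percentage error, an error metric for ranking models."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

prices = [100, 102, 101, 105, 107, 106, 110]   # hypothetical closing prices
for alpha in (0.2, 0.5, 0.8):
    f = exp_smooth(prices, alpha)
    print(f"alpha={alpha}: MAPE={mape(prices, f):.2f}%")
```

The method (or smoothing constant) with the smallest MAPE would be preferred, which is exactly the "error metric" comparison described above.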
In a time-series model, the past behaviour of a series is examined to infer something about its future behaviour, instead of searching for the effect of one or more variables on the forecast variable. Such models put more emphasis on data analysis to simplify the model.
Different patterns or trends can be seen in time-series data. A time series is influenced by several factors such as random components, seasonal components and cyclic components. The random component may shield the influence of the other components and make it difficult to describe the observed trends or patterns in the data; this influences the performance and accuracy of a time-series model. Therefore, mathematical modelling (the process of developing a method of simulating real-life situations with mathematical equations to forecast their future behaviour) can be the best way to break everything down and predict how something new will play out. A good mathematical model makes it easier to predict how certain plans will play out. The main idea of forecasting techniques is to minimise the difference between actual and predicted values, since this determines the performance and reliability of a model. To arrive at a suitable mathematical equation, we had to use the mathematical model that would fit our needs best.
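A minimal example of "developing a mathematical equation to forecast future behaviour": fitting a least-squares trend line y = a + b·t to a short series and extrapolating one step ahead. The data points are invented for illustration.

```python
# Ordinary least-squares fit of a straight line y = a + b*t to a time series,
# then a one-step-ahead forecast.  The data points are invented for illustration.
def fit_line(ys):
    n = len(ys)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys)) \
        / sum((t - t_mean) ** 2 for t in ts)
    a = y_mean - b * t_mean
    return a, b

series = [10.0, 12.1, 13.9, 16.2, 18.0]
a, b = fit_line(series)
forecast_next = a + b * len(series)   # one-step-ahead forecast
print(round(forecast_next, 2))
```

The fitted equation is the "mathematical model"; minimising the squared differences between actual and predicted values is one way of making the forecast errors small.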
Significance of the Study
The role of research in several fields of applied economics, whether related to business or to the economy, has greatly increased in modern times. This study will help in focusing attention on the increasingly complex nature of business and government, and hence in solving operational problems.
The primary advantage of forecasting is that it provides various stakeholders (a person who has interest in or investment in
something and who is impacted by and cares about how it turns out) with valuable information that can be used to make decisions
about the future.
It will go a long way in helping the managers of financial portfolios to understand and appreciate the underlying factors behind the in-sample forecasting accuracy of stocks in stock exchanges.
It will further boost the confidence of stakeholders in the financial industry to do more business with less risk.
Other beneficiaries of research are investors, directors, regulators and other financial institutions as well as researchers in academia.
This study will enable the public to invest wisely in the stock market.
It aims to make investment a 'social revolution' by spreading awareness of capital investment in the stock market.
It aims to educate laymen to invest in the stock market with minimal risk.
This study will definitely be an important source of guidelines for solving different business, governmental and social problems.
On the whole, it is concluded that mathematical modelling is more useful for prediction in the stock market than time-series models, as the random components in a time series may shield the influence of other components and make it difficult to describe the observed trends or patterns in the data. Through mathematical modelling it is possible to develop mathematical equations to forecast future behaviour in the stock market, including the several factors affecting it.
Keywords: Prediction, stakeholders, time-series models, mathematical model.
Mathematical Model for Deteriorating Inventory Items under Trade Credit
and Inventory Level Dependent Demand Rate
Dhrub Kumar Singh1 and Sahadeo Mahto2
1University Department of Mathematics, Ranchi University, Ranchi-834001, Jharkhand, INDIA
2Associate Professor, Department of Mathematics, Ranchi University, Ranchi-834001, Jharkhand, INDIA
Abstract
This paper deals with the problem of determining the optimal selling price and order quantity simultaneously under an EOQ model for deteriorating items. It is assumed that the demand rate depends not only on the on-display stock level but also on the selling price per unit, and that the amount of shelf/display space is limited. We formulate a mathematical model to capture the extended EOQ models for maximizing profits and derive algorithms to find the optimal solution. Numerical examples are presented to illustrate the models developed, and a sensitivity analysis is reported.
Keywords: Inventory control, pricing, stock-dependent demand, deterioration.
Introduction
In the classical inventory models, the demand rate is regularly assumed to be either constant or time-dependent but independent of
the stock levels. However, practically an increase in shelf space for an item induces more consumers to buy it. This occurs owing to
its visibility, popularity or variety. Conversely, low stocks of certain goods might raise the perception that they are not fresh.
Therefore, it is observed that the demand rate may be influenced by the stock level for certain types of inventory. In recent years, marketing researchers and practitioners have recognized the phenomenon that the demand for some items could be based on the inventory level on display. Levin et al. (1972) pointed out that large piles of consumer goods displayed in a supermarket would attract the customer to buy more. Silver and Peterson (1985) noted that sales at the retail level tend to be proportional to stock displayed. Baker and Urban (1988) established an
EOQ model for a power-form inventory-level-dependent demand pattern. Padmanabhan and Vrat (1990) developed a multi-item
inventory model of deteriorating items with stock-dependent demand under resource constraints and solved by a non-linear goal
programming method. Datta and Pal (1990) presented an inventory model in which the demand rate is dependent on the
instantaneous inventory level until a given inventory level is achieved, after which the demand rate becomes constant. Urban (1992)
relaxed the unnecessary zero ending-inventory at the end of each order cycle as imposed in Datta and Pal (1990). Pal et al. (1993)
extended the model of Baker and Urban (1988) for perishable products that deteriorate at a constant rate. Bar-Lev et al. (1994)
developed an extension of the inventory-level-dependent demand-type EOQ model with random yield. Giri et al. (1996)
generalized Urban’s model for constant deteriorating items. Urban and Baker (1997) further deliberated the EOQ model in which
the demand is a multivariate function of price, time, and level of inventory. Giri and Chaudhuri (1998) expanded the EOQ model to
allow for a nonlinear holding cost. Roy and Maiti (1998) developed multi-item inventory models of deteriorating items with stock-dependent demand in a fuzzy environment. Urban (1998) generalized and integrated existing inventory-control models, product assortment models, and shelf-space allocation models. Datta and Paul (2001) analyzed a multi-period EOQ model with stock-dependent and price-sensitive demand rate. Kar et al. (2001) proposed an inventory model for deteriorating items sold from two
shops, under single management dealing with limitations on investment and total floorspace area. Other papers related to this area
are Pal et al. (1993), Gerchak and Wang (1994), Padmanabhan and Vrat (1995), Ray and Chaudhuri (1997), Ray et al. (1998),
Hwang and Hahn (2000), Chang (2004), and others.
As shown in Levin et al. (1972), "large piles of consumer goods displayed in a supermarket will lead customers to buy more. Yet, too many goods piled up in everyone's way leave a negative impression on buyers and employees alike." Hence, in the present paper, we first consider a maximum inventory level in the model to reflect the fact that most retail outlets have limited shelf space and to avoid leaving a negative impression on customers because of goods excessively piled up in everyone's way. Since the demand rate is not only influenced by the stock level but also associated with the selling price, we take the selling price into account and establish an
EOQ model in which the demand rate is a function of the on-display stock level and the selling price. In Section 2, we provide the
fundamental assumptions for the proposed EOQ model and the notations used throughout this paper. In Section 3, we set up a
mathematical model. The properties of the optimal solution are discussed as well as its solution algorithm and numerical examples
are presented. An easy-to-use algorithm is developed to determine the optimal cycle time, economic order quantity and ordering
point. Finally, we draw the conclusions and address possible future work in Section 5.
Assumptions and Notations
A single-item deterministic inventory model for deteriorating items with price- and stock-dependent demand rate is presented under
the following assumptions and notations.
1. Shortages are not allowed, to avoid lost sales.
2. The maximum allowable number of displayed stocks is B, to avoid a negative impression and due to limited shelf/display space.
3. The replenishment rate is infinite and the lead time is zero.
4. The fixed purchasing cost K per order is known and constant.
5. Both the purchase cost c per unit and the holding cost h per unit per unit time are known and constant. The constant selling price p per unit is a decision variable within the replenishment cycle.
6. The constant deterioration rate θ is only applied to on-hand inventory. There are two possible cases for the cost of a deteriorated item s: (1) if there is a salvage value, that value is negative or zero; and (2) if there is a disposal cost, that value is positive. Note that c > s (or −s).
7. All replenishment cycles are identical. Consequently, only a typical planning cycle of length T is considered (i.e., the planning horizon is [0, T]).
8. The demand rate is deterministic and given by D(I(t), p) = α(p) + βI(t), where I(t) is the inventory level at time t, β is a non-negative constant, and α(p) is a non-negative function of the selling price p.
9. As stated in Urban (1992), "it may be desirable to order large quantities, resulting in stock remaining at the end of the cycle, due to the potential profits resulting from the increased demand." Consequently, the initial and ending inventory levels are not restricted to be zero; the ending inventory level is denoted by y.
The order quantity Q enters into inventory at time t = 0; consequently, I(0) = Q + y. During the time interval [0, T], the inventory is depleted by the combination of demand and deterioration. At time T, the inventory level falls to y, i.e., I(T) = y; the ending inventory level y can be called the ordering point. The mathematical problem here is to determine the optimal values of T, p and y such that the average net profit in a replenishment cycle is maximized.
Mathematical Model and Analysis
At time t = 0, the inventory level I(t) reaches the top Ī (with Ī ≤ B) due to ordering the economic order quantity Q. The inventory level then gradually depletes to y at the end of the cycle time t = T, mainly through demand and partly through deterioration. A graphical representation of this inventory system (inventory level versus time) is depicted in Figure 1.

Figure-1: Graphical Representation of the Inventory System

The differential equation expressing the inventory level at time t can be written as follows:
dI(t)/dt = −α(p) − (β + θ) I(t),  0 ≤ t ≤ T,   (1)
with the boundary condition I(T) = y. Accordingly, the solution of Equation (1) is given by
I(t) = [α(p)/(β + θ)] [e^{(β+θ)(T−t)} − 1] + y e^{(β+θ)(T−t)},  0 ≤ t ≤ T.   (2)
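The inventory dynamics described here (demand α(p) + βI(t) plus deterioration at rate θ, with boundary condition I(T) = y) admit a closed-form trajectory. The sketch below cross-checks that closed form against a direct Euler integration of dI/dt = −α(p) − (β + θ)I; all parameter values are illustrative, not taken from the paper.

```python
import math

# Cross-check of the inventory trajectory: the cycle-level dynamics are
# dI/dt = -alpha - (beta + theta) * I   (demand alpha + beta*I, deterioration theta*I)
# with boundary condition I(T) = y.  We compare the closed-form solution
# against a forward Euler integration.  All parameter values are illustrative.
alpha, beta, theta = 500.0, 0.2, 0.1   # demand intercept, stock sensitivity, deterioration rate
T, y = 1.0, 20.0                       # cycle length and ending inventory (ordering point)
k = beta + theta

def I_closed(t):
    """Closed-form I(t) = (alpha/k)*(e^{k(T-t)} - 1) + y*e^{k(T-t)}."""
    return (alpha / k) * (math.exp(k * (T - t)) - 1.0) + y * math.exp(k * (T - t))

# Euler integration from t = 0, starting at the closed-form initial level I(0)
n = 100_000
dt = T / n
I = I_closed(0.0)
for _ in range(n):
    I += dt * (-alpha - k * I)

print(abs(I - y) < 1e-2)   # the trajectory ends (approximately) at the boundary value y
```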
Applying (2), the total profit TP over the period [0, T] (sales revenue less ordering, purchasing, holding and deterioration costs) is
TP = p ∫₀ᵀ [α(p) + βI(t)] dt − K − cQ − h ∫₀ᵀ I(t) dt − sθ ∫₀ᵀ I(t) dt.   (3)
Hence, the average profit per unit time is
AP = TP / T.   (4)
Necessary conditions for an optimal solution
Taking the first derivative of AP as defined in (4) with respect to T yields an expression whose sign is governed by two quantities: the benefit received from a unit of inventory, and the total cost (i.e., holding and deterioration costs) per unit of inventory; the Appendix shows that the remaining factor of the derivative is greater than zero. Based on the relative values of these two quantities, two distinct cases for finding the optimal T* are discussed as follows.
Case 3.1 (Building up inventory is profitable)
In this case the benefit received from a unit of inventory is larger than the total cost (i.e., holding and deterioration costs) due to a unit of inventory; that is, it is profitable to build up inventory. Using the Appendix,
it can be shown that AP is an increasing function of T. Therefore, we should pile up inventory to the maximum allowable number B of stocks displayed in a supermarket without leaving a negative impression on customers. So, I(0) = B. From I(0) = B and (2), we obtain
T = [1/(β + θ)] ln{ [α(p) + (β + θ)B] / [α(p) + (β + θ)y] },   (6)
which implies that T is a function of p and y.
Substituting (6) into (4), AP becomes a function of y and p. The necessary conditions for AP to be maximized are
∂AP/∂y = 0   (7)
and
∂AP/∂p = 0,   (8)
where T is defined as in (6). From (7) and (8), the optimal values p* and y* are obtained; substituting p* and y* into (6), the optimal value T* is solved. Since AP(y, p) is a complicated function, it is not possible to verify the sufficient conditions analytically. However, the optimal solution can be obtained through numerical examples. Because building up inventory is profitable and AP is a continuous function of y and p over the compact set [0, B] × [0, L], where L is a sufficiently large number, AP has a maximum value. It is clear that AP is not maximized at y = 0 (or B) or at p = 0 (or L). Therefore, the optimal solution is an interior point and must satisfy (7) and (8). If the solution of (7) and (8) is unique, then it is the optimal solution; otherwise, we substitute the candidates into (4) and choose the one with the largest value.
Case 3.2 (Building up inventory is not profitable)
First taking the partial derivative of AP with respect to y, we find that AP is decreasing in y, so the optimal ordering point is y* = 0. Substituting y* = 0 into (4), AP becomes a function of p and T. The necessary conditions for AP to be maximized are therefore
∂AP/∂T = 0   (11)
and
∂AP/∂p = 0.   (12)
From (11) and (12), we can obtain the values of T and p. Substitute y* = 0, T and p into (2) and check whether I(0) < B. If I(0) < B, then the optimal values are T* = T, p* = p and Q* = I(0). If I(0) ≥ B, then set I(0) = B and obtain
T = [1/(β + θ)] ln[1 + (β + θ)B / α(p)],   (13)
which is a function of p. Substituting (13) into (4), AP depends only on p, and the necessary condition for AP to be maximized is
dAP/dp = 0.   (14)
The optimal value p* is determined by (14). Substituting p* into (13), the optimal value T* is solved.
Algorithm
The algorithm for determining an optimal selling price p*, optimal ordering point y*, optimal cycle time T* and optimal economic order quantity Q* is summarized as follows:
Step 1. Solve (7) and (8) to get the values of p and y.
Step 2. If building up inventory is profitable (Case 3.1), then p* = p, y* = y, and the optimal value T* is obtained by substituting p and y into (6).
Step 3. If building up inventory is not profitable (Case 3.2), then re-set y* = 0. Solve (11) and (12) to get the values of T and p, and substitute y* = 0, p and T into (2) to find I(0). If I(0) < B, then the optimal values are T* = T, p* = p and Q* = I(0); stop. Otherwise, go to Step 4.
Step 4. If the simultaneous solutions T and p of (11) and (12) make I(0) > B, then the optimal value p* is determined by (14), T* is obtained by substituting p* into (13), and Q* = I(0) is obtained by substituting p* and T* into (2).
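When the ending inventory is y* = 0 and the shelf-space constraint I(0) = B binds (Step 4), the cycle time follows in closed form from the inventory trajectory. The sketch below illustrates this under the constant-elasticity demand α(p) = x·p^(−r) used in the numerical examples; the closed-form expression and all parameter values here are reconstructions and illustrative assumptions, not figures from the paper.

```python
import math

# Cycle time T that fills shelf space B when the ending inventory is y* = 0:
# from I(0) = B one gets T = ln(1 + (beta + theta)*B / alpha(p)) / (beta + theta).
# alpha(p) = x * p**(-r) is the constant-elasticity demand of the examples;
# the parameter values below are illustrative.
x, r = 1000.0, 2.8
beta, theta = 0.15, 0.1
B = 300.0

def alpha(p):
    # constant-elasticity (iso-elastic) demand in the selling price p
    return x * p ** (-r)

def cycle_time(p):
    k = beta + theta
    return math.log(1 + k * B / alpha(p)) / k

print(round(cycle_time(1.7), 4))
```

A higher selling price lowers the base demand α(p), so the shelf stock B takes longer to deplete and the cycle time lengthens.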
Numerical examples
To illustrate the proposed model, we provide numerical examples here. For simplicity, we set α(p) = x p^(−r), where x and r are non-negative constants; that is, we assume that demand is a constant-elasticity function of the price.
Example 3.1. Let K = $10 per cycle, x = 1000 units per unit time, h = $0.5 per unit per unit time, s = $0 per unit and r = 2.5. Following the proposed algorithm, the optimal solution can be obtained. Since (4) and (6)-(9) are nonlinear, they are extremely difficult to solve analytically, so we used Maple 9.5 software to solve them. The computational results for the optimal values of y*, Q*, p*, T* and AP* with respect to different values of β, B and c are shown in Table 3.1.
Table 3.1: Computational results for the case where building up inventory is profitable

β      B    c    y*       Q*        p*        T*        AP*
0.15   100  1.5  29.7671  70.2329   6.036963  2.995380  53.8080
0.20   100  1.5  27.5915  72.4085   5.057843  2.228339  65.6087
0.25   100  1.5  21.6955  78.3045   4.41015   1.874682  74.6548
0.30   100  1.5  12.9392  87.0608   3.916335  1.700138  81.5477
0.35   100  1.5  1.5681   98.4319   3.542865  1.626419  86.6871
0.20   100  1.5  27.5915  72.4085   5.057843  2.228339  65.6087
0.20   110  1.5  25.7399  84.2601   4.916473  2.437927  66.5922
0.20   130  1.5  19.8247  110.1753  4.727722  2.927107  67.8135
0.20   150  1.5  12.1859  137.8141  4.618470  3.478172  68.6228
0.20   170  1.5  3.9578   166.0422  4.552949  4.059602  69.1629
0.20   100  1.1  47.2880  52.7120   5.192483  1.538303  79.0717
0.20   100  1.3  38.7618  61.2382   5.099564  1.811917  72.2547
0.20   100  1.5  27.5915  72.4085   5.057843  2.228339  65.6087
0.20   100  1.7  14.7100  85.2900   5.094514  2.827599  59.3269
0.20   100  1.9  2.3596   97.6404   5.209061  3.598091  53.5902
Based on the computational results shown in Table 3.1, we obtain the following managerial insights when building up inventory is profitable: (1) A higher value of β causes higher values of Q* and AP*, but lower values of y*, p* and T*. That is, an increase in the demand rate results in increases in the optimal economic order quantity and average profit, but decreases in the optimal ordering point, selling price and cycle time. (2) A higher value of B causes higher values of Q*, T* and AP*, but lower values of y* and p*. That is, an increase in shelf space results in increases in the optimal economic order quantity, cycle time and average profit, but decreases in the optimal ordering point and selling price. (3) A higher value of c causes higher values of Q* and T*, but lower values of y* and AP*. That is, an increase in the purchase cost results in increases in the optimal economic order quantity and cycle time, but decreases in the optimal ordering point and average profit.
Example 3.2. Let K = $10 per cycle, x = 1000 units per unit time, h = $0.2 per unit per unit time, c = $1.0 per unit, s = $0 per unit, r = 2.8 and B = 300. From Step 3 of the proposed algorithm, we obtain the optimal ordering point y* = 0. Using Maple 9.5 software, we solve (2), (4), (11) and (12). The computational results for the optimal values of Q, p, T and AP with respect to different values of β are shown in Table 3.2.
Table 3.2: Computational results for the case where building up inventory is not profitable

β      Q*        p*        T*        AP*
0.10   162.6161  1.685130  0.666568  129.4149
0.12   169.2624  1.689956  0.693068  130.4691
0.15   181.2873  1.698773  0.741555  132.1537
0.17   191.2537  1.706166  0.782279  133.3611
0.20   211.0556  1.721085  0.864684  135.3406
Table 3.2 shows that a higher value of β causes higher values of Q*, p*, T* and AP*. That is, when building up inventory is not profitable, an increase in the demand rate results in increases in the optimal economic order quantity, selling price, cycle time and average profit.
Conclusion
This article presents inventory models for deteriorating items where the demand is a function of the selling price and the stock on display. We also impose a limited maximum amount of stock displayed in a supermarket, so as not to leave a negative impression on customers. Under these conditions, a model has been proposed for maximizing profits. The properties of the optimal solution are discussed, and its solution algorithm and numerical examples are presented to illustrate the model.
Furthermore, we uncover some intuitively reasonable managerial results. For example, if the benefit received from a unit of inventory is larger than the total cost per unit of inventory, then building up inventory is profitable and the beginning inventory should reach the maximum allowable level. Otherwise, building up inventory is not profitable and the ending inventory should be zero. The numerical examples also indicate that the effect of the stock-dependent selling rate on the system behaviour is significant and hence should not be ignored in developing inventory models. The sensitivity analysis shows the effects of the parameters on the decision variables. The proposed models can be further enriched by incorporating inflation, quantity discounts, trade credits, etc. Besides, it would be interesting to extend the proposed model to multi-item inventory systems based on limited shelf space, to consider a demand rate that is a polynomial function of on-hand inventory, or to extend the deterministic demand function to stochastic fluctuating demand patterns.
Appendix
If the benefit received from a unit of inventory exceeds the total cost per unit of inventory, then AP is an increasing function of T; the proof follows from expressions (A.1)-(A.3).
References
1. Baker R.C. and Urban T.L., A deterministic inventory system with an inventory-level-dependent demand rate, Journal of the Operational Research Society, 39, 823-831 (1988)
2. Bar-Lev S.K., Parlar M. and Perry D., On the EOQ model with inventory-level-dependent demand rate and random yield, Operations Research Letters, 16, 167-176 (1994)
3. Chang C.T., Inventory models with stock-dependent demand and nonlinear holding costs for deteriorating items, Asia-Pacific Journal of Operational Research, 21, 435-446 (2004)
4. Datta T.K. and Pal A.K., A note on an inventory model with inventory-level-dependent demand rate, Journal of the Operational Research Society, 41, 971-975 (1990)
5. Datta T.K. and Paul K., An inventory system with stock-dependent, price-sensitive demand rate, Production Planning and Control, 12, 13-20 (2001)
6. Gerchak Y. and Wang Y., Periodic-review inventory models with inventory-level dependent demand, Naval Research Logistics, 41, 99-116 (1994)
7. Giri B.C. and Chaudhuri K.S., Deterministic models of perishable inventory with stock-dependent demand rate and nonlinear holding cost, European Journal of Operational Research, 105, 467-474 (1998)
8. Giri B.C., Pal S., Goswami A. and Chaudhuri K.S., An inventory model for deteriorating items with stock-dependent demand rate, European Journal of Operational Research, 95, 604-610 (1996)
9. Hwang H. and Hahn K.H., An optimal procurement policy for items with an inventory level-dependent demand rate and fixed lifetime, European Journal of Operational Research, 127, 537-545 (2000)
10. Kar S., Bhunia A.K. and Maiti M., Inventory of multi-deteriorating items sold from two shops under single management with constraints on space and investment, Computers and Operations Research, 28, 1203-1221 (2001)
11. Levin R.I., McLaughlin C.P., Lamone R.P. and Kottas J.F., Productions/Operations Management: Contemporary Policy for Managing Operating Systems, McGraw-Hill, New York, 373 (1972)
12. Padmanabhan G. and Vrat P., Analysis of multi-item inventory systems under resource constraints: A non-linear goal programming approach, Engineering Costs and Production Economics, 20, 121-127 (1990)
13. Padmanabhan G. and Vrat P., EOQ models for perishable items under stock dependent selling rate, European Journal of Operational Research, 86, 281-292 (1995)
14. Pal S., Goswami A. and Chaudhuri K.S., A deterministic inventory model for deteriorating items with stock-dependent demand rate, International Journal of Production Economics, 32, 291-299 (1993)
15. Ray J. and Chaudhuri K.S., An EOQ model with stock-dependent demand, shortage, inflation and time discounting, International Journal of Production Economics, 53, 171-180 (1997)
16. Ray J., Goswami A. and Chaudhuri K.S., On an inventory model with two levels of storage and stock-dependent demand rate, International Journal of Systems Science, 29, 249-254 (1998)
17. Roy T.K. and Maiti M., Multi-objective inventory models of deteriorating items with some constraints in a fuzzy environment, Computers and Operations Research, 25, 1085-1095 (1998)
18. Silver E.A. and Peterson R., Decision Systems for Inventory Management and Production Planning, 2nd edition, Wiley, New York (1985)
19. Urban T.L., An inventory model with an inventory-level-dependent demand rate and relaxed terminal conditions, Journal of the Operational Research Society, 43, 721-724 (1992)
20. Urban T.L. and Baker R.C., Optimal ordering and pricing policies in a single-period environment with multivariate demand and markdowns, European Journal of Operational Research, 103, 573-583 (1997)
21. Urban T.L., An inventory-theoretic approach to product assortment and shelf-space allocation, Journal of Retailing, 74, 15-35 (1998)
Theoretical Study of Spin-Hamiltonian Parameters for the Four-Coordinated
Nickel (II) Ion in Malonato Complexes
Mitesh Chakraborty1, Vineet Kumar Rai2 and Vishal Mishra3
1Department of Physics, St. Xavier's College, Ranchi, INDIA
2Laser and Spectroscopy Laboratory, Department of Applied Physics, Indian School of Mines, Dhanbad, INDIA
3Department of Physics and Electronics, Rajdhani College, University of Delhi, New Delhi, INDIA
Abstract
In the present paper we have evaluated the spin-Hamiltonian g-tensor and the zero field splitting (ZFS) tensor by computational methods. The ab-initio quantum chemistry program developed by Neese et al. has been employed. All calculations are based on four-coordinated tetragonal symmetry with distortion. The DFT calculations of the ZFS and g-tensor employed the open-shell UKS formalism and Ahlrichs' valence triple-ζ basis set (TZV) in conjunction with the TZV/J auxiliary basis set.
Keywords: electronic g-Tensor, zero field splitting tensor, density functional theory, z-matrix
Introduction
The zero field splitting (ZFS) parameters are important factors for describing the anisotropy of a complex system and for studying the local site symmetry of a dopant ion in a host. Owing to the spin-orbit interaction, the charge distribution is spheroidal rather than spherical. The asymmetry is tied to the direction of the spin, so that rotating the spin direction relative to the crystal axes changes the exchange energy and also affects the electrostatic interaction energy of the charge distribution on the pair of atoms, thereby giving rise to the ZFS [1].
Both orbital and spin motion contribute to the zero field splitting (ZFS) parameters. The ZFS can also arise from the admixture of higher excited states into the ground state. Because of its high sensitivity to the local environment, the study of ZFS parameters has become a subject of active interest among researchers. In EPR spectroscopy, the zero field splitting of high-spin paramagnets arises from the magnetic dipolar interaction between the multiple itinerant unpaired electrons in the doped system [2]. The axial and rhombic ZFS parameters are regarded as fundamental to spin-Hamiltonian theory [3]. The two most important parameters of a magnetic system are the ZFS and electronic g-tensors [4-6].
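For reference, the standard form of the spin Hamiltonian containing these two quantities (with the conventional axial parameter D and rhombic parameter E) can be written as

```latex
\mathcal{H} \;=\; \mu_B\,\mathbf{B}\cdot\mathbf{g}\cdot\mathbf{S}
\;+\; D\!\left[S_z^{2}-\tfrac{1}{3}S(S+1)\right]
\;+\; E\!\left(S_x^{2}-S_y^{2}\right),
```

where D and E follow from the principal values of the ZFS tensor as D = D_zz − (D_xx + D_yy)/2 and E = (D_xx − D_yy)/2.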
The g-tensor calculations are mostly based on first-principles evaluation [7]. The ZFS strongly breaks the degeneracy and is therefore visible in Electron Paramagnetic Resonance (EPR) spectra [8, 9]. The ZFS parameters of a dopant ion differ from one host to another, and hence carry structural information. ORCA (Quantum Chemistry Program package), developed by Prof. Dr. Frank Neese et al., is an ab initio quantum chemistry program package that contains modern electronic structure methods including density functional theory, many-body perturbation theory, coupled cluster theories, and multireference and semiempirical methods. Its main field of application is larger molecules, transition metal complexes, and their spectroscopic and optical properties [10]. ORCA uses standard Gaussian basis functions. Owing to its user-friendly style, ORCA is considered a helpful tool for computational theorists and can be used to develop the full information content of experimental data with the help of calculations.
Computational Details
Density functional theory (DFT) calculations on the complex were performed using the ORCA version 3.0.1 software package developed by Neese et al. [10]. The atomic coordinates were taken from an empirical study [11]. The dopant Manganese (II) ion of the original host was replaced by Nickel (II). Owing to the difference in the ionic radii of Manganese (II) and Nickel (II), the bond lengths and angles in the nearest-neighbor coordinated tetrahedral symmetry were varied. The DFT calculations of the ZFS and g tensors employed the open-shell spin-unrestricted Kohn-Sham (UKS) formalism with Ahlrichs’ valence triple-ζ basis set (TZV) in conjunction with the TZV/J auxiliary basis sets [12-15].
The COSMO model is used for dielectric modeling of the environment. Further, the zero order regular approximation (ZORA) is used for the evaluation of the ZFS and g tensors.
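As an illustration, a minimal ORCA input implementing the recipe described above might look as follows. This is a sketch only: the exchange-correlation functional is not named in the paper (B3LYP appears here purely as a placeholder), the charge/multiplicity line assumes a high-spin Ni(II) d8 triplet, and keyword spellings should be checked against the ORCA 3.0 manual.

```
! UKS B3LYP TZV TZV/J ZORA COSMO
%eprnmr
  gtensor 1        # request the electronic g-tensor
  dtensor so       # spin-orbit contribution to the ZFS (D) tensor
end
* xyz 2 3
  Ni 0.00 0.00 0.00
  ...              # oxygen coordinates from the Z-matrix of Table-1
*
```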
The Z-matrix used in the computational DFT studies is given below.
Table-1
Z-matrix of the tetrahedrally coordinated dopant Nickel (II) in the host
INTERNAL COORDINATES (bond lengths in angstrom, angles in degrees)
Ni   0  0  0   0.000000     0.000     0.000
O    1  0  0   2.690000     0.000     0.000
O    1  2  0   2.510000   180.000     0.000
O    1  2  3   2.700000    90.000     0.000
O    1  2  4   2.500000    90.000   180.000
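The internal coordinates above can be checked by converting them to Cartesian positions. The snippet below is a hand-worked conversion of this particular Z-matrix (Ni at the origin, distances in angstrom), not a general Z-matrix-to-Cartesian routine; it simply lets the bond lengths and O-Ni-O angles of Table-1 be recomputed.

```python
import math

# Cartesian positions implied by the internal coordinates of Table-1
NI = (0.0, 0.0, 0.0)
O1 = (2.69, 0.0, 0.0)    # r = 2.69 from Ni (reference direction)
O2 = (-2.51, 0.0, 0.0)   # r = 2.51, O2-Ni-O1 angle = 180 deg
O3 = (0.0, 2.70, 0.0)    # r = 2.70, O3-Ni-O1 angle = 90 deg, dihedral 0
O4 = (0.0, -2.50, 0.0)   # r = 2.50, O4-Ni-O1 angle = 90 deg, dihedral 180

def dist(a, b):
    """Euclidean distance between two points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def angle_deg(a, center, b):
    """a-center-b angle in degrees, clamped against rounding error."""
    u = [p - c for p, c in zip(a, center)]
    v = [p - c for p, c in zip(b, center)]
    cos = sum(p * q for p, q in zip(u, v)) / (dist(a, center) * dist(b, center))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))
```

Recomputing dist(NI, O1) reproduces the 2.69 angstrom bond length, and angle_deg(O1, NI, O2) recovers the 180-degree trans angle of the table.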
Table-2
The table shows the self-consistent field (SCF) iterations of the open-shell ab initio calculation. The SCF converged after 104 cycles.
ITER      Energy              Delta-E          Max-DP      RMS-DP      [F,P]      Damp
0 -1827.7316211024 0.000000000000 0.13378505 0.00308155 0.6547954 0.7000
1 -1828.5710308437 -0.839409741237 0.11169661 0.00163221 0.4066678 0.7000
2 -1828.7164462412 -0.145415397472 0.18580124 0.00134262 0.2851812 0.7000
3 -1828.8874989811 -0.171052739930 0.06097549 0.00067418 0.1961800 0.7000
4 -1829.0207544413 -0.133255460233 0.02702070 0.00037296 0.1350557 0.7000
5 -1829.1112894747 -0.090535033374 0.05641224 0.00088134 0.0953056 0.0000
6 -1829.3222423953 -0.210952920584 0.02637227 0.00023256 0.0118450 0.0000
7 -1829.3233301984 -0.001087803131 0.01551630 0.00012618 0.0143277 0.0000
8 -1829.3242352662 -0.000905067835 0.01187378 0.00009101 0.0029494 0.0000
9 -1829.3245842284 -0.000348962206 0.00972375 0.00007455 0.0022174 0.0000
10 -1829.3247938656 -0.000209637180 0.00807931 0.00006166 0.0026669 0.0000
11 -1829.3249477567 -0.000153891040 0.00563116 0.00004520 0.0011556 0.0000
12 -1829.3250417455 -0.000093988803 0.00503124 0.00003981 0.0011682 0.0000
13 -1829.3251069490 -0.000065203554 0.00395263 0.00003546 0.0011346 0.0000
14 -1829.3251585603 -0.000051611230 0.29595680 0.00240100 0.0005092 0.0000
15 -1829.0957971877 0.229361372561 0.00002037 0.00000036 0.4042369 0.7000
16 -1829.0957480312 0.000049156454 0.00013866 0.00000261 0.4042659 0.7000
17 -1829.0955259185 0.000222112760 0.00019330 0.00000345 0.4044436 0.7000
18 -1829.0952719617 0.000253956766 0.00025236 0.00000352 0.4046341 0.7000
19 -1829.0950358298 0.000236131943 0.11389973 0.00138515 0.4047896 0.7000
20 -1829.2023725033 -0.107336673572 0.03901936 0.00048045 0.2095139 0.7000
21 -1829.2254769150 -0.023104411631 0.03024494 0.00050585 0.1455774 0.7000
22 -1829.2561264232 -0.030649508272 0.05020508 0.00068281 0.0761691 0.0000
23 -1829.2604874135 -0.004360990270 0.01563454 0.00012010 0.2223787 0.7000
24 -1829.2777178865 -0.017230472992 0.02127906 0.00015977 0.1883793 0.7000
25 -1829.2955084920 -0.017790605502 0.02104471 0.00016004 0.1495359 0.7000
26 -1829.3093172433 -0.013808751277 0.01902525 0.00015208 0.1159732 0.7000
27 -1829.3202671178 -0.010949874535 0.05760737 0.00057577 0.0791928 0.0000
28 -1829.3318441113 -0.011576993443 0.01233685 0.00017706 0.0519889 0.0000
29 -1829.3335837834 -0.001739672120 0.00395263 0.00006007 0.0465996 0.0000
30 -1829.3352107758 -0.001626992400 0.00503479 0.00005162 0.0245803 0.0000
31 -1829.3359986741 -0.000787898362 0.00462537 0.00004196 0.0076423 0.0000
32 -1829.3362352514 -0.000236577276 0.00408997 0.00003688 0.0026674 0.0000
33 -1829.3363141772 -0.000078925739 0.00310360 0.00002893 0.0055805 0.0000
34 -1829.3363675030 -0.000053325818 0.00267730 0.00002539 0.0062893 0.0000
35 -1829.3364069767 -0.000039473743 0.00238852 0.00002196 0.0069301 0.0000
36 -1829.3364232152 -0.000016238494 0.00203833 0.00001948 0.0078664 0.0000
37 -1829.3363467022 0.000076512998 0.00132680 0.00001670 0.0108871 0.0000
38 -1829.3363712001 -0.000024497887 0.00125596 0.00002053 0.0082908 0.0000
39 -1829.3364885649 -0.000117364827 0.00172080 0.00002888 0.0080304 0.0000
40 -1829.3366000906 -0.000111525635 0.00220285 0.00003255 0.0016829 0.0000
41 -1829.3365844643 0.000015626230 0.00150708 0.00001818 0.0067642 0.0000
42 -1829.3366444523 -0.000059987922 0.00164436 0.00001672 0.0031723 0.0000
43 -1829.3366885620 -0.000044109720 0.00176584 0.00001675 0.0013417 0.0000
44 -1829.3367182871 -0.000029725117 0.01629190 0.00022680 0.0007841 0.0000
45 -1829.3367888384 -0.000070551325 0.00009746 0.00000191 0.0101076 0.0000
46 -1829.3367721872 0.000016651231 0.00008841 0.00000136 0.0106486 0.0000
47 -1829.3367606857 0.000011501470 0.00006010 0.00000096 0.0109879 0.0000
48 -1829.3367546871 0.000005998630 0.00005743 0.00000085 0.0111406 0.0000
49 -1829.3367487173 0.000005969755 0.00541460 0.00007628 0.0113600 0.0000
50 -1829.3364901707 0.000258546671 0.00129214 0.00002751 0.0223978 0.0000
51 -1829.3367692949 -0.000279124275 0.00105905 0.00002096 0.0116185 0.0000
52 -1829.3369298206 -0.000160525663 0.00132553 0.00001794 0.0059637 0.0000
53 -1829.3369909211 -0.000061100508 0.00089752 0.00001176 0.0013329 0.0000
54 -1829.3370041903 -0.000013269205 0.01035550 0.00011268 0.0006102 0.0000
55 -1829.3370077316 -0.000003541320 0.00001109 0.00000017 0.0066382 0.0000
56 -1829.3370071505 0.000000581153 0.00000811 0.00000013 0.0066742 0.0000
57 -1829.3370065253 0.000000625228 0.00001301 0.00000026 0.0067072 0.0000
58 -1829.3370064385 0.000000086713 0.00019093 0.00000349 0.0067412 0.0000
59 -1829.3370099085 -0.000003469978 0.00473097 0.00006816 0.0065288 0.0000
60 -1829.3364663129 0.000543595593 0.01414064 0.00021400 0.0248646 0.0000
61 -1829.3301438798 0.006322433101 0.00481759 0.00008343 0.0862524 0.0000
62 -1829.3351349163 -0.004991036508 0.00250263 0.00004246 0.0380003 0.0000
63 -1829.3363825350 -0.001247618686 0.00211823 0.00003001 0.0214049 0.0000
64 -1829.3368517447 -0.000469209712 0.00198063 0.00002774 0.0117594 0.0000
65 -1829.3370492523 -0.000197507602 0.00129746 0.00001755 0.0028529 0.0000
66 -1829.3370720648 -0.000022812424 0.00067114 0.00001150 0.0013330 0.0000
67 -1829.3370751998 -0.000003135009 0.00054549 0.00000735 0.0023419 0.0000
68 -1829.3370765781 -0.000001378308 0.00049611 0.00000561 0.0024687 0.0000
69 -1829.3370772548 -0.000000676707 0.00044496 0.00000454 0.0024612 0.0000
70 -1829.3370761071 0.000001147697 0.00038923 0.00000970 0.0028250 0.0000
71 -1829.3370790854 -0.000002978315 0.00061799 0.00001514 0.0019298 0.0000
72 -1829.3370825844 -0.000003498968 0.00039713 0.00000600 0.0027514 0.0000
73 -1829.3370909726 -0.000008388253 0.00042337 0.00000459 0.0015866 0.0000
74 -1829.3370951945 -0.000004221901 0.00515626 0.00005508 0.0006818 0.0000
75 -1829.3371045881 -0.000009393618 0.00008599 0.00000148 0.0018357 0.0000
76 -1829.3371047299 -0.000000141806 0.00007736 0.00000114 0.0020736 0.0000
77 -1829.3371062843 -0.000001554388 0.00007943 0.00000112 0.0018086 0.0000
78 -1829.3371079365 -0.000001652157 0.00008825 0.00000103 0.0016491 0.0000
79 -1829.3371090461 -0.000001109602 0.00043911 0.00000782 0.0015179 0.0000
80 -1829.3371190205 -0.000009974391 0.00059791 0.00001013 0.0012111 0.0000
81 -1829.3371054159 0.000013604575 0.00026189 0.00000506 0.0037932 0.0000
82 -1829.3371176862 -0.000012270313 0.00023453 0.00000337 0.0012374 0.0000
83 -1829.3371214765 -0.000003790236 0.00274034 0.00002851 0.0003335 0.0000
84 -1829.3371231765 -0.000001700003 0.00001645 0.00000040 0.0014652 0.0000
85 -1829.3371236418 -0.000000465343 0.00005020 0.00000101 0.0013839 0.0000
86 -1829.3371241503 -0.000000508465 0.00005940 0.00000093 0.0011872 0.0000
87 -1829.3371246844 -0.000000534114 0.00006525 0.00000082 0.0011054 0.0000
88 -1829.3371251177 -0.000000433338 0.00236869 0.00002635 0.0009512 0.0000
89 -1829.3371188260 0.000006291702 0.00014582 0.00000269 0.0021661 0.0000
90 -1829.3371241947 -0.000005368703 0.00008367 0.00000183 0.0017181 0.0000
91 -1829.3371270150 -0.000002820278 0.00011726 0.00000173 0.0013895 0.0000
92 -1829.3371296066 -0.000002591602 0.00085960 0.00001366 0.0007993 0.0000
93 -1829.3371294235 0.000000183110 0.00005655 0.00000087 0.0012582 0.0000
94 -1829.3371299165 -0.000000493004 0.00006900 0.00000128 0.0011936 0.0000
95 -1829.3371309227 -0.000001006226 0.00078234 0.00000879 0.0008757 0.0000
96 -1829.3371325696 -0.000001646905 0.00032107 0.00000358 0.0002968 0.0000
97 -1829.3371332802 -0.000000710538 0.00058458 0.00000616 0.0002282 0.0000
98 -1829.3371335114 -0.000000231197 0.00019449 0.00000337 0.0003052 0.0000
99 -1829.3371328022 0.000000709172 0.00083510 0.00000809 0.0003575 0.0000
100 -1829.3371334411 -0.000000638881 0.00022095 0.00000406 0.0001908 0.0000
101 -1829.3371337795 -0.000000338398 0.00038685 0.00000439 0.0006104 0.0000
102 -1829.3371317807 0.000001998761 0.00021765 0.00000248 0.0006292 0.0000
103 -1829.3371336950 -0.000001914257 0.00013028 0.00000193 0.0001576 0.0000
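Iteration records in the layout above can be pulled out programmatically; the helper below is a small sketch that assumes the whitespace-separated column order of Table-2 (ITER, Energy, Delta-E, Max-DP, RMS-DP, [F,P], Damp).

```python
def parse_scf_line(line):
    """Parse one SCF iteration line into its leading numeric columns."""
    fields = line.split()
    return {
        "iter": int(fields[0]),
        "energy": float(fields[1]),    # total energy, Hartree
        "delta_e": float(fields[2]),   # energy change vs. previous cycle
        "max_dp": float(fields[3]),    # largest density-matrix change
    }

# e.g. the final iteration printed above
row = parse_scf_line(
    "103 -1829.3371336950 -0.000001914257 0.00013028 0.00000193 0.0001576 0.0000"
)
```

Mapping this over all rows gives the energy-versus-iteration series that Figure-1 plots.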
Figure-1
The figure depicts the SCF convergence: energy (a.u.) versus iteration step N.
Table-3
Orbital energies
SPIN UP ORBITALS
No.  Occupancy  E(Eh)  E(eV)
0 1.0000 -305.275601 -8306.9714
1 1.0000 -36.029138 -980.4027
2 1.0000 -30.891867 -840.6104
3 1.0000 -30.887779 -840.4992
4 1.0000 -30.876307 -840.1870
5 1.0000 -18.709641 -509.1152
6 1.0000 -18.701190 -508.8853
7 1.0000 -18.689708 -508.5728
8 1.0000 -18.685332 -508.4537
9 1.0000 -3.913556 -106.4933
10 1.0000 -2.448546 -66.6283
11 1.0000 -2.434243 -66.2391
12 1.0000 -2.416675 -65.7611
13 1.0000 -0.616668 -16.7804
14 1.0000 -0.607396 -16.5281
15 1.0000 -0.595240 -16.1973
16 1.0000 -0.589977 -16.0541
17 1.0000 -0.152059 -4.1377
18 1.0000 -0.137770 -3.7489
19 1.0000 -0.120324 -3.2742
20 1.0000 -0.114831 -3.1247
21 1.0000 -0.108951 -2.9647
22 1.0000 -0.108184 -2.9438
23 1.0000 -0.084186 -2.2908
24 1.0000 -0.083072 -2.2605
25 1.0000 -0.074782 -2.0349
26 1.0000 -0.074572 -2.0292
27 1.0000 -0.063747 -1.7346
28 1.0000 -0.059278 -1.6130
29 1.0000 -0.055928 -1.5219
30 1.0000 -0.054829 -1.4920
31 1.0000 -0.050204 -1.3661
32 1.0000 -0.045902 -1.2491
33 1.0000 -0.044702 -1.2164
34 0.0000 0.020726 0.5640
35 0.0000 0.144096 3.9211
36 0.0000 0.170793 4.6475
37 0.0000 0.246720 6.7136
38 0.0000 0.253631 6.9016
39 0.0000 0.290498 7.9048
40 0.0000 0.296183 8.0596
41 0.0000 0.304975 8.2988
42 0.0000 0.349789 9.5182
43 0.0000 0.354598 9.6491
44 0.0000 0.441989 12.0271
45 0.0000 0.449325 12.2267
46 0.0000 0.508918 13.8484
47 0.0000 0.518570 14.1110
48 0.0000 0.522247 14.2111
49 0.0000 0.522787 14.2258
50 0.0000 0.540929 14.7194
51 0.0000 0.571672 15.5560
52 0.0000 0.574266 15.6266
53 0.0000 0.580082 15.7848
54 0.0000 0.606159 16.4944
55 0.0000 0.617860 16.8128
56 0.0000 0.738635 20.0993
57 0.0000 0.745174 20.2772
58 0.0000 0.857528 23.3345
59 0.0000 0.871521 23.7153
60 0.0000 1.123914 30.5833
61 0.0000 1.142429 31.0871
62 0.0000 1.151358 31.3300
63 0.0000 1.167576 31.7714
64 0.0000 1.232824 33.5469
65 0.0000 1.614380 43.9295
66 0.0000 1.625558 44.2337
67 0.0000 1.626038 44.2467
68 0.0000 1.631251 44.3886
69 0.0000 1.631872 44.4055
70 0.0000 1.635804 44.5125
71 0.0000 1.637634 44.5623
72 0.0000 1.640780 44.6479
73 0.0000 1.642269 44.6884
74 0.0000 1.644709 44.7548
75 0.0000 1.649418 44.8829
76 0.0000 1.653399 44.9913
77 0.0000 1.664465 45.2924
78 0.0000 1.674817 45.5741
79 0.0000 1.691772 46.0355
80 0.0000 1.704035 46.3692
81 0.0000 1.705373 46.4056
82 0.0000 1.759606 47.8813
83 0.0000 1.824795 49.6552
84 0.0000 1.868875 50.8547
85 0.0000 1.918153 52.1956
86 0.0000 2.142904 58.3114
87 0.0000 2.159208 58.7550
88 0.0000 2.657241 72.3072
89 0.0000 2.660182 72.3872
90 0.0000 2.672185 72.7138
91 0.0000 2.674584 72.7791
92 0.0000 2.678693 72.8909
93 0.0000 2.681152 72.9579
94 0.0000 2.690325 73.2075
95 0.0000 2.702944 73.5508
96 0.0000 2.704890 73.6038
97 0.0000 2.710063 73.7446
98 0.0000 2.731982 74.3410
99 0.0000 2.774419 75.4958
100 0.0000 2.811496 76.5047
101 0.0000 2.880642 78.3863
102 0.0000 2.892405 78.7063
103 0.0000 3.049782 82.9888
104 0.0000 3.134255 85.2874
105 0.0000 4.155180 113.0682
106 0.0000 4.189692 114.0073
107 0.0000 4.218409 114.7888
108 0.0000 4.227468 115.0353
109 0.0000 4.299867 117.0053
110 0.0000 5.335909 145.1975
111 0.0000 5.341549 145.3509
112 0.0000 5.349443 145.5658
113 0.0000 5.349624 145.5707
114 0.0000 5.352739 145.6554
115 0.0000 5.354070 145.6917
116 0.0000 5.355814 145.7391
117 0.0000 5.357839 145.7942
118 0.0000 5.359042 145.8269
119 0.0000 5.360043 145.8542
120 0.0000 5.362342 145.9168
121 0.0000 5.363882 145.9587
122 0.0000 5.365173 145.9938
123 0.0000 5.366059 146.0179
124 0.0000 5.367861 146.0669
125 0.0000 5.368087 146.0731
126 0.0000 5.372969 146.2059
127 0.0000 5.374398 146.2448
128 0.0000 5.375270 146.2685
129 0.0000 5.376565 146.3038
130 0.0000 5.379700 146.3891
131 0.0000 5.380843 146.4202
132 0.0000 5.381512 146.4384
133 0.0000 5.381603 146.4409
134 0.0000 5.383668 146.4971
135 0.0000 5.384380 146.5164
136 0.0000 5.386017 146.5610
137 0.0000 5.407414 147.1432
138 0.0000 5.520806 150.2288
139 0.0000 5.528270 150.4319
140 0.0000 5.547358 150.9513
141 0.0000 5.548637 150.9861
142 0.0000 5.552496 151.0911
143 0.0000 5.558367 151.2508
144 0.0000 5.568808 151.5350
145 0.0000 6.359346 173.0466
146 0.0000 6.362020 173.1194
147 0.0000 6.363798 173.1678
148 0.0000 6.367700 173.2739
149 0.0000 6.370478 173.3495
150 0.0000 6.374323 173.4541
151 0.0000 6.377404 173.5380
152 0.0000 6.380253 173.6155
153 0.0000 6.381519 173.6499
154 0.0000 6.389407 173.8646
155 0.0000 6.389475 173.8664
156 0.0000 6.389951 173.8794
157 0.0000 6.402354 174.2169
158 0.0000 6.405097 174.2915
159 0.0000 6.420030 174.6979
160 0.0000 6.452245 175.5745
161 0.0000 6.471764 176.1057
162 0.0000 6.490457 176.6143
163 0.0000 6.510742 177.1663
164 0.0000 6.572458 178.8457
165 0.0000 10.133443 275.7450
166 0.0000 10.286773 279.9173
167 0.0000 10.313816 280.6532
168 0.0000 14.200271 386.4090
169 0.0000 14.220642 386.9633
170 0.0000 14.389965 391.5708
171 0.0000 14.430994 392.6873
172 0.0000 18.932579 515.1817
173 0.0000 68.693000 1869.2315
174 0.0000 77.096598 2097.9051
175 0.0000 77.115356 2098.4155
176 0.0000 77.288353 2103.1230
177 0.0000 77.338481 2104.4871
178 0.0000 176.190144 4794.3776
179 0.0000 176.584868 4805.1186
180 0.0000 176.603465 4805.6246
181 0.0000 410.212638 11162.4534
182 0.0000 2710.501264 73756.4891
SPIN DOWN ORBITALS
No.  Occupancy  E(Eh)  E(eV)
0 1.0000 -305.275552 -8306.9701
1 1.0000 -36.002100 -979.6670
2 1.0000 -30.867588 -839.9498
3 1.0000 -30.865153 -839.8835
4 1.0000 -30.856703 -839.6536
5 1.0000 -18.702391 -508.9179
6 1.0000 -18.695155 -508.7210
7 1.0000 -18.687620 -508.5160
8 1.0000 -18.683969 -508.4167
9 1.0000 -3.847772 -104.7032
10 1.0000 -2.374823 -64.6222
11 1.0000 -2.370157 -64.4953
12 1.0000 -2.359088 -64.1941
13 1.0000 -0.600068 -16.3287
14 1.0000 -0.593637 -16.1537
15 1.0000 -0.590262 -16.0619
16 1.0000 -0.586838 -15.9687
17 1.0000 -0.108583 -2.9547
18 1.0000 -0.096461 -2.6248
19 1.0000 -0.091878 -2.5001
20 1.0000 -0.083856 -2.2818
21 1.0000 -0.072651 -1.9769
22 1.0000 -0.071437 -1.9439
23 1.0000 -0.069091 -1.8800
24 1.0000 -0.066331 -1.8050
25 1.0000 -0.065043 -1.7699
26 1.0000 -0.055259 -1.5037
27 1.0000 -0.054940 -1.4950
28 1.0000 -0.049610 -1.3499
29 1.0000 -0.049252 -1.3402
30 1.0000 -0.046519 -1.2658
31 1.0000 -0.042299 -1.1510
32 0.0000 -0.034679 -0.9437
33 0.0000 -0.000818 -0.0223
34 0.0000 0.030283 0.8240
35 0.0000 0.146013 3.9732
36 0.0000 0.179795 4.8925
37 0.0000 0.255159 6.9432
38 0.0000 0.257490 7.0067
39 0.0000 0.295313 8.0359
40 0.0000 0.296190 8.0597
41 0.0000 0.306643 8.3442
42 0.0000 0.353729 9.6255
43 0.0000 0.355207 9.6657
44 0.0000 0.451510 12.2862
45 0.0000 0.453205 12.3323
46 0.0000 0.516277 14.0486
47 0.0000 0.521589 14.1931
48 0.0000 0.524903 14.2833
49 0.0000 0.525631 14.3032
50 0.0000 0.547290 14.8925
51 0.0000 0.575851 15.6697
52 0.0000 0.580514 15.7966
53 0.0000 0.581585 15.8257
54 0.0000 0.607905 16.5419
55 0.0000 0.624820 17.0022
56 0.0000 0.746007 20.2999
57 0.0000 0.748251 20.3609
58 0.0000 0.862733 23.4762
59 0.0000 0.873720 23.7751
60 0.0000 1.130964 30.7751
61 0.0000 1.165822 31.7236
62 0.0000 1.175678 31.9918
63 0.0000 1.189300 32.3625
64 0.0000 1.239437 33.7268
65 0.0000 1.628959 44.3262
66 0.0000 1.634389 44.4740
67 0.0000 1.634534 44.4779
68 0.0000 1.638106 44.5751
69 0.0000 1.640297 44.6347
70 0.0000 1.641764 44.6747
71 0.0000 1.643354 44.7179
72 0.0000 1.644717 44.7550
73 0.0000 1.646171 44.7946
74 0.0000 1.646295 44.7980
75 0.0000 1.650172 44.9035
76 0.0000 1.662899 45.2498
77 0.0000 1.671716 45.4897
78 0.0000 1.683667 45.8149
79 0.0000 1.702429 46.3255
80 0.0000 1.709107 46.5072
81 0.0000 1.712546 46.6007
82 0.0000 1.771337 48.2005
83 0.0000 1.830066 49.7986
84 0.0000 1.896659 51.6107
85 0.0000 1.933379 52.6099
86 0.0000 2.175168 59.1893
87 0.0000 2.181992 59.3750
88 0.0000 2.666685 72.5642
89 0.0000 2.667986 72.5996
90 0.0000 2.684771 73.0563
91 0.0000 2.686481 73.1029
92 0.0000 2.687102 73.1197
93 0.0000 2.690299 73.2068
94 0.0000 2.695349 73.3442
95 0.0000 2.708452 73.7007
96 0.0000 2.713613 73.8412
97 0.0000 2.727608 74.2220
98 0.0000 2.734156 74.4002
99 0.0000 2.778863 75.6167
100 0.0000 2.838016 77.2263
101 0.0000 2.894551 78.7647
102 0.0000 2.899303 78.8940
103 0.0000 3.061172 83.2987
104 0.0000 3.152383 85.7807
105 0.0000 4.217387 114.7609
106 0.0000 4.234656 115.2309
107 0.0000 4.247536 115.5813
108 0.0000 4.251086 115.6779
109 0.0000 4.365805 118.7996
110 0.0000 5.351578 145.6239
111 0.0000 5.359625 145.8428
112 0.0000 5.362454 145.9198
113 0.0000 5.363452 145.9470
114 0.0000 5.365772 146.0101
115 0.0000 5.367816 146.0657
116 0.0000 5.367879 146.0674
117 0.0000 5.369236 146.1043
118 0.0000 5.370871 146.1488
119 0.0000 5.371963 146.1785
120 0.0000 5.373359 146.2165
121 0.0000 5.373713 146.2262
122 0.0000 5.374604 146.2504
123 0.0000 5.374923 146.2591
124 0.0000 5.376762 146.3091
125 0.0000 5.377903 146.3402
126 0.0000 5.378323 146.3516
127 0.0000 5.379256 146.3770
128 0.0000 5.381495 146.4379
129 0.0000 5.383350 146.4884
130 0.0000 5.384791 146.5276
131 0.0000 5.384853 146.5293
132 0.0000 5.385547 146.5482
133 0.0000 5.385944 146.5590
134 0.0000 5.386425 146.5721
135 0.0000 5.386953 146.5864
136 0.0000 5.390853 146.6926
137 0.0000 5.416967 147.4032
138 0.0000 5.583728 151.9410
139 0.0000 5.596180 152.2798
140 0.0000 5.600212 152.3895
141 0.0000 5.604790 152.5141
142 0.0000 5.612839 152.7331
143 0.0000 5.620910 152.9527
144 0.0000 5.626930 153.1165
145 0.0000 6.381833 173.6585
146 0.0000 6.382867 173.6866
147 0.0000 6.382999 173.6902
148 0.0000 6.384663 173.7355
149 0.0000 6.386109 173.7749
150 0.0000 6.387455 173.8115
151 0.0000 6.389440 173.8655
152 0.0000 6.390597 173.8970
153 0.0000 6.392155 173.9394
154 0.0000 6.392870 173.9588
155 0.0000 6.396737 174.0641
156 0.0000 6.399399 174.1365
157 0.0000 6.406067 174.3180
158 0.0000 6.423330 174.7877
159 0.0000 6.428062 174.9165
160 0.0000 6.468081 176.0054
161 0.0000 6.483256 176.4184
162 0.0000 6.502738 176.9485
163 0.0000 6.528690 177.6547
164 0.0000 6.587435 179.2532
165 0.0000 10.192697 277.3574
166 0.0000 10.360090 281.9124
167 0.0000 10.371595 282.2255
168 0.0000 14.215961 386.8360
169 0.0000 14.224957 387.0808
170 0.0000 14.402456 391.9108
171 0.0000 14.438920 392.9030
172 0.0000 19.010808 517.3104
173 0.0000 68.751376 1870.8201
174 0.0000 77.106603 2098.1773
175 0.0000 77.118069 2098.4893
176 0.0000 77.296044 2103.3323
177 0.0000 77.343780 2104.6312
178 0.0000 176.235761 4795.6189
179 0.0000 176.640260 4806.6258
180 0.0000 176.648413 4806.8477
181 0.0000 410.229388 11162.9092
182 0.0000 2710.505629 73756.6079
Figure-2
The figure represents the molecular orbital (MO) energy-level diagram for the occupied and unoccupied orbitals. The alpha and beta molecular orbitals indicate the up and down electron spins, respectively.
Table-4
The dipole moment at the local-site coordination geometry of the Nickel (II) ion in the complex, from the electronic and nuclear contributions.
DIPOLE MOMENT
                             X          Y          Z
Electronic contribution:  -0.58243   -0.70555   -0.00083
Nuclear contribution:      0.05996    0.06662    0.00000
Total Dipole Moment:      -0.52247   -0.63892   -0.00083
Magnitude (a.u.):   0.82535
Magnitude (Debye):  2.09787
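The two magnitudes in Table-4 can be reproduced directly from the total components. In the sketch below, the a.u.-to-Debye factor (1 a.u. = 2.541746 D) is a standard conversion constant, not a value quoted in the paper.

```python
import math

AU_TO_DEBYE = 2.541746  # standard conversion: 1 atomic unit of dipole in Debye

def magnitude(x, y, z):
    """Euclidean norm of the dipole vector."""
    return math.sqrt(x * x + y * y + z * z)

# total dipole components from Table-4 (a.u.)
mu_au = magnitude(-0.52247, -0.63892, -0.00083)
mu_debye = mu_au * AU_TO_DEBYE
```

Evaluating these recovers 0.82535 a.u. and approximately 2.098 Debye, consistent with the table.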
Table-5
The electronic g-Tensor evaluated for the dopant Nickel (II) in the host compound
2.1358577   0.0017694    0.0003577
0.0011180   2.1235246   -0.0000014
0.0002321  -0.0000066    2.1486413
The principal eigenvalues of the g matrix, evaluated by diagonalisation, are
g-total (gxx, gyy, gzz): 2.1233578   2.1360177   2.1486482
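The principal values quoted above can be checked numerically. Since the g matrix is slightly asymmetric, a common prescription is to diagonalise the symmetric product g·gᵀ and take square roots of its eigenvalues; the sketch below follows that prescription, which is not necessarily the exact routine used by ORCA.

```python
import numpy as np

# g matrix from Table-5
g_matrix = np.array([
    [2.1358577,  0.0017694,  0.0003577],
    [0.0011180,  2.1235246, -0.0000014],
    [0.0002321, -0.0000066,  2.1486413],
])

# principal g-values = square roots of the eigenvalues of g.g^T
# (eigvalsh returns the eigenvalues in ascending order)
principal_g = np.sqrt(np.linalg.eigvalsh(g_matrix @ g_matrix.T))
```

The result matches the quoted (gxx, gyy, gzz) to within the rounding of the table.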
Table-6
The ZFS tensor in the local site symmetry of the dopant Nickel (II) in the host matrix
ZERO-FIELD-SPLITTING (ZFS) TENSOR
Raw matrix:
225.845632     0.016965     0.108113
  0.016965   212.021112     0.000367
  0.108113     0.000367   215.172831
Diagonalized D matrix:
212.021091   215.171736   225.846747
The axial (D) and rhombic (E) zero field splitting (ZFS) parameters evaluated from the density functional theory (DFT) computations are 12.25 cm-1 and 1.58 cm-1, respectively.
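The quoted D and E follow from the eigenvalues of the tensor. The short check below assumes the entries of Table-6 are in cm⁻¹ (consistent with the quoted values) and uses the common convention that the z axis carries the eigenvalue furthest from the mean.

```python
import numpy as np

# Raw ZFS (D) tensor from Table-6, taken to be in cm^-1
D_raw = np.array([
    [225.845632,   0.016965,   0.108113],
    [  0.016965, 212.021112,   0.000367],
    [  0.108113,   0.000367, 215.172831],
])

def axial_rhombic(tensor):
    """Axial D and rhombic E parameters from a symmetric ZFS tensor."""
    eig = np.linalg.eigvalsh(tensor)
    # assign z to the eigenvalue with the largest deviation from the mean
    x, y, z = eig[np.argsort(np.abs(eig - eig.mean()))]
    D = z - 0.5 * (x + y)   # axial parameter
    E = 0.5 * abs(x - y)    # rhombic parameter
    return D, E

D, E = axial_rhombic(D_raw)
```

This reproduces D ≈ 12.25 cm⁻¹ and E ≈ 1.58 cm⁻¹ as stated above.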
Figure 3: Nickel (II) ion in the tetragonally distorted structure of the local site symmetry about the oxygen atoms.
Figure-3(a)
Figure-3 (a): The bond angles and lengths of the central metal nickel (II) ion with respect to the nearest neighbor oxygen atoms.
The arrow (blue colour) shows the direction of the dipole moment.
Figure-3(b)
Figure-3(b): The Mayer free valences of the Nickel (II) dopant and the oxygen atoms are shown. The blue-colour arrow shows the direction of the dipole moment.
Conclusion
The present work deals with theoretical computational studies of the dopant Nickel (II) ion in the diaqua malonato complexes. The dipole matrix, g-Tensor and ZFS matrix are evaluated within the four-coordinated nearest-neighbor interaction. The dielectric environment and the spin-orbit molecular operator are taken into account. The spin-unrestricted Kohn-Sham (UKS) method is employed to check the convergence of the energy. A high value of the zero field splitting is observed.
References
1. C. Kittel, Introduction to Solid State Physics, Wiley, India, (2009)
2. Kenny B. Lipkowitz and Donald B. Boyd (eds.), Reviews in Computational Chemistry, ISBN: 0471-22441-3
3. F. Neese, Chem. Phys. Lett., 380, 721 (2003)
4. M. Kaupp, M. Buhl and V.G. Malkin, Calculation of NMR and EPR Parameters, Theory and Applications, Wiley-VCH, Weinheim, (2004)
5. F. Neese, Coord. Chem. Rev., 253, 526 (2009)
6. F. Neese, W. Ames, G. Christian, M. Kampa, D.G. Liskos, D.A. Pantazis, M. Roemelt, P. Surawatanawong and S.F. Yee, Adv. Inorg. Chem., 62, 301 (2010)
7. E. van Lenthe, P.E.S. Wormer and A. van der Avoird, J. Chem. Phys., 107, 2488 (1997)
8. J.N. Rebilly, G. Charron, E. Riviere, R. Guillot, A.L. Barra, M.D. Serrano, J. van Slageren and T. Mallah, Chem. Eur. J., 14, 1169 (2008)
9. J. Krystek, A. Ozarowski and J. Telser, Coord. Chem. Rev., 250, 2308 (2006)
10. F. Neese, ORCA - An ab initio, Density Functional and Semi-empirical Program Package, version 3.0.1; Max Planck Institute for Chemical Energy Conversion, Mülheim an der Ruhr, Germany, (2013)
11. N.J. Ray and B.J. Hathaway, Acta Cryst., B38, 770 (1982)
12. D.A. Pantazis, X.Y. Chen, C.R. Landis and F. Neese, J. Chem. Theory Comput., 4, 908 (2008)
13. D.A. Pantazis and F. Neese, J. Chem. Theory Comput., 5, 2229 (2009)
14. D.A. Pantazis and F. Neese, J. Chem. Theory Comput., 7, 677 (2011)
15. D.A. Pantazis and F. Neese, Theor. Chem. Acc., 131, 1292 (2012)