Souvenir - SYNERGY ENGG. COLLEGE, DHENKANAL
National Conference on Recent Advance in Science and Technology(NCRAST), Sept. 30 - Oct. 1, 2011
Techno-Managerial Approach in SC and ST Community Development
(A Case Study of Odisha, India)
* Mahima Prakashan Sahoo, Asst. Prof. in Management, SYNERGY, Dhenkanal. E-mail: [email protected]
** Muna Kalyani, Reader, PG Dept. of Business Administration, Utkal University, Vani-Vihar, Bhubaneswar. E-mail: [email protected]
ABSTRACT- There is a wide range of disparities among different sections of the population in India. In particular, the Scheduled Caste (SC) and Scheduled Tribe (ST) sections of the population are socially, culturally, educationally, economically and health-wise weaker than the other sections. The state of Odisha is home to as many as 62 tribal communities and 93 scheduled caste communities, constituting about 22% and 16.53% of the state's population respectively. Though they have enormous individual potential in handicrafts (mainly brass and bamboo crafts), handlooms (mainly weaving), agriculture, horticulture, forest produce and other tiny industrial sectors, their standard of living has not improved to the level expected under the policies of the Planning Commission of India. The study finds that technological change and managerial action have contributed very little to their development, and the restoration of their cultural heritage is also at stake. Their development is further hindered by other communities (such as the Domb) and by a few short-sighted (myopic) political actors. Their socio-psychological correlates of entrepreneurship development can be studied with reference to entrepreneurial skills, and constraints on and indicators of development can be identified, area-wise and community-wise, as part of social research. Here the authors attempt to develop and commercialize one product, Dokra brass metalwork, produced by the Santal and Bhatudi ST communities of Mayurbhanj and the Ghantra SC community of Rayagada district of Odisha. The ornamental products are used by the Dungaria primitive ST community and are in high demand in the European market, owing to the rise in prices in the gold and jewelry sector. The authors are putting some techno-managerial functions into practice through entrepreneurs of these communities to obtain real output as part of their integral development.

Keywords: Optimum utilization of SC & ST people, potential and skills of SC and ST entrepreneurs, planned change of SC & ST at district level, district industrialization, district developmental process, sustainable development.
I. INTRODUCTION
Economic growth paved the way for the major goals to be achieved in the first few five-year plans. The assumption was that economic growth would automatically solve the problems of poverty, unemployment and lack of basic needs. Given the necessity of building adequate infrastructure in the industrial, power and irrigation sectors, a percolation (trickle-down) effect was also expected to operate. As some momentum was gathered in infrastructure, experience exposed the inadequacy of the percolation effect and a widening gap between GNP growth and employment, and the question of social justice became important.
There is a wide range of disparities among different sections of the population in India. In particular, the Scheduled Caste (SC) and Scheduled Tribe (ST) sections are socially, educationally and economically weaker than the others. One of the major concerns of Indian planning has been the removal of disparities among different sections of the population, and in order to correct some of these imbalances the Planning Commission emphasized the need for district-level planning.
Every effort at economic development of a country like India, where more than 70% of the population lives in rural areas, must begin with the development of villages, and every effort towards the development of villages must begin with the development of the weaker sections of the population, that is, the SCs and STs. The recognition of SCs and STs by sensitive observers is thus significant. Our first prime minister, Pandit Jawaharlal Nehru, expressed his deep concern about the tribes and formulated the basic strategy for their development.
The main features common to these tribes, according to the Indian Constitution, were tribal origins, primitive ways of life, habitation in remote and less easily accessible areas, and
general social and economic backwardness. Almost all of the tribals in India live in rural areas, so the development of rural areas through rural industrialization enhances the development of these weaker sections of the population. Gandhiji rightly recognized the importance of a self-sufficient village economy built by developing entrepreneurs who establish village industries. In this light, steps were taken to assist the advancement of tribals to higher levels of social and economic development through the application of their potential skills.
The actual needs of our rural areas are many. Apart from service facilities like health and education, the recent upsurge in agriculture has created demand for efficient distribution of agricultural inputs, for marketing and for processing of products. If the current tempo of agricultural activity is to be maintained, all of these have to be provided without delay. Along with the strengthening of the agricultural sector, a planned strategy for decentralizing industrial effort in rural areas has also become imperative. Land reforms and intensive agriculture will, to some extent, open up avenues of employment for the rural unemployed and under-employed, but there will still be a large number for whom non-agricultural employment opportunities will have to be created.
According to Gandhiji, "All our efforts should aim at the development of the tribal population equally with other sections of the society", progress to be marked by basic national indices like per capita income, infant mortality, nutritional levels, life expectancy, literacy, and representation at higher levels in all technical and other fields.
II. RESEARCH & LITERATURE REVIEW
The state of Odisha is home to as many as 62 tribal communities, which make up about 22% of the total population of the state. In the Mayurbhanj district of Odisha, STs constitute 60% of the population, the majority of whom live below the poverty line; the district is also called the land of tribals. Of the 62 communities, 45 tribal communities can be found in the district, including the Santal, Ho, Bhumija, Bhuyan, Bhathudi, Kolha, Munda, Gond, Kharia, Lodha and Dungaria. More than 70% of the district's tribal population lives below the poverty line in spite of rich natural and human resources. Developing industry and commerce is essential for the upliftment of the people of the district, and especially of its tribals, in the future business scenario.
A method of making the strategy work will combine increased provision of service facilities like health and education; strengthening of the agricultural infrastructure for distribution of inputs, marketing, storage and processing; and the provision of non-agricultural opportunities in rural areas. The strategy will have to be conditioned by the fact that resources are scarce at the present level of development in the country.
One solution appears to be bringing Common Property Resources (CPRs) such as community forests, pastures, wastelands, ponds and tanks, rivers and rivulets into use, combining local manpower with these natural resources. This would not only generate wage employment but also draw these communities into the mainstream of industrialization and modernization through small business development. Effective policies for the optimum use of CPRs by the rural poor can bring a substantial change in their quality of life.
In order to correct some of these imbalances, the Planning Commission emphasized the need for district-level planning, on the assumption that plans made at the state and national levels could be formulated with greater awareness of field situations if the association of the people involved in plan formulation were attempted at the district level. This idea was praiseworthy. But in practice there was the danger of this attempt ending in the preparation of a list of requirements without synchronizing them with the favorable and other parameters of a district plan. Additionally, there was the inherent risk of committing, at the district level, the same errors as were noticed in sectoral planning. This could be reduced to some extent by taking into account the available local resources while determining the nature and size of such plans. If a district-level plan is based on these considerations and financial allocations are made realistically, then many of the problems now being faced in the country regarding production and employment can be solved. Such a plan frame will require taking into account the needs of each village community and the different agro-climatic conditions.
The basic exercise needed is a clear identification
of local resources, productive skills of the people,
markets and other factors. The Government has
well defined policies, objectives and guidelines for
each state to induce and encourage indigenous
entrepreneurs as well as enterprises, with the
optimum utilization of natural and human
resources. The other dimension of government policy is to generate employment and increase sources of income for certain individuals and groups in the community who can best make use of local resources. In China, small industry is mainly designed to mobilize what the Chinese call the Four Locals: local raw materials, local skills, local finance and local markets.
The main aim of small business development is to make the rural community self-reliant, to generate income within the community, to provide employment to rural youth and to reduce migration from rural to urban areas. Finally, within the available infrastructure and resources, the entrepreneur can develop his entrepreneurial capability as well as induce economic growth.
III. OBJECTIVE OF THIS STUDY

The objective of this study is to apply the productive potential of these people to product design and development and to formulate appropriate marketing strategies for their existing and new products. The focus is on a lucrative handicraft product: Dokra metal deities and Dokra ornamental products made of brass by the Santal and Bhatudi ST communities of Mayurbhanj district and the Ghantra SC community of Rayagada district of Odisha.
Methodology:
A preliminary field survey was conducted through a schedule to understand their social, cultural, political, physical and economic status. In this survey it was observed that they have a lot of potential skill in agricultural (hybrid paddy) and horticultural (mango and cashew plantation and graft production) product development. It was also found that these ST and SC communities have enormous potential in producing Dokra metal deity products made of brass, of different types (around 148 items identified) and forms, which sell in the market as handicraft products of high value and demand. Tribal jewelry (546 items) is also in great demand in and around the national market; it is produced by the Ghantara SC community of Rayagada district of Odisha and used by the primitive ST group named the Dungaria Kondhs. These items also see increasing demand in the European and UK markets: the recent rise in the price of gold has paved the way for the spread of these types of jewelry ornaments in the international market, in addition to imitation ornamental items.
They manufacture these Dokra items by the lost-wax casting method. The process of manufacture is well understood, and many researchers have worked on the casting oven for fuel efficiency and on the casting process itself. Though there has been progress in reducing the use of wood and fossil energy sources, little has improved in the casting process, and products of this type cannot be manufactured by electro-casting machines with the same impression. Emphasis on alloys and marketing strategies, however, remains poor. The artisans use wax and brass in a ratio of 1:10 as a traditional hit-and-trial rule: if 1 gm of wax is used for the design, then 10 gm of brass must be poured into the mould pot. No more specific mathematical equations have been developed, to the authors' knowledge. NaCl (sodium chloride) can be added to the brass alloy, followed by heat treatment, to make the product black; even Pb (lead) can be added to make the product shining and blackish, and hence more attractive. Different colors can be painted on the surface of the deity at appropriate areas to make the product more attractive and add value for economic viability; coloring can be done by surface coating (a heat-treated imitating method), spraying or galvanizing. The district administration is now encouraging the artisans towards better and more profitable marketing strategies and is generating opportunities through ORMAS. They also attend various melas, events and big shopping malls directly, in their traditional gesture and posture, to market the products. Some product pictures with different alloys and treatments, and the phases of the casting process, are shown in the figures below.
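The 1:10 rule above amounts to simple arithmetic; here is a minimal sketch in Python, with a function name and example weights of our own choosing:

def brass_required(wax_grams: float) -> float:
    """Traditional hit-and-trial rule described above: about 10 gm of
    brass must be poured for every 1 gm of wax in the design."""
    return 10.0 * wax_grams

print(brass_required(1.0))   # -> 10.0 gm of brass for a 1 gm wax design
print(brass_required(25.5))  # -> 255.0 gm for a larger deity model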
Findings of the Study:
a. The quality of raw materials is not tested; materials are collected locally for recycling.
b. Losses are high because products get damaged during manufacture. This is due to the casting process and the melting point of brass: though brass melts at around 940°C-980°C, for a proper cast the earthen mould has to be heated up to 1200°C so that the melted brass occupies the empty space inside the mould. They even add iron (Fe) particles inside the cast to give the product more weight, which causes damage on heating.
c. Products are not standardized and high differentiation is observed, which increases the bargaining power of customers.
d. The waxes used come from beehives, bitumen (alkatara), candle waste, or a mixture of all of these. The wax should come from filtered honeycomb, but for reasons of economy they mix them all, which contributes to badly finished goods.
e. Proper marketing strategies should be formed to transfer more of the financial benefit to the skilled manufacturers; at present the channel partners enjoy most of the benefits.
f. The tribal skilled people have no financial strength (in the context of Odisha) to hold the product for a proper price. Because of their livelihood and style of living they go in for desperate sales, yielding zero net profit.
g. The training imparted to increase the number of skilled laborers revolves around a few individuals; the training organizations concerned do not introduce new entrants into the program. This restricts the growth of skilled manpower and of production volume (contributing to price), under-utilizes Union and State Government funds, and also restricts knowledge transfer.
h. They adopt a serial production process, which raises the cost of labor and hence the cost and market price of the product. The small number of skilled laborers in this field is also a major reason for restricted competition and market coverage.
i. The community has not accepted it as a profession; only a few people with some informal education in the business are engaged in this industry.
j. This handicraft industry is neither encouraged nor rewarded by the recognized organizations.

[Figures: product of pure brass (60% Cu); product treated with Pb; product treated with NaCl; product with reduced % of Cu; casting process steps 1-3. Source: Crafts of Odisha, http://handicrafts.nic.in/]
Suggestions on the Findings of the Study:
Since the research is still in progress, very concrete suggestions are not yet possible; the suggestions given here are based on certain logic and assumptions, and their field application is still to be proven.
1. A formal training, possibly supplemented with informal training, is needed for a period of three months to train these people in project preparation, techno-economic feasibility analysis, registration, material procurement and selection, inventory requirements, product standardization, product specification and design, business opportunity identification, marketing, rules and regulations on tie-ups, collaborations, profit sharing, motivational skills development, time management and time-bound working, managerial skills development, export-import techniques, handicraft industrial business activities and other relevant syllabi.
2. Quality raw materials can be used to reduce damage. Also, people must engage in this business as a passion, not by chance or in their free time.
3. Financial help should be organized through loan schemes to SHGs or through micro-financing. These industries should be looked after by the Ministry of MSME, Govt. of India, and subsidies or bail-out packages should be worked out for this type of industry.
4. More human resources should be involved, and competition can be created through human skill and market expansion, both horizontal and vertical.
5. Since there is strong demand for these products in the Europe and UK markets, other international markets should also be targeted.
6. New uses for the products can be thought out. At present these products serve as decorative items, table weights and ashtrays, utensils, mementos and a few other utilities.
7. Colors can be painted or imitated on the surface of the product (e.g. a product called Nandi, sold in the market by TRIFED, a Govt. of India undertaking) to obtain better commercial value, as this has proved a great advantage in the case of gold ornaments.
8. Exploiting natural resources for human welfare should include utilizing existing energy resources effectively and developing new sources of energy.
9. Managing and developing human skill would include identifying rural artisans engaged in cottage and small industries which can be run with local skill, and providing training, management skills and marketing for village industrial products.
10. Improve productivity, enhance quality, reduce cost and restructure the product mix through upgradation of technology and modernization.
11. Strengthen and enlarge the skill profile and entrepreneurial base to increase opportunities for self-employment.
12. Restructure the production process, including changes in the output pattern, re-evaluation of non-renewable resources and ecologically adjusted production.
13. Improve the general welfare of workers and artisans through better working conditions, welfare measures and security of employment and earnings.
IV. CONCLUSION

The state is endowed with a rich structure, policies and prospects for rural industrialization and development, but no remarkable achievement in this field has been obtained. It lacks proper coordination and integration of the productive potential of rural people with rural prospects. The development of rural people is plagued by major problems such as inadequate flow of credit; use of obsolete technology, machinery and equipment; inadequate infrastructure facilities; lack of communication and market information; poor quality of raw materials; lack of storage and warehousing facilities; and lack of promotional strategy. Solving these problems is necessary for developing rural industries. Rural industrialization is inextricably interwoven with rural entrepreneurship. A new approach is required to build sustainable development in the field of rural industries: an integrated approach to entrepreneurial culture in rural Odisha. It must consist of varied activities - governmental and NGO efforts, the existing entrepreneurial culture, and the market culture where products and services are delivered - which together create the dynamics of entrepreneurial growth and change. This new entrepreneurial culture calls for new goods and services and for starting many new ventures that exploit new combinations of the available resources to achieve entrepreneurial goals. It should be nurtured, fostered and promoted with new vision, values, norms and traits conducive to the sustainable development of the rural people.

Finally, the following social contribution outcomes can be hoped for:
1. This strategy will generate small business units in rural and remote India, joining the mainstream of industrialization and modernization and generating good business opportunities in the era of globalization.
2. These small business units will help in the optimum utilization of the skills of those human resources who have remained outside the mainstream.
3. This process will help in preparing a list of requirements synchronized with the parameters of the district-level plan.
4. Some of the country's problems regarding production and employment can be solved with realistic financial allocations.
5. It will open up awareness and inspire the rural unemployed and under-employed towards employment through small and tiny handicraft industries.
6. Migration of rural people towards cities will reduce, curbing their exploitation as cheap labor.
REFERENCES
[1]. Crafts of Orissa: A Product Catalogue, by Anwesha Tribal Arts and Crafts, Bhubaneswar, N2-175, [email protected].
[2]. Crafts Treasure of Orissa: A Product Catalogue of Cluster Craft, by Anwesha.
[3]. Dash S. S., "Entrepreneurship Development in Eastern India and Central Africa", 2006, pp. 1-10.
[4]. District Statistical Handbook 2000-01, Mayurbhanj, Directorate of Economics & Statistics, Odisha, Bhubaneswar.
[5]. Indian Planning Commission, Fifth Five Year Plan 1974-79, C.O.P., 1976, IX.162p.
[6]. Jodha N. S. (1986), "Common Property Resources and Rural Poor in Dry Regions of India", Economic & Political Weekly, Vol. XXI, No. 27.
Cloud Computing and Microsoft Windows Azure
Girija Prasad Nanda
EMBA Student of IIT Kharagpur, Working in Infosys
E-mail: [email protected]
Abstract- Cloud computing provides computing or storage as a service, where resources and applications are delivered as a utility over a network [1]; in other words, a combination of grid and utility computing. In this paper we discuss the three fundamental service models in cloud computing, namely IaaS, PaaS and SaaS. We also explore three deployment types: public cloud, private cloud and hybrid cloud. After discussing these, we focus on the Windows Azure cloud computing platform, which uses PaaS as its backbone.
Keywords- Cloud, Grid, Utility computing, PaaS, IaaS, SaaS.

I. INTRODUCTION

Life Before Cloud Computing
Before cloud computing, the life of applications was complicated and costly. You needed to procure all the related hardware and software for designing and maintaining an application, and to maintain a separate team of experts to install, configure, run and update it. Because of this, maintaining multiple applications incurred a huge cost, which is hardly feasible for a small or even a medium sized company. And if the project failed, the company had to bear the heavy installation cost of that hardware and software. [2]
A very nice example: if you need a book, would you invest in a library? All users simply intend to use the software (the book); why should they invest in the huge infrastructure (the library)?

Life with Cloud Computing
With the evolution of cloud computing this installation risk is gone. Companies like Amazon, Salesforce and Microsoft provide the necessary hardware and software facilities. You only have to concentrate on your application development, not on how software updates will be installed or which new patches you need to apply.

Definition of Cloud Computing
We usually depict the internet as a cloud; from this convention the name cloud computing evolved. Cloud computing is the delivery of computing or storage as a service, where resources and applications are provided as a utility over a network [1]. Hence you can think of it as a computing platform with high end hardware and software resources delivered virtually on demand - a combination of grid and utility computing.
Grid computing is the computing platform for solving a single problem by using the resources of multiple computers. Utility computing is the computing platform where you pay as you use, just like your electricity or telephone bill.
So cloud computing users no longer have to bother with procuring and maintaining physical infrastructure, and can instead devote their resources (time and money) to keeping their applications up. They also no longer have to predict traffic: whenever they require, they can scale their application up or down.

II. CLOUD COMPONENTS

The various components of cloud computing are clients, the internet and data centers, as shown in Figure 1.
The client device is the one through which we access the hosted application in the cloud, such as mobile devices, personal computers, laptops, PDAs etc.
The internet is the medium through which we access the hosted applications.
Data centers are collections of high end servers (application servers, database servers etc.) where the applications are hosted.

Figure 1: Cloud Components
III. TWO PERSPECTIVES OF CLOUD COMPUTING

A. Capabilities/Services
Depending on the capabilities or service model, cloud computing has three areas: [3]

• SaaS (Software as a Service): Service providers create various services, for example Gmail, Google Apps and Yahoo, and host them in the cloud so that other users or organizations can consume them. A complete application can be delivered according to consumer demand. The client does not have to invest in much beyond a browser and an access device such as a PDA, mobile device or personal computer. You get an interface and need not bother about the implementation of the application, so it behaves as a kind of black box. Costing is either per user or per month. Examples: free - Google Docs, Gmail, Facebook; chargeable - Salesforce.

• PaaS (Platform as a Service): Established organizations leverage their resources so that users or smaller organizations can host applications on the provider's infrastructure, which would otherwise cost them more in terms of setting up that infrastructure. Provider organizations deliver the required capability or runtime environment at a predefined cost, and the client deploys his application as a package. Costing is based on compute hours and/or data storage and transfer per GB. Examples: Google, Microsoft, Salesforce etc.

• IaaS (Infrastructure as a Service): The idea here is to provide a virtual environment as the foundation for SaaS and PaaS; it offers the runtime environment for virtual machines. You have more access control than with PaaS, but patch installations and the like are taken care of by the users or consumers. Costing is based on compute usage per hour and/or data storage and transfer per GB. Examples: Amazon EC2 (Elastic Compute Cloud), Rackspace Cloud etc.

B. Access/Sourcing
Based on who will access the cloud, or on sourcing models, it is divided into the areas shown in Figure 2: [3]

Figure 2: Access

• Private Cloud: Resources are offered behind a firewall and consumed by the organization that owns them; they are not meant to be shared with outside users. Private clouds come in two forms: the self-hosted private cloud, where the organization itself hosts and manages the resources, and the partner-hosted private cloud, which is designed internally but hosted and managed externally.

• Public Cloud: A set of computing services hosted on and accessed over the internet for a pay-per-use amount. The service providers manage the infrastructure, and users can scale up or down easily according to need.

• Hybrid Cloud: The amalgamation of both public and private clouds.

• Community Cloud: A term used when many organizations share their infrastructure among themselves to obtain some benefit.
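The per-hour and per-GB pricing mentioned above can be made concrete with a small Python sketch. The rates below are purely hypothetical placeholders, not any provider's actual tariff:

RATE_PER_COMPUTE_HOUR = 0.12   # assumed currency units per instance-hour
RATE_PER_GB_STORED = 0.15      # assumed rate per GB-month of storage
RATE_PER_GB_TRANSFER = 0.10    # assumed rate per GB transferred out

def monthly_bill(compute_hours, gb_stored, gb_transferred):
    """Pay-as-you-use bill: compute hours plus storage and transfer per GB."""
    return (compute_hours * RATE_PER_COMPUTE_HOUR
            + gb_stored * RATE_PER_GB_STORED
            + gb_transferred * RATE_PER_GB_TRANSFER)

# One instance running for a 30-day month, 20 GB stored, 50 GB served:
print(round(monthly_bill(24 * 30, 20, 50), 2))  # -> 94.4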
IV. CHARACTERISTICS OF CLOUD COMPUTING

• Multitenancy support: multiple users share multiple instances and data.
• Virtualization: multiple instances of multiple applications can run on one computer using a hypervisor.
• Service orientation: most things are exposed as a service to the external world.
• Shared databases [4]: all tenants can share the same database hosted on the cloud platform.
• Agility or economies of scale [4]: on-demand computing infrastructure.
• Ubiquitous access [4]: you can access from any location using any kind of device.
• Architectural flexibility [4]: organizations are sometimes forced, by the costs associated with different architectures, to use a particular architecture; this constraint is gone in the cloud environment.

V. ADVANTAGES OF CLOUD COMPUTING [5]

• No infrastructure-related investment required.
• Fewer maintenance issues.
• Fewer staff required.
• Mobility: applications can be accessed from anywhere in the world.
• Reduced cost compared to on-premise applications.
• Accessible from any kind of device, such as mobiles, PDAs, PCs, laptops etc.

VI. WHERE DOES MICROSOFT AZURE STAND

Basically two types of cloud offerings are emerging in the current market scenario, namely IaaS and PaaS.
IaaS provides user-built virtualized systems hosted in the cloud. These are installed with all the software required by the application, and it is the user's responsibility to take care of that software and its updates; the service provider does not install patches, service packs or system upgrades.
The other type of offering is PaaS, in which only the application code is uploaded, configured and executed. Microsoft Windows Azure is a cloud environment that uses PaaS as its platform.

VII. CONCERNS OF CLOUD COMPUTING [5] [6]

• Latency: as applications are accessed over the internet, latency is added to every transaction.
• Security: security is a major concern, as usual, since users have to share data with the service providers.
• Multitenancy: though considered an advantage above, this can also be counted as a disadvantage, because the application is hosted at many places simultaneously.
• Regulations: many barriers exist in terms of international law, data sharing etc.
• Control: users have less control over their application, and the service provider holds the applications and data on its premises.
• Platform and language constraints: some vendors support only certain platforms and languages, which limits the choices.
• Interoperability: no standard has yet been established for switching from one vendor to another; changing vendors would be so costly as to be unthinkable.

VIII. WINDOWS AZURE APPLICATION

At http://www.microsoft.com/windowsazure/ Microsoft says: focus on your application, not the infrastructure.
Figure 3: Windows Azure

The runtime environment of the Windows Azure platform is called Windows Azure. The major parts of Windows Azure are shown in Figure 3 and described below: [8]

• Windows Azure: a Windows environment for storing data and running applications.
• Compute: runs applications on a Windows Server foundation.
• Storage: provides BLOBs (Binary Large Objects), Queues and Tables for storage.
• Fabric controller (FC): the building block of the whole Azure platform; the compute and storage services are built over it. Five to seven FCs are always maintained to handle user requests.
• SQL Azure: relational data services in the cloud.
• Windows Azure AppFabric: cloud-based infrastructure services for running applications either in the cloud or on premise.
• CDN (Content Delivery Network): used to cache frequently accessed BLOB data closer to the users for quick access.
• Windows Azure Marketplace: an online service for providing and purchasing cloud-based data and applications.

All these components run in Microsoft data centers spread across the globe.

Windows Azure Compute:
Windows Azure compute runs many kinds of applications, but each application must be implemented as one or more roles. Windows Azure currently provides three roles.

Web role: meant to run web-based applications or web services. Besides Microsoft technologies like ASP.NET, you can also use Java, PHP etc. for creating cloud-based applications; the tool used is Visual Studio. Each web role has IIS 7 preconfigured, and web roles listen to clients over http or https.

Worker role: a kind of Windows service or background job. The difference between a web role and a worker role is that worker roles do not have IIS configured.

VM role: used to move an on-premise Windows Server application to the cloud.

For one instance of a web role or worker role, Microsoft creates three replicas, which may not be stored in a single data center. If the live instance fails, one of the other instances comes up, so that application downtime is minimized. Against catastrophes, the replicas may be stored across different geographical locations to maintain application availability.

References
1. http://www.eic.fiu.edu/2011/09/cloud/
2. http://www.salesforce.com/cloudcomputing/
3. http://technet.microsoft.com/en-us/cloud/hh147295
4. http://msdn.microsoft.com/en-us/library/ee658110.aspx
5. http://www.sei.cmu.edu/library/assets/whitepapers/Cloudcomputingbasics.pdf
6. http://technet.microsoft.com/en-us/cloud/hh147295
7. http://www.keithpij.com/Portals/0/Downloads/IaaS,%20PaaS,%20and%20the%20Windows%20Azure%20Platform.pdf
8. http://www.microsoft.com/windowsazure/Whitepapers/introducingwindowsazureplatform/
FUZZY REASONING PETRI NETS ANALYSIS AND REVIEW
Sibaram Pany
Department of Computer Science and Engg
SIET, Dhenkanal, Orissa, India
E-mail: [email protected]
Prof. Dr. Chakradhar Das
H.O.D Department of Computer Science and Engg
SIET, Dhenkanal, Orissa, India
E-mail: [email protected]
Abstract— The efficiency and reliability of fuzzy reasoning through the representation of fuzzy Petri nets (FPNs) have been crucial and intractable issues. We develop a representation model for the knowledge base of fuzzy production systems with rule chaining based on the connectives conjunction "AND" and disjunction "OR", on the Petri net formalism. An efficient algorithm is proposed to perform fuzzy reasoning automatically. The antecedent-consequent relationship between two propositions di and dj (di not equal to dj) is studied. Moreover, the degree of truth of proposition dj can be found analytically from a given degree of truth of proposition di. A sprouting tree is developed for the execution of the algorithm.

Index Terms— Fuzzy Petri nets, knowledge base, production rule, reasoning algorithm, sprouting tree.

I. INTRODUCTION

Petri nets are graphical modeling tools for analyzing discrete event systems (DES) such as communication, manufacturing and logistics systems. A concern of Artificial Intelligence (AI) is the development of sufficiently precise notations for knowledge representation. To make real-world knowledge suitable for processing by computers, many knowledge representation methods have been developed: production rules, fuzzy production rules, semantic networks, predicate transition networks, Petri nets, conceptual graphs, etc. Fuzzy reasoning is an advanced research field of logical reasoning. Fuzzy production rules have been used to represent imprecise real-world knowledge and to execute fuzzy reasoning, whereas fuzzy Petri nets (FPNs) take advantage of both Petri nets and fuzzy theory; their structural similarity makes FPNs suitable for modeling fuzzy production rules. Specifically, an FPN maps the fuzzy rules of a rule-based system to a structural representation of the knowledge, and algorithms based on FPNs have been proposed to allow reasoning in a more flexible and efficient manner.

II. KNOWLEDGE REPRESENTATION

A fuzzy production rule describes the fuzzy relationship between two propositions. Let R = {R1, R2, ..., Rn} be a set of fuzzy production rules. The general formulation of the ith fuzzy production rule is as follows:

Ri: IF dj THEN dk (CF = µi)

where
1) dj and dk are propositions which may contain fuzzy linguistic variables such as "high", "low", "medium", "hot" etc. The degree of truth of each proposition is a real value between 0 and 1.
2) µi Є [0, 1] represents the certainty factor (CF) of the rule Ri, i.e. the strength of belief in the rule. The larger the value, the more the rule is believed in.

For example: R1: IF eyes yellow THEN one is suffering from jaundice (CF = 0.90).

Let λ be the threshold value, where λ Є [0, 1]. If the degree of truth of proposition dj is yj Є [0, 1], then:
Case 1: if yj ≥ λ, the rule can be fired, and the degree of truth of the proposition dk becomes yj × µi.
Case 2: if yj < λ, the rule cannot be fired.
For example, if the degree of the proposition dj = "eyes yellow" is yj = 0.90 and the threshold value is λ = 0.20, then rule R1 can be fired and the degree of the proposition dk = "suffering from jaundice" is yj × µi = 0.90 × 0.90 = 0.81. Thus the chance of suffering from jaundice is 0.81.
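The two firing cases above can be illustrated with a minimal Python sketch; the function name is our own, not the paper's notation:

def fire_rule(y_j, mu, lam):
    """Fire IF dj THEN dk (CF = mu): returns the degree of truth of dk,
    or None when y_j falls below the threshold lam (rule cannot fire)."""
    if y_j >= lam:
        return y_j * mu   # Case 1: rule fires
    return None           # Case 2: rule cannot be fired

# R1: IF eyes yellow THEN suffering from jaundice (CF = 0.90),
# with y_j = 0.90 and threshold lam = 0.20:
print(fire_rule(0.90, 0.90, 0.20))  # -> 0.81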
III. FUZZY PETRI NET (FPN)

• An FPN is a bipartite directed graph with two types of nodes, places and transitions, like an ordinary PN.
• Each place may or may not contain a token associated with a truth value between 0 and 1.
• Each transition is associated with a certainty factor (CF) with a value between 0 and 1.

The generalized definition of an FPN: a fuzzy Petri net is an 8-tuple
FPN = (P, T, D, I, O, f, α, β)
where
P = {p1, p2, ..., pn} is a finite set of places;
T = {t1, t2, ..., tm} is a finite set of transitions;
D = {d1, d2, ..., dn} is a finite set of propositions, with P ∩ T ∩ D = Ф and |P| = |D|;
I: T→P∞ is the input function, a mapping from transitions to bags of (input) places;
O: T→P∞ is the output function, a mapping from transitions to bags of (output) places;
f: T→[0,1] is an association function, a mapping from transitions to real values between 0 and 1 (the degrees of association, i.e. certainty factors);
α: P→[0,1] is an association function, a mapping from places to real values between 0 and 1, indicating the degree of truth at each place;
β: P→D is a bijective association function, a mapping from places to propositions.

Fig. 1: Fuzzy Petri net

Firing rule: In an FPN, a transition may be enabled to fire. A transition ti is enabled if for all pj Є I(ti), α(pj) ≥ λ, where λ Є [0, 1] is a threshold value. A transition ti fires by removing the tokens from its input places and depositing one token into each of its output places; the token value in an output place pk is calculated as yk = yj × µi.

IV. MARKED FUZZY PETRI NETS (MFPN)

An FPN with some places containing tokens is called a marked fuzzy Petri net (MFPN). In an MFPN, the token in place pi is represented by a labeled black dot. The token value in place pi, denoted by α(pi), is yi, where α(pi) Є [0, 1]. If α(pi) = yi, yi Є [0, 1], and β(pi) = di, this indicates that the degree of truth of proposition di is yi.

The fuzzy production rule R1: IF d1 THEN d2 (CF = µ1) is modeled by an FPN in Fig. 2.

Fig. 2: Knowledge representation with an MFPN

If the degree of the proposition "eyes yellow" is 0.90, the rule and the fact can be represented by a marked FPN as follows:

FPN = (P, T, D, I, O, f, α, β), with
P = {p1, p2}
T = {t1}
D = {eyes yellow, suffering from jaundice}
I(t1) = {p1}
O(t1) = {p2}
α(p1) = 0.90, α(p2) = 0
β(p1) = eyes yellow = d1, β(p2) = suffering from jaundice = d2
f(t1) = µ1 = 0.90, µ1 Є [0, 1]

Fig. 3: Firing an MFPN. (a) Before firing; (b) after firing.

Fig. 4: An example of firing an MFPN. (a) Before firing t1; (b) after firing t1.
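The 8-tuple and the firing rule can be made concrete with a small sketch; the dictionary encoding below is our own illustrative choice, not the paper's formalism. Taking the minimum over input places also anticipates how AND-type composite rules are treated later (Section V and Step 2 of the reasoning algorithm):

# The jaundice MFPN as plain Python dictionaries (assumed encoding).
fpn = {
    "I": {"t1": ["p1"]},                  # input places per transition
    "O": {"t1": ["p2"]},                  # output places per transition
    "f": {"t1": 0.90},                    # certainty factor mu per transition
    "alpha": {"p1": 0.90, "p2": 0.0},     # token value (degree of truth) per place
    "beta": {"p1": "eyes yellow", "p2": "suffering from jaundice"},
}

def fire(net, t, lam=0.20):
    """Fire transition t if all its input places satisfy alpha >= lam;
    each output place then receives y * mu, with y = min over inputs."""
    inputs = net["I"][t]
    if all(net["alpha"][p] >= lam for p in inputs):
        y = min(net["alpha"][p] for p in inputs)
        for p in net["O"][t]:
            net["alpha"][p] = y * net["f"][t]
        return True
    return False

fire(fpn, "t1")
print(fpn["alpha"]["p2"])  # -> 0.81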
V. TYPES OF FPN

If the antecedent portion or consequent portion of a fuzzy production rule contains "and" (conjunction ٨) or "or" (disjunction ٧) connectors, it is called a composite fuzzy production rule. According to the connectors used, we have the following four types of FPN.

Type 1: IF dj1 ٨ dj2 ٨ ... ٨ djn THEN dk (CF = µi)
The rule and its fuzzy reasoning process are modeled by the FPN shown in Fig. 5.

Fig. 5: FPN representation of a type 1 rule. (a) Before firing; (b) after firing.

Type 2: IF dj THEN dk1 ٨ dk2 ٨ ... ٨ dkn (CF = µi)
This type, with its fuzzy reasoning rule, is modeled as shown in Fig. 6.

Fig. 6: FPN representation of a type 2 rule. (a) Before firing; (b) after firing.
Type 3: IF dj1 ٧ dj2 ٧ ... ٧ djn THEN dk (CF = µi)
The FPN of type 3 is shown in Fig. 7.

Fig. 7: FPN representation of a type 3 rule. (a) Before firing; (b) after firing.

Type 4: IF dj THEN dk1 ٧ dk2 ٧ ... ٧ dkn (CF = µi)
The corresponding FPN structure is shown in Fig. 8. Rules of this type are unsuitable for deducing control, because they do not make specific implications; hence we do not allow this type of rule to appear in the knowledge base.

Fig. 8: FPN representation of a type 4 rule.

It is seen that the number of tokens in a place in an FPN is always 1.

VI. REACHABILITY ANALYSIS IN FPN

Let ta be a transition and pi, pj, pk be three places.
(i) If pi Є I(ta) and pk Є O(ta), then pk is called immediately reachable from pi.
(ii) If pk is immediately reachable from pi and pj is immediately reachable from pk, then pj is called reachable from pi.
(iii) Reachability is a binary relation, the reflexive, transitive closure of the immediately-reachable relation.
We denote by IRS(pi) the set of immediately reachable places from pi, and by RS(pi) the set of reachable places from pi (see Fig. 9 and Table I).

Fig. 9: Marked fuzzy Petri net
TABLE I: IMMEDIATE REACHABILITY SET AND REACHABILITY SET FOR EACH PLACE pi IN FIG. 9

Place pi    IRS(pi)       RS(pi)
p1          {p2, p6}      {p2, p3, p4, p5, p6}
p2          {p4, p5}      {p3, p4, p5}
p3          Ф             Ф
p4          {p3}          {p3}
p5          {p4}          {p3, p4}
p6          Ф             Ф
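The RS column of Table I is just the transitive closure of the IRS column, which can be checked mechanically; the IRS map below is our own Python encoding of the net of Fig. 9:

IRS = {
    "p1": {"p2", "p6"}, "p2": {"p4", "p5"}, "p3": set(),
    "p4": {"p3"}, "p5": {"p4"}, "p6": set(),
}

def RS(start):
    """All places reachable from start through one or more transitions."""
    seen, frontier = set(), set(IRS[start])
    while frontier:
        p = frontier.pop()
        if p not in seen:
            seen.add(p)
            frontier |= IRS[p]
    return seen

for p in sorted(IRS):
    print(p, sorted(RS(p)))  # matches the RS column of Table I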
Adjacent places: If two places pi and pk are both input places of a transition ta, i.e. pi, pk Є I(ta), then pi and pk are adjacent places with respect to ta.
VII. FUZZY REASONING ALGORITHM FOR REACHABILITY

The fuzzy reasoning algorithm determines whether an antecedent-consequence relationship holds between two propositions ds and dj and, given the degree of truth of proposition ds, finds the degree of truth of proposition dj. The place ps is called the starting place and pj the goal place. The reasoning algorithm is demonstrated by a tree diagram; each node of the tree is denoted by a triple (pk, α(pk), IRS(pk)), where pk Є P. Let λ be a threshold value, let CFxy denote the certainty factor associated with the transition between px and py, and let APxy denote the set of adjacent places of px with respect to py Є IRS(px).

Step 1. Set the root node (ps, α(ps), IRS(ps)) as a nonterminal node, where:
(a) ps is the starting place, β(ps) = ds;
(b) α(ps) = ys Є [0, 1];
(c) IRS(ps) is the immediate reachability set of the starting place ps.

Step 2. Select one nonterminal node (pi, α(pi), IRS(pi)). If IRS(pi) = Ф, or if for every pk Є IRS(pi) the goal place pj ∉ RS(pk), then mark the node as a terminal node. Otherwise, if pj Є IRS(pi), α(pi) ≥ λ and CFij = μ Є [0, 1], then create a new node (pj, α(pj), IRS(pj)) in the tree with an arc labeled μ directed from (pi, α(pi), IRS(pi)) to (pj, α(pj), IRS(pj)), where α(pj) = α(pi) × μ; the latter node is called a success node.
Otherwise, for each pk Є IRS(pi):
If APik = Ф (i.e. pi has no adjacent place), the goal pj Є RS(pk), α(pi) ≥ λ, CFik = μ Є [0, 1], and pk does not appear in any node on the path between the root node (ps, α(ps), IRS(ps)) and the selected node (pi, α(pi), IRS(pi)), then create a new node (pk, α(pk), IRS(pk)) in the tree with an arc labeled μ directed from (pi, α(pi), IRS(pi)) to (pk, α(pk), IRS(pk)), where α(pk) = α(pi) × μ; the node (pk, α(pk), IRS(pk)) is a nonterminal node.
Else, if APik = {pa, pb, ..., pz}, all adjacent places of pi, and the goal place pj Є RS(pk), then request the user to enter the degrees of truth of the propositions da, db, ..., dz, say ya, yb, ..., yz respectively, and set g = min(α(pi), ya, yb, ..., yz). If g ≥ λ and CFik = μ Є [0, 1], then create the node (pk, α(pk), IRS(pk)) in the tree with an arc labeled μ directed from (pi, α(pi), IRS(pi)) to (pk, α(pk), IRS(pk)), where α(pk) = g × μ; the node (pk, α(pk), IRS(pk)) is a nonterminal node.
Else, the node (pi, α(pi), IRS(pi)) is a terminal node.

Step 3. If no nonterminal nodes exist, go to Step 4; otherwise, go to Step 2.

Step 4. If there are no success nodes, then there is no antecedent-consequence relationship between ds and dj; stop. Else, the path from the root node to each success node is called a reasoning path. Let Q = {(pj, s1, IRS(pj)), (pj, s2, IRS(pj)), ..., (pj, sm, IRS(pj))}, where si Є [0, 1] and 1 ≤ i ≤ m, be the set of success nodes. Then z = max(s1, s2, ..., sm) is the degree of truth of the proposition dj.
VIII. EXAMPLES

Example 1 (without adjacent places):
Let d1, d2, d3, d4, d5, d6, d7, d8 and d9 be nine propositions. Assume that the threshold value is λ = 0.25 and that the knowledge base of a rule-based system contains the following fuzzy production rules:
R1: IF d1 THEN d2 (CF = 0.80)
R2: IF d1 THEN d5 (CF = 0.90)
R3: IF d2 THEN d3 (CF = 0.90)
R4: IF d2 THEN d4 (CF = 0.85)
R5: IF d3 THEN d7 (CF = 0.75)
R6: IF d4 THEN d6 (CF = 0.80)
R7: IF d5 THEN d7 (CF = 0.90)
R8: IF d7 THEN d6 (CF = 0.85)
R9: IF d6 THEN d8 (CF = 0.90)
The degree of truth of proposition d1 at place p1 (the starting place), given by the user, is 0.70. Calculate the degree of truth of proposition d6 at place p6 (the goal place).
The rules and the fact can be modeled by the fuzzy Petri net (FPN) shown in Fig. 10. The immediate reachability set and the reachability set for each place pi Є P in Fig. 10 are shown in Table II, and the set of adjacent places for each pair of places is shown in Table III. Solving the problem with the given algorithm, the tree sprouts as shown in Fig. 10(a). There are three success nodes, and we obtain:
Q = {(p6, 0.31, {p8}), (p6, 0.38, {p8}), (p6, 0.48, {p8})}
z = max(0.31, 0.38, 0.48) = 0.48.
So the degree of truth of the proposition d6 at place p6 is 0.48.
Fig. 10: Marked fuzzy Petri net of Example 1.
Fig. 10(a): Sprouting tree of Example 1.
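The sprouting-tree search of Section VII can be sketched in Python for rule bases without adjacent places, using our own graph encoding of the nine rules above; running it reproduces the value 0.48 obtained in Example 1:

RULES = {  # antecedent place -> [(consequent place, CF)]
    1: [(2, 0.80), (5, 0.90)],
    2: [(3, 0.90), (4, 0.85)],
    3: [(7, 0.75)],
    4: [(6, 0.80)],
    5: [(7, 0.90)],
    6: [(8, 0.90)],
    7: [(6, 0.85)],
}

def reason(start, y_start, goal, lam=0.25):
    """Degree of truth of the goal proposition, or None if unreachable."""
    best = None
    def sprout(place, y, visited):
        nonlocal best
        if place == goal:                  # success node on a reasoning path
            best = y if best is None else max(best, y)
            return
        if y < lam:                        # below threshold: branch dies
            return
        for nxt, cf in RULES.get(place, []):
            if nxt not in visited:         # no place repeats on a path
                sprout(nxt, y * cf, visited | {nxt})
    sprout(start, y_start, {start})
    return best

print(reason(start=1, y_start=0.70, goal=6))  # -> 0.48195, i.e. about 0.48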
TABLE II: IMMEDIATE REACHABILITY SET AND REACHABILITY SET FOR EACH PLACE pi IN FIG. 10

Place pi    IRS(pi)       RS(pi)
p1          {p2, p5}      {p2, p3, p4, p5, p6, p7, p8}
p2          {p3, p4}      {p3, p4, p6, p7, p8}
p3          {p7}          {p6, p7, p8}
p4          {p6}          {p6, p8}
p5          {p7}          {p6, p7, p8}
p6          {p8}          {p8}
p7          {p6}          {p6, p8}
p8          Ф             Ф
TABLE III: SET OF ADJACENT PLACES APik FOR EACH PAIR OF PLACES IN FIG. 10

Place pi    Place pk    APik
p1          p2          Ф
p1          p5          Ф
p2          p3          Ф
p2          p4          Ф
p3          p7          Ф
p4          p6          Ф
p5          p7          Ф
p7          p6          Ф
p6          p8          Ф
Example 2 (with adjacent places):
Let d1, d2, ..., d10 be ten propositions. Assume that the threshold value is λ = 0.25 and that the knowledge base of a rule-based system contains the following fuzzy production rules:
R1: IF d1 THEN d2 (CF = 0.80)
R2: IF d1 THEN d5 (CF = 0.90)
R3: IF d2 THEN d3 (CF = 0.90)
R4: IF d2 THEN d4 (CF = 0.85)
R5: IF d3 THEN d7 (CF = 0.75)
R6: IF d4 THEN d6 (CF = 0.80)
R7: IF d5 AND d9 THEN d7 (CF = 0.90)
R8: IF d7 AND d10 THEN d6 (CF = 0.85)
R9: IF d6 THEN d8 (CF = 0.90)
The degrees of truth of proposition d1 at place p1 (the starting place), d9 at place p9 and d10 at place p10, given by the user, are 0.70, 0.50 and 0.65 respectively. Calculate the degree of truth of proposition d6 at place p6 (the goal place).
The rules and the facts can be modeled by the fuzzy Petri net (FPN) shown in Fig. 11. The immediate reachability set and the reachability set for each place pi Є P in Fig. 11 are shown in Table IV, and the set of adjacent places for each pair of places is shown in Table V. Solving the problem with the given algorithm, the tree sprouts as shown in Fig. 11(a). There are three success nodes, and we obtain:
Q = {(p6, 0.31, {p8}), (p6, 0.37, {p8}), (p6, 0.38, {p8})}
z = max(0.31, 0.37, 0.38) = 0.38.
So the degree of truth of the proposition d6 at place p6 is 0.38.
Fig. 11: Marked fuzzy Petri net of Example 2.
Fig. 11(a): Sprouting tree of Example 2.
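The previous sketch extends to adjacent places: when a transition has extra AND antecedents, their user-given degrees enter through g = min(...), as in Step 2 of the algorithm. The encoding is again our own; the final result is about 0.38, matching z above:

GIVEN = {9: 0.50, 10: 0.65}   # degrees of d9 and d10 supplied by the user
RULES = {  # antecedent -> [(consequent, CF, extra AND antecedents)]
    1: [(2, 0.80, []), (5, 0.90, [])],
    2: [(3, 0.90, []), (4, 0.85, [])],
    3: [(7, 0.75, [])],
    4: [(6, 0.80, [])],
    5: [(7, 0.90, [9])],      # R7: IF d5 AND d9 THEN d7
    6: [(8, 0.90, [])],
    7: [(6, 0.85, [10])],     # R8: IF d7 AND d10 THEN d6
}

def reason(start, y_start, goal, lam=0.25):
    best = None
    def sprout(place, y, visited):
        nonlocal best
        if place == goal:
            best = y if best is None else max(best, y)
            return
        if y < lam:
            return
        for nxt, cf, extras in RULES.get(place, []):
            if nxt not in visited:
                g = min([y] + [GIVEN[e] for e in extras])
                if g >= lam:
                    sprout(nxt, g * cf, visited | {nxt})
    sprout(start, y_start, {start})
    return best

print(reason(start=1, y_start=0.70, goal=6))  # -> 0.3825, i.e. about 0.38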
TABLE IV: IMMEDIATE REACHABILITY SET AND REACHABILITY SET FOR EACH PLACE pi IN FIG. 11

Place pi    IRS(pi)       RS(pi)
p1          {p2, p5}      {p2, p3, p4, p5, p6, p7, p8}
p2          {p3, p4}      {p3, p4, p6, p7, p8}
p3          {p7}          {p6, p7, p8}
p4          {p6}          {p6, p8}
p5          {p7}          {p6, p7, p8}
p6          {p8}          {p8}
p7          {p6}          {p6, p8}
p8          Ф             Ф
p9          {p7}          {p6, p7, p8}
p10         {p6}          {p6, p8}
TABLE V: SET OF ADJACENT PLACES APik FOR EACH PAIR OF PLACES IN FIG. 11

Place pi    Place pk    APik
p1          p2          Ф
p1          p5          Ф
p2          p3          Ф
p2          p4          Ф
p3          p7          Ф
p4          p6          Ф
p5          p7          {p9}
p7          p6          {p10}
p9          p7          {p5}
p10         p6          {p7}
p6          p8          Ф
IX. CONCLUSION

The FPN presented here represents the fuzzy production rules of a rule-based system. Knowledge representation in the domain of application helps in developing systematic procedures for supporting fuzzy reasoning, which in turn allows computers to reason more like human beings, the main criterion of computational intelligence (CI).
The time complexity of the fuzzy reasoning algorithm is O(mn), where m is the number of transitions and n the number of places. The execution time is proportional to the number of nodes of the sprouting tree generated by the algorithm, so the logic can be executed very efficiently.

REFERENCES
1. C. L. Chang, Introduction to Artificial Intelligence Techniques. Austin, TX: JMA Press, 1985.
2. S. M. Chen, "A new approach to handling fuzzy decision making problems", IEEE Trans. Syst., Man, Cybern., vol. SMC-18, no. 6, pp. 1012-1016, Nov./Dec. 1988.
3. C. G. Looney, "Fuzzy Petri nets for rule-based decision making", IEEE Trans. Syst., Man, Cybern., vol. SMC-18, no. 1, pp. 178-183, Jan./Feb. 1988.
4. M. Mizumoto, "Fuzzy controls under various fuzzy reasoning methods", Inform. Sci., vol. 45, pp. 129-151, 1988.
5. S. Ribaric, "Knowledge representation scheme based on Petri net theory", Int. J. Pattern Recognition Artif. Intell., vol. 2, pp. 691-700, 1988.
6. D. Tabak, "Petri net representation of decision models", IEEE Trans. Syst., Man, Cybern., vol. SMC-15, no. 6, pp. 812-818, Nov./Dec. 1985.
7. K. P. Adlassnig, "Fuzzy set theory in medical diagnosis", IEEE Trans. Syst., Man, Cybern., vol. SMC-16, no. 2, pp. 270-276, Mar./Apr. 1986.
8. P. N. Creasy, "An information systems view of conceptual graphs", in Proc. Int. Comput. Symp., vol. 2, 1988, pp. 833-838.
9. B. R. Gaines and M. L. Shaw, "From fuzzy logic to expert systems", Inform. Sci., vol. 36, pp. 5-15, 1985.
10. A. Giordana and L. Saitta, "Modelling production rules by means of predicate transition networks", Inform. Sci., vol. 35, pp. 1-41, 1985.
11. K. S. Leung and W. Lam, "Fuzzy concepts in expert systems", IEEE Comput. Mag., vol. 21, no. 9, pp. 43-56, 1988.
12. J. L. Peterson, Petri Net Theory and the Modeling of Systems. Englewood Cliffs, NJ: Prentice-Hall, 1981.
Study on problem solving in parallel programs using Petri nets

Sunita Panda, Asst. Professor, Dept. of CSE, SIET, Dhenkanal, India. E-mail: [email protected]
Bhagaban Swain, Asst. Professor, Dept. of Information Technology, Central University (Assam University), Silchar, India. E-mail: [email protected]
Abstract- Parallel programming has allowed us to solve problems which otherwise seemed impossible with sequential programming, largely due to constraints of memory volume or, in some cases, solving time. However, with parallel programming we come across different types of errors and bugs, and it is necessary to debug them; existing tools and approaches lag far behind in overcoming these errors. This paper looks at Petri nets as an application to overcome such errors.

Keywords: parallel programming, parallel program debugging, debugging techniques, Petri nets

I. INTRODUCTION
Parallel programming gives us an opportunity to solve problems that could not be solved by sequential programming due to resource restrictions on memory volume or on solving time. However, when we use parallel programming instead of sequential programming, we encounter a set of problems that were more or less successfully solved in sequential programming. Among them we can distinguish the scalability of parallel programs, the reuse of source text, and the correctness and debugging of parallel programs.
In this paper we study a new approach to debugging parallel MPI programs with the help of Petri nets, a formal language for the specification of parallel and distributed systems. Using Petri nets we gain a natural language for specifying parallel program state and a new, powerful way of implicitly defining subsets of sequential processes by means of Petri net markings and steps, for the sake of parallel program debugging.
The problem of debugging parallel programs is especially acute because the time consumed by debugging a parallel program nowadays often exceeds the expense of writing the initial parallel program source. At the root of this problem is the non-deterministic behavior of parallel program execution, which makes cyclic reproduction of an erroneous situation very difficult, and investigation of the cause of an error even more difficult.
II. PETRI NET MODEL OF COMMUNICATING SEQUENTIAL PROCESSES
At the basis of modern debugging tools, the most powerful of which is TotalView, lies the idea of modifying the source program by adding debugging information and code. This relatively small additional part of the program delivers to the debugging system the information necessary for monitoring program state and realizing the usual set of service functions by means of which the programmer can control execution of the program and analyze its current state. Those service functions are stopping and resuming sequential process execution, setting breakpoints, and examining process memory and stack. Historically, all the above-mentioned functions control the execution of one single process, and could be expanded to user-defined process groups. In this sense a parallel program debugging tool differs from sequential program debuggers by the extended functionality of defining process groups for debugging actions. This functionality is directly linked with the language describing parallel program state; at present, most debuggers use for this purpose the same language that was used for the initial coding. Therefore, under the OpenMP standard we can use "teams" as groups of processes, while under the MPI standard communicators define their own groups of processes. With OpenMP, programmers are freed from the routine and sophisticated work of interprocess communication, and this results in a debugging stage comparable in ease with that of sequential programming. On the contrary, when the MPI standard is used, parallel program correctness directly depends on the correctness of the programmer-written interprocess communication procedures, and the increased complexity of parallel program debugging is not compensated by the modest simplification of process grouping that communicators provide.
Petri nets and most extensions of this language [2, 3] are usually grounded in an algebraic, set-theoretic approach to creating descriptions. The descriptions obtained with this approach differ strongly from the textual representation of usual programming languages. In addition, the possibility, standard in most programming languages, of defining data types with many values (for example, real, float, integer and others) complicates the application of Petri nets to the exact specification of programs.
To simplify program description the authors have developed an extension of Petri nets called Petri nets for programming (PNP). Its purpose is to model the structure
of a parallel program with a large set of values, described in terms of an imperative programming language. There are two kinds of model in PNP: plain Petri nets and hierarchical Petri nets.
A plain Petri net is specified by its own structure and a set of special inscriptions attached to each element of the net. The structure of a plain Petri net, as in common Petri nets, consists of a set of places, transitions and arcs. Places are used to specify the state of the model and are represented by circles in pictures; inscriptions describing tokens can be attached to any place. Transitions are used to specify the events possible in the model and are represented by rectangles. An excitation predicate can be attached to a transition to specify whether the transition can fire; a transition can fire only if the value of its predicate is true. Inscriptions called substitutions are attached to arcs incoming to a transition; this type of inscription gives incoming tokens names that are used in predicates and expressions. An expression is a type of inscription attached to an arc outgoing from a transition; based on the incoming tokens, the expression calculates the new values of tokens. In addition to predicate inscriptions, a transition can contain inscriptions describing the rules of access point participation. To keep the inscription language independent of the notation, a rule interpreter is used; it provides the functioning of imperative programming language constructions in models described in PNP.
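To make the inscription machinery concrete, the following is a minimal Python sketch of one plain-PNP-style transition firing; representing tokens as dictionaries, and the predicate and arc expressions as Python callables standing in for the interpreted inscription language, is our illustrative assumption, not the authors' implementation:

# Minimal sketch of a plain-PNP-style transition firing: substitutions bind
# ingoing tokens to names, a predicate guards the firing, and arc
# expressions compute the outgoing tokens.

def fire(transition, marking):
    """Fire one transition if its excitation predicate holds; marking maps
    place name -> list of tokens (tokens here are plain dicts)."""
    # substitutions: give the ingoing tokens names usable in expressions
    env = {name: marking[place][0]
           for place, name in transition["substitutions"].items()
           if marking.get(place)}
    if len(env) < len(transition["substitutions"]):
        return False                      # some input place is empty
    if not transition["predicate"](env):  # excitation predicate
        return False
    for place in transition["substitutions"]:
        marking[place].pop(0)             # consume ingoing tokens
    for place, expr in transition["expressions"].items():
        marking.setdefault(place, []).append(expr(env))
    return True

t = {"substitutions": {"p_in": "x"},
     "predicate": lambda env: env["x"]["value"] > 0,
     "expressions": {"p_out": lambda env: {"value": env["x"]["value"] - 1}}}
m = {"p_in": [{"value": 2}], "p_out": []}
fire(t, m)
print(m)  # {'p_in': [], 'p_out': [{'value': 1}]}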
Fig. 1. Plain Petri net
A hierarchical PNP is defined as a composition of a number of PNPs, each of which is represented as a rectangle in a picture. The composition operation is described by two nets and two access points defined in each net. Access points are represented by little squares on the net rectangle and are linked with lines. The rules of net fusion guarantee that transitions with the same marks can be fired only simultaneously. An example of the model of a function call and a function body is displayed in Fig. 2; the result of composing the function call with its body is displayed in Fig. 3.
Fig. 2. Hierarchical Petri net (function calls and body)
Fig. 3. Result of composition
Using PNP we can define the method of parallel program model construction as follows. The parallel program control flow transforms into the Petri net structure, and parallel program processes with their own data transform into tokens. The control flow of the resulting model is driven by data, while the description of the operations over the data remains in terms of the source programming language [4].
The models of the parallel program obtained by this method possess a graphical representation which can be used during debugging.
III. DEBUGGING LANGUAGE IN TERMS OF PETRI NETS
Debugging of programs is a laborious process which, over its long development, has acquired its own terminology, which we call the language of debugging. During debugging the developer uses such terms as obtaining a debug build, starting the program under debugging, stopping and resuming execution, stepping through the program, setting breakpoints, and examining the state of the program. In Petri nets the researcher performs essentially similar operations.
The evident tool for investigating properties of the modeled program is simulation, which displays program functioning in dynamics. Simulation possesses many concepts similar to debugging: starting the program corresponds to starting the simulation from the initial marking, and the initial marking models the starting point of the program.
Each following simulation step is the firing of some set of transitions and the moving of tokens from input places to output places; this action is similar to step-by-step execution of the debugged program. The theory of Petri nets distinguishes interleaving and non-interleaving semantics of transition firing. Interleaving semantics defines the firing of each separate transition once, while non-interleaving operation allows defining a firing step of several transitions. For debugging of parallel programs this can mean that a step firing is similar to detailed execution of each separate command of a process, while non-interleaving operation can mean group execution of one or more
operations in several processes of the program. Alternation and combination of these possibilities gives a flexible tool for defining the set of operations in the parallel program that the developer plans to perform in one execution step. It should also be noted that if a transition models a function call then, as in real debuggers, it is possible to step through each operation of the function, or to treat the transition firing as the execution of one complex action. It is possible to interrupt the simulation when some marking is reached; in this case a marking of the Petri net corresponds to the concept of a breakpoint in traditional debugging tools. Reachability of some marking corresponds to reachability of some state in the program. Representing the marking on the Petri net graph makes it possible to present clearly the current state of each process separately and of the program as a whole. The data values described in tokens allow us to obtain additional information on the possible causes of an error.
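As a minimal sketch of this breakpoint idea, the following Python fragment runs a simulation step function until the current marking covers a target (breakpoint) marking; the step callback and the covering test are our assumptions, not a prescribed debugger interface:

# Minimal sketch of a marking used as a breakpoint: simulation runs step by
# step and pauses when the current marking covers the breakpoint marking.

def run_until_breakpoint(step, marking, breakpoint_marking, max_steps=1000):
    """step(marking) fires one enabled transition in place and returns
    False when nothing is enabled; we stop when every place listed in the
    breakpoint holds at least the requested number of tokens."""
    for n in range(max_steps):
        if all(len(marking.get(p, [])) >= k
               for p, k in breakpoint_marking.items()):
            return n          # breakpoint marking reached after n steps
        if not step(marking):
            return None       # deadlock before the breakpoint was reached
    return None

# Trivial demo: a "net" that moves one token per step
m = {"start": [1, 1], "done": []}
def step(marking):
    if marking["start"]:
        marking["done"].append(marking["start"].pop())
        return True
    return False
print(run_until_breakpoint(step, m, {"done": 2}))  # 2

A debugger front end would then display the reached marking on the net graph, one process per token, in the manner described above.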
IV. DEBUGGING TECHNIQUE IN TERMS OF PETRI NETS
The main purpose of debugging is the detection of program errors, the search for and correction of the causes of their occurrence, and the testing of the corrected code. For this purpose the developer performs multiple executions of the program with the same data and conditions of execution. Using the mechanism of breakpoints and step-by-step execution, the developer reaches the place where an error occurs. Using PNP in this case allows visualizing the process of program execution, producing more information for the user on the current state of the parallel program.
Besides the advantages of visual representation, simulating the model of the program allows saving the sequence of transition firings and the sequence of reachable states. This information represents the history of program execution up to the given moment and provides additional information for analyzing the causes of an error. For highlighting certain elements in Petri nets there is the concept of marks, which can be used when saving the history of transition firings. Allocation of marks in real programs is possible in two variants: either it is performed manually by the user, or it is performed automatically by various libraries for monitoring parallel program state. In Petri nets both cases are possible: one can set a mark according to one's own reasoning under the given debugging conditions, or one can realize an automated setup of marks according to some criteria that the program possesses. Representation of certain special function calls by transitions can be one such criterion; for example, in an MPI program this criterion can be a call of an interaction function. In general, the presence of a model representation of the program makes it possible to build arbitrarily complex algorithms for the automatic placement of marks according to user criteria.
Based on event tracing, the debugger can control execution of the program, achieving identical program behavior. This possibility simplifies debugging in a non-deterministic execution environment, where the behavior of the program varies between parallel program restarts.
V. CORRECTNESS OF DEBUGGING PROCESS
For the debugging process to be correct it is necessary to show that the program modified by adding debug information does not change its visible behavior; in other words, that the debug version of the program is behaviorally equivalent to the release version.
First we note that a parallel program written with the MPI standard is a set of communicating sequential processes [1], and a sequential process can be described in terms of a plain PNP. Therefore a parallel program can be represented as a hierarchical PNP net composed from sequential process nets and nets describing the functions used in the program. The executing program itself can be modeled by adding to the hierarchical PNP net a PNP net that models the executing environment.
Fig. 4. Example of parallel MPI-program
Some of the events that occur in a parallel program are in one way or another visible, that is, they appear in interaction with physical devices or with other processes, while other events are invisible. Only visible events are important for the user of the program: if two programs display identical visible behavior, for the user they are indistinguishable, or simply identical. In terms of Petri nets this means that for matching parallel programs represented by Petri nets, the bisimulation equivalence criterion can be used.
In order to debug a parallel program, a translator, compiler or other tool adds debug information to the parallel program. This debug information adds to the initial net new events visible only to the debugger, so we can treat these events as invisible in all contexts except debugging.
Fig. 5. Example of parallel MPI-program debugging
It seems quite obvious that adding to a state-machine Petri net invisible events that do not change the sequences of initial events does not change the visible behavior of the net. Less obvious is the fact that if all the transitions added in a composition of a Petri net are invisible and do not change the sequence of initial events, then the visible behavior of the modified net does not change. This fact follows from the construction algorithm of the reachability tree used for comparing Petri nets with respect to bisimulation equivalence.
Therefore we can say that the criterion of correctness for the debugging process is that the code added to the program is not visible and does not change the sequence of initial events. Thus the debugging process can be represented in Petri nets, and Petri nets give an opportunity to formally prove the correctness of debugging tools. Using the obtained criterion informally, we can say that for correct debugging it is necessary that the added code neither change the variables of the parallel program nor use interaction functions visible in the source program.
REFERENCES
[1] Hoare C.A.R., "Communicating Sequential Processes", Series in Computer Science, Prentice-Hall International, 1985.
[2] Best E., Devillers R., Koutny M., "Petri Net Algebra", Springer-Verlag, 2001.
[3] Jensen K., "Coloured Petri Nets. Basic Concepts, Analysis Methods and Practical Use. Volume 1, Basic Concepts", Monographs in Theoretical Computer Science, 2nd edition, Springer-Verlag, 1997.
[4] Golenkov E.A., Sokolov A.S., Tarasov G.V., Kharitonov D.I., "Experimental version of parallel programs translator from Petri nets to C++", Proc. of the Parallel and Computational Technologies, Novosibirsk, Russia, 2001, pp. 226-331.
Study on the impact of cloud computing in IT sector
Smruti Ranjan Dash, Asst. Professor, Dept. of CSE, SIET, Dhenkanal, India, E-mail: [email protected]
Bhagaban Swain, Asst. Professor, Dept. of Information Technology, Central University, Silchar, India, E-mail: [email protected]
Asutosh Rath, System Developer, OracleApps, IBM, Kolkata, E-mail: [email protected]
Abstract- Cloud Computing has recently become a buzzword in the computing paradigm. Professionals have been astonished by the way cloud computing has climbed the popularity chart, the reason being that all IT services are present at one place, the so-called cloud, for all to avail. This is a significant improvement over the way we have been accessing IT services until recently, and it will help us tremendously in accessing those services. On the other hand, cloud computing will pose various challenges to IT management. These challenges include data governance; manageability issues such as auto-scaling and load balancing; the shrinking of the IT department, mainly due to the aspects of reliability and availability; the realization of security through virtualization and the use of the hypervisor; and, most importantly, the upgradation to a new skill set. This paper deals with these challenges and with how to sustain the IT sector in the face of them, because any negative impact on the IT sector will invariably have a negative ripple effect on the economy as well.
Keywords: Auto-scaling, load balancing, virtualization, hypervisor
I. INTRODUCTION
Cloud Computing is emerging at the convergence of three major trends: service orientation, virtualization, and standardization of computing through the internet. Cloud Computing enables users and developers to utilize services without knowledge of, expertise with, or control over the technology infrastructure that supports them. The concept generally incorporates combinations of the following:
Infrastructure as a Service (IaaS)
Platform as a Service (PaaS)
Software as a Service (SaaS)
Users avoid capital expenditure on hardware, software, and services when they pay a provider only for what they use. Consumption is billed on a utility basis (e.g. resources consumed, like electricity) or a subscription basis (e.g. time based, like a newspaper) with little or no upfront cost. Based on this discussion it is quite evident that cloud computing is a very interesting concept with a lot of positives, but, as every positive concept brings a few negatives with it, cloud computing too has disadvantages associated with it. Broadly speaking, with every internet-dependent application connectivity is a major issue, and hence lack of connectivity will result in 100% downtime. Further, cloud computing is vulnerable to security exploits resulting in problems like Denial of Service (DoS). By centralizing services, cloud computing increases the likelihood that a system failure becomes catastrophic rather than isolated. Besides these disadvantages, one major problem that will in all probability be seen in the near future is the impact that cloud computing will have on the IT sector: it is apprehended that the IT sector will face a crunch in its growth, and there are many possible reasons for this crunch. In this paper we cite certain challenges and also provide a background for converting these challenges into opportunities.
II. CHALLENGES
Data Governance
By moving data into the cloud, enterprises will lose some of the ability to govern their own data set: its creation, distribution, use, maintenance and disposition.
Manageability
As infrastructure environments become increasingly dynamic and virtualized, the "virtual datacenter" or VDC will emerge as the new enterprise compute platform. The questions are how to build management capabilities on top of the existing cloud infrastructure/platforms, and how to deal with management issues such as auto-scaling and load balancing.
Reliability and Availability
IT departments will shrink as users go directly to the cloud for IT resources. Business units and even individual employees will be able to control the processing of
information directly, without the need for legions of technical specialists.
Virtualization Security
Large enterprises are building their own private clouds. Private cloud servers are usually run in datacenters managed by third parties. Private clouds address the security concerns of large enterprises: they are scalable, growing and shrinking as needed, and they are managed centrally in a virtualized environment.
New Skill Set Required
Cloud computing will shift the skills needed by IT workers. It is no longer enough for a CIO to oversee rollouts, integrations and development projects. Instead, IT professionals need to focus on extracting the most business value from new technologies, e.g. project management, quality assurance testing, business analysis and other high-level abstract thinking.
All the above issues will certainly put a dampener on IT sector growth, but we feel that these issues can be turned around to improve upon the performance of cloud computing.
III. OPPORTUNITIES
Essentially we do not see in-house IT departments changing that much. Some may shrink as certain skills are no longer needed once some applications run in the cloud. However, even with good cloud-based applications there will still be a need for IT support and for desktop support, and there will still need to be an IT department to address issues with the cloud provider. And let us not forget that many processes, solutions and applications will not make it to the cloud; these will remain in house.
First, and foremost, the IT department will become the keeper of the Service Level Agreement (SLA). Whereas before, it provided the services (and, unfortunately, did little to manage to a specific level), part of going to the cloud is the guarantee of service, which someone will need to ensure is met. So the good news is a new job for IT.
The focus of the IT organization will shift from acquiring, deploying and managing hardware and software to evaluating, contracting and monitoring the performance of cloud service providers, to ensure that they properly integrate with and enhance on-premise operations. This should reduce the time spent on day-to-day firefighting and free IT to focus on more strategic initiatives.
Further, with the concept of autonomic computing, other challenges of cloud computing would be negated to a certain extent, further popularizing cloud computing.
Fig.: Autonomic Computing (courtesy IBM)
IV. CONCLUSION
The potential of Cloud Computing is considered so vast that it will surely open up a new dimension for the generations to come. In the long run, most companies (large, mid-size, and small) do not want the overhead cost of running a large IT department that is solely involved in sustaining existing enterprise applications. Cloud computing happens to be the new alternative, but wholesale eradication of the IT department does not look feasible: the role of the IT department is supposed to change due to cloud computing, not its existence. It is safe to say that the so-called 'avatar' of the IT sector will change, but not its essentiality, which will continue to be there. The feeling is that in the wake of all these challenges the IT sector in general will shrink, but its work will have to be more intense, to ensure that the services guaranteed by cloud computing are indeed being provided.
WSN for Super Cyclone Prediction Using Genetic Fuzzy Approach
Arabinda Nanda1, Department of CSE, Krupajal Engineering College, Bhubaneswar, Orissa, India, [email protected]
Omkar Pattnaik2, Department of CSE, S.I.E.T Dhenkanal, Dhenkanal, Orissa, India, [email protected]
Sasmita Pani3, Department of CSE, S.I.E.T Dhenkanal, Dhenkanal, Orissa, India, [email protected]
Abstract- Super cyclone prediction is very useful for human activities and for reducing construction cost in the marine research environment. The Wireless Sensor Network (WSN) is one of the research areas of the information age, providing researchers with the capability of developing real-time monitoring systems. This paper discusses the development of a WSN to detect super cyclones, which includes the design, development and implementation of a WSN for real-time monitoring, the development of the genetic fuzzy system needed to enable efficient data collection and data aggregation, and the network requirements of the deployed super cyclone detection system. The actual deployment is at Paradeep port (Latitude: 20° 16' 60 N, Longitude: 86° 42' 0 E) on the north-east coast of India, a region well known for dealing with bulk cargo apart from other clean cargoes.
Keywords- wireless sensor network, fuzzy inference, super cyclone, heterogeneous networks.
I. INTRODUCTION
Accurate super cyclone prediction is an important problem for construction activities in coastal and offshore areas. In some coastal areas the slopes are very gentle, and tidal variation produces waterfront distances ranging from a hundred meters to a few kilometers. In offshore areas, accurate super cyclone data is helpful for successful and safe operations. The applications of Wireless Sensor Networks (WSN) span a wide variety of scenarios. In most of them, the network is composed of a significant number of nodes deployed over an extensive area in which not all nodes are directly connected; data exchange is then supported by multihop communications. Routing protocols are in charge of discovering and maintaining the routes in the network; however, the suitability of a particular routing protocol mainly depends on the capabilities of the nodes and on the application requirements [1].
Orissa on the East Coast, along with West Bengal and Andhra Pradesh, has the locational disadvantage of lying in the path of the depressions and severe cyclonic storms that occur before the onset of the south-west monsoon or after it recedes. The super cyclone and severe cyclone of October 1999 distressed 14 prosperous coastal districts, throwing the lives of one crore people out of gear. The exceptional cyclonic gale, high flood, tidal ingress and water stagnation were the main factors of distress and calamity there. Though relief poured in from all parts of the world, it did not reach the victims due to improper disaster management. Proposals for cyclone shelters, a coastal highway, conservation of the declining mangrove and other forests, afforestation, and drainage improvements address the main needs of the area. Improvement of saline and other embankments, de-silting (dredging) of the mouths of channels and rivers, and additional ventage to the roads and cross-drainage structures have been advocated. Proper forecasting and other curative measures, with a proper disaster management programme, can mitigate flood and cyclone damage to a great extent.
Environmental disasters are largely unpredictable and occur within very short spans of time. Therefore technology has to be developed to capture relevant signals with minimum monitoring delay. Wireless sensors are one of the latest technologies that can quickly respond to rapid changes of data and send the sensed data to a data analysis center in areas where cabling is not possible. WSN technology offers quick capturing, processing, and transmission of critical data in real time with high resolution. However, it has its own limitations, such as relatively low battery power and low memory availability compared to many existing technologies. It does, though, have the advantage that sensors can be deployed in hostile environments with a bare minimum of maintenance, which fulfills a very important need of any real-time monitoring system, especially in unsafe or remote scenarios. This paper discusses the design and deployment of a super cyclone prediction and detection system using a WSN at Paradeep port, Orissa (State), India. The increase in depressions during the monsoons over the Bay of Bengal is directly related to the rise in sea surface temperature, an impact of global warming. Abnormal behavior of the sea surface temperature has started to affect the atmospheric climate over the Bay of Bengal, and the increased number of continuous depressions has also led to an increase in the height and velocity of the sea waves, which causes cyclones on the sea coast.
The remainder of the paper is organized as follows. Section II describes the research background and related work. Section III describes the genetic programming paradigm. Section IV presents the Mamdani fuzzy model. Section V describes the wireless sensor test bed. Section VI gives the conclusion and future work.
II. RESEARCH BACKGROUND AND RELATED WORK
The research background and relevant technologies include: (1) the definition of a super cyclone; (2) wireless sensor network technology.
Definition of super cyclone
What is a super cyclone?
A super cyclone is one whose wind speed encountered in the core area of a tropical cyclone equals or exceeds 226 km/hr.
What causes a super cyclone?
A cyclone is a very large mass of air, ranging from 800 km to 1600 km in diameter, with low pressure surrounded by a high-pressure air mass. Due to unequal heating of the earth's surface a pressure difference arises, and strong winds blow in a spiral motion towards the low-pressure centre from all directions because of the rotation of the earth around its own axis, causing a cyclonic gale of more than 50 kmph. The large whirling mass of air at the centre, where pressure is low, is known as the cyclone; it acts like a chimney through which air gets lifted, expands, cools and finally condenses, causing precipitation. If the precipitation is caused by a cold front it is very intense but short-lived, while that caused by a warm front is more continuous. A super cyclone is one whose wind speed encountered in the core area of a tropical cyclone equals or exceeds 226 km/hr.
III. GENETIC PROGRAMMING PARADIGM
Genetic programming is a branch of the genetic algorithm. The main difference between genetic programming and genetic algorithms is the representation of the solution: genetic programming creates computer programs in the LISP or Scheme programming languages as the solution, whereas genetic algorithms create a string of numbers that represents the solution. Genetic programming uses four steps to solve problems:
1. Generate an initial population of random compositions of the functions and terminals of the problem.
2. Execute each program in the population and assign it a fitness value according to how well it solves the problem.
3. Create a new population of computer programs:
(a) copy the best existing programs;
(b) create new computer programs by mutation;
(c) create new computer programs by crossover (sexual reproduction).
4. The best computer program that appeared in any generation, the best-so-far solution, is designated as the result of genetic programming [Koza 1992].
In our proposed system, we choose the crossover operation, selecting parents to produce children. The variables x, y and z are used as the inputs of the fuzzy inference system, and the variable r is used as its output. Consider the following two parental LISP S-expressions:
(IF (AND (x y)) (r))
(IF (AND (x z)) (r))
A child resulting from crossover is shown below:
(IF (AND (y z)) (r))
IV. MAMDANI FUZZY MODEL
The most commonly used fuzzy inference technique is the so-called Mamdani method. In 1975, Professor Ebrahim Mamdani of London University built one of the first fuzzy systems to control a combination of a steam engine and a boiler, applying a set of fuzzy rules supplied by experienced human operators. The Mamdani-style fuzzy inference process is performed in four steps:
1. Fuzzification of the input variables
2. Rule evaluation (inference)
3. Aggregation of the rule outputs
4. Defuzzification
Step 1: Fuzzification
The first step is to take the crisp inputs x1, y1 and z1 (depression over sea, temperature over sea and velocity of wind) and determine the degree to which these inputs belong to each of the appropriate fuzzy sets. We examine a simple three-input, one-output problem that includes two rules:
Rule 1: IF x is A2 AND y is B2 THEN r is O2
Rule 2: IF x is A2 AND z is C2 THEN r is O2
In reality these rules read:
Rule 1: IF depression over sea is more AND temperature over sea is more THEN super cyclone is more.
Rule 2: IF depression over sea is more AND velocity of wind is more THEN super cyclone is more.
Step 2: Rule Evaluation
The second step is to take the fuzzified inputs, μ(x=A1) = 0.2, μ(x=A2) = 0.8, μ(y=B1) = 0.2, μ(y=B2) = 0.8 and μ(z=C1) = 0.2, μ(z=C2) = 0.8, and apply them to the antecedents of the fuzzy rules. If a given fuzzy rule has multiple antecedents, the fuzzy operator (AND or OR) is used to obtain a single number that represents the result of the antecedent evaluation.
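The crossover on the two parental S-expressions can be sketched directly in Python; the nested-tuple encoding of the LISP trees and the explicit crossover points are illustrative assumptions:

# Minimal sketch of GP crossover on the parental S-expressions above,
# encoded as nested tuples; crossover points i and j select which AND
# argument each parent contributes.

parent1 = ("IF", ("AND", "x", "y"), "r")
parent2 = ("IF", ("AND", "x", "z"), "r")

def crossover(p1, p2, i, j):
    """Build a child whose AND node takes argument i of p1's AND subtree
    and argument j of p2's AND subtree."""
    child_and = ("AND", p1[1][i], p2[1][j])
    return ("IF", child_and, p1[2])

# Taking the second argument from each parent reproduces the child
# (IF (AND (y z)) (r)) given in the text.
print(crossover(parent1, parent2, 2, 2))  # ('IF', ('AND', 'y', 'z'), 'r')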
RECALL: To evaluate the disjunction of rule antecedents, we use the OR fuzzy operation. Typically, fuzzy expert systems make use of the classical fuzzy union:
μA∪B(x) = max [μA(x), μB(x)]
Similarly, in order to evaluate the conjunction of rule antecedents, we apply the AND fuzzy operation, intersection:
μA∩B(x) = min [μA(x), μB(x)]
Rule 1: IF x is A2 (0.8) AND y is B2 (0.8) THEN r is O2 (0.8)
Rule 2: IF x is A2 (0.8) AND z is C2 (0.8) THEN r is O2 (0.8)
Step 3: Aggregation of the Rule Outputs
Aggregation is the process of unifying the outputs of all rules. We take the membership functions of all rule consequents, previously clipped or scaled, and combine them into a single fuzzy set. The input of the aggregation process is the list of clipped or scaled consequent membership functions, and the output is one fuzzy set for each output variable. Here both rules yield r is O2 (0.8), so the aggregate is r is O2 (0.8).
Step 4: Defuzzification
The last step in the fuzzy inference process is defuzzification. Fuzziness helps us to evaluate the rules, but the final output of a fuzzy system has to be a crisp number. The input of the defuzzification process is the aggregate output fuzzy set, and the output is a single number. There are several defuzzification methods, but probably the most popular one is the centroid technique, which finds the point where a vertical line would slice the aggregate set into two equal masses. Mathematically this centre of gravity (COG) can be expressed as:
COG = ∫ μA(x) · x dx / ∫ μA(x) dx
The final output of the system will be the super cyclone degree.
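As a cross-check of Steps 1-4, a minimal Python sketch is given below. The output universe [0, 10] and the triangular membership assumed for O2 ("super cyclone is more") are our illustrative assumptions; only the 0.8 firing strengths come from the text:

import numpy as np

# Minimal sketch of the four Mamdani steps for the two rules above, with an
# assumed output universe [0, 10] and a triangular membership for O2.

r = np.linspace(0.0, 10.0, 1001)                          # output universe (assumed)
mu_O2 = np.clip(1.0 - np.abs(r - 8.0) / 2.0, 0.0, 1.0)    # 'more' (assumed)

# Step 1: fuzzified inputs taken from the text
mu = {("x", "A2"): 0.8, ("y", "B2"): 0.8, ("z", "C2"): 0.8}

# Step 2: AND antecedents evaluated with min
w1 = min(mu[("x", "A2")], mu[("y", "B2")])    # rule 1 -> 0.8
w2 = min(mu[("x", "A2")], mu[("z", "C2")])    # rule 2 -> 0.8

# Step 3: clip each consequent and aggregate with max
aggregate = np.maximum(np.minimum(mu_O2, w1), np.minimum(mu_O2, w2))

# Step 4: centroid defuzzification, COG = sum(mu*r)/sum(mu)
cog = float(np.sum(aggregate * r) / np.sum(aggregate))
print(round(cog, 2))   # crisp super cyclone degree (8.0 for this symmetric set)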
V. WIRELESS SENSOR TEST BED
The WSN follows a two-layer hierarchy. Lower-layer wireless sensor nodes sample and collect the heterogeneous data from the sensor column, and the data packets are transmitted to the upper layer. The upper layer aggregates the data and forwards it to the sink node (gateway) kept at the deployment site. Data received at the gateway has to be transmitted to the Field Management Center (FMC), which is approximately 400 m away from the gateway; a Wi-Fi network is used between the gateway and the FMC to establish the connection. The FMC incorporates facilities such as a VSAT satellite earth station and a broadband network for long-distance data transmission. The VSAT satellite earth station is used for data transmission from the field deployment site at Paradeep sea beach, Orissa, India to the Data Management Center (DMC), situated within the state. The DMC consists of the database server and an analysis station, which performs data analysis and super cyclone modeling and simulation on the field data to determine the cyclone probability.
Fig.: A photograph of the Orissa super cyclone, 1999.
VI. CONCLUSION AND FUTURE WORK
Real-time monitoring for super cyclone prediction is one of the research areas available today in the field of geophysical research. This paper discusses the development of an actual field deployment of a WSN-based super cyclone prediction and detection system. The system uses a heterogeneous network composed of WSN, Wi-Fi, and satellite terminals for efficient delivery of real-time data to the DMC, to enable sophisticated analysis of the data and to provide cyclone warnings and risk assessments to the inhabitants of the region. In the future, this work will be extended to a full deployment using the lessons learned from the existing network. The network will be used for understanding the capability and usability of WSNs for critical and emergency applications. We also plan to experiment with this method, including a simulation and an implementation, to evaluate its performance and usability in a real sensor network application.
Fig.: A strengthening Cyclone Olaf, upper left, and a weakening Cyclone Nancy, right, eye each other across the South Pacific in this NASA Moderate Resolution Imaging Spectroradiometer (MODIS) image from February 15.
REFERENCES
[1] A. Nanda, A. K. Rath, S. K. Rout, "Real Time Wireless Sensor Network for Coastal Erosion using Fuzzy Inference System", International Journal of Computer Science & Emerging Technologies (IJCSET), Vol. 1, Issue 2, August 2010, pp. 47-51.
[2] E. R. Musaloiu, A. Terzis, K. Szlavecz, A. Szalay, J. Cogan, and J. Gray, "Life under your feet: A wireless soil ecology sensor network", 2006.
[3] H. Kung, J. Hua, and C. Chen, "Drought forecast model and framework using wireless sensor networks", Journal of Information Science and Engineering, vol. 22, 2006, pp. 751-769.
A Comparative Study of Dynamic Authenticity of Digital Signatures
Biswajit Tripathy1, Assistant Professor, Dept of Computer Sc & Engg, Synergy Institute of Engineering & Technology, Dhenkanal 759 001 (India), email: [email protected]
Jibitesh Mishra2, Associate Professor, Dept of Computer Sc & Engg, College of Engineering & Technology, Ghatikia, Bhubaneswar (India), email: [email protected]
Abstract- Globalization of the Internet has boosted electronic information exchange on both the personal and business levels. This popularity has turned information communication and secure Internet transactions, which require the verification of digital signatures and identities, into a hot issue. Most existing digital signature schemes are based on the public-key systems of RSA and ElGamal; security thus depends on the hardness of factorization and the discrete logarithm. In this paper we study the different encryption techniques used in the present scenario to support a common Internet-based e-commerce activity: fair document exchange between two parties.
Keywords: Digital Signature; Communication protocol; RSA; Encryption; ElGamal; Security
I. INTRODUCTION
Cryptography has evolved over the years from Julius Caesar's cipher, which simply shifts the letters of the words a fixed number of positions, to the sophisticated RSA algorithm, invented by Ronald L. Rivest, Adi Shamir and Leonard M. Adleman, and the elegant AES cipher (Advanced Encryption Standard), invented by Joan Daemen and Vincent Rijmen.
Cryptographic algorithms used nowadays by cryptosystems fall into two main categories: symmetric-key algorithms and asymmetric-key algorithms. Symmetric-key ciphers use the same key for encryption and decryption or, to be more precise, the key used for decryption is computationally easy to compute given the key used for encryption. Cryptography using symmetric ciphers is also called private-key cryptography.
Symmetric-key ciphers, in turn, fall into two categories: block ciphers and stream ciphers. Stream ciphers encrypt the plaintext one bit at a time, in contrast to block ciphers, which operate on a block of bits of a predefined length. The most popular block ciphers are DES, IDEA and AES, and the most popular stream cipher is RC4.
Using symmetric-key cryptography, two parties who want to communicate confidentially must both have access to the private key, which is a limiting aspect of this category of cryptography. In contrast with symmetric-key algorithms, in asymmetric-key algorithms the key used during encryption is distinct from that used during decryption: the encryption key is made public while the decryption key is kept secret. Within this scheme, two parties can communicate securely as long as it is computationally hard to deduce the private key from the public one. This is the case in today's asymmetric-key, or simply public-key, algorithms such as RSA, which relies on the difficulty of integer factorization. The future of cryptography resides in systems based on elliptic curves, which are a kind of public-key algorithm that may offer efficiency gains over other schemes.
II. NEED OF DATA PROTECTION
Here are some high-profile incidents of data breach:
Heartland Payment Systems (a provider of credit and debit card processing services) suffered a breach in October 2008 that compromised the data of over 100 million credit cards, due to malicious software that crossed Heartland's network. This went on to be reported as the "world's biggest data breach" by the news media; it was only in August 2009 that the Department of Justice (USA) made an announcement about one suspect.
This came a few months after the breach of IT systems at TJX, a world-renowned American retailer in the apparel and home fashions business, in which some 45 million credit cards were compromised.
In December 2008, RBS WorldPay (formerly RBS Lynk), the U.S. payment processing arm of The Royal Bank of Scotland Group, acknowledged that its computer system had been "improperly accessed by an unauthorized party", affecting approximately 1.5 million cardholders and other individuals.
In 2005 the US Air Force discovered a massive breach in its Assignment Management System at Randolph Air Force Base, Texas, whereby unknown quantities of data and information in such areas as command and control, logistics, personnel, scheduling, and even classified research and development were downloaded by a hacker whose identity remains unknown.
Total per-incident costs in 2008 were $6.65 million, compared to an average per-incident cost of $6.3 million in 2007.
Researchers at Purdue University's Center for Education and Research in Information Assurance and Security conducted a study on the security of information in eight countries and found that the companies surveyed lost a combined $4.6 billion worth of intellectual property in 2008 alone, and spent approximately $600 million repairing damage from data breaches. Based on these numbers, McAfee projects that companies worldwide lost more than $1 trillion in 2008.
III. PAPER REVIEW
A. Enforceability of electronic signatures
In 1996 the United Nations published the UNCITRAL Model Law on Electronic Commerce. Under this, an electronic signature for the purpose of US law is "an electronic sound, symbol, or process, attached to or logically associated with a contract or other record and executed or adopted by a person with the intent to sign the record." It may be an electronic transmission of the document which contains the signature, as in the case of facsimile transmissions, or it may be an encoded message, such as telegraphy using Morse code.
1) ESIGN Act Sec. 106 definitions (under Federal Law)
a) Electronic: The term 'electronic' means relating to technology having electrical, digital, magnetic, wireless, optical, electromagnetic, or similar capabilities.
b) Electronic Signature: The term 'electronic signature' means an electronic sound, symbol, or process, attached to or logically associated with a contract or other record and executed or adopted by a person with the intent to sign the record.
2) As per the IT Act 2000: Digital Signature
a) Definition 1: A digital signature (not to be confused with a digital certificate) is an electronic signature that can be used to authenticate the identity of the sender of a message or the signer of a document.
b) Definition 2: A digital signature is basically a way to ensure that an electronic document (e-mail, spreadsheet, text file, etc.) is authentic. Authentic means that you know who created the document and that it has not been altered in any way since that person created it.
3) "Electronic Signature" means authentication of any electronic record by a subscriber by means of the electronic technique specified in the Second Schedule, and includes a digital signature.
4) "Electronic Signature Certificate" means an Electronic Signature certificate issued under Section 35, and includes a Digital Signature Certificate.
B. Digital signature verification
The sender's software encrypts the message digest with his private key; the result is the digital signature. Finally, the sender's software attaches/affixes the digital signature to the data or message: all of the data that was hashed has been signed. The receiver's software decrypts the signature (using the sender's public key), changing it back into a message digest.
C. Uses of digital signature
1. Issuing forms and licences
2. Filing tax returns online
3. Online Government orders/treasury orders
4. Registration
5. Online file movement system
6. Public information records
7. E-voting
8. Railway reservations & ticketing
9. E-education
10. Online money orders
11. Secured emailing etc.
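The hash-then-sign flow of section B above can be sketched in a few lines of Python; the toy RSA-style numbers, the reduction of the SHA-256 digest modulo the toy modulus, and the absence of padding are illustrative assumptions (a real system would use a vetted cryptographic library):

import hashlib

# Minimal sketch of the flow in B): hash the message, "encrypt" the digest
# with the private exponent to sign, and recover it with the public
# exponent to verify. Toy RSA numbers; no padding; illustration only.

n, e, d = 3233, 17, 2753         # toy modulus (61 * 53) and exponents

def digest(message: bytes) -> int:
    # message digest reduced mod n so it fits the toy modulus
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    return pow(digest(message), d, n)       # sender: digest^d mod n

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == digest(message)  # receiver: sig^e mod n

msg = b"pay Rs 100 to Alice"
s = sign(msg)
print(verify(msg, s))                        # True
print(verify(b"pay Rs 1000 to Alice", s))    # expected False: digest changed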
D. The act of "digitally signing" can fulfill these functions
1) Will: The signatory gives his/her acceptance to the text placed before the signature.
2) Identification: A signature can be used to identify a person.
3) Authentication: By writing a signature on a document which contains a text, the text is connected in a certain way to the signature and thereby to the person indicated by the signature.
4) Evidence: The identification function and the evidence function can be used in situations where the need for evidence arises, e.g. to verify the authenticity of legal documents.
Under Section 2(f) of the IT Act 2000, an "asymmetric crypto system" means a system of a secure key pair consisting of a private key for creating a digital signature and a public key to verify the digital signature.
IV. WORKING PROCEDURE
A. Cryptographic system
The cryptographic mechanism is a process carried out by the computer system. The message or data sent out is encrypted by a cryptographic mechanism, which includes a private key and a public key; these are cryptographic methods provided by certifying authorities (CA). (Private-key encryption is essentially the same as a secret code that the two computers must each know in order to decode the information; the code provides the key to decoding the message. To decode an encrypted message, a computer must use the public key provided by the originating computer and its own private key.) The public key and the private key are mathematically related to each other: the private key is used to encode the data/message and the public key is used to decode it. The private key stays with the sender only.
Hash function = checksum/message digest
The hash function process is carried out by the computer system. A hash function, that is, an algorithm, is a mathematical function/formula that converts a large, possibly variable-sized amount of data into a small datum, called the hash result or message digest. To sign a document, the sender's software crunches the data or message down into just a few lines by a process called a hashing algorithm/hash function; these few lines are the message digest/hash result. Any modification of the message or data changes the hash result, and from the hash result we cannot reconstruct the original message or data.
B. Signature Algorithms
These exist in two forms. The most relevant algorithms are the following:
- RSA, which is also used for so-called public-key encryption, is the best-known and most used algorithm for digital signatures. It is internationally standardised (ISO, ISO/IEC). It is patented in the USA but can be used freely in Europe.
- The RSA algorithm was invented in 1977 by Ronald L. Rivest, Adi Shamir and Leonard M. Adleman; the elegant AES cipher (Advanced Encryption Standard) was invented by Joan Daemen and Vincent Rijmen.
- Nowadays asymmetric-key, or simply public-key, algorithms such as RSA rely on the difficulty of integer factorisation.
- The future of cryptography resides in systems based on elliptic curves, which are a kind of public-key algorithm that may offer efficiency gains over other schemes.
- Public-key cryptography was invented in 1976 by Whitfield Diffie and Martin Hellman.
- A disadvantage of using public-key cryptography for encryption is speed: there are popular secret-key encryption methods which are significantly faster than any currently available public-key encryption method. But public-key cryptography can share the burden with secret-key cryptography to get the best of both worlds.
- DSA is included in the American signature standard DSS. It is also patented; NIST, which was behind the launch of DSA, has however stated that it will be offered free of charge on the world market. DSA is the subject of a patenting dispute and, at the time of writing, it is unclear whether the dispute has been resolved.
C. RSA Cryptosystem
- RSA is a public-key cryptosystem for both encryption and authentication; it was invented in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman. It works as
follows: take two large primes, p and q, and find their product n = pq; n is called the modulus. Choose a number e, less than n and relatively prime to (p-1)(q-1), and find its inverse d mod (p-1)(q-1), which means that ed = 1 mod (p-1)(q-1); e and d are called the public and private exponents, respectively.
- The public key is the pair (n, e); the private key is d. The factors p and q must be kept secret, or destroyed.
- It is difficult (presumably) to obtain the private key d from the public key (n, e). If one could factor n into p and q, however, then one could obtain the private key d. Thus the entire security of RSA is predicated on the assumption that factoring is difficult; an easy factoring method would "break" RSA.
- Here is how RSA can be used for privacy and authentication:
- RSA privacy (encryption): suppose Alice wants to send a private message m to Bob. Alice creates the ciphertext c by exponentiating: c = m^e mod n, where e and n are Bob's public key. To decrypt, Bob also exponentiates: m = c^d mod n, and recovers the original message m; the relationship between e and d ensures that Bob correctly recovers m. Since only Bob knows d, only Bob can decrypt.
- RSA authentication: suppose Alice wants to send a signed document m to Bob. Alice creates a digital signature s by exponentiating: s = m^d mod n, where d and n belong to Alice's key pair. She sends s and m to Bob. To verify the signature, Bob exponentiates and checks that the message m is recovered: m = s^e mod n, where e and n belong to Alice's public key.
- Thus encryption and authentication take place without any sharing of private keys: each person uses only other people's public keys and his or her own private key. Anyone can send an encrypted message or verify a signed message using only public keys, but only someone in possession of the correct private key can decrypt or sign a message.
- An RSA operation, whether for encrypting or decrypting, signing or verifying, is essentially a modular exponentiation, which can be performed by a series of modular multiplications.
- Algorithmically, public-key operations take O(k^2) steps, private-key operations take O(k^3) steps, and key generation takes O(k^4) steps, where k is the number of bits in the modulus; O-notation refers to an upper bound on the asymptotic running time of an algorithm.
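The two usages above can be checked numerically in Python with the classic toy parameters p = 61, q = 53 (illustrative only; real moduli are thousands of bits long):

# Minimal numeric check of the two RSA usages described above.

p, q = 61, 53
n = p * q                      # modulus, 3233
phi = (p - 1) * (q - 1)        # 3120
e, d = 17, 2753                # public and private exponents
assert (e * d) % phi == 1      # ed = 1 mod (p-1)(q-1)

m = 65                         # message, m < n

# RSA privacy: Alice encrypts with Bob's public key, Bob decrypts with d
c = pow(m, e, n)
assert pow(c, d, n) == m

# RSA authentication: Alice signs with her private key, Bob verifies with e
s = pow(m, d, n)
assert pow(s, e, n) == m
print(c, s)                    # ciphertext and signature for m = 65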
D. Elliptic Curves
This mathematical basis for creating new and more effective signature algorithms has not been patented for signature use, although many effective uses of elliptic curves have been patented.
- Hash Algorithms: The most common hash algorithms which occur in connection with digital signatures are not patented.
E. ElGamal cryptosystem
A public-key system based on the discrete logarithm problem, described by Taher ElGamal in 1984.
1) System parameters
- Let H be a collision-resistant hash function.
- Let p be a large prime such that computing discrete logarithms modulo p is difficult.
- Let g < p be a randomly chosen generator of the multiplicative group of integers modulo p.
- These system parameters may be shared between users.
2) Key generation
- Choose randomly a secret key x with 1 < x < p − 1.
- Compute y = g^x mod p.
- The public key is (p, g, y).
- The secret key is x.
- These steps are performed once by the signer.
3) Security
- A third party can forge signatures either by finding the signer's secret key x or by finding collisions in the hash function. Both problems are believed to be difficult. However, as of 2011 no tight reduction to a computational hardness assumption is known.
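On top of this key generation, the standard ElGamal signing and verification equations can be sketched in Python as follows; the toy parameters p = 23, g = 5 and the fixed nonce are illustrative assumptions (in practice k must be fresh and random for every signature):

import hashlib

# Minimal sketch of ElGamal signing/verification over the key generation
# above. Toy parameters; real parameters use large primes.

p, g = 23, 5
x = 7                      # secret key, 1 < x < p - 1
y = pow(g, x, p)           # public key component y = g^x mod p

def H(message: bytes) -> int:
    # collision-resistant hash reduced mod (p - 1), as the scheme requires
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % (p - 1)

def sign(message: bytes, k: int):
    # k must be random and coprime to p - 1; fixed here for reproducibility
    r = pow(g, k, p)
    s = ((H(message) - x * r) * pow(k, -1, p - 1)) % (p - 1)
    return r, s

def verify(message: bytes, r: int, s: int) -> bool:
    # accept iff g^H(m) = y^r * r^s (mod p)
    return 0 < r < p and pow(g, H(message), p) == (pow(y, r, p) * pow(r, s, p)) % p

sig = sign(b"hello", 3)    # 3 is coprime to 22
print(verify(b"hello", *sig))   # True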
V. CONCLUSION
In this paper we studied different techniques, such as the RSA and ElGamal models, in which a sender can easily encrypt messages so that they can be recovered only by the intended recipients. But ordinary people have mostly not had access to affordable, military-grade public-key cryptographic technology, and court challenges to encryption's classification under ITAR have met with mixed results. So a strong encryption technique, as well as a good legal system, should be prepared to meet this challenge.
Recognizing a Good Livestock: an application of Fuzzy System
Er. Siddharth Dash, M.Tech, Asst. Professor (CSE), SIET, Dhenkanal.
Dr. Susanta Kumar Dash, Ph.D., Professor (ABG), OUAT, Bhubaneswar.
Abstract- Livestock husbandry is one of the most important industries in the field of providing nutrients and food security. Nowadays, many branches of science are applied in this industry, like genetics, which is employed for improving the breeds of all domestic livestock. This science tries to transfer good features from the current generation to the next. Many researchers have reported that there is a meaningful correlation between facial type (physical form) and production, so type judging is one of the best ways of evaluating useful features. This assessment contains those features that have maximum correlation with milk production. A judge (human expert) performs it under uncertainty, with regard to his experience and skills. In this paper, the possibility of developing an expert system to replace the human expert is investigated, and the knowledge extraction methods are described. Fuzzy logic is used for dealing with uncertainty. Finally, the knowledge representation methods are discussed and a fuzzy rule base is proposed for representing this knowledge.
Index Terms- Expert system, Fuzzy logic, Livestock, Type judging.
I. INTRODUCTION
Livestock husbandry is one of the most important industries in the field of providing nutrients and food security. Nowadays, many branches of science are applied in this industry for improving productivity. Genetics and strategic breeding plans are employed for improving the breeds of dairy cattle, buffaloes and other small ruminants. India has the largest number of livestock in the world, but per-animal productivity is amongst the lowest. Low productivity is largely due to the poor genetic make-up of the livestock and the traditional management and feeding practices followed by the farmers. This science tries to transfer good features from the current generation to the next. Many researchers have reported that there is a meaningful correlation between facial type (physical form) and production, so type judging is one of the best ways of evaluating useful features. According to the definition of Gillepsi [1], the type comprises those physical aspects of the body that are based on facial components. This assessment contains those features that have maximum correlation with milk production. Filling in the form related to type judging, named the Unified Score Card, needs a great deal of experience and skill; a judge (human expert) does this under uncertainty, drawing on his experience, expertise and skills. In the present study, the possibility of developing an expert system to replace the human expert is investigated; the study concerns crossbred Jersey cows. In this paper the knowledge extraction methods are described. Fuzzy logic is used for dealing with uncertainty. Hence, the knowledge representation methods are discussed and a fuzzy rule base is proposed for representing this knowledge.
II. NEED OF EXPERT SYSTEM
The Unified Score Card (USC) is considered the first attempt at judging dairy animals on the basis of some qualitative features. The basic idea behind the design of this card is that "good dairy cows have common and certain features". The card can show the remarkable features of all breeds; in addition, it explains the features of the ideal cattle and also their values, and these features have considerable correlation with the facial type of the animal. According to the USC, 19 features of the animal are determined and evaluated on a nine-point scale (1-9). Among them, 12 features (Table 1) are qualitative and are valued according to the expert's experience [2, 3].
Evaluation of the qualitative features of dairy cattle is done according to the experience and expertise of the human expert and previous observations. The expert creates hidden rules in his mind, according to his experience, and considering those hidden rules he evaluates the qualitative features. This is one of the most important reasons for the failure of systematic methods and of previous attempts to solve the problem using classic software-based methods. It is obvious that a solution which can model and solve this problem has to be based on the expert's experience and knowledge.

Table 1. Qualitative features for dairy animals
1. Chest width           7. Fore udder attachment
2. Loin                  8. Suspensory ligament
3. Angularity            9. Udder depth
4. Rear leg, side view   10. Front teat placement
5. Rear leg, rear view   11. Rear teat placement
6. Foot angle            12. Body depth

There are many reasons justifying the use of an expert system for solving this problem. As the first argument, any mistake in type judging directly affects the next generation of the cattle and their features; due to the high sensitivity of this work, it has until now been done only by human experts. Because of the limited number of these experts, type judging is very costly for animal husbandry. Performing this task for faraway animals is also very time consuming. In addition, the possibility of the expert making mistakes, and of uncontrolled items arising during the work, is not negligible, and this could yield high costs. Hence, with a system utilizing a few human experts and some other knowledge resources, the response of the expert system may become better validated than that of a single human expert. Besides, environmental conditions like tiredness can affect a human expert, but not an expert system. Evaluating the update costs of an expert system can also help in analysing the justification for using such a system. Moreover, the effectiveness of the system can be evaluated at regular intervals, and necessary modifications can be made with regard to breeds or species, under different farming systems and geographical areas, in consultation with domain experts, so as to get the utmost accuracy in judging the livestock for the desired productivity. With the above logic, it can be concluded that developing an expert system for dairy cattle judging is justified, in both technical and economical aspects.
III. DESIGNING THE EXPERT SYSTEM
Any kind of expert system can be designed in several steps. First, the knowledge for solving the problem is collected from knowledge resources and then this knowledge is integrated. After that, the best method of knowledge representation at the hardware level is selected. Finally, with regard to the nature of the problem, an inference mechanism is determined [4].

A. Knowledge Resources

The first step in designing an expert system is always knowledge extraction [4]. For this work, several resources are used for knowledge acquisition. The most important knowledge resource in this study is the human expert. Other knowledge resources are the literature and information from the web. Empirically, it can be said that the best and fastest of them is the human expert.

B. Knowledge Extraction Methods

After identifying the knowledge resources, the knowledge extraction step is initiated, which employs the following methods. The first method is human perception from non-human resources. The content of books [1] and materials like tables and figures can help in acquiring more perception of the knowledge in the said area. Besides, a number of rules and myths that are stated implicitly about the concerned field may be helpful in obtaining knowledge. As an example, it is said that "In an animal, if the loin bends and the maze upwards, then its value would be less than five on a nine-
point score". Also, a part of this knowledge is extracted in the
C. Inference Mechanism
form of pictures. Fig. 1 is an instance of such kind of
From the collected knowledge, it may be concluded that the
knowledge. In this figure, X is defined as the fuzzy variable
human expert solves the problem separately for each feature,
for difference between loin and maze. Another simple
regarding the inputs and parameters that are needed for each
example of “Black cows yield sweet milk” is a say in dairy
rule. It can be relatively easy for machine to pursue the human
animals, which can be inculcated in developing the knowledge
expert in this mechanism. It means that the problem is divided
base, which, of course is not considered as a parameter in
into 12 sub problems, and then each of them is solved,
present investigation.
separately. Finally, to obtain the final solution of the main
problem, the results are aggregated in backward mode.
Consequently, the inference engine is activated by the
problem in this system. It means that the problem causes to
make observations. The simplicity and justifiability for clients
are two measurements that must be considered for choosing
inference engine method. These measurements are satisfiable
by backward chaining mechanism; because this is the
mechanism that human uses for solving this problem. In
Fig. 1. The Loin feature, from left to right: whatever the height of loin is
bigger than maze, it can achieve more scores.
addition, this method can be easily depicted and also
Another method of knowledge extraction is interviewing with
design the expert system for type judging problem. All of
human expert. Since, the human expert has some hidden rules
qualitative features, linguistic and symbolic rules are
which are not explicit, the interview with the schedule
qualitative parameters show strong need for expert system.
represented. Fig. 2 shows the general proposed scheme to
structured questionnaires will bring his hidden knowledge to
light effectively. This knowledge completes and revises the
knowledge acquired through literature. One example of them
on loin judgement (loin in comparision with base of tail) with
productivity is shown in Table 2. The used linguistic variables
in the forms are usually obtained from the non-human
resources which are confirmed by human expert during the
personal interview with stakeholders
Table 2. The empty prepared table for extracting human expert's knowledge
with fuzzy variables
Fuzzy
Variable
(X)
Very
lower
lower
Little
lower
Almost
same
Little
upper
Quite
upper
Fig. 2. Structure of the proposed expert system
IV.
Score
FUZZY RULE BASE
The existence of uncertainty and of many qualitative features in this problem can be troublesome. Fuzzy knowledge can be one of the best solutions to control and manage these uncertain features and parameters. Fuzzy logic is quite capable of representing the qualitative parameters by linguistic variables, and the linguistic words used by the human expert can be modeled and represented by linguistic terms [5].
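As a concrete illustration of such linguistic terms, the sketch below encodes the six terms of Table 2 as triangular membership functions in Python. The breakpoints (and the triangular shape itself) are illustrative assumptions, not values given in the paper.

# Triangular membership functions for the loin variable X (difference
# between loin and maze), following Table 2. The (a, b, c) breakpoints
# are illustrative assumptions, not values from the paper.
def triangular(x, a, b, c):
    # membership rises from a to the peak b, then falls to c
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

TERMS = {
    "very lower":   (-6.0, -4.5, -3.0),
    "lower":        (-4.0, -2.5, -1.0),
    "little lower": (-2.0, -1.0,  0.0),
    "almost same":  (-1.0,  0.0,  1.0),
    "little upper": ( 0.0,  1.5,  3.0),
    "quite upper":  ( 2.0,  4.0,  6.0),
}

def fuzzify(x):
    # degree of membership of a crisp measurement in every term
    return {term: triangular(x, *abc) for term, abc in TERMS.items()}

print(fuzzify(0.8))   # partly "almost same", partly "little upper"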
V. CONCLUSION
Attempts are usually made to replace human experts with expert systems for solving complex problems. Expert systems solve the problem using human knowledge as well as other knowledge resources. In the present study, type judging of dairy cattle is approached using expert systems, and the justification for this is discussed. Knowledge collection from knowledge resources and its representation at machine level are very important tasks in designing the expert system, and these are also discussed for this problem. In addition, backward chaining is proposed as the inference engine mechanism, like the human inference method, and fuzzy logic is proposed to handle the uncertainty problem. The proposed fuzzy expert system may handle the qualitative terms and variables effectively. As future work, image processing techniques could improve this expert system in obtaining its inputs; such techniques can decrease the stress on cattle while the inputs are measured. This is a preliminary study in the field and may be refined to reach accuracy in addressing the problem, so that it becomes practical and economical as well.
REFERENCES
[1] J. R. Gillespie, Modern Livestock & Poultry Production, pp. 689-700.
[2] The Basics of Dairy Cattle Judging, University of Maryland Cooperative Extension Service, College Park, MD, 1989.
[3] W. H. Broster and V. J. Broster, Body score of dairy cows, Journal of Dairy Research, 65: 155-173, Cambridge University Press, UK, 1998.
[4] J. Durkin, Expert Systems: Design and Development, Prentice Hall International, Inc., 1994.
[5] E. H. Mamdani and S. Assilian, An experiment in linguistic synthesis with a fuzzy logic controller, Int. J. Man-Machine Studies, 7(1): 1-13, 1975.
Adopting biological methods to software evolution
Manoj Kumar Jena.
Research Scholar, Department of Computer Science and Applications,
Utkal University, Odisha, India
[email protected]
Abstract- Many biological methods can be applied to the study of software and process evolution. In this paper, software evolution is linked with biological evolution by applying biological methods on the basis of theoretical reasoning. Species are the units used by biologists to measure the variety of forms and to study evolution, and taxonomies are the way to express relations between species. In this paper we relate the classification of species to evolution and propose the use of biological methods to construct phylogenetic relations of software. Software evolves, but the processes, methods and methodologies used to develop and maintain software also evolve. Here, special attention is given to software methods. We also show the importance of species recognition, giving a taxonomic view of evolution.

I. INTRODUCTION

The term "software evolution" is treated as logically similar to "biological evolution". Biological species gradually change over the course of tens of millions of years, adapting in response to changing environmental conditions. A piece of software likewise changes over time, in response to changing requirements and needs. In a truly evolutionary development process, such as that advocated by Extreme Programming [2], it is not possible to do major code rewrites. Updates must be kept small, and each update must change a working program into another working program. This, too, is analogous to biological evolution. Each organism in the real world must produce viable offspring. These offspring are slightly different from their parents, but they must be capable of surviving on their own. Major change occurs only after thousands of generations. Is this just an analogy, or is there something deeper? The purpose of this paper is to look at software evolution from a biologist's point of view. The biological metaphor in software engineering is as old as software engineering itself: already at the first software engineering conference in 1968 [5] it was possible to find the idea of software as an organism. Several authors have suggested a biological evolution vision for software and the processes by which software is developed and maintained. For example, in [11] the author describes software evolution from a genetic perspective, in [1] the software process is seen as a complex process of morphogenesis, in [12] the application of natural selection to computers and programs is discussed, and in this and other series of conferences it is possible to find mentions of the biological metaphor. In spite of the progress in the study of software and process evolution, new methodological approaches to the study of that phenomenon are necessary. Software evolves, but the processes, methods and methodologies used to develop and maintain software also evolve. Here we give special attention to software methods. We also show the importance of species concepts for software evolution. Next we show an application of morphogenesis. In section 4, the most important classification schools in biology are presented. The last section is devoted to the main conclusions.

II. SOFTWARE EVOLUTION WITH SPECIES

Species are the units used by biologists to measure variety and study evolution. To be clear, we are not pretending to define species in software. The difficulty of defining species also exists in biology, where biologists know how to define a particular species, but not species in the abstract [15]; that is, it is a theoretical rather than a practical problem. Biological evolution doesn't happen in a synchronized way, even for the members of the same species. This situation is in agreement with the evolution process, but it makes species identification more difficult. In the same way as happens in nature, in software too we can expect it to be impossible to find a clear separation between species. This is acceptable, considering that applications always have points in common, like an interface, and the same should be expected with other kinds of artifacts. Therefore, it cannot be expected that the characters common to a class be exclusive to that class, nor that all the members evolve at the same rate, as in biology. In [11] the individual process that generates each product is designated the genotype, each product the phenotype, and the organizational process is compared to the genotype of the species, where each organization is considered a species. There is also a process used to develop each product/version and a process for each organization. Accepting that, it would be possible to study the evolution of: a product through its versions, relating it to other products in the market; the process used in each version and each product an organization produces; and the organizational processes, relating them to processes from other organizations. However, there is no evidence to sustain those metaphors for genotype and phenotype. To study evolution at the organizational level is outside software engineering, even though the organizational level can affect software evolution. In the case of software methods, also called "tools" [13], when a method is influenced by another method, it seems reasonable to think of some kind of interbreeding. It is also possible that two or more methods be combined into one, or that a method be retired because of some other method.
This happened with the Bootstrap assessment method. The latter situation is clearly an ecological problem of competition; the first is a mix of ecological and biological concepts. For software applications this is not common and can be harder to detect than with methods. In software it is common to compose components. For methods, it is possible to consider as a species all the methods capable of being influenced by each other. In the limit all methods can be influenced, but if instead of influence we speak about combining methods, we narrow the domain. Accepting this, it is possible to have, for example, the species "design methods" or the species "process evaluation methods". To distinguish the several species, it would be necessary to classify the several kinds of methods jointly. In research it is not common to compare distinct entities (an apples-and-oranges comparison), but here it will be required.
If we can speak of species, then it will make sense to speak of species evolution also. But it is not certain that the species would be the correct unit for measuring variety in software, as it is in biology. A significant difference between the characterization of organisms in biology and of software individuals lies in the importance that the different kinds of characters have in software. External characters, like the results obtained with a method, or the benefits it allows, are of fundamental importance in software. But in biology the speed of a cheetah is not used in species recognition, unlike the internal characters that explain its speed. The general absence of theory in software engineering, and the consequent ignorance of the internal characters of interest, can also make measurement in software more easily oriented to the results, that is, to the external characteristics.
To define species in software presupposes the definition of the individuals, that is, the unit of study; but an individual has not been defined in software [13]. The problem can be posed as one of choosing the level. It can be the application (method) level, but it can be some other, like a release, or just a component. Variation in biology can be explained by factors like natural selection, mutation and genetic drift [13]. In software it is not an easy task to find the equivalent factors, or to define any other factor responsible for variation. Variation can occur within the species, between different populations and inside the same population. Different populations in distinct geographic regions tend to differ, and this can take the form of smooth or stepped variation in a phenotypic or genetic character within a species, called a cline [13], or not. How does variation occur in software? In conclusion, we cannot answer questions related to species without knowing what a species is in software. This is a very hard question, when we think of interbreeding between software applications [13]. Also interesting is that biological species may differ more when coexisting in the same place, in sympatry, than where only one parent species at first exists in separated locations, in allopatry [15]. In software, following Weinberg [12], versions of the same system in different locations will become more and more separated as time passes. Biological evolution usually takes place over long periods of time, but in software the time scale is much shorter. In biology, living beings' growth is a problem distinct from evolution.
III. MORPHOGENESIS AND META-PROGRAMMING
Multicellular organisms create a natural mapping between
genotype and phenotype by means of morphogenesis.
Morphogenesis is the process of growth and development,
whereby a single cell repeatedly divides and differentiates to
build an entire organism. While the exact mechanisms of
morphogenesis are not well understood, it is clear that the
process is algorithmic. The evidence lies in the recursive
fractal patterns found in almost all living things, from ferns to
blood vessels to spiral shells. [10] We do know that a
significant percentage of genes are active only during
development. Developmental genes create chemical gradients,
and switch on and off in response to signals produced by other
genes, thus producing the complex structures found in living
organisms. Because each cell in the body has a complete copy
of the DNA, a single gene can express itself in many different
physical locations. Aspects and meta-programs serve the same role in software evolution that morphogenesis plays in biological evolution: they help to establish a natural map between genotype and phenotype. The clear lesson from
evolutionary theory is that controlling the genotype to
phenotype map is the key to evolvability. A language like C
has a fairly direct mapping between source code and machine
code; every function or statement can be translated almost
directly to (unoptimized) assembly. Since the interpretation of
machine code is fixed by the CPU architecture, this means that
the genotype to phenotype map is also fixed. Aspects and
meta-programs introduce a more sophisticated genotype to
phenotype map. A meta-program algorithmically generates an
implementation that is quite different from the source code.
This is ideal for situations such as parser generators and
DSLs, where a great deal of repetitive code needs to be
produced. Aspects are similar. An aspect can weave advice
(such as logging code) throughout an application, thus algorithmically generating an implementation. Work on
evolving neural-networks suggests that generating solutions
algorithmically does, in fact, lead to more modular and
evolvable designs. [7]
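A minimal sketch of that genotype-to-phenotype expansion, under the assumption that a plain code-generating function counts as a meta-program: a short declarative field list is expanded into a longer generated class. All names in it are hypothetical.

FIELDS = ["name", "version", "release_date"]   # hypothetical spec

def generate_record_class(class_name, fields):
    # emit Python source for a class with one getter per field
    lines = [f"class {class_name}:"]
    lines.append(f"    def __init__(self, {', '.join(fields)}):")
    for f in fields:
        lines.append(f"        self._{f} = {f}")
    for f in fields:
        lines.append(f"    def get_{f}(self):")
        lines.append(f"        return self._{f}")
    return "\n".join(lines)

source = generate_record_class("SoftwareProduct", FIELDS)
namespace = {}
exec(source, namespace)     # "develop" the phenotype from the genotype
product = namespace["SoftwareProduct"]("demo", "1.0", "2011-09-30")
print(product.get_version())   # -> 1.0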
IV. PHYLOGENETIC AND PHENETIC APPROACHES
Another school of thought in biological classification is the cladistic one. Cladists want to obtain a phylogeny of the organisms (or genes) being analyzed; a phylogeny is the evolutionary story of the organisms. There is a third school, which can be called the school of evolution, with an intermediate position. At present there is a certain tension between pheneticists, cladists and supporters of classic evolution. The first intend to construct numerical taxonomies abstracted from any theoretical model of the domain, based on a large number of observed characters. The second school aims to define a phylogenetic classification using phylogenetic trees. Phylogenetic trees represent evolutionary patterns. Phylogenetics is about reconstructing the evolutionary story of species, which can be represented by molecular sequences. The goal of reconstruction is to find the tree most adequate to the data about the attributes that characterize the species. The intermediate
school advocates a classification that shows not only common ancestry but also similarity. The numerical taxonomic procedures are used by all the schools. Historically, pheneticists accept evolution, but neither try to find it directly nor use that kind of information in their classifications. It is expected that those techniques, initially used by pheneticists, will be of great value in the natural sciences, because of the large-scale sequencing of proteins and nucleic acids [5]. At the root of the divergences is the impossibility of seeing the birth of new species. In general, the history of species cannot be directly observed.
In the study of software evolution we have the possibility to witness the evolution and "birth" of new "species", because we can find very complete information about the evolution of the individuals¹: software versions, releases, processes, methods and methodologies. This being true, software evolution is not a hypothesis, at least not in the same sense as in biology, where the evolutionary path of organisms, spanning millions of years, has to be reconstructed from data found in the present day. In disciplines like software, the use of phylogenetic methods to make inferences for the construction of phylogenetic trees should pose fewer problems, and these methods should be more adequate to the study of evolution than phenetics, because phylogenetic information is available. Having data about evolution and about software phenetic characteristics, it would be possible to combine phenetic and phylogenetic information.

¹ Following what was said before, we are not considering entirely human and social data, despite our recognition of the importance of those dimensions for software development.

A fundamental requirement for the
application of phylogenetic methods is the existence of
evolutionary patterns in the studied phenomenon, and of entities that appear by a branching process [6]. According to the studies of Lehman and colleagues there are some patterns, and the holding of several events on software evolution is a signal that researchers believe in them. As far as the author knows, there is no branching process in software, at least not one that happens frequently. However, at the product unit scale (micro-evolution) it does exist, and version control systems can provide rich data; at this level we can also find, and perform, the merging of branches. Data about evolution could be used to develop the phylogenetic relations, or to validate the result of a study of phylogenetic reconstruction or of a classification study. It might seem that such data would make the reconstruction of the phylogenetic relations from present-day data unnecessary, but this is not exactly so: it cannot be true if the method used to reconstruct the phylogeny can be used to predict evolution, and/or if the evolution data is not complete enough. Moreover, if we can show that our reconstruction methods produce valid results, we can formulate future scenarios and predict how evolution will occur, from this point in time to the future time of the scenarios. Surely this is an ambitious task, whose laborious work and practicability for the user we cannot foresee; recognizing software uncertainty, some error will always be present in such models. Software evolution data can also be used to enrich the developed models. Our goal is to increase knowledge about the software evolution phenomenon and to develop a model with predictive power: for example, to detect symptoms of evolutionary pathologies [8], or to know when a certain system will become no longer maintainable. But first, we should start by identifying patterns, for example, to know whether the molecular-clock hypothesis applies to software. We conclude that it is relevant to apply methods that allow phylogenetic reconstruction to software.
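To suggest what such a reconstruction might look like in practice, the sketch below builds a tree over software versions with a simple UPGMA-style average-linkage clustering on made-up feature vectors. It is only one possible distance-based method, not a method proposed by the cited works.

from itertools import combinations

# Made-up binary "characters" (e.g. presence of features) per version.
versions = {
    "v1.0": (1, 0, 0, 1, 0),
    "v1.1": (1, 1, 0, 1, 0),
    "v2.0": (0, 1, 1, 0, 1),
    "v2.1": (0, 1, 1, 1, 1),
}

def hamming(a, b):
    # number of characters in which two versions differ
    return sum(x != y for x, y in zip(a, b))

def upgma(items):
    # each cluster: (tree-so-far, member feature vectors)
    clusters = [(name, [vec]) for name, vec in items.items()]
    while len(clusters) > 1:
        # merge the pair of clusters with the smallest average distance
        i, j = min(
            combinations(range(len(clusters)), 2),
            key=lambda p: sum(
                hamming(a, b)
                for a in clusters[p[0]][1]
                for b in clusters[p[1]][1]
            ) / (len(clusters[p[0]][1]) * len(clusters[p[1]][1])),
        )
        (ti, mi), (tj, mj) = clusters[i], clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(((ti, tj), mi + mj))
    return clusters[0][0]

print(upgma(versions))   # (('v1.0', 'v1.1'), ('v2.0', 'v2.1'))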
V. CONCLUSIONS
In this paper we show the relation between species, morphogenesis and software evolution. The goal is to identify stages of the evolution of software, processes and methods. Understanding software change as an evolutionary process analogous to biological evolution is an increasingly popular approach to software evolvability, but it requires some caution. Issues of evolvability make sense not only for biological and evolutionary computation systems, but also in the realms of artifacts, culture, and software systems. Persistence through time with variation (while possibly spreading) is an analogue of variation with heritability. Thus discrete individual replicators are not strictly necessary for an evolutionary dynamic to take place. Studying the identified properties that give biological and software evolution the capacity to produce complex adaptive variation could shed light on how to enhance the evolvability of software systems in general and of evolutionary computation in particular. Evolution and evolvability can be compared in different domains. So it is concluded that biological methods can be applied to the study of software evolution.
REFERENCES
[1] Aristotle, Organon I and II, Guimarães Editores, Lisboa, 1985.
[2] Kent Beck and Cynthia Andres, Extreme Programming Explained, Addison-Wesley, 2004.
[3] Gray, E.M., Sampaio, A., Benedicts, O., "An Incremental Approach to Software Process Assessment and Improvement", Software Quality Journal, Vol. 13, No. 1, pp. 7-16, 2005.
[4] Kuhn, T., The Structure of Scientific Revolutions, 2nd ed., University of Chicago Press, 1970.
[5] Frederic Gruau, "Genetic synthesis of modular neural networks", Proceedings of the 5th International Conference on Genetic Algorithms, pp. 318-325, San Francisco, CA, USA, Morgan Kaufmann Publishers Inc., 1993.
[6] Lehman, M.M., "Assumptions - (Their Nature, Role, Impact, Control)", International ERCIM-ESF Workshop on Challenges in Software Evolution (ChaSE), Berne, Switzerland, 12-13 April 2005.
[7] Nehaniv, C., Hewitt, Christianson, B., Wernick, P., "What Software Evolution and Biological Evolution Don't Have in Common", Proceedings of the 2006 IEEE International Workshop on Software Evolvability (SE'06), IEEE CS, 2006.
[8] P. Prusinkiewicz and Aristid Lindenmayer, The Algorithmic Beauty of Plants, Springer-Verlag, New York, NY, USA, 1990.
[9] Ridley, M., Evolution, 3rd ed., Blackwell Publishing, 2003.
[10] Tully, C., "How seriously should we take our evolutionary metaphor?", FEAST'00, London, 2000.
[11] Maynard Smith, J., Evolutionary Genetics, 2nd ed., Oxford University Press, 1999.
[12] Weinberg, G., "Natural Selection as Applied to Computers and Programs", General Systems, Vol. 15, pp. 145-150, 1970; reprinted in M.M. Lehman and L.A. Belady (eds.), Program Evolution: Processes of Software Change, Academic Press, Chapter 4, 1985.
[13] Ridley, M., Evolution, 3rd ed., Blackwell Publishing, 2003.
COMPLEX KNOWLEDGE SYSTEM MODELING THROUGH OBJECT ORIENTED FUZZY PETRI NET
Mrs. Manasi Jena,
Department of computer science and engineering,
Synergy Institute of Engineering & Technology, Dhenkanal
Odisha, India
I. INTRODUCTION
Petri Nets (PNs) have the ability to represent and analyze, in an easy way, concurrency and synchronization phenomena, like concurrent evolutions, where various processes that evolve simultaneously are partially independent. Furthermore, the PN approach can easily be combined with other techniques and theories such as object-oriented programming, fuzzy theory, neural networks, etc. These modified PNs are widely used in computing, manufacturing, robotics, knowledge based systems, process control, as well as other kinds of engineering applications. PNs have an inherent quality in representing logic in an intuitive and visual way. The reasoning path of an expert system can be reduced to a simple sprouting tree if Fuzzy Petri Net (FPN) based algorithms are applied as an inference engine. Many results prove that FPN is suitable for representing and reasoning about fuzzy logic implication relations. FPN is widely applied in knowledge system representation and redundancy reduction. But there exist some main weaknesses when a system is complex:
• The complexity of a knowledge system causes a huge fuzzy Petri net model, which hampers the application of FPN.
• Knowledge systems are updated or modified frequently. Suitable models for them should be adaptable.
• Knowledge cannot be classified in an FPN as well as in human cognition. A knowledge system may be made of substructures; for example, an expert system may be divided into several subsystems according to different types of knowledge. But how to abstract these subsystems of knowledge? A methodology or a principle is necessary. This means that the perspective selection is very important.
Following object orientation methodology, in this paper we propose a modification of FPN which is called the Object Oriented Fuzzy Petri Net (OOFPN) model. The object oriented colored Petri net (OOCPN) has been proved successful for manufacturing systems modeling and simulation. OOCPN was developed based on the colored Petri net and object orientation methodology, and object classes are abstracted to colors of object subnets. Here we want to extend this idea to the fuzzy Petri net modeling process.
II. PRODUCTION RULES AND FUZZY PETRI NET
In order to properly represent real world knowledge, fuzzy
production rules have been used for knowledge
representation. A fuzzy production rule (FPR) is a rule
which describes the fuzzy relation between two
propositions. If the antecedent portion of a fuzzy production
rule contains AND or OR connectors, then it is called a
composite fuzzy production rule. If the relative degree of
importance of each proposition in the antecedent
contributing to the consequent is considered, a Weighted Fuzzy Production Rule (WFPR) has to be introduced.
Let R be a set of weighted fuzzy production rules, R = {R1, R2, ..., Rn}. The general formulation of the ith weighted fuzzy production rule is as follows:
Ri: IF a THEN c (CF = μ), Th, w
where a = <a1, a2, ..., an> is the antecedent portion, which comprises one or more propositions connected by either AND or OR; c is the consequent proposition; μ is the certainty factor of the rule; Th is the threshold; and w is the weight. In general, WFPRs are categorized into three types
which are defined as follows:
Type 1: A Simple Fuzzy Production Rule
R: IF a THEN c (CF = μ), λ, w
Type 2: A Composite Conjunctive Rule
R: IF a1 AND a2 AND ... AND an THEN c (CF = μ), λ, w1, w2, ..., wn
Type 3: A Composite Disjunctive Rule
R: IF a1 OR a2 OR ... OR an THEN c (CF = μ), λ1, λ2, ..., λn, w1, w2, ..., wn
In order to capture more information about the weights, the FPN model has been enhanced to include a set of threshold values and weights; it consists of a 13-tuple

FPN = (P, T, D, Th, I, O, F, W, f, α, β, γ, θ)    (1)
where Th = {λ1, λ2, ..., λn} denotes a set of threshold values; F = {f1, f2, ..., fs} denotes a set of fuzzy sets; W = {w1, w2, ..., wr} denotes a set of weights of WFPRs; α: P → F is an association function which assigns a fuzzy set to each place; γ: P → Th is an association function which defines a mapping from places to threshold values, so that each proposition in the antecedent is assigned a threshold value; and θ: P → W is an association function which assigns a weight to each place. The definitions of P, T, D, I, O, f and β are the same as in Definition 1 below.
But it cannot adjust itself in response to knowledge updating; in other words, it has no learning ability. We may introduce a learning mechanism into it.

Definition 1: A fuzzy Petri net with learning ability is a 9-tuple

AFPN = (P, T, D, I, O, α, β, Th, W)

where P, T, D, I, O, α, β are defined as follows:
P = {p1, p2, p3, ..., pn} is a finite set of places;
T = {t1, t2, t3, ..., tm} is a finite set of transitions;
D = {d1, d2, d3, ..., dn} is a finite set of propositions, with |P| = |D|;
I: T → P∞ is the input function, a mapping from transitions to bags of places;
O: T → P∞ is the output function, a mapping from transitions to bags of places;
f: T → [0, 1] is an association function, a mapping from transitions to real values between zero and one;
α: P → [0, 1] is an association function, a mapping from places to real values between zero and one;
β: P → D is an association function, a bijective mapping from places to propositions;
Th: T → [0, 1] is the function which assigns a threshold value λi between zero and one to transition ti, with Th = {λ1, λ2, ..., λm};
W = WI ∪ WO, where WI: I → [0, 1] and WO: O → [0, 1] are the sets of input and output weights, which assign weights to all the arcs of the net.

The mappings of the three types of weighted fuzzy production rules into this fuzzy Petri net are shown in Fig. 1, Fig. 2 and Fig. 3 respectively. The three types of WFPR may be represented as follows:

Type 1: A Simple Fuzzy Production Rule
R: IF a THEN c, with Th(t) = λ, WO(t, pj) = μ, WI(pi, t) = w
Type 2: A Composite Conjunctive Rule
R: IF a1 AND a2 AND ... AND an THEN c, with Th(t) = λ, WO(t, pj) = μ, WI(pi, t) = wi, i = 1, ..., n
Type 3: A Composite Disjunctive Rule
R: IF a1 OR a2 OR ... OR an THEN c, with Th(ti) = λi, WO(ti, pj) = μ, WI(pj, ti) = wi, i = 1, ..., n

The mapping may be understood as follows: each transition corresponds to a simple rule, a composite conjunctive rule or a disjunctive branch of a composite disjunctive rule; each place corresponds to a proposition (antecedent or consequent).
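To make the three rule types concrete, the sketch below evaluates them numerically, using min for AND, max for OR, and multiplication by the rule's certainty factor μ, in line with the computation in the case study later in the paper; the threshold test and the sample numbers are illustrative assumptions.

# Illustrative evaluation of the three WFPR types (min for AND, max
# for OR, product with the certainty factor mu); thresholds gate the
# firing. All numbers are placeholders.

def simple_rule(a, mu, threshold):
    # Type 1: IF a THEN c (CF = mu)
    return a * mu if a >= threshold else 0.0

def conjunctive_rule(antecedents, mu, threshold):
    # Type 2: IF a1 AND ... AND an THEN c (CF = mu)
    strength = min(antecedents)
    return strength * mu if strength >= threshold else 0.0

def disjunctive_rule(antecedents, mus, threshold):
    # Type 3: IF a1 OR ... OR an THEN c, one CF per disjunctive branch
    fired = [a * m for a, m in zip(antecedents, mus) if a >= threshold]
    return max(fired, default=0.0)

print(conjunctive_rule([0.58, 0.78], mu=0.90, threshold=0.20))  # 0.522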
III. OBJECT ORIENTED FUZZY PETRI NET (OOFPN)
In the real world, knowledge may be classified clearly, but this classification cannot be expressed by an ordinary fuzzy Petri net. So, a methodology is necessary to guide us in modeling a knowledge system the way our cognition does. The object oriented idea provides us with a formal bottom-up modeling method, and it is widely used in system design and programming. A knowledge system consists of a great deal of knowledge which comes from different areas and from different levels of abstraction. So, it is reasonable to utilize object oriented methodology to guide the modeling process. The fuzzy Petri net has been evaluated to be one of the best models for knowledge systems. So, in this section, we will show how to use the object oriented idea in the FPN modeling process.
When we observe the world, we always take a perspective. Modeling a knowledge system with an ordinary Petri net is usually organized by analyzing the logic structure, i.e. it is process-oriented. Object oriented modeling changes this viewpoint: we try to find the objects in the knowledge system regardless of the system's running rules. An object oriented model of knowledge does not care about what rules are being processed, but pays attention to the structure, the subsystems and how they communicate with each other. If we know all this exactly, we can develop a subnet model for each object and connect these subnets according to their relations.

From the object-oriented point of view, a system is composed of objects and their interconnecting relations. OOFPN is developed according to this concept, i.e. OOFPN consists of two parts:

OOFPN = (O, R)

where O = {O1, O2, ..., Ok} is the finite set of objects, in which each Oi, i = 1, 2, ..., k, is an object described by a colored Petri net, and R is the set of communicating relations between objects, described by common message places between object subnets.

Object Subnet

Definition 2: An object subnet is a 9-tuple with the same structure as the fuzzy Petri net defined in the last section,

Oi = (Pi, Ti, Di, Ii, Oi, αi, βi, Thi, Wi)

where
Pi = {pi1, pi2, ..., pin} denotes a set of places;
Ti = {ti1, ti2, ..., tim} is a set of transitions;
Di = {di1, di2, ..., din} is a set of propositions;
Ii (Oi): Ti → Pi is the input (output) function, which defines a mapping from transitions to bags of places;
αi: Pi → [0, 1] is an association function which assigns a real value between zero and one to each place;
βi: Pi → Di is a bijective mapping between the propositions and place labels for each node;
Thi: Ti → [0, 1] is the function which assigns a threshold value λik between zero and one to transition tik;
Wi = WIi ∪ WOi, where WIi: Ii → [0, 1] and WOi: Oi → [0, 1] are the sets of input and output weights, which assign weights to all the arcs of the subnet.

Relation

In OOFPN, the relations between objects depend on their common message places: if Pi ∩ Pj ≠ ∅, the two objects Oi and Oj communicate. OOFPN consists of many object subnets; based on a communication mechanism, for example a wait-and-reply mechanism, we connect these object subnets through their common message places to obtain the OOFPN model, i.e. OOFPN = ||i Oi. The result is still an FPN

AFPN = (P, T, D, I, O, α, β, Th, W)

as in [4], except that the net can be divided into several sub-structures, which improves the readability and maintainability of the model.

Transition Firing

Definition 3 (route): Given a place p, a transition string t1 t2 ... tn is called a route to p if p can get a token through firing this transition string in sequence from a group of source places. If a transition string fires in sequence, we call the corresponding route active. For a place p, there may be more than one route to it; for example, in Fig. 5, t1 t3 t4 is a route to P6 and t2 is another route to it. Let I(t) = {pi1, pi2, ..., pin}, with wI1, wI2, ..., wIn the corresponding input weights of these places and λ1, λ2, ..., λn the thresholds. Let O(t) = {po1, po2, ..., pom}, with wo1, wo2, ..., wom the corresponding output weights of these places.
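The composition rule OOFPN = ||i Oi can be sketched very simply: two object subnets communicate exactly when their place sets share a message place, and the composed net is their union with shared places merged by name. The subnet contents below are made-up labels.

# Two object subnets with a shared message place 'msg'.
O1 = {"places": {"p11", "p12", "msg"}, "transitions": {"t11"}}
O2 = {"places": {"p21", "msg"}, "transitions": {"t21"}}

def communicate(a, b):
    # objects communicate iff they share at least one message place
    return bool(a["places"] & b["places"])

def compose(subnets):
    # union of the subnets; shared message places merge by name
    return {
        "places": set().union(*(o["places"] for o in subnets)),
        "transitions": set().union(*(o["transitions"] for o in subnets)),
    }

print(communicate(O1, O2))                  # True, via 'msg'
print(sorted(compose([O1, O2])["places"]))  # merged place set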
Definition 4: The marking of a place, m(p), is defined as the certainty factor of the token in it.

Definition 5: ∀t ∈ T, t is enabled if ∀pIj ∈ I(t), m(pIj) > 0, j = 1, 2, ..., n.

Definition 6: When t is enabled, it produces a new certainty factor CF(t):

CF(t) = Σj wIj · m(pIj),  if Σj wIj · m(pIj) ≥ Th(t);
CF(t) = 0,                if Σj wIj · m(pIj) < Th(t).

We divide the set of places P into three parts, P = PUI ∪ Pint ∪ PO, where P is the set of places of the AFPN: PUI = {p ∈ P | ·p = ∅}, and p ∈ PUI is called a user input place; Pint = {p ∈ P | ·p ≠ ∅ and p· ≠ ∅}, and p ∈ Pint is called an interior place; PO = {p ∈ P | p· = ∅}, and p ∈ PO is called an output place. Here ∅ is the empty set.

IV. MODELING KNOWLEDGE SYSTEMS WITH OOFPN

4.1 OOFPN and Knowledge System

OOFPN is a modeling approach rather than a model. It has the following correspondences with the real world:
1. Objects and Relations: Real world knowledge comes from different resources and their communication. OOFPN portrays knowledge resources as object subnets and joins them together by their communication relations.
2. Fuzzy Production Rules: Every knowledge resource in the real world has its own rules, which are relatively independent and may be described by WFPRs. OOFPN models and simulates WFPRs.
3. Concurrency and Conflict: Concurrency occurs within an object subnet and also among multiple objects. Conflicts within OOFPN are what should be prohibited in knowledge systems, because they cause inconsistency.
4. Logic causality: The fuzzy Petri net structure preserves causality, and thereby preserves the correct logic relations.
5. Knowledge learning: Knowledge in a knowledge system may be modified frequently; OOFPN does not affect the learning capability of the AFPN.

Remark 1: For some complicated systems, it is possible that we need to develop several hierarchies. Between these hierarchies, a more detailed net is represented by a condensed transition at its higher level.

4.2 Modeling steps with OOFPN

For an actual knowledge system, we assume that the system is described by production rules. One can develop its OOFPN model according to the following steps.
Step 1: Analyze the knowledge resources of the system, classify the object classes hierarchically, and analyze their relations.
Step 2: Analyze the dynamics of each encapsulated object in these hierarchies.
Step 3: Develop object subnets according to the above analysis, and map the production rules into OOFPN subnets (according to the mapping in the last section).
Step 4: Connect all the subnets which have common places, to obtain the geometric structure of the OOFPN.
Step 5: According to the current state, set the data for α, β, Th, W. The OOFPN model is then obtained.

V. CASE STUDY

Example 1: There is an Intelligent Control Expert System. The knowledge for this expert system comes from three experts: the first (E1) is familiar with continuous processes; the second (E2) is an expert on discrete event systems; the last (E3) is a control engineer. A lot of the information is duplicated. For example, E1 and E2 have some knowledge on practice which is the same as that of E3. If we use OOFPN to model this expert system, this problem can be avoided. The structure is shown in Fig. 4. In order to illustrate clearly, we use the following rules for the expert system.
The machine shop may have three different machines M1, M2, M3, two job types (1 and 2) and one operator. The complete system has the following conditions:
1. A job can start processing if the operator and the respective machines are available.
2. A job of type 1 requires two stages of machining: first it is processed by machine M1 and then by machine M2.
3. A job of type 2 requires a single stage of machining: it is processed by machine M3.

Fig 5: An example of an AFPN

For E1: Continuous Process, i.e. the Process Queue, we have the following weighted fuzzy production rules Γ1:
• IF C31 THEN C11
• IF P11 AND C11 THEN C12
• IF P12 OR C12 THEN C13
where C31: Job is ready; C11: Job is processed by M1; C12: Job will be processed by M2; C13: Job will be processed by M3; P11: Job is of type 1; P12: Job is of type 2.

For E2: Discrete Event System, i.e. the Machine Availability System, we have the following weighted fuzzy production rules Γ2:
• IF P21 AND C13 THEN C21
• IF P22 AND C31 OR C12 THEN C22
where C12: Job will be processed by M2; C13: Job will be processed by M3; C21: Job of type 1 will be finished; C22: Job of type 2 will be finished; C31: Job is ready; P21: Machine 2 is available; P22: Machine 3 is available.

For E3: Control Engineer, i.e. the Operator, we have the following weighted fuzzy production rule Γ3:
• IF P31 THEN C31
where P31: Operator is available.

Here P1− denotes a proposition of E1 and P2− a proposition of E2; C1− denotes a consequent of E1 and C3− a consequent of E3. The OOFPN model for this expert system is shown in Fig. 6.

Fig 6: The OOFPN model
COMPUTATION:
Given λ1, λ2, ..., λn = 0.20, and
P11 = 0.78, T31 = 0.76 = μ
P12 = 0.85, T11 = 0.86 = μ
P21 = 0.90, T12 = 0.90 = μ
P22 = 0.94, T13 = 0.95 = μ
P31 = 0.90, T14 = 0.69 = μ
T21 = 0.70 = μ, T22 = 0.82 = μ, T23 = 0.78 = μ,
we obtain:
C31 = P31 * T31 = 0.90 * 0.76 = 0.68
C11 = C31 * T11 = 0.68 * 0.86 = 0.58
C12 = MIN(C11, P11) * T12 = MIN(0.58, 0.78) * 0.90 = 0.52
C13 = MAX(C12 * T13, P12 * T14) = MAX((0.52 * 0.95), (0.85 * 0.69)) = 0.58
C22 = MAX(MIN(C13, P22) * T22, (C12 * T21)) = MAX(MIN(0.58, 0.94) * 0.82, (0.52 * 0.70)) = 0.47
C21 = MIN(P21, C13) * T23 = MIN(0.90, 0.58) * 0.78 = 0.45
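The hand computation above can be checked with a short script; the trunc2 helper is an assumption that mimics the truncation to two decimal places which the quoted figures suggest.

import math

def trunc2(x):
    # truncate to two decimals, as the quoted figures suggest
    return math.floor(x * 100) / 100

P11, P12, P21, P22, P31 = 0.78, 0.85, 0.90, 0.94, 0.90
T31, T11, T12, T13, T14 = 0.76, 0.86, 0.90, 0.95, 0.69
T21, T22, T23 = 0.70, 0.82, 0.78

C31 = trunc2(P31 * T31)                            # 0.68
C11 = trunc2(C31 * T11)                            # 0.58
C12 = trunc2(min(C11, P11) * T12)                  # 0.52
C13 = trunc2(max(C12 * T13, P12 * T14))            # 0.58
C22 = trunc2(max(min(C13, P22) * T22, C12 * T21))  # 0.47
C21 = trunc2(min(P21, C13) * T23)                  # 0.45
print(C31, C11, C12, C13, C22, C21)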
VI. CONCLUSIONS
This paper introduces a new approach to complex knowledge system modeling. The proposed OOFPN model is an FPN model developed following object oriented methodology. The illustrated example shows that it is a bottom-up modeling approach which can make the modeling process easier.
REFERENCES
[1] H. Scarpelli, F. Gomide, and R.R. Yager, A reasoning algorithm for high-level fuzzy Petri nets, IEEE Trans. Fuzzy Systems, 282-293, 4(3), 1996.
[2] S. Chen, J. Ke, and J. Chang, Knowledge representation using fuzzy Petri nets, IEEE Trans. Knowledge and Data Engineering, 311-319, 2(3), 1990.
[3] X. Li, X. Xu and F. Lara, Modeling manufacturing systems using object oriented colored Petri nets, International Journal of Intelligent Control and Systems, Vol. 3, 359-375, 1999.
[4] X. Li, W. Yu and F. Lara, Dynamic Knowledge Inference and Learning under Adaptive Fuzzy Petri Net Framework, IEEE Trans. on Systems, Man, and Cybernetics, Part C, Vol. 30, No. 4, 442-450, 2000.
[5] C.G. Looney, Fuzzy Petri nets and applications, in Fuzzy Reasoning in Information, Decision and Control Systems, edited by Spyros G. Tzafestas and Anastasios N. Venetsanopoulos, Kluwer Academic Publishers, pp. 511-527, 1994.
[6] A.J. Bugarin and S. Barro, Fuzzy reasoning supported by Petri nets, IEEE Trans. Fuzzy Systems, 135-150, 2(2), 1994.
[7] K. Hirota and W. Pedrycz, OR/AND neuron in modeling fuzzy set connectives, IEEE Trans. Fuzzy Systems, 151-161, 2(2), 1994.
[8] W. Pedrycz and F. Gomide, A generalized fuzzy Petri net model, IEEE Trans. Fuzzy Systems, 295-301, 2(4), 1994.
[9] D.S. Yeung and E.C.C. Tsang, A multilevel weighted fuzzy reasoning algorithm for expert systems, IEEE Trans. SMC - Part A: Systems and Humans, 149-158, 28(2), 1998.
[10] F. Lara-Rosano, Fuzzy causal modeling of complex systems through Petri paradigm and neural nets, in Advances in Artificial Intelligence and Engineering Cybernetics, Vol. III, George E. Lasker (ed.), Windsor, Canada: International Institute for Advanced Systems Research and Cybernetics, 125-129, 1994.
[11] T. Cao and A.C. Sanderson, Representation and analysis of uncertainty using fuzzy Petri nets, J. of Intelligent and Fuzzy Systems, Vol. 3, 3-19, 1995.
[12] S.M. Chen, A fuzzy reasoning approach for rule based systems based on fuzzy logics, IEEE Trans. SMC - Part B: Cybernetics, 26(5), 769-778, 1996.
[13] M.L. Garg, S.I. Ahson and P.V. Gupta, A fuzzy Petri net for knowledge representation and reasoning, Information Processing Letters, Vol. 39, 165-171, 1991.
[14] W. Yu and X. Li, Some New Results on System Identification with Dynamic Neural Networks, IEEE Trans. Neural Networks, Vol. 12, No. 2, 412-417, 2001.
[15] Wen Yu and Xiaoou Li, Some Stability Properties of Dynamic Neural Networks, IEEE Trans. Circuits and Systems, Part I, Vol. 48, No. 1, 256-259, 2001.
[16] D.S. Yeung and E.C.C. Tsang, Fuzzy knowledge representation and reasoning using Petri nets, Expert Systems with Applications, Vol. 7, 281-290, 1994.
Concurrent object oriented Real time system using Petri Net through Dynamic
Programming
Pravakar Mishra
SIET, Dhenkanal, India
Abstract- The paper introduces concurrent object orientation and then considers how real time support can be added. The paper describes how concurrent objects can be mapped to sporadic and periodic tasks through inheritance within the concurrent object oriented model. In this paper a parametric description for the state space of an arbitrary TPN (Time Petri Net) is given. An enumerative procedure for reducing the state space is introduced. The reduction is defined as a truncated multistage decision problem and solved recursively. A reachability graph is defined in a discrete way by using the reachable integer-states of the TPN.

Keywords: Time Petri Net, dynamic programming, state space reduction, integer-state, reachability graph

I. PETRI NET BASICS

A Petri net consists of places, transitions, and directed arcs. Arcs run from a place to a transition or vice versa, never between places or between transitions. The places from which an arc runs to a transition are called the input places of the transition; the places to which arcs run from a transition are called the output places of the transition.

Places may contain a natural number of tokens. A distribution of tokens over the places of a net is called a marking. A transition of a Petri net may fire whenever there is a token at the start of all input arcs; when it fires, it consumes these tokens, and places tokens at the end of all output arcs. A firing is atomic, i.e., a single non-interruptible step.

Execution of Petri nets is nondeterministic: when multiple transitions are enabled at the same time, any one of them may fire. If a transition is enabled, it may fire, but it doesn't have to. Since firing is nondeterministic, and multiple tokens may be present anywhere in the net (even in the same place), Petri nets are well suited for modeling the concurrent behavior of distributed systems.

II. FORMAL DEFINITION AND BASIC TERMINOLOGY

The following formal definition is loosely based on (Peterson 1981). Many alternative definitions exist.

Syntax

A Petri net graph (called Petri net by some, but see below) is a 3-tuple (S, T, W), where
• S is a finite set of places
• T is a finite set of transitions
• S and T are disjoint, i.e. no object can be both a place and a transition
• W: (S × T) ∪ (T × S) → ℕ is a multiset of arcs, i.e. it defines arcs and assigns to each arc a non-negative integer arc multiplicity; note that no arc may connect two places or two transitions.

The flow relation is the set of arcs: F = {(x, y) | W(x, y) > 0}. In many textbooks, arcs can only have multiplicity 1, and they often define Petri nets using F instead of W. A Petri net graph is a bipartite multigraph with node partitions S and T.

The preset of a transition t is the set of its input places: •t = {s ∈ S | W(s, t) > 0}; its postset is the set of its output places: t• = {s ∈ S | W(t, s) > 0}. Definitions of pre- and postsets of places are analogous.

A marking of a Petri net (graph) is a multiset of its places, i.e., a mapping M: S → ℕ. We say the marking assigns to each place a number of tokens.
A Petri net (called marked Petri net by some, see above) is a 4-tuple N = (S, T, W, M0), where
• (S, T, W) is a Petri net graph;
• M0 is the initial marking, a marking of the Petri net graph.

Execution semantics

The behavior of a Petri net is defined as a relation on its markings, as follows. Note that markings can be added like any multiset: (M + M′)(s) = M(s) + M′(s) for every place s. The execution of a Petri net graph can be defined as the transition relation → on its markings, as follows:
• for any t in T: M →t M′ iff M(s) ≥ W(s, t) for every place s, and M′(s) = M(s) − W(s, t) + W(t, s) for every place s.
In words:
• firing a transition t in a marking M consumes W(s, t) tokens from each of its input places s, and produces W(t, s) tokens in each of its output places s;
• a transition is enabled (it may fire) in M if there are enough tokens in its input places for the consumptions to be possible, i.e. iff M(s) ≥ W(s, t) for every place s.

We are generally interested in what may happen when transitions may continually fire in arbitrary order. We say that a marking M′ is reachable from a marking M in one step if M →t M′ for some transition t; we say that it is reachable from M if M →* M′, where →* is the reflexive transitive closure of →; that is, if it is reachable in 0 or more steps.

For a (marked) Petri net N = (S, T, W, M0), we are interested in the firings that can be performed starting with the initial marking M0. Its set of reachable markings is the set R(N) = {M | M0 →* M}. The reachability graph of N is the transition relation → restricted to its reachable markings R(N). It is the state space of the net.

A firing sequence for a Petri net with graph G and initial marking M0 is a sequence of transitions w = t1 t2 ... tn such that M0 →t1 M1 →t2 ... →tn Mn. The set of firing sequences is denoted as L(N).

Variations on the definition

As already remarked, a common variation is to disallow arc multiplicities and replace the bag of arcs W with a simple set, called the flow relation, F ⊆ (S × T) ∪ (T × S). This doesn't limit expressive power, as both can represent each other. Another common variation, e.g. in Desel and Juhás (2001),[2] is to allow capacities to be defined on places. This is discussed under extensions below.

Formulation in terms of vectors and matrices

The markings of a Petri net N = (S, T, W, M0) can be regarded as vectors of nonnegative integers of length |S|. Its transition relation can be described as a pair of |S| by |T| matrices:
• W−, defined by W−(s, t) = W(s, t);
• W+, defined by W+(s, t) = W(t, s).
Then their difference WT = W+ − W− can be used to describe the reachable markings in terms of matrix multiplication, as follows. For any sequence of transitions w, write o(w) for the vector that maps every transition to its number of occurrences in w. Then M0 →w M implies M = M0 + WT · o(w). Note that it must be required that w is a firing sequence; allowing arbitrary sequences of transitions will generally produce a larger set.
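A minimal sketch of the execution semantics above, assuming a dictionary-based encoding of the weight function W: markings are counters, a transition is enabled iff every input place holds at least W(s, t) tokens, and firing consumes and produces tokens accordingly. The producer/consumer net is a made-up example.

from collections import Counter

class PetriNet:
    def __init__(self, inputs, outputs, marking):
        self.inputs = inputs      # W(s, t): {transition: {place: weight}}
        self.outputs = outputs    # W(t, s): {transition: {place: weight}}
        self.marking = Counter(marking)

    def enabled(self, t):
        # t is enabled iff M(s) >= W(s, t) for every input place s
        return all(self.marking[s] >= w for s, w in self.inputs[t].items())

    def fire(self, t):
        # consume W(s, t) tokens, then produce W(t, s) tokens
        assert self.enabled(t), f"transition {t} is not enabled"
        for s, w in self.inputs[t].items():
            self.marking[s] -= w
        for s, w in self.outputs[t].items():
            self.marking[s] += w

net = PetriNet(
    inputs={"produce": {}, "consume": {"buffer": 1}},
    outputs={"produce": {"buffer": 1}, "consume": {}},
    marking={"buffer": 0},
)
net.fire("produce")
print(net.enabled("consume"), dict(net.marking))  # True {'buffer': 1}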
Reachability
The reachability problem for Petri nets is to decide,
given a Petri net N and a marking M, whether
.
Clearly, this is a matter of walking the
reachability graph defined above, until either we
reach the requested marking or we know it can no
longer be found. This is harder than it may seem at
first: the reachability graph is generally infinite,
and it is not easy to determine when it is safe to
stop.
In fact, this problem was shown to be EXPSPACEhard[4] years before it was shown to be decidable at
all (Mayr, 1981). Papers continue to be published
on how to do it efficiently[5]
While reachability seems to a be a good tool to find
erroneous states, for practical problems the
constructed graph usually has far too many states
to calculate. To alleviate this problem, linear
temporal logic is usually used in conjunction with
the tableau method to prove that such states cannot
be reached. LTL uses the semi-decision technique
to find if indeed a state can be reached, by finding
a set of necessary conditions for the state to be
reached then proving that those conditions cannot
be satisfied.
Liveness

[Figure (b), Petri net example: a Petri net in which transition t0 is dead, while for all j ∈ {1, 2, 3}, tj is Lj-live.]
Petri nets can be described as having different
degrees of liveness L1–L4. A Petri net (N, M0) is
called Lk-live iff all of its transitions are Lk-live,
where a transition is
•
dead, iff it can never fire, i.e. it is not in
any firing sequence in L(N,M0)
•
L1-live (potentially fireable), iff it may
fire, i.e. it is in some firing sequence in
L(N,M0)
• L2-live, iff it can fire arbitrarily often, i.e. if for every positive integer k, it occurs at least k times in some firing sequence in L(N,M0)
• L3-live, iff it can fire infinitely often, i.e. if for every positive integer k, it occurs at least k times in V, for some prefix-closed set of firing sequences V
• L4-live (live), iff it may always fire, i.e., it is L1-live in every reachable marking in R(N,M0)

Note that these are increasingly stringent requirements: Lj+1-liveness implies Lj-liveness, for $j \in \{1, 2, 3\}$. These definitions are in accordance with Murata's overview,[6] which additionally uses L0-live as a term for dead.

Boundedness

[Figure: an unbounded Petri net, N.]

A place in a Petri net is called k-bounded if it does not contain more than k tokens in all reachable markings, including the initial marking; it is safe if it is 1-bounded; it is bounded if it is k-bounded for some k. A (marked) Petri net is called k-bounded, safe, or bounded when all of its places are. A Petri net (graph) is called (structurally) bounded if it is bounded for every possible initial marking. Note that a Petri net is bounded if and only if its reachability graph is finite. Boundedness is decidable by looking at covering, by constructing the Karp–Miller tree.

It can be useful to explicitly impose a bound on places in a given net. This can be used to model limited system resources. Some definitions of Petri nets explicitly allow this as a syntactic feature.[7] Formally, Petri nets with place capacities can be defined as tuples (S, T, W, C, M0), where (S, T, W, M0) is a Petri net, C an assignment of capacities to (some or all) places, and the transition relation is the usual one restricted to the markings in which each place with a capacity has at most that many tokens. For example, if in the net N both places are assigned capacity 2, we obtain a Petri net with place capacities, say N2; its reachability graph is displayed on the right.

[Figure: the reachability graph of N2.]

Alternatively, places can be made bounded by extending the net. To be exact, a place can be made k-bounded by adding a "counter-place" with flow opposite to that of the place, and adding tokens to make the total in both places k.

[Figure: a two-bounded Petri net, obtained by extending N with "counter-places".]
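The counter-place construction can be sketched as a simple net transformation (our illustrative encoding of a net as consume/produce maps, not a library API): for a place s to be bounded by k, every transition's net flow into s is mirrored by an opposite flow on a new place s_bar, initialized with k − M0(s) tokens.

```python
# Sketch: make place s k-bounded by adding a counter-place "s_bar".
def add_counter_place(net, marking, s, k):
    s_bar = s + "_bar"
    for consume, produce in net.values():
        # Mirror the flow: whatever a transition adds to s it removes
        # from s_bar, and vice versa, so M(s) + M(s_bar) stays equal to k.
        flow = produce.get(s, 0) - consume.get(s, 0)
        if flow > 0:
            consume[s_bar] = consume.get(s_bar, 0) + flow
        elif flow < 0:
            produce[s_bar] = produce.get(s_bar, 0) - flow
    marking[s_bar] = k - marking.get(s, 0)

net = {"t1": ({}, {"p": 1})}   # t1 keeps adding tokens to p: p is unbounded
marking = {"p": 0}
add_counter_place(net, marking, "p", 2)
print(net, marking)            # t1 now also consumes from p_bar, so p <= 2
```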
III. DISCRETE, CONTINUOUS, AND HYBRID PETRI NETS

As well as for discrete events, there are Petri nets for continuous and hybrid discrete-continuous processes; these are useful in discrete, continuous and hybrid control theory,[8] and are related to discrete, continuous and hybrid automata.
Extensions
There are many extensions to Petri nets. Some of them are completely backwards-compatible with the original Petri net (e.g. coloured Petri nets), while others add properties that cannot be modelled in the original Petri net formalism (e.g. timed Petri nets). If they can be modelled in the original Petri net, they are not real extensions; instead, they are convenient ways of showing the same thing, and can be transformed
with mathematical formulas back to the original
Petri net, without losing any meaning. Extensions
that cannot be transformed are sometimes very
powerful, but usually lack the full range of
mathematical tools available to analyse normal
Petri nets.
The term high-level Petri net is used for many Petri
net formalisms that extend the basic P/T net
formalism; this includes coloured Petri nets,
hierarchical Petri nets, and all other extensions
sketched in this section. The term is also used
specifically for the type of coloured nets supported
by CPN Tools.
A short list of possible extensions:
•
Additional types of arcs; two common
types are:
o
a reset arc does not impose a
precondition on firing, and empties the
place when the transition fires; this makes
reachability undecidable,[9] while some
other properties, such as termination,
remain decidable;[10]
o
an inhibitor arc imposes the
precondition that the transition may only
fire when the place is empty; this allows
arbitrary computations on numbers of
tokens to be expressed, which makes the
formalism Turing complete.
•
In a standard Petri net, tokens are
indistinguishable. In a Coloured Petri Net,
every token has a value.[11] In popular tools for
coloured Petri nets such as CPN Tools, the
values of tokens are typed, and can be tested
(using guard expressions) and manipulated
with a functional programming language. A
subsidiary of coloured Petri nets are the well-formed Petri nets, where the arc and guard expressions are restricted to make it easier to analyse the net (a small sketch of coloured tokens follows this list).
•
Another popular extension of Petri nets is hierarchy: hierarchy in the form of different views supporting levels of refinement and abstraction was studied by Fehling. Another
form of hierarchy is found in so-called object
Petri nets or object systems where a Petri net
can contain Petri nets as its tokens inducing a
hierarchy of nested Petri nets that
communicate by synchronisation of transitions
on different levels. See [12] for an informal
introduction to object Petri nets.
• A Vector Addition System with States
(VASS) can be seen as a generalisation of a
Petri net. Consider a finite state automaton
where each transition is labelled by a
transition from the Petri net. The Petri net is
then synchronised with the finite state
automaton, i.e., a transition in the automaton
is taken at the same time as the corresponding
transition in the Petri net. It is only possible to
take a transition in the automaton if the
corresponding transition in the Petri net is
enabled, and it is only possible to fire a
transition in the Petri net if there is a transition
from the current state in the automaton
labelled by it. (The definition of VASS is
usually formulated slightly differently.)
• Prioritised Petri nets add priorities to transitions, whereby a transition cannot fire if a higher-priority transition is enabled (i.e. can
fire). Thus, transitions are in priority groups,
and e.g. priority group 3 can only fire if all
transitions are disabled in groups 1 and 2.
Within a priority group, firing is still nondeterministic.
• The non-deterministic property has been a
very valuable one, as it lets the user abstract a
large number of properties (depending on
what the net is used for). In certain cases,
however, the need arises to also model the
timing, not only the structure of a model. For
these cases, timed Petri nets have evolved,
where there are transitions that are timed, and
possibly transitions which are not timed (if
there are, transitions that are not timed have a
higher priority than timed ones). A subsidiary
of timed Petri nets are the stochastic Petri nets
that add nondeterministic time through
adjustable randomness of the transitions. The
exponential random distribution is usually
used to 'time' these nets. In this case, the nets'
reachability graph can be used as a Markov
chain.
•
Dualistic Petri Nets (dP-Nets) is a Petri
Net extension developed by E. Dawis, et al.[13]
to better represent real-world processes. dP-Nets
balance the duality of change/no-change,
action/passivity, (transformation) time/space,
etc., between the bipartite Petri Net constructs
of transformation and place resulting in the
unique characteristic of transformation
marking, i.e., when the transformation is
"working" it is marked. This allows for the
transformation to fire (or be marked) multiple
times representing the real-world behavior of
process throughput. Marking of the
transformation assumes that transformation
time must be greater than zero. A zero
transformation time used in many typical Petri
Nets may be mathematically appealing but
impractical in representing real-world
processes. dP-Nets also exploit the power of
Petri Nets' hierarchical abstraction to depict
Process architecture. Complex process
systems are modeled as a series of simpler
nets interconnected through various levels of
hierarchical abstraction. The process architecture of a packet switch is demonstrated in [14], where development requirements are
organized around the structure of the designed
system. dP-Nets allow any real-world process,
such as computer systems, business processes,
traffic flow, etc., to be modeled, studied, and
improved.
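Continuing the coloured-token idea mentioned in the list above, a minimal sketch (an illustrative encoding, not the CPN Tools model) in which tokens carry values and a guard expression is tested against the value a transition would consume:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    value: int          # in coloured nets every token has a value

# One place holding coloured tokens, one transition with a guard.
place = [Token(3), Token(8)]
guard = lambda tok: tok.value > 5   # transition may fire only on values > 5

def fire_once(place, guard):
    """Consume the first token satisfying the guard, produce its double."""
    for tok in list(place):
        if guard(tok):
            place.remove(tok)
            place.append(Token(tok.value * 2))   # arc expression
            return True
    return False        # transition not enabled for any binding

print(fire_once(place, guard), place)   # True, [Token(3), Token(16)]
```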
There are many more extensions to Petri nets; however, it is important to keep in mind that as the complexity of the net increases in terms of extended properties, it becomes harder to use standard tools to evaluate certain properties of the net. For this reason, it is a good idea to use the simplest net type possible for a given modelling task.
Other models of concurrency

Other ways of modelling concurrent computation have been proposed, including process algebra, the actor model, and trace theory.[15] Different models provide tradeoffs of concepts such as compositionality, modularity, and locality. An approach to relating some of these models of concurrency is proposed in the chapter by Winskel and Nielsen.[16]
IV. FUNDAMENTAL PROPERTIES
The properties of a Petri net, both the classical one as well as the TPN, can be divided into two parts: there are static properties, like being pure, ordinary, free choice, extended simple, conservative, etc., and there are dynamic properties, like being bounded, live, reachable, and having place or transition invariants, deadlocks, etc.
While it is easy to prove the static behavior of a net using only the static definition, the dynamic behavior depends on both the static and dynamic definitions and is quite complicated to prove. That means that in order to get good knowledge of the dynamic behavior of the net, the set of all possible situations reachable for the net has to be known, i.e. the state space must be known. As already mentioned, this set is in general infinite and therefore hard to handle.
Nevertheless, it is possible to pick up some
“essential” states only, so that qualitative and
quantitative analysis is possible. In [3] it is shown
that the essential states are the integer-states.
The aim of this section is to justify the reduction of
the state space of a certain TPN to a set of all its
reachable integer-states as an adequate set for
testing dynamical properties. To do this we use
dynamic programming.
Notions, notations, definitions and the approach referring to dynamic programming are used similarly to [5]. We consider the problem as a non-optimization problem, just like the abstract dynamic programming model considered in chapter 14.3 of [5], and solve it.
Time Petri nets (TPN) are derived from classical Petri nets. Additionally, each transition t is associated with a time interval $[a_t, b_t]$. Here $a_t$ and $b_t$ are relative to the time when t was enabled last. When t becomes enabled, it cannot fire before $a_t$ time units have elapsed, and it has to fire not later than $b_t$ time units, unless t got disabled in between by the firing of another transition. The firing itself of a transition takes no time. The time domain is the real numbers, but the interval bounds are nonnegative rational numbers. It is easy to see (cf. [2]) that w.l.o.g. the interval bounds can be considered as integers only. Thus, the interval bounds $a_t$ and $b_t$ of any transition t are natural numbers, including zero, with $a_t \le b_t$ or $b_t = \infty$.
Time Petri nets were introduced in the early seventies, as already mentioned. Berthomieu and Menasche in [10], resp. Berthomieu and Diaz in [11], provide a method for analyzing the qualitative behavior of the net. They divide the state space into state classes, which are described by a marking and a time domain given by inequalities. The reachability graph that they defined consists of these classes as
vertices and edges labeled by transitions. Thus, the edges of this graph contain essential time information (systems of inequalities). This is in contrast to the reachability graph used in this paper, which is a usual weighted digraph, where the time appears explicitly as weights on some edges. The reachability graph defined in [11] also has the property that the graph is finite iff the TPN is bounded. A similar definition of a reachability graph for a TPN is delivered in [12].
A new direction of investigation was started at the beginning of the nineties with the deployment of timed automata. Several authors, e.g. recently in [13] and [14], translate a given TPN into a timed automaton and then analyse the timed automaton in order to gain knowledge about the TPN. In this case well-proven algorithms from the area of timed automata (mainly for model checking) can be used. Only a few papers have been published connecting the theory of Petri nets and dynamic programming; mostly, they consider quantitative properties of systems.
V. CONCLUSIONS

In this paper a methodology that deploys dynamic programming in order to reduce the state space of a TPN is used. Thus, an enumeration procedure can compute a reachability graph for a given TPN. Since the graph is a usual directed weighted graph, the behaviour of the net can be studied by means of prevalent methods of graph theory. This is especially fruitful if the considered TPN is bounded. In order to accomplish quantitative analysis, effective algorithms can now be used, e.g., for computing the minimal and maximal time lengths of runs, the existence of a certain run with a given time length, etc.

REFERENCES
[1] Merlin, P.M.: A Study of the Recoverability of Computing Systems. PhD thesis, University of California, Computer Science Dept., Irvine (1974).
[2] Popova, L.: On Time Petri Nets. J. Inform. Process. Cybern. EIK 27(1991)4 (1991) 227–244.
[3] Popova-Zeugmann, L., Schlatter, D.: Analyzing Paths in Time Petri Nets. Fundamenta Informaticae (FI) 37, IOS Press, Amsterdam (1999) 311–327.
[4] Bellman, R.: Dynamic Programming. Princeton University Press, Princeton, New Jersey (1957).
[5] Sniedovich, M.: Dynamic Programming. Marcel Dekker, New York (1992).
[6] Bertsekas, D.: Dynamic Programming and Optimal Control, Vol. I, 2nd edition. Athena Scientific, Belmont, Mass. (2000).
[7] Popova-Zeugmann, L.: Zeit-Petri-Netze. PhD thesis, Humboldt-Universität zu Berlin (1989).
[8] Ebbinghaus, H.D., Flum, J., Thomas, W.: Mathematical Logic. Springer-Verlag, New York (1994).
[9] Popova-Zeugmann, L., Werner, M.: Extreme Runtimes of Schedules Modelled by Time Petri Nets. Fundamenta Informaticae (FI) 67, IOS Press, Amsterdam (2005) 163–174.
[10] Berthomieu, B., Menasche, M.: An Enumerative Approach for Analyzing Time Petri Nets. In: Proceedings IFIP Congress (1983).
[11] Berthomieu, B., Diaz, M.: Modeling and Verification of Time Dependent Systems Using Time Petri Nets. IEEE Transactions on Software Engineering 17(3) (1991) 259–273.
[12] Boucheneb, H., Berthelot, G.: Towards a Simplified Building of Time Petri Net Reachability Graphs. In: Proceedings of Petri Nets and Performance Models PNPM 93, Toulouse, France, IEEE Computer Society Press (1993).
[13] Cassez, F., Roux, O.H.: Structural Translation from Time Petri Nets to Timed Automata. In: Fourth International Workshop on Automated Verification of Critical Systems (AVoCS'04), Electronic Notes in Theoretical Computer Science, London (UK), Elsevier (2004).
[14] Penczek, W.: Partial Order Reductions for Checking Branching Properties of Time Petri Nets. Proc. of the Int. Workshop on CS&P'00, Informatik-Berichte Nr. 140(2) (2000) 189–202.
[15] Yee, S., Ventura, J.: A Dynamic Programming Algorithm to Determine Optimal Assembly Sequences Using Petri Nets. International Journal of Industrial Engineering – Theory, Applications and Practice, Vol. 6, No. 1 (1999) 27–37.
Effective Energy-Efficient Protocol in Wireless Sensor Networks
S. Abinash
J. Aparajeeta
Department of Computer Science Engineering
Synergy Institute of Engineering & Technology,
Dhenkanal, Odisha, India
E-mail: [email protected]
Department of Electronics and Communication
NMIET, Bhubaneswar, India
E-mail: [email protected]
Abstract- Energy is the most critical resource in the life of a wireless sensor node. In many applications of wireless sensor networks, sensors are deployed in adverse environments, such as volcano monitoring, underwater monitoring and battlefield surveillance. In such conditions it is difficult to manually replace the batteries, and the use of solar cells may not be feasible in all cases. Therefore, energy usage must be optimized to maximize the network life. It is known that for higher path loss exponent values, utilizing shorter communication links reduces the transmitter energy, whenever the radio equipment has power adjustment capability. Although the transmitter energy is one of the major factors of total energy dissipation, neglecting the overhead energy could result in suboptimal energy usage. Routing algorithms should also be concerned about the overhead energy which is wasted at each hop of data transfer. In this paper, we explain some techniques that help in reducing the energy consumption of the sensor node.
Keywords: Wireless sensor network, ZigBee, routing protocol.
I.
INTRODUCTION
Wireless sensor networks are changing our way of life
just as the Internet has revolutionized the way people
communicate with each other. Wireless sensor networks
combine distributed sensing, computation and wireless
communication. This new technology expands our sensing
capabilities by connecting the physical world to the
communication networks and enables a broad range of
applications. A large number of sensor nodes are now
being deployed into various environments and provide an
unprecedented view of how the world around us is
evolving.
Recent advances in digital cellular telephony technology,
distributed sensor networks (DSNs) and sensor fusion open
new avenues for the implementation of wireless sensor
networks in developing applications such as consumer
electronics, home and building automation, industrial
controls, PC peripherals, medical sensor applications, toys,
games, etc. The idea of wireless sensor networks has been investigated by a large number of authors.
Wireless sensor networks (WSN) are composed of small
miniaturized devices with limited sensing, processing and
computational capabilities. Wireless sensors can be
densely deployed across the monitored area, and enable a
broad range of applications such as environmental
monitoring, monitoring of fire and earthquake
emergencies, vehicle tracking, traffic control and
surveillance of city districts.
Power is one of the most important design constraints in
wireless sensor network architectures. The life of each
sensor node depends on its energy dissipation. In
applications where the sensors are not equipped with
energy scavenging tools like solar cells, sensors with
exhausted batteries cannot operate anymore. Moreover,
since sensor nodes behave as relay nodes for data
propagation of other sensors to sink nodes, network
connectivity decreases gradually. This may result in
disconnected sub-networks of sensors, i.e., some portions of the network cannot be reached at all. Therefore, the
level of power consumption must be considered at each
stage in wireless sensor network design.
Energy consumption of a wireless sensor network depends upon the network architecture, network size, sensor node
population model, the generation rate of sensing data,
initial battery budget available at each sensor, and data
communication protocols. Key data communication
protocols include those for medium access control, traffic
routing, as well as sleep (or duty cycle) management. So
different techniques have been developed in different fields, and we describe a few of them in this paper.
Different wireless standards are available like Bluetooth
and WiFi that address mid to high data rates for voice, PC
LANs, video, etc. Sensors and controls don’t need high
bandwidth but they do need low latency and very low
energy consumption for long battery lives and for large
device arrays. So a new global standard named ZigBee has been introduced by the IEEE (as the IEEE 802.15.4 standard) together with the ZigBee Alliance, to provide the first general standard for these applications. The features of ZigBee are described in this paper.
Data collected by many sensors in WSNs is typically
based on common phenomena, so there is a high
probability that this data has some redundancy. Such
redundancy needs to be exploited by the routing protocols
to improve energy and bandwidth utilization. We have also
described different energy efficient routing protocols in
this paper.
Sensor nodes can use up their limited supply of energy
performing computations and transmitting information in a
wireless environment. As such, energy-conserving forms
of communication and computation are essential. Sensor
node lifetime shows a strong dependence on battery
lifetime. In a multi-hop WSN, each node plays a dual role
as data sender and data router. The malfunctioning of some
sensor nodes due to power failure can cause significant
topological changes, and might require rerouting of packets
and reorganization of the network.
Along with the communication standard, routing
protocols, sensor node architecture, another factor that
affects power consumption is the addressing technique.
A problem occurs when all the sensor nodes try to send their data to the coordinators present in a limited area, because of beacon collision. So an addressing technique to avoid data interference and node failure is also described.
This paper is organized as follows. Section II provides a detailed description of the ZigBee standard for communication. In Section III different routing protocols are described, and an addressing scheme is described in Section IV. We conclude the paper with final remarks in Section V.
II. ZIGBEE
ZigBee is a new global standard for wireless
connectivity, focusing on standardizing and enabling
interoperability of products. ZigBee is a communications
standard that provides a short-range cost effective
networking capability. It has been developed with the
emphasis on low-cost battery powered applications.
ZigBee got its name from the way bees zig and zag while
tracking between flowers and relaying information to other
bees about where to find resources.
ZigBee is built on the robust radio (PHY) and medium access control (MAC) communication layers defined
by the IEEE 802.15.4 standard. ZigBee looks rather like
Bluetooth but is simpler, has a lower data rate and spends
most of its time snoozing. It is now widely recognized that standards such as Bluetooth and WLAN are not well suited to such control and monitoring applications. With ZigBee, the case is different: it is the only standard that specifically addresses the needs of wireless control and monitoring applications. These applications involve a large number of nodes/sensors (which necessitates wireless solutions), very low system/node costs, the need to operate for years on inexpensive batteries (this requires low-power RF ICs and protocols), reliable and secure links between network nodes, easy deployment, and no need for high data rates[1].
The ZigBee network node is designed for battery-powered operation or high energy savings: it searches for available networks, transfers data from its application as necessary, determines whether data is pending, requests data from the network coordinator, and can sleep for extended periods. There are two physical device types defined by the IEEE for the lowest system cost. A full function device (FFD) can function in any topology, is capable of being the network coordinator, and can talk to any other device. A reduced function device (RFD) is limited to the star topology, cannot become a network coordinator, talks only to a network coordinator, and has a very simple implementation. An IEEE 802.15.4/ZigBee network requires at least one full function device as a network coordinator, but endpoint devices may be reduced functionality devices to reduce system cost.
The FFD can operate in three modes (Fig. 1), serving as a personal area network (PAN) coordinator, a coordinator, or a device. An RFD is intended for applications that are extremely simple, such as a light switch or a passive infrared sensor; they do not have the need to send large amounts of data and may only associate with a single FFD at a time. Consequently, the RFD can be implemented using minimal resources and memory capacity.

[Fig. 1]
Topology Models
The ZigBee network coordinator (Fig. 2) sets up a network, transmits network beacons, manages network nodes, stores network node information, routes messages between paired nodes, and typically operates in the receive state. An FFD used as a coordinator needs sufficient
memory to hold the network configuration, data, and
processing power to self-configure the network in addition
to its application task. A router stores and forwards
messages to and from devices that can’t directly swap
messages between them. A coordinator would use a lot
more power than a simple node at the edge of the network
and may require line power or be powered from a device
with a substantial power supply.
ZigBee uses direct sequence spread spectrum (DSSS) modulation in mixed mesh, star, and peer-to-peer topologies (including cluster tree) to deliver a reliable data service with optional acknowledgments.
Figure 2. ZigBee network model
The range per node is a nominal 10 m, but popular
implementations have a single-hop range of up to 100 m
per node line of sight (and farther if relaying through other
nodes). ZigBee employs 64-bit IEEE addresses and shorter 16-bit ones for local addressing, which allows
thousands of nodes per network.
ZigBee might be the best option if the following are required: small size, cost sensitivity, low latency, low power, and interoperability. But the biggest reason to choose ZigBee is by far the implementation of cutting-edge technology: it zigzags its way around the other wireless options. Although it is inferior to almost all of the others in data rate, it surpasses them in terms of suitability for sophisticated equipment and data control. ZigBee is the best solution for low data rate, short-range communications (Fig. 3) in an energy-efficient way.
Fig 3
ZigBee position in wireless standard spectrum
III.
ROUTING PROTOCOLS
One of the main design goals of WSNs is to carry out data communication while trying to prolong the lifetime of the network and prevent connectivity degradation by employing aggressive energy management techniques. The design of routing protocols in WSNs is influenced by many challenging factors, which must be overcome before efficient communication can be achieved. To minimize energy consumption, some well-known routing tactics as well as tactics special to WSNs are used, such as data aggregation and in-network processing, clustering, different node role assignment, and data-centric methods. Almost all of the routing protocols can be classified according to the network structure as flat, hierarchical, or location-based. Furthermore, these protocols can be classified into multipath-based, query-based, negotiation-based, quality of service (QoS)-based, and coherent-based, depending on the protocol operation.
In flat networks all nodes play the same role, while
hierarchical protocols aim to cluster the nodes so that
cluster heads can do some aggregation and reduction of
data in order to save energy. Location-based protocols
utilize position information to relay the data to the desired
regions rather than the whole network. The last category
includes routing approaches based on protocol operation,
which vary according to the approach used in the protocol.
Although the transmitter energy is one of the major factors of total energy dissipation, neglecting the overhead energy could result in suboptimal energy usage. Routing algorithms should also be concerned about the overhead energy which is wasted at each hop of data transfer. One of the efficient energy-aware routing protocols is EADD: Energy Aware Directed Diffusion for Wireless Sensor Networks. This scheme changes a node's forwarding moment depending on that node's available energy: EADD allows nodes with more available energy to respond more quickly than nodes with lower available energy. The scheme is very simple, so it can be adapted to any forwarding strategy for routing protocols of wireless sensor networks. EADD helps to achieve a balanced distribution of the nodes' energy and an extension of the network life cycle [3].
This protocol focuses on the following considerations:
• Total communication cost of the path
• Average remaining energy of the nodes
on the path
• Minimum node energy on the path
• Node connectivity
It is likely that if the nodes on a gradient path have larger average remaining energy and minimum node energy than the others, the gradient will be reinforced. Conversely, the total communication cost and node connectivity should have smaller values than the others to set up an energy-efficient path.
However they didn’t state when to compare the paths to
select the best path. The best way to select the most energy
efficient path is to wait until entire gradient arrive but it
lower the routing performance. Consequently, they need to
restrict the gradient to reinforce.
EADD decides the moment to forward a packet based on each node's available energy. Let us assume that there are two gradient paths (path X and path Y) which receive the same interest message from the sink node. The nodes on path X have 60%, 30% and 20% available energy, whereas the nodes on path Y have 80%, 70% and 20% available energy. In EADD, paths X and Y therefore have different arrival times: if a node has more available energy, it gets a faster response time.
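As a rough sketch of this behaviour (our illustration; the exact EADD delay formula is not reproduced here), each node can delay its forwarding by a time that grows as its residual energy falls, so the higher-energy path reports back sooner:

```python
# Illustrative sketch of energy-aware forwarding delay (assumed form:
# delay grows as residual energy falls; the real EADD formula may differ).
BASE_DELAY = 10.0   # hypothetical time units per hop

def hop_delay(energy_fraction):
    """Less available energy -> longer wait before forwarding."""
    return BASE_DELAY * (1.0 - energy_fraction)

def path_arrival_time(energies):
    """Total forwarding delay accumulated along one gradient path."""
    return sum(hop_delay(e) for e in energies)

path_x = [0.60, 0.30, 0.20]   # available energy of nodes on path X
path_y = [0.80, 0.70, 0.20]   # available energy of nodes on path Y

print("path X:", path_arrival_time(path_x))   # 19.0
print("path Y:", path_arrival_time(path_y))   # 13.0 -> reinforced first
```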
[Figure 4. Routing protocols in WSNs [2], classified by network structure (flat network routing, hierarchical network routing, location-based routing) and by protocol operation (multipath-based, query-based, negotiation-based, QoS-based and coherent-based routing).]

After the source node receives the interest message, EADD starts to run. When a gradient is set up between source and destination, the nodes on the gradient should wait until the calculated time passes; that time depends on each node's available energy. When a node sets up the gradient with the previous node, it should fix the appointed time to forward the gradient to the next node. EADD selects a path that has more available energy than the others. It changes a node's forwarding moment depending on that node's available energy, i.e., it allows nodes with more available energy to respond more quickly than nodes with lower available energy.

[Figure 5. Nodes with different available energy: the nodes on path X hold 60%, 30% and 20% available energy, while the nodes on path Y hold 80%, 70% and 20%.]

IV. ADDRESSING SCHEME

The nodes of the WSN organize themselves in α levels depending on either their location or the availability of addresses in the scanned nodes. In this case, α is equal to five. The maximum number of children a parent may have and α are static design parameters, which have to be determined before the deployment of the WSN. These parameters are implicit in the short address. Explaining how they are implicit is easier if we define a dot-decimal notation like the one used in TCP/IP (see Fig. 6).

The 16 bits of the short address are divided into α − 1 groups. Each group of bits is written in binary, separated by dots. Each one is associated with a hierarchical level of the topological tree. The first is associated with Level 1, the second with Level 2, and so on.

[Fig. 6. An illustration of a short address in binary, dot-decimal notation, and hexadecimal.]
Level 0 is occupied by the PAN coordinator, whose address is statically assigned (0.0.0.0). A node of level N with an address A_N assigns to its children addresses whose first N groups of bits are equal to those of A_N, differing only in the (N + 1)-th group, with zeros in the rest of the groups.
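For illustration, a minimal sketch of this assignment rule (our code, assuming α = 5, i.e. four 4-bit groups in the 16-bit short address; names like child_address are ours):

```python
GROUPS, BITS = 4, 4            # alpha = 5 -> 16 bits in 4 groups of 4 bits

def get_group(addr, i):        # i = 1 is the most significant group
    return (addr >> (BITS * (GROUPS - i))) & 0xF

def set_group(addr, i, value):
    shift = BITS * (GROUPS - i)
    return (addr & ~(0xF << shift)) | (value << shift)

def node_level(addr):
    """The level is implicit: the number of non-zero groups."""
    return sum(1 for i in range(1, GROUPS + 1) if get_group(addr, i) != 0)

def child_address(parent, child_no):
    """A level-N parent assigns children addresses differing only
    in group N+1 (child_no = 1..15), with zeros in the rest."""
    return set_group(parent, node_level(parent) + 1, child_no)

def dotted(addr):
    return ".".join(str(get_group(addr, i)) for i in range(1, GROUPS + 1))

pan = 0x0000                          # PAN coordinator: 0.0.0.0
n1 = child_address(pan, 1)            # 1.0.0.0
n11 = child_address(n1, 1)            # 1.1.0.0
print(dotted(n11), node_level(n11))   # "1.1.0.0" 2
```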
A large number of papers have been published in recent years dealing with the efficient management of addresses in WSNs. In [4] an addressing technique for tree topologies is described, where all the addresses are generated by the parent node, have 16 bits, and are assigned according to the procedure defined in IEEE 802.15.4.
Fig. 7 shows an example of address assignment. The 2nd-level node with address 1.1.0.0 has received the first address of the pool of its parent, which has address 1.0.0.0. All the children of this node have the same prefix, 1.0.0.0; they only differ in the second group of bits, from 1.1.0.0 to 1.15.0.0. As can be noticed, since the level of the nodes is 2, only the first 2 groups of bits are not zero. Therefore, the level of a node is implicit in its address. In fact, it can be calculated as the number of groups of bits that are not zero.

[Fig. 7. Example of address assignment.]

The level mask (LM) of Level N (LMN) is defined as a short address with the bits of the first N groups equal to 1 and the rest of them equal to zero. It is possible to implement lightweight algorithms to route packets and calculate the level of a node in the tree by using this LM. Fig. 8 and Fig. 9 show the pseudo code of those algorithms. The routing problem is solved when it is known: whether the packet has to be routed or processed, the address of the next hop (NH), and the transmission mode (TM).

[Fig. 8. Pseudo code of the routing algorithm.]

[Fig. 9. Pseudo code of the algorithm to find out the depth of a node using its address as input parameter.]
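A sketch of the mask-based computations (our reconstruction of the idea, reusing the 4-bit-group layout assumed above; the paper's exact pseudo code is in Figs. 8 and 9): with the level mask LM_N a router checks whether the destination lies in its subtree, forwarding down to the matching child if so and up to its parent otherwise.

```python
GROUPS, BITS = 4, 4                 # alpha = 5: four 4-bit groups

def level_mask(n):                  # LM_N: bits of the first n groups set
    mask = 0
    for i in range(n):
        mask |= 0xF << (BITS * (GROUPS - 1 - i))
    return mask

def next_hop(my_addr, my_level, dest):
    """Route down if dest is in my subtree, otherwise up to my parent."""
    lm = level_mask(my_level)
    if (dest & lm) == my_addr:                 # dest shares my prefix
        if dest == my_addr:
            return None                        # packet is for me: process
        child_mask = level_mask(my_level + 1)
        return dest & child_mask               # child on the way down
    return my_addr & level_mask(my_level - 1)  # my parent (my group zeroed)

# Node 1.1.0.0 (level 2) receives a packet for 2.1.0.0:
print(hex(next_hop(0x1100, 2, 0x2100)))       # 0x1000 -> its parent 1.0.0.0
```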
In Fig. 10, we can observe an example of routing, in which the node 1.1.1.0 sends a packet to the node 2.1.0.0. The complexity of the routing algorithm is only 3 bitwise operations and 3 assignments. In the same way, the level of a node can be obtained with α bitwise operations.

[Fig. 10. Example of routing of a packet across the network.]

V. CONCLUSION

Wireless sensor networks have received significant attention recently due to a wide range of compelling potential applications, such as traffic monitoring, intelligent control systems and digital surveillance of battlefields. In sensor networks, a large number of small, inexpensive, battery-powered sensors are densely deployed in system environments to capture the status of interest and collect useful information from their surroundings. Fairness in the energy consumption of network nodes has a direct effect on the network lifetime. In this paper, different routing algorithms for wireless sensor networks are discussed. The main difference between the algorithms discussed in this paper is their own mechanism to provide fairness in energy consumption. Using other topologies for wireless sensor networks is also applicable; each of those topologies is efficient for an individual application. For future work, evaluating other topologies for wireless sensor networks is suggested.

REFERENCES

[1] Tomasz Augustynowicz, "ZigBee IEEE 802.15.4". URL: http://www.cs.tut.fi/kurssit/8304700/sem7talkl.pdf
[2] J. N. Al-Karaki et al., "Routing Techniques in Wireless Sensor Networks: A Survey", IEEE Wireless Communications, Dec. 2004.
[3] J. Choe et al., "EADD: Energy Aware Directed Diffusion for Wireless Sensor Networks", International Symposium on Parallel and Distributed Processing with Applications, 2008.
[4] M. A. Lopez-Gomez et al., "A Lightweight and Energy-Efficient Architecture for Wireless Sensor Networks", IEEE Transactions on Consumer Electronics, Vol. 55, No. 3, August 2009.
Identification of Nonlinear Systems Using Sliding Mode Based Adaptive Learning with Augmented Weight Vector
B. N. Sahu, Asst. Prof., ITER, S.O.A. University, Bhubaneswar, email: [email protected]
P. K. Nayak, Asst. Prof., SIET, Dhenkanal, email: [email protected]
Abstract- This paper presents a sliding mode control approach for the identification of a nonlinear system. The sliding mode control approach is proposed for the synthesis of an adaptive learning algorithm in a neuron whose weights are constituted by an augmented weight vector. The approach is shown to exhibit robustness characteristics and a fast convergence property. A simulation example dealing with applications of the proposed algorithm is given.
Keywords: System Identification, Neural Network, Sliding Mode, Karhunen-Loeve Transform, Nonlinear Dynamic System.

I. INTRODUCTION

The system identification process is used to identify an unknown system, such as the response of an unknown communications channel or the frequency response of an auditorium, to pick fairly divergent applications. In this article the continuous-time sliding mode control approach for the adaptation of time-varying neuron weights is briefly revisited. A sliding mode control strategy is proposed for the synthesis of an adaptive learning algorithm in a neuron whose weights are constituted by an augmented weight vector, which is used to identify a nonlinear system.

II. ARCHITECTURE OF THE SYSTEM

This model takes into consideration only one neuron, in which the traditional adjustable weights are substituted by first-order, linear, dynamic filters. It is described as below:

$\dot{y}_i(n) = a_i(n)\,y_i(n) + k_i(n)\,x_i(n), \quad i = 1, 2, \ldots, n$   (1)

where the time-varying scalar functions $a_i(n)$ and $k_i(n)$, $i = 1, 2, \ldots, n$, play the role of adjustable weight parameters. The input $x_i$ is assumed to possess bounded time derivatives, and Fig. 1 represents the dynamical-filter weight neuron model.

[Figure 1. Neuron model.]

The parameter vectors $a(n)$ and $k(n)$ are

$a(n) = \mathrm{col}(a_1(n), a_2(n), \ldots, a_n(n)), \quad k(n) = \mathrm{col}(k_1(n), k_2(n), \ldots, k_n(n))$   (2)

The neuron output is obtained as

$\hat{y}(n) = \sum_{i=1}^{n} y_i(n)$   (3)

The learning error $e(n)$ is given by

$e(n) = \hat{y}(n) - y_d(n)$   (4)

where $y_d(n)$ is the desired output and is bounded; its derivative is also assumed to be bounded. Taking the derivative of the error,

$\dot{e}(n) = \sum_{i=1}^{n} \left[ a_i(n)\,y_i(n) + k_i(n)\,x_i(n) \right] - \dot{y}_d(n) = \begin{bmatrix} a(n) & k(n) \end{bmatrix} \begin{bmatrix} y(n) \\ x(n) \end{bmatrix} - \dot{y}_d(n)$   (5)

The desired error dynamics are

$\dot{e}(n) = -W_1\,\mathrm{sign}(e(n)) - W_2\,e(n)$   (6)

To satisfy equation (6), the $a(n)$ and $k(n)$ parameters are chosen as

$a(n) = \frac{\dot{y}_d - W_1\,\mathrm{sign}(e(n)) - W_2\,e(n)}{\|x(n)\|^2 + \|y(n)\|^2}\; y(n)$   (7)

$k(n) = \frac{\dot{y}_d - W_1\,\mathrm{sign}(e(n)) - W_2\,e(n)}{\|x(n)\|^2 + \|y(n)\|^2}\; x(n)$   (8)
The sliding mode condition necessitates that W1 and W2 are positive quantities, chosen in such a way that the tracking error e(n) converges to zero.
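To make the adaptation law concrete, here is a minimal numerical sketch (our illustration, not the authors' code), assuming a single filter weight (n = 1), an Euler discretization with step h, and arbitrary bounded test signals:

```python
import numpy as np

h, W1, W2 = 0.01, 2.0, 5.0         # step size and sliding-mode gains
t = np.arange(0.0, 10.0, h)
x = np.sin(t)                      # bounded input with bounded derivative
y_d = 0.5 * np.cos(t)              # bounded desired output
y_d_dot = np.gradient(y_d, h)      # its (numerical) time derivative

y = 0.0                            # filter state y_1
err = []
for k in range(len(t)):
    e = y - y_d[k]                 # learning error, eq. (4)
    denom = x[k] ** 2 + y ** 2 + 1e-9   # ||x||^2 + ||y||^2 (regularized)
    common = y_d_dot[k] - W1 * np.sign(e) - W2 * e
    a = common * y / denom         # eq. (7)
    kk = common * x[k] / denom     # eq. (8)
    y += h * (a * y + kk * x[k])   # filter dynamics, eq. (1)
    err.append(e)

print("final |e| =", abs(err[-1])) # driven toward zero by eq. (6)
```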
III. ROBUST LEARNING ALGORITHM

To make the learning algorithm robust, an unmeasurable norm-bounded perturbation vector $\eta(n)$ is added to the neuron input vector $x(n)$. Thus the new vector $\xi(n)$ becomes

$\xi_i(n) = x_i(n) + \eta_i(n), \qquad \|\xi(n)\|^2 = \xi_1^2(n) + \xi_2^2(n) + \cdots + \xi_n^2(n), \qquad \|\xi(n)\| \le V_\xi$   (9)

The adaptation parameters $a(n)$ and $k(n)$ thus become

$a(n) = \frac{\dot{y}_d - W_1\,\mathrm{sign}(e(n)) - W_2\,e(n)}{\|\xi(n)\|^2 + \|y(n)\|^2}\; y(n)$   (10)

$k(n) = \frac{\dot{y}_d - W_1\,\mathrm{sign}(e(n)) - W_2\,e(n)}{\|\xi(n)\|^2 + \|y(n)\|^2}\; \xi(n)$   (11)

[Figure 2. Duffing system without noise: (a) desired vs. estimated signal; (b) estimation error.]

IV. AN ILLUSTRATIVE SIMULATION EXAMPLE
The example system to be identified is described by the following difference equations:
$u(t) = 0.6\sin(\alpha \pi t) + 0.3\sin(3\pi t) + 0.1\sin(\alpha t)$

$f[\hat{y}(t), \hat{y}(t-1)] = \hat{y}(t)\,\hat{y}(t-1)\,[\hat{y}(t) - 2.5]$

$g[\hat{y}(t), \hat{y}(t-1)] = 1 + \hat{y}^2(t) + \hat{y}^2(t-1)$

$\hat{y}(t+1) = \frac{f[\hat{y}(t), \hat{y}(t-1)]}{g[\hat{y}(t), \hat{y}(t-1)]} + u(t)$

$y(t+1) = \Re[\hat{y}(t+1)]$
[Figure 3. Duffing system with noise: (a) desired vs. estimated signal; (b) estimation error.]
where $\Re[\cdot]$ denotes the real part and $\alpha$ is a random variable uniformly distributed in the interval [2, 5] with $E\{\alpha\} = 3.5$, with N = 50 and L = 200, i.e. n = 1, 2, 3, …, 200. From Figures 2 and 3 we observe that the result using the proposed algorithm is very good.
V. CONCLUSIONS

The paper presents a new adaptive digital filter for nonlinear dynamic system identification. The filter is based on a first-order filter-weight neuron architecture, where the weights are updated by a derivative-based sliding mode adaptive learning algorithm. The sliding mode learning technique ensures robustness against uncertainty and parametric variations of the nonlinear dynamic system. The examples presented in this paper clearly demonstrate the superiority of this new approach for dynamic system identification in real time for practical systems or plants.
REFERENCES

[1] Claudio Turchetti, Giorgio Biagetti, Francesco Gianfelici, Paolo Crippa, Nonlinear System Identification: An Effective Framework Based on the Karhunen-Loeve Transform, IEEE Trans. Signal Process., 57(2) (2009) 536–550.
[2] A. Carini, G. L. Sicuranza, Optimal regularization parameter of the multichannel filtered-X affine projection algorithm, IEEE Trans. Signal Process., 55(10) (2007) 4482–4895.
[3] T. Ogunfunmi, Adaptive Nonlinear System Identification: The Volterra and Wiener Model Approaches, Springer, New York, 2007.
[4] S. Haykin, Adaptive Filter Theory, 2nd ed., Prentice Hall, 1991.
[5] A. Carini, G. L. Sicuranza, Transient and steady-state analysis of filtered-X affine projection algorithms, IEEE Trans. Signal Process., 54(2) (2006) 665–678.
[6] H. Sira-Ramirez, E. Colina-Morles, F. Rivas-Echeverria, Sliding mode-based adaptive learning in dynamical-filter-weights neurons, Int. Journal of Control, 73(8) (2000) 678–685.
[7] K. Narendra, K. Parthasarathy, Identification and control of dynamical systems using neural networks, IEEE Transactions on Neural Networks, 1(1) (1990) 4–27.
FACTORS INFLUENCING THE INTENTION TO USE WIRELESS TECHNOLOGY IN HEALTHCARE: AN INDIAN STUDY
Dillip Kumar Mishra*, Lokanath Sarangi** & B. D. Pradhan***
*, ** NM Institute of Engineering & Technology, Bhubaneswar
*** Synergy Institute of Engineering and Technology, Dhenkanal, Odisha
*[email protected], **[email protected]
Abstract - This study reports the factors that influence the intention to use wireless technology in the Indian healthcare setting. Using both qualitative and quantitative techniques, physicians as well as health professionals from the Indian medical systems were approached for data collection. The qualitative data were used as a basis to develop a quantitative instrument. Both types of data (qualitative and quantitative) established that technology factors, clinical factors, administrative factors and communication factors play a crucial role in determining the intention to use wireless technology in Indian healthcare. These factors were further validated using a second-order regression model to ensure their validity and reliability. The major contribution of this paper is identifying a number of factors influencing the intention and statistically validating such factors, perhaps for the first time in the Indian healthcare context.
Keywords: PLS Model, Healthcare Technology, Wireless Technology

I. INTRODUCTION

Latest trends in the healthcare sector include the design of more flexible and efficient service provider frameworks. In order to accomplish this service provider framework, wireless technology is increasingly being used in healthcare, specifically in the clinical domain for data management. Even though the future of wireless devices and usability is promising, adoption of these devices is still in its infant stages due to the complex and critical nature of the healthcare environment [1]. However, there is limited knowledge and empirical research in regard to the effectiveness and adoption of wireless technology in healthcare systems. [2], after an evaluation of about fifteen articles in the combined domain of technology and health, asserted that current technology acceptance models are only partially adequate and applicable in the professional contexts of physicians (p. 22). A profound implication of this assertion is that the relationship of wireless technology adoption, strategy, implementation and environmental issues pertaining to the clinical domains is yet to be established. This notion prompted this research with the following research question:
• What are the clinical influences of wireless technology in healthcare systems in India?
I employed a qualitative method to extract initial themes from healthcare stakeholders and then derived a quantitative instrument based on this qualitative data. This is explained in the next section – methodology.

II. METHODOLOGY

The research question dictates the need for quantitative research methods, while the behavioural component of the same investigation dictates qualitative research methods. In essence, to answer the research question, both methods are required. Qualitative methods help to understand the domain and the context in a practical sense, and quantitative methods assist in generalising our findings. Within this method, I used a mixed-method approach, where the initial exploratory phase is conducted using a qualitative approach and the second main phase is conducted using a quantitative approach.

A. Data Collection & Results

As argued, for the first stage of this research a qualitative approach was used to collect initial sets of themes for the adoption of wireless technology by physicians in the Indian healthcare systems. For this purpose, the first stage of the data collection concentrated on randomly identifying 30 physicians from India with some form of wireless technology already in use. The physicians were also selected based on their wireless technology awareness or working experience. They were drawn from both private and government hospitals. A set of initial themes was extracted from these interviews for a quantitative instrument. The qualitative analysis indicated that there is a clear set of drivers and inhibitors emerging from the interviews. The driver themes were extracted when there was a positive statement and the inhibitor themes when there was a negative sentiment. Therefore, it appears that positive influences drive the technology adoption and negative influences inhibit it.

B. Quantitative Data Collection & Analysis

The survey was then distributed to over 300 physicians in India. The sample was randomly chosen from the telephone book. A total of 200 surveys were received. The survey responses were then transcribed into a spreadsheet file. The reliability test returned a Cronbach alpha value of 0.965 for the instrument, indicating high reliability [3]. We ran this test because the instrument was generated from the interview data and, hence, it was necessary to establish statistical reliability. In addition, reliability tests were also
run for three factor groupings, namely, drivers, inhibitors of adoption and other technology factors. The reliability tests returned values of 0.941, 0.447 and 0.536, respectively, indicating that the data were suitable for further analysis and testing. A further factor analysis also returned a set of new factors; these were grouped under 'clinical usefulness' and reported in the following table.
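For reference, a minimal sketch of the Cronbach's alpha computation behind such reliability figures (our illustration with made-up Likert data; the study's raw survey data is not reproduced here):

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of survey scores.
    alpha = K/(K-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses (4 respondents, 3 items):
scores = [[5, 4, 5],
          [4, 4, 4],
          [2, 3, 2],
          [3, 3, 3]]
print(round(cronbach_alpha(scores), 3))   # close to 1 = high reliability
```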
The above factors were then tested using the Partial Least Squares (PLS) program in order to verify their statistical granularity. The following table is extracted from PLS to depict the weights of each construct. The study supported that the clinical usefulness of wireless technology influences technology adoption. It is also noted that R² is significant (0.976); therefore the data explain 97.6% of the variance of clinical usefulness in India.
III. IMPLICATIONS AND CONCLUSIONS

This research has a number of theoretical and practical implications. In order to discover the factors of wireless technology adoption in the health sector in India, traditional adoption models were not used; a ground-up approach was used instead, developing the factors via a qualitative field study. From a theoretical viewpoint it is shown how a ground-up approach can be applied in situations where no traditional model can be applied. This paper details this process. Future research should look into the use of multi-group analysis, and the issue of sample size should also be addressed. This research has reported how ground-up research is undertaken in order to establish factors influencing technology adoption.

REFERENCES

[1] A. Crow, "Defining the balance for now and the future - Clinicians perspective of implementing a care coordination information systems management," presented at HIC 2004, Brisbane, Australia, 2004.
[2] T. A. M. Spil and R. W. Schuring, E-Health Systems Diffusion and Use. Hershey: Idea Group Publishing, 2006.
[3] W. Zikmund, Business Research Methods, International Ed. Orlando, FL: The Dryden Press, 1994.
UWB COMMUNICATIONS – A STANDARDS WAR
Dillip Kumar Mishra*, Lokanath Sarangi** & Narendra Kumar Pattanaik***
*, ** Asst. Professor, NM Institute of Engineering & Technology
*** Sr. Lecturer, SIET, Dhenkanal, Odisha
*[email protected], **[email protected]
Abstract - Ultra Wideband radio communications is an
emerging technology for very high speed wireless
communications, especially suited for data connections
between consumer devices such as computer peripherals,
laptops, PDAs, home theater equipment, digital cameras and
portable audio devices. Key advantages of UWB include
unprecedentedly high wireless data transfer speeds (ranging
from 100 Mb/s to 500 Mb/s or more), low power
consumption, very high spatial capacity of wireless data
transmission, and sophisticated usage of radio frequencies
that allows UWB to coexist with other simultaneously
operating RF systems. At present, two competing proposals
are being presented as candidates for a UWB
communications standard under IEEE standardizing
process. The IEEE 801.15.3a Task Group has been trying to
reach a decision upon a standard for a UWB Physical Layer
(PHY) specification, but the opposing parties, Motorola and
the MBOA Alliance (consisting of more than 90 companies
as of April 2004), have been unable to agree. It is unclear if
the IEEE process will ever reach its goal as both parties may
eventually start launching products hoping their products
will eventually emerge as de facto standards.
Key Words: UWB, ultra wideband communications, MBOA Alliance, Motorola, IEEE 802.15.3a, standards war.

I. INTRODUCTION

"As opposed to traditional narrowband radios, Ultra-Wideband (UWB) is a wireless digital communication system exchanging data using short duration pulses. The complexity of the analog front-end in UWB is drastically reduced due to its intrinsic baseband transmission. Based on this simplification and the high spreading gain it possesses, UWB promises low-cost implementation with fine time resolution and high throughput at short distances without interfering with other existing wireless communication systems." (Stanley Wang, Berkeley University UWB Group)

As the above quote suggests, impulse radio UWB is fundamentally different from what is usually thought of as RF communications. Instead of using a carrier frequency, as traditional systems like FM radio or GSM networks do, the UWB impulse radio technology is based on sending and receiving carrierless radio impulses using extremely accurate timing (Win and Scholtz, 1998). The radio impulses are transmitted in sub-nanosecond intervals, which inherently leads to spectrally wide signals and a very accurate spatial resolution, which can be taken advantage of in positioning applications.

Very fast impulse rates enable high connection speeds, up to 500 Mb/s or even 1 Gb/s over short distances. Because UWB signals occupy a very broad radio frequency spectrum, low transmission power must be used in order not to interfere with existing RF systems, such as GPS. The practical approach is to set UWB power levels so low that the signals cannot be distinguished from external noise by traditional RF systems operating simultaneously in the overlapping frequencies. UWB is not a new idea: it actually dates back to the 1980's (Foerster et al, 2001). However, it has been used mainly in radar-based applications, since the timing and synchronization requirements of UWB communications have been too challenging for producing reasonable-cost consumer products.

However, recent developments in semiconductor technology have made consumer applications possible, and the regulatory steps taken in the US, namely by the Federal Communications Commission in 2002, have speeded up industry efforts aiming at product launches. During the last 12 months the efforts of the industry have been aimed at designing the best possible UWB solution for consumer devices. Everything started out as impulse radio, but after the FCC published the regulations for commercial UWB devices, the field has split in two: an impulse radio UWB approach backed by Motorola, and a multi-band OFDM solution backed by a 90-company industry alliance, MBOA (Multi-Band OFDM Alliance). The two opposing standard proposals have been presented (IEEE, 2004), and now both parties continue to develop their own products, as well as participate in the formal standardizing process. Full-fledged standard proposals can be expected from either party during the next couple of months. From a communications point of view UWB is not a technology for cellular networks; instead it can be seen as a complementing technology for WLANs. However, the
most prominent application field for UWB is WPAN, wireless personal area networks, or to put it in more practical terms: cable replacement. This application field in particular lets UWB excel in what it does best: very high bandwidth, short- to medium-range wireless connectivity at very low cost and very low power consumption. These features are, at the end of the day, the key features of both UWB PHY proposals, even though they differ quite significantly from each other technically. In the above table UWB is compared to WLAN and Bluetooth, its closest parallels in wireless communication. Probably the biggest advantages of UWB compared to Bluetooth or 802.11x are the capability to reach 500 Mb/s data transfer speeds and a superior mW/Mbps ratio. In practice UWB devices would consume about the same amount of power as Bluetooth devices but with a hundredfold data transfer speed. Wired solutions in the application field of UWB include USB (Universal Serial Bus), USB 2.0, and Firewire (IEEE 1394). Not coincidentally, one of the higher data speeds specified for MBOA's UWB is 480 Mb/s, the exact speed of USB 2.0.

II. BEFORE THE WAR

The UWB communications technology development started gaining speed at the end of the 1990's as companies such as Discrete Time Communications and Xtreme Spectrum were founded (1996 and 1998, respectively). Each of the early developers was experimenting with the technology and presenting new results and product demos from time to time, but a real boost to UWB R&D was given by the FCC in February 2002, when the FCC published new regulations under which it became possible to design UWB products for the commercial market. As things proceeded, several organizations and working groups became associated with UWB-related issues. For example, the WiMedia Alliance (www.wimedia.org) was founded to take care of WPAN technology branding by bringing industry players together and providing e.g. compliance testing, hence assuring customers of a reliable and well-known standard, pretty much in the same way as Wi-Fi works in the 802.11 field. The difference here is that WiMedia was founded way before UWB product launches, whereas Wi-Fi came into being years after the initial IEEE work on 802.11 standards. However, from a technical viewpoint, WiMedia works on a middle layer—between, for example, the UWB physical layer (PHY) and a higher level standard such as Wireless USB—and the physical layer is where the battle really goes on.

III. WARS ON STANDARDS

When two rival mutually incompatible technologies struggle to become a de facto or industry standard, the situation can be called a standards war (Shapiro and Varian, 1999). According to Shapiro and Varian these wars can end in a truce (possibly a compromised, common standard), a duopoly (two significant but incompatible solutions prevail in the market), or a battle with an ultimate winner and a loser. In the case of UWB communications technology the battle has been a very heated one for already a year, with two counterparts, Motorola and the MBOA Alliance. According to the taxonomy by Shapiro and Varian (1999), standards wars can be classified as presented in the table below. Both the Motorola proposal and the Multi-Band OFDM proposal define a technology that is by far incompatible with any existing communications devices; hence this is clearly a battle between rival revolutions.

Further, Shapiro and Varian (1999) name seven key assets that are usually decisive in waging a standards war. They are:
1) Control over an installed base of customers
2) Intellectual property rights
3) Ability to innovate
4) First-mover advantages
5) Manufacturing abilities
6) Strength in complements
7) Reputation and brand name

Because we are looking at a case of rival revolutions, asset number 1 does not really apply. Also the IPR issue seems of secondary significance, as both parties own the key rights for their respective technical solutions. However, in the ability to innovate the competing parties differ. Both Motorola and the leading companies in the MBOA Alliance have a very strong reputation in designing and producing new high-quality products, but the MBOA Alliance has an advantage because of its sheer magnitude: with over 90 companies in the alliance today MBOA can be quite confident about its

First-mover advantages bring another slight difference. Motorola's hands are on the technology developed by Xtreme Spectrum, an UWB pioneer now acquired by Motorola, and according to publicly available information the technology itself is mature, practically ready for commercial product launches any day now. The technology promoted by MBOA Alliance has developed

made possible adjusting the UWB traffic according to local radio conditions. With these intentions, six UWB
developer companies formed the Multi-Band Coalition
rapidly during the last year, and the former time gap
(MBC) in January 2003.
compared to Motorola has narrowed. However, as a 90-
However, already in March 2003 Texas Instruments
company alliance today, MBOA is not likely to be as
presented a radically enhanced radio implementation, the
agile in its moves, giving Motorola a possibility to try and
Multi-Band OFDM, which integrated UWB with the
launch consumer products before MBOA. The last three
proven Orthogonal Frequency Division Multiplexing, also
assets, however, seem to be strongly in favor of the
used in ADSL, DVB, 802.11a, VDSL, and many other
MBOA Alliance: Manufacturing abilities, Strength in
current radio communication technologies. (At the same
complements, and Reputation and brand name. With the
time, March 2003, Motorola teamed with Extreme
huge industry backing MBOA can produce a vast variety
Spectrum in backing the opposing physical layer proposal
of UWB-enabled products compared to Motorola, and
for UWB which uses Direct Sequence Code Division
especially in the strength in complements the alliance
Multiple Access, or DS-CDMA.) The battle for a standard
seems
Intel
in IEEE continued for the rest of the year 2003. Many
(motherboards, processors), Nokia (mobile equipment),
companies joined the MBOA, but Motorola still managed
Samsung, Panasonic, Philips, Texas Instruments, Fujitsu,
to restrain the alliance from gaining the necessary 75 % of
NEC,
(consumer
all votes in the IEEE process, despite voting in numerous
electronics, computers, peripherals) onboard, the MBOA
meetings. This lead the alliance into forming a new
Alliance companies clearly have the power to introduce a
special interest group (SIG) in January 2004, in which
dominant design. capability to bring about successful new
MBOA tries to put its proposal forward without a formal
products in all application fields of UWB technology.
decision from IEEE.
invincible.
Toshiba,
Having
and
companies
Hewlett-Packard
like
IV.
CASE MBOA
The MBOA Alliance (Multi-Band OFDM Alliance) was
V.
Motorola, after buying all assets of Xtreme Spectrum,
formed in June 2003 but the story of the alliance dates
continues to support its own proposal for an UWB PHY,
back to October 2002 when several UWB developer firms
whereas MBOA consists today of over 90 member
started to discuss multi-band approaches to the UWB
companies such as Intel, Microsoft, Nokia, Samsung,
development. However, the bandwagon started originally
Philips, Panasonic, Hewlett Packard, Toshiba, NEC,
rolling already in February 2002 when FCC released its
Fujitsu, Sharp, Mitsubishi, Olympus, Realtek, TDK,
first report and order on UWB regulations (FCC, 2002).
Texas Instruments, VIA and so on.
The regulations were read and understood by several
As the latest actions, Motorola has announced a very
UWB developers—and this time from a slightly different
liberal IPR policy if its proposal becomes selected, to
viewpoint. Keeping the goal in producing an efficient
which all MBOA members have responded by agreeing to
solution to the actual problem: very high speed wireless
the IEEE policy and providing any IP adopted as part of
connectivity with low cost and low power consumption
the 802.15.3a standard specification under Reasonable
(enabling cable replacement), these companies put aside
the
traditional
impulse
radio
approach
to
UWB BATTLE FIELD TODAY
and Non-Discriminatory (RAND) terms.
UWB
In February 2004 Intel unveiled its plans to support
communications, and instead came up with a multi-band
MBOA proposal as a building block for Wireless USB or
approach. In this approach, parts of the 7.5 GHz wide free
WUSB, delivering same speeds as USB 2.0—480 Mb/s—
spectrum appointed by FCC were divided into more than
over
500 MHz wide slices. This allowed two advantages. First,
distances
up
to
10
meters.
(Ultra-
widebandplanet.com, 2004). Soon after this Motorola
separate 500 MHz wide bands were much simpler to
came out with a compromise proposal for the IEEE
implement with current CMOS compared to several GHz
standardizing process, according to which both MBOA
wide impulse radio signals, and second, these 500 MHz
and Motorola versions of UWB physical layer could
bands could be dynamically turned on and off which
69
coexist in the same standard.
VI.
CONCLUSIONS
Since the late 1990's and early 2000's, when impulse radio UWB was a very hot topic hyped with the most laudatory technical superlatives, a lot has changed. The FCC ruling on UWB in February 2002 seems to have been a great divider, after which the technology development gained a lot of speed and took entirely new directions. Although the FCC ruling was made in the spirit of impulse radio UWB (having special emphasis on several impulse radio applications such as ground penetrating radar and super accurate positioning), it has turned out that instead of giving rise to impulse radio UWB products, the FCC regulations just opened up an unprecedentedly wide free spectrum slot (of more than 7 GHz, ranging from approximately 3 GHz to 10 GHz), which now is likely to become used for multi-band OFDM Wireless PAN networking. Despite this partly unintentional and very complex evolution path, UWB communications technology is likely to have a very bright future. The trend of "unwiring" is very strong in all sectors of consumer electronics, and as the amount of personal data keeps growing (due to the introduction of digital cameras, camera phones and camcorders as well as the digitalization of TV content), the demand for very high speed wireless data transmission is growing. Besides PC-to-media-device connections and cable replacement in ICT installations, UWB will also be needed in, for example, home theaters, where high bandwidth signals must be transmitted from source devices to the video projector and multiple speakers, all situated in different corners of the room.
The standards war itself seems almost over. When it comes to PC-related cable replacement solutions and things like Wireless USB or Wireless Firewire, the multi-band OFDM solution seems the inevitable choice. The MBOA Alliance has the market power and the technical expertise to pull it off, which makes it merely a matter of time. According to their press releases, consumer products can be expected during the year 2005, regardless of the progress in IEEE. However, this does not mean that Motorola's UWB proposal will die out. The latest version of DS-UWB by Freescale Semiconductor (formerly Motorola's Semiconductor Products Sector, now spun off) claims to provide 1.3 Gb/s with a two meter range and a lot higher efficiency than MBOA's proposal (Meade, 2004). It may well be that, after all, the market will get two different UWB versions that both become successful. This would require that the application field, product branding, and usefulness/usability issues eventually produce two very different technologies for different purposes, although at one point they competed for the same standard.
REFERENCES
[1] FCC, 2002. Revision of Part 15 of the Commission's Rules Regarding Ultra-Wideband Transmission Systems.
[2] Foerster, J. et al., 2001. Ultra-Wideband Technology for Short- or Medium-Range Wireless Communications, Intel.
[3] IEEE, 2004. 802.15 WPAN High Rate Alternative PHY Task Group 3a (TG3a) website.
[4] Multi-band OFDM Alliance, 2004. MBOA Frequently Asked Questions.
[5] Shapiro, C. and Varian, H., 1999. Information Rules: A Strategic Guide to the Network Economy, Harvard Business School Press.
[6] Ultrawidebandplanet.com / Lipset, V., 2004. Intel Backs UWB for Wireless USB, 18 February 2004.
[7] Win, M. and Scholtz, R., 1998. Impulse Radio - How it Works, IEEE Communications Letters, vol. 2, pp. 36-38.
Fig 1: Wars on Standards (timeline)
Oct 02: Four UWB developers discuss Multi-Band approaches
Jan 03: Six UWB developers form the Multi-Band Coalition (MBC)
Mar 03: Majority of IEEE proposals based on Multi-Band; Texas Instruments presents MB-OFDM; Motorola and Xtreme Spectrum team on UWB
Jun 03: MBC, TI, Sony, Samsung and others merge: the MB-OFDM Alliance (MBOA) is formed
Jul 03: IEEE down-selects to MB-OFDM, which obtains 60 % of votes
Nov 03: 35 companies in MBOA; Motorola buys Xtreme Spectrum; Xtreme promises royalty-free IPR; IEEE voting: again no result
Jan 04: MBOA forms a new SIG (Special Interest Group) outside IEEE; Motorola: a compromise proposal including both PHYs
Feb 04: Intel backs MBOA-based UWB for Wireless USB
Mar 04: 90 companies in MBOA
Table 1: Wireless PAN and Wireless LAN communication technologies in comparison

                            Bluetooth     802.11b                            802.11a                           UWB
Frequency band              2.4 GHz       2.4 GHz                            5 GHz                             3 - 10 GHz
Typical carrier rate        1 Mb/s        5.5 Mb/s (max. 11 Mb/s)            36 Mb/s (max. 54 Mb/s)            100 - 500 Mb/s
Outdoor range               10 - 100 m    105 m (11 Mb/s) to 325 m (1 Mb/s)  30 m (54 Mb/s) to 305 m (6 Mb/s)  appr. 10 m to 50 m
Indoor range                10 m          30 m (11 Mb/s) to 60 m (1 Mb/s)    12 m (54 Mb/s) to 91 m (6 Mb/s)   appr. 10 m
Availability                Now           Now                                Now                               2005?
Spatial capacity (kbps/m2)  30            1                                  83                                1000
Table 2: Types of Standards Wars (Shapiro and Varian, 1999)

                               Rival technology compatible    Rival technology incompatible
Your technology compatible     Rival evolutions               Evolution versus revolution
Your technology incompatible   Revolution versus evolution    Rival revolutions
Application of Multimedia Communication System: Disaster-affected Areas
Madhumita Dash; Professor; ABIT; [email protected]
R N Panda; Asst. Professor; DMSS; Durgapur (WB)
Leena Samantaray; Professor; ABIT
Abstract- In this paper, it is outlined that multimedia communication has become a major theme in today's information technology, merging the practices of communications, computing and information processing into an interdisciplinary field. The challenge of multimedia communications is to provide services that integrate text, sound, image and video information, and to do it in a way that preserves the ease of use and interactivity. This paper also describes an emergency network platform based on a hybrid combination of mobile ad hoc networks (MANET) and a satellite IP network operating with the conventional terrestrial Internet. It is designed for collaborative simultaneous emergency response operations deployed in a number of disaster-affected areas. This paper involves such multidisciplinary research areas as MANET routing, peer-to-peer computing, sensor networks and face recognition. Here a brief description of the elements of multimedia systems is presented. User and network requirements are discussed together with the packet transfer concept. An overview of multimedia communication standards is given. The issues concerning multimedia digital subscriber lines are outlined, together with multimedia over wireless, mobile and broadcasting networks as well as digital TV infrastructure for interactive multimedia services. This paper explains the design of the emergency network called DUMBONET and our emergency response application system. The paper also describes our field experience and identifies several challenges to overcome to improve our system.
Keywords: disaster emergency response communication, mobile ad hoc network (MANET), optimized link state routing (OLSR), peer-to-peer ubiquitous computing, face recognition, sensor network, multimedia communication, multimedia, standard, network, communication, system, user, requirement, asynchronous transfer mode, terminal, Internet, protocol
I.
INTRODUCTION
A paradigm shift is underway in the Internet. Networked devices, formerly situated on the desks of scientists and businesses, are now consumer parts and provide information, communication and entertainment. Multimedia and multimedia communication can be globally viewed as a hierarchical system. The multimedia software and applications provide a direct interactive environment for users. Multimedia communications is the field referring to the representation, storage, retrieval and dissemination of machine-processable information expressed in multiple media, such as text, image, graphics, speech, audio, video, animation, handwriting and data files. With the advent of high capacity storage devices, powerful and yet economical computer workstations, and high speed integrated services digital networks, providing a variety of multimedia communications services is becoming not only technically but also economically feasible.
In disaster-struck fields where traditional communication services such as fixed or mobile telephone and local internet access are completely inoperable, a fast-deploying multimedia communication system that a number of emergency rescue teams can rely on to collaborate with a distant command headquarters will prove very useful in saving the lives of victims. DUMBONET is a multimedia emergency communication network which aims at situations where there is very little, severely disabled, or no communication infrastructure available; its objectives are to provide a collection of post-disaster emergency communication tools which can be quickly and reasonably deployed for rescue activities, and to enable multimedia communications (photos, videos, texts, audio). Hence DUMBONET is designed to provide a reliable communication infrastructure in emergency situations. Let us assume that a number of isolated disaster-affected sites (for example, different seashore areas affected by a tsunami) each comprise a local network.
A headquarters is considered a special site having the privilege of talking to every site on the net and sometimes broadcasting messages to all sites. A disaster site of DUMBONET can maintain a communication channel with the headquarters while possibly opening up communication channels with
other selected peering sites on the net based on demand. At each disaster site, as traditional communication infrastructure is no longer available, we shall bring in mobile devices capable of creating a self-organizing, self-resilient mobile ad-hoc network (MANET) that permits multimedia communication among the devices. We also need to provide multimedia communication among different sites and with the different rescue teams and the command headquarters. Having multimedia Internet capabilities allows rescuers to collaborate more effectively by sending and receiving rich and crucial multimedia information. Rescuers may also consult with case experts through the Internet to gain knowledge necessary for the operation.
II.
THE ARCHITECTURE OF DUMBONET
DUMBONET is a single mobile ad hoc network comprising a number of connected sites, each with a variety of mobile nodes, end systems and link capacities. A node on the net can communicate with any other node belonging to the same site, or with a node at another site some distance away, as well as communicating with a remote headquarters situated on the normal Internet. Within each site, nodes share relatively similar network conditions, while between sites a long-delay satellite link is used to cover the long distance. The headquarters is considered a special site having the privilege of talking to every site on the net and sometimes broadcasting messages to all sites. A normal site of DUMBONET can maintain a communication channel with the headquarters while possibly opening up communication channels with other selected peering sites on the net based on demand. The figure shows an abstract model of DUMBONET. We assume a number of isolated disaster-affected sites and a distant command headquarters. At each disaster site, as traditional communication infrastructure is no longer available, we shall bring in mobile devices capable of creating a self-organizing, self-resilient mobile ad-hoc network (MANET) that permits multimedia communication among the devices. We also need to provide multimedia communication among different sites and with the command headquarters. A highly practical choice is to deploy satellite access, which can restore connectivity in a relatively short amount of time but has a high propagation delay. Our main challenge is to create a single networking domain called 'DUMBONET' that enables effective multimedia communication among the disaster-affected sites and with the command headquarters. DUMBONET consists of heterogeneous networks having different MANET devices and various link types (i.e. WiFi, satellite, and terrestrial) with very different link characteristics (i.e. bandwidth, packet loss pattern, and delay).
To form a MANET, every mobile device is set to use the ad-hoc (peer-to-peer) WiFi mode and to run the Optimized Link State Routing (OLSR) protocol. OLSR is a link state routing protocol, analogous to OSPF, and relies on knowledge of complete topology information at all nodes. The OLSR protocol uses a special mechanism called Multi-Point Relay (MPR) to reduce the number of flooded messages. We used the OLSR v0.4.0 implementation from UniK in all of our devices.
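The MPR idea can be illustrated with a short sketch. The following Python fragment is our own illustration, not code from the deployment: the real olsrd implementation is in C, and the full heuristic in RFC 3626 also weighs node willingness and degree, which are omitted here. The core greedy idea is to choose a small subset of one-hop neighbors that still covers every two-hop neighbor, so that only those relays need to re-broadcast flooded messages.

def select_mprs(one_hop, two_hop, reaches):
    """Greedy multi-point relay (MPR) selection in the spirit of RFC 3626.

    one_hop: set of one-hop neighbors; two_hop: set of strict two-hop
    neighbors; reaches: dict mapping each one-hop neighbor to the subset
    of two_hop it covers. Returns a subset of one_hop covering two_hop.
    """
    mprs, uncovered = set(), set(two_hop)
    # A neighbor that is the only path to some two-hop node is mandatory.
    for node in two_hop:
        only = [n for n in one_hop if node in reaches.get(n, set())]
        if len(only) == 1:
            mprs.add(only[0])
    for m in mprs:
        uncovered -= reaches.get(m, set())
    # Then greedily add the neighbor that covers the most uncovered nodes.
    while uncovered and (one_hop - mprs):
        best = max(one_hop - mprs,
                   key=lambda n: len(reaches.get(n, set()) & uncovered))
        gain = reaches.get(best, set()) & uncovered
        if not gain:
            break  # remaining two-hop nodes are unreachable via one_hop
        mprs.add(best)
        uncovered -= gain
    return mprs

# Toy neighborhood: only the selected relays re-broadcast flooded messages.
cov = {"A": {"t1", "t2"}, "B": {"t2", "t3"}, "C": {"t3"}, "D": {"t4"}}
print(select_mprs(set(cov), {"t1", "t2", "t3", "t4"}, cov))  # e.g. {'A', 'B', 'D'}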
Every mobile device is set to use the ad-hoc (peer-to-peer) WiFi mode and uses the BSSID named "DUMBO". A static IPv4 address from the subnet 192.168.1.0 has been assigned to each laptop, desktop and PDA. To remove the ambiguity of identifying which disaster site a node is in, we maintained some criteria while assigning an IP address to a node. All the nodes in disaster site 1 were assigned IP addresses from the pool below 100, and the other site was assigned IP addresses above 100. The network in the headquarters was assigned IPs above 200. Each MANET communicates with the others using a geostationary satellite, known as IPSTAR, with Ku-band satellite symmetric channels of 500 kbps bandwidth from site 1 to site 2 and 300 kbps bandwidth from site 2 to site 1. Any traffic from a site's transceiver to the headquarters goes to the IPSTAR gateway using the satellite channel, and then from the IPSTAR gateway to the AIT network using the terrestrial network, or vice versa. The IPSTAR architectural design makes all the communication from any IPSTAR transceiver (ground station) route through the IPSTAR gateway (ground station), as shown in Figure 2. As a result, the communication from a transceiver to the traditional Internet requires 1-hop satellite communication, and the communication between two transceivers requires 2-hop satellite communication. IPSTAR has a mobile satellite transceiver which allows us to quickly (within a few hours' time) restore Internet connectivity in the disaster-affected areas; this proved extremely beneficial to the search and rescue operation where traditional terrestrial communication infrastructure is severely disabled.
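As a minimal sketch of the addressing convention just described (the thresholds 100 and 200 come from the text; the function name and the exact boundary handling are our own illustrative assumptions), an application can infer a node's site from the host part of its 192.168.1.0/24 address:

def site_of(ip: str) -> str:
    """Classify a DUMBONET node by the last octet of its 192.168.1.0/24 address.

    Field convention: host part below 100 -> disaster site 1,
    100-200 -> the other disaster site, above 200 -> headquarters."""
    host = int(ip.rsplit(".", 1)[1])
    if host < 100:
        return "site 1"
    if host <= 200:
        return "site 2"
    return "headquarters"

print(site_of("192.168.1.42"), site_of("192.168.1.150"), site_of("192.168.1.201"))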
An emergency and disaster response model is
presented, which makes use of the ambient
intelligence (AmI) technologies to support
communications among participating rescue teams
such as police, fire fighters and ambulance services. The ambient intelligence technologies provide
adaptive and assistive services to users by
assuming a great number of interoperating devices
such as sensors, actuators and other devices
performing storage, processing and communication
of data. Figure 1 describes the scenario. The
hospitals, police cars, ambulances, fire fighters and
medical teams are integrated into a single virtual
team performing disaster management operations.
The system uses body area network (BAN),
personal area network (PAN), mesh network, ad
hoc network, sensor network, cellular network,
terrestrial trunked radio (TETRA) network and
global network as communication means. The
proposed system is a conceptual scenario for future
emergency response communications. It is possible
that some nodes might intercept secret information,
such as patient’s history, or generate fake
information. Privacy and authentication are the key
requirements in this type of scenario for reliable
communications. The access to patient’s history at
some remote hospital from the emergency site also
requires an adaptive access control mechanism.
The data integrity is also crucial as the emergency
related information passing through heterogeneous
networks may also be modified intentionally or
accidentally.
III.
APPLICATIONS OF MULTIMEDIA
COMMUNICATION SYSTEM
We deploy a specially customized multimedia
application that allows every rescuer and a
command headquarter to communicate using video,
voice and short messages. The rescuer application
operates in a peer-to-peer (P2P) mode and
does not need a centralized server. The command
headquarter application, if running, is a special
version of the peer-to-peer multimedia application
which has a more complete view of the search and
rescue operation. The command center application
additionally runs a face image similarity search
application which is based on the well-known
Eigenface algorithm. It incorporates a mathematical
dimensionality reduction technique that helps
identify victims by comparing the features of the
query face image to the face image features of the
known missing persons. Placing sensors on
DUMBONET provides useful information for
rescue operation as well as for emergency warning
or preparedness operations. Sensor equipment from
the Live E! project have been enhanced with OLSR
routing capability and integrated into DUMBONET
to provide rescuers with the readings of
temperature, humidity, pressure, wind speed, wind
direction, rainfall and CO2. Sensor applications are
useful in terms of measuring and identifying
environmental and potentially harmful factors that
may affect the rescue operation.
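To make the Eigenface step concrete, here is a minimal, self-contained sketch of that kind of similarity search. It is our own illustration: random vectors stand in for flattened grayscale face images, and the component count and distance metric are assumptions, not the command center application's actual parameters.

import numpy as np

# Eigenface-style similarity search: project face vectors onto the top
# principal components of a gallery, then rank known missing-person
# images by Euclidean distance to the query in the reduced space.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(40, 64 * 64))        # 40 known faces, 64x64 pixels each

mean_face = gallery.mean(axis=0)
centered = gallery - mean_face
# SVD of the centered data: the rows of vt are the eigenfaces (principal axes).
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:20]                             # keep 20 components

def embed(face):
    """Project a face vector onto the eigenface subspace."""
    return eigenfaces @ (face - mean_face)

query = gallery[7] + rng.normal(scale=0.1, size=64 * 64)   # noisy copy of face 7
dists = np.linalg.norm((centered @ eigenfaces.T) - embed(query), axis=1)
print("best match:", int(np.argmin(dists)))      # expected: 7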
IV. CHALLENGES IN DUMBONET AND EMERGENCY RESPONSE APPLICATIONS
Maintaining MANET connectivity and quality of service (QoS) in disaster-affected areas is one of our primary research areas. The operational range of IEEE 802.11b WiFi is typically between 30 and 100 meters. There is a tradeoff between the WiFi power setting and the operational distance, as a device transmitting at a higher power level covers a wider operational area but has a shorter battery life. Each device's actual operational range is further limited by other environmental factors like obstacles, debris, difficult terrain, antenna angle/orientation, and more. In certain test cases, the actual OLSR routing path turned out not to be what we had anticipated. Variations in device power settings and WiFi chipsets could result in one mobile device choosing a physically farther device as its preferred MPR instead of a physically nearer one. The default OLSR implementation assumes a homogeneous network and does not take into account the different characteristics of the links when computing MANET routes. For example, OLSR's neighbor discovery mechanism (i.e. HELLO) does not distinguish a next-hop WiFi neighbor that transmits at 10 mW from a next-hop WiFi neighbor that transmits at 100 mW. Likewise, it does not distinguish a regular WiFi link from a link that incidentally passes through a satellite tunnel (i.e. through the VPN bridge). A MANET environment has variable bandwidths, topology changes and oftentimes severe packet losses. The long propagation delay of the satellite channel also deteriorates the quality of interactive streaming audio. The common E-model would predict bad-quality voice over IP (VoIP) experiences in most of our test scenarios.
The need to improve the quality of video and audio streaming, especially through the use of powerful codecs and error correction methods, is highly apparent. In addition, emergency response applications must be resilient in the MANET environment. Providing security capabilities for DUMBONET and emergency response applications is also necessary, as medical and personal information will have to pass through this system. There is a need to integrate encryption, authentication, verification and access control methods into DUMBONET along with the emergency response applications.
V.
LOSS BEHAVIOR
Packet loss in a network like DUMBONET could occur in various places: in the MANET, over the satellite, and in the end node itself at the application layer. The presence of mobile nodes in a MANET can cause many link failures with other nodes, and hence failure to transport any upper layer traffic. At the same time, frequent topology changes create more routing table changes in the mobile nodes, causing failure of packet delivery to the right destination. There is also a possibility of packet loss in the satellite network because of many factors including bad weather, sun interference, channel noise, equipment (e.g. antenna) problems, routing policy, etc. We examined the cumulative distribution of loss rates for all audio calls in DUMBONET; the cumulative distribution shows the percentage of calls which have a loss rate of x or less. It needs to be mentioned that the audio calls are interactive (i.e. bidirectional calls). The satellite channels contribute a large share of the packet loss in audio calls.
Throughput analysis: Packet loss in the network can degrade the traffic throughput to a significant degree. Each audio and video stream is categorized based on the number of hops it has traversed: 1-hop MANET, 2-hop MANET, 3-hop MANET, 4-hop MANET or 5-hop MANET, in addition to the satellite hop. The average throughput of a video stream can be as high as around 78 kbps when traversing a 1-hop MANET and a 1-hop satellite link, but it degrades dramatically as it traverses more hops in the MANET and could be as low as 15 kbps for 5 hops in the MANET and a 1-hop satellite link.
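The loss statistic above is simply an empirical cumulative distribution over per-call loss rates. A minimal sketch follows; the sample numbers are made up for illustration, whereas the real data came from the field-test call logs:

import numpy as np

def empirical_cdf(loss_rates):
    """Empirical CDF of per-call loss rates: fraction of calls with loss <= x."""
    xs = np.sort(np.asarray(loss_rates))
    ps = np.arange(1, len(xs) + 1) / len(xs)
    return xs, ps

# Hypothetical per-call loss rates (percent) standing in for field-test logs.
calls = [2.0, 35.5, 4.1, 12.3, 0.5, 28.9, 7.7, 3.3]
xs, ps = empirical_cdf(calls)
for x, p in zip(xs, ps):
    print(f"{p:4.0%} of calls have a loss rate <= {x:.1f}%")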
VI.
RELATED WORKS
There have been several works on disaster scenarios, but very few have real implementations or available data. Takaaki et al. proposed a data collection and management system for victims of a disaster. In their system, the data is collected by an ad-hoc network constructed from handheld and vehicular terminals equipped with wireless interfaces and GPS receivers. A multiple access control (MAC) protocol has been proposed to allow survivors in disaster areas to establish contact with the base stations (BSs) provided by the rescue team, but our DUMBONET can be used in the absence of any BS. This protocol relies on downstream broadcast single-hop wireless transmissions from the BSs to the survivors and upstream multi-hop wireless transmissions. For MAC, the protocol uses a combination of tree splitting and code division multiple access (CDMA). Compared to the single-hop approach for upstream transmissions, it shows that the multi-hop approach can expend less transmission energy, especially when data aggregation is possible at relay points. A real implementation of this protocol is not available. Another proposal for an emergency network is a Hybrid Wireless Mesh Network (HWMN) for creating a communication infrastructure where the existing communication infrastructure is damaged or unavailable. It is still in the development process, collecting simulation results in cellular networks by investigating different real scenarios that may occur at ground zero. Nelson et al. proposed a bandwidth adaptation algorithm to dynamically adjust the allocation of satellite channel resources in a hybrid MANET-satellite-Internet network in response to traffic and link status changes. Implementation issues are discussed with simulation results.
VII.
CONCLUSION
Designing a robust communications infrastructure for emergency applications is a demanding effort. It should allow for reliable communication among different response organizations over a distributed command and control infrastructure. Additionally, it should facilitate the distribution of warning and alert messages to a large number of users in a heterogeneous environment.
An emergency network platform based on OLSR, called DUMBONET, along with its P2P multimedia communication system for collaborative emergency response operations in disaster-affected areas, is a multidisciplinary effort that aims to create a real, viable system that can be used during disaster-related emergency situations in which traditional communication infrastructure is not operational. The system can be tested in the field to gain better understanding and insights. The potential research issues and enhancements have been described here, and we will continue to improve many aspects of DUMBONET for emergency response applications in the time to come.
REFERENCES
[1] B. Manoj and A. H. Baker. Communication challenges in emergency response. Communications of the ACM, 50(3):51-53, March 2007.
[2] Clausen, T., Jacquet, P., Laouiti, A., Minet, P., Muhlethaler, P., Qayyum, A., Viennot, L.: Optimized Link State Routing Protocol (OLSR), IETF RFC 3626 (2003).
[3] Dumbo: Digital ubiquitous mobile broadband OLSR. http://www.interlab.ait.ac.th/dumbo/, 2007 (accessed June 24, 2007).
[4] Ethereal Network Protocol Analyzer, http://www.ethereal.com/download.html
[5] Kanchanasut, K., Tunpan, A., Awal, M.A., Das, K.D., Wongsaardsakul, T., Tsuchimoto, Y.: A Multimedia Communication System for Collaborative Emergency Response Operation in Disaster-affected Areas, Interlab Technical Report TR 2007-1.
[6] Karygiannis, A., Antonakakis, E.: mLab: A Mobile Ad Hoc Network Test Bed. In: 1st Workshop on Security, Privacy and Trust in Pervasive and Ubiquitous Computing, in conjunction with the IEEE International Conference on Pervasive Services (2005).
[7] Marques, P., Castro, H., Ricardo, M.: Monitoring emerging IPv6 wireless access networks. Wireless Communications, IEEE (2005).
[8] Measurement Tools Taxonomy, http://www.caida.org/tools/taxonomy
[9] MRTG - The Multi Router Traffic Grapher, http://oss.oetiker.ch/mrtg/
[10] M. Kyng, E. Nielsen, and M. Kristensen. Challenges in designing interactive systems for emergency response. In Proceedings of the 6th ACM Conference on Designing Interactive Systems (University Park, PA, USA, June 26-28, 2006), DIS '06, pages 301-310, 2006.
[11] N. X. Liu, X. Zhou, and J. S. Baras. Adaptive hierarchical resource management for satellite channel in hybrid MANET-satellite-Internet network. In Proceedings of the 2004 IEEE 59th Vehicular Technology Conference (Milan, Italy, May 17-19, 2004), pages 4027-4031, 2004.
[12] NLANR/DAST: Iperf - the TCP/UDP bandwidth measurement tool. http://dast.nlanr.net/Projects/Iperf/, 2007 (accessed June 24, 2007).
[13] olsr.org. http://www.olsr.org, 2007 (accessed June 24, 2007).
[14] P. Saengudomlert, K. Ahmed, and R. Rajatheva. MAC protocol for contacts from survivors in disaster areas using multi-hop wireless transmissions. In Proceedings of the Asian Internet Engineering Conference (AINTEC) 2005 (Bangkok, Thailand, December 13-15, 2005), pages 46-56, 2005.
[15] R. B. Dilmaghani and R. R. Rao. Hybrid wireless mesh network deployment: a communication testbed for disaster scenarios. In Proceedings of the 1st International Workshop on Wireless Network Testbeds, Experimental Evaluation and Characterization (Los Angeles, CA, USA, September 29, 2006), page 90, 2006.
[16] Shakya, S.: Development of automated real time performance measurement and visualization framework for real mobile ad-hoc network. Thesis Report, Asian Institute of Technology, AIT (May 2007).
[17] TcpDump Network Monitoring Tool, http://www.tcpdump.org/
[18] The Optimized Link State Routing Protocol (OLSR), http://www.ietf.org/rfc/rfc3626.txt, 2007 (accessed June 24, 2007).
[19] T. Umedu, H. Urabe, J. Tsukamoto, K. Sato, and T. Higashino. A MANET protocol for information gathering from disaster victims. In Proceedings of the 4th IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOMW'06), March 2006, pages 442-446, 2006.
[20] T. Wongsaardsakul and K. Kanchanasut. Peer-to-peer emergency application on mobile ad-hoc network. Technical Report TR 2007-2, intERLab, Asian Institute of Technology, February 2007.
[21] Welcome to IPSTAR - broadband satellite system. http://www.ipstar.com/en/index.aspx, 2007 (accessed June 24, 2007).
PAPR Analysis of OFDM Systems
Saumendra Ku. Mohanty (1), ITER, BBSR, [email protected]
Prasant Ku. Nayak (2), SIET, Dhenkanal, [email protected]
B. D. Pradhan (3), SIET, Dhenkanal, [email protected]
Mihir N. Mohanty (4), ITER, BBSR, [email protected]

Abstract- In the current scenario, technology for wireless applications is very useful as well as essential. Fast communication that avoids channel congestion is still required. Multicarrier transmission is one solution, in systems based on Orthogonal Frequency Division Multiplexing (OFDM) or discrete multitone (DMT). It is of huge interest because it provides greater immunity to multipath fading and impulse noise, and eliminates the need for equalizers. It also suits high-speed wireless communications and recent advances in digital signal processing technology. In this paper, two aims are studied. First, a practical technique for evaluating the continuous-time PAPR of OFDM signals using complex modulation is presented. Second, conventional OFDM systems are examined with regard to the limitation of their behavior with respect to the peak-to-average power ratio (PAPR). Computing the continuous-time PAPR of OFDM signals is computationally challenging. The pioneering work of calculating the PAPR of single carrier FDMA, multi-carrier BPSK-OFDM (real-valued modulation) and multi-carrier QPSK-OFDM (complex modulation) is achieved.
Index Terms- OFDM, DMT, peak-to-average-power ratio, EPF, multicarrier modulation, SC-FDMA, MC-BPSK, MC-QPSK
I. INTRODUCTION
The major challenge in Orthogonal Frequency Division Multiplexing (OFDM) is that the output signal may have a potentially very large peak-to-average power ratio (PAPR, also known as PAR). The resulting technical challenges, as well as PAPR-reduction techniques and related issues, have been widely studied and reported in the research literature [1], [2]. The most widely known PAPR reduction techniques are based on amplitude clipping or on some forms of coding [2]. In this work, we have tried to characterize analytically the statistics of the PAPR problem in OFDM by considering the probability that the PAPR of an OFDM symbol will exceed a given level.
Since the actual signal that enters the power amplifiers is a continuous-time signal, we ultimately want to reduce the PAPR of the continuous-time OFDM signal (we call this the "continuous-time PAPR" for convenience). However, the evaluation of the continuous-time PAPR is analytically non-trivial and computationally expensive. Therefore, most PAPR-reduction techniques focus on discrete-time approximations of the continuous-time PAPR. The discrete-time approximations result in what we call the "discrete-time PAPR".
The central limit theorem effectively decides the envelope of the OFDM signal, and it is shown that, effectively, the PAPR grows as 2 ln N and not linearly with N [7], where N is the total number of subcarriers. In [3], Tellambura investigated the differences between the continuous-time PAPR and the discrete-time PAPR. To do this, Tellambura introduced a practical scheme to compute the continuous-time PAPR using Chebyshev polynomials of the first kind. The scheme was then used to obtain numerical results. Based on these results, a common rule-of-thumb that has since emerged in the OFDM research community is that the discrete-time PAPR with four-times oversampling is a sufficiently accurate approximation of the continuous-time PAPR [2].
Unfortunately, Tellambura's method [3] applies only to real-valued modulation schemes like BPSK (and results were only presented for N = 512 BPSK-OFDM), but not to complex-valued schemes like QPSK. To circumvent this shortcoming, [4] extended Tellambura's method to complex modulation schemes, using Chebyshev polynomials of both the first and second kinds. However, neither [3] nor [4] presents any analysis of the error from using the discrete-time PAPR instead of the continuous-time PAPR. Thus, even though the empirical distributions of the continuous-time PAPR and the four-times oversampled discrete-time PAPR may look close, there is no guarantee that the error is bounded. Some analytical bounds have been provided in [5]-[6]. However, due to the lack of computationally feasible methods to obtain the continuous-time PAPR, [5]-[6] used the discrete-time PAPR to verify their continuous-time PAPR bounds.
Here the expression for the instantaneous envelope power is written as a polynomial in powers of tan(πt). In contrast with [4], the proposed method employs only Chebyshev polynomials of the first kind. Also, because of the one-to-one relationship between tan(πt) and t in 0 ≤ t ≤ 1, the new method does not require breaking the problem into two domains (0 ≤ t ≤ 0.5 and 0.5 ≤ t ≤ 1) and carefully mapping the roots differently for each domain. Furthermore, comparisons are made between the distribution of the continuous-time PAPR obtained through the proposed method, the discrete-time PAPR obtained from oversampled signals, and some of the analytical upper bounds derived in [5]-[6].
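Because the oversampling rule-of-thumb recurs throughout this discussion, here is a minimal sketch (our own illustration with assumed names, not code from the paper) of how the discrete-time PAPR is computed in practice with a zero-padded IFFT:

import numpy as np

def discrete_time_papr_db(s, L=4):
    """Discrete-time PAPR (dB) of one OFDM symbol via L-times oversampling.

    Zero-padding the IFFT to length N*L yields the samples x(k/(NL)) of the
    continuous-time signal, up to a constant scale that cancels in the ratio."""
    N = len(s)
    x = np.fft.ifft(np.asarray(s, dtype=complex), n=N * L)
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 512) + 1j * rng.choice([-1, 1], 512)) / np.sqrt(2)
for L in (1, 2, 4, 8):
    print(f"L = {L}: PAPR = {discrete_time_papr_db(qpsk, L):.2f} dB")

As L grows, the reported PAPR increases toward the continuous-time value, which is exactly the convergence behavior discussed above.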
II. Criteria for Selection of PAPR Reduction Technique
As in everyday life, we must pay some costs for PAPR reduction. There are many factors that should be considered before a specific PAPR reduction technique is chosen. These factors include PAPR reduction capability, power increase in the transmit signal, BER increase at the receiver, loss in data rate, computational complexity increase, and so on. Next, we briefly discuss each item.
PAPR reduction capability: Clearly, this is the most important factor in choosing a PAPR reduction technique. Careful attention must be paid to the fact that some techniques result in other harmful effects. For example, the amplitude clipping technique clearly removes the time domain signal peaks, but results in in-band distortion and out-of-band radiation.
Power increase in transmit signal: Some techniques require a power increase in the transmit signal after using PAPR reduction techniques. For example, TR (tone reservation) requires more signal power because some of its power must be used for the PRCs (peak reduction carriers). TI (tone injection) uses a set of equivalent constellation points for an original constellation point to reduce PAPR. Since all the equivalent constellation points require more power than the original constellation point, the transmit signal will have more power after applying TI. When the transmit signal power should be equal to or less than that before using a PAPR reduction technique, the transmit signal should be normalized back to the original power level, resulting in BER performance degradation for these techniques.
BER increase at the receiver: This is also an important factor and is closely related to the power increase in the transmit signal. Some techniques may have an increase in BER at the receiver if the transmit signal power is fixed, or equivalently may require a larger transmit signal power to maintain the BER after applying the PAPR reduction technique. For example, the BER after applying ACE (active constellation extension) will be degraded if the transmit signal power is fixed. In some techniques such as SLM (selected mapping), PTS (partial transmit sequence), and interleaving, the entire data block may be lost if the side information is received in error. This may also increase the BER at the receiver.
Loss in data rate: Some techniques require the data rate to be reduced. As shown in the previous example, the block coding technique requires one out of four information symbols to be dedicated to controlling PAPR. In SLM, PTS, and interleaving, the data rate is reduced due to the side information used to inform the receiver of what has been done in the transmitter. In these techniques the side information may be received in error unless some form of protection such as channel coding is employed. When channel coding is used, the loss in data rate due to side information is increased further.
Computational complexity: Computational complexity is another important consideration in choosing a PAPR reduction technique. Techniques such as PTS find a solution for the PAPR-reduced signal by using many iterations. The PAPR reduction capability of the interleaving technique is better for a larger number of interleavers. Generally, more complex techniques have better PAPR reduction capability.
Other considerations: Many of the PAPR reduction techniques do not consider the effect of the components in the transmitter such as the transmit filter, digital-to-analog (D/A) converter, and transmit power amplifier. In practice, PAPR reduction techniques can be used only after careful performance and cost analyses for realistic environments.
PAPR reduction for OFDM/OFDMA: Recently, OFDMA has received much attention due to its applicability to high speed wireless multiple access communication systems. The evolution of OFDM to OFDMA completely preserves the advantages of OFDM. The drawbacks associated with OFDM, however, are also inherited by OFDMA. Hence, OFDMA also suffers from high PAPR. Some existing PAPR reduction techniques, which were originally designed for OFDM, process the whole data block as one unit, thus making downlink demodulation of OFDMA systems more difficult, since only part of the subcarriers in one OFDMA data block are demodulated by each user's receiver [29]. If downlink PAPR reduction is achieved by schemes designed for OFDM, each user has to process the whole data block and then demodulate the assigned subcarriers to extract their own information. This introduces additional processing for each user's receiver. The PAPR problem for an OFDMA uplink is not as serious as that for downlink transmission, since each user's transmitter modulates its data onto only some of the subcarriers in each data block.
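Since amplitude clipping is repeatedly cited above as the simplest PAPR reduction technique, here is a minimal sketch of it (our own illustration under stated assumptions, e.g. the 2 dB threshold; it is not one of the schemes evaluated in this paper's experiments):

import numpy as np

def clip(x, clip_db=2.0):
    """Amplitude clipping: limit the envelope to clip_db above the RMS level,
    preserving each sample's phase."""
    x = np.asarray(x, dtype=complex)
    rms = np.sqrt(np.mean(np.abs(x) ** 2))
    a_max = rms * 10 ** (clip_db / 20.0)
    mag = np.abs(x)
    scale = np.minimum(1.0, a_max / np.maximum(mag, 1e-12))  # guard zero samples
    return x * scale

The clipped signal has a lower PAPR by construction, but, as noted above, the nonlinearity introduces in-band distortion and out-of-band radiation, which is why the normalization and BER effects must be weighed.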
III.
ANALYTICAL MODEL
The baseband continuous-time OFDM signal with N carriers can be expressed as

x(t) = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} s_n e^{j 2\pi n t}, \qquad 0 \le t \le 1   (1)

where {s_n} are data symbols and t is normalized with respect to the OFDM symbol duration. With unity average power, the continuous-time PAPR, γc, is defined as

\gamma_c = \max_{0 \le t \le 1} |x(t)|^2   (2)

γc measures the instantaneous envelope peak power of the baseband signal and represents the maximal PAPR. It is non-trivial to compute. Tellambura's method [3] works only for the special case of real-valued modulation. As a computationally feasible alternative, the discrete-time PAPR, γd, is often used instead of γc and is defined as

\gamma_d = \max_{0 \le k \le NL-1} \left| x\!\left(\frac{k}{NL}\right) \right|^2   (3)

with L being the oversampling rate.

Let P_a(t) = |x(t)|^2. Without loss of generality, no assumptions are made on the modulation scheme used to generate {s_n}. It can be easily shown that

P_a(t) = \frac{1}{N} \left[ \sum_{n=0}^{N-1} |s_n|^2 + 2 \sum_{k=1}^{N-1} \big( \beta_k \cos(2\pi k t) + \alpha_k \sin(2\pi k t) \big) \right]   (5)

where β_k and α_k are defined as follows:

\beta_k = \sum_{n=0}^{N-1-k} \mathcal{R}\{ s_n s_{n+k}^* \}   (6)

and

\alpha_k = \sum_{n=0}^{N-1-k} \mathcal{I}\{ s_n s_{n+k}^* \}   (7)

with (·)* denoting complex conjugation and R{·} and I{·} being the real and imaginary parts of the enclosed quantity, respectively.

Clearly, a necessary condition for P_a(t) to achieve its maximum at t*, i.e. max_t P_a(t) = P_a(t*), is

\left. \frac{\partial P_a(t)}{\partial t} \right|_{t = t^*} = 0

Thus, a practical approach to computing P_a(t*) is to first find the roots of ∂P_a(t)/∂t, followed by comparing the values of P_a(t) at only the real roots. Following the approach of [2], we denote by T_k(t) = cos(k cos^{-1} t) the kth-order Chebyshev polynomial. For each k, T_k(x) can be expressed as a kth-degree polynomial in terms of x, where T_0(x) = 1, T_1(x) = x and T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x) for k > 1. Exploiting the equalities T_k(cos θ) = cos kθ and sin(kθ) = cos(kθ - π/2), and letting γ_k = cos(π/(2k)) and ζ_k = sin(π/(2k)) so that sin(2πkt) = T_k(γ_k cos 2πt + ζ_k sin 2πt), we can rewrite (5) in terms of Chebyshev polynomials as

P_a(t) = \frac{1}{N} \left[ \sum_{n=0}^{N-1} |s_n|^2 + 2 \sum_{k=1}^{N-1} \Big( \beta_k T_k(\cos 2\pi t) + \alpha_k T_k\big(\gamma_k \cos 2\pi t + \zeta_k \sin 2\pi t\big) \Big) \right]   (8)

Being different from the BPSK-OFDM systems considered in [3], the complex OFDM signal introduces the second term on the right hand side (R.H.S.) of (8), which presents a major challenge in obtaining exact γc values.

IV. PROPOSED MODEL
All trigonometric functions of an angle θ may be expressed as rational expressions in terms of tan(θ/2) [30]. Let x = tan(πt). Substituting (1 - x²)/(1 + x²) for cos(2πt) and 2x/(1 + x²) for sin(2πt) in (8), we have

P_a(x) = \frac{1}{N} \left[ \sum_{n=0}^{N-1} |s_n|^2 + 2 \sum_{k=1}^{N-1} \left( \beta_k T_k\!\left(\frac{1 - x^2}{1 + x^2}\right) + \alpha_k T_k\!\left(\gamma_k \frac{1 - x^2}{1 + x^2} + \zeta_k \frac{2x}{1 + x^2}\right) \right) \right]   (9)

We need only find the roots of ∂P_a(x)/∂x, since ∂P_a(t)/∂t = ∂P_a(x)/∂x · π sec²(πt). Because T_k(x) is an order-k polynomial, the highest power of 1/(1 + x²) in (9) is N - 1. Hence we can remove the denominator and thus obtain a polynomial Q(x) by writing

Q(x) = (1 + x^2)^N \, \frac{\partial P_a(x)}{\partial x}

Q(x) is a polynomial of degree at most 2N in x, and all roots of ∂P_a(x)/∂x are also roots of Q(x). Thus, ∂P_a(x)/∂x has at most 2N roots. P_a(x) can be routinely computed from (9) by expanding the Chebyshev polynomials, factoring out 1/(1 + x²)^N, and collecting terms. We may then evaluate the values of P_a(x) at the real roots, and the maximum is γc.
V. Numerical Procedure Summary
The proposed method for computing the continuous-time PAPR for a given symbol set {s_n} and number of subcarriers N is summarized as follows.
1) Compute β_k and α_k for k = 1, 2, ..., N-1 according to (6) and (7);
2) Compute P_a(x) according to (9), expanding and collecting the coefficients of the different powers of x;
3) Find the derivative of P_a(x);
4) Find the roots of Q(x), and hence of ∂P_a(x)/∂x, using standard polynomial root finding algorithms;
5) Keep only the real roots of Q(x);
6) Evaluate and compare the values of P_a(x) at the real roots, and obtain γc.
Each step is straightforwardly handled by common mathematical software like Mathematica or Matlab. In our experiments, we have found that step 2 (expanding and simplifying P_a(x)), while conceptually easy, may dominate the computation time, especially for large N. In particular, expanding and simplifying T_k[γ_k (1 - x²)/(1 + x²) + ζ_k (2x)/(1 + x²)] is a time-consuming operation for large k. For a given N, pre-computing these terms helps to significantly reduce the computation time.
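For readers who want to experiment, the sketch below carries out the same plan (critical points of P_a as roots of a polynomial, then evaluation at the real candidates) in Python. Note the assumption: instead of the tan(πt)/Chebyshev expansion of steps 1-6, it uses the mathematically equivalent trick of writing ∂P_a/∂t as a polynomial in z = e^{j2πt} and keeping its unit-circle roots, which sidesteps the symbolic expansion cost mentioned above. All names and tolerances are our own; this is an illustration, not the authors' code.

import numpy as np

def continuous_time_papr(s):
    """Continuous-time PAPR of one OFDM symbol
    x(t) = (1/sqrt(N)) * sum_n s[n] * exp(j*2*pi*n*t), 0 <= t < 1.

    Critical points of Pa(t) = |x(t)|^2 are found as the unit-circle roots
    of the polynomial associated with dPa/dt in z = exp(j*2*pi*t)."""
    s = np.asarray(s, dtype=complex)
    N = len(s)
    # Autocorrelation d[l] = sum_n s[n+l] * conj(s[n]), l = -(N-1)..N-1,
    # so that Pa(t) = (1/N) * sum_l d[l] * exp(j*2*pi*l*t).
    d = np.correlate(s, s, mode="full")
    lags = np.arange(-(N - 1), N)
    q = 1j * 2 * np.pi * lags * d          # dPa/dt coefficients, ascending powers
    roots = np.roots(q[::-1])              # np.roots expects descending powers
    circle = roots[np.abs(np.abs(roots) - 1.0) < 1e-6]
    t_crit = np.angle(circle) / (2 * np.pi)
    n = np.arange(N)
    peaks = [abs(np.sum(s * np.exp(2j * np.pi * n * t))) ** 2 / N for t in t_crit]
    return max(peaks) / np.mean(np.abs(s) ** 2)   # PAPR = peak / average power

rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 16) + 1j * rng.choice([-1, 1], 16)) / np.sqrt(2)
print(f"continuous-time PAPR: {10 * np.log10(continuous_time_papr(qpsk)):.2f} dB")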
VI. RESULTS
In this section, we evaluate the performance of the PAPR of different real and complex modulation schemes used in OFDM systems. We evaluate the proposed scheme using a QPSK-OFDM system with N = 512 for different sampling rates.
Fig. 5 shows the complementary cumulative distribution function (CCDF) of γd with different oversampling rates, L = 1, 2, 4, 8. The CCDF of γc, labelled as "continuous-time", is also plotted in Fig. 5. As indicated in Fig. 5, γd obtained from oversampled signals approaches γc as L increases, and γd obtained with an oversampling rate greater than or equal to L = 4 is an accurate approximation of γc. These results agree with those reported in [3], where real-valued OFDM signals (and BPSK-OFDM in particular) were considered.

Fig 5. Simulation of PAPR of QPSK-OFDM for different sampling rates (curves: continuous, L = 2, L = 4, L = 8; x-axis: PAPR (γ) in dB)

Fig. 6 shows that the transmitted SC-FDMA signal with a single carrier has a very low probability of error, as it is a continuous-time real-valued modulation scheme. In fact, for a PAPR of ~7 dB, we get a probability of error of ~0.0001, as shown in the plot. For a transmitted BPSK-OFDM signal with multiple carriers, the probability of error is high for a slight increase in PAPR, as it is a continuous-time real-valued multicarrier modulation technique. In fact, for a PAPR of ~8 dB, we get a probability of error of ~0.01, as shown in the plot. It is found from the transmitted QPSK-OFDM signal that, for a multicarrier signal, the effect on the probability of error is very low for a slight increase in PAPR, as it is a discrete-time complex-valued multicarrier modulation technique. In fact, for a PAPR of ~10 dB, we get a probability of error of ~0.0001, as shown in the plot.

Fig 6. Comparison of PAPRs with different modulation schemes (CDF plots of PAPR: SC-FDMA, QPSK-OFDMA, BPSK-OFDM; x-axis: PAPR x in dB, y-axis: probability X <= x)

VII. CONCLUSION
Using the proposed scheme, we have shown (Fig. 5) for complex-valued modulations (like QPSK-OFDM) that the discrete-time PAPR obtained from two-times oversampled signals may be considered a sufficiently accurate approximation of the continuous-time PAPR. We have also used our scheme to examine the empirical plots (Fig. 6), from which we can conclude that the discrete-time PAPR of QPSK-OFDM yields a lower probability of error even with higher-order nonlinearity in the system. This means that the signal is highly resistant to clipping distortions caused by the power amplifier used in transmitting the signal. It also means that the signal can be purposely clipped by up to ~2 dB so that the probability of error in both cases (BPSK and QPSK) can be reduced, allowing an increased transmitted power.

REFERENCES
[1] K. D. Wong, M.-O. Pun, and H. V. Poor, "The continuous-time peak-to-average power ratio of OFDM signals using complex modulation schemes," IEEE Trans. Commun., vol. 56, no. 9, pp. 1390-1393, Sep. 2008.
[2] S. H. Han and J. H. Lee, "An overview of peak-to-average power ratio reduction techniques for multicarrier transmission," IEEE Wireless Commun., vol. 12, no. 2, pp. 56-65, Apr. 2005.
[3] C. Tellambura, "Computation of the continuous-time PAR of an OFDM signal with BPSK subcarriers," IEEE Commun. Lett., vol. 5, no. 5, pp. 185-187, May 2001.
[4] H. Yu and G. Wei, "Computation of the continuous-time PAR of an OFDM signal," in Proc. 2003 IEEE Int'l Conf. Acoust. Speech Signal Process., Hong Kong, pp. 529-531, Apr. 2003.
[5] M. Sharif, M. Gharavi-Alkhansari, and B. H. Khalaj, "On the peak-to-average power of OFDM signals based on oversampling," IEEE Trans. Commun., vol. 51, no. 1, pp. 72-78, Jan. 2003.
[6] G. Wunder and H. Boche, "Upper bounds on the statistical distribution of the crest-factor in OFDM transmission," IEEE Trans. Inform. Theory, vol. 49, no. 2, pp. 488-494, Feb. 2003.
[7] N. Dinur and D. Wulich, "Peak-to-average power ratio in high-order OFDM," IEEE Trans. Commun., vol. 49, no. 6, pp. 1063-1072, June 2001.
High-speed full adder based on minority function and bridge style
Asirbad Behera (1), Subhrajit Dey (2)
Dept. of Electronics and Telecommunication Engineering,
Synergy Institute of Engineering & Technology, Dhenkanal, 759001, Orissa, India
(1) [email protected]  (2) [email protected]
Abstract - A new high-speed and high-performance Full
Adder cell is implemented based on CMOS bridge style
and minority function. Several simulations conducted at
nanoscale using different power supplies, load capacitors,
frequencies and temperatures demonstrate the superiority
of the advanced design in terms of delay and power-delay
product (PDP) compared to the other cells. The
performance of many applications such as digital signal
processing depends on the performance of the arithmetic
circuits to execute complex algorithms such as
convolution, correlation and digital filtering. Usually, the
performance of the integrated circuits is influenced by
how the arithmetic operators are implemented in the cell
library provided to the designer and used for synthesizing.
As more complex arithmetic circuits are presented each
day, the power consumption becomes more important.
Arithmetic circuits grow more complex with the
increasing processor bus width, so energy consumption is
becoming more important now than ever due to the
increase in the number and density of transistors on a
chip and faster clock. Increasing demand for fast growing
technologies in mobile electronic devices such as cellular
phones, PDA’s and laptop computers requires the use of a
low-power Full Adder in VLSI systems since it is the core
element of arithmetic circuits. Decreasing the power
supply leads to power consumption reduction. However,
lowering supply voltage also increases circuit delay and
degrades drive ability of cells designed with certain logic
styles
Keywords: Minority based full adder, Inverter based full adder,
Minority function bridge style full adder.
I. INTRODUCTION
The performance of many applications such as digital signal
processing depends on the performance of the arithmetic
circuits to execute complex algorithms such as convolution,
correlation and digital filtering. Usually, the performance of
the integrated circuits is influenced by how the arithmetic
operators are implemented in the cell library provided to the
designer and used for synthesizing. As more complex
arithmetic circuits are presented each day, the power
consumption becomes more important. Arithmetic circuits grow more complex with the increasing processor bus width, so energy consumption is becoming more important now than ever due to the increase in the number and density of transistors on a chip and faster clocks [1]. Increasing demand for fast-growing technologies in mobile electronic devices such as cellular phones, PDAs and laptop computers requires the use of a low-power Full Adder [2–5] in VLSI systems, since it is the core element of arithmetic circuits. Decreasing
the power supply leads to power consumption reduction.
However, lowering supply voltage also increases circuit delay
and degrades drive ability of cells designed with certain logic
styles. A specific task of our work is to make a comparison of
the power consumption of the Full Adders designed with
different logic styles. We measured the energy consumption
by the product of average power and worst case delay. The
power-delay product (PDP) represents a trade-off between
two compromising features of power dissipation and circuit
latency.
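Since the PDP is simply the product of the two measured quantities, it can be computed directly from the simulation outputs; a one-line sketch with made-up numbers (not results from our simulations):

def pdp_joules(avg_power_watts, worst_case_delay_seconds):
    # power-delay product: average power times worst-case delay
    return avg_power_watts * worst_case_delay_seconds

# a hypothetical cell drawing 0.3 uW on average with a 0.2 ns worst-case delay:
print(pdp_joules(0.3e-6, 0.2e-9))  # 6e-17 J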
II. PREVIOUS WORKS
In this section some state-of-the-art Full Adder cells, which
are compared with the proposed design, are reviewed in brief.
MinFA (Fig. 1) [9] is a Minority based Full Adder which has 34 transistors. Although this low-power CMOS based design is modular, it has a long critical path and lacks high driving capability at the Sum output node, which leads to a long propagation delay.
InvFA (Fig. 2) [10] is an Inverter based Full Adder composed of seven capacitors and four inverters. The main advantage of this design is its simplicity, modularity and low transistor count. Although it has driving capability at the output nodes, its relatively long critical path results in a long delay.
HCFA (Fig. 3) [13] is designed based on Hybrid CMOS
style and has 24 transistors. The XOR structure, used in this
design, is not full-swing. Using an XNOR circuit followed by
an inverter instead of a CMOS buffer, the Sum signal
becomes full-swing with a shorter critical path.
CLRCL (Fig. 4) [14], which has 12 transistors, is designed based on the pass-transistor logic style. Despite its small number of transistors, this design suffers from several drawbacks. The output and some internal nodes have a threshold loss problem and are not full-swing, which leads to low driving capability and long propagation delay.
Fig. 1. MinFA
Fig. 2. InvFA
Fig. 3. HCFA
Fig. 4. CLRCL

III. MINORITY FUNCTION BRIDGE STYLE FULL ADDER (MBFA)

The functionality of the proposed Full Adder cell is based on the following equations:

Sum = NOT(Cout)·(A + B + Cin) + A·B·Cin        (1)

NOT(Cout) = Minority(A, B, Cin)        (2)

This means that the complement of the carry can be implemented by a Minority circuit, and the Sum function can be implemented using it. The Minority function acts as follows: if the number of '0's at the input is greater than the number of '1's, the output is '1'. Minority is a function of an odd number of inputs. The proposed Full Adder cell (MBFA) is designed using a 3-input Minority circuit, followed by a Bridge style structure (Fig. 5).

The advanced adder module has the advantages of the Bridge style, including low power consumption and simplicity of design. In comparison with BCFA, the new design has some great advantages which improve its metrics significantly. The Achilles' heel of BCFA is the node at which the Bridge circuit, which does not have high driving power, must drive a 2C capacitor and an inverter; this increases the delay of the circuit, specifically at low voltages and at nanoscale. In the proposed circuit, however, an inverter with high driving power drives four transistor gates and an inverter. The proposed design eliminates the 2C capacitor of the BCFA design. Furthermore, as three capacitors perform the voltage summation implementing the scaled-linear sum in the proposed design, instead of five capacitors, it has larger noise margins in comparison with the BCFA structure.

Fig. 5. MBFA
Fig.6 Layout of MBFA
SIET, Dhenkanal, Odisha
National Conference on Recent Advance in Science and Technology(NCRAST), Sept. 30 - Oct. 1, 2011
The new design has no threshold loss problem at its nodes and has a larger noise margin than the HCFA and CLRCL state-of-the-art Full Adder cells, specifically at low voltages. The MBFA has a symmetric structure, which leads to a simpler layout process. The layout view of the MBFA Full Adder is shown in Fig. 6.
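As a quick sanity check of equations (1) and (2), the following small sketch (Python) exhaustively verifies that the minority-based expressions reproduce the full-adder truth table:

from itertools import product

def minority(a, b, c):
    # output is 1 when the '0's outnumber the '1's among the inputs
    return int(a + b + c <= 1)

for a, b, cin in product((0, 1), repeat=3):
    not_cout = minority(a, b, cin)                    # equation (2)
    s = (not_cout & (a | b | cin)) | (a & b & cin)    # equation (1)
    assert s == (a ^ b ^ cin)                         # full-adder sum
    assert (1 - not_cout) == int(a + b + cin >= 2)    # carry = majority
print("equations (1)-(2) reproduce the full-adder truth table")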
IV. EXPERIMENTAL RESULTS AND COMPARISON
In this section, the proposed design and the other state-of-the-art adders, including MinFA, InvFA, NMNFA, BCFA, HCFA and CLRCL, are all simulated in various situations using Synopsys HSPICE with standard nanoscale CMOS technologies at room temperature. The designs are simulated
at different supply voltages and with the aim of reaching the
optimum PDP. In addition, various load capacitors and
different frequencies are used for the simulations. The
propagation delay of each adder cell is measured from the
moment that the input signal reaches 1/2Vdd to the moment
that the output signal reaches the same voltage level. The
average power consumption is also measured by applying a
complete pattern during a long period of time. In this
experiment, the adder cells are simulated at 32 nm feature
size, 0.9, 0.8 and 0.7 V supply voltages, 100 MHz frequency
and with 3 fF load capacitors.
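The delay measurement described above can be reproduced from exported simulator waveforms with a small post-processing sketch (the array-based linear interpolation is our own choice, not part of the HSPICE flow):

import numpy as np

def crossing_time(t, v, level):
    # first time the waveform crosses `level`, linearly interpolated
    above = (v > level).astype(int)
    i = int(np.nonzero(np.diff(above))[0][0])
    return t[i] + (level - v[i]) * (t[i + 1] - t[i]) / (v[i + 1] - v[i])

def propagation_delay(t, vin, vout, vdd):
    # delay between the input and output 1/2-Vdd crossings
    return crossing_time(t, vout, vdd / 2) - crossing_time(t, vin, vdd / 2)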
Fig. 7. PDP vs load capacitor
Fig. 8. PDP vs frequency
Fig. 9. PDP vs temperature

To evaluate the performance of the proposed adder cell more accurately, it is also tested using a larger test-bench, which is shown in Fig. 10.

Fig. 10. Larger test-bench
To evaluate the driving capability of the adder cells, they are simulated using several output load capacitors, ranging from 2 to 5 fF, at the previously mentioned simulation conditions. The PDPs of the adders are evaluated and plotted in Fig. 7.
To evaluate the functionality and performance of the adder cells at different operating frequencies, they are tested at operating frequencies from 100 MHz up to 1 GHz. The experimental results are plotted in Fig. 8.
To test the immunity of the circuits to ambient temperature noise and variations, the designs are also simulated over a wide range of temperatures, from 0 to 70 °C, at the previously mentioned simulation conditions. The results of this experiment are shown in Fig. 9.
In addition, all of the designs are also simulated at 45 nm
technology node at 1 V, 100 MHz operation frequency, and
with 4 fF output load capacitance. The simulation results, shown in Table 2, demonstrate the superiority of the proposed design in terms of delay, power consumption and PDP. Inaccuracies in the chip fabrication process lead to variability in process parameters such as threshold voltage (Vth), gate oxide thickness (Tox) and effective channel length (Leff), which should also be taken into consideration. Die-to-Die (D2D) and
Within-Die (WID) variations are two different types of
process variations. WID itself is divided into random and
systematic components. These variations can degrade the
robustness and performance of the nanoscale VLSI circuits.
Therefore, the proposed design is also examined in the
presence of process variations, including the parameter
deviations in threshold voltage and channel length which are
the most common WID process variations. For this
experiment, Monte Carlo transient analysis with a reasonable
number of 30 iterations for each simulation is conducted
using the HSPICE simulator.
Fig. 10. Parameter deviation versus Vt variations
Fig. 11. Parameter deviation versus Lch variations
Table 1. Maximum parameter variation of the proposed circuit versus capacitance value deviation
86
Capacitance value deviation (%)   Delay (x10^-10 s)   Power (x10^-7 W)   PDP (x10^-17 J)
 5                                0.055               0.022              0.233
10                                0.112               0.039              0.458
15                                0.158               0.061              0.671
20                                0.209               0.093              0.907
25                                0.245               0.134              1.098
30                                0.323               0.182              1.452
35                                0.370               0.243              1.721
40                                0.494               0.312              2.239
45                                0.627               0.408              2.951
50                                0.796               0.539              3.888
To focus on the improvements of the new structure in comparison with BCFA, which is also based on the CMOS Bridge style and capacitors, the proposed design is compared with this cell. Figs. 10 and 11 demonstrate the maximum delay, power and PDP variations of the Full Adders with respect to threshold voltage and channel length variations, respectively. It can be inferred from the results that the proposed Full Adder cell can operate correctly and experiences only small parameter variations in the presence of process variations. The proposed Full Adder cell outperforms BCFA in terms of timing variation, which is the most important parameter variation in VLSI circuits, as well as PDP variation. This shows that the new Bridge-Cap structure also leads to more robustness compared to the previous one.

Fig. 12. New design with CNFET technology

In this circuit the diameter of the nanotubes of each CNFET (DCNT) is 1.487 nm for each transistor. This design is simulated at 0.8 V, 100 MHz frequency and with a 2.1 fF load capacitor.

Table 2. Simulation results of the proposed Full Adder cell using CNFET technology

Technology   Delay (x10^-10 s)   Power (x10^-7 W)   PDP (x10^-17 J)
CNFET        32.068              2.9411             9.4315
MOSFET       128.33              3.1852             40.877

V. CONCLUSION

The Minority function is used in order to implement the Sum signal based on the Bridge style. The new design has been evaluated in terms of average power, critical path delay, PDP, leakage power and area, and has been compared with several state-of-the-art adder cells. Simulation results have demonstrated the superiority of the new design in terms of the mentioned metrics compared to the other modern designs. Moreover, some additional experiments have been performed to evaluate the immunity of the proposed design to the inaccuracy of fabrication and process variations. The presented structure has also been implemented with the CNFET nano-device and CNCAPs as an instance of possible future work.

REFERENCES
[1]. C.H. Chang, J. Gu, M. Zhang, A review of 0.18 µm Full Adder performances for tree structured arithmetic circuits, IEEE Transactions on Very Large Scale Integration (VLSI) Systems 13 (6) (2005) 686–695.
[2]. T. Vigneswaran, B. Mukundhan, P. Subbarami Reddy, A novel low power high speed 14 transistor CMOS full adder cell with 50% improvement in threshold loss problem, Proceedings of World Academy of Science, Engineering and Technology 13 (2006).
[3]. F. Moradi, D.T. Wisland, H. Mahmoodi, S. Aunet, T.V. Cao, A. Peiravi, Ultra low power full adder topologies, in: Proceedings of the IEEE International Symposium on Circuits and Systems, May 2009, pp. 3158–3161.
[4]. A.M. Shams, T.K. Darwish, M.A. Bayoumi, Performance analysis of low-power 1-bit CMOS Full Adder cells, IEEE Transactions on VLSI Systems 10 (2002) 20–29.
[5]. M. Rouholamini, O. Kavehei, A. Mirbaha, S. Jasbi, K. Navi, A new design for 7:2 compressors, in: Proceedings of the ACS/IEEE International Conference on Computer Systems and Applications, AICCSA, 2007.
[6]. K. Navi, R. Faghih Mirzaee, M.H. Moaiyeri, B. Mazloom Nezhad, O. Hashemipour, K. Shams, Ultra high speed Full Adders, IEICE Electronics Express 5 (18) (2008) 744–749.
[7]. W. Ibrahim, V. Beiu, M.H. Sulieman, On the reliability of majority gates full adders, IEEE Transactions on Nanotechnology 7 (2008) 56–67.
[8]. K. Navi, O. Kavehei, M. Rouholamini, A. Sahafi, Sh. Mehrabi, N. Dadkhahi, Low-power and high-performance 1-bit CMOS full adder cell, Journal of Computers 3 (2) (2008).
[9]. K. Granhaug, S. Aunet, Six subthreshold Full Adder cells characterized in 90 nm CMOS technology, in: Proceedings of the 2006 IEEE Design and Diagnostics of Electronic Circuits and Systems, Washington, DC, USA, 2006, pp. 25–30.
[10]. K. Navi, M. Maeen, V. Foroutan, S. Timarchi, O. Kavehei, A novel low power full-adder cell for low voltage, Integration, the VLSI Journal, Elsevier 42 (4) (2009) 457–467.
[11]. K. Navi, V. Foroutan, M. Rahimi Azghadi, M. Maeen, M. Ebrahimpour, M. Kaveh, O. Kavehei, A novel low-power full-adder cell with new technique in designing logical gates based on static CMOS inverter, Microelectronics Journal, Elsevier 40 (10) (2009) 1441–1448.
[12]. K. Navi, M.H. Moaiyeri, R. Faghih Mirzaee, O. Hashemipour, B. Mazloom Nezhad, Two new low-power Full Adders based on majority-not gates, Microelectronics Journal, Elsevier 40 (2009) 126–130.
[13]. C.K. Tung, Y.C. Hung, S.H. Shieh, G.S. Huang, A low-power high-speed hybrid CMOS Full Adder for embedded system, in: Design and Diagnostics of Electronic Circuits and Systems, DDECS '07, IEEE, 2007, pp. 1–4.
[14]. J.F. Lin, Y.T. Hwang, M.H. Sheu, C. Ho, A novel high speed and energy efficient 10-transistor Full Adder design, IEEE Transactions on Circuits & Systems I 54 (5) (2007) 1050–1059.
A Novel Approach for the Detection of Stator Inter-turn Short Circuit Fault in Induction Motor
Rudra Narayan Das, Department of Electrical Engineering, KIIT Deemed University, Bhubaneswar
Ramakanta Mahanta, Department of Electrical Engineering, Synergy Institute of Engineering & Technology, Dhenkanal
Abstract— Industrial motors are subjected to different incipient faults. Stator winding faults are considered one of the main faults occurring in induction motors, accounting for about 30%–40% of failures. These faults could be due to turn-to-turn, phase-to-phase, or winding-to-earth short circuits. If undetected, they can lead to motor failure. Therefore, a monitoring system is necessary to increase the life span of machines. Monitoring of the machine is useful to warn of impending failures, prevent further damage, and thus reduce maintenance costs. This paper presents a neural network approach
to detect an inter-turn short circuit fault in the stator windings of
the induction motor. The fault detection and location are achieved
by a feed-forward multilayer perceptron neural network trained by the back propagation algorithm. The location process is based on monitoring the three phase shifts between the line currents and phase voltages of the induction machine. Here we use the phase shifts as the inputs to the neural network. Simulation results are presented to demonstrate the effectiveness of the NN method of fault diagnosis.
Keywords- Diagnosis, induction machine, back propagation, inter-turn short circuit, neural network, phase shifts

I. INTRODUCTION

A fault can be defined as an unexpected change of the system functionality which may be related to a failure in a physical component or in a system sensor or actuator. A fault diagnosis system should perform two tasks, namely fault detection and fault isolation. The purpose of the former is to determine that a fault has occurred in the system. To achieve this goal, all the available information from the system should be collected and processed to detect any change from the nominal behavior of the process. The second task is devoted to locating the fault source. The induction motor is considered a robust machine but, like other rotating electrical machines, it is subjected to both electrical and mechanical faults. These faults can be classified as follows:
(a) Stator faults: stator winding open or short circuits and stator inter-turn faults.
(b) Rotor faults: rotor winding open or short circuits for wound rotor machines, and broken bar or cracked end ring faults for cage machines.
(c) Rotor mechanical faults: bearing damage, static and dynamic eccentricity, bent shaft and misalignment.

Techniques for detecting and diagnosing these faults fall broadly into three classes. The first approach is based on signal analysis [1]-[3], which uses the techniques of the time domain, frequency domain, time-frequency domain and higher-order spectra. The second class is based on the analytical approach [4]-[5], which involves detailed mathematical models that use some measured inputs and outputs and generate features such as residuals (the difference between the nominal and the estimated model parameters), parameter estimates and state estimates. The third approach is the knowledge-based approach, which is used to automate the analysis of the measured signals by incorporating artificial intelligence (AI) tools into the online monitoring schemes [6]. Recent developments in diagnostic systems, together with the need to organize and manage more and more complex software applications, have led to the consideration of radically different diagnostic strategies making extensive use of artificial intelligence (AI) based techniques [7], including expert systems, fuzzy systems, neural networks, combined techniques and support vector machines.

Neural networks have gained popularity over other techniques due to their generalization capability during real-time inference, which means that they are able to perform satisfactorily even for unseen faults. Unlike the parameter estimation technique, neural networks can perform fault detection based on measurements and training, without the need for complex and rigorous mathematical models. In addition, heuristic interpretation of motor conditions, which sometimes only humans are capable of, can easily be implemented in neural networks through supervised training. For many fault detection schemes, redundant information is available and can be used to achieve more accurate results. This concept can easily be adopted in a neural network implementation with its multiple input
parallel processing features to enhance the robustness of the network performance. Different kinds of fault indicators have been used as neural network inputs in extensive research works based on the neural network approach for induction motor stator fault diagnosis. We can find the use of current and speed [8], the three currents and three voltages [9], the negative and positive sequence stator currents, the slip and the rated slip [10], the power spectral density of the residual currents [11], noisy residual currents [12], current and vibration signals [13], stator current Park's vector patterns [14] and vibration spectra [15].

In our approach, we use the three phase shifts between the line currents and the phase voltages of the induction motor as inputs to the neural network. The phase shift is preferable to the other indicators as an inter-turn short circuit fault feature signal [16]. However, the study is limited to the detection of the fault by the simple appearance of unbalance of the three phase shifts. The three phase shifts are considered robust and efficient indicators of stator fault. Consequently, monitoring these three phase shifts by a neural network allows one to detect and locate automatically an inter-turn short circuit fault, overcoming the problem under different load conditions.

II. FAULT DIAGNOSIS SYSTEM

The proposed work consists of the detection and the location of an inter-turn short circuit fault on the stator windings of a three phase induction motor by using a feed-forward multilayer perceptron (MLP) neural network.

Fig. 1. Block diagram of the fault location procedure

Fig. 1 shows the block diagram of the fault location procedure. The first step of this procedure is the acquisition of the three currents and three voltages from the machine in order to extract the three phase shifts between the line currents and the phase voltages. The neural network is trained offline using the back propagation algorithm. The neural network has to learn the relationships between the fault signature (NN inputs) and the corresponding operating condition (NN outputs) to be able to locate the faulty phase correctly. The neural network has three inputs, which are the three phase shifts, and three outputs corresponding to the three phases of the induction motor, where the fault can occur. If a short circuit is detected and located on one of the three phases, the corresponding NN output is set to "one"; otherwise it is "zero".

III. INDUCTION MOTOR MODEL FOR FAULT DETECTION

The basis of any reliable diagnostic method is a good understanding of the machine behavior in the healthy state and under fault conditions. Fig. 2 shows a suitable model which takes into account the presence of an inter-turn short circuit fault in the stator winding of an induction motor. In the faulty case, the model can be characterized by two modes. The common mode corresponds to the dynamic model in healthy operation of the machine (Park's model), and the differential mode model explains the faults. This model, which is very simple to implement because it is expressed in Park's frame, offers the advantage of explaining the defect through a short circuit element Ossk dedicated to each phase of the stator (k = 1, 2, 3).

Fig. 2. Stator faulty model in dq frame

The model of the differential mode introduces two parameters defining the faults in the stator:
(1) The location parameter θssk, which is the angle between the inter-turn short circuit stator winding and the first stator phase axis. This parameter can take only the three
values 0, 2π/3, and 4π/3, corresponding to the short
circuit on the phase as, bs or cs respectively.
(2) The detection parameter λssk, which is equal to the ratio
between the number of inter-turn short circuit windings
and the total number of turns in the healthy phase. This
parameter permits one to quantify the unbalance.
More detailed description of the faulty model is presented in reference [17]. The state space representation of the faulty model is as follows:

X' = F(ω)·X + G·U
Y = H·X + I(λss, θss)·U

where X = [ids  iqs  φdr  φqr]^T is the state vector and U = [Vds  Vqs]^T is the input vector, with

F(ω) = [ -(Rs+Rr)/Lf    ω              Rr/(Lm·Lf)    ω/Lf
         -ω             -(Rs+Rr)/Lf    -ω/Lf         Rr/(Lm·Lf)
         Rr             0              -Rr/Lm        -ω
         0              Rr             ω             -Rr/Lm ]

G = [ 1/Lf   0
      0      1/Lf
      0      0
      0      0 ]

H = [ 1  0  0  0
      0  1  0  0 ]

I(λss, θss) = (2/(3·Rs)) Σ(k=1..3) λssk·R(θ)·Ossk·R(θ)^-1

Ossk = [ cos²(θssk)            cos(θssk)·sin(θssk)
         cos(θssk)·sin(θssk)   sin²(θssk) ]

R(θ) = [ cos(θ)   -sin(θ)
         sin(θ)    cos(θ) ]

Here ids, iqs are the dq stator current components; φdr, φqr are the dq rotor flux linkages; Vds, Vqs are the dq stator voltages; θ is the electrical angle; ω = dθ/dt; and Rs, Lf, Rr and Lm are the stator resistance, the global leakage inductance referred to the stator, the rotor resistance and the magnetizing inductance, respectively.
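For illustration, a sketch (Python) assembling the common-mode matrices above; the signs follow the reconstruction given here and the parameter values are placeholders, not the data of the machine studied in this paper:

import numpy as np

Rs, Rr, Lf, Lm = 10.0, 5.0, 0.05, 0.5  # placeholder machine parameters

def F(w):
    # common-mode (Park) state matrix for X = [ids, iqs, phidr, phiqr]
    return np.array([
        [-(Rs + Rr) / Lf,  w,               Rr / (Lm * Lf),  w / Lf],
        [-w,              -(Rs + Rr) / Lf, -w / Lf,          Rr / (Lm * Lf)],
        [Rr,               0.0,            -Rr / Lm,        -w],
        [0.0,              Rr,              w,               -Rr / Lm]])

G = np.array([[1 / Lf, 0.0], [0.0, 1 / Lf], [0.0, 0.0], [0.0, 0.0]])
H = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])

def O_ssk(theta):
    # short circuit element dedicated to one stator phase
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]])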
IV. SIMULATION RESULTS

Characteristics of the simulated phase shifts
This section presents the study of the behavior of the three
phase shifts in the presence of inter-turn short circuit fault
and under different load conditions. According to their good features, we have selected the three phase shifts (pha, phb and phc) as the most suitable inputs for the NN. Under normal operation and balanced conditions, the machine gives phase voltages and line currents equal in magnitude and shifted by 120° electrical, but under faulty operation the currents are altered and, consequently, so are the phase shifts.
To investigate the currents of the induction motor under an inter-turn short circuit fault, we have written a suitable program simulating the induction motor before and after the fault condition. For this program we have taken a fixed sampling step of 0.5 ms. The differential equations of the model were solved by the fourth-order Runge-Kutta method. Fig. 3 shows the profiles of the simulated three line currents with no load torque and under a stator fault of 48 shorted turns on one of the three phases, introduced at 0.5 sec. When a fault occurs on one of the three phases, a significant rise of the current appears, particularly in the corresponding phase. Thus it is clear that an inter-turn short circuit principally affects the peak value of the stator current of the faulty phase. The other stator phase currents suffer smaller influences.
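The fourth-order Runge-Kutta integration used here can be sketched as follows (Python; f is the state derivative X' = F(ω)·X + G·U of the model above, and h = 0.5e-3 s matches the fixed sampling step):

def rk4_step(f, t, x, h):
    # one fourth-order Runge-Kutta step for dx/dt = f(t, x)
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)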
Fig. 3. Fault effect on the three line currents
Fig. 4. Phase shift characteristics for fault on phase as
Figs. 4-6 show the fault effect characteristics of the three phase shifts under a load torque (T) of 3 N-m, as a function of the faulty turn number (n), for faults on phases as (Fig. 4), bs (Fig. 5) and cs (Fig. 6). The figures show that, for any number of faulty turns, the three simultaneous values of the phase shifts are quite distinct. This difference of values is linked to the severity of the fault, which is expressed by the number of shorted turns. It can be noted that in case of a stator fault on one of the three phases, the smallest value of the three phase shifts is usually on the phase where the fault has occurred. With this characteristic we can localize the faulty phase.
Fig.5 Phase shift characteristics for fault on phase bs.
Fig. 6. Phase shift characteristics for fault on phase cs

After this detailed analysis, we can conclude that the features extracted from the behavior of the three phase shifts under fault conditions are efficient indicators to detect a stator fault, to locate the phase where the fault occurred and also to provide information about the fault severity. Thus the three phase shifts provide adequate data for the neural diagnosis system in order to ensure effective monitoring.

Database selection

A training database constituted by input and output data sets has been applied to the NN. The input data are collected through simulation in Matlab, using the model in Fig. 2. To achieve a good location of the IM faulty phase, the training data should represent the complete range of the operating conditions, containing all the possible fault occurrences and even the healthy cases.

For this purpose, the input data set, which is shown in Fig. 7, is composed of a successive range of several examples in different operating conditions of the IM. Each example is composed of the three phase shifts. All these examples are presented to the NN under three load conditions (T = 7, 5 and 3 N-m) and represent the following different operating cases of the IM: healthy (three points) and faults of an odd number n of shorted turns (n = 1, 3, 5, 7, 9, 11, 13 and 15) on each stator phase [24 (8×3) points]. Thus a total of 75 (24×3+3) training samples is taught to the NN.

The output data set is formed by the following desired outputs (Ti), which indicate the state of each phase:
T1 = 1 for a short circuit at phase as; otherwise, T1 = 0;
T2 = 1 for a short circuit at phase bs; otherwise, T2 = 0;
T3 = 1 for a short circuit at phase cs; otherwise, T3 = 0.

Therefore, the output states of the NN are set to the following:
[0; 0; 0] no fault (healthy mode);
[1; 0; 0] fault occurred at phase as;
[0; 1; 0] fault occurred at phase bs;
[0; 0; 1] fault occurred at phase cs.

Fig. 7. Simulated training input data set of the NN

Structure of the NN

Here we have used a feed-forward MLP network trained by the back propagation algorithm. The number of inputs and outputs is fixed by the number of fault indicators, which are the three phase shifts, and the number of fault cases, which are the three phases of the IM, respectively, but the number of neurons in the hidden layer is not known.
Fig. 8. NN architecture
Fig. 9. Training performance of the NN (training MSE versus epochs)
If the number of the neurons in the hidden layer is too few,
the NN cannot learn well, and if this number is too large, the
NN may simply memorize the training set. First, we start
with a few neurons (e.g. two); then we add more until an appropriate number that provides a low mean square error (MSE) is reached. In our case, the best training performance of the NN is obtained with five neurons in the hidden layer. Fig. 8 shows the structure of the MLP network adopted for the location of the faulty phase of the IM. This network has three inputs (pha, phb and phc), three outputs (phases as, bs and cs), and a hidden layer of five neurons.
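A minimal sketch of such a 3-5-3 network trained by backpropagation is given below (Python with sigmoid units and a squared-error loss; the learning rate and initialization are our illustrative assumptions, not the training settings used here):

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 3 inputs (pha, phb, phc) -> 5 hidden neurons -> 3 outputs (T1, T2, T3)
W1, b1 = rng.normal(scale=0.5, size=(5, 3)), np.zeros(5)
W2, b2 = rng.normal(scale=0.5, size=(3, 5)), np.zeros(3)

def forward(x):
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2), h

def train_step(x, t, lr=0.5):
    # one backpropagation update of the squared-error loss 0.5*||y - t||^2
    global W1, b1, W2, b2
    y, h = forward(x)
    d2 = (y - t) * y * (1 - y)        # output-layer delta
    d1 = (W2.T @ d2) * h * (1 - h)    # hidden-layer delta
    W2 -= lr * np.outer(d2, h); b2 -= lr * d2
    W1 -= lr * np.outer(d1, x); b1 -= lr * d1
    return float(np.sum((y - t) ** 2))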
Training results
The performance of the NN is indicated by its MSE, shown in Fig. 9. After learning for 1000 epochs, the NN reaches a training mean square error that is practically equal to zero.
The training outputs of the NN are shown in Fig. 10. From the figure it is clear that the NN has learned the input data well and has correctly reproduced the desired outputs. The NN output 1 (O1) is set to (1, 0, 0) to indicate faults on phase as with good accuracy, which means that the NN is able to locate faults on phase as correctly. The NN output 2 (O2) is set to (0, 1, 0) to indicate faults on phase bs with good accuracy, so the NN is able to locate faults on phase bs correctly. The NN output 3 (O3) is set to (0, 0, 1) to indicate faults on phase cs with good accuracy, so the NN is able to locate faults on phase cs correctly.
Fig. 10 Training output of the NN
V. CONCLUSION
This paper presents a neural network method to detect an inter-turn short circuit fault on the stator windings of the induction motor. The diagnostic process simultaneously monitors the values of the three phase shifts between the line currents and the phase voltages by means of a simple multilayer perceptron neural network. The features of the phase shifts give the information needed for the detection of an inter-turn short-circuit fault in the stator windings of an induction motor. The simulation results prove that this approach is useful to ensure a reliable and accurate fault diagnosis process.
REFERENCES
[1] M. E. H. Benbouzid, "A review of induction motors signature analysis as a medium for faults detection," IEEE Trans. Ind. Electron., vol. 47, no. 5, pp. 984–993, Oct. 2000.
[2] G. G. Acosta, C. J. Verucchi, and E. R. Gelso, "A current monitoring system for diagnosing electrical failures in induction motors," Mech. Syst. Signal Process., vol. 20, no. 4, pp. 953–965, May 2006.
[3] S. Nandi and H. A. Toliyat, "Novel frequency domain based technique to detect incipient stator inter-turn faults in induction machines," in Conf. Rec. IEEE IAS Annu. Meeting, 2000, pp. 367–374.
[4] S. Bachir, S. Tnani, J.-C. Trigeassou, and G. Champenois, "Diagnosis by parameter estimation of stator and rotor faults occurring in induction machines," IEEE Trans. Ind. Electron., vol. 53, no. 3, pp. 963–973, Jun. 2006.
[5] F. Filippetti et al., "State of art of model diagnostic procedures for induction machines inter-turns short circuits," in Proc. IEEE SDEMPED, Gijon, Spain, Sep. 1999, pp. 19–31.
[6] V. Uraikul, C. W. Chan, and P. Tontiwachwuthikul, "Artificial intelligence for monitoring and supervisory control of process systems," Eng. Appl. Artif. Intell., vol. 20, no. 2, pp. 115–131, Mar. 2007.
[7] A. Siddique, G. S. Yadava, and B. Singh, "Applications of artificial intelligence techniques for induction machine stator fault diagnostics: Review," in Proc. SDEMPED, Atlanta, GA, Aug. 24–26, 2003, pp. 29–34.
[8] M. Y. Chow and S. O. Yee, "Using neural networks to detect incipient faults in induction motors," J. Neural Netw. Comput., vol. 2, no. 3, pp. 26–32, 1991.
[9] S. R. Kolla and S. D. Altman, "Artificial neural network based fault identification scheme implementation for a three-phase induction motor," ISA Trans., vol. 46, no. 2, pp. 261–266, Apr. 2007.
[10] F. Filippetti, G. Franceschini, C. Tassoni, and P. Vas, "Recent developments of induction motor drives fault diagnosis using AI techniques," IEEE Trans. Ind. Electron., vol. 47, no. 5, pp. 994–1003, Oct. 2000.
[11] M. Bouzid, N. Mrabet, S. Moreau, and L. Signac, "Accurate detection of stator and rotor fault by neural network in induction motor," in Proc. IEEE SSD, Hammamet, Tunisia, Mar. 2007, vol. III, pp. 1–7.
Design of PI Controller for DC-DC Buck Converter
Usharani Raut¹, Dept. of EEE, IIIT, BPUT, Bhubaneswar, Odisha. E-mail: [email protected]
Sanjaya Kumar Jena², Dept. of CSE, ITER, SOA University, Bhubaneswar, Odisha. E-mail: [email protected]
Abstract - DC-DC power converters form a subset of electrical power converters which interface between the available source of electrical power and the utilization equipment. The need for this interface arises on account of the fact that in most situations the source of available power and the conditions under which the load demands power are incompatible with each other. This paper deals with the modeling, simulation and implementation of DC-DC converters. In the system under consideration, a prototype buck converter is developed and evaluated with a PI controller. The converter is modeled through the state space averaging technique with small signal transfer functions. The main advantage of the PI controller is its simplicity in design and implementation. But the limitation of these controllers is that the dynamic and steady state performances deteriorate if the loading conditions differ from the nominal conditions to a large extent. In this direction, a fuzzy controller is used which derives its knowledge base from the conventional PI controller, with heuristic knowledge incorporated through the tuning of rules, in order to obtain a controller that is robust to large changes in the parameters. It is found that less trial and error is needed in this method, but the design procedure is quite lengthy compared to PI. A fuzzy model reference learning controller is suggested for eliminating these trials and simplifying the design procedure of the fuzzy controller. The above techniques are first simulated in MATLAB and then experimentally verified through the digital signal processor TMS320LF2407A.

Key Words: Buck converter, proportional-integral control, Fuzzy controller, Fuzzy model reference learning controller, Digital signal processor (DSP)

I. INTRODUCTION

With the advent of commercial high speed switching devices, various control strategies are available for the design of DC-DC converters at present. The popular techniques mostly include average state space based dynamic models, duty ratio programmed control, current programmed techniques and soft switching converters. As a first step of this work, the well accepted PI controller is investigated for the prototype DC-DC buck converter. The transfer function obtained with the state space averaging technique is used for the controller implementation. The design of the PI controller is carried out using the SISOTOOL facility of MATLAB as per the desired specifications. But since switched mode DC-DC converters are non-linear and time variant systems, and do not lend themselves to the application of linear control theory, fuzzy controllers and fuzzy model reference learning controllers (FMRLC) are suggested in place of linear controllers. The FMRLC reduces the great amount of trial and error involved in the design of a fuzzy controller. The above controllers for the proposed converter are implemented on the TMS320LF2407A to obtain real time results.

II. STATE SPACE MODELING AND ANALYSIS OF BUCK CONVERTER

Figure 1: Step down or Buck Converter

Figure 1 shows the model of the buck converter with Vin as the input to the converter; L is the inductance in Henry of the inductor used in the buck converter, having RL as its internal resistance in Ohm; V0 is the output voltage of the buck converter; FD is the freewheeling diode; C is the capacitance of the capacitor in Farad; and R is the load resistor in Ohm. The corresponding state model is as follows:

[ diL/dt ]   [ -RL/L    -1/L    ] [ iL ]   [ 1/L ]
[ dvo/dt ] = [  1/C     -1/(RC) ] [ vo ] + [  0  ] · vin·d        (1)

where iL is the inductor current and d is the duty cycle. In order to get the dynamic model, a small signal perturbation is introduced:
vin = Vin + v̂in,   vo = Vo + v̂o,   iL = IL + îL,   d = D + d̂        (2)

Here the load is assumed to be purely resistive, and capital letters describe the converter operating point. The corresponding transfer function can be written as follows:

Gp(s) = v̂o(s)/d̂(s) = Vin / (a·s² + b·s + c)        (3)

where

a = L·C,   b = L/R + RL·C,   c = 1 + RL/R        (4)

A buck converter with the parameters shown in Table 1 is considered.

Table 1: Parameters of DC-DC buck converter
Input Voltage:            15 V
Output Voltage:           3.3 V
Load:                     2 Ω (nominal)
Inductor (L):             160 μH
Inductor resistance (rL): 0.15 Ω
Capacitor (C):            100 μF

Substituting the Table 1 parameters into equations (3) and (4), the transfer function is obtained as

Gp(s) = 9.375e008 / (s² + 5938·s + 6.719e007)        (5)

The root locus and bode plot of the uncompensated system are shown in Figure 2. The pink spots are the roots of the uncompensated system. The phase margin of the uncompensated system is found to be 11.5 deg at a gain crossover frequency of 3.14e+004 rad/sec.

Figure 2: Root Locus and Bode Plot of Uncompensated System (Buck Converter)

To reduce the overshoot and to eliminate the steady state error, a PI controller is designed for the following desired specifications: phase margin >= 55 deg, gain crossover frequency >= 30000 rad/sec, and steady state error (for unit step input) = 0. The root locus and bode plot of the compensated system are shown in Figure 3; the achieved phase margin is 55.6 deg at 9e+003 rad/sec.

Figure 3: Root Locus and Bode Plot of Compensated System (Buck Converter)

III. TEST SETUP

The prototype model is constructed at the control systems laboratory of Birla Institute of Technology, Ranchi, India with the following parameters:

Table 2: Prototype Model Design Parameters
Input voltage:            15 V DC
Output voltage:           3.0 V DC
Rated load:               2 Ω
Peak-peak voltage ripple: < 3.5 %
Switching frequency:      40 kHz
Inductor:                 160 μH, 0.15 Ω
Mosfet driver:            TLP 250
Diode:                    UF5407
Capacitor:                100 µF, 25 V
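As a quick check that equation (5) indeed follows from equations (3)-(4) and the Table 1 values, the normalized coefficients can be recomputed directly (Python used only as a calculator here):

L, RL, C, R, Vin = 160e-6, 0.15, 100e-6, 2.0, 15.0  # Table 1 values
a = L * C                   # equation (4)
b = L / R + RL * C
c = 1 + RL / R
print(Vin / a, b / a, c / a)  # ~9.375e8, ~5.94e3, ~6.72e7, matching equation (5)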
IV. HARDWARE SCHEME WITH PI CONTROLLER
Figure 4: Practical Implementation Scheme
V. EXPERIMENTAL RESULTS WITH PI CONTROLLER
The experimental results are shown in Figure 5. Figure 5(a) corresponds to the output voltage and corresponding PWM for a reference voltage of 3V and nominal load of 2Ω. Figure 5(b) shows the recovery period of 2 ms for an input voltage change from 12V to 18V. Figures 5(c) and 5(d) correspond to a small change in reference voltage and load current, respectively.
Figure 5: Experimental results for (a) Output voltage and
corresponding PWM
(b) Input voltage change from 12V to 18V (c)
Small reference change from 3V to 3.3V (d) Load change response
from 1.5A to 2.1A.
VI. STRUCTURE OF FUZZY LOGIC CONTROLLER
The structure of the proposed PI-like fuzzy knowledge base controller mainly consists of normalization, fuzzification, membership function definition, rule base, defuzzification and denormalization. The fuzzy control technique helps to incorporate heuristic knowledge on top of a knowledge base derived from the PI controller. The design procedure presented by Alexander Perry and P. C. Sen [6] with the fuzzy controller achieves good large-signal dynamic performance while preserving the small-signal performance/stability. The output voltage error with respect to the reference voltage and the change in error are selected as the input variables for the fuzzy controller:

e(k) = Vref − Vo        (6)

∆e(k) = e(k) − e(k−1)        (7)

where e(k) is the error and ∆e(k) is the change in error at the kth sample with sampling period Ts. The output of the FLC, an incremental change ∆u of the actuating signal, is obtained as:
u(k) = ∆u(k) + u(k − 1)        (8)

The continuous control output is obtained by using a zero order hold (ZOH) between samples. This controller will now incorporate the knowledge base of the continuous PI controller as follows. The PI controller Gc(s) with parameters a and K is given by

Gc(s) = U(s)/E(s) = K·(a·s + 1)/s        (9)

The discrete controller is obtained by using the bilinear transformation

s = (2/Ts)·(1 − z⁻¹)/(1 + z⁻¹)        (10)

Using equation (10) in equation (9), the discrete equivalent Gc(z) of Gc(s) is derived as

Gc(z) = U(z)/E(z) = (m·z + n)/(z − 1)        (11)

where the parameters m and n are given by

m = K·(a + Ts/2)        (12)

n = K·(Ts/2 − a)        (13)

The difference equation for (11) can now be expressed as

u(k) = (m + n)·e(k) − n·[e(k) − e(k−1)] + u(k−1)        (14)

The change in u(k), expressed as ∆u(k), is given by

∆u(k) = (m + n)·e(k) − n·∆e(k)        (15)

Table 3: Values of error from e1 to e9
e1 = −6, e2 = −1, e3 = −0.1, e4 = −0.016, e5 = 0, e6 = 0.016, e7 = 0.1, e8 = 1, e9 = 6

Table 4: Values of change in error from ∆e1 to ∆e9
∆e1 = −6, ∆e2 = −1, ∆e3 = −0.1, ∆e4 = −0.016, ∆e5 = 0, ∆e6 = 0.016, ∆e7 = 0.1, ∆e8 = 1, ∆e9 = 6

The rule table is given in Table 5. The rules and outputs were initialized by using the relationship given in equation (15). Each entry of the rule table gives the change of duty cycle ∆u when membership is full in both of the corresponding fuzzy sets in the rule antecedent.
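A minimal sketch of the resulting discrete control law of equations (11)-(15) follows (the gain K, zero location a and sampling period Ts below are hypothetical placeholders, not the tuned values of this design):

K, a, Ts = 0.5, 1e-4, 25e-6  # assumed controller gain, zero location, sampling period
m = K * (a + Ts / 2)         # equation (12)
n = K * (Ts / 2 - a)         # equation (13)

def pi_step(e, e_prev, u_prev):
    # incremental PI update: equations (14)-(15)
    du = (m + n) * e - n * (e - e_prev)
    return u_prev + du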
VII. SIMULATION RESULTS

Figure 6: (a) Startup response (b) Response to load change from 3A to 1.5A (c) Reference change from 3V to 1V (d) Reference change from 3.1V to 3V
VIII. HARDWARE RESULTS WITH FLC

Figure 7: Experimental results of FLC for large changes: (a) Reference change from 3V to 1V (b) Reference change from 1V to 3V (c) Load change from 1.5A to 3A (d) Load change from 3A to 1.5A
Table 5: Rule Table for Fuzzy Controller (rows: error sets A1–A9; columns: change-in-error sets B1–B9)

Error     B1        B2        B3        B4        B5        B6        B7        B8        B9
A1     -0.1562   -0.0624   -0.0455   -0.0440   -0.0436   -0.0433   -0.0418   -0.0249    0.0689
A2     -0.1199   -0.0260   -0.0092   -0.0076   -0.0073   -0.0070   -0.0054    0.0115    0.1053
A3     -0.1133   -0.0195   -0.0026   -0.0010   -0.0007   -0.0004    0.0011    0.0180    0.1118
A4     -0.1127   -0.0189   -0.0020   -0.0004   -0.0001    0.0002    0.0018    0.0186    0.1125
A5     -0.1126   -0.0188   -0.0019   -0.0003    0.0000    0.0003    0.0019    0.0188    0.1126
A6     -0.1125   -0.0186   -0.0018   -0.0002    0.0001    0.0004    0.0020    0.0189    0.1127
A7     -0.1118   -0.0180   -0.0011    0.0004    0.0007    0.0010    0.0026    0.0195    0.1133
A8     -0.1053   -0.0115    0.0054    0.0070    0.0073    0.0076    0.0092    0.0260    0.1199
A9     -0.0689    0.0249    0.0418    0.0433    0.0436    0.0440    0.0455    0.0624    0.1562
IX. CONCLUSION
Since the present work uses digital control, it will be free from ageing effects. Modifications for the desired performance can be carried out by changing the program, without any modification to the analog circuit.
REFERENCES
[1]. Chin Chang, "Robust Control of DC-DC Converters: The Buck Converter," IEEE, pp. 995–1057, 1994.
[2]. Texas Instruments, "TMS320LF/LC240XA DSP Controllers Systems and Peripherals User's Guide" (Literature Number: SPRU357B).
[3]. Texas Instruments, "Code Composer Getting Started Guide" (Literature Number: SPRU296).
[4]. Texas Instruments, "TMS320C1X/C2X/C2XX/C5X Assembly Language Tools Getting Started Guide" (Literature Number: SPRU018).
[5]. Mohan, Undeland, and Robbins, "Power Electronics: Converters, Applications and Design," John Wiley & Sons, 2003.
[6]. Alexander G. Perry, Guang Feng, Yan-Fei Liu, and Paresh C. Sen, "A Design Method for PI-Like Fuzzy Logic Controllers for DC-DC Converter," IEEE Trans. on Industrial Electronics, 54(5), October 2007.
[7]. Vitor Fernão Pires and José Fernando A. Silva, "Teaching Non-linear Modeling, Simulation, and Control of Electronic Power Converters using Matlab/Simulink," IEEE Transactions on Education, 45(3):253–261, August 2002.
[8]. V. Ramanarayanan, "Switched Mode Power Conversion," Dept. of Electrical Engineering, IISc Bangalore.
[9]. C. T. Rim, G. B. Young, and G. H. Cho, "A State Space Modeling of Non-ideal DC-DC Converters," PESC '88, pp. 943–950, April 1988.
[10]. W. C. So, C. K. Tse, and Y. S. Lee, "A Fuzzy Controller for DC-DC Converters," IEEE, pp. 315–320, 1994.
Dielectric behavior of β − lead fluoride
Y. Ranga Reddy
Vidya Bharathi Institute of Technology,
Pembarthy, Warangal Dist. A.P. India
E-mail: [email protected]
Abstract— A detailed study of the effect of temperature and frequency on the dielectric constant (ε) and loss (tan δ) of β-PbF2 was performed. The measurements were taken over the frequency range of 10^2 Hz to 5x10^7 Hz and the temperature range of -180 °C to 240 °C. The value of the static dielectric constant at room temperature is 28.00. The value of the ac conductivity is calculated from the relation σ = ε·ε0·ω·tan δ, where ε0 is the permittivity of free space and ω is the angular frequency. The activation energy for conduction in the intrinsic region of the plot of σ versus the reciprocal of temperature is calculated to be 0.92 eV.

Keywords- Dielectric constant, dielectric loss, electrical conductivity, activation energy.

I. INTRODUCTION

Lead fluoride can be found either in the cubic structure with four molecules per unit cell (β-PbF2) or in the orthorhombic phase (α-PbF2) at high temperatures [1]. The cubic phase becomes a 'superionic conductor' at high temperatures. The conductivity of β-PbF2 is one of the highest values for any known solid ionic conductor [2]. As it exhibits a variety of interesting physical properties, such as radiation resistance [3], high ionic conductivity at a relatively low temperature, an associated specific heat anomaly, and behavior as an extrinsic semiconductor [4], it has attracted considerable recent attention. Denham et al. [5] derived the dielectric properties of lead fluoride from experimental studies on infrared and Raman spectra. Direct measurement of dielectric properties has been reported by Axe et al. [6]. Samara [7] studied the effect of temperature and pressure on the dielectric properties of the cubic and orthorhombic modifications of lead fluoride over the range of 4 K to 350 K. A complex admittance study on β-PbF2 was done by Bonne and Schoonman [8]. Schoonman et al. [9] reported the ionic and electronic conductivity in a very limited temperature region (325 K–410 K). The measurement of the dielectric constant (ε) and loss (tan δ) of β-PbF2 over a wide range of frequency and at higher temperatures has not been reported so far.

In the present investigation, the dielectric properties of β-PbF2 have been measured in the temperature range of -180 °C to 240 °C and in the frequency range of 10^2 Hz to 5x10^7 Hz.
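The ac conductivity relation quoted above can be evaluated directly from measured ε and tan δ; a small sketch (Python; the tan δ value and frequency below are illustrative placeholders, not measured data):

import numpy as np

EPS0 = 8.854e-12  # permittivity of free space (F/m)

def ac_conductivity(eps_r, tan_delta, freq_hz):
    # sigma = eps_r * eps0 * omega * tan(delta)
    return eps_r * EPS0 * 2 * np.pi * freq_hz * tan_delta

# e.g. with the room-temperature static value eps_r = 28 and an assumed
# tan(delta) = 0.01 at 1 kHz:
print(ac_conductivity(28.0, 0.01, 1e3))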
II. EXPERIMENTAL

The model for cylindrical cup deep drawing was built in the DYNAFORM preprocessor, and the model for square cup deep drawing was constructed using SolidWorks and then converted into an FE mesh using the preprocessor DYNAFORM (Fig. 1). The blank was taken as a deformable body whilst the punch, the die and the blank holder were simulated as rigid bodies. The blank was simulated using four-node Belytschko-Tsay elements. The material characteristics of the punch, the die and the blank holder were the same. The punch is made to move into the die with a constant velocity. The force on the blank holder is kept constant. Three different yield criteria were used in the FE simulations.

For cylindrical cup drawing the blank was made of EDD steel and for square cup drawing the blank was made of IF steel. Different FE simulations were carried out by varying the BHF to find a safe range in which the cups can be successfully drawn, using two yield criteria. The BHF was varied from 500 N to 30000 N for cylindrical cups and from 500 N to 35000 N for square cups. The punch corner radius and die corner radius were varied from 2 mm to 12 mm in steps of 2 mm for cylindrical cup drawing. For square cup drawing the punch profile radius was varied from 2 mm to 14 mm in steps of 3 mm and the die profile radius was varied from 2 mm to 15 mm in steps of 3 mm in the simulations, to examine their impact on the LDR. Similarly, the coefficient of friction was varied from 0.05 to 0.25 in steps of 0.05 for both square and cylindrical cups. Four materials, namely DP980 steel, HSLA steel, DQ steel and DDQ steel, which have different normal anisotropy values, were used to study the effect of normal anisotropy on the LDR. The normal anisotropy values were taken in the range 0.8-3.0. The LDR of the different materials was compared with the results obtained analytically. In the simulations, to decide whether the blank material has failed or not, forming limit diagrams (FLD) were used. In the FLD, when the strain conditions cross the safe limit, it indicates that necking (i.e. strain concentrating at a localized region) has initiated.
The analytical relations to determine the LDR are:

Whitely's formula,        (1)

Leu's formula,        (2)

where f is the drawing efficiency, assumed to be 0.9, n is the strain hardening exponent, and r̄ is the normal anisotropy.

III. RESULTS AND DISCUSSION

A. Earing profile comparison for cylindrical cup
The earing profile obtained from the FE simulations using Hill's and Barlat's yield criteria was compared with the experimental profile [16], as shown in Fig. 2. From Fig. 2 it is difficult to find out which yield criterion predicts the ear profile more accurately. Hence, the percentage ear height with respect to the minimum cup height was determined. The percentage ear heights obtained from the experiment [16] and the different criteria have been compared in Fig. 3 for the drawn cup, based on minimum cup height. From Fig. 3 it is observed that Hill's yield criterion predicts the ear profile more accurately. The maximum percentage ear height was 14.22% in the experiment and 14.43% with Hill's yield criterion. However, earing was not observed in the case of the von Mises yield criterion, which is an isotropic yield criterion. Similar to the cylindrical cup drawing case, the percentage ear height with respect to minimum cup height has been determined for all the three cases and compared. It is observed that the FE simulations predict similar ear height (as well as percentage ear height) with both Barlat's and Hill's yield criteria. Hence it can be concluded that the geometry of the tooling is mainly responsible for ear formation in square cups.

B. LDR comparison using different yield criteria
The LDR obtained from FE simulations using Hill's, von Mises's and Barlat's yield criteria and the analytical relationships have been compared with the experimental results, as shown in Fig. 4. The LDR predicted by Barlat's yield criterion is very close to the experimental LDR. Whitely's formula and Leu's formula over-predicted the LDR because these relations do not take design parameters into account. Von Mises's yield criterion, an isotropic yield criterion, does not consider anisotropy while evaluating the LDR, so it predicted a lower value of LDR.

C. Determination of BHF range for LDR
A range of BHF is determined for the LDR, for the cylindrical cup using a blank of EDD steel and the square cup using a blank of IF steel, between which the cup can be drawn without wrinkling and tearing. The range of BHF is determined for the LDR based on both yield criteria. For the cylindrical cup the BHF range in which the cup can be drawn successfully is 500 N to 20000 N, Fig. 5.

D. Influence of design parameters like punch corner and die corner radius on LDR
Through simulations using Hill's and Barlat's yield criteria, the effect of punch and die corner radius on the LDR has been determined for square and cylindrical cups. It has been observed that the LDR increases to a maximum value with increase in punch corner radius and then becomes constant, as shown in Fig. 6. A similar trend is observed with increase in die corner radius, though the value at which the maximum LDR is observed differs for punch corner and die corner radius as well as for cup shape. Fig. 7 shows the variation of LDR with the die corner radius.

E. Influence of process parameters like coefficient of friction on LDR
It has been observed for cylindrical cup as well as square cup drawing, using Hill's yield criterion and Barlat's yield criterion, that with the increase in coefficient of friction the value of LDR decreases, though the pattern differs in the two cases, as shown in Fig. 8. Increasing friction restricts the material from being drawn into the die. Similar kinds of results were observed by G. C. M. Reddy et al. [3].

F. Influence of material properties like r and n-value on LDR
To determine the effect of the r-value on LDR, six materials having different r-values were used, including EDD and IF steel, and their LDRs were compared. Hill's yield criterion was used in the simulations to find the LDR for all materials. The LDR was also determined using the analytical formulae, Fig. 9. Fig. 10 shows the variation of LDR with n-values for the cylindrical cup. It was observed that the n-value does not have a very significant effect on the LDR.

G. Modification of blank shape to minimize earing
The initial blank shape was modified as shown in Fig. 11(a) for a circular blank of 82 mm to minimize earing. The % ear height was determined for the case of the initial circular blank and the modified blank (Fig. 11(b)), and the ear height decreased by 62.86%. This shows that if the modified blank is used instead of the circular blank to draw cylindrical cups, then a lot of material can be saved.

IV. CONCLUSIONS

1. FE simulations predict the LDR more accurately compared to analytical results. Barlat's yield criterion predicts the deep drawing behavior of EDD and IF steels better than Hill's yield criterion in terms of LDR prediction.
2. Planar anisotropy is responsible for ear formation in the deep drawn cups. Hill's yield criterion predicts a better ear profile of EDD steel for the cylindrical cup than Barlat's yield criterion. In case of square cups, the geometry of the tooling is mainly responsible for ear formation. Hill's and Barlat's yield criteria predicted the same ear profile in the square cup.
4. With increase in punch corner radius and die corner radius, the LDR increases, but the increase is more significant in case of the die corner radius. The LDR decreases linearly with the increase in friction. The LDR varies significantly with the r-value, but the strain hardening exponent does not affect it much.
5. It is possible to reduce the earing height of anisotropic sheets in deep drawing by using noncircular blanks, and for that a new approach was employed. The ear height decreased by 62.86% when the modified blank was deep drawn instead of the circular blank.
SIET, Dhenkanal, Odisha
National Conference on Recent Advance in Science and Technology(NCRAST), Sept. 30 - Oct. 1, 2011
blank holding force. Journal of materials processing
technology, 1147 (2001) 168-173.
12. M. Colgan, J. Monaghan, Deep drawing process:
analysis and experiment, Journal of Materials
Processing Technology 132 (2003) 35–41.
13. V. Savas, O. Secgin, An experimental investigation of
forming load and side-wall thickness obtained by a new
deep drawing die, Int J Mater Form (2010) 3:209–213.
14. C. Özek, M. Bal, The effect of die/blank holder and
punch radiuses on limit drawing ratio in angular deepdrawing dies, Int J Adv Manuf Technol (2009)
40:1077–1083.
15. R. Padmanabhan, M.C. Oliveira, J.L. Alves, L.F.
Menezes, Influence of process parameters on the deep
drawing of stainless steel, Finite Elements in Analysis
and Design 43 (2007) 1062 – 1067.
16. N. Kishore and D. Ravi Kumar, Optimization of initial
blank shape to minimize earing in deep drawing using
finite element method, Journal of Materials Processing
Technology , 130-131, (2002) 20-30.
17. H. Shim, K. Son, K. Kim, Optimum blank shape design
by sensitivity analysis, Journal of Materials Processing
Technology 104 (2000) 191-199.
18. V. Vahdat, S. Santhanam, Y.W. Chun, A numerical
investigation on the use of drawbeads to minimize ear
formation in deep drawing, Journal of Materials
Processing Technology 176 (2006) 70–76.
19. T.S. Yang and Y.C. Hsu, The Prediction of Earing and
the Design of Initial Shape of Blank in Cylindrical Cup
Drawing, Materials Science Forum, Vols. 532-533
(2006) pp 865-868.
20. T. S. Yang and R. F. Shyu, The design of blank's initial
shape in the near net-shape deep drawing of square cup,
Journal of Mechanical Science and Technology 21
(2007) 1585-1592.
21. V. Pegada, Y. Chun and S. Santhanam, An algorithm for
determining the optimal blank shape for the deep
drawing of Aluminum cups, Journal of Materials
Processing Technology, 125-126(2002) 743-750.
22. K. Son, H. Shim, Optimal blank shape design using
initial velocity of boundary nodes, Journal of Materials
Processing Technology 134 (2003) 92-98.
23. S. Kim, M. Park, S. Kim, D. Seo, Blank design and
formability for non-circular deep drawing processes by
the finite-element method, Journal of Materials
Processing Technology 75 (1998) 94–99.
24. B. Rambabu, Optimization of blank shape and
orientation in square cup deep drawing using FEM,
MTech Thesis, Department of Mechanical Engineering,
IIT Delhi, 2004
Barlat’s yield criterion predicted the same ear profile in
square cup.
4. With increase in punch corner radius and die corner
radius, the LDR increases but the increase is more
significant in case of die corner radius. The LDR decreases
linearly with the increase in friction. The LDR varies
significantly with value but strain hardening exponent does
not affect it much.
5. It is possible to reduce the earing height of anisotropic
sheets in deep drawing by using noncircular blanks and for
that a new approach was employed. The ear height
decreased by 62.86% when modified blank was chosen deep
drawn instead of circular blank.
REFERENCES
1. Z. Marciniak, J.L. Duncan, S. J. Hu: Mechanics of Sheet
Metal Forming, Butterworth-Heinemann, London,
2002.
2. D. Banabic, H.J. Bunge, K. Pohlandt, A.E. Tekkaya,
Formability of Metallic Materials, Springer, Germany,
2000.
3. W.F. Hosford, R.M. Caddell, Metal Forming Mechanics
and metallurgy, Cambridge University press, New
York, 2010.
4. M.M. Moshksar, A. Zamanian, Optimization of the tool
geometry in the deep drawing of aluminium, Journal of
Materials Processing Technology 72 (1997) 363–370.
5. D.H. Park, Y.M. Huh, S.S. Kang, Study on punch load of
non-axisymmetric deep drawing product according to
blank shape, Journal of Materials Processing
Technology, 130–131 (2002) 89–94.
6. G.C. Mohan Reddy, P.V.R. Ravindra Reddy, T.A.
Janardhan Reddy, Finite element analysis of the effect
of coefficient of friction on the drawability, Tribology
International 43 (2010) 1132–1137.
7. M.A. Ahmetoglu, G.K. Taylan Altan, Forming of
aluminum alloys-application of computer simulations
and blank holding force control, Journal of materials
processing technology, 71(1997) 147-151.
8. S. Zhang, K. Zhang, Z. Wang, C. Yu, Y. Xu and Q.
Wang, Research on Thermal Deep-drawing Technology
of Magnesium Alloy (AZ31B) Sheets ,J. Mater. Sci.
Technol., Vol.20 No.2, 2004.
9. Y.M. Huang and J.W. Cheng, Influence of lubricant on
limitation of formability of cylindrical cup-drawing,
Journal of Materials Processing Technology 63 (1997)
77-82.
10. M. Gavas, M. Izciler, Effect of blank holder gap on deep
drawing of square cups, Materials and Design 28
(2007) 1641–1646.
11. L. Gunnarsson, E. Schedin, Improving the properties of
exterior body panels in automobiles using variable
103
SIET, Dhenkanal, Odisha
National Conference on Recent Advance in Science and Technology(NCRAST), Sept. 30 - Oct. 1, 2011
Punch
Blank holder
Blank
Die
(2a)
(2b)
Fig. 1(a) Meshed cylindrical deep drawing set up and (b) square deep drawing set up
Fig. 2 Comparison of cup height for a DR of 2.16 (84.5 mm blank diameter)
[16]
104
SIET, Dhenkanal, Odisha
National Conference on Recent Advance in Science and Technology(NCRAST), Sept. 30 - Oct. 1, 2011
[16]
Fig. 3 Comparison of %ear height above the minimum cup height based on experiment and yield criterions.
[16,24]
Fig. 4 Comparison of LDR obtained through different methods for cylindrical cup drawing (EDD steel) and square cup (IF
steel)
105
SIET, Dhenkanal, Odisha
National Conference on Recent Advance in Science and Technology(NCRAST), Sept. 30 - Oct. 1, 2011
Wrinkling
BHF <500N
Successfully drawn cup
500N < BHF < 20000 N
tearing
BHF>20000N
Fig. 5 BHF range in which cylindrical cup (EDD steel) can be drawn safely based on Hill’s yield criterion.
Fig. 6 Variation of LDR with punch corner radius for cylindrical cup (EDD steel)
106
SIET, Dhenkanal, Odisha
National Conference on Recent Advance in Science and Technology(NCRAST), Sept. 30 - Oct. 1, 2011
Fig. 7 Variation of LDR with die corner radius for cylindrical cup (EDD steel)
Fig. 8 Variation of LDR with coefficient of friction for cylindrical cup drawing (EDD steel)
107
SIET, Dhenkanal, Odisha
National Conference on Recent Advance in Science and Technology(NCRAST), Sept. 30 - Oct. 1, 2011
Fig. 9 Comparison of LDR vs r-value predicted by the simulations and analytical equations
Fig. 10 Variation of LDR with n value (EDD steel)
108
SIET, Dhenkanal, Odisha
National Conference on Recent Advance in Science and Technology(NCRAST), Sept. 30 - Oct. 1, 2011
(a)
(b)
Fig. 11 (a) schematic of the modified blank (b) Comparison of %ear height above the minimum cup
height based on modified and circular blank.
109
SIET, Dhenkanal, Odisha
National Conference on Recent Advance in Science and Technology(NCRAST), Sept. 30 - Oct. 1, 2011
Finite Element Prediction of Formability and Earing Defect in Deep Drawing
Process
A. Shukla, S.K. Panda*
Department of Mechanical Engineering
Indian Institute of Technology- Kharagpur
Kharagpur, WB, India-721302
*email: [email protected]
Abstract— The deep drawing process is used extensively in the automobile, electrical, home appliance and aerospace industries. The occurrence of defects such as earing, wrinkling and cracks makes the deep drawing process less efficient in terms of material saving and productivity. Also, there is an increasing demand to draw deeper parts, which requires the formability of different materials to be assessed. Formability in the deep drawing process is measured in terms of the limiting drawing ratio (LDR). Earing and LDR are affected by various design, process and material parameters. In the present work, interstitial free (IF) and extra deep drawing (EDD) steels, which have potential applications in automotive part manufacturing industries, were used. The effect of the above-mentioned parameters on LDR and earing was studied using LS-DYNA, a finite element method (FEM) based software, by incorporating Hill's, Barlat's and von Mises's yield criteria while modeling both cylindrical and square cup drawing. The ear profiles predicted by these yield criteria were compared with experimental results, and it was found that Hill's yield criterion predicted the ear profile more accurately than Barlat's criterion in the case of the cylindrical cup. However, in the case of square cups both Hill's and Barlat's criteria predicted nearly the same ear profile. The FE-predicted LDR was compared with analytical predictions and experimental results from the available literature, and it was observed that the LDR predicted by Barlat's criterion was nearest to the experimental results. It was also observed that the LDR increased with increase in punch and die corner radius, but the increase was higher in the case of the latter. The LDR decreased linearly with increase in friction. A range of blank holding force was determined so that the cups can be drawn safely without wrinkling or tearing. The LDR was affected more significantly by planar anisotropy than by the strain hardening index. The initial blank shape was modified to obtain ear-free products, which yielded significant results during deep drawing.

Keywords- Deep drawing; earing; limiting drawing ratio; finite element method; sheet metal
I. INTRODUCTION

Deep drawing is a metal forming process in which a flat sheet metal, called the blank, is deformed into a cup-shaped component by pressing the central portion of the sheet into a die opening using a punch. In this process one of the principal strains in the plane of the component is positive and the other is negative, with a change in thickness [1]. The component may be circular, rectangular, or of a complex shape. A large variety of parts such as automotive fuel tanks, flashlights, cans, kitchen sinks, pans, cover cases of notebooks, cameras, mobile phones, electrical enclosures, brackets and connectors are made by deep drawing. However, several defects like wrinkling, tearing, earing and cracks are observed in the components during the process [2]. Wrinkling appears because of buckling of the sheet metal due to the compressive stress acting at the flange region. To avoid wrinkling, a force is generally applied on the flange region with the help of a blank holder; but very large values of blank holder force (BHF) can cause fracture in the product, either at the punch corner or at the die corner, depending on their corner radii. Ear-like wavy projections are formed due to uneven metal flow in different directions, which is primarily due to the presence of anisotropy in the sheet. Earing is highly undesirable because it not only adds a processing step of trimming, which causes loss of material, but the metal forming the ears also undergoes deformation, which demands extra load and work.

Formability [3] can be defined as the ease with which a metal can be formed into a desired shape. The limiting drawing ratio (LDR) is a measure of drawability: it is defined as the ratio of the maximum blank diameter that can be drawn into a complete cup without cracks or wrinkles to the punch diameter. It is generally affected by design features like punch corner radius, die corner radius and clearance between die and punch; material parameters like strain hardening exponent (n) and anisotropy; and process parameters like friction, BHF and temperature. Moshksar and Zamanian [4] investigated the effect of punch and die profile radius on the drawing load and formability of aluminum sheet metal and determined a suitable range of punch and die radius between which cups can be drawn successfully. Park et al. [5] investigated the effect of the profile radius of the tools and of blank shapes on the formability of the non-axisymmetric deep drawing process. Reddy et al. [6] studied the effect of the coefficient of friction on deep drawing using experiments as well as simulations; they concluded that the LDR decreases linearly with increasing coefficient of friction. Ahmetoglu et al. [7] investigated the effect of process parameters such as initial blank shape and BHF on the final part quality in deep drawing. Zhang et al. [8] performed experiments on thermal deep drawing of magnesium alloy sheets to obtain the optimum forming temperature range. Huang and Cheng [9] investigated the effect on the drawing force and thickness distribution of two lubricants used in industry, namely solid zinc stearate and liquid press oil, in deep drawing. Gavas and Izciler [10] studied the effect of the blank holding gap (BHG) on deep drawing of square cups of aluminum by experiments. Gunnarsson et al. [11] used three different BHF configurations, taking exterior body panels used in the automotive industry as the sheet metal for drawing. Colgan and Monaghan [12] varied parameters like punch and die radius, punch velocity, clamping force, friction and draw depth to determine the most influential factor in deep drawing; they observed that the smaller the die radius, the higher the drawing force induced and the greater the overall thinning of the cup sidewall. Savas and Secgin [13] examined the effect of blank holder and die shapes on the deep drawing process. Özek and Bal [14] investigated the effect of the die/blank holder angle and the radii of die and punch on the LDR with the help of experiments. Padmanabhan et al. [15] determined the proportional contribution of three important parameters, namely die radius, blank holder force and friction coefficient, using FEM with the Taguchi technique; they concluded that the die radius has the major influence on the deep drawing process, followed by the friction coefficient and the blank holder force. Kishore and Ravi Kumar [16] studied the earing problem in deep drawing of cylindrical cups; they also determined the LDR from experiments, simulations and theory, and compared them. Shim, Son and Kim [17] proposed a method of blank shape design based on sensitivity analysis for the non-circular deep drawing process. Vahdat et al. [18] used the concept of drawbeads to minimize ear formation in the deep drawing process. Yang and Hsu [19] utilized an FEM-based method to investigate earing in deep drawing and used a reverse forming approach to modify the initial blank shape so as to avoid earing. Yang and Shyu [20] used an FEM-based approach, also based on reverse forming, to optimize the blank shape for the square cup deep drawing process. Pegada et al. [21] optimized the blank shape using finite element analysis and developed an algorithm for it. Son and Shim [22] proposed a new method, named initial nodal velocity (INOV), to find the optimal blank design for arbitrarily shaped cups. Kim et al. [23] proposed a new method of determining an optimum blank shape for non-circular deep drawing processes with the FEM, in which the ideal cup shape with uniform wall height is assumed and the metal flow is traced backwards; the work was extended to square cups by Rambabu [24]. In all such approaches it is important to predict the ear height before performing the drawing operation.

The LDR determined analytically does not take the design parameters into account, and so the LDR predicted by experiments and by analytical relationships differs. Finite element (FE) simulations, however, can predict the LDR considering these parameters, based on different isotropic and anisotropic yield criteria, and can also predict the ear profile for both cylindrical and square cups. It is observed from the literature that not ample work has been done to determine the effect of the various parameters (punch corner radius, die corner radius, BHF, friction, normal anisotropy) on the LDR of IF steel and EDD steel in deep drawing. Hence FE simulation is used here to look in detail into the forming behavior of the blank during deep drawing.
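As a quick numerical illustration of the LDR definition above, consider the DR = 2.16 case of Fig. 2; the punch diameter below is our back-calculated assumption, not a value reported by the authors.

# Worked example of the LDR definition: maximum safely drawn blank diameter
# divided by the punch diameter. Values are assumed for illustration.
d_blank_max = 84.5           # mm, largest blank drawn without cracks or wrinkles (assumed)
d_punch = 84.5 / 2.16        # mm, implied punch diameter, about 39.1 mm (assumed)
ldr = d_blank_max / d_punch
print(f"LDR = {ldr:.2f}")    # 2.16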
II. METHODOLOGY
The model for cylindrical cup deep drawing was built in the DYNAFORM preprocessor, while the model for square cup deep drawing was constructed in SolidWorks and then converted into an FE mesh using the DYNAFORM preprocessor (Fig. 1). The blank was taken as a deformable body, whilst the punch, the die and the blank holder were simulated as rigid bodies. The blank was meshed with four-node Belytschko-Tsay elements. The material characteristics of the punch, the die and the blank holder were the same. The punch was made to move into the die with a constant velocity, and the force on the blank holder was kept constant. Three different yield criteria were used in the FE simulations.
For cylindrical cup drawing the blank was made of EDD steel, and for square cup drawing the blank was made of IF steel. Different FE simulations were carried out by varying the BHF, using two yield criteria, to find a safe range in which the cups can be successfully drawn. The BHF was varied from 500 N to 30000 N for cylindrical cups and from 500 N to 35000 N for square cups. The punch corner radius and die corner radius were varied from 2 mm to 12 mm in steps of 2 mm for cylindrical cup drawing. For square cup drawing, the punch profile radius was varied from 2 mm to 14 mm in steps of 3 mm and the die profile radius from 2 mm to 15 mm in steps of 3 mm, to examine their impact on the LDR. Similarly, the coefficient of friction was varied from 0.05 to 0.25 in steps of 0.05 for both square and cylindrical cups. Four materials, namely DP980 steel, HSLA steel, DQ steel and DDQ steel, which have different normal anisotropy values, were used to study the effect of normal anisotropy on the LDR; the normal anisotropy values were taken in the range 0.8-3.0. The LDR of the different materials was compared with the results obtained analytically. To decide in the simulations whether the blank material has failed or not, the forming limit diagram (FLD) was used: when the strain state crosses the safe limit of the FLD, it indicates that necking (i.e. strain concentrating at a localized region) has initiated.
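To give a feel for the scale of the sweep described above, the short Python sketch below simply enumerates the cylindrical-cup simulation matrix; the BHF step size is our assumption (the text gives only the range), and the actual runs were LS-DYNA FE simulations, not Python.

# Illustrative enumeration of the cylindrical-cup parameter matrix.
import itertools

bhf_n = range(500, 30001, 2500)             # blank holder force, N (assumed step)
punch_radius_mm = range(2, 13, 2)           # punch corner radius, mm
die_radius_mm = range(2, 13, 2)             # die corner radius, mm
friction = [0.05, 0.10, 0.15, 0.20, 0.25]   # coefficient of friction

cases = list(itertools.product(bhf_n, punch_radius_mm, die_radius_mm, friction))
print(f"{len(cases)} cylindrical-cup parameter combinations")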
Analytical relations to determine the LDR are Whitely's formula (1) and Leu's formula (2), where f is the drawing efficiency (assumed to be 0.9), n is the strain hardening exponent and r̄ is the normal anisotropy.
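For reference, Whitely's relation is commonly written in the following standard form (see, e.g., Hosford and Caddell [3]); the exact expressions for Eqs. (1) and (2) were not recoverable from the transcription, so this is offered only as the usual textbook statement, with f the drawing efficiency and r̄ the normal anisotropy:

% Whitely's formula for the limiting drawing ratio (standard textbook form, assumed)
\mathrm{LDR} = \exp\left( f \sqrt{\frac{\bar{r} + 1}{2}} \right)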
III. RESULTS AND DISCUSSION
A. Earing profile comparison for cylindrical cup
The earing profile obtained from the FE simulations using Hill's and Barlat's yield criteria was compared with the experimental profile [16], as shown in Fig. 2. From Fig. 2 it is difficult to decide which yield criterion predicts the ear profile more accurately; hence, the percentage ear height with respect to the minimum cup height was determined. The percentage ear heights obtained from the experiment [16] and from the different criteria are compared in Fig. 3. From Fig. 3 it is observed that Hill's yield criterion predicts the ear profile more accurately: the maximum percentage ear height was 14.22% in the experiment and 14.43% with Hill's yield criterion. However, earing was not observed with the von Mises yield criterion, which is an isotropic yield criterion. Similar to the cylindrical cup drawing case, the percentage ear height with respect to the minimum cup height was determined for the square cup for all three criteria and compared. It is observed that the FE simulations predict similar ear heights (as well as percentage ear heights) with both Barlat's and Hill's yield criteria. Hence it can be concluded that the geometry of the tooling is mainly responsible for ear formation in square cups.
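Reading "percentage ear height with respect to minimum cup height" as (h_max - h_min)/h_min x 100, a minimal sketch of the metric is given below; the rim heights are hypothetical values for a cup with four ears.

# Percentage ear height relative to the minimum cup height (hypothetical data).
h_rim = [31.2, 27.3, 31.0, 27.4, 31.1, 27.3, 30.9, 27.2]  # mm, heights around the rim
pct_ear_height = (max(h_rim) - min(h_rim)) / min(h_rim) * 100.0
print(f"percentage ear height = {pct_ear_height:.2f} %")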
B. LDR comparison using different yield criteria
The LDR obtained from FE simulations using Hill's, von Mises's and Barlat's yield criteria and from analytical relationships has been compared with the experimental results, as shown in Fig. 4. The LDR predicted by Barlat's yield criterion is very close to the experimental LDR. Whitely's formula and Leu's formula over-predicted the LDR because these relations do not take the design parameters into account. The von Mises yield criterion, being isotropic, does not consider anisotropy while evaluating the LDR, so it predicted a lower value of LDR.

C. Determination of BHF range for LDR
A range of BHF is determined at the LDR, for the cylindrical cup using a blank of EDD steel and for the square cup using a blank of IF steel, between which the cup can be drawn without wrinkling and tearing. The range of BHF is determined for the LDR based on both yield criteria. For the cylindrical cup the BHF range in which the cup can be drawn successfully is 500 N to 20000 N, Fig. 5.

D. Influence of design parameters like punch corner and die corner radius on LDR
Through simulations using Hill's and Barlat's yield criteria, the effect of punch and die corner radius on the LDR has been determined for square and cylindrical cups. It has been observed that the LDR increases to a maximum value with increase in punch corner radius and then becomes constant, as shown in Fig. 6. A similar trend is observed with increase in die corner radius, though the radius at which the maximum LDR occurs differs between punch corner and die corner as well as between cup shapes. Fig. 7 shows the variation of LDR with the die corner radius.

E. Influence of process parameters like coefficient of friction on LDR
It has been observed for both cylindrical and square cup drawing, using Hill's and Barlat's yield criteria, that the LDR decreases with increase in the coefficient of friction, though the pattern differs in the two cases, as shown in Fig. 8. Increasing friction restricts the material from being drawn into the die. Similar results were observed by Reddy et al. [6].
F. Influence of material properties like r- and n-value on LDR
To determine the effect of r-value on LDR, six materials having different r-values were used, including EDD and IF steel, and their LDRs were compared. Hill's yield criterion was used in the simulations to find the LDR for all materials; the LDR was also determined using the analytical formulae, Fig. 9. Fig. 10 shows the variation of LDR with n-value for the cylindrical cup. It was observed that the n-value does not have a very significant effect on the LDR.
G. Modification of blank shape to minimize earing
The initial blank shape was modified as shown in Fig. 11(a) for a circular blank of 82 mm to minimize earing. The % ear height was determined for the initial circular blank and for the modified blank (Fig. 11(b)); the ear height decreased by 62.86%. This shows that if the modified blank is used instead of the circular blank to draw cylindrical cups, a lot of material can be saved.
IV. CONCLUSIONS
1. FE simulations predict the LDR more accurately than the analytical results. Barlat's yield criterion predicts the deep drawing behavior of EDD and IF steels better than Hill's yield criterion in terms of LDR prediction.
2. Planar anisotropy is responsible for ear formation in deep drawn cups. Hill's yield criterion predicts the ear profile of EDD steel for the cylindrical cup better than Barlat's yield criterion. In the case of square cups, the geometry of the tooling is mainly responsible for ear formation; Hill's and Barlat's yield criteria predicted the same ear profile in the square cup.
3. A range of BHF exists within which the cups can be drawn safely without wrinkling or tearing; for the cylindrical cup of EDD steel this range is 500 N to 20000 N.
4. With increase in punch corner radius and die corner radius the LDR increases, but the increase is more significant in the case of the die corner radius. The LDR decreases linearly with increase in friction. The LDR varies significantly with the r-value (normal anisotropy), but the strain hardening exponent does not affect it much.
5. It is possible to reduce the earing height of anisotropic sheets in deep drawing by using non-circular blanks, and a new approach was employed for that. The ear height decreased by 62.86% when the modified blank was deep drawn instead of the circular blank.
REFERENCES
1. Z. Marciniak, J.L. Duncan, S.J. Hu, Mechanics of Sheet Metal Forming, Butterworth-Heinemann, London, 2002.
2. D. Banabic, H.J. Bunge, K. Pohlandt, A.E. Tekkaya, Formability of Metallic Materials, Springer, Germany, 2000.
3. W.F. Hosford, R.M. Caddell, Metal Forming: Mechanics and Metallurgy, Cambridge University Press, New York, 2010.
4. M.M. Moshksar, A. Zamanian, Optimization of the tool geometry in the deep drawing of aluminium, Journal of Materials Processing Technology 72 (1997) 363-370.
5. D.H. Park, Y.M. Huh, S.S. Kang, Study on punch load of non-axisymmetric deep drawing product according to blank shape, Journal of Materials Processing Technology 130-131 (2002) 89-94.
6. G.C. Mohan Reddy, P.V.R. Ravindra Reddy, T.A. Janardhan Reddy, Finite element analysis of the effect of coefficient of friction on the drawability, Tribology International 43 (2010) 1132-1137.
7. M.A. Ahmetoglu, G.K. Taylan Altan, Forming of aluminum alloys - application of computer simulations and blank holding force control, Journal of Materials Processing Technology 71 (1997) 147-151.
8. S. Zhang, K. Zhang, Z. Wang, C. Yu, Y. Xu, Q. Wang, Research on thermal deep-drawing technology of magnesium alloy (AZ31B) sheets, J. Mater. Sci. Technol. 20(2) (2004).
9. Y.M. Huang, J.W. Cheng, Influence of lubricant on limitation of formability of cylindrical cup-drawing, Journal of Materials Processing Technology 63 (1997) 77-82.
10. M. Gavas, M. Izciler, Effect of blank holder gap on deep drawing of square cups, Materials and Design 28 (2007) 1641-1646.
11. L. Gunnarsson, E. Schedin, Improving the properties of exterior body panels in automobiles using variable blank holding force, Journal of Materials Processing Technology 114 (2001) 168-173.
12. M. Colgan, J. Monaghan, Deep drawing process: analysis and experiment, Journal of Materials Processing Technology 132 (2003) 35-41.
13. V. Savas, O. Secgin, An experimental investigation of forming load and side-wall thickness obtained by a new deep drawing die, Int. J. Mater. Form. 3 (2010) 209-213.
14. C. Özek, M. Bal, The effect of die/blank holder and punch radiuses on limit drawing ratio in angular deep-drawing dies, Int. J. Adv. Manuf. Technol. 40 (2009) 1077-1083.
15. R. Padmanabhan, M.C. Oliveira, J.L. Alves, L.F. Menezes, Influence of process parameters on the deep drawing of stainless steel, Finite Elements in Analysis and Design 43 (2007) 1062-1067.
16. N. Kishore, D. Ravi Kumar, Optimization of initial blank shape to minimize earing in deep drawing using finite element method, Journal of Materials Processing Technology 130-131 (2002) 20-30.
17. H. Shim, K. Son, K. Kim, Optimum blank shape design by sensitivity analysis, Journal of Materials Processing Technology 104 (2000) 191-199.
18. V. Vahdat, S. Santhanam, Y.W. Chun, A numerical investigation on the use of drawbeads to minimize ear formation in deep drawing, Journal of Materials Processing Technology 176 (2006) 70-76.
19. T.S. Yang, Y.C. Hsu, The prediction of earing and the design of initial shape of blank in cylindrical cup drawing, Materials Science Forum 532-533 (2006) 865-868.
20. T.S. Yang, R.F. Shyu, The design of blank's initial shape in the near net-shape deep drawing of square cup, Journal of Mechanical Science and Technology 21 (2007) 1585-1592.
21. V. Pegada, Y. Chun, S. Santhanam, An algorithm for determining the optimal blank shape for the deep drawing of aluminum cups, Journal of Materials Processing Technology 125-126 (2002) 743-750.
22. K. Son, H. Shim, Optimal blank shape design using initial velocity of boundary nodes, Journal of Materials Processing Technology 134 (2003) 92-98.
23. S. Kim, M. Park, S. Kim, D. Seo, Blank design and formability for non-circular deep drawing processes by the finite-element method, Journal of Materials Processing Technology 75 (1998) 94-99.
24. B. Rambabu, Optimization of blank shape and orientation in square cup deep drawing using FEM, M.Tech Thesis, Department of Mechanical Engineering, IIT Delhi, 2004.
Fig. 1 (a) Meshed cylindrical deep drawing set-up and (b) square deep drawing set-up (punch, blank holder, blank and die).
Fig. 2 Comparison of cup height for a DR of 2.16 (84.5 mm blank diameter) [16].
Fig. 3 Comparison of % ear height above the minimum cup height based on experiment [16] and yield criteria.
Fig. 4 Comparison of LDR obtained through different methods for cylindrical cup drawing (EDD steel) and square cup (IF steel) [16, 24].
Fig. 5 BHF range in which the cylindrical cup (EDD steel) can be drawn safely based on Hill's yield criterion: wrinkling for BHF < 500 N, successfully drawn cup for 500 N < BHF < 20000 N, tearing for BHF > 20000 N.
Fig. 6 Variation of LDR with punch corner radius for cylindrical cup (EDD steel).
Fig. 7 Variation of LDR with die corner radius for cylindrical cup (EDD steel).
Fig. 8 Variation of LDR with coefficient of friction for cylindrical cup drawing (EDD steel).
Fig. 9 Comparison of LDR vs. r-value predicted by the simulations and analytical equations.
Fig. 10 Variation of LDR with n-value (EDD steel).
Fig. 11 (a) Schematic of the modified blank and (b) comparison of % ear height above the minimum cup height for the modified and circular blanks.
SOLID PARTICLE EROSION OF CHOPPED LANTANA-CAMARA FIBER
REINFORCED POLYMER MATRIX COMPOSITE.
Dr. Chittaranjan Deo*, Associate Professor
Mr. Biswajit Mishra, Assistant Professor
Department of Mechanical Engineering
Synergy Institute of Engineering &Technology
Dhenkanal-759001, Odisha.
Abstract - The present investigation reports the solid particle erosion behaviour of randomly oriented short Lantana-Camara fiber reinforced polymer composites (LCRPCs), using silica sand particles (200 ± 50 μm) as the erodent. The erosion rates of these composites have been evaluated at different impingement angles (15°-90°) and impact velocities (48 m/s-109 m/s) with a constant feed rate of erodent (1.467 ± 0.02 g/min). The highest wear rates were found at an impingement angle of 45°. The erosive wear rate was found to have a close relationship with the impingement angle of the erodent and the particle speed. The morphology of the eroded surfaces was examined using scanning electron microscopy (SEM), and possible erosion mechanisms are discussed.

Keywords: Solid particle erosion; Lantana-Camara; Composites; Wear mechanism; Scanning electron microscope.

I. INTRODUCTION

Polymer composites have been used for a long time, with an increasing demand in various engineering fields, due to their high specific mechanical properties as compared to other conventional materials. There are a number of application areas for these composites. One such area is tribo-applications such as bearings and gears, where liquid lubricants cannot be used because of various constraints [1]. Apart from the adhesive wear mode, some polymers and composites have exhibited excellent tribo-potential in other wear situations such as abrasive, fretting, reciprocating and erosive wear [2]. These composites are also used in applications such as pipelines carrying sand slurries in petroleum refining, helicopter rotor blades, pump impellers in mineral slurry processing, high speed vehicles and aircraft operating in desert environments, radomes and surfing boats, where the components encounter the impact of abrasives like dust, sand, splinters of materials and slurries of solid particles, and consequently the material undergoes erosive wear [3]. It is well established that the erosion resistance of a polymer is low in comparison to monolithic materials, and that when reinforced its resistance usually becomes higher than that of the un-reinforced polymer [4-5].

Visualizing the importance of polymeric composites, a lot of work has been done to evaluate various types of polymers and their composites under solid particle erosion [6-10]. Most of these workers have examined a wide range of thermoset and thermoplastic PMCs having glass, carbon, graphite and Kevlar fibers in the form of tape, fabric and chopped mat as reinforcement. However, there is no information available on the erosion wear behaviour of natural fiber composites, and only little information on the tribological behaviour of BFRPC [9]. Various researchers have correlated the erosion rate of composites with some important factors, such as properties of the target material, testing environment, operating parameters, and properties of the erodent [11, 12]. Hence, in the present work an attempt has been made to study the erosive wear behaviour of Lantana-Camara reinforced polymer composite.

II. EXPERIMENTAL TECHNIQUE

2.1. Test materials
The chopped Lantana-Camara fibers, collected locally, are reinforced in epoxy resin (Araldite LY556, supplied by Ciba-Geigy of India Ltd.) to prepare the composites. The composite slabs are made by the conventional hand lay-up technique in a Perspex sheet mold (dimensions 130 x 100 x 6 mm). Ten percent of hardener HY 951 is mixed into the resin prior to reinforcement. Four composites of different Lantana-Camara fiber weight fractions (10, 20, 30 and 40 wt.%) are fabricated. The castings are put under load for about 72 hrs for proper curing at room temperature. Specimens of suitable dimensions (30 mm x 30 mm x 3.0 mm thickness) were cut for the erosion tests using a diamond cutter.
2.2. Wear test and measurement
The schematic of the erosion test apparatus, as per ASTM G76, is shown in Fig. 1. The rig consists of an air compressor, a particle feeder, and an air-particle mixing and accelerating chamber. The compressed dry air is mixed with the particles, which are fed at a constant rate from a conveyor belt-type feeder into the mixing chamber; the mixture is then accelerated by passing it through a tungsten carbide converging nozzle of 4 mm diameter. These accelerated particles impact the specimen, and the specimen
could be held at various angles with respect to the impacting particles using an adjustable sample holder. The impact velocities of the erodent particles were determined experimentally using the rotating double disc method developed by Ives and Ruff [13].
The conditions under which the erosion tests were carried out are listed in Table 1. A standard test procedure was employed for each erosion test. The samples were cleaned in acetone, dried, weighed to an accuracy of 1×10⁻³ g using an electronic balance, eroded in the test rig for 15 min at a given impingement angle, and then weighed again to determine the weight loss (Wc). The ratio of this weight loss to the weight of the eroding particles (Ws) causing the loss (i.e. testing time × particle feed rate) is then computed as the dimensionless incremental erosion rate. This procedure is repeated till the erosion rate attains a constant steady-state value. The erosion rate is thus defined as the weight loss of the specimen due to erosion divided by the weight of the erodent causing the loss.
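A minimal sketch of this computation is given below; the feed rate and test duration are from the text, while the measured weight loss is a hypothetical value.

# Dimensionless incremental erosion rate: specimen mass loss divided by the
# mass of erodent that caused it (testing time x particle feed rate).
test_time_min = 15.0
feed_rate_g_min = 1.467                   # g/min, erodent feed rate (Table 1)
w_c = 0.012                               # g, specimen weight loss (hypothetical)
w_s = test_time_min * feed_rate_g_min     # g, mass of erodent delivered
print(f"incremental erosion rate = {w_c / w_s:.2e} g/g")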
2.4. Micro-hardness test
Micro-hardness measurement is done using Leco's Vickers hardness tester, equipped with a square-based pyramidal diamond indenter (angle 136° between opposite faces), under loads ranging from 0.3 N to 3 N.
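For orientation, Vickers hardness for such a 136° indenter follows the standard relation HV ≈ 0.1891 F/d², with F in newtons and d the mean indent diagonal in mm; the load and diagonal in the sketch below are hypothetical, chosen only to land near the hardness values reported later in Fig. 2.

# Vickers hardness from indentation load and mean diagonal (standard relation).
def vickers_hv(force_n, diagonal_mm):
    return 0.1891 * force_n / diagonal_mm ** 2

print(f"HV = {vickers_hv(1.0, 0.105):.1f}")  # ~17 for these assumed values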
III. RESULTS AND DISCUSSION
Fig. 2 shows the micro-hardness values of the composites with different fiber loading. It is seen from the plot that a marginal decrease in hardness occurs with increase in fiber loading, except at 40% fiber loading. The hardness values increase slightly with load up to 1 N and then remain almost unaltered up to 3 N.
Fig. 3(a)-(d) shows the erosion rate for the different fiber loadings as a function of the angle of impingement. As expected, the wear rate of the samples was remarkably higher at higher particle speed: particles have higher kinetic energy at higher velocity, which results in a greater impingement effect and hence more wear. It is evident from the plots that the erosion rate increases with increase in impact angle, attains a peak value (αmax) at 45° and is minimum (αmin) at 90°. (The samples could not be studied at 15° because samples of the required size were unavailable.) Ductile materials typically show maximum erosion at low impingement angles (15°-30°), whereas brittle materials show maximum erosion under normal impingement (90°). The present reinforced composite exhibits semi-ductile behaviour, with maximum erosion occurring in the angle range 45°-60°. As evident from the literature and pointed out by Rattan et al. [14], there is no fixed trend correlating the ductility or brittleness of a material with αmax or αmin. Thermoplastics generally exhibit a more ductile response than thermosets [15]. Sarı et al. [16] reported that unidirectional carbon fiber reinforced PEI composite shows semi-ductile behaviour in low-particle-speed erosive studies. The highest wear rates here were found at 45°, so the results of our study support those of previous studies. Therefore it can be concluded that Lantana-Camara reinforced polymer composite shows semi-ductile behaviour.
Table 1: Test parameters
Erodent: silica sand
Erodent size (μm): 200 ± 50
Erodent shape: angular
Hardness of silica particles (HV): 1420 ± 50
Impingement angle (α, °): 30, 45, 60, 90
Impact velocity (m/s): 48, 70, 82, 109
Erodent feed rate (g/min): 1.467 ± 0.02
Test temperature: 27°C
Nozzle to sample distance (mm): 10

Fig. 1 Details of erosion test rig: (1) sand hopper, (2) conveyor belt system for sand flow, (3) pressure transducer, (4) particle-air mixing chamber, (5) nozzle, (6) X-Y and h axes assembly, (7) sample holder.
Fig. 2 Micro-hardness (HV) as a function of load for composites of different fiber loading (10%, 20%, 30% and 40%).
Fig. 3 Erosion rate as a function of impingement angle for different particle speeds (48, 70, 82 and 109 m/s) at (a) 10%, (b) 20%, (c) 30% and (d) 40% fiber loading.

IV. SEM STUDIES
To characterize the morphology of the as-received and eroded surfaces and the mode of material removal, the eroded samples were observed under the scanning electron microscope. Fig. 4(a) shows the composite eroded at a 60° impingement angle. It can be seen from the surface of the sample that material removal is mainly due to micro-cutting and micro-ploughing. Fig. 4(b) shows the micrograph of a surface eroded at an impingement angle of 45° with higher particle speed. It is well known that the fibers in a composite subjected to particle erosion encounter intensive debonding and breakage when they are not supported enough by the matrix. The continuous impingement of silica sand bends and breaks the fibers because of the formation of cracks perpendicular to their length. The bending of the fibers becomes possible because of softening of the surrounding matrix, which in turn lowers the strength of the surrounding fibers.
Fig. 4 SEM micrographs of surfaces eroded at (a) 60° and (b) 45° impingement angle.

V. CONCLUSION
Based on this study of the solid particle erosion of Lantana-Camara fiber reinforced epoxy composites at various impingement angles and impact velocities with constant mass of erodent, the following conclusions can be drawn:
1. The influence of impingement angle on the erosive wear of the composites under consideration exhibits semi-ductile erosive wear behaviour, with maximum wear rate at a 45° impingement angle.
2. The erosion rate of the composites increases with increase of fiber content and increase of impact velocity.
3. SEM studies of the worn surfaces support the involved mechanisms, indicating micro-cracking, sand particle embedment, chip formation, exposure of fibers, fiber cracking and removal of fibers.
REFERENCES
[1] J.K. Lancaster, in: K. Friedrich (Ed.), Friction and Wear of Polymer
Composites, Composite Materials Science Series I, Elsevier,
Amsterdam, 1986, pp. 363–396.
[2] J. Bijwe, M. Fahim, in: H.S. Nalwa (Ed.), Hand Book of Advanced
Functional Molecules and Polymers, Gordon and Breach, London,
Tokyo, Japan, 2000 (in press).
[3] Rajesh J.J., Bijwe J., Tewari U.S. and Venkataraman, Erosive wear behavior of various polyamides, Wear, 2001, Vol. 249, pp. 702-714.
[4] Roy M., Vishwanathan B., Sundararajan G., The solid particle
erosion of polymer matrix composites. Wear, 1994, Vol. 171, pp.
149-161.
[5] Hager A., Friedrich K., Dzenis Y.A., Paipetis S.A., Study of erosion wear of advanced polymer composites. In: Street K, editor. ICCM-10 Conference proceedings, Whistler, BC, Canada. Cambridge (UK): Woodhead Publishing; 1995, pp. 155-162.
[6] Harsha A. P., Thakre Avinash A., Investigation on solid particle
erosion behaviour of polyetherimide and its composites, Wear, 2007,
Vol. 262, pp. 807-818.
[7] Bijwe J., Indumathi J., John Rajesh J., Fahim M., Friction and wear
behavior of polyetherimide composites in various wear modes, Wear,
2001, Vol. 249, pp. 715-726.
[8] Bijwe J., Indumathi J., Ghose A.K., On the abrasive wear behavior of
fabric-reinforced polyetherimide composites, Wear, 2002, Vol. 253,
pp. 768-777.
[9] Ei-Tayeb N.S.M., A study on the potential of sugarcane
fibers/polyester composite for tribological applications, Wear, 2008,
Vol. 265, pp. 223-235.
[10] Tewari US, Harsha AP, Hager AM, Friedrich K. Solid particle
erosion of carbon fibre– and glass fibre–epoxy composites. Compos
Sci Technol 2003;63:549–57
[11] Tewari US, Harsha AP, Häger AM, Friedrich K. Solid particle erosion of unidirectional carbon fibre reinforced polyetheretherketone composites. Wear 2002;252:992-1000.
[12] Bhushan B. Principles and applications of tribology. New York:
Wiley; 1999.
[13] A. W. Ruff and L. K. Ives, Measurement of solid particle velocity in
erosive wear, Wear, 35 (1975) 195 - 199.
[14] Rattan R., Bijwe Jayashree. Influence of impingement angle on solid
particle erosion of carbon fabric reinforced polyetherimide composite,
Wear, 2007, Vol. 262, pp. 568-574.
[15] P. Lee-Sullivan, G. Lu, Erosion of impact-notched holes in GFRP
composites, Wear, 1994, Vol. 176, pp. 81-88.
[16] N. Sarı, T. Sınmazçelik, Erosive wear behaviour of carbon fibre/polyetherimide composites under low particle speed. Materials and Design 28 (2007) 351-355.
Six-Sigma: A Strategic Tool for Quality Improvement
M. S. Khan, B.B. Sahoo, C. Mishra, Jyotiprakash Bhol
Department of Mechanical Engineering,
Synergy Institute of Engineering and Technology
Dhenkanal, Odisha, India
E-mail: [email protected]
J. N. Pal
AGM (Power Plant)
JSPL, Angul, Odisha, India
Abstract- Six-sigma quality has gained considerable attention since its development by Motorola Corporation in the late 1980s. The relentless drive in recent years towards adoption of six-sigma for improving quality, in the service as well as the manufacturing sector, has led to unrealistic expectations as to what six-sigma is truly capable of achieving. This paper describes the philosophy of six-sigma in the context of quality improvement. The important methodologies for improving quality in the service and manufacturing sectors are described. Various tools of six-sigma are also listed and briefly discussed. A case study of receiving orders and shipping computers is used to explore the effectiveness of the six-sigma tools.
Keywords: Six Sigma; Process Improvement; Voice of Customers (VOC); Critical to Quality (CTQ)
I. INTRODUCTION
Six-Sigma is a rigorous and disciplined methodology that uses data and statistical analysis to measure and improve a company's operational performance by identifying and eliminating "defects" in manufacturing and service-related processes. Commonly defined as 3.4 defects per million opportunities, Six Sigma can be defined and understood at three distinct levels: metric, methodology and philosophy [1], [2], [3].
A. The History of Six-Sigma
The roots of Six Sigma as a measurement standard can be traced back to Carl Friedrich Gauss (1777-1855), who introduced the concept of the normal curve. Six Sigma as a measurement standard in product variation can be traced back to the 1920s, when Walter Shewhart showed that three sigma from the mean is the point where a process requires correction. Many measurement standards (Cpk, Zero Defects, etc.) later came on the scene, but credit for coining the term "Six Sigma" goes to a Motorola engineer named Bill Smith. (Incidentally, "Six Sigma" is a federally registered trademark of Motorola.)
In the early and mid-1980s, with Chairman Bob Galvin at the helm, Motorola engineers decided that the traditional quality levels -- measuring defects in thousands of opportunities -- didn't provide enough granularity. Instead, they wanted to measure the defects per million opportunities. Motorola developed this new standard and created the methodology and the cultural change associated with it. Six-Sigma helped Motorola realize powerful bottom-line results in their organization -- in fact, they documented more than $16 billion in savings as a result of Six Sigma efforts [4].
Since then, hundreds of companies around the world have adopted Six Sigma as a way of doing business. This is a direct result of many of America's leaders, such as Larry Bossidy of Allied Signal (now Honeywell) and Jack Welch of General Electric Company, openly praising the benefits of Six Sigma [5].
II. LEVEL OF SIX-SIGMA
A. Metric
3.4 defects per million opportunities (DPMO). DPMO allows you to take the complexity of the product/process into account. A rule of thumb is to consider at least three opportunities for a physical part/component -- one for form, one for fit and one for function -- in the absence of better considerations. Also, one has to view Six Sigma in terms of the Critical to Quality characteristics and not the whole unit/characteristics. The following table shows the Six-Sigma performance levels in terms of defects per million opportunities (DPMO).
TABLE I: SIGMA PERFORMANCE LEVELS - ONE TO SEVEN SIGMA

Sigma level | DPMO | Percent defective | Percentage yield | Short-term Cpk* | Long-term Cpk*
1σ | 691,462 | 69% | 31% | 0.33 | -0.17
2σ | 308,538 | 31% | 69% | 0.67 | 0.17
3σ | 66,807 | 6.7% | 93.3% | 1.00 | 0.5
4σ | 6,210 | 0.62% | 99.38% | 1.33 | 0.83
5σ | 233 | 0.023% | 99.977% | 1.67 | 1.17
6σ | 3.4 | 0.00034% | 99.99966% | 2.00 | 1.5
7σ | 0.019 | 0.0000019% | 99.9999981% | 2.33 | 1.83

*Cpk: a measure of process capability index
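The DPMO column of Table I can be related to the short-term sigma level through the normal quantile with the conventional 1.5-sigma long-term shift; the sketch below assumes that convention (standard practice, though not stated explicitly in the paper) and requires SciPy.

# Short-term sigma level for a given defects-per-million-opportunities.
from scipy.stats import norm

def sigma_level(dpmo):
    return norm.ppf(1.0 - dpmo / 1e6) + 1.5

for dpmo in (691462, 308538, 66807, 6210, 233, 3.4):
    print(f"{dpmo:>10} DPMO -> {sigma_level(dpmo):.2f} sigma")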
The following table gives some examples of the defects expected in the 1-sigma, 3-sigma and 6-sigma regimes.
TABLE II: SOME REAL WORLD EXAMPLES

Situation/Example | In 1 Sigma World | In 3 Sigma World | In 6 Sigma World
Pieces of your mail lost per year [1,600 per year] | 1,106 | 107 | <1
Number of empty coffee pots at work (who didn't fill the coffee pot again?) [680 per year] | 470 | 45 | <1
Number of telephone disconnections [7,000 talk minutes] | 4,839 | 467 | 0.02
Erroneous business orders [250,000 per year] | 172,924 | 16,694 | 0.9
B. Six Sigma Methodology – DMAIC & DMADV or DFSS
a) DMAIC
DMAIC is a structured Six Sigma approach to process improvement and refers to a data-driven quality strategy for improving processes. It is an acronym for five interconnected phases: Define, Measure, Analyze, Improve and Control. The DMAIC approach can be applied in any situation where a process has defined, measurable results, whether it is manufacturing a specific product such as phones or a transactional process such as an engineering design firm. Measurable results can include anything from a detailed manufacturing process to higher-level customer satisfaction [6], [7]. Figure 1 and Figure 2 depict the methodology with the flow chart.
b) DMADV or DFSS
The DMADV project methodology, also known as DFSS ("Design For Six Sigma"), features five phases:
• Define design goals that are consistent with customer demands and the enterprise strategy.
• Measure and identify CTQs (characteristics that are Critical To Quality), product capabilities, production process capability, and risks.
• Analyze to develop and design alternatives, create a high-level design and evaluate design capability to select the best design.
• Design details, optimize the design, and plan for design verification. This phase may require simulations.
• Verify the design, set up pilot runs, implement the production process and hand it over to the process owner(s).

Figure 1: Methodology of Six-Sigma - DMAIC
Figure 2: Flow Chart for DMAIC & DMADV or DFSS
Each step in the cyclical DMAIC Process is required
to ensure the best possible results. The process steps
are described as follows:
a) Define
The Define phase is where the team begins the journey into the problem at hand. Initially the champion determines whether the problem warrants the six sigma methodology; it is possible that the issue can be resolved using 8D, the eight disciplines of problem solving.
The key deliverable for this phase of the DMAIC process is the project charter. The project charter is a living document throughout the life of the project; that is, it is expected that the project charter may be revised from time to time during the project lifetime.
Important aspects of the project charter are as follows:
• The Business Case - A well-written business case will explain to top management the importance of the project. It could detail the costs incurred to date for this problem, describe the consequences of taking no action, correlate the project to current business objectives, and specify the potential impact of the project in monetary values.
• The Problem Statement - The purpose of the problem statement is to clearly describe the problem at hand and to provide important details of the problem's impact on your organization.
• The Goal Statement - This element defines the expected results from the project. The results should include information regarding the project completion timeline, savings expected, improvement objectives and how they will be measured, and how reaching this goal will influence any Critical to Quality (CTQ) elements of your project.
• Project Scope - The project scope itemizes the project boundaries. It is imperative that the beginning and ending process steps are identified. This will help keep your team focused and help prevent "scope creep."
• Cost of Poor Quality - The COPQ metric states in financial terms how much the problem has cost your company over a given time period.
b) Measure
The Measure phase is the second phase of the DMAIC process. The objective of this phase is to garner as much information as possible from the current process. The improvement team needs to know exactly how the process operates, and is not concerned with how to improve the process at this time. The important tasks in the Measure phase are the creation of a detailed process map, collection of baseline data, and finally summarizing the collected data. In most projects, the process map will be completed first. The process map provides a visual representation of the process under investigation. It can also provide additional awareness of process inefficiencies such as cycle times and bottlenecks, or identify non-value-added process requirements. The process map may also show where data can be collected.
Two critical aspects of process mapping are:
i. Draw the process map exactly as it exists. If you create the map at your desk, you are likely to miss key elements of the process, such as any redundant work or rework loops.
ii. Always walk the process to validate the correctness of your process map.
Create a data collection plan
In the define phase your team developed a list of
CTQ (Critical to Quality) characteristics. Data to be
collected should relate both to the problem statement
and what the customer considers to be critical to
quality. This data will be used both as baseline data for
your improvement efforts and to calculate the current
state process sigma.
The data will then be graphed or charted to obtain
a visual representation of the data. If the team was
collecting error data, then a Pareto Chart would be a
likely graphical choice to help prioritize the team's
efforts. Or perhaps a trend chart is needed to show
how the process reacts over time. Histograms are
another excellent way to observe your process data.
Another widely utilized tool in the measure phase is
the Control Chart. The control chart is both a visual
depiction of a process and a statistical tool that shows which elements of variation are common causes (natural variation within the process) and which are special causes (variation caused by an external factor).
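As a small illustration of the Pareto choice mentioned above, the sketch below tabulates hypothetical error counts in descending order with a cumulative percentage, which is the core of a Pareto chart.

# Minimal Pareto tabulation of error data (categories and counts are invented).
from collections import Counter

errors = Counter({"wrong address": 120, "missing part": 80,
                  "late pickup": 30, "damaged box": 20})
total = sum(errors.values())
cumulative = 0
for cause, count in errors.most_common():
    cumulative += count
    print(f"{cause:15s} {count:4d}  cumulative {100 * cumulative / total:5.1f}%")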
Current state process sigma is then calculated
from the collected data. This metric allows a
comparison between different processes and
illuminates the difference between the current state
and the improved state of the process.
c) Analyze
The third phase of the DMAIC process is the
analyze phase, where the team sets out to identify the
root cause or causes of the problem being studied. But
unlike other simpler problem solving strategies,
DMAIC requires that the root cause be validated by
data.
Several root cause analysis methods are available for use in the Analyze phase, including brainstorming, 5 Whys, and the Fishbone Diagram, also known as a Cause and Effect Diagram or an Ishikawa Diagram. As with most root cause tools, the team should utilize the process map, the collected process data and other knowledge accumulated during the Define and Measure phases to help them arrive at the root cause.
Validating the Root Cause
The true power of the analyze phase of the
DMAIC process is the statistical analysis that is
conducted. Six Sigma belts are looking for statistically
significant events upon which to act. It's this higher
level of analysis that sets Six Sigma apart from lower
level problem solving strategies.
Techniques such as ANOVA (Analysis of
Variance), Correlation Analysis, Scatter plots, and Chi
Square analysis are commonly used to validate
potential root causes.
d) Improve
The objective of the DMAIC improve phase is to
determine a solution to the problem at hand.
Brainstorming is commonly used to generate an
abundance of potential solutions. It is a great idea to
include people who perform the process regularly.
Their input to solution creation can be invaluable, plus
they may also provide the best potential solution ideas
because of their process knowledge. In fact, it's a great
idea to communicate to those involved in the process
on a regular basis throughout the improvement project.
Some prefer to conduct free-form brainstorming sessions, but with the addition of some simple rules for brainstorming, a highly successful session can be conducted, and you'll probably have some fun in the process.
Selecting the Best Solution
Keep in mind that the term best does not mean the
same thing to all people. What the team should strive
to find is the best overall solution. A solution criteria
list is another good tool to assist in selecting the best
solution. An example is shown below:
Time
• Time to implement the solution
• Cycle time reduction
Cost
• Cost to implement
• Process cost reduction
Misc
• Defect reduction
• Simplify the process
The team then evaluates the list of potential
solutions against the list of criteria. Not only does this
speed up the process of evaluation, it also gives all
team members the same basis for choosing the best
possible solution.
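A weighted scoring matrix is one simple way to run that evaluation. The Python sketch below (hypothetical criteria weights, solutions and scores, not from the original) ranks candidate solutions by total weighted score:

    # Hypothetical solution-selection matrix: score each candidate solution
    # against the criteria list (1 = poor fit, 5 = excellent fit), weight the
    # criteria, and rank candidates by total weighted score.
    criteria_weights = {
        "time_to_implement": 0.2, "cycle_time_reduction": 0.2,
        "cost_to_implement": 0.2, "process_cost_reduction": 0.2,
        "defect_reduction": 0.1, "simplicity": 0.1,
    }

    solutions = {
        "automate data entry": {"time_to_implement": 2, "cycle_time_reduction": 5,
                                "cost_to_implement": 2, "process_cost_reduction": 5,
                                "defect_reduction": 5, "simplicity": 3},
        "revise checklist": {"time_to_implement": 5, "cycle_time_reduction": 2,
                             "cost_to_implement": 5, "process_cost_reduction": 2,
                             "defect_reduction": 3, "simplicity": 5},
    }

    def weighted_score(scores):
        return sum(criteria_weights[c] * s for c, s in scores.items())

    for name, scores in sorted(solutions.items(),
                               key=lambda kv: -weighted_score(kv[1])):
        print(f"{name}: {weighted_score(scores):.2f}")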
Validating the Selected Solution
Prior to implementation, the team must be assured
that the selected solution actually works... yes,
imagine that: let's be certain before we fully
implement. Pilot programs, computer simulations, and
segmented implementation are all possibilities at this
point. The team also creates a future state process
map as part of the improve phase. This is done so that
after implementation, the team can once again walk
the process to ensure the implementation was
accomplished correctly.
e) Control
The final DMAIC phase is the control phase; its
objective, simply put, is to sustain the gains that were
achieved as a result of the improve phase. The team
should create a plan that details the steps to be taken
during the control phase. These might include:
• Review and update the process map
• Update any affected work instructions
• Develop training that describes the newly
implemented methods
• Determine new metrics to verify the
effectiveness of the new process
• Determine if the process changes can be
effectively implemented in other processes
Once the control phase tasks have been completed, it's
time to transfer ownership of the new process to the
original process owner. The team should discuss with
the facilitator any new potential project ideas that may
have come up during the course of the improvement
project.
All that's left is to celebrate the team's success. The
scale of the celebration is up to each individual
company, but in order to create a robust improvement
environment, recognition of the team's efforts should
take place.
Six Sigma Quality Tools
i. Brainstorming
ii. Cause & Effect / Ishikawa / Fishbone
iii. Control Charts
iv. Creativity / Out of the Box Thinking
v. Design of Experiment
vi. Flow Charting
vii. FMEA / Risk Assessment
viii. Histogram
ix. Kano Analysis
x. Pareto
xi. Poka Yoke (Mistake Proofing)
xii. Process Mapping
xiii. Quality Function Deployment / House of Quality
xiv. SIPOC Diagram
Some of the tools are described briefly below:
Brainstorming
Brainstorming is a tool that allows for open and
creative thinking. It encourages all team members to
participate and to build on each other's creativity. It is
helpful because it allows your team to generate many
ideas on a topic creatively and efficiently without
criticism or judgment. Brainstorming can be used any
time you and your team need to creatively generate
numerous ideas on any topic. You will use
brainstorming many times throughout your project
whenever you feel it is appropriate. You also may
incorporate brainstorming into other tools, such as
QFD, tree diagrams, process mapping, or FMEA.
Cause & Effect / Ishikawa / Fishbone
A cause and effect diagram is a visual tool that
logically organizes possible causes for a specific
problem or effect by graphically displaying them in
increasing detail. It is sometimes called a fishbone
diagram because of its fishbone shape. This shape
allows the team to see how each cause relates to the
effect. It then allows you to determine a classification
related to the impact and ease of addressing each
cause. It allows your team to explore, identify, and
display all of the possible causes related to a specific
problem. The diagram can increase in detail as
necessary to identify the true root cause of the
problem. Proper use of the tool helps the team
organize thinking so that all the possible causes of the
problem, not just those from one person's viewpoint,
are captured. It can be used whenever the team needs
to break an effect down into its root causes. It is
especially useful in the Measure, Analyze, and
Improve phases of the DMAIC process.
Pareto Chart
A Pareto chart is a graphing tool that prioritizes a
list of variables or factors based on impact or
frequency of occurrence. This chart is based on the
Pareto principle, which states that typically 80% of the
defects in a process or product are caused by only 20%
of the possible causes. It is easy to interpret, which
makes it a convenient communication tool for use by
individuals not familiar with the project. The Pareto
chart will not detect small differences between
categories; more advanced statistical tools are required
in such cases.
A Pareto chart can be used in the Define phase to
stratify Voice of the Customer data, in the Measure
phase to stratify data collected on the project Y, and in
the Analyze phase to assess the relative impact or
frequency of different factors, or Xs.
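A Pareto analysis is easy to compute before charting. The Python sketch below (hypothetical defect tallies, invented for illustration) sorts categories by frequency and accumulates percentages to expose the vital few:

    # Hypothetical defect tallies: sort descending and accumulate percentages
    # to find the "vital few" categories that cause ~80% of all defects.
    defects = {"wrong address": 112, "missing part": 67, "scratch": 31,
               "late shipment": 19, "wrong color": 8, "other": 5}

    total = sum(defects.values())
    cumulative = 0
    for category, count in sorted(defects.items(), key=lambda kv: -kv[1]):
        cumulative += count
        print(f"{category:15s} {count:4d}  cumulative {100 * cumulative / total:5.1f}%")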
Quality Function Deployment
QFD is a methodology that provides a flow-down
process for CTQs from the highest to the lowest level. The
flow down process begins with the results of the
customer needs mapping (VOC) as input. From that
point we cascade through a series of four Houses of
Quality to arrive at the internal controllable factors.
QFD is a prioritization tool used to show the relative
importance of factors rather than as a transfer function.
QFD drives a cross-functional discussion to define
what is important. It provides a vehicle for asking how
products/services will be measured and what the
critical variables to control processes are. The QFD
process highlights trade-offs between conflicting
properties and forces the team to consider each trade
off in light of the customer's requirements for the
product/service. Also, it points out areas for
improvement by giving special attention to the most
important customer wants and systematically flowing
them down through the QFD process.
QFD produces the greatest results in situations where:
1. Customer requirements have not been clearly defined
2. There must be trade-offs between the elements of the business
3. Significant investments in resources are required
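The prioritization mechanic of a House of Quality can be sketched numerically. In the minimal Python example below (the customer wants, weights and 9/3/1 relationship strengths are hypothetical), each technical requirement's priority is the importance-weighted sum of its relationship strengths:

    # Hypothetical House of Quality slice: customer wants with importance
    # weights, relationship strengths (9 = strong, 3 = medium, 1 = weak,
    # 0 = none) to technical requirements, and derived priorities.
    wants = {"easy to order": 5, "arrives fast": 4, "arrives undamaged": 3}
    tech_reqs = ["web form design", "pick-pack time", "packaging spec"]
    relationships = {
        "easy to order":     [9, 0, 0],
        "arrives fast":      [3, 9, 0],
        "arrives undamaged": [0, 1, 9],
    }

    priorities = [0] * len(tech_reqs)
    for want, weight in wants.items():
        for j, strength in enumerate(relationships[want]):
            priorities[j] += weight * strength

    for req, score in sorted(zip(tech_reqs, priorities), key=lambda kv: -kv[1]):
        print(f"{req}: {score}")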
Control Charts
Control charts are time-ordered graphical displays
of data that plot process variation over time. Control
charts are the major tools used to monitor processes to
ensure they remain stable. Control charts are
characterized by a centerline, which represents the
process average, or the middle point about which
plotted measures are expected to vary randomly.
Upper and lower control limits define the area three
standard deviations on either side of the centerline and
reflect the expected range of
variation for that process. Control charts determine
whether a process is in control or out of control. A
process is said to be in control when only common
causes of variation are present. This is represented on
the control chart by data points fluctuating randomly
within the control limits. Data points outside the
control limits and those displaying nonrandom
patterns indicate special cause variation.
Control charts serve as a tool for the ongoing
control of a process and provide a common language
for discussing process performance. They help you
understand variation and use that knowledge to control
and improve your process. In addition, control charts
function as a monitoring system that alerts you to the
need to respond to special cause variation so you can
put in place an immediate remedy to contain any
damage.
In the Measure phase, use control charts to
understand the performance of your process as it exists
before process improvements. In the Analyze phase,
control charts serve as a troubleshooting guide that can
help you identify sources of variation (Xs). In the
Control phase, use control charts to: (1) make sure the
vital few Xs remain in control to sustain the solution;
(2) show process performance after full-scale
implementation of your solution (you can compare the
control chart created in the Control phase with that
from the Measure phase to show process improvement);
and (3) verify that the process remains in control after
the sources of special cause variation have been removed.
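The limit arithmetic is simple enough to sketch. The Python example below (an individuals chart on invented data, with sigma estimated from the average moving range using the standard I-MR constant d2 = 1.128) flags points outside the three-sigma limits as potential special cause variation:

    # Individuals (I-MR) control chart limits: centerline at the mean, sigma
    # estimated as (average moving range) / d2 with d2 = 1.128 for n = 2,
    # and control limits three estimated sigmas either side of the centerline.
    data = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 12.9]

    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    ucl, lcl = mean + 3 * sigma_hat, mean - 3 * sigma_hat

    print(f"CL = {mean:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
    for i, x in enumerate(data, start=1):
        if not lcl <= x <= ucl:
            print(f"point {i} ({x}) is outside the limits: special cause?")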
Design of Experiment (DOE)
Design of experiment (DOE) is a tool that allows
you to obtain information about how factors (Xs),
alone and in combination, affect a process and its
output (Y). Traditional experiments generate data by
changing one factor at a time, usually by trial and
error. This approach often requires a great many runs
and cannot capture the effect of combined factors on
the output.
DOE uses an efficient, cost-effective, and
methodical approach to collecting and analyzing data
related to a process output and the factors that affect it.
By testing more than one factor at a time, DOE is able
to identify all factors and combinations of factors that
affect the process Y.
In general, DOE can be used to: identify and
quantify the impact of the vital few Xs on the process
output; describe the relationship between Xs and a Y
with a mathematical model; and determine the best
configuration.
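A two-level full factorial design makes the "more than one factor at a time" idea concrete. The Python sketch below (three hypothetical factors and invented responses) generates all 2^3 runs and estimates each factor's main effect:

    from itertools import product

    # Minimal 2^3 full-factorial sketch: enumerate all runs for three
    # two-level factors and estimate each factor's main effect on the response.
    factors = ["temperature", "pressure", "speed"]
    runs = list(product([-1, +1], repeat=len(factors)))

    # Hypothetical measured responses, one per run, in the same order as runs.
    responses = [31, 35, 30, 36, 45, 49, 44, 50]

    for i, factor in enumerate(factors):
        high = [y for run, y in zip(runs, responses) if run[i] == +1]
        low = [y for run, y in zip(runs, responses) if run[i] == -1]
        effect = sum(high) / len(high) - sum(low) / len(low)
        print(f"main effect of {factor}: {effect:+.1f}")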
C. Difference between DMAIC and DFSS
Many experts are of the opinion that, given the
similarities between Six Sigma and DFSS, DFSS can be
called a logical extension of Six Sigma. Though this
may be true, there are some differences between
DMAIC and DFSS.
The basic difference lies in the fact that DMAIC
is a methodology that focuses on bringing about
improvements to the existing products and services of
the organization. DFSS aims at designing a new defect
free product or service to meet CTQ factors that will
lead to customer satisfaction.
DMAIC focuses on detecting and solving
problems with existing products and services, while
DFSS approach is that of preventing a problem.
The benefits and savings of DMAIC are quickly
quantifiable while those of the DFSS will be visible
only in the long term. It can be around six months or
more before the result of a newly developed product is
visible. One may say that DMAIC is based more on
manufacturing or transactional processes, while DFSS
encompasses marketing, research and design as well.
DFSS brings about a huge change of roles in an
organization. The DFSS team is cross-functional, as
the key factor is covering all aspects for the product
from market research to process launch.
Thus, DFSS provides tools to get the
improvement process done efficiently and effectively.
It proves to be a powerful management technique for
projects. It optimizes the design process so as to
achieve the Six Sigma level for the product.
TABLE III DIFFERENCE BETWEEN TQM & SIX SIGMA
TQM
Six Sigma
A functional specialty within
An infrastructure of dedicated
the organization.
change agents. Focuses on
cross-functional value delivery
streams rather than functional
division of labour.
Focuses on quality.
Focuses on strategic goals and
applies them to cost, schedule
and other key business metrics.
Motivated by quality idealism.
Driven by tangible benefit far a
major stockholder group
(customers, shareholders, and
employees).
Loosely monitors progress
Ensures that the investment
toward goals.
produces the expected return.
People are engaged in routine
“Slack” resources are created to
duties (Planning, improvement,
change key business processes
and control).
and the organization itself.
Emphasizes problem solving.
Emphasizes breakthrough rates
of
improvement.
Focuses on standard
Focuses on world class
performance, e.g. ISO 9000.
performance, e.g., 3.4 PPM
error rate.
Quality is a permanent, fullSix Sigma job is temporary.
time job. Career path is in the
Six Sigma is a stepping-stone;
quality profession.
career path leads elsewhere.
Provides a vast set of tools and
Provides a selected subset of
techniques with no clear
tools and techniques and a
framework for using them
clearly defined framework for
effectively.
using them to achieve results
(DMAIC).
Goals are developed by quality
Goals flow down from
department based on quality
customers and senior
criteria and the assumption that
leadership's strategic
what is good for quality is good objectives. Goals and metrics
for the organization.
are reviewed at the enterprise
level to assure that local suboptimization does not occur.
Developed by technical
Developed by CEOs.
personnel.
Focuses on long-term results.
Six Sigma looks for a mix of
Expected payoff is not wellshort-term and long-term
defined.
results, as dictated by business
demands.
D. Comparison between Six Sigma & TQM
Six-sigma has been around for more than 20 years
and heavily influenced by TQM (total quality
management) and Zero Defect principles. In its
methodology, it asserts that in order to achieve high
quality manufacturing and business processes,
continued efforts must be made to reduce variations.
The Six Sigma system strives to reduce these
variations in both business and manufacturing and in
order to do so; these processes must be measured,
analyzed, controlled and improved upon. In order to
improve upon these processes, the Six Sigma system
requires sustained commitment from an entire
organization – especially from the top echelons to help
guide lower rung workers and policies [8], [9], [10].
In some aspects of quality improvement, TQM
and Six Sigma share the same philosophy of how to
assist organizations to accomplish Total Quality. They
both emphasize the importance of top-management
support and leadership. Both approaches make it clear
E. Comparison between Six Sigma & ISO 9000
In essence, ISO 9000 requires you to: * Say what
you do * Do what you say * Record what you did *
Check on the results * Act on the difference.
There is NO requirement to: * To discover and
reduce/eliminate sources of variation * To actively
promote employee involvement Six Sigma is a datadriven approach to process improvement aimed at the
near-elimination of defects from every product,
process and transaction. The purpose of Six Sigma is
to gain BREAKTHROUGH knowledge on how to
improve processes to do things BETTER, FASTER,
and at LOWER COST. Six Sigma improvements must
provide TANGIBLE BUSINESS RESULTS in the
form of cost savings that are directly traceable to the
bottom line. ISO 9000 doesn't even begin to look at
the bottom line.
F. Six-sigma philosophy
The philosophy of Six Sigma is to provide
businesses with the tools to improve the capability of
their business processes. This increase in performance
and decrease in process variation lead to defect
reduction and vast improvement in profits, employee
morale and quality of product. The goal of Six Sigma
is to increase profits by eliminating the variability,
defects and waste that undermine customer loyalty.
III. FINANCIAL GAINS BY IMPLEMENTING SIX-SIGMA
The financial benefits of implementing Six Sigma
can be significant. Many people say that it takes
money to make money. In the world of Six Sigma
quality, the saying also holds true: it takes money to
save money using the Six Sigma quality methodology.
We can't expect to significantly reduce costs and
increase sales using Six Sigma without investing in
training, organizational infrastructure and culture
evolution. Surely one can reduce costs and increase
sales in a localized area of a business using the Six
Sigma quality methodology - and can probably do it
inexpensively by hiring an ex-Motorola or GE Black
Belt. Think of this scenario as a "get rich quick"
application of Six Sigma. But is it going to last when a
manager is promoted to a different area or leaves the
company? Probably not. To produce a culture shift
within an organization - a shift that causes every
employee to think about how his or her actions impact
the customer and to communicate within the business
using a consistent language - is going to require a
resource commitment. It takes money to save money.
"Companies of all types and sizes are in the midst of a
quality revolution. GE saved $12 billion over five
years and added $1 to its earnings per share.
Honeywell (AlliedSignal) recorded more than $800
million in savings." "GE produces annual benefits of
over $2.5 billion across the organization from Six
Sigma." "Motorola reduced manufacturing costs by
$1.4 billion from 1987-1994." "Six Sigma reportedly
saved Motorola $15 billion over the last 11 years."

IV. MANAGING RESISTANCE TO SIX SIGMA CHANGE
A critical component of any successful Six Sigma
project is to overcome resistance to change. The
reason: without user acceptance, any process
improvement is doomed to fail. Therefore, proper
anticipation and understanding of the approaches to
various resistance tactics are essential to success [11],
[12], [13]. People resist changes in the workplace in
many ways, but among the more common examples
are to:
• Ignore the new process
• Fail to completely or accurately comprehend it
• Disagree with the validity of the benefits
• Criticize tools or software applications
• Grant exceptions
• Delay the implementation
Proper training is critical for ensuring people
adapt to a new process, especially when they have
become accustomed to and experienced in another
process. Table IV lists when some important
organizations adopted Six Sigma.

TABLE IV ADOPTION OF SIX-SIGMA IN SOME IMPORTANT ORGANIZATIONS

Company Name: Year Began Six Sigma
Motorola (NYSE:MOT): 1986
Allied Signal (merged with Honeywell in 1999): 1994
GE (NYSE:GE): 1995
Honeywell (NYSE:HON): 1998
Ford (NYSE:F): 2000
V. A CASE STUDY
An example of a Six Sigma Project is the process
of receiving orders and shipping custom computers.
Whenever customers are involved in a process there
will be some variation and in all processes there will
be defects. In the case of a customized computer
company there is a process for receiving orders from
the customer including specifications, shipping
address, and billing information, etc. Over time there
will be customer complaints which are a manifestation
of defects and variation. A Six Sigma project will
define the process and what is happening. Black Belts
will identify and categorize the defects and use tools
such as the fishbone diagram or failure mode analysis
to trace these defects back to the root cause [8]. The
Six Sigma team will then work to eliminate the
cause/s. In the computer company case the cause of
many of the defects could be untrained customer
service personnel who do not receive all the correct
information from the customer or it could be a faulty
component used in many of the computers such as a
bad hard disk. Black Belts and Green Belts will work
together to find the root cause/s and eliminate it/them.
Six Sigma team members must be creative and possess
good problem solving skills because of the divergence
of the Six Sigma process as it is applied to a wide
variety of projects [14].
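As a small illustration of the categorization step (hypothetical complaint records, not data from the paper), a Python sketch like the one below tallies defect types so the team can prioritize them before root-cause work:

    from collections import Counter

    # Hypothetical complaint log for the custom-computer example: categorize
    # each complaint so the defect types can be prioritized.
    complaints = [
        ("order 1001", "wrong shipping address"),
        ("order 1002", "bad hard disk"),
        ("order 1003", "bad hard disk"),
        ("order 1004", "missing billing info"),
        ("order 1005", "bad hard disk"),
    ]

    counts = Counter(defect for _, defect in complaints)
    for defect, n in counts.most_common():
        print(f"{defect}: {n}")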
VI. CONCLUSIONS
An overview of Six Sigma has been presented. The
DMAIC and DMADV methodologies are discussed in
detail. Six Sigma is compared with TQM and ISO 9000.
Implications of implementing Six Sigma are also
discussed. Finally, a case study has been provided for
better understanding of quality improvement through
Six Sigma.
The Six Sigma methodologies actually work.
Companies have saved hundreds of millions of dollars
using them. But a word of caution, which may be
useful in applying Six Sigma in any organization: do
not attempt to solve everything at once, do not work
outside of your project scope, and do not skip steps in
the DMAIC/DMADV processes. Trust it to work and it
will work for you.

REFERENCES
1. N.S. Dedhia, "Six sigma basics," Total Quality Management 16(5), 2005, pp. 567-574.
2. P. Pande, et al., The Six Sigma Way: How GE, Motorola, and Other Top Companies are Honing Their Performance, McGraw-Hill Trade, 2000, ISBN 0071358064.
3. Forrest W. Breyfogle III, et al., Managing Six Sigma: A Practical Guide to Understanding, Assessing, and Implementing the Strategy That Yields Bottom-Line Success, Wiley-Interscience, 2000, ISBN 0471396737.
4. Greg Brue and Rod Howes, Six Sigma: The McGraw-Hill 36-Hour Course, 2006.
5. Cary W. Adams, Praveen Gupta, and Charles E. Wilson, Jr., Six Sigma Deployment, 2003.
6. Roland R. Cavanagh, Robert P. Neuman, and Peter S. Pande, What is Design for Six Sigma?, 2005.
7. McCarty, Daniels, Bremer, and Gupta, The Six Sigma Black Belt Handbook, 2005.
8. De Feo and Barnard, Six Sigma: Breakthrough and Beyond, Juran Institute, 2004.
9. J. Antony, "Some pros and cons of six sigma: An academic perspective," The TQM Magazine 16(4), 2004, pp. 303-306.
10. J. Antony, "Six sigma for service processes," Business Process Management Journal 12(2), 2006, pp. 234-248.
11. J. Antony, "Is six sigma a management fad or fact?," Assembly Automation 27(1), 2007, pp. 17-19.
12. J. De Mast, "Six sigma and competitive advantage," Total Quality Management 17(4), 2006, pp. 455-464.
13. E.V. Gijo and T.S. Rao, "Six sigma implementation - Hurdles and more hurdles," Total Quality Management 16(6), 2005, pp. 721-725.
14. R. Banuelas, J. Antony and M. Brace, "An application of six sigma to reduce waste," Quality and Reliability Engineering International 21, 2005, pp. 553-570.
RAPID PROTOTYPING TECHNOLOGIES- AN OVERVIEW
C.K. Mishra, B.B. Sahoo, J.P. Bhol and M.S. Khan
Department of Mechanical Engineering
Synergy Institute of Engineering and Technology
Dhenkanal, Odisha – 759001, India
E-mail: [email protected]

B.K. Sahoo
Associate Manager
Vedanta Power Plant Limited
Korba – 495450, India
Abstract— Rapid prototyping is a computer-driven
process that constructs three-dimensional models of work
derived from a Computer Aided Design drawing. With
the use of rapid prototyping, one can quickly and easily
turn product designs into physical samples. The creation
of physical samples through rapid prototyping is
achieved through Adobe Portable Document Format
(PDF) and CAD formats, as well as through cross-functional
teams and integration. The rapid prototype
creates an early iteration loop that provides valuable
feedback on technical issues, creative treatment, and
effectiveness of instruction. The design document itself is
changed to reflect this feedback, and in some cases a
new prototype module is developed for subsequent
testing of the refinements. This design method keeps the
design and development process open to new emerging
ideas and to needs emerging from the test and evaluation
phases.
Keywords- Rapid Prototyping, CAD, Prototype
I. INTRODUCTION
Rapid prototyping is the automatic construction of
physical objects using additive manufacturing technology.
The first techniques for rapid prototyping became available
in the late 1980s and were used to produce models and
prototype parts. Today, they are used for a much wider
range of applications and are even used to manufacture
production-quality parts in relatively small numbers. Some
sculptors use the technology to produce complex shapes for
fine arts exhibitions. Rapid prototyping was first introduced
to the market in 1987, after it was developed with the help
of stereo lithography. Today, rapid prototyping is also
known as solid freeform fabrication, 3-dimensional printing,
freeform fabrication, and additive fabrication. The
manufacturing process of rapid prototyping can produce
automatic construction of physical models with
3-dimensional printers, stereo-lithography machines, and
even laser sintering systems.
Using a CAD drawing to create a physical prototype is
quite simple for the user. First, the machine reads the data
from the provided CAD drawing. Next, the machine lays a
combination of liquid or powdered material in successive
layers. The materials used in rapid prototyping are usually
plastics, ceramics, wood-like paper, or metals such as
stainless steel and titanium. With rapid prototyping, each
layer is built to match the virtual cross section taken from
the CAD model. Therefore, the final model is built up
gradually with the help of these cross sections. Finally, the
cross sections are either glued together or fused with a laser.
The fusing of the model automatically creates its final
shape.
Rapid prototyping is necessary for those who want to
create models for clients, such as architects and engineers.
Rapid prototyping can reduce the design cycle time,
allowing multiple tests to be performed on the design at a
low cost. This is because each prototype can be completed
within days or hours, rather than taking several weeks. With
the help of rapid prototyping, all of these tests can be
performed well before beginning volume production. In
addition to engineers and architects, other professionals
benefit from rapid prototyping, such as surgeons, artists, and
archaeologists. With additive manufacturing, the machine
reads in data from a CAD drawing and lays down
successive layers of liquid, powder, or sheet material, and in
this way builds up the model from a series of cross sections.
These layers, which correspond to the virtual cross sections
from the CAD model, are joined together or fused
automatically to create the final shape. The primary
advantage of additive fabrication is its ability to create
almost any shape or geometric feature.
The standard data interface between CAD software and
the machines is the STL file format. An STL file
approximates the shape of a part or assembly using
triangular facets. Smaller facets produce a higher quality
surface. The word "rapid" is relative: construction of a
model with contemporary methods can take from several
hours to several days, depending on the method used and the
size and complexity of the model. Additive systems for
rapid prototyping can typically produce models in a few
hours, although this can vary widely depending on the type of
machine being used and the size and number of models
being produced simultaneously. Some solid freeform
fabrication techniques use two materials in the course of
constructing parts. The first material is the part material and
the second is the support material (to support overhanging
features during construction). The support material is later
removed by heat or dissolved away with a solvent or water.
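To make the triangular-facet idea concrete, here is a minimal Python sketch (an assumed illustration, not tied to any particular RP machine) that writes a single facet in the standard binary STL layout of an 80-byte header, a facet count, and normal-plus-vertices records:

    import struct

    # Minimal binary-STL writer: 80-byte header, 4-byte facet count, then per
    # facet a normal vector, three vertices (3 floats each) and a 2-byte
    # attribute field.
    def write_stl(path, facets):
        with open(path, "wb") as f:
            f.write(b"\x00" * 80)                    # header (unused)
            f.write(struct.pack("<I", len(facets)))  # number of facets
            for normal, v1, v2, v3 in facets:
                for vec in (normal, v1, v2, v3):
                    f.write(struct.pack("<3f", *vec))
                f.write(struct.pack("<H", 0))        # attribute byte count

    # One facet: a unit triangle in the XY plane with +Z normal.
    write_stl("triangle.stl", [((0, 0, 1), (0, 0, 0), (1, 0, 0), (0, 1, 0))])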
II. DIFFERENT RAPID PROTOTYPING TECHNOLOGIES

Prototyping technology: Base materials
Selective laser sintering (SLS): Thermoplastics, metal powders
Direct metal laser sintering (DMLS): Almost any alloy metal
Fused deposition modeling (FDM): Thermoplastics, eutectic metals
Stereolithography (SLA): Photopolymer
Laminated object manufacturing (LOM): Paper
Electron beam melting (EBM): Titanium alloys
3D printing (3DP): Various materials

Classification of rapid prototyping, based on material addition:
(i) Liquid - solidification of a liquid: (a) point by point, (b) layer by layer, (c) holographic surface
(ii) Solidification of an electroset liquid
(iii) Solidification of molten material: (a) point by point, (b) layer by layer
(iv) Discrete particles: (a) joining of powder particles by laser, (b) bonding of powder particles by binders
(v) Solid sheets: (a) joining of sheets by adhesive, (b) joining of sheets by light (UV light or laser)
The complementary class of processes is based on material reduction.

Concept modelers (desktop manufacturing): the following are a few commercially available concept modelers:
1. 3D Systems Inc. ThermoJet printer (multi-jet printing)
2. Sanders Model Maker 2
3. Z Corporation Z402
4. Stratasys Genisys X5
5. JP System 5
6. Objet Quadra system

Why Rapid Prototyping?
The reasons for Rapid Prototyping are:
• To increase effective communication.
• To decrease development time.
• To decrease costly mistakes.
• To minimize sustaining engineering changes.
• To extend product lifetime by adding necessary features and eliminating redundant features early in the design.
Rapid Prototyping decreases development time by allowing
corrections to a product to be made early in the process. By
giving engineering, manufacturing, marketing, and
purchasing a look at the product early in the design process,
mistakes can be corrected and changes can be made while
they are still inexpensive. Rapid Prototyping also improves
product development by enabling better communication in a
concurrent engineering environment. The trends in
manufacturing industries continue to emphasize the following:
• Increasing number of variants of products.
• Increasing product complexity.
• Decreasing product lifetime before obsolescence.
• Decreasing delivery time.

Methodology of Rapid Prototyping
The basic methodology for all current rapid prototyping
techniques can be summarized as follows (a toy slicing
sketch is given after Table 1):
• A CAD model is constructed, then converted to STL format. The resolution can be set to minimize stair stepping.
• The RP machine processes the .STL file by creating sliced layers of the model.
• The first layer of the physical model is created. The model is then lowered by the thickness of the next layer, and the process is repeated until completion of the model.
• The model and any supports are removed. The surface of the model is then finished and cleaned.

III. MARKET ACCEPTANCE OF RAPID PROTOTYPING

Table 1: Historical development of Rapid Prototyping and related technologies

Year of inception: Technology
1770: Mechanization
1946: First computer
1952: First Numerical Control (NC) machine tool
1960: First commercial laser
1961: First commercial robot
1963: First interactive graphics system (early version of Computer Aided Design)
1988: First commercial Rapid Prototyping system
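The toy slicing sketch promised above: a Python example (an illustrative assumption, using a sphere described implicitly rather than a real STL mesh) that steps a cutting plane upward one layer thickness at a time and reports each cross section:

    import math

    # Toy layer-by-layer slicer: intersect a sphere with horizontal planes
    # spaced one layer thickness apart; each slice is a circle whose radius
    # follows from the sphere equation.
    def slice_sphere(radius=10.0, layer_thickness=0.5):
        layers = []
        z = -radius
        while z <= radius:
            r = math.sqrt(max(radius**2 - z**2, 0.0))
            layers.append((round(z, 3), round(r, 3)))
            z += layer_thickness
        return layers

    for z, r in slice_sphere()[:5]:
        print(f"layer at z = {z}: cross-section radius {r}")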
IV. CONCLUSION
This paper provides a brief overview of RP technology
and emphasizes its ability to shorten the product
design and development process. A classification of RP
processes and details of a few important processes are given.
The various stages of data preparation and
model building have been described. An attempt has been
made to include some important factors to be considered
before starting part deposition, for proper utilization of the
potential of RP processes.
Figure 1. Rapid Prototyping Process flow Chart
Minimization of Cost Through Mist Application of Cutting Fluid in Metal Cutting
Girija Mishra
Mechanical Engineer
Neelachal Refractories Limited
Dhenkanal, Odisha,India
Jyotiprakash Bhol, B.B. Sahoo, C.K. Mishra, M.S. Khan
Department of Mechanical Engineering,
Synergy Institute of Engineering and Technology
Dhenkanal, Odisha, India
e-mail: [email protected]
Abstract— During machining operations, friction at the
work piece-cutting tool and cutting tool-chip interfaces results
in high temperatures on the cutting tool. The generated
heat leads to shorter tool life, higher surface roughness and
lower dimensional accuracy of the work material. This
effect is more important when machining difficult-to-cut
materials, due to the higher heat generated. Knowledge of the
performance of cutting fluids in machining different work
materials is of critical importance in order to improve the
efficiency of any machining process. The efficiency can be
evaluated based on certain process parameters such as flank
wear, surface roughness on the work piece, cutting forces
developed, temperature developed at the tool chip interface,
etc. Application of cutting fluids by the conventional method
reduces the above problems to some extent through cooling
and lubrication of the cutting zone, but in this process the
cooling rate is low. For this reason the mist application technique
has become the focus of attention of researchers and
technicians in the field of machining as an alternative to
traditional flood cooling. Mist application of
cutting fluid is sometimes referred to as near-dry machining.
Minimizing the requirement of cutting fluids leads to
economic benefits and environmentally friendly machining.
Keywords- Machining, Tool Wear, Surface Roughness, Cutting Fluid, Mist Application

I. INTRODUCTION
Tool wear and breakage have been issues with cutting
tools since they were created. Tool wear weakens the cutting
tool, increases the forces used in cutting and causes a lack of
consistency in material removal. Parts and time lost to scrap
and rework from tool wear are costly to companies.
Companies spend money to grind and replace cutting tools
due to tool wear. There are many factors that contribute to
the wear of cutting tools: the work piece's properties, cutting
tool properties, cutting surface speed, cutting feed rate, depth
of cut and machine rigidity. Traditionally, cutting fluids have
been seen as a solution rather than a problem in metal
cutting. They have proven to be a significant benefit to the
metal cutting process and do have an important role in
improving and maintaining surface finish, promoting swarf
removal, cutting force reduction, size control, dust
suppression, and corrosion resistance to the work and the
machine tool. In practice, the extent of flank wear is used as
the criterion in determining tool life [1]. The damage
experienced by a cutting tool is influenced by the magnitude
of stress and temperature at the tool-chip interface. Factors
such as the interaction between the cutting tool and the material
being cut, cutting speed, feed rate, depth of cut, continuous
or intermittent cutting, and the presence of cutting fluid and
its type will influence the damage or wear rate of a cutting
tool. The way in which cutting fluids work and assist the
cutting process is complex and is the subject of long-standing
research [2], [3] and [4], and in many instances the
use and adoption of cutting fluids has been an automatic
choice based on the assumption that they are essential for
reliable and predictable machining processes.

II. MACHINING
Turning is a widely used machining process in which a
single point cutting tool removes material from the surface of
a rotating cylindrical work piece. The material removed,
called the chip, slides on the face of the tool, known as the tool
rake face, resulting in high normal and shear stresses and,
moreover, a high coefficient of friction during chip
formation. Most of the mechanical energy used to form the
chip becomes heat, which generates high temperatures in the
cutting region. Because the higher the tool temperature, the
faster the wear, the use of cutting fluids in machining
processes has, as its main goal, the reduction of the cutting
region temperature, either through lubrication and the reduction
of friction wear, through cooling, or through a combination of
these functions. Among all the types of wear, flank wear
affects the work piece dimension, as well as the quality of
surface finish obtained, to a large extent.
For low speed machining operations, lubrication is a
critical function of the cutting fluid. Cooling is not a major
function, as most of the heat generated in low speed
machining can be removed by the chip. For high speed
machining operations, cooling is the main function of the
cutting fluid, as the chip does not have sufficient time to
remove the generated heat. The lubrication effects of the
cutting fluid in high speed machining operations are also
limited. The flow of the cutting fluid to the tool-cutting
surface interface is due to capillary flow. In high speed
machining, there is insufficient time for the capillary flow of
the fluid to reach the tool-cutting surface interface. Because
of the reduced lubrication effects of the cutting fluid, more
heat is generated, making the cooling function of the cutting
fluid even more critical.
the cutting zone. The mist application system has three
components, these are:
A. Selection of Cutting Fluids and Application
The cutting fluids applied in machining processes
basically have three characteristics [5, 6, 7]. These are:
• Cooling effect
• Lubrication effect
• Taking away formed chip from the cutting zone
The cooling effect of cutting fluids is the most important
parameter. It is necessary to decrease the effects of
temperature on cutting tool and machined work piece.
Therefore, a longer tool life will be obtained due to less tool
wear and the dimensional accuracy of machined work piece
will be improved [5,6,7].
The lubrication effect will cause easy chip flow on the
rake face of the cutting tool because of the low friction
coefficient. This would also result in increased heat removal
by the chips. Moreover, the influence of lubrication would
cause less built-up edge when machining some materials
such as aluminum and its alloys. As a result, better surface
roughness would be observed by using cutting fluids in
machining processes [5, 6, 7].
It is also necessary to take the formed chip away quickly
from the cutting tool and the machined work piece surface.
Hence the effect of the formed chip on the machined
surface, which would otherwise cause poor surface finish, is
eliminated. Moreover, part of the generated heat will be
taken away with the transferred chip [6, 7].
There are many types and positions of cutting fluid
application, from which the following three can be
categorically selected:
• Flood type: where a flood of cutting fluid is applied
on the work piece.
• Jet type: where a jet of cutting fluid is applied on
the work piece, directed at the cutting zone.
• Mist type: where the cutting fluid is atomized by a jet
of air and the mist is directed to the cutting zone.
One of the primary driving forces behind the
implementation of micro-lubrication is waste reduction. The
fluid is atomized, often with compressed air, and delivered to
the cutting interface through a number of nozzles. Because
the fluid is applied at such low rates, most or all of the fluid
used is carried out with the part. This eliminates the need to
collect the fluid while still providing some fluid for
lubrication, corrosion prevention, and a limited amount of
cooling. Because of the low flow rates, coolant cannot be
used to transport chips, meaning alternative methods for chip
extraction must be implemented. However, the chips that are
extracted should be of higher value since they are not
contaminated with large quantities of cutting fluid.
B. Working Principle of Mist Application System
Mist application of cutting fluid refers to the use of only
a minute amount of cutting fluid, typically at a flow rate of
50 to 500 ml/hr, which is about three to four orders of
magnitude lower than the amount commonly used under
flood cooling conditions, where, for example, up to 10 liters
of fluid can be dispensed per minute. Mist application
requires high pressure; the fluid is impinged at high speed
through the nozzle at the cutting zone. The mist application
system has three components:
• Compressor
• Mist generator
• Nozzle
A fluid chamber with a larger capacity is required,
designed so as to be able to supply fluid continuously
during machining. In the mist application system a compressor is
used to supply air at high pressure. The cutting fluid to be
used is placed in the mist generator, and a high pressure air
line from the compressor is connected with the help of a
flexible pipe at the bottom of the mist generator. When the
air at high pressure enters the mist generator it carries a
certain amount of cutting fluid along with it, and this cutting
fluid comes out of the nozzle, together with air coming
directly from the compressor, as a jet, which is applied to
the hot zone. In the mist generator there is a regulating valve
by which the flow rate of the cutting fluid can be controlled.
A schematic view of the setup is shown in Fig. 1.
Fig.1: Schematic Diagram for Mist Type Application
Setup
III. ECONOMIC ASPECTS
By applying the mist type strategy of cutting fluid
application, savings are realized on several factors and the
economics of metal cutting is improved. Some of the factors
that make this application a candidate for further
investigation are discussed below.
A. Volume of Cutting Fluids
The volume of cutting fluid required is drastically small
as compared to the other two types of strategies. In the flood
type a massive amount of fluid is required, whereas in the jet
and mist types of application a small amount of fluid is
required, which directly saves on the cost of machining. The
cost of disposal management is also less, which is a primary
problem in the flood type. As the volume is comparatively
less, storage and contamination do not create a problem.
This also directly decreases health hazards like foul smell
and skin diseases.
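The scale of the saving follows directly from the flow rates quoted earlier (10 liters/min for flood versus 50-500 ml/hr for mist). A quick Python check (the machine utilization hours are an assumption for illustration):

    # Fluid-volume comparison from the flow rates quoted in the text: flood
    # cooling at 10 L/min versus mist at 500 ml/hr (upper end of the range).
    flood_l_per_hr = 10 * 60        # 10 L/min
    mist_l_per_hr = 500 / 1000      # 500 ml/hr
    hours_per_year = 2 * 8 * 300    # assumed: two 8-hour shifts, 300 days

    flood_annual = flood_l_per_hr * hours_per_year
    mist_annual = mist_l_per_hr * hours_per_year
    print(f"flood: {flood_annual:,.0f} L/yr, mist: {mist_annual:,.0f} L/yr")
    print(f"mist uses {flood_annual / mist_annual:,.0f} times less fluid")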
B. Flank Wear
It has been noticed that in the mist type of cutting fluid
application the flank wear is less as compared to the other
two types of applications [8]. The graphs taken from the
research are displayed below. They clearly indicate that
irrespective of cutting speed and depth of cut the amount of
flank wear is less as compared to flood cutting. The less the
flank wear, the longer the tool life. This will contribute to
savings in tool inventory and help to reduce the overall
machining cost.
Fig. 2: Tool Wear vs Spindle Speed
Fig. 3: Tool Wear vs Depth of Cut
As flank wear is less, we can get a better surface finish on
the work pieces, which will add value to the product; thus an
improved quality product will be available.

C. Thermal Aspects
Turning is associated with a high temperature rise,
which is responsible for aggravating several problems like
thermal damage of the ground surface, change in hardness,
change in surface roughness etc. Thus the quality of the work
piece is hampered and the tool life is decreased. Research
has shown that the generation of heat is less as compared to
flood cutting and dry cutting. This minimizes the deformation of
the cutting tool and crater wear, thus increasing the tool life,
which in the long run improves the machining economics.
Fig. 4: Temperature vs Spindle Speed
Fig. 5: Temperature vs Depth of Cut
IV. CONCLUSIONS
• The selection of cutting fluids for machining
processes generally provides various benefits such as
longer tool life, higher surface finish quality and
better dimensional accuracy.
• The mist application enables reduction of the turning
zone temperature by up to 10% to 40% more than
conventional methods, depending on the process
parameters.
• The tool wear was measured for dry, flood and mist
conditions, of which the mist condition provides
minimum tool wear and thus economizes the
machining.
• The regeneration of used cutting fluids would also
provide various advantages such as reducing the
cutting fluid cost and the disposal cost of used
cutting fluids, and nearly eliminating environmental
pollution.

REFERENCES
[1] Byrd, J.D., Ferguson, B.L., "A study of the influence of hard inclusions on carbide tool wear utilizing a powder metal technique," Proceedings of the Sixth NAMRC, 1978, pp. 310-315.
[2] J.W. Sutherland, A. Gandhi, et al., "Cutting Fluids in Machining: Heat Transfer and Mist Formation Issues," Proceedings of the NSF Design & Manufacturing Research Conference, January 3-6, 2000.
[3] Gunter, K.L., "Experimental Investigation of Cutting Fluid Mist Formation via Atomisation in the Turning Process," M.S. Thesis, Michigan Technological University, 1999.
[4] Yue, Y., K.L. Gunter, et al., "An Examination of Cutting Fluid Mist Formation in Turning," Trans. of NAMRI/SME, Vol. 27, May 1999, pp. 221-226.
[5] M.A. El Baradie, "Cutting Fluids, Part I: Characterisation," Journal of Materials Processing Technology 56 (1996), pp. 786-797.
[6] G. Avuncan, Machining Economy and Cutting Tools, Makine Takım Endüstrisi Ltd. Publication, Istanbul, 1998, pp. 375-403.
[7] Kavuncu, Cutting Oils in Metal Machining, Turkish Chambers of Mechanical Engineers Publication, Istanbul.
[8] Md. Abdul Hasib, et al., "Mist Application of Cutting Fluid," International Journal of Mechanical and Mechatronics Engineering, Vol. 10, No. 4, pp. 13-18.
[9] Iowa Waste Reduction Center, "Cutting Fluid Management in Small Machine Shop Operations," Third Edition, Cedar Falls, Iowa: University of Northern Iowa, 2003, p. 7.
[10] Khire, M.Y. and Spate, K.D., "Some Studies of Mist Application Cutting Fluid on Cutting Operation," Journal of the Institution of Engineers (India), Volume 82, October 2001, pp. 87-93.
[11] L.M. Hartmann, et al., "Lubricating Oil Requirements for Oil Mist Systems," Journal of the American Society of Lubrication Engineers, January 1972.
[12] M.B. Da Silva, J. Wallbank, "Lubrication and application method in machining," Lubrication and Tribology 50 (1998), pp. 149-152.
Performance & Emission Studies on a Single Cylinder DI Diesel Engine
Fueled with Diesel & Rice Bran Oil Methyl Ester Blends
Bhabani Prasanna Pattanaik1
1 Department of Mechanical Engg.
Gandhi Institute for Technological Advancement
Bhubaneswar-752054, Orissa, India
e-mail: [email protected]

Basanta Kumar Nanda2
2 Department of Mechanical Engg.
Maharaja Institute of Technology
Bhubaneswar, Orissa, India
e-mail: [email protected]
Abstract- The present experimental work studies the
production of biodiesel from crude Rice bran oil by the
transesterification method, and the use of the Rice bran
oil biodiesel in the form of various blends with Diesel
fuel in a four stroke single cylinder direct injection
Diesel engine, for the investigation of various engine
performance and emission parameters.

Keywords: Biodiesel, Rice bran oil, Transesterification, Blend.

I. INTRODUCTION
The use of vegetable oils in Diesel engines,
replacing petroleum diesel, has been studied over the last
century. Many scientists and researchers over the years
have studied various types of vegetable oils and their
use in Diesel engines. However, some physico-chemical
properties of vegetable oils, like high density and
viscosity, low volatility and formation of carbon
deposits, tend to limit their use as fuel in diesel engines.
It has been experimentally proven and worldwide accepted
that the transesterification process is an effective
method for biodiesel production and for the reduction
in viscosity and density of vegetable oils. The
transesterification process is a reversible reaction
between the triglycerides of the vegetable oil and
alcohol in the presence of an acid or base as catalyst.
As a result of transesterification, the monoalkyl esters of
the vegetable oil are formed and glycerine is produced
as a byproduct in the process.
The monoalkyl or methyl esters of the
vegetable oil produced during transesterification are
popularly known as biodiesel. In India efforts are being
made to use non-edible and under-exploited oils for
production of methyl esters or biodiesel. Blending
petroleum Diesel fuel with methyl esters of vegetable
oils is the most common practice of using biodiesel in
diesel engines at present. There have been reports
that significant reductions in exhaust emissions are
achieved with the use of blends in Diesel engines.
Several studies have shown that diesel and biodiesel
blends reduce smoke opacity, particulate matter,
unburnt hydrocarbons, carbon dioxide and carbon
monoxide emissions, while the NOx emissions
increase slightly. It was reported from several
previous studies that the transesterification of crude
vegetable oil with alcohol in the presence of a catalyst is
the easiest method for production of biodiesel.
The present experimental work covers
the production of biodiesel from rice bran oil by
transesterification with methanol, preparation of test
fuels for the engine experiments in the form of three
blends of rice bran oil biodiesel (RBOBD) and Diesel
as B20, B30 and B50, and measurement of various
engine performance parameters and exhaust emissions.

II. EXPERIMENTAL PROCEDURES
2.1 Biodiesel Production by base-catalyzed transesterification method
Rice bran oil and methanol were mixed in a molar ratio
of 3:1 and the mixture was poured into the test reactor.
Then base catalyst (KOH) in 1% w/w was added to
the mixture already present in the reactor. The mixture
inside the reactor was heated to a temperature of 65°C
and stirred continuously. The mixture in the reactor
was kept at the same temperature for a
period of 3 hrs and then allowed to settle under
gravity. After settling, two layers were formed; the
upper layer was found to be Rice bran oil methyl esters
(RBOME) and the lower layer glycerol. The
glycerol was separated out and the RBOME was mixed
with 10% (by vol) hot water, shaken properly and
allowed to settle again for 24 hrs. After settling was
over, the lower layer, which contained water and KOH,
was separated; the upper part was biodiesel and moisture.
After complete removal of moisture, pure biodiesel or
RBOME was obtained.
2.2 Preparation of biodiesel blends
After production, the RBOME was blended with neat
diesel fuel in various concentrations to prepare
biodiesel blends. These blends were subsequently used
in the engine tests. The level of blending is for
convenience referred to as BXX, where XX indicates
the percentage of biodiesel present in the blend. For
example, a B20 blend is prepared with 20% biodiesel
and 80% diesel oil by volume. The blends prepared and
used during the present engine experiments were
B20, B30 and B50.
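The BXX arithmetic can be captured in a few lines. A minimal Python sketch (the 10-liter batch size is an assumption for illustration):

    # Blend preparation helper for the BXX convention described above.
    def blend_volumes(blend_percent, batch_litres):
        biodiesel = batch_litres * blend_percent / 100
        return biodiesel, batch_litres - biodiesel

    for blend in (20, 30, 50):      # the B20, B30 and B50 test fuels
        bio, diesel = blend_volumes(blend, batch_litres=10.0)
        print(f"B{blend}: {bio:.1f} L RBOME + {diesel:.1f} L diesel")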
2.3 Characterization of Test Fuel
The test fuels used in the engine during the experiments
were B20, B30, B50 and Diesel oil. Before application
on the engine, various physico-chemical properties of
all the above test fuels were determined and compared
to each other.
2.4 Experimental Setup
A schematic diagram of the experimental setup and test
apparatus is given below.
1. Test engine, 2. Dynamometer, 3. Diesel tank, 4. Fuel
blend tank, 5. Diesel burette, 6. Fuel blend burette, 7.
Air tank, 8. Air flow meter, 9. Air intake manifold, 10.
Exhaust, 11. Smoke meter, 12. Exhaust gas analyzer,
13. Stop watch, 14. RPM indicator, 15. Exhaust temp.
indicator, 16. Coolant temp. indicator, 17. Lub. oil
temp. indicator, 18. Rotameter, 19. Pressure sensor, 20.
Charge amplifier, 21. Computer.
(Fig. 1: Experimental Setup)
Table 1: Properties of Diesel and RBOME

Property: Diesel / RBOME
Density at 20°C: 0.82 / 0.87
Kinematic viscosity at 40°C: 2.7 / 4.81
Heating value (MJ/kg): 42.50 / 38.81
Flash point (°C): 67 / 166
Cloud point (°C): -6 / -1
Cetane index: 50 / 47

Table 2: Test Engine Specification

1. Engine type: 4-stroke CI
2. No. of cylinders: One
3. Cooling method: Water cooled
4. Bore × Stroke: 80 mm × 110 mm
5. Compression ratio: 16.7:1
6. Injection pressure: 170 bar
7. Rated output: 5.1 kW at 1500 rpm
2.5 Description of Experimental Setup
The present set of experiments was conducted on a
four stroke single cylinder direct injection water cooled
diesel engine equipped with an eddy current
dynamometer. Two separate fuel tanks with fuel flow
control valves were used for the operation of the engine
on diesel and biodiesel. One of the fuel tanks contained
diesel and the other tank was filled with the individual fuel
blends of B20, B30 and B50. The engine was operated
at full load and constant speed, and the performance
parameters like brake power, torque, specific fuel
consumption and brake thermal efficiency were
measured for diesel and all the test fuels. The CO and
HC emissions were also measured for diesel and all the
test fuels by using the data obtained from the exhaust
gas analyzer.
III. RESULTS & DISCUSSION

3.1 Engine Performance Analysis
3.1.1 Brake Power
Fig. 2: Brake Power vs Load
The power developed by the engine at varying
load is higher for diesel and slightly less for the blends
of RBOME. However, with the B20 blend the brake power
developed is very close to that with diesel. The lower
brake power with the blends may be attributed to the
lower heating value of the biodiesel.

3.1.2 Brake Thermal Efficiency
Fig. 3: BTE vs BMEP
The brake thermal efficiency (BTE) increases
with increase in brake power for all the fuels. The
BTE is observed to be higher for diesel than for the
three blends of RBOME. As the percentage of
biodiesel in the blend increases, the BTE decreases
slightly. This may be because, with
higher blends of biodiesel, the fuel is more viscous
and has a lower heating value.
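The link between BSFC, heating value and BTE is a one-line formula: with BSFC in kg/kWh and heating value in MJ/kg, BTE = 3.6 / (BSFC x LHV), since 1 kWh = 3.6 MJ. A Python sketch (the sample BSFC values are assumptions; heating values come from Table 1, with the B50 value interpolated):

    # BTE from BSFC and lower heating value: BTE = 3.6 / (BSFC * LHV),
    # with BSFC in kg/kWh and LHV in MJ/kg (1 kWh = 3.6 MJ).
    def bte(bsfc_kg_per_kwh, lhv_mj_per_kg):
        return 3.6 / (bsfc_kg_per_kwh * lhv_mj_per_kg)

    print(f"diesel: BTE = {bte(0.26, 42.50):.1%}")  # assumed BSFC 0.26 kg/kWh
    print(f"B50:    BTE = {bte(0.30, 40.66):.1%}")  # assumed BSFC; LHV interpolated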
3.1.3 Brake Specific Fuel Consumption
The brake specific fuel consumption (BSFC)
was found to be lowest for diesel and tends to increase a
little with the RBOME blends. The BSFC is higher with
higher blends of biodiesel. This is because of the lower
heating value and higher viscosity of the blends.
Fig. 4: BSFC vs BMEP
3.2 Engine Exhaust Emission Analysis
3.2.1 CO Emissions
Fig. 5: %CO vs Load
The variation in CO emission at different
loads with all the test fuels is shown in Fig. 5. At low
and medium loads, the CO emissions of the blends were
not much different from those of diesel fuel. However,
at full load conditions the CO emissions of the blends
decrease significantly when compared to those of
diesel. This behavior can be attributed to
the more complete combustion occurring in the case of
the blends due to the presence of oxygen in the methyl
esters of rice bran oil.
3.2.2 HC Emissions
Fig. 6: %HC vs Load
The HC emission from the engine at different
loads is shown in Fig. 6. At lower loads the HC
emissions are usually less and at higher loads they are
more. However, at full load operation the HC emission
is maximum for diesel. With higher blends of biodiesel
the HC emission reduces.

3.2.3 Smoke Opacity
Fig. 7: Smoke Opacity vs Load
Fig. 7 shows the variation in smoke emissions
at different loads for all the test fuels used in the
experiments. Smoke is formed due to incomplete
combustion of fuel in the combustion chamber. It is
seen from the results that the smoke
emissions are less with blends of RBOME in
comparison to diesel fuel. This is because of
better combustion of the blends due to the availability of
more oxygen in biodiesel.
IV. CONCLUSIONS
The objective of this study was the production and
characterization of biodiesel from Rice bran oil
and the preparation of B20, B30 and B50 blends for
use in a single cylinder DI diesel engine. Based on
the experimental results, the following
conclusions can be drawn:
(1) The physico-chemical properties of biodiesel
obtained from rice bran oil are a little different
from those of diesel oil. The viscosity of
biodiesel is higher than that of diesel,
especially at low temperatures.
(2) The brake power of the engine using all the
blends of RBOME is very close to the value
obtained with diesel.
(3) The BTE of the test engine for the three blends
was found little lower than the value obtained
with diesel.
(4) The BSFC of the blends are higher than that of
diesel. The higher fuel consumption with the
blends reflect to the lower heating value of the
biodiesel. The BSFC increases linearly with
the increase in biodiesel percentage in the
blend.
(5) As per the exhaust emissions with the blends,
it was found that the CO, HC and smoke
emissions were reduced significantly when
compared to those of diesel. The results
obtained show a 49% reduction in smoke,
35% reduction in HC and 37% reduction in
CO emissions at full load.
Form the above conclusions drawn, it was
found that the performance of the test engine when
operating with RBOME blends were very close to that
of diesel oil and significant improvement was noticed
in the exhaust emissions of CO, HC and smoke when
the engine was operating with the blends.
Therefore it can be concluded that the blends
of RBOME can be successfully used as alternative fuel
in diesel engines without any engine modifications.
Acknowledgement
The authors thank the Department of
Mechanical Engg., Jadavpur University, Kolkata for
providing laboratory facilities for the conduct of
experiments and Prof. (Dr) Probir Kumar Bose,
Director, NIT Agartala for his valuable guidance and
help during the course of the present research work.
Making of a Lifting & Crushing Machine using Chain Drive

Mr. S.K.Bisoi, P. Mohapatra, R.Behera, and A.Mishra
Department of Mechanical Engineering
Synergy Institute of Engineering and Technology
Dhenkanal, Odisha – 759001, India
E-mail: [email protected]

C.K.Sawant
Mechanical Engineer
Shirdi Mechanical Enterpriser
Nasik – Maharashtra, India
Abstract— Selecting the proper size of chain, belt, pulley and induction motor is most crucial in designing a lifting & crushing machine; the standard and quality of the components and materials play a vital role in the assembly of such a crushing machine. Its field of application is wide, from day-to-day applications like coal depots and garbage centers to industrial dump yards and naval dock yards; it can be utilized as the situation warrants. The use of chain drives is the most prominent factor in its constructional mechanism. The plank size and specifications need to be appropriate as per the utility standard. The Polybolos (Greek design) and astronomical clock towers (Chinese origin) are the sources of inspiration in the manufacture of this equipment.

Keywords- Sprocket, Plank, Chain Drive, Polybolos
I. INTRODUCTION
As the name suggests, this Lifting and Crushing Machine is used for the purpose of lifting loads or weights and crushing them by ensuring free fall of the plank. This machine is designed using a chain drive, which is the main idea of the topic.

The oldest known application of a chain drive appears in the Polybolos, a repeating crossbow described by the Greek engineer Philon of Byzantium (3rd century BC). In this Polybolos, two flat-linked chains were connected to a windlass, which by winding back and forth would automatically fire the machine's arrows until its magazine was empty. Although this device did not transmit power continuously, since the chains "did not transmit power from shaft to shaft", the Greek design marks the beginning of the history of the chain drive, since "no earlier instance of such a cam is known, and none as complex is known until the 16th century. It is here that the flat-link chain, often attributed to Leonardo da Vinci, actually made its first appearance."

The first continuous power-transmitting chain drive was depicted in the written treatise of the Song Dynasty (960-1279) Chinese engineer Su Song (1020-1101 AD), who used it to operate the armillary sphere of his astronomical clock tower as well as the clock jack figurines presenting the time of day by mechanically banging gongs and drums. The chain drive itself was given power via the hydraulic works of Su's water clock tank and waterwheel, the latter of which acted as a large gear. The endless power-transmitting chain drive was invented separately in Europe by Jacques de Vaucanson in 1770 for a silk reeling and throwing mill. J. F. Tretz was the first to apply the chain drive to the bicycle in 1869.

Fig 1: Chain drive
Fig 2: Polybolos

II. MECHANISM & WORKING PRINCIPLE
The following aspects need to be looked at for a concise concept of this lifting & crushing machine.

Roller Chain
Most roller chain is made from plain carbon or alloy steel. A bicycle chain is a roller chain that transfers power from the pedals to the drive-wheel of a bicycle, thus propelling it. Most bicycle chains are made from plain carbon or alloy steel, but some are chrome-plated or stainless steel to prevent rust.
Efficiency:
A bicycle chain can be very efficient: one study reported efficiencies as high as 98.6%. Efficiency was not greatly affected by the state of lubrication. A larger sprocket gives a more efficient drive, reducing the movement angle of the links. Higher chain tension was found to be more efficient.

Uses for Chain:
A bicycle chain is one that transfers power from the pedals to the drive-wheel of a bicycle, thus propelling it.

Sprockets
A sprocket is a toothed wheel on which a chain rides. Sprockets should be as large as possible given the application. The larger a sprocket is, the lower the working load for a given amount of transmitted power, allowing the use of a smaller-pitch chain.

Fig 3: Sprockets

Types:
There are four types of sprockets: Plain Plate Sprockets, Hub on One Side, Hub on Both Sides, and Detachable Hub.

Dimensions of a Sprocket:
Pitch Diameter = P / sin(180°/N)
Outside Diameter = P × (0.6 + cot(180°/N))
Sprocket Thickness = 0.93 × Roller Width − 0.006″
where P is the pitch of the chain and N is the number of teeth on the sprocket.
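These formulas are easy to sanity-check in a few lines of Python; the 18-tooth count below is an illustrative assumption, and for a 5/16-inch roller width the thickness comes out at about 0.285 inch, essentially the 0.284-inch figure quoted in the ANSI table below.

import math

def sprocket_dimensions(pitch_in, teeth, roller_width_in):
    pitch_dia = pitch_in / math.sin(math.pi / teeth)                  # P / sin(180°/N)
    outside_dia = pitch_in * (0.6 + 1.0 / math.tan(math.pi / teeth))  # P × (0.6 + cot(180°/N))
    thickness = 0.93 * roller_width_in - 0.006                        # 0.93 × roller width − 0.006″
    return pitch_dia, outside_dia, thickness

# ANSI #40 chain (1/2″ pitch, 5/16″ roller width) on an assumed 18-tooth sprocket
pd, od, t = sprocket_dimensions(0.5, 18, 5.0 / 16.0)
print(f"pitch dia {pd:.3f} in, outside dia {od:.3f} in, thickness {t:.3f} in")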
Roller Chain
Width: Chain comes in 3/32″, 1/8″, 5/32″ or 3/16″ widths. The 5/32″ width is used on cargo bikes; the 1/8″ width is used with the common low-cost coaster bike, hub and fixed gearing.
Sizes: The chain used on modern bicycles has a 1/2″ pitch, which is ANSI standard #40.
Chain Construction: The roller turns freely on the bushing, which is attached on each end to the inner plate. A pin passes through the bushing and is attached at each end to the outer plate.

Fig 4: Plates of a chain link

Selecting a Chain:
The working load sets a lower limit on pitch; the speed sets an upper limit.
Length: The chain length must match the distance between the crank and the rear hub and the sizes of the front chain ring and rear cog.

ANSI Standard Chain Dimensions:
Chain No.: 40
Pitch: 1/2″
Roller Diameter: 5/16″
Roller Width: 5/16″
Sprocket Thickness: 0.284″
Working Load: 810 lbs

Fig 5: Sprocket
Pulleys
A pulley helps to transfer power. It is used to change the direction of an applied force, to transmit rotational motion, and to realize a mechanical advantage in either a linear or rotational system of motion.

"V" Groove Pulleys:
Used to transmit rotating mechanical power between two shafts. Available in all groove sizes from 1 to 12. Materials used are cast iron, aluminium, stainless steel, etc.

Fig 6: V-groove pulley

How a Pulley Works:
The idealised theory of operation for a pulley is that the pulleys and lines are weightless and there is no energy loss due to friction. In equilibrium, the total force on the pulley must be zero. The force on the axle of the pulley is shared equally by the two lines looping through the pulley.

Types of Pulley Systems:
Fixed Pulley: The fixed pulley has a fixed axle. It is used to change the direction of the force on a rope. Its mechanical characteristics are: (1) the force is equal on both sides of the pulley, and (2) there is no multiplication of force.

Movable Pulley: The movable pulley has a free axle. It is used to multiply forces. Its mechanical advantage is that, if one end of the rope is anchored, pulling on the other end of the rope will apply a double force to the object attached to the pulley.

Fig 7: Fixed pulley
Fig 8: Movable pulley
Fig 9: Working of a pulley
Induction Motor
An induction motor is a type of alternating current motor where the power is supplied to the rotor by means of electromagnetic induction. Induction motors are preferred for their rugged construction, the absence of brushes and the ability to control the speed of the motor.

Fig 10: Induction motor
Principle of Operation:
The induction motor does not have any direct supply to the rotor; a secondary current is induced in the rotor. For this, stator windings are arranged around the rotor so that they create a rotating magnetic field pattern which sweeps past the rotor. This changing magnetic field induces a current in the rotor conductors. This current interacts with the rotating magnetic field created by the stator and causes rotational motion of the rotor.

Rotor Speed:
Rotor speed Nr = Ns(1 − S), where Ns = 120F/p and S = (Ns − Nr)/Ns is the slip. Here Ns is the synchronous speed in revolutions per minute (rpm), F is the AC power frequency (Hz), and p is the number of poles per phase (an even number).

Speed Control:
An induction motor has no brushes and is easy to control. The induction motor runs on induced current. The speed of the induction motor varies according to the load applied to it: as the load on the induction motor is increased, the speed of the motor decreases, and vice versa.
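A minimal numeric sketch of these relations follows; the 50 Hz supply, four-pole winding and 2.7% slip are illustrative assumptions, chosen because they land on the 1460 RPM figure quoted in the motor specification later in this paper.

def synchronous_speed_rpm(freq_hz, poles):
    return 120.0 * freq_hz / poles        # Ns = 120·F / p

def rotor_speed_rpm(ns_rpm, slip):
    return ns_rpm * (1.0 - slip)          # Nr = Ns·(1 − S), with S = (Ns − Nr)/Ns

ns = synchronous_speed_rpm(50, 4)         # 1500 rpm
nr = rotor_speed_rpm(ns, 0.027)           # 1459.5 rpm, i.e. about 1460 rpm
print(ns, round(nr, 1))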
Rope
A rope is a length of fibres twisted together to improve strength, for pulling and connecting. It has tensile strength but is too flexible to provide compressive strength. It is thicker and stronger than cord, line, string, and twine.

V-Belt
V-belts are the workhorse within industry, available from virtually every distributor and adaptable to practically any drive to transmit the power from a motor to its destination.
Operation: V-belt drives operate best at speeds between 1500 and 6000 ft/min. For standard belts, the ideal peak capacity speed is approximately 4500 ft/min. Narrow V-belts will operate up to 10,000 ft/min.
Advantages: Permit large speed ratios and provide long life. Easily installed and removed, with quite low maintenance.
Limitations: V-belts will slip and creep. They should not be used where synchronous speeds are required.
Types: Industrial V-belts, Agricultural belts, Automotive belts.

Fig 12: V-belts
Specification of Materials Used

Pulleys:
Groove depth: 11 mm
Outer groove width: 13 mm
Inner groove width: 4.5 mm
Diameter of larger pulley: 112 mm
Diameter of smaller pulley: 62.5 mm
Width of larger pulley: 21.5 mm
Width of smaller pulley: 19.5 mm

Rope:
Diameter of standard rope: 7 mm

Belt:
Top thickness: 13 mm
Belt thickness: 10 mm
Belt angle: 30°
Belt ride-out: 1.8 mm
Length: 51 inches
Poly 'V' belt, heat & oil resistant

Motor:
A.C. motor start
Capacitor: 60-80 µF
220 Volt surge
Operating temperature range: −30°C to +70°C
1.7% duty cycle
Speed: 1460 RPM

Sprockets:
Diameter of front sprocket: 115 mm
Diameter of rear sprocket: 115 mm
Mean distance between sprockets: 1060 mm

Roller chain:
Pitch: 16 mm
Roller diameter: 3.5 mm
Roller width: 5 mm

Iron plank:
Length: 610 mm, Breadth: 455 mm, Width: 20 mm
III. RESULTS AND DISCUSSION

Mechanism of the Lifting & Crushing Machine:
The front sprocket is connected to the rear sprocket by a chain. The chain is a standard roller chain of the kind used in bicycles and rickshaws. The rear sprocket is mounted on an axle shaft; ropes attached to this shaft at one end are connected to the plank at the other. Once the rear sprocket is rotated, the plank attached by the ropes goes up and the load is lifted. As mentioned earlier, the motor is coupled with a pulley of a specific size on which a V-belt runs and by which the front sprocket is rotated. The front sprocket carries a pulley of greater size than the one coupled with the motor, in order to reduce the transmitted speed to the necessary value. The front part is thus linked to the motor through a pulley coupling, and the V-belt transmits the power to the rear one for lifting objects. Power supply, i.e. A.C. supply, is given to the induction-type motor for it to run. The motor speed depends on the amount of load or weight given to it. When the required height is reached, the plank is fixed by a locking mechanism and the load is removed from the plank. In order to crush stone or any other material, once the height is reached the power supply is maintained and the plank is then allowed to fall freely from that height for the required process to complete.

Fig 13: Working of a Lifting Machine
IV. CONCLUSIONS
This machine is very useful for working in many industries, garbage areas, and for lifting purposes in ports and industries. This thinking and development can bring a vast development in the field of lifting mechanisms using chain drives. As there is remarkable development in technology, one can utilize it with the help of such machines. In future, this development can lead to an easier, faster, cheaper and more efficient means of lifting objects and crushing. Though higher efficiency is expected from this machine, its performance depends upon the quality of the materials and components and on the field of application.

REFERENCES
[1]. Masataka Nakakomi, Safety Design of Roller Chain, Yoken-do, Japan (1989).
[2]. The Complete Guide to Chain. Kogyo Chosakai Publishing Co., Ltd. pp. 240, p. 211. (Retrieved 2006-05-17).
[3]. Needham, Joseph (1986). Science and Civilisation in China: Volume 4, Part 2, Mechanical Engineering. Cave Books, Ltd. Page 109.
[4]. Temple, Robert (1986). The Genius of China: 3,000 Years of Science, Discovery, and Invention. With a foreword by Joseph Needham. New York: Simon and Schuster, Inc. Page 72.
Development of a Compressed Air Engine
B.B. Sahoo, J.P. Bhol, C.K. Mishra and M.S. Khan
T.S. Reddy
Department of Mechanical Engineering
Synergy Institute of Engineering and Technology
Dhenkanal, Odisha – 759001, India
E-mail: [email protected]
Mechanical Engineer
RamTech Corporation Limited
Secunderabad – 500003, India
Abstract—A compressed air engine is a pneumatic engine that uses a motor powered by compressed air. The engine can be powered solely by air, or combined (as in a hybrid electric vehicle) with gasoline, diesel, ethanol, or an electric plant with regenerative braking. Compressed air engines are powered by motors fueled with compressed air, which is stored in a tank at high pressure. Rather than driving engine pistons with an ignited fuel-air mixture, compressed air cars use the expansion of compressed air, in a similar manner to the expansion of steam in a steam engine.

Keywords-Compressed air, storage tank, pollution, petrol engine

I. INTRODUCTION
A compressed air engine is a pneumatic actuator that
creates useful work by expanding compressed air. A
compressed air vehicle is a vehicle that uses an engine
powered by compressed air. They have existed in many
forms over the past two centuries, ranging in size from hand
held turbines up to several hundred horsepower. Some types
rely on pistons and cylinders, others use turbines. Many
compressed air engines improve their performance by
heating the incoming air, or the engine itself. Some took this
a stage further and burned fuel in the cylinder or turbine,
forming a type of internal combustion engine. A compressed
air vehicle is powered by an air engine, using compressed
air, which is stored in a tank. Instead of mixing fuel with air
and burning it in the engine to drive pistons with hot
expanding gases, compressed air vehicles use the expansion
of compressed air to drive their pistons. One manufacturer
claims to have designed an engine that is 90 percent
efficient. Actually all engines work with compressed air.
Most engines suck it in, heat it up, it pressurizes and it
pushes on a piston. In an air car we pressurize the air first,
so when we apply it to the piston, the piston is pushed. The
future of transportation will soon be whooshing down the
road in the form of an unparalleled “green”, earth-friendly
technology that everyone will want to get their hands on as
soon as they can: The Compressed Air Engine. It is hard to
believe that compressed air can be used to drive vehicles.
However that is true with the “Compressed air Engine”.
There is currently some interest in developing air cars.
Several engines have been proposed for these, although
none have demonstrated the performance and long life
needed for personal transport. Transportation of the fuel
would not be required due to drawing power off the
electrical grid. This presents significant cost benefits. Pollution created during fuel transportation would be eliminated. The various important parts involved in the efficient working of a compressed air engine are a high strength storage tank, a modified engine for compressed air technology, a lever arrangement to control the flow/volume of air entering the cylinder, high fatigue resistant heavy duty hose pipes and a strong chassis to take the load of the above components. Start the engine, drive around, fill up with fuel, pay a lot of money and pollute the atmosphere some more! It doesn't have to be that way; many alternative sources of fuel are being developed. Hence, in this project a dependable, innovative and potentially reliable alternative has been discussed in detail, taking into account all the possible merits, demerits and scope for improvement, so that the compressed air engine makes its phenomenal presence in the near future.

A storage tank is a container, usually for holding liquids, sometimes for compressed gases (gas tank). The term can be used for reservoirs (artificial lakes and ponds) and for manufactured containers. There are usually many environmental regulations applied to the design and operation of storage tanks, often depending on the nature of the fluid contained within. Large tanks tend to be vertical cylindrical, or to have rounded corners that transition from the vertical side wall to the bottom profile. The storage tank used in the working model is of mild steel, but several modifications are possible, such as the use of carbon fiber tanks, which are lighter and can withstand higher pressures.

The pneumatic motor was first applied to the field of transportation in the mid-19th century. Though little is known about the first recorded compressed-air vehicle, it is said that the Frenchmen Andraud and Tessie of Motay ran a car powered by a pneumatic motor on a test track in Chaillot, France, on July 9, 1840. Although the car test was reported to have been successful, the pair didn't explore further expansion of the design. The first successful application of the pneumatic motor in transportation was the Mekarski system air engine used in locomotives. Mekarski's innovative engine overcame the cooling that accompanies air expansion by heating the air in a small boiler prior to use. The Tramway de Nantes, located in Nantes, France, was noted for being the first to use Mekarski engines to power its fleet of locomotives. The tramway began operation on December 13, 1879, and continues to operate today, although the pneumatic trams
were replaced in 1917 by more efficient and modern
electrical trams. American Charles Hodges also found
success with pneumatic motors in the locomotive industry.
In 1911 he designed a pneumatic locomotive and sold the
patent to the H. K. Porter Company in Pittsburgh for use in
coal mines. Because pneumatic motors do not use
combustion they were a much safer option in the coal
industry. Many companies claim to be developing
Compressed air cars, but none are actually available for
purchase or even independent testing. The laws of physics
dictate that uncontained gases will fill any given space. The
easiest way to see this in action is to inflate a balloon. The
elastic skin of the balloon holds the air tightly inside, but the
moment you use a pin to create a hole in the balloon's
surface, the air expands outward with so much energy that
the balloon explodes. Compressing a gas into a small space
is a way to store energy. When the gas expands again, that
energy is released to do work. That's the basic principle
behind what makes an air engine go [1, 2].
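As a rough sketch of this principle, the maximum work recoverable from a tank of compressed air expanding slowly (isothermally) is W = p1·V·ln(p1/p0). The tank size and pressure below are illustrative assumptions drawn from the test bed described later in this paper.

import math

def isothermal_air_energy_j(volume_m3, p_abs_pa, p_atm_pa=101325.0):
    # W = p1·V·ln(p1/p0): ideal isothermal expansion work of stored air
    return p_abs_pa * volume_m3 * math.log(p_abs_pa / p_atm_pa)

# Assumed: a 0.014 m3 tank charged to 8 kgf/cm2 gauge pressure
p_abs = 8 * 98066.5 + 101325.0
print(f"{isothermal_air_energy_j(0.014, p_abs) / 1000:.1f} kJ")   # about 27 kJ

The small number explains why a small mild steel tank gives only short run times and why larger, lighter high-pressure tanks are attractive.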
The first air cars will have air compressors built into them. After a brisk drive, you'll be able to take the car home, put it into the garage and plug in the compressor. The compressor will use air from around the car to refill the compressed air tank. Unfortunately, this is a rather slow method of refueling and will probably take up to two hours for a complete refill. If the idea of an air car catches on, air refueling stations will become available at ordinary gas stations, where the tank can be refilled much more rapidly with air that has already been compressed. Filling your tank at the pump will probably take about three minutes.
A compressed air engine promisingly produces zero emissions to the atmosphere, as no combustion takes place inside the engine cylinder. The compressed air, which is at high pressure, expands as it enters the cylinder and forces the piston from top dead center to bottom dead center without any combustion, resulting in cooler, fresh exhaust air with no smoke, no chemicals and no particulate matter. A zero pollution vehicle is a vehicle that emits no tailpipe pollutants from the onboard source of power. Pollutants harmful to health and the environment include particulates (soot), hydrocarbons, carbon monoxide, ozone, lead, and various oxides of nitrogen. A zero-emissions vehicle does not emit greenhouse gases from the onboard source of power at the point of operation.
The first vehicle will almost certainly use the
Compressed Air Engine developed by the French company,
Motor Development International (MDI). Air cars using this
engine will have tanks that will probably hold about 3,200
cubic feet (90.6 kiloliters) of compressed air. The vehicle's
accelerator operates a valve on its tank that allows air to be
released into a pipe and then into the engine, where the
pressure of the air's expansion will push against the pistons
and turn the crankshaft. This will produce enough power for
speeds of about 35 miles (56 kilometers) per hour. When the
air car surpasses that speed, a motor will kick in to operate
the in-car air compressor so it can compress more air on the
fly and provide extra power to the engine. The air is also
heated as it hits the engine, increasing its volume to allow
the car to move faster. In the original Nègre air engine, one
piston compresses air from the atmosphere to mix with the
stored compressed air (which will cool drastically as it
expands). This mixture drives the second piston, providing
the actual engine power. MDI's engine works with constant torque, and the only way to change the torque to the wheels is to use a continuously variable pulley transmission, losing some efficiency. When the vehicle is stopped, MDI's engine has to stay on and running, losing energy. In 2001-2004 MDI switched to a design similar to that described in Regusci's patents, which date back to 1990. It was reported in 2008 that the Indian car manufacturer Tata was looking at an MDI compressed air engine as an option for its low priced Nano automobiles. Tata announced in 2009 that the compressed air car was proving difficult to develop due to its low range and problems with low engine temperatures.
The main objective of our project is to prepare a working model of an engine that runs with the help of compressed air technology and is completely different from the conventional I.C. engines in which combustion takes place. Our aim is to design a pneumatic engine that is eco-friendly, with nearly zero emissions, and very economical with a lower overall cost. The idea of causing no harmful emissions to the atmosphere, together with the coming crisis of petroleum fuels, has led to work on such an innovative and creative project. Nowadays camps are regularly held to make people aware of the depletion of the ozone layer due to the harmful effect of the greenhouse gases. Hence this is a step forward to save the ozone layer, by producing a vehicle which does not emit any of the harmful greenhouse gases, providing fresh air to the environment. The power of air is well known nowadays; it has been used in windmill technology, in combustion processes, and in the latest welding technologies. Here we aim to implement it in compressed air technology, which not only provides the necessary power to drive the piston but is also plentifully available in the atmosphere, helping to develop an engine which runs on the cheapest fuel source, air, acting as the medium of power transmission.
II. THE TEST BED
Internal combustion engines are those heat engines that burn their fuel inside the engine cylinder. In an internal combustion engine, the chemical energy stored in the fuel is released by combustion, and the heat energy is converted into mechanical energy by the expansion of gases against the piston attached to the crankshaft that can rotate. The engine which gives power to propel the automobile vehicle is a petrol burning internal combustion engine. Petrol is a liquid fuel, called gasoline in America. The ability of petrol to furnish power rests on two basic principles: burning or combustion is always accompanied by the production of heat, and when a gas is heated, it expands. If the
volume remains constant, the pressure rises according to Gay-Lussac's law.
Working principle
There are only two strokes involved, namely the compression stroke and the power stroke; they are usually called the upward stroke and the downward stroke respectively. During the upward stroke, the piston moves from bottom dead center to top dead center, compressing the charge of air-fuel mixture in the combustion chamber of the cylinder. At this time the inlet port is uncovered and the exhaust and transfer ports are covered. The compressed charge is ignited in the combustion chamber by a spark from the spark plug. Once the charge is ignited, the hot gases push the piston downwards. During the downward stroke the inlet port is covered by the piston and the new charge is compressed in the crankcase; further downward movement of the piston uncovers first the exhaust port and then the transfer port, and hence the exhaust starts through the exhaust port. As soon as the transfer port opens, the charge is forced through it into the cylinder, and the cycle is then repeated.
Today, internal combustion engines in cars, trucks,
motorcycles, aircraft, construction machinery and many
others, most commonly use a four-stroke cycle. The four
strokes refer to intake, compression, combustion (power),
and exhaust strokes that occur during two crankshaft
rotations per working cycle of the gasoline engine and diesel
engine [3]. The cycle begins at Top Dead Center, when the
piston is farthest away from the axis of the crankshaft. A
stroke refers to the full travel of the piston from Top Dead
Center to Bottom Dead Center.
i. Intake stroke: On the intake or induction stroke, the piston descends from the top of the cylinder to the bottom of the cylinder, reducing the pressure inside the cylinder. A mixture of fuel and air is forced by atmospheric (or greater) pressure into the cylinder through the intake port. The intake valve(s) then close.
ii. Compression stroke: With both intake and exhaust
valves closed, the piston returns to the top of the cylinder
compressing the fuel-air mixture. This is known as the
compression stroke.
iii. Power stroke: While the piston is close to Top Dead
Center, the compressed air–fuel mixture is ignited, usually
by a spark plug (for a gasoline or Otto cycle engine) or by
the heat and pressure of compression (for a diesel cycle or
compression ignition engine). The resulting massive
pressure from the combustion of the compressed fuel-air
mixture drives the piston back down toward bottom dead
center with tremendous force. This is known as the power
stroke, which is the main source of the engine's torque and
power.
iv. Exhaust stroke: During the exhaust stroke, the piston
once again returns to top dead center while the exhaust
valve is open. This action evacuates the products of
combustion from the cylinder by pushing the spent fuel-air
mixture through the exhaust valve(s).
In our project we modified these four strokes into two strokes with the help of an altered inner CAM, as shown in Fig. 1. In the air engine we can design a new CAM which operates only an inlet stroke and an exhaust stroke. In a four stroke engine the inlet and exhaust valves each open only once to complete the full cycle, during which the piston moves from top dead center to bottom dead center twice. A stroke refers to the full travel of the piston from Top Dead Center to Bottom Dead Center. In our air engine project, we open the inlet and exhaust valves on each and every stroke of the engine, converting the four stroke engine into a two stroke engine by modifying the CAM shaft.

Figure 1 Schematic layout of the compressed air engine mechanism
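A minimal sketch of the cam change, assuming an idealised grid of one valve event per 180° stroke with no valve overlap (the labels are illustrative, not the project's actual cam timing):

def valve_events(modified_cam):
    # Strokes over 720° of crank rotation, one stroke per 180°
    events = []
    for stroke in range(4):
        crank_deg = stroke * 180
        if modified_cam:
            # modified cam: air admitted on every downward stroke and
            # exhausted on every upward stroke (a two-stroke air cycle)
            events.append((crank_deg, "intake" if stroke % 2 == 0 else "exhaust"))
        else:
            events.append((crank_deg, ["intake", "compression", "power", "exhaust"][stroke]))
    return events

print(valve_events(False))   # stock cam: valves open once each per 720°
print(valve_events(True))    # modified cam: a valve event on every stroke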
Engine modifications
In this project we use a spark ignition engine of the four stroke, single cylinder type with a cubic capacity of 100 cc. The engine has a piston that moves up and down in a cylinder. A cylinder is a long round air pocket, somewhat like a tin can with the bottom cut out. The piston is a metal plug, slightly smaller than the cylinder, that slides up and down inside it. The bore diameter and stroke length of the engine are 50 mm and 49 mm respectively. A Hero Honda CD DAWN engine was used (Figure 2).
Figure 2 Hero Honda CD DAWN engine
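As a quick check of these figures (a sketch, not part of the original design work), the swept volume follows directly from the bore and stroke:

import math

def displacement_cc(bore_mm, stroke_mm):
    # Swept volume V = (π/4)·b²·s, with bore and stroke converted from mm to cm
    return math.pi / 4.0 * (bore_mm / 10.0) ** 2 * (stroke_mm / 10.0)

print(f"{displacement_cc(50, 49):.1f} cc")   # about 96.2 cc, the nominal 100 cc engine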
Compressed Air Engine
The air engine is designed as per the details given below [4, 5].

1. Storage Tank
Length of the storage tank = 44.5 cm
Diameter of the storage tank = 20 cm
Volume of the storage tank = (22/7) × (20/2)² × 44.5 ≈ 13,986 cm³ ≈ 0.014 m³
Material of the storage tank = Mild Steel
Number of valves = 2 [inlet and exit]
Number of pressure gauges = 1

2. Accelerator Wire
Head/knob of the accelerator wire = 5 mm
Thickness of the wire = 1 mm
The accelerator wire is connected to the outlet valve of the storage tank, regulating the amount of air provided to the inlet of the cylinder.

3. Frame
Material of the angles = Mild Steel
Length of the angles = 128 × 2 cm = 256 cm
Width of the angles = 31.5 × 3 cm = 94.5 cm
Height of the angles = ((31.5 × 4) + 27 + (31.5 × 2) + (82 × 2) + 21) cm = 401 cm
Length of the rectangular bars = 22 × 8 cm = 176 cm
Thus, total = (256 + 94.5 + 401 + 176) cm = 927.5 cm

4. Bearing
Bearing number = 6202
Number of bearings used = 2
Material of the bearing = High Speed Steel
Outer diameter of the bearing (D) = 35 mm
Bearing thickness = 12 mm
Inner diameter of the bearing (d) = 15 mm
Maximum speed = 14000 rpm
Mean diameter (dm) = (D + d)/2 = (35 + 15)/2 = 25 mm

5. Wheel Arrangement
Perimeter of the wheel = 129.5 cm
Diameter of the wheel = 41 cm
Material of the axle = High Speed Steel
Material of the sprocket = Cast Steel
This is a rim and tire arrangement that consists of a sprocket and chain and has the axle connected to the ball bearings.

6. Pipes
Inner diameter of the nylon pipes with carbon additives = 8 mm
Inner diameter of the cross nylon pipes = 6 mm
The 8 mm pipe is connected between the compressor and the storage tank, and the 6 mm pipe between the storage tank and the engine.

7. Clamps
Number of clamps used = 4
Two clamps were used on each of the 8 mm and 6 mm pipes.

8. Lubrication
20-40, 4T oil was used to lubricate the engine moving parts. Grease has been used to lubricate the bearing and the wire arrangement in the accelerator wire to reduce friction.

9. Bolts
Length of the bolts = 5 inches
Diameter = 6 mm (about one-fourth inch)
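A quick check of the tank volume (a sketch using math.pi rather than the 22/7 approximation used above):

import math

def cylinder_volume_cm3(diameter_cm, length_cm):
    # V = (π/4)·d²·L for a cylindrical tank
    return math.pi / 4.0 * diameter_cm ** 2 * length_cm

v = cylinder_volume_cm3(20, 44.5)
print(f"{v:.0f} cm3 = {v / 1e6:.4f} m3")     # about 13,980 cm3, i.e. 0.014 m3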
III. RESULTS AND DISCUSSION
From Table 1 we observe that the rpm of the flywheel in the no load condition at the maximum pressure of 8 kg/cm2 was found to be 1290. This value goes on decreasing as the pressure is decreased, and at about 5 kg/cm2 we got the lowest flywheel rpm of 1050. In the loaded condition, i.e. in the first gear, the rpm at that maximum pressure was lower than in the no load condition, and it likewise goes on decreasing with the decrease in pressure. The flywheel rpm reduces in a similar way with the increase in the gear number and the decrease in pressure. Also, from Table 1 we infer that the output rpm in each gear was directly proportional to the pressure. Thus, we obtained a maximum wheel rpm of 140 in the top gear at 8 kg/cm2.

Figure 3 Variation of RPM at No Load at various pressures
Figure 4 Variation of RPM taken at various pressures and load conditions
From Table 2 we infer that the rpm of the flywheel of the petrol engine was found to be 1400. We also observe that in the first gear the wheel rpm was less than the flywheel rpm by 50. In the second gear, the wheel was making 1290 revolutions per minute. In the third gear, the rpm was 1200. When the petrol engine was shifted to the fourth gear, the wheel rpm was 1108, the accelerator being kept constant from the beginning.
Figure 5 Variation of output RPM taken at various pressures at loads
IV. CONCLUSIONS
Thus we can conclude that the rpm of the flywheel goes on decreasing from the no load to the maximum load condition at a pressure of as much as 8 kg/cm2. But at this pressure the output rpm in the top gear condition is maximum. Although compressed air technology is still under development, certain conclusions can be drawn. The work not only yields a set of successfully completed experimental data but also points to the future work to be done to make this project reach millions of hands, by which people can contribute their part in making their home planet pollution free and share the burden of chemical fuels in meeting day-to-day economic needs. First, a carbon fiber storage tank can be used for higher retention of pressure and for its lighter weight. Second, the exhaust from the vehicle (expanded air) can be used efficiently to cool a cabin. Third, this technology does not require any coolant in the engine, or fins, since there is no combustion taking place inside the cylinder. Also, the piston inside the cylinder can reciprocate for a longer period with the least lubrication possible, i.e. it is highly economical. Hence, compressed air technology can be a healthy alternative to the present combustion engines, without emitting any harmful emissions to the atmosphere, meeting the requirements of the time.
Table 1 Observations of RPM with compressed air technology

Pr.        Neutral |   First Gear    |  Second Gear    |   Third Gear    |  Fourth Gear
(kg/cm2)   rpm     | Input   Output  | Input   Output  | Input   Output  | Input   Output
8          1290    | 1230    60      | 1210    110     | 1130    125     | 1100    140
7          1220    | 1210    55      | 1160    90      | 1100    90      | 1050    125
6          1150    | 1100    40      | 1080    70      | 1070    82      | 990     110
5          1050    | 1040    30      | 1020    50      | 1000    70      | 950     100

Table 2 Observations with fuel in engine

Gear       RPM
Neutral    1400
Gear 1     1350
Gear 2     1290
Gear 3     1200
Gear 4     1108
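As an aside grounded in the 8 kg/cm2 row of Table 1, the overall reduction ratios implied by the measurements can be read off directly (a sketch, not part of the original analysis):

pairs = {"Gear 1": (1230, 60), "Gear 2": (1210, 110),
         "Gear 3": (1130, 125), "Gear 4": (1100, 140)}
for gear, (flywheel_rpm, wheel_rpm) in pairs.items():
    print(f"{gear}: {flywheel_rpm / wheel_rpm:.1f}:1")
# From about 20.5:1 in first gear down to about 7.9:1 in fourth gear,
# the drivetrain trades speed for torque.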
REFERENCES
[1] Compressed Air Technology. Obtained through the internet: http://www.mdi.com/ [Accessed on 15/1/2011 at 10:10 hrs]
[2] Gupta, R.B. (2005), 'An overview on automobiles', Automobile Engineering, pp. 25-55.
[3] Ganeshan, V. (2004), 'Study of two stroke and four stroke engine', Internal Combustion Engine, pp. 23-43.
[4] Khurmi, R.S. and Gupta, J.K. (2004), 'Design of internal combustion engine parts, pistons and gears', Machine Design, pp. 1145-1223.
[5] Khurmi, R.S. and Gupta, J.K. (2004), 'Study of chain drives and flywheel', Machine Dynamics, pp. 565-611.
Simulation of post combustion CO2 capture by MEA scrubbing method
P.P. Tripathy, Peter L. Douglas, Eric Croiset
Department of Chemical Engineering, University of Waterloo,
200 University Avenue West, Waterloo, Canada N2L 3G1
e-mail: [email protected]
ABSTRACT- There is growing concern that anthropogenic
carbon dioxide (CO2) emissions are contributing to global
climate change. Capture and storage of CO2 from fossil fuel
fired power plants is drawing increasing interest as a potential
method for the control of greenhouse gas emissions. Post-combustion CO2 capture and storage (CCS) presents a
promising strategy to capture, compress, transport and store
CO2 from a high volume–low pressure flue gas stream emitted
from a fossil fuel-fired power plant. Technically the capture of
CO2 from the flue gas of coal fired power plants using a mono
ethanolamine (MEA) absorption process is a viable short to
medium term strategy for mitigation of the atmospheric CO2
emissions from large point sources. Using thermodynamic simulation, we found that an amine solvent performed well. An optimization study for post combustion CO2 capture from the flue gas of a coal fired power plant, based on an absorption process with MEA solution, was performed using ASPEN Plus software with the RADFRAC subroutine. The simulation results showed that the CO2 recovery ratio and the heat consumption for CO2 regeneration were 98% and 2.9 GJ/t-CO2, respectively.
The application of a post-combustion CO2 capture process and subsequent geological storage (CCS) significantly reduces the greenhouse gas emissions of coal-fired power plants. In a post-combustion CO2 capture process the CO2
is separated from the flue gas of a conventional steam
power plant. The CO2 content in the flue gas of typical
coal-fired power plants lies in the range of 12-15 vol%
(wet) and the flue gas is present at atmospheric pressure
[3]. There are a large number of post-combustion
concepts for the capture of CO2 from coal derived flue
gas, but it is agreed that under these boundary conditions
the implementation of an absorption-desorption-process
using a chemical solvent is the most developed and best
suited process for deployment in the near- to middle-term.
Technologies to separate CO2 from flue gases are based
on absorption, adsorption, membranes or other physical
and biological separation methods. For many reasons
amine based CO2 absorption systems are the most suitable
for combustion based power plants [4]: for example, they
can be used for dilute systems and low CO2
concentrations, the technology is commercially available,
it is easy to use and can be retrofitted to existing power
plants. Absorption processes are based on thermally
regenerable solvents, which have a strong affinity for
CO2. They are regenerated at elevated temperature. The
process thus requires thermal energy for the regeneration
of the solvent.
Keywords: CO2 capture, Absorption, MEA, ASPEN Plus
I.
INTRODUCTION
Over the past decade, global warming resulting from anthropogenic carbon dioxide (CO2) has become one of the most important environmental issues, forcing climate change. In 2005 the
CO2 concentration in the atmosphere was 379 ppm, which
greatly exceeds the natural range of the last 650,000 years
(180 – 300 ppm) [1].
Over 70% of India’s carbon emissions are
associated with the burning of fossil fuels, with a
significant proportion of these associated with coal-fired
power plants [2]. A drastic reduction of CO2 emissions
resulting from fossil fuels can only be obtained by
increasing the efficiency of power plants and production
processes, and decreasing the energy demand, combined
with CO2 capture and long term storage (CCS). CCS is a
promising method considering the ever increasing
worldwide energy demand and the possibility of
retrofitting existing plants with capture, transport and
storage of CO2. The captured CO2 can be used for
enhanced oil recovery, in the chemical and food
industries, or can be stored underground instead of being
emitted to the atmosphere.
Monoethanolamine (MEA) is often regarded as the first
chemical solvent to be used in the early large-scale
applications of post-combustion CO2 capture in coal-fired
power plants. Several researchers have modelled and
studied the MEA absorption process [4-8], most of their
conclusions focused on reducing the thermal energy
requirement to reduce the overall process expenses.
In this study, post combustion CO2 capture from the flue gas of a coal fired power plant, based on an absorption process with MEA solution, was performed using ASPEN Plus software with the RADFRAC subroutine. After the process simulation, a design model for both the absorber and the stripper was built to investigate the effect of chemical reaction and mass transfer on the absorption process. The
CO2 recovery ratio and heat consumption for CO2 regeneration were also studied.
II.
METHODOLOGY
In this work the CO2 capture process is modeled within
ASPEN Plus® 2006.5 (Figure 1).
• A flue gas cooler is considered to achieve higher
rich loadings in the absorber.
• The columns are modelled by multiple
equilibrium stages, where the number of stages
was increased from 5 to 20 and 12 in the
absorber and stripper respectively, to ensure an
accurate representation of the temperature profile
especially in the absorber.
• The logarithmic mean temperature difference (LMTD) in the rich-lean heat exchanger (RLHX) was increased from 5 to 10 K in the base case, as this value leads to a more reasonable component size and is more realistic for a commercial-scale process.
• A neutral water balance is kept at all times.
2.1. Baseline case definition and simulation
Simulations were performed using Aspen Plus software.
Thermodynamic and transport properties were modelled
using a so-called “MEA Property Insert”. Property inserts
are special Aspen Plus templates designed for particular
systems with specified conditions and components; the
MEA Property Insert is included in the base version of
Aspen Plus. The absorber and stripper were modelled
using the RADFRAC unit operation model.
The following base case was defined:
• a 90% CO2 removal;
• a 30 MEA wt.% absorption liquid;
• using a lean solvent loading of 0.24 mol
CO2/mol MEA
2.2. Design model
The reactive absorption of the CO2–MEA–H2O system is
complex because of multiple equilibrium and kinetic
reversible reactions. The equilibrium reactions included in
this model are:
MEA + H3O+ ⇔ MEAH+ + H2O (amine protonation)
CO2 + 2H2O ⇔ H3O+ + HCO3− (bicarbonate formation)
HCO3− + H2O ⇔ H3O+ + CO3^2− (carbonate formation)
MEA + HCO3− ⇔ MEACOO− + H2O (carbamate formation)
2H2O ⇔ H3O+ + OH− (water hydrolysis)
Figure 1. Process flow diagram for CO2 capture from flue
gas by chemical absorption
The cooled flue gas enters the absorber at a temperature
of 40°C. The cool lean solution enters the top of the
absorber. The CO2 is absorbed by the solution as it flows
downward. In the washing section vaporised or entrained
solvent is recovered from the CO2-lean treated gas and a
neutral water balance is kept by controlling the degree of
cooling of the circulating wash water. A reflux of 3%
from the washing section to the absorber is assumed. To
avoid the build-up of solvent concentration or particles in
the wash water, make-up water is provided by recycling
the condensate from the stripper overhead condenser back
to the washing section. The CO2-rich solution exits the
bottom of the absorber. In the rich-lean heat exchanger
(RLHX), sensible heat is transferred from the lean to the
rich solution. The preheated rich solution flows to the stripper, where CO2 desorbs. The stripper overhead product (mostly CO2 and H2O) flows to a partial condenser, where the gas is cooled and water is condensed. The remaining CO2 vapour then flows to the compressor. The reboiler provides heat for the CO2 desorption by condensing LP steam from the power plant.

III. RESULTS AND DISCUSSION
The capture base case was simulated using a complete
closed flow sheet to keep the overall water balance to
zero. This makes the flow sheet more difficult to converge
due to the recycle structure in the flow sheet. However,
this is important as only then the results will be realistic.
The results of the baseline case simulations are shown in
Table 1. The energy requirement was 2.9 GJ/ton CO2,
which agrees well with the numbers reported in industry today. For example, the Fluor Econamine FG process requires 4.2 GJ/ton CO2 [6], while the Fluor Econamine FG Plus technology requires a somewhat lower 3.24 GJ/ton CO2 [9].
Table 1: Results of the baseline case simulation

Amine lean solvent loading (mol CO2/mol MEA): 0.242
Amine rich solvent loading (mol CO2/mol MEA): 0.485
Thermal heat required (GJ/ton CO2): 2.9
Solvent flow rate required (m3/ton CO2): 20.0
Cooling water required:
  Feed cooling water (m3/ton CO2): 8
  Condenser (m3/ton CO2): 42.5
  Lean cooler (m3/ton CO2): 41.5
  Scrubber (m3/ton CO2): 0.25
  CO2 product compressor intercooling (m3/ton CO2): 12.06
  Total cooling water required (m3/ton CO2): 105
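As a rough consistency check on Table 1 (a sketch assuming an MEA molar mass of 61.08 g/mol and a solution density of about 1 t/m3, neither of which is stated above), the solvent circulation implied by the loading swing comes out close to the reported 20.0 m3/ton CO2:

MW_CO2 = 44.01    # g/mol
MW_MEA = 61.08    # g/mol (assumed)

def solvent_flow_m3_per_ton_co2(lean, rich, mea_wt_frac=0.30, rho_t_per_m3=1.0):
    mol_co2 = 1.0e6 / MW_CO2                # mol CO2 in one tonne
    mol_mea = mol_co2 / (rich - lean)       # working capacity per pass
    mass_solution_t = mol_mea * MW_MEA / 1.0e6 / mea_wt_frac
    return mass_solution_t / rho_t_per_m3

print(f"{solvent_flow_m3_per_ton_co2(0.242, 0.485):.1f} m3/ton CO2")   # about 19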
IV.
CONCLUSIONS
The modelling work and simulation results have shown
that Aspen Plus with RADFRAC subroutine is a useful
tool for the study of CO2 absorption processes. The lean
solvent loading was found to have a major effect on the
process performance parameters such as the thermal
energy requirement. Therefore it is a main subject in the
optimisation of solvent processes. From the simulation
result it was found that the CO2 recovery ratio and the heat consumption for CO2 regeneration were 98% and 2.9 GJ/t-CO2, respectively.

REFERENCES
[1] IPCC (Feb 2007). IPCC Fourth Assessment Report, Summary for Policymakers, Climate Change 2007: The Physical Science Basis (WGI).
[2] IEA (2007). World Energy Outlook 2007: China and India Insights. Paris, International Energy Agency.
[3] Oexmann, J. and Kather, A. (2009). Post-combustion CO2 capture in coal-fired power plants: comparison of integrated chemical absorption processes with piperazine promoted potassium carbonate and MEA.
[4] Rao, A.B., Rubin, E.S. (2002). A technical, economic and environmental assessment of amine-based CO2 capture technology for power plant greenhouse gas control. Environ. Sci. Technol. 36, 4467–4475.
[5] Mariz, C.L. (1998). Carbon dioxide recovery: large scale design trends. J. Can. Pet. Technol. 37, 42–47.
[6] Chapel, D., Ernst, J., Mariz, C. (1999). Recovery of CO2 from flue gases: commercial trends. Can. Soc. Chem. Eng.
[7] Alie, C., Backham, L., Croiset, E., Douglas, P. (2005). Simulation of CO2 capture using MEA scrubbing: a flowsheet decomposition method. Energy Convers. Manage. 46, 475–487.
[8] Singh, D., Croiset, E., Douglas, P., Douglas, M. (2003). Techno-economic study of CO2 capture from an existing coal-fired power plant: MEA scrubbing vs. O2/CO2 recycle combustion. Energy Convers. Manage. 44, 3073–3091.
[9] IEA Greenhouse Gas R&D Programme (2004). Improvement in power generation with post-combustion capture of CO2. Report No. PH4/33.
Characterisation of Prestressed Concrete Sleepers Subjected to Static
Loading
Ramakanta Panigrahi1, Purna Chandra Mishra2 & Debashish Bhuyan3
1 Professor (Civil), Synergy Institute of Engineering & Technology, Dhenkanal, Odisha
2 Associate Professor (Mechanical), School of Mechanical Engineering, KIIT, Bhubaneswar
3 Lecturer (Civil), C V Raman College of Engineering, Bhubaneswar
Abstract— Contemporary knowledge has led to
two key paradoxical interests and concerns in the
railway engineering community. First, track
maintenance engineers pay more attention to the
observable cracks and damage caused to
concrete sleepers by the high-intensity
dynamic loading due to wheel irregularities
or defects in the rails. On the other hand,
structural engineers are concerned about
whether the concrete sleepers are overdesigned despite the fact that they possess large amounts of untapped reserve strength. The static behaviour of prestressed concrete sleepers is the first step toward better insight into the limit states concept of design. Static tests of the
sleepers also provide understanding of the
ultimate failure and energy absorption
mechanisms. In the present study, static,
repeated low-velocity and ultimate impact tests
on concrete sleepers have been performed. The
failure modes, maximum loads, visualized and
measured cracking loads of each section of
concrete sleeper tested have been summarized.
The load-carrying capacities of railway concrete
sleepers at rail seat and at the centre of sleepers
have been evaluated.
Keywords: concrete sleeper, static test, rail seat
I. INTRODUCTION
Prestressed concrete sleepers are a major part of ballasted railway tracks. The sleeper is the cross tie beam that distributes service loads from the rails to the supporting formation. A notion has long been established that concrete sleepers have a large redundant capacity. Nevertheless, premature cracking of concrete sleepers in India raised a widespread concern about their reaction to high intensity impact loads from wheel or rail irregularities. Railways play a major role in transporting population, resources and merchandise over a continent as large as India. The railway industry has grown significantly over the past century and has continuously developed new technology suitable as a particular solution to a specific local area. In India, the traditional ballasted track system of steel rails, rail pads, fasteners, and concrete sleepers laid on ballast and subgrade is still used widely, but the demand for transportation and logistics has increased greatly over recent years. Railway tracks in India have been deteriorating due not only to increased traffic, but also to heavier wheel loads and improper maintenance.

The problem of cracking sleepers and corollary damage is largely due to the high intensity loads from wheel or rail irregularities such as wheel burns, dipped joints, rail corrugation, or defective track stiffness. Although most problems in India were primarily associated with wheel defects, similar effects from rail abnormalities could also be found on the tracks [1]. The principal cause of cracking is the high magnitude of the loads, although the duration of the pulse also contributes to excessive flexural vibrations. These high frequency impacts commonly excite resonance in the tracks and sleepers. For instance, the first bending resonance of concrete sleepers would accelerate cracking at mid span, while the second and third bending modes enlarge the cracks at the rail seats. There was no report about whether those cracks were severe or detrimental to the structural condition of individual sleepers. They were a major concern, because cracks could be tolerated during the 50 year service life, as stated in the current (Standards India, 2008) permissible design. This is because of the lack of knowledge of the dynamic behaviour, failure, and residual capacity of concrete sleepers under severe impact loads. It was also found that using a very high impact factor in the current design (from 50% to 200%) did not prevent the sleepers from cracking [2]. This implied the need for further research related to the reaction of concrete sleepers under more realistic loads and surrounding conditions.

This study reviews our fundamental understanding of the dynamic characteristics of railway tracks and components, including the at-large loading conditions on railway tracks. The load carrying capacity and energy absorption mechanisms of
prestressed concrete sleepers under different loading
conditions are evaluated.
II. PRESENT STUDY
A. Static Load Modelling
Concrete sleepers have a major role in distributing axle loads to the formation. The axle loads could be considered static or quasi-static when train speeds are moderate [3]. In general, however, the axle loading tends to behave like dynamic impact pulses due to the continual ride over track irregularities and higher speeds. These dynamic effects then deteriorate the mechanical properties of the track components and undermine the load-carrying capacity of the concrete sleepers [4].
B. Impact Resistance of Concrete Sleepers
Train-track interactions during service normally generate substantial forces on railway tracks. Such forces are transient by nature, of relatively large magnitude, and are referred to as impact loading. No comprehensive review of the typical characteristics of the loading conditions for railway track structures, in particular impact loads due to wheel/rail interaction, has been published in the literature. The previous section presented a review of basic design concepts for railway tracks, abnormalities on tracks, and a variety of typical dynamic impact loadings imparted by wheel/rail interaction and irregularities. The characteristics of typical impact loads due to wheel and rail irregularities, e.g. rail corrugation, wheel flats and shells, worn wheel and rail profiles, bad welds or joints, and track imperfections, are presented with particular emphasis on the typical shapes of the impact load waveforms generally found on railway tracks.
As mentioned, railway track experiences multiple impact loadings. The behaviour of concrete sleepers under impact loads is of great significance for predicting their dynamic responses and resistance to impact loading. Although there have been an extensive number of impact investigations, the majority were performed on individual materials such as polymer, steel, or plain concrete [6], and on composite sections such as fibre reinforced concrete [7].
Impact tests on prestressed concrete structures were introduced by military forces and the nuclear industry [8]; the aim of that work was to investigate the effect of blast loads on prestressed concrete members. In the 1980s, impact testing of concrete sleepers was performed to investigate the flexural cracks noticed at the rail seats of over 50 percent of concrete sleepers in the United States, even though the sleepers had been in service for only a few months [9].
Cracks in concrete sleepers led to serious concerns about the sleepers' durability, serviceability, and load-carrying capacity.
Clearly, the major factors causing the cracking response of sleepers were the wheel/rail interactions, rail irregularities, or wheel defects. It was also found that those impact loads could be as much as three times greater than the usual quasi-static loads [2].
Although concrete sleepers are affected by dynamic loading, the practical design standards still rely on sectional analysis and the static behaviour of the sleepers. A number of publications have addressed dynamic wheel load factors, which allow the design calculations for concrete sleepers to be performed using quasi-static analysis; the associated notions of strength, ductility, stability, fracture mechanics, and so on mostly refer to the static behaviour.
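The quasi-static design idea above amounts to scaling a nominal static wheel load by a dynamic impact factor. A minimal sketch follows, using the 50% to 200% impact factors cited earlier; the 125 kN static wheel load is an assumed, illustrative value, not one from the tests.

```python
# Sketch: quasi-static design load = static wheel load * (1 + impact factor).
# The 125 kN static wheel load is an assumed, illustrative value.
static_wheel_load_kN = 125.0

for impact_factor_pct in (50, 100, 200):
    design_load_kN = static_wheel_load_kN * (1 + impact_factor_pct / 100.0)
    print(f"+{impact_factor_pct}% impact factor -> design wheel load "
          f"{design_load_kN:.0f} kN")
```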
In India, Standards India revised the conventional design of railway prestressed concrete sleepers and fastening assemblies. The maximum design flexural moments in sleepers can be calculated statically from the pressure distribution. It is found that the maximum positive moment occurs at the rail seat, whilst the maximum negative moment occurs at the middle of the sleeper. The maximum positive design bending moment at the rail seat (M_R+) for standard and broad gauge sleepers (g > 1.5 m) reads:
M_R+ = R(L - g)/8    (3.7)

For narrow gauge sleepers, the formula becomes:

M_R+ = R(L - g)/6.4    (3.8)

In contrast, the maximum negative design bending moment at mid-span (M_C-) for concrete sleepers with a track gauge of 1.6 m or greater is:

M_C- = 0.5[Rg - Wg(L - g) - W(2g - L)^2/8]    (3.9)

where W = 4R/(3L - 2g)    (3.10)

The design formula for sleepers with a track gauge of 1435 mm reads:

M_C- = R(2g - L)/4    (3.11)
If the track gauge is less than 1435 mm, the purchaser shall specify the negative design bending moment; however, for sleepers with a track gauge of 1067 mm, it should not be less than 14 kNm. The concept of permissible stresses has governed the design since the 2003 release. The Standard also allows that the sleeper section need not be checked for stresses other than flexural stresses (e.g., shear) if the design complies with all clauses of the Standard. It is noteworthy that, for prestressed concrete sleepers, the influence of the dead load can be ignored and the design load can be expressed by the wheel load alone [5].
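To make the formulas above concrete, here is a minimal sketch that evaluates Eqs. (3.7)-(3.11), assuming R is the rail seat load in kN and L (sleeper length) and g (track gauge) are in metres; the example values are illustrative, not figures from the Standard.

```python
# Sketch of the design bending-moment formulas, Eqs. (3.7)-(3.11).
# Inputs (assumed units): R in kN, L and g in metres; results in kNm.

def railseat_positive_moment(R, L, g, broad_gauge=True):
    """Maximum positive design moment at the rail seat, Eq. (3.7) or (3.8)."""
    return R * (L - g) / (8.0 if broad_gauge else 6.4)

def midspan_negative_moment(R, L, g):
    """Maximum negative design moment at mid-span, Eqs. (3.9)-(3.10),
    for a track gauge of 1.6 m or greater."""
    W = 4.0 * R / (3.0 * L - 2.0 * g)                     # Eq. (3.10)
    return 0.5 * (R * g - W * g * (L - g) - W * (2.0 * g - L) ** 2 / 8.0)

def midspan_negative_moment_1435(R, L, g=1.435):
    """Mid-span negative design moment for 1435 mm gauge, Eq. (3.11)."""
    return R * (2.0 * g - L) / 4.0

# Illustrative numbers only: a 250 kN rail seat load on a 2.5 m sleeper.
print(railseat_positive_moment(250.0, 2.5, 1.6))   # broad gauge, Eq. (3.7)
print(midspan_negative_moment_1435(250.0, 2.5))    # 1435 mm gauge, Eq. (3.11)
```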
The impact investigations were usually aimed at understanding how much energy is consumed in fracture, how many blows of a multiple-impact sequence cause the first crack or a prescribed level of distress, and how extensive the damaged zone is under various conditions [10]. These aspects reflect the toughness and resistance of such systems under extreme loading.
Impact testing devices have been developed over more than two decades. Two types of device, each with its own physical features, have been used: the drop-weight hammer and the pendulum machine. The drop-weight hammer, the most common technique, has been adopted worldwide for impact testing of concrete structures [6]. In that test, the number of blows required to create the first visible crack is reported in addition to the ultimate impact force. This technique proved versatile for railway engineering research, as it can simulate both single and repeated impact loading on actual tracks [2]. Later, the drop-weight hammer technique was extended to various research topics, for instance investigations of bending-failure-mode and shear-failure-mode reinforced/prestressed concrete beams.
III. STATIC TEST RESULTS (NEGATIVE MOMENT TEST AT MID SPAN)
A. Testing
In this section, the results of static testing of prestressed concrete sleepers are presented. The load-carrying capacities of railway concrete sleepers at the rail seat and at the centre of the sleepers are highlighted. At the centre section, the negative bending moment was applied through a four-point-load bending test; a similar setup was adopted in the positive bending moment test program at the rail seat section. The testing programs were designed in accordance with IS1085.14-2003 Prestressed concrete sleepers and IS1085.19-2001 Resilient fastening assemblies (Standards India, 2001; 2003). It should be noted that the RAYALSEEMA broad gauge sleeper was employed in both the negative and positive bending moment tests. All tests were performed on full-scale sleepers, without cutting, scaling, dividing, or otherwise adjusting the sleepers. The detailed experimental program is presented below, and the failure mechanisms, crack propagation and post-failure behaviour of the concrete sleepers are discussed.
Based on the available data from the open literature, the design moments of the prestressed concrete sleepers are as follows.
RAYALSEEMA:
Broad gauge sleeper:
Rail seat section: +23.4 kNm / -15.7 kNm
Middle section: +9.4 kNm / -17.9 kNm
Narrow gauge sleeper:
Rail seat section: +25.0 kNm / -18.0 kNm
Middle section: +19.0 kNm / -18.0 kNm
The experiments are aimed at underpinning the data from the sectional analyses (by manual calculation and by a computer package; MS&C Lab report, 2008) as well as understanding the static ultimate behaviour of the railway prestressed concrete sleepers. It is believed that the prestressed concrete sleepers were cast under high quality control, which results in consistent properties of the sleepers in each casting batch. Two patterns of load-carrying capacity are of structural-design interest and are evaluated: the first is the maximum negative moment of the sleeper, which corresponds to the middle section; the second is the maximum positive moment of the sleeper, at the rail seat. The test setups complied with IS1085.14-2003, and the strain measurements on the top and bottom fibres at the surface of the concrete sleepers were performed according to its requirements.
B. Negative Moment at Mid Span
Figures 1 and 2 show the experimental setup and instrumentation for the negative moment test at the middle section. The strain gauges were installed 10 mm from the top and bottom surfaces at the centre of the sleeper. A linear variable displacement transformer (LVDT) was used to measure deflection at the load point. The rotations at the supports, which represent the gauge rotations, were measured using inclinometers. The test program was carried out under displacement control at a loading rate of approximately 10 kN/min, as prescribed by IS1085.14. The equipment used in these tests included: an LVDT at middle span, inclinometers at the rail seat supports, strain gauges and wires at the top and bottom fibres, a load cell, a loading frame, a data logger, and electronic load control.
The maximum experimental load was 133 kN, which is equivalent to a mid-span bending moment of about 45 kNm. The hand calculations showed very good agreement with the experimental results: the ultimate load from the hand calculations (general prestressed concrete theory) is 132 kN (or a bending moment of 44.8 kNm), whereas an ultimate resistance of 139 kN (or a bending moment of 47 kNm) was predicted by Response-2000 [1].
Figure 1. Experimental setup for centre negative moment test at MS&C Lab, Kolkata, India
Figure 2. Instruments used in the test (inclinometer and LVDT)
Figure 3. Load deflection curve of centre negative moment test
Figure 4. Moment deflection curve of centre negative moment test
The load-deflection relation is presented in Figure 3, and the moment-deflection curve is shown in Figure 4. The crack initiation load was detected visually during each test as well as determined from the load-deflection relation. Crack initiation was defined as the intersection between the load-deflection relations in stages I and II, as shown in Figure 5. This simplified definition was employed to obtain a consistent method for determining the crack initiation load [3]. The method gives a slightly higher cracking load than the first deviation point from the linear elastic part of the load-deflection relationship. Comparisons of the measured and visually determined crack initiation loads showed very good agreement: the visually determined crack initiation load was about 79 kN, while the measured one was about 75 kN.
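A small sketch of this stage I/II intersection rule follows. The slopes and intercepts are hypothetical straight-line fits to the two stages (load = a*d + b, in kN and mm), chosen so the result lands near the reported 75 kN; they are not fitted to the actual test data.

```python
# Sketch: crack initiation load as the intersection of straight-line fits
# to stages I and II of the load-deflection curve (load = a*d + b).
a1, b1 = 40.0, 0.0     # stage I fit: slope kN/mm, intercept kN (assumed)
a2, b2 = 12.0, 52.5    # stage II fit (assumed)

d_cr = (b2 - b1) / (a1 - a2)   # deflection (mm) where the two fits cross
P_cr = a1 * d_cr + b1          # corresponding cracking load (kN)
print(f"Crack initiation at about {P_cr:.0f} kN and {d_cr:.2f} mm")
```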
Figure 5. Measured cracking load of centre negative moment test
C. Energy Absorption Characteristics
Energy absorption characteristics reflect how much of the work done by external forces a structure can dissipate. The energy absorption capacity can be computed as the area under the load-deflection curve. This energy absorption leads to prediction of the forces and displacements that could significantly affect the structure. In this study, the energy absorption capacity was calculated using a direct integration method, Newmark's Beta (0.5), i.e. the so-called trapezoidal rule. The energy absorption capacity of the RAYALSEEMA concrete sleeper from the centre negative moment test is shown in Figure 6. This graph indicates the amount of inelastic deformation energy absorbed by the sleeper at different deformation levels.
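A minimal sketch of the trapezoidal-rule calculation described above follows; the load-deflection pairs are illustrative readings, not the recorded test data.

```python
# Sketch: energy absorption = area under the load-deflection curve,
# integrated with the trapezoidal rule. Data points are illustrative.
deflection_mm = [0.0, 2.0, 4.0, 6.0, 8.0]
load_kN = [0.0, 40.0, 80.0, 115.0, 133.0]

energy_J = 0.0
for i in range(1, len(load_kN)):
    d_m = (deflection_mm[i] - deflection_mm[i - 1]) / 1000.0   # mm -> m
    mean_N = 0.5 * (load_kN[i] + load_kN[i - 1]) * 1000.0      # kN -> N
    energy_J += mean_N * d_m
print(f"Absorbed energy: about {energy_J:.0f} J")
```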
D. Rotational Capacity
Excessive rotation of the rails is a main source of derailment of rolling stock. To determine these rotations, inclinometers were mounted at both supported ends. The rotational capacity under applied load and moment is presented in Figures 7 and 8, respectively. It was found that the left- and right-hand side rotations were identical before the sleeper failed. The maximum angle of rotation at failure was about 1 degree, at which the maximum static load-carrying capacity of the mid-span cross section was reached.
Figure 6. Energy absorption characteristic from centre negative moment test
Figure 7. Load rotation relation from centre negative moment test
The bending failure of the tested concrete sleeper occurs at the first peak in the load-deflection curve, as can be seen in Figure 3. At the first peak, the concrete at the top fibre starts to crush while major bending cracks arise from the bottom fibre. Sudden failure could be observed right after the load approached the peak capacity. The remaining uncracked portion of concrete and the yielding wires could still sustain the applied load until the first wire snapped. The failure mechanism is described in detail in a later section. Figure 6 shows that the maximum energy absorbed by the sleeper prior to the brittle failure is about 1,800 J. In contrast, only 100 J of energy is needed to generate cracking in the mid-span section of the tested concrete sleeper. It should be noted that the total energy absorbed after fracture was mostly due to the high strength prestressing wires.
Figure 8. Moment rotation relation from centre negative moment test
The angle of rotation associated with the first cracking of the mid-span cross section of the test specimen is about 0.2 degree. It should be noted that the angle of rotation at first cracking of the concrete sleepers implies the allowable angle
of rotation designed in accordance with IS1085.14
(2003).
E. Stress Strain Curves
Four strain gauges were installed to determine the strain behaviour at the top and bottom fibres. The stress-strain curves for compressive and tensile strains at the top and bottom fibres, respectively, are displayed in Figures 9 and 10. The corresponding stresses due to the applied bending moment can be calculated using the bending stress function (My/I) based on the neutral axis of the gross section. From the experiments, it is clear that the crushing and spalling of concrete occurred within the top-fibre compressive zone of the sleepers under the applied load.
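A minimal sketch of the My/I conversion mentioned above follows; the section properties are assumed, illustrative values rather than the tested sleeper's actual geometry.

```python
# Sketch: extreme-fibre bending stress, sigma = M*y/I, on the gross section.
M = 45.0e3    # applied bending moment, N*m (about the mid-span ultimate)
y = 0.10      # distance from neutral axis to extreme fibre, m (assumed)
I = 1.7e-4    # gross second moment of area, m^4 (assumed)

sigma_MPa = M * y / I / 1.0e6
print(f"Extreme-fibre bending stress: about {sigma_MPa:.1f} MPa")
```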
Figure 9. Compressive stress-strain curve of centre negative moment test
Figure 10. Tensile stress-strain curve of centre negative moment test
It was found that the ultimate compressive strain of the concrete was about 0.004 before the brittle failure. As high-strength concrete is usually used in the manufacture of concrete sleepers, a compressive strain of 0.004 is relatively high compared with the ultimate strain of normal strength concrete (20 to 40 MPa), which is around 0.003 [3]. The strain records then changed sign as the concrete burst in tension.
F. Mode of Failure
A visible vertical crack due to pure bending at the mid-span initially appeared at a load of 79 kN. The concrete sleeper failed in flexure at the ultimate load of 133.3 kN, which resulted in a 16 mm deflection at middle span and about 1 degree of rotation at both ends. At this ultimate state, the sleeper had absorbed about 1,500 J of energy. The detailed pre- and post-failure mechanisms are shown in Figures 11 and 12. The moment-curvature relationship of a concrete sleeper can be found from the end rotations through structural theory [3]. Figure 13 illustrates the moment-curvature relationship based on the rotation measurements; alternatively, it can be computed from the linear strain diagram (over the 170 mm gauge distance), as shown in Figure 14. It should be noted that these relations are on a linear deformation basis; nevertheless, the results were in quite good agreement over the pre-failure loading range.
The ultimate tensile strain of the concrete before cracking was found to be about 0.0004, which is about 10 percent of the ultimate compressive strain. It was found that, once the strain of the concrete exceeded this tensile strain, the concrete started cracking. The strains then changed sign due to the shrinkage of the concrete after cracking in tension.
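A minimal sketch of the curvature calculation from the linear strain diagram follows, using the 170 mm gauge distance from the text; the strain values are illustrative assumptions.

```python
# Sketch: curvature from a linear strain diagram between two gauge levels,
# kappa = (eps_top + eps_bottom) / gauge_distance. Strains are illustrative.
eps_top_compressive = 0.0030   # compressive strain at top gauge (assumed)
eps_bottom_tensile = 0.0004    # tensile strain at bottom gauge (assumed)
gauge_distance_m = 0.170       # gauge distance quoted in the text

curvature_per_m = (eps_top_compressive + eps_bottom_tensile) / gauge_distance_m
print(f"Curvature: about {curvature_per_m:.3f} 1/m")
```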
Figure 11. Initial flexural cracks at middle span of Rayalseema broad gauge sleeper
IV. STATIC TEST RESULTS (POSITIVE MOMENT TEST AT RAIL SEAT)
A. Testing
The schematic diagram of the experimental setup for the rail seat positive moment test, available at the Mechanical Systems & Control Laboratory, Kolkata-32, India, is shown in Figure 15. The setup is similar to that of the previous test; however, the measurements were taken at the centre line of the rail seat instead. In this test the inclinometers were also used, and the other rail seat was preloaded and clamped.
Figure 12. Failure of middle span of Rayalseema broad gauge sleeper
Figure 15. Experimental setup for rail seat positive moment test
The test was carried out at the same loading rate (about 10 kN/min). The equipment required in these tests included: an LVDT for sleeper deflections at the loading line; strain gauges and wires installed at the loading line and at middle span for obtaining both top- and bottom-fibre strains; a load cell; laser deformation measurement of the loading steel column; inclinometers at the supports; a data logger; and electronic load control. A vertical seat was used to keep the loading path in the vertical plane. The maximum experimental load was found to be 585 kN, which is equivalent to a bending moment of about 63 kNm. Shear strength deficiency governed the observed failure mode. The ultimate load predicted by Response-2000 [1] was 539 kN (or a bending moment of 58 kNm).
Figure 13. Moment curvature relation of negative middle section - end rotations
Figure 14. Moment curvature relation of negative middle section - strain diagram
B. Load Deflection Relationship
The load-deflection relation for the rail seat cross-section of the tested concrete sleeper is presented in Figure 16, whereas the moment-deflection curve can be seen in Figure 17. The crack initiation load was detected visually during each test as well as determined from the load-deflection relation. Crack initiation was defined as the intersection between the load-deflection relations
of the average data in stages I and II, as shown in Figure 18. Comparisons of the measured and visually determined crack initiation loads showed quite good agreement: the crack initiation load determined by visual observation was about 240 kN, while the measured one was about 235 kN.
At the very beginning of loading at the rail seat, the vertical displacement of the rail seat was linearly proportional to the applied load and bending moment, up until the tensile strain at the bottom fibre almost reached the ultimate tensile strength. Once the strain reached the tensile strength, the concrete started cracking and a nonlinear relation between load and deflection appeared. When the top-fibre strain of the concrete approached the ultimate compressive strength, fracture of the concrete occurred.
Figure 18. Measured cracking load of rail seat positive moment test
C. Energy Absorption Characteristics
As aforementioned, the energy absorption capacity reflects how much of the work done by external forces the structure can dissipate. The energy absorption characteristics determined from the rail seat positive moment test are shown in Figure 19. It should be noted that failure is indicated when the major fracture of concrete occurs at the top fibre of the rail seat. At fracture, the energy required to deform the sleeper rail seat vertically to about 6.5 mm was about 2,000 J, slightly higher than that of the mid-span cross section. On the other hand, 250 J of energy is required to cause cracking in the rail seat of the tested concrete sleeper.
Figure 16. Load deflection curve of rail seat positive moment test
Figure 17. Moment deflection curve of rail seat positive moment test
Figure 19. Energy absorption characteristic from rail seat positive moment test
D. Rotational Capacity
In this rail seat test, the inclinometers were mounted coincident with both supports. Although the rotations at these setup supports play an insignificant role on the rail gauge, they provide important information about the curvature at the inflection point between the positive bending moment at the rail seat and the negative bending moment at mid-span. The rotational capacity under applied load is presented in Figure 20. It was found that the left- and right-hand side rotations were almost identical before the sleeper failed. The angle of rotation associated with the fracture of the rail seat section is about 0.8 degree, and the allowable angle of rotation that causes cracking at the rail seat is found to be about 0.1 degree. It should be noted that the angles of rotation causing cracking and failure at the rail seat of the tested concrete sleeper are smaller than those at mid-span. This is because the shear span ratio of the rail seat test setup is much smaller than that of the mid-span test setup.
Figure 20. Load rotation relation from rail seat positive moment test
E. Load Strain Curve
Six strain gauges were installed to determine the strain behaviour at the top and bottom fibres of both the rail seat and middle sections. The load-strain curves for compressive and tensile strains at the top and bottom fibres of the rail seat section are displayed in Figures 21 and 22. From the experiments, it was found that crushing and spalling of concrete did not occur in the top-fibre compressive zone when the sleeper failed. The maximum compressive strain at the top fibre of the rail seat was 0.0012. The load-strain curves showed that the ultimate strains were not reached, which implies that the failure mode was not purely flexural. It was also found that the maximum tensile strain of the concrete at the bottom fibre was about 0.001, at which the ultimate tensile strain of the concrete was reached and cracking occurred.
Figure 21. Compressive stress-strain curve of rail seat positive moment test
Figure 22. Tensile stress-strain curve of rail seat positive moment test
F. Mode of Failure
Interestingly, visible vertical cracks due to pure bending initially occurred at the surfaces of the sleeper's rail seat at about 240 kN load. However, the concrete sleeper failed due to deficient shear strength at the ultimate load of 583 kN, at which there was nearly 7 mm of deflection at the rail seat centre. At this ultimate state, the sleeper had absorbed about 2,000 J of energy. Diagonal shear cracks initiated at 525 kN load and dominated until the sleeper failed. It can be seen that the failure
mode is one of combined shear-bending damage. The moment-curvature relation of the concrete sleeper at the rail seat section can be approximated from the end rotations, as shown in Figure 23. The moment-curvature relations were promising during the pre-failure stage.
Figure 23. Moment curvature relation of positive rail seat section - end rotations

V. CONCLUSIONS
Railway sleepers are an important part of railway infrastructure, distributing axle loads to the ground. Investigating the static behaviour of concrete sleepers is the first step towards a better insight into the limit states concept of design. The emphases in this evaluation are placed on the determination of the maximum positive and negative bending moments of the concrete sleepers at the rail seat and mid-span, respectively. Applying a load at the rail seat produces the maximum positive moment, whilst the ultimate negative flexural moment can be found by pushing the sleeper at the middle span in the opposite direction. From the experiments, the failure modes, maximum loads, and visually determined and measured cracking loads of each section of the concrete sleeper tested under static load were found. It is found that the failure mode of the sleeper under hogging moment at mid-span tends to be a flexural failure, whilst the failure mode under sagging moment at the rail seat appears to be a combined shear-flexural failure.

REFERENCES
[1] Grassie, S.L. (1987), "Measurement and attenuation of load in concrete sleepers", Proceedings of Conference on Railway Engineering, Perth, September 14-16, pp. 125-130.
[2] Wang, N. (1996), "Resistance of Concrete Railroad Ties to Impact Loading", Ph.D. Thesis, Department of Civil Engineering, University of British Columbia, Canada.
[3] Gustavson, R. (2000), "Static and dynamic finite element analyses of concrete sleepers", Licentiate of Engineering Thesis, Department of Structural Engineering, Chalmers University of Technology, Sweden.
[4] Kaewunruen, S., and Remennikov, A. (2004), "A state-of-the-art review report on vibration testing of ballasted track components", July-Dec Research Report, CRC Railway Engineering and Technology, Australia, December, 20p.
[5] Wakui, H., and Okuda, H. (1999), "A study on limit-state design method for prestressed concrete sleepers", Concrete Library of Japan Society of Civil Engineers, 33(1): 1-25.
[6] Banthia, N.P., Mindess, S., and Bentur, A. (1987), "Impact behaviour of concrete beams", Materials and Structures, 20, 293-302.
[7] Abrate, S. (1998), "Impact on Composite Structures", Cambridge University Press.
[8] Remennikov, A.M. (2003), "A Review of Methods for Predicting Bomb Blast Effects on Buildings", Journal of Battlefield Technology, 6(3), 5-10.
[9] Grassie, S.L. (1996), "Models of railway track and train-track interaction at high frequencies: Results of benchmark test", Vehicle System Dynamics, 2 (Supplement): 243-262.
[10] ACI Committee 544 (1990), "Measurement of properties of fibre reinforced concrete", ACI Manual of Concrete Practice Part 5, 544.2R-6.
REVIEW ON ACTIVE VIBRATION CONTROL OF PLATES AND
SHELLS
S.Behera, S.S.Hota
Department of Civil Engineering
Synergy Institute of Engineering and Technology
Dhenkanal, Odisha – 759001, India
E-mail: [email protected]
Abstract - The direct and converse effects of piezoelectric materials, as sensor and actuator respectively, have been exploited immensely to control the lower-frequency vibration of structural elements in aerospace, hydrospace and automobile engineering. Though this active control of vibration is advantageous over passive control using visco-elastic layers, it has given rise to complex analytical methods to predict the behaviour of smart plates and shells comprising isotropic or laminated composites. As no holistic review of the existing analytical techniques is available in the literature, this paper focuses on the various analytical and semi-analytical methods used to predict the structural performance of host plates and shells integrated with various vibration control schemes, including active control, passive control, hybrid control and control by magnetostrictive layers, under the relevant sections. The plates and shells have integrated or embedded piezo-patches in either distributed or continuous form. Important conclusions for further research work have also been incorporated.
Keywords - piezoelectric shells, FEM, control algorithm, constrained layer damping
I. INTRODUCTION
After the discovery of the piezoelectric phenomenon in the 1880s by the Curie brothers, it found widespread application in many fields of engineering. The fundamental equations were put forward by Voigt. The direct and converse effects of piezoelectricity enable the materials to be used as actuators and sensors, respectively, in active adaptive structures. As these are light in weight and easy to apply, no change in the geometric and mechanical properties of the host structures is experienced. Application of forces generates an electric field in these materials, which is called the direct effect; this property enables the material to be used as a sensor to measure structural deformation. The converse effect offers the scope to use the materials as actuators for controlling vibration. A few examples of these smart structures are fuel injectors, resonators, and applications in hydroacoustics and aerospace structures. Actuators and sensors are either integrated as layers with the host structure or surface bonded to it. Depending on the arrangement of patches or layers of piezo sensors and actuators on isotropic and composite plates and shells, several mathematical models based on classical theory, semi-analytical methods and the finite element method have been propounded. These models are presented in chronological order in the subsequent section.
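The direct and converse effects described above are commonly written as a coupled pair of constitutive relations. The one-dimensional sketch below uses PZT-order material constants; the values are illustrative assumptions, not properties of any specific material discussed in this review.

```python
# Sketch: 1-D piezoelectric constitutive relations.
#   S = s_E * T + d * E      (strain: converse effect enters via d * E)
#   D = d * T + eps_T * E    (electric displacement: direct effect via d * T)
s_E = 16e-12      # elastic compliance at constant field, 1/Pa (assumed)
d = 390e-12       # piezoelectric charge constant, m/V (assumed)
eps_T = 1.5e-8    # permittivity at constant stress, F/m (assumed)

T = 1.0e6         # applied stress, Pa  (sensing: direct effect)
E = 2.0e5         # applied field, V/m  (actuation: converse effect)

S = s_E * T + d * E      # total strain
D = d * T + eps_T * E    # electric displacement (charge per unit area)
print(f"strain = {S:.3e}, electric displacement = {D:.3e} C/m^2")
```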
II. CHRONOLOGICAL REVIEW
(1990) Tzou & Tseng prepared a new thin piezoelectric plate/shell finite element with two internal degrees of freedom for dynamic measurement and
control of vibration in a structure with distributed integrated piezo sensors and actuators.
(1993) Miller et al. developed a design strategy for fully anisotropic plates with distributed sensors and actuators based on stability criteria.
(1997) Pan & Hansen modified Flugge's shell equations to include a linear inertia term, so as to enable the model to be used for active control of vibration transmission in a cylindrical shell.
Pietrzakowski used an improved model for the distribution of electric potential across the thickness. Instead of a linear variation, he adopted a half-cosine plus linear variation, assumed so as to take the deformation of the actuator into consideration. The in-plane distribution was determined from the solution of the governing electro-mechanical equations.
(1997) Chen et al. presented an isoparametric thin plate element with 4 nodes and 16 degrees of freedom in total, including a lone degree of freedom for electric potential, for studying vibration control of laminated structures.
(1998) Wang & Vaicatis used Love's shell theory to predict vibration and noise control in a double-walled composite core cylinder with a pair of actuators. A Galerkin-type solution was obtained for the equations of motion.
(1999) Park & Baz demonstrated the superiority of active constrained layer damping over passive constrained layer damping, both numerically and experimentally. The finite element model used Kirchhoff's plate theory for the viscoelastic layer.
(1999) Celentano & Setola presented a model for a beam-like structure bonded with piezoelectric plates. The mechanical behaviour was modeled using the finite element method, whereas the electrical behaviour was modeled by an RLC circuit.
(2000) Baz & Chen studied vibration control in a thin cylindrical shell treated with active constrained layer damping, using distributed parameter modeling. Spill-over problems were eliminated by use of a boundary control strategy, and the effectiveness of this strategy for controlling vibration over broad frequency bands was demonstrated.
(2001) Laminated plates and shells treated with piezo-ceramic distributed actuators and sensors were studied by Narayan & Balamurugan for vibration control, using a nine-noded shear flexible element with full Gauss quadrature.
(2001) Chantalakhana & Stanway experimentally demonstrated the utility of model reduction and model updating to control spill-over effects in a clamped-clamped plate with constrained layer damping.
(2001) He et al. presented a finite element model for a functionally graded aluminium plate with integrated sensing and actuating layers. The element is 4-noded with 6 degrees of freedom.
(2001) Park & Baz developed two finite element models, based on layerwise and classical laminate theory, to simulate the interaction between the base plate, the piezo layers, the viscoelastic layer and the control laws. The elements used have four nodes, with five degrees of freedom per node for the layer-wise
laminate theory and seven degrees of freedom for classical laminate theory. The element based on layer-wise laminate theory proved to be computationally efficient.
(2002) Piezo isotropic and orthotropic stiffened plates were studied by Mukherjee et al. by employing the finite element method and a velocity feedback algorithm. The stiffeners need not run along the mesh lines. The element possesses three mechanical and one electrical degree of freedom. The coupling between the direct and converse piezoelectric effects is neglected.
(2002) Correia et al. presented a semi-analytical conical shell finite element with five degrees of freedom at each node, having higher order shear deformation. A layer-wise discretization across the thickness was also carried out.
(2002) Yaman et al. developed a finite element model of a smart plate in ANSYS, with some experimental inputs, to find the optimum locations for sensors and actuators.
(2003) Kulkarni & Bajoria developed an 8-noded shell element with 10 degrees of freedom for adaptive structures having distributed actuators and sensors. The displacement fields were different for the in-plane and transverse directions, and the warping effect was considered by taking parabolic shear deformation.
(2003) Narayanan & Balamurugan presented finite element modeling for plates and shells using a 9-noded element. A linear quadratic regulator approach was found more effective for vibration control. Electro-mechanical coupling along with pyroelectric effects is included in the study of responses to impact, harmonic and random excitation.
(2004) Abreu et al. presented a finite element model of a thin Kirchhoff plate with four nodes, bonded on the surface to piezo sensors and actuators, to study the static and dynamic responses of composite plates.
(2004) Mota et al. developed a 3-noded triangular piezo-laminated finite element with 18 degrees of freedom for structures using negative feedback control.
(2005) Mota et al. developed a 3-noded triangular element with 24 degrees of freedom to study the dynamic response of piezo-laminated structures using third order shear deformation theory.
(2005) Mallik & Ray presented exact solutions to study the performance of a layer of piezo fibre reinforced composite material as a distributed actuator for smart composite plates. The reinforcement was in the longitudinal direction to simulate the bending modes and to increase the piezo coefficients.
Some idealized structures, such as the so-called piezoelectric infinite cylinder under external load, were studied by Rajapakse and by Rajapakse and Zhou. They used Fourier integral transforms to derive an analytic solution for an infinite piezoelectric cylinder and an infinite composite cylinder subjected to axisymmetric electromechanical loading. Those works established a concise coupling effect between the mechanical and the electrical fields. Rajapakse et al. present a theoretical study of a piezoelectric annular finite cylinder.
The free vibration problem is not so easy to solve because it is highly dependent on the mesh parameter. We see good agreement for most of the frequencies when the results are compared with those obtained with ANSYS. The bending frequencies converge, and when compared with commercial codes the results are close for the lower frequencies. Comparing the results for the first
bending frequency using the present model and a commercial code, the result is practically the same (present model: 204.32 Hz; commercial code: 203.79 Hz). Another feature of this work is the emphasis placed on the study of the coupling formulation (the coupling between harmonics described before) and how relevant it can be in this kind of problem; for example, a difference between coupled and uncoupled results of about 12.6% was found.
Assuming a semi-analytical displacement field provides more accurate solutions and uses less computational time compared with three-dimensional commercial computer programs. Less computational time is required by reducing the three-dimensional mesh to a bi-dimensional one, but the problem still remains three-dimensional.
A pioneering work is due to Allik and Hughes, who analysed the interactions between electricity and elasticity by developing a tetrahedral finite element. Shape control and dynamic control of structures are some of the current applications of the "intelligent structures" described by Crawley and de Luis. Recent surveys can be found in Senthil et al., Benjeddou, and Correia et al.
Several researchers have carried out the modeling of composite structures containing piezolaminated sensors and actuators using finite element formulations. Ha et al. used an eight-noded brick finite element for the static and dynamic analyses of laminated structures containing piezoelectric ceramics subjected to both mechanical and electrical loadings. Samanta et al. developed an eight-noded quadratic isoparametric finite element for the active vibration control of laminated plates with distributed piezoelectric sensors and actuators; the active vibration control capability is studied using a simple algorithm with negative velocity feedback. Chen et al. derived a finite element for dynamic analysis of plates with first-order shear deformation theory displacement fields, where the active control is obtained from the actuator potential given by an amplified signal of the sensor potential. Lam et al. developed a finite element model based on the classical laminated theory for the active vibration control of composite plates containing piezoelectric layers acting as distributed sensors and actuators; this model uses the Newmark method to calculate the dynamic response of the laminated plate. Reddy presents a detailed theoretical formulation, the Navier solution and finite element models based on the classical and shear deformation plate theories for the analysis of laminated composite plates with integrated sensors and actuators; a simple negative velocity feedback control algorithm coupling the direct and converse piezoelectric effects is used to actively control the dynamic response of an integrated structure through closed-loop control. Bohua and Huang derived an analytical formulation for modeling the behaviour of laminated composite plates with an integrated piezoelectric sensor and actuator, using the first-order shear deformation theory. Correia et al. have developed an axisymmetric shell model which combines an equivalent single-layer higher order shear deformation theory, to represent the mechanical behaviour, with a layerwise discretization in the thickness direction to represent the distribution of the electrical potential of each piezoelectric layer.
(2007) A bending and forced vibration finite element analysis of laminated circular cylindrical shells was carried out by Santos et al.
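Since negative velocity feedback recurs throughout this review, a minimal single-mode sketch follows: one vibration mode of a smart plate is idealized as a mass-spring-damper, and the actuator force opposes the sensed velocity. All parameter values are illustrative assumptions, not data from any of the cited models.

```python
# Sketch: negative velocity feedback on a single structural mode.
# Control force f = -G * v adds effective damping (c -> c + G).
m, c, k = 1.0, 0.2, 400.0   # modal mass, damping, stiffness (assumed)
G = 2.0                     # velocity feedback gain (assumed)

x, v, dt = 0.010, 0.0, 1.0e-4   # 10 mm initial displacement, time step (s)
for _ in range(20000):           # integrate 2 s of free response
    f_control = -G * v                     # negative velocity feedback
    a = (f_control - c * v - k * x) / m    # Newton's second law
    v += a * dt                            # explicit Euler update
    x += v * dt
print(f"Displacement after 2 s: {x * 1000:.3f} mm")
```

With the gain set to zero, the response decays only through the structural damping c; increasing G shortens the decay time, which is the essence of the active damping schemes surveyed above.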
REFERENCES
[1] Newmark NM, "A method of computation for structural dynamics", J Eng Mech Div, vol. 3, 1959, pp. 67-94.
[2] Allik H, Hughes T., "Finite element method for piezoelectric vibration", Int J Numer Meth Eng, vol. 2, 1970, pp. 151-7.
[3] Crawley EF, de Luis J., "Use of piezoelectric actuators as elements of intelligent structures", AIAA J, vol. 25(10), 1987, pp. 1373-85.
[4] H.S. Tzou, C.I. Tseng, "Distributed piezoelectric sensor/actuator design for dynamic measurement/control of distributed parameter systems: A piezoelectric finite element approach", Journal of Sound and Vibration, vol. 138(1), 1990, pp. 17-34.
[5] Ha KS, Keilers C, Chang FK, "Finite element analysis of composite structures containing distributed piezoceramic sensors and actuators", AIAA J, vol. 30(30), 1992, pp. 772-80.
[6] C.Y. Wang, R. Vaicatis, "Active control of vibrations and noise of double wall cylindrical shells", Journal of Sound and Vibration, vol. 216(5), 1998, pp. 865-888.
[7] S.E. Miller, H. Abramovich, Y. Oshman, "Active distributed vibration control of anisotropic piezoelectric laminated plates", Journal of Sound and Vibration, vol. 183(5), 1995, pp. 797-817.
[8] G. Celentano, R. Setola, "The modeling of a flexible beam with piezoelectric plates for active vibration control", Journal of Sound and Vibration, vol. 223(3), 1999, pp. 483-492.
[9] Chen CQ, Wang XM, Shen YP, "Finite element approach of vibration control using self-sensing piezoelectric actuators", Comput Struct, vol. 60(3), 1996, pp. 506-12.
[10] Rajapakse RKND, "Electroelastic response of a composite cylinder with a piezoceramic core", SPIE Smart Mater Conf, 1996, pp. 684-94.
[11] S.H. Chen, Z.D. Wang, X.H. Liu, "Active Vibration Control and Suppression for Intelligent Structures", Journal of Sound and Vibration, vol. 200(2), 1997, pp. 167-177.
[12] Reddy JN, "On laminate composite plates with integrated sensors and actuators", Eng Struct, vol. 21, 1997, pp. 568-93.
[13] Lam KY, Peng XQ, Liu GR, Reddy JN, "A finite element model for piezoelectric composite laminates", Smart Mater Struct, vol. 6, 1997, pp. 583-91.
[14] Rajapakse RKND, Zhou Y, "Stress analysis of piezoceramic cylinders", Smart Mater Struct, vol. 6, 1997, pp. 169-177.
[15] X. Pan, C.H. Hansen, "Active control of vibration transmission in a cylindrical shell", Journal of Sound and Vibration, vol. 203(3), 1997, pp. 409-434.
[16] D.T. Detwiler, M.-H.H. Shen, V.B. Venkayya, "Finite element analysis of laminated composite structures containing distributed piezoelectric actuators and sensors", Finite Elements in Analysis and Design, vol. 20, 1995, pp. 87-100.
[17] C.H. Park, A. Baz, "Vibration control of bending modes of plates using active constrained layer damping", Journal of Sound and Vibration, vol. 227(4), 1999, pp. 711-734.
[18] Senthil VG, Varadan VV, Varadan VK, "A review and critique of theories for piezoelectric laminates", Smart Mater Struct, vol. 76, 1999, pp. 347-63.
[19] A. Baz, T. Chen, "Control of axi-symmetric vibrations of cylindrical shells using active constrained layer damping", Thin-Walled Structures, vol. 36, 2000, pp. 1-20.
[20] Samanta B, Ray MC, Bhattacharya R, "Finite element model for active control of intelligent structures", AIAA J, vol. 34(9), 1996, pp. 1885-93.
[21] Benjeddou A., "Advances in piezoelectric finite element modeling of adaptive structural elements: a survey", Comput Struct, vol. 76, 2000, pp. 347-63.
[22] Correia VMF, Gomes MA, Suleman A, Mota Soares CM, Mota Soares CA, "Modelling and design of adaptive composite structures", Comput Meth Appl Mech Eng, vol. 185, 2000, pp. 325-46.
[23] X.Q. He, T.Y. Ng, S. Sivashanker, K.M. Liew, "Active control of FGM plates with integrated piezoelectric sensors and actuators", International Journal of Solids and Structures, vol. 38, 2001, pp. 1641-1655.
[24] Ho-Cheol Shin, Seung-Bok Choi, "Position control of a two-link flexible manipulator featuring piezoelectric actuators and sensors", Mechatronics, vol. 11, 2001, pp. 707-729.
[25] C. Chantalakhana, R. Stanway, "Active constrained layer damping of clamped-clamped plate vibrations", Journal of Sound and Vibration, vol. 241(5), 2001, pp. 755-777.
[26] Chul H. Park, A. Baz, "Comparison between finite element formulations of active constrained layer damping using classical and layer-wise laminate theory", Finite Elements in Analysis and Design, vol. 37, 2001, pp. 35-56.
[27] V. Balamurugan, S. Narayanan, "Shell finite element for smart piezoelectric composite plate/shell structures and its application to the study of active vibration control", Finite Elements in Analysis and Design, vol. 37, 2001, pp. 713-738.
[28] Bohua S, Huang D, "Vibration suppression of laminated composite beams with a piezo-electric damping layer", Compos Struct, vol. 53, 2001, pp. 437-47.
[29] Correia IFP, Mota Soares CM, Mota Soares CA, Herskovits J, "Active control of axisymmetric shells with piezoelectric layers: a mixed laminated theory with a higher order displacement field", Comput Struct, vol. 80, 2002, pp. 2265-75.
[30] A. Mukherjee, S.P. Joshi, A. Ganguli, "Active vibration control of piezoelectric stiffened plates", Composite Structures, vol. 55, 2002, pp. 435-443.
[31] A. Gornandt, U. Gabbert, "Finite element analysis of thermopiezoelectric smart structures", Acta Mechanica, vol. 154, 2002, pp. 129-140.
[32] A.S. Al-Dmour, K.S. Mohammad, "Active control of flexible structures using principal component analysis in the time domain", Journal of Sound and Vibration, vol. 253(3), 2002, pp. 545-569.
[33] Sudhakar A. Kulkarni, Kamal M. Bajoria, "Finite element modeling of smart plates/shells using higher order shear deformation theory", Composite Structures, vol. 62, 2003, pp. 41-50.
[34] J.S. Kumar, N. Ganesan, S. Swarnamani, Chandramouli Padmanabhan, "Active control of cylindrical shell with magnetostrictive layer", Journal of Sound and Vibration, vol. 262, 2003, pp. 577-589.
[35] S. Narayanan, V. Balamurugan, "Finite element modeling of piezolaminated smart structures for active vibration control with distributed sensors and actuators", Journal of Sound and Vibration, vol. 262, 2003, pp. 529-562.
[36] A. Tylikowski, "Stabilization of Parametric Vibrations of a Nonlinear Continuous System", Meccanica, vol. 38, 2003, pp. 659-668.
[37] Jose M. Simoes Moita, Isidoro F.P. Correia, Cristovao M. Mota Soares, Carlos A. Mota Soares, "Active control of adaptive laminated structures with bonded piezoelectric sensors and actuators", Computers and Structures, vol. 82, 2004, pp. 1349-1358.
[38] Yinming Shi, Hongxing Hua, Hugo Sol, "The finite element analysis and experimental study of beams with active constrained layer damping treatments", Journal of Sound and Vibration, vol. 278, 2004, pp. 343-363.
[39] Z.K. Kusculuoglu, B. Fallahi, T.J. Royston, "Finite element model of a beam with a piezoceramic patch actuator", Journal of Sound and Vibration, vol. 276, 2004, pp. 27-44.
[40] Jose M. Simoes Moita, Cristovao M. Mota Soares, Carlos A. Mota Soares, "Active control of forced vibrations in adaptive structures using a higher order model", Composite Structures, vol. 71, 2005, pp. 349-355.
[41] Nilanjan Mallick, M.C. Ray, "Exact solutions for the analysis of piezoelectric fiber reinforced composites as distributed actuators for smart composite plates", International Journal of Mechanics and Materials in Design, vol. 2, 2005, pp. 81-97.
[42] Rajapakse RKND, Chen Y, Senjuntichai T, "Electroelastic field of a piezoelectric annular finite cylinder", Int J Solids Struct, vol. 42, 2005, pp. 3487-3508.
[43] Henrique Santos, Cristovao M. Mota Soares, Carlos A. Mota Soares, J.N. Reddy, "A finite element model for the analysis of 3D axisymmetric laminated shells with piezoelectric sensors and actuators: Bending and free vibrations", Computers and Structures, 2007.
[44] Marek Pietrzakowski, "Piezoelectric control of composite plate vibration: Effect of electric potential distribution", Computers and Structures, vol. 86, 2008, pp. 948-954.
[45] Fu Yi-ming, Ruan Jian-li, "Nonlinear active control of damaged piezoelectric smart laminated plates and damage detection", Applied Mathematics and Mechanics, vol. 29(4), 2008, pp. 421-436.
[46] Satyajit Panda, M.C. Ray, "Finite element analysis for geometrically nonlinear deformations of smart functionally graded plates using vertically reinforced 1-3 piezoelectric composite", Int J Mech Mater Des, vol. 4, 2008, pp. 239-253.
[47] Y.H. Zhang, S.L. Xie, X.N. Zhang, "Vibration control of a simply supported cylindrical shell using a laminated piezoelectric actuator", Acta Mech, vol. 196, 2008, pp. 87-101.
[48] Farzad Ebrahimi, Mohammad Hassan Naei, Abbas Rastgoo, "Geometrically nonlinear vibration analysis of piezoelectrically actuated FGM plate with an initial large deformation", Journal of Mechanical Science and Technology, vol. 23, 2009, pp. 2107-2124.
[49] Jose A. Carvalho, Isabel N. Figueiredo, Rebeca Martinez, "Plate-like smart structures: Reduced models and numerical simulations", J Elast, vol. 97, 2009, pp. 47-75.
[50] V.L. Sateesh, C.S. Upadhyay, C. Venkatesan, "Nonlinear analysis of smart composite plates including hysteresis effects", AIAA Journal, vol. 48(9), 2010.