DRONACHARYA RESEARCH JOURNAL
______________________________________________________________________
Volume 1
Issue II
Jan-June 2010
______________________________________________________________________
Bi-annual Journal focusing on
Engineering, Technology, Management and Applied Sciences
ISSN: 0975-3389
DRONACHARYA RESEARCH JOURNAL
All rights reserved:
DRONACHARYA RESEARCH JOURNAL takes no responsibility for the accuracy of layouts and diagrams; these are schematic or concept plans.
Editorial Information:
For details please write to the Editor or Executive Editor, DRONACHARYA RESEARCH JOURNAL,
Dronacharya College of Engineering, Khentawas, Gurgaon-123506 (India).
Telephones:
Landline: 0124-2375502, 2375503, 2375504
Mobile: 09999908250, 09910380104
Telefax:
0124-2275328
E-mail:
[email protected]
advisor.r&[email protected]
Website:
www.dronacharya.info
The College does not take any responsibility for the authenticity and originality of the material; the contents of this journal are the views of the authors (although all submitted manuscripts are reviewed by experts).
DRONACHARYA RESEARCH JOURNAL
Advisory Board

Mr. Deepender Singh Hooda
Member of Parliament (Rohtak)
Kothi No. 9, Pant Marg, Near Gole Dak Khana, New Delhi-110 001
E-mail: [email protected]  Mob: 09818368168

Prof. P. Thrimurthy
President, Computer Society of India
20, Aamani Kanchukota Street, Vijaywada-520008 (Andhra Pradesh)
E-mail: [email protected]  Mob: 09440942418

Dr. R. K. Chauhan
Former Secretary, University Grants Commission
G-20, HUDCO Place, Andrew Ganj, New Delhi-110 049
Ph: 011-23239337, 23236288  E-mail: [email protected]  Mob: 09818469075

Sh. M. C. Mittal
Chief General Manager, VXL Technologies Ltd.
20/3, Mathura Road, Faridabad-121 006
E-mail: [email protected]  Mob: 09810101278

Air Vice-Marshal G P S Dua, VSM (Retd)
Director, Velocis System Pvt. Ltd.
7/2, 7/3, Kalu Sarai, Vashisht House, Begumpur, Near IIT Flyover
E-mail: [email protected]  Mob: 09971595500

Brig. N. Kumar
41, Hemkund Colony, Greater Kailash-1, New Delhi-110 048
E-mail: [email protected]  Mob: 09868941717

Mr. Bant Singh Singla
Chief Engineer, PWD (B&R) Haryana
Nirman Sadan, Plot No. I & II, Sector-33 A, Chandigarh-160034
E-mail: [email protected]  Mob: 09356067509

Dr. S. P. Khatkar
Director, University Institute of Engineering & Technology
Dean, Faculty of Engineering & Technology, M.D. University, Rohtak
E-mail: [email protected]  Mob: 09813805666

Mr. Rajinder Kumar Kaura
Chairman and Managing Director, Bergene, Associate Pvt Ltd.
305-306 Magnum House-I, Commercial Complex, Karampura, New Delhi-110015
E-mail: [email protected]  Mob: 09818066888

Mr. Aakash Gupta
Consultant, McKinsey, 4 Echelon, Institutional Area, Sector-32, Gurgaon-122 001
E-mail: [email protected]  Mob: 09999675155

Dr. H. L. Verma
Dean and Professor, Haryana School of Business
Former Pro Vice-Chancellor, G J University of Science and Technology, Hissar-125001
E-mail: [email protected]  Mob: 09896272466

Dr. Dharmender Kumar
Dean, Engineering & Technology
HOD, Department of Computer Science & Technology, GJU of Science & Technology, Hissar-125001
E-mail: [email protected]  Mob: 09467690800
Dronacharya Research Journal (Biannual)
Editorial Board
Editor-in-Chief
Prof (Dr.) B M K Prasad
Principal, Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected] Mob: 09910380104
Executive Editor
Prof (Dr.) C Ram Singla
Advisor (R&D) and Professor ECE, Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected] Mob: 09873453922
Editor
Prof (Dr.) Onkar Singh
Dean Academics & Professor ECE, Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected] Mob: 09999908250
Members

Dr. Dharmender Kumar
Dean, Engineering and Technology & Head, Department of Computer Science,
G J University of Science & Technology, Hissar-125001, India
E-mail: [email protected]  Mob: 09467690800

Dr. H. S. Dua
Prof. & Head, Electronics & Communication Engg.
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]  Mob: 09990044822

Dr. Ishwar Singh
Prof. & Head, Bio-medical Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]  Mob: 09312635883

Dr. D. P. Singh
Professor, Chemistry (Former Dean, Academic Affairs; Advisor, Foreign Student Cell)
MD University, Rohtak-124001, India
E-mail: [email protected]  Mob: 09813011463

Dr. Anil Vohra
Professor & HOD, Department of Electronic Science
Kurukshetra University, Kurukshetra
E-mail: [email protected]  Mob: 09355222388

Dr. S. V. Nair
Prof. & Head, Computer Science Engg.
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]  Mob: 09873319863

Dr. K. S. Yadav
Former Sr. Scientist, CEERI, Pilani
Professor & HOD, Department of Electronics & Communication Engineering,
Maharaja Agrasen Institute of Technology, Rohini, New Delhi-110086
E-mail: [email protected]  Mob: 09911827235

Dr. Jitendra Kumar
Prof. & Head, Information Technology
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]  Mob: 09810498546

Dr. S. K. Gupta
Head, Applied Sciences and Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]  Mob: 09899290280
CONTENTS
Page No.
♦ Reliability Analysis of Two Unit Hot Stand by PLC System Installed at Steel Plant with
Corrective / Preventive Maintenances with One Inspection & Repaired by Either Regular
Repairman or at Service Station
14-19
♦ Study of Network Management using Mobile Agent
20-25
♦ Power Dissipation & Delay in 6T-Symmetric SRAM
26-33
♦ Entrepreneurship Development in Punjab & Haryana: A Study with Special Reference to
Entrepreneurship Development Programme
34-42
♦ Strategic Management Practices: A Study of Selected Indian Companies
43-50
♦ Handling Imprecision in Software Engineering Measurements using Fuzzy Logic
51-57
♦ Highly Efficient Motion Vector Estimation for Video Compression using Bacterial Foraging
Optimization Algorithm
58-65
♦ Evaluation of Incubation Centres in India
66-71
♦ Internet Applications: A Soft Computing Approach
72-78
♦ Self-Assembly of a 3D-Supramolecular Architecture with Guanidium Ligands &
Decavanadate Units
79-85
♦ Implementing “SYN” based Port Scanning in Windows Environment
86-94
♦ Theory of Fluorescence & Phosphorescence Decay in Magnesium Oxide(MgO) Crystals
95-101
♦ Optimizing Financial Trading Systems by Integrating Computer Based Modelling in
“Virtual Reality Environment”
102-109
♦ Approach of Six Sigma in Lean Industry
110-122
♦ Crystallization Kinetics of New Sealant Material for SOFC
123-129
♦ Implementation of Functional Reputation based Data Aggregation for Wireless Sensor
Network
130-135
♦ New Advanced Internet Mining Techniques
136-140
♦ Swarm Intelligence: Revolutionizing Natural to Artificial Systems
141-147
♦ “Stem Cell” Future of Cancer Therapy
148-155
♦ Post Modern Sensibility in John Updike’s Works
156-160
♦ Emerging Applications : Bluetooth Technology
161-167
♦ Future Scope of Global Networking of Electric Power
168-172
♦ Technology & Terrorism
173-178
♦ Analysis of m = -1 Low Frequency Bounded Whistler Modes
179-185
CONTRIBUTORS
1. Reliability Analysis of Two Unit Hot Stand by PLC System Installed at Steel Plant with
Corrective / Preventive Maintenances with One Inspection & Repaired by Either
Regular Repairman or at Service Station
Dr. Manoj Duhan*
Professor & Chairman, Department of Electronics & Communication Engineering
Deenbandhu Chhotu Ram University of Science & Technology, Murthal-131039, India
E-mail: [email protected]
Sumit Kumar
Lecturer, Department of Electronics & Information
Galgotias College of Engineering, Greater Noida-201306, India
E-mail: [email protected]
2. Study of Network Management Using Mobile Agent
Dr. Parvinder Singh*
Reader, Department of Computer Science & Engineering
Deenbandhu Chhoturam University of Science & Technology, Murthal-131039, India
E-mail: [email protected]
Ajmer Singh
Lecturer, Department of Computer Science & Engineering
Deenbandhu Chhoturam University of Science & Technology, Murthal-131039, India
E-mail: [email protected]
Jasvinder Kaur
Research Scholar, Department of Computer Science & Engineering
Deenbandhu Chhoturam University of Science & Technology, Murthal-131039, India
E-mail: [email protected]
3. Power Dissipation & Delay in 6T-Symmetric SRAM
Dr. Rahul Rishi*
Professor & HOD, Department of Computer Science & Engineering
Technological Institute of Textile & Sciences, Bhiwani-127021, India
E-mail: [email protected]
Dr. C. Ram Singla
Advisor (R&D) & Professor, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Ashish Siwach
Lecturer, Department of Computer Science & Engineering
Technological Institute of Textile & Sciences, Bhiwani-127021, India
E-mail: [email protected]
4. Entrepreneurship Development in Punjab & Haryana: A Study with Special
Reference to Entrepreneurship Development Programme
Sarabjit Singh*
Associate Professor & Head, Department of Humanities & Management
National Institute of Technology, Jalandhar (Punjab), India
E-mail: [email protected]
Dr. H.L. Verma
Dean & Professor, Haryana School of Business
Guru Jambheshwar University of Science & Technology, Hissar-125001, India
E-mail: [email protected]
5. Strategic Management Practices: A Study of Selected Indian Companies
Mani Shreshtha*
Assistant Professor, NC College of Engineering
Panipat-132107, India
E-mail: [email protected]
Dr. H.L. Verma
Dean & Professor, Haryana School of Business
Guru Jambheshwar University of Science & Technology, Hissar-125001, India
E-mail: [email protected]
6. Handling Imprecision in Software Engineering Measurements using Fuzzy Logic
Dr. Pradeep Kumar Bhatia*
Professor, Department of Computer Science & Engineering
Guru Jambheshwar University of Science & Technology, Hissar-125001, India
E-mail: [email protected]
Harish Kumar Mittal
Lecturer, Department of Information Technology
Vaish College of Engineering, Rohtak-124001, India
E-mail: [email protected]
Kevika Singla
Senior Software Engineer
Royal Bank of Scotland, Gurgaon-122001, India
E-mail: [email protected]
7. Highly Efficient Motion Vector Estimation for Video Compression using Bacterial
Foraging Optimization Algorithm
Dr. Navin Rajpal*
Professor & Chairman, School of Information Technology
Guru Gobind Singh Indraprastha University, New Delhi-110006, India
E-mail: [email protected]
Deepak Gambhir
Ph.D Scholar, School of Information Technology
Guru Gobind Singh Indraprastha University, New Delhi-110006, India
E-mail: [email protected]
8. Evaluation of Incubation Centres in India
Dr. Onkar Singh*
Professor & Dean Academics, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Dr. H L Verma
Professor, Haryana School of Business
Guru Jambheshwar University of Science and Technology, Hissar-125001, India
E-mail: [email protected]
9. Internet Applications: A Soft Computing Approach
Dr. C. Ram Singla*
Advisor (R&D) & Professor, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Dr. B.M.K Prasad
Principal, Dronacharya College of Engineering
Gurgaon-123506, India
E-mail: [email protected]
Vinay Kumar Nassa
Associate Professor, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
10. Self-Assembly Of a 3D-Supramolecular Architecture with Guanidium Ligands and
Decavanadate Units
Dr. Katikaneani Pavani*
Assistant Professor, Department of Applied Sciences and Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Sangeeta Singla
Assistant Professor, Department of Applied Sciences and Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Reshu Sharma
II Semester, Department of Electronics and Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Pratima Sharma
II Semester, Department of Electronics and Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
11. Implementing “SYN” Based Port Scanning in Windows Environment
Vishal Bharti*
Assistant Professor, Department of Computer Science & Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Hardik Suri
VIII Semester, Department of Computer Science & Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
12. Theory of Fluorescence and Phosphorescence Decay in Magnesium Oxide
(MgO) Crystals
Dr. Smita Srivastava*
Associate Professor, Department of Applied Sciences and Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Dr. S K Gupta
Professor and HOD, Department of Applied Sciences and Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Rashmi Verma
Lecturer, Department of Physics
Department of Physics, Bangalore City College, Bangalore, India
Email: [email protected]
13. Optimizing Financial Trading Systems by Integrating Computer based
Modelling in “Virtual Reality Environment”
Kevika Singla*
Senior Software Engineer
Royal Bank of Scotland, Gurgaon-122001, India
E-mail: [email protected]
Aakash Gupta
Associate Consultant
McKinsey & Company, Gurgaon-122001, India
E-mail: [email protected]
Dr. B.M.K Prasad
Principal, Dronacharya College of Engineering
Gurgaon-123506, India
E-mail: [email protected]
14. Approach of Six Sigma in Lean Industry
Achin Srivastav*
Associate Professor, Department of Mechanical Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Dr. D.S. Sharma
Professor and Head, Department of Mechanical Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Nidhi Srivastav
Assistant Professor, Department of Computer Science & Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
15. Crystallization Kinetics of New Sealant Material for SOFC
Neha Gupta*
Assistant Professor, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Dimple Saproo
Assistant Professor, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Rita Yadav
VIII Semester, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
16. Implementation of Functional Reputation Based Data Aggregation for Wireless
Sensor Network
Manisha Saini*
Assistant Professor, Department of Computer Science & Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Kevika Singla
Senior Software Engineer
Royal Bank of Scotland, Gurgaon-122001, India
E-mail: [email protected]
Deepak Gupta
VIII Semester, Department of Information Technology
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
17. New Advanced Internet Mining Techniques
Narendra Kumar Tyagi*
Assistant Professor, Department of Computer Science & Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Abhilasha Vyas
Assistant Professor, Department of Computer Science & Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Dr. S.V. Nair
Professor & HOD, Department of Computer Science & Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
18. Swarm Intelligence: Revolutionizing Natural to Artificial Systems
Aditya Gaba*
VIII Semester, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Dr. H.S. Dua
Professor & HOD, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
19. “Stem Cell” Future of Cancer Therapy
Jyotsna*
VI Semester, Department of Bio-Medical Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Subhransh Pandey
VI Semester, Department of Bio-Medical Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Dr. D.P. Singh
Professor & HOD, Department of Bio-Medical Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
20. Post Modern Sensibility in John Updike’s Works
Dr. Neetu Raina Bhat*
Assistant Professor, Department of Applied Sciences & Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Dr. Sunil K. Mishra
Associate Professor, Department of Applied Sciences & Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Suchitra Deswal
Assistant Professor, Department of Applied Sciences & Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Puneet Mehta
II Semester, Department of Mechanical Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
21. Emerging Applications: Bluetooth Technology
Y. P. Chopra*
Professor, Department of Electronics and Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Rohit Khanna
VIII Semester, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Meenu Rathi
VIII Semester, Department of Electronics & Communication Engineering
Dronacharya College of Engineering Gurgaon-123506, Haryana, India
Email: [email protected]
22. Future Scope of Global Networking of Electric Power
Seema Das*
Assistant Professor, Department of Electronics & Communication Engineering
Dronacharya College of Engineering Gurgaon-123506, India
Email: [email protected]
Chandra Shekhar Singh
Assistant Professor, Department of Electronics & Communication Engineering
Dronacharya College of Engineering Gurgaon-123506, India
Email: [email protected]
Gaurav Chugh
VIII Semester, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
23. Technology and Terrorism
Dr. H.S. Dua*
Professor & HOD, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Meha Sharma
Assistant Professor, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Rahul Gupta
VIII Semester, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
24. Analysis of m = -1 Low Frequency Bounded Whistler Modes
Dr. B.B. Sahu*
Assistant Professor, Department of Applied Sciences and Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Dr. K. Maharana
Professor and HOD, Department of Physics
Utkal University, Bhubaneswar, Orissa, India
E-mail: [email protected]
Dr. S.K. Gupta
Professor and HOD, Department of Applied Sciences and Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
RELIABILITY ANALYSIS OF TWO UNIT HOT STAND BY PLC
SYSTEM INSTALLED AT STEEL PLANT WITH
CORRECTIVE/PREVENTIVE MAINTENANCES WITH ONE
INSPECTION & REPAIRED BY EITHER REGULAR REPAIRMAN
OR AT SERVICE STATION
Manoj Duhan*
Chairman, Department of Electronics & Communication Engineering
Deenbandhu Chhotu Ram University of Science & Technology, Murthal-131039, India
E-mail: [email protected]
Sumit Kumar
Department of Electronics & Information
Galgotias College of Engineering, Greater Noida-201306, India
E-mail: [email protected]
______________________________________________________________________________________________________________________________
ABSTRACT
The present paper examines a two-unit hot standby PLC system with corrective/preventive maintenance and two types of repair. A hypothetical industrial case is considered. If the operative unit fails, it is first inspected and, if possible, repaired by the plant engineer himself; otherwise the unit is sent to an authorized service station. The hot standby unit, by contrast, is repaired at the plant by the regular engineer only. Various measures related to system reliability are obtained, and a graphical study is also presented.
Keywords: Preventive, Hypothetical, Operative, Inspected, Reliability
__________________________________________________________________________________________________________________________
1. INTRODUCTION
1.1 Papers [1] to [7] show that the topic has largely been studied from a theoretical standpoint; practical implementation of the concepts at the industrial level is rarely found. This fact motivated the present study at the industrial level. Papers [6] and [7] concern a hypothetical industrial case in which practical incidents at a steel industry are included. In particular, we have collected data from the Jindal steel plant at Hisar. Here the regular engineer first inspects the failed unit and, if possible, repairs it; otherwise he sends the unit to an authorized service station. Since the corrective inspection time is very small, the inspection and the repair by the regular engineer are merged into the same state. We wish to extend the existing concept to the case of a hot standby in the steel industry and to compare the results with the previous studies.
1.2 Nowadays product quality is the top priority. Good quality requires fewer failures, for which the machine must be preventively maintained so as to avoid corrective maintenance.
The model is analyzed using Markov processes and the regenerative point technique. Various measures of system effectiveness, such as the mean time to system failure (MTSF), the busy periods of the regular engineer while repairing the operative unit and the hot standby unit, the busy period of the unit under repair at the service station, and the profit for the model, are evaluated. Graphs pertaining to particular cases are plotted and analyzed accordingly.
2. NOTATIONS
O - Operative state
Hs - Hot standby unit in operative state
fi - Unit under inspection
fr - Unit under inspection of the regular repairman
fri - Hot standby unit under inspection of the regular repairman
α - Down-state rate
β - Rate at which both units come out of the down state to the operating state
λ - Constant failure rate of the operative unit
γ - Rate at which the failed unit is repaired at the service station
δ - Rate at which the hot standby unit fails
β1 - Rate at which the regular repairman repairs the hot standby unit
β2 - Rate at which the unit is repaired at the service station
p - Probability that the failed unit is handled by the regular repairman for repair at the plant itself
q - Probability that the failed unit is sent to the service station for repair at the company

*Corresponding Author
[Fig. 1: State transition diagram of the model (operative, down, failed and regenerative states marked), with transitions governed by the rates α, β, β1, β2, γ, δ and λ]
3. STATE TRANSITION PROBABILITIES
q01(t) = α e^(-(α+δ+λ)t)
q02(t) = λ e^(-(α+δ+λ)t)
q03(t) = δ e^(-(α+δ+λ)t)
q10(t) = β e^(-βt)
q24(t) = δ e^(-(δ+γ)t)
q25(t) = γ e^(-(δ+γ)t)
q30(t) = β1 e^(-(β1+λ)t)
q34(t) = λ e^(-(β1+λ)t)
q42(t) = β1 e^(-(β1+δ)t)
q46(t) = γ e^(-(β1+γ)t)
q50(t) = β2 e^(-(β2+δ)t)
q56(t) = δ e^(-(β2+δ)t)
q65(t) = β1 e^(-β1 t)
…… (1-13)
The non-zero elements pij are calculated as pij = lim(s→0) qij*(s) = ∫0∞ qij(t) dt.
4. MEAN SOJOURN TIME
µ0 = 1/(α+δ+λ)
µ1 = 1/β
µ2 = 1/(δ+γ)
µ3 = 1/(β1+γ)
µ4 = 1/(δ+γ)
µ5 = 1/(β2+δ)
µ6 = 1/β2
…… (15-21)
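Since each qij(t) above is an exponential density, the pij and µi have simple closed forms: for state 0, p0j = ratej/(α+δ+λ) and µ0 = 1/(α+δ+λ). A minimal sketch of this computation, with rate values borrowed from the graphical study in Section 12 (purely illustrative):

```python
# Closed-form transition probabilities out of state 0 and its mean sojourn time.
# p_0j = integral of rate_j * exp(-(alpha+delta+lam)*t) over [0, inf)
#      = rate_j / (alpha + delta + lam);   mu_0 = 1 / (alpha + delta + lam).
alpha = 0.05       # down-state rate
lam   = 0.00033    # constant failure rate of the operative unit
delta = 0.000146   # failure rate of the hot standby unit

total = alpha + delta + lam

p01 = alpha / total   # system goes to the down state
p02 = lam   / total   # operative unit fails
p03 = delta / total   # hot standby unit fails
mu0 = 1.0 / total     # mean sojourn time in state 0

print(p01, p02, p03)  # the three probabilities out of state 0 sum to 1
print(mu0)            # expected time spent in state 0 per visit
```

The same pattern gives every other pij in the chain; the check that each row of probabilities sums to one is a useful sanity test when coding the model.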
5. MEAN TIME TO SYSTEM FAILURE (MTSF)
Let Φi(t) be the c.d.f. of the first passage time from regenerative state i to a failed state. To calculate the MTSF, the failed states are regarded as absorbing. Recursive equations for Φi(t) are obtained by probabilistic arguments; taking the Laplace-Stieltjes transform of these equations and solving them for Φ0**(s), the MTSF when the system starts from state 0 is given by

MTSF = lim(s→0) [1 - Φ0**(s)]/s = N/D …..(22)

where
N = 1 - p50p02p25 - p01p10 + p03p30 …..(23)
D = -µ0 - p02µ2 - p01µ1 - p03µ3 - p25p02µ5 …..(24)
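The limit in (22) recovers the mean because Φ0**(s) is the Laplace-Stieltjes transform of the distribution of the time T to system failure; expanding for small s supplies the intermediate step:

```latex
\Phi_0^{**}(s) \;=\; \mathbb{E}\!\left[e^{-sT}\right] \;=\; 1 - s\,\mathbb{E}[T] + O(s^2)
\quad\Longrightarrow\quad
\mathrm{MTSF} \;=\; \mathbb{E}[T] \;=\; \lim_{s\to 0}\frac{1-\Phi_0^{**}(s)}{s}.
```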
6. AVAILABILITY ANALYSIS
Let Ai(t) be the probability that the system is in the up state at instant t, given that the system entered regenerative state i at t = 0. Recursive equations for Ai(t) are obtained by probabilistic arguments; taking the Laplace transform of these equations and solving them for A0*(s), the steady-state availability of the system is

A0 = N1/D1 = lim(t→∞) A0(t) = lim(s→0) s A0*(s) …..(25)

where
N1 = µ0 - p03µ3 - p02µ2 - p65p46µ5p02p24 - p34p65p46µ5p03 - p65p56p42µ0p24p65p56p42p03µ2p34 + p65p56p42p03p24µ3p65p56p02µ2 + p25µ5p42p03p34 + p25µ5p02 - p42µ0p24 - p03p42µ2p24 - p03p42µ2p34 + p03p42p24µ3p65p56µ3p03 + p65p56µ0 …..(26)

D1 = -µ1 (p01 + p24p01p56p42 - p56p01 - p24p01p42) - µ6 (p24p01p56p42 - p56p01 - p24p50p46p02 - p03p50p46p34 - p24p42p56 + p24p30p56p42p03 + p56 - p56p30p03) - µ0 (p24p56p42p50) + µ2 (p01p56p42 + p30p56p42p03) + µ4 (p50p25p03p34 + p50p24p02) + µ3 (p50p46p03 - p42p24p03) + µ5 (p25p42p03p34 + p25p02) …..(27)
7. BUSY PERIOD ANALYSIS OF REGULAR REPAIRMAN FOR CORRECTIVE INSPECTION ONLY
Let Bi(t) be the probability that the repairman is busy in repair/replacement at time t, given that the system entered regenerative state i at t = 0. Using probabilistic arguments we obtain recursive relations for Bi(t); taking the Laplace transform of these relations and solving them for B0*(s), the fraction of time for which the repairman is busy in repair only, in steady state, is given by

B0 = lim(s→0) s B0*(s) = N2/D1 …..(28)

where
N2 = µ2 (p02 + p24p42 + p03p34p42 - p03p34)(1 - p56p65) …..(29)
and D1 is already specified.
8. BUSY PERIOD ANALYSIS FOR THE MAIN UNIT UNDER REPAIR OF REGULAR REPAIRMAN
Let BRi(t) be the probability that the repairman is busy at instant t, given that the system entered regenerative state i at t = 0. By probabilistic arguments we obtain recursive relations; taking the Laplace transform of these relations and solving them for BR0*(s), the total fraction of time for which the operative unit is under repair of the regular repairman, in steady state, is given by

BR0 = lim(s→0) s BR0*(s) = N3/D1 …..(30)

where
N3 = -µ5 (p46p02p24 - p46p34p03 - p56p24p42p03p34 - p56p25p02 - p65p46p02p24 + p65p46p34p03 + p25p42p03p34 + p02p25) …..(31)
and D1 is already specified.
9. BUSY PERIOD ANALYSIS FOR THE HOT STANDBY UNIT UNDER THE REPAIR OF REGULAR REPAIRMAN
Let BHi(t) be the probability that the repairman is busy at instant t repairing the hot standby unit, given that the system entered regenerative state i at t = 0. By probabilistic arguments we obtain recursive equations; taking the Laplace transform and solving for BH0*(s), the fraction of time for which the hot standby unit is under the repair of the regular repairman, in steady state, is given by

BH0 = lim(s→0) s BH0*(s) = N4/D1 …..(32)

where
N4 = µ3 [(1 + p34 - p65p56 - p65p56p34) - p03p42 (1 + p24p56p65) + p02 (1 + p24p65p56)] …..(33)
and D1 is already specified.
10. BUSY PERIOD ANALYSIS FOR PREVENTIVE MAINTENANCE BY REGULAR REPAIRMAN
Let BPi(t) be the probability that the repairman is busy at instant t with preventive maintenance, given that the system entered regenerative state i at t = 0. By probabilistic arguments we obtain recursive equations; taking the Laplace transform and solving for BP0*(s), the fraction of time for which the system is under preventive maintenance by the regular repairman, in steady state, is given by

BP0 = lim(s→0) s BP0*(s) = N5/D1 …..(34)

where
N5 = -µ1 (1 - p56p01 - p24p42 - p56p01p24p42) …..(35)
and D1 is already specified.
11. PROFIT ANALYSIS
In steady state the expected total profit is given by

P = C0A0 - C1B0 - C2BR0 - C3BH0 - C4BP0 …..(36)

where
C0 = revenue per unit up-time
C1 = cost per unit time for which the engineer is busy with inspection
C2 = cost per unit time for which the main unit is under repair of the regular repairman
C3 = cost per unit time for which the unit is at the service station
C4 = cost per unit time for which the system is in the down state
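The profit expression (36) is a straight linear combination of the steady-state availability and busy-period fractions. A minimal sketch, with revenue/cost figures and steady-state fractions that are purely illustrative (none of these numbers come from the paper):

```python
def profit(A0, B0, BR0, BH0, BP0,
           C0=5000.0, C1=500.0, C2=800.0, C3=1200.0, C4=300.0):
    """Steady-state profit P = C0*A0 - C1*B0 - C2*BR0 - C3*BH0 - C4*BP0.

    C0 is the revenue per unit up-time; C1..C4 are the per-unit-time
    costs defined above. All numeric figures are assumed for illustration.
    """
    return C0 * A0 - C1 * B0 - C2 * BR0 - C3 * BH0 - C4 * BP0

# A highly available system that spends little time in repair states:
print(profit(A0=0.999, B0=0.001, BR0=0.002, BH0=0.001, BP0=0.005))
```

Plotting this function against the failure rate reproduces the qualitative behaviour discussed in Section 12: the point where the expression crosses zero marks the largest failure rate the plant can tolerate profitably.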
12. GRAPHICAL INTERPRETATION
(i) Figure 2 shows the behaviour of the MTSF with respect to the failure rate, with β1 (the repair rate of the regular engineer) as a parameter. The MTSF decreases as the failure rate increases; however, it attains larger values as β1 increases.
[Fig. 2: MTSF vs failure rate for β1 = 1 and β1 = 2; other parameters: α = 0.05, β = 0.193524, γ = 4, δ = 0.000146, β2 = 1.6]
(ii) In Figure 3 the profit is plotted against the failure rate (λ) with β1 as a parameter. The profit decreases as the failure rate increases; beyond the point where the profit curve crosses the zero axis the profit becomes negative, so the failure rate can be tolerated only up to that particular point.
[Fig. 3: Profit vs failure rate for β1 = 1 and β1 = 2; other parameters: α = 0.05, β = 0.193524, γ = 4, δ = 0.000146, β2 = 1.6]
REFERENCES
[1] Weirs, G., "A note on the coincidence of some random functions", Quart. Math., 14, 103-107 (1956).
[2] Srinivasan, S.K. and Gopalan, M.N., "Probabilistic analysis of a two-unit system with a warm standby and single repair facility", Operations Research, 21, 748-754 (1973).
[3] Singh, S.K., Goel, L.R. and Gupta, R., "Cost benefit analysis of two unit warm standby system with inspection, repair and post repair", IEEE Trans. Reliab., R-35, 70-74 (1986).
[4] Tuteja, R.K. and Taneja, G., "Cost-benefit analysis of two server, two unit warm standby system with different types of failures", Microelectron. Reliab., 32, 1353-1359 (1992).
[5] Mokaddis, G.S., Labib, S.W. and Ahmed, A.M., "Analysis of a two unit warm standby system subject to degradation", Microelectron. Reliab., 37(4), 641-647 (1997).
[6] Duhan, M., Batra, S., Mittal, C.L. and Taneja, G., "Probabilistic analysis of two unit cold standby PLC system with preventive/corrective maintenance with two types of repair", JISSOR, Vol. XXV, No. 1-4, pp. 15-23, December 2004.
[7] Duhan, M., Batra, S., Mittal, C.L. and Taneja, G., "Two unit hot standby PLC system with preventive/corrective maintenance with two types of repair", International Journal of Pure and Applied Mathematika Sciences, Vol. LX, No. 1-2, pp. 21-29, September 2004.
19
ISSN No.: 0975-3389
STUDY OF NETWORK MANAGEMENT USING
MOBILE AGENT
Dr. Parvinder Singh*
Reader, Department of Computer Science & Engineering
Deenbandhu Chhoturam University of Science & Technology, Murthal-131039, India
Email: [email protected]
Ajmer Singh
Lecturer, Department of Computer Science & Engineering
Deenbandhu Chhoturam University of Science & Technology, Murthal-131039, India
Email: [email protected]
Jasvinder Kaur
Research Scholar, Department of Computer Science & Engineering
Deenbandhu Chhoturam University of Science & Technology, Murthal-131039, India
E-mail: [email protected]
______________________________________________________________________________________________________________________________
ABSTRACT
The heterogeneous nature of communication networks makes network management very tedious. Traditional network management is based on the Simple Network Management Protocol (SNMP), which follows a centralized approach. This approach involves a huge transfer of management data to the managing stations, which generates congestion in the area around the management stations and causes a lack of scalability, especially if they are connected by wireless links. Moreover, SNMP is based on the client-server model, and in certain circumstances client-server interaction generates significant traffic that overloads the management station. Compared to the centralized approach, the distributed network management approach provides an efficient way to perform management functions when networks grow significantly. Mobile agents can be used to carry out distributed information management tasks, helping the user to discover, create, maintain and integrate information that is distributed across heterogeneous systems and stored in various data formats. By transmitting executable programs between (possibly heterogeneous) machines, agent-based computing introduces an important new paradigm for the implementation of distributed applications in an open and dynamically changing environment.
Mobile agents can be used to distribute the network management task over the entire network. Mobile agents for network management monitor and control networked devices on site and consequently reduce the manager's load and the network bandwidth consumed. These agents move to the place where data are stored and select the information the user wants. They decentralize processing and control and, as a consequence, reduce the traffic around the management station and distribute the processing load.
Keywords: Mobile Agent, Network traffic, Network Monitoring, Quality of Service
______________________________________________________________________________________________________________________________
1. INTRODUCTION
A Network Management System performs the tasks of managing a network: ensuring its proper functioning, maintenance, security control, gathering and archiving of data, and fault management [1]. As a network grows in size and services, monitoring and controlling its devices, applications and services becomes more tedious, and an efficient operating environment is needed to take care of it. Network management systems like the Simple Network Management Protocol (SNMP) and the Common Management Information Protocol (CMIP) are based on a centralized approach to network management. Problems associated with the centralized approach include lack of distribution and a low degree of flexibility, re-configurability, efficiency, scalability, and fault tolerance [2]. For example, through SNMP it is not possible to change the structure of a management information base (MIB) by adding or deleting object instances [3]. Furthermore, since SNMP does not support manager-to-manager communication, a hierarchical structure of managers cannot be used to solve the polling problem. In SNMP, a management application uses the manager protocol to communicate with the managed system, which uses the agent protocol to communicate with the MIB and the manager protocol; processing of managed data is done at the management station, and network management stations interact with SNMP agents in managed nodes. The drawbacks of this centralized approach include an information bottleneck at the manager, lack of scalability, excessive processing load at the manager, and heavy usage of network bandwidth by network management actions.
Alternatively, we can adopt a distributed approach in which the centralized management strategy is replaced by interoperable management systems. Distributed management solves the problems of centralized management to some extent.
*Corresponding Author
However, it still has some drawbacks, such as limited scalability and complex coordination mechanisms between management stations. This is particularly true for performance management, which involves gathering statistics about network traffic and schemes to condense and present data. Measuring the performance of networks using centralized SNMP-based management is very difficult due to reasons like network delays and the information bottleneck at the central management station. Besides, management activities are limited, since they cannot perform intelligent processing such as judgment, forecasting, decision making and data analysis, or make positive efforts to maintain quality of service. All these problems therefore suggest distributing management intelligence by using mobile agents to overcome the limitations of centralized management and meet today's requirements.
1.1 MOBILE AGENTS
Mobile agents are programs sent across the network from the client to the server or vice versa; an environment that can execute an agent after it has been transferred over the network is called an agent host [4]. "Software agent" is a common name describing a software entity that automates some regular or difficult tasks on behalf of humans or other agents. Mobile agents can travel through the network following their itinerary, carrying logic and data to perform a set of management tasks at each of the visited nodes in order to meet their designed objectives [5]. A software agent is characterized by a life-cycle model, a computational model, a security model, and a communication model; a mobile agent is additionally identified by a basic agent model and a navigation model [6].
1.2 MOBILE AGENT APPROACH VS CLIENT/SERVER MODEL
Although an agent-based system can be implemented with any client/server technology, it differs from classical client/server systems because there is no clear distinction between a client and a server. Client-server architectures are not capable of yielding efficient use of bandwidth, a problem from which the Internet suffers greatly [7]. Consider the situation in which an agent (client) wishes to retrieve some data from a remote server: if the server does not provide the exact service that the client requires, for example if the server only provides low-level services, then the client must make a series of remote calls to obtain the end service. This may result in an overall latency increase and in intermediate information being transmitted across the network, which is wasteful and inefficient, especially where large amounts of data are involved. Moreover, if servers attempt to address this problem by introducing more specialized services, then as the number of clients grows, the number of services required per server becomes infeasible to support. Mobile agents can be seen as an extension or generalization of the well-known remote procedure call (RPC) principle. But whereas in the RPC case merely data is moved from the client to a procedure that already resides on the server (and the client usually remains idle while the remote procedure is executed), in an agent-based framework the client dispatches an agent which travels to the server and performs its task there by interacting locally with the server's resources.
Hence, mobile agents are able to emulate remote procedure calls, but more importantly, they also allow for much more flexible and dynamic structures than traditional systems based on the client/server paradigm.
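The bandwidth argument above can be made concrete with a back-of-the-envelope sketch. In the toy calculation below, all record counts and sizes are invented for illustration, and the class and method names are ours, not from any agent framework: it simply compares the bytes moved when every record is shipped to the client for filtering against shipping the agent's code to the data and returning only the matches.

```java
// Illustrative traffic comparison: data shipping vs code shipping.
// All numbers are hypothetical; they are not measurements from the paper.
public class ShippingCost {

    // Data shipping: every record crosses the network so the client can filter it.
    static long dataShippingBytes(long records, long bytesPerRecord) {
        return records * bytesPerRecord;
    }

    // Code shipping: the agent's code travels once, then only the matches return.
    static long codeShippingBytes(long agentCodeBytes, long matches, long bytesPerRecord) {
        return agentCodeBytes + matches * bytesPerRecord;
    }

    public static void main(String[] args) {
        long records = 1_000_000, bytesPerRecord = 200;
        long agentCode = 50_000, matches = 120;
        long data = dataShippingBytes(records, bytesPerRecord);            // 200,000,000 B
        long code = codeShippingBytes(agentCode, matches, bytesPerRecord); // 74,000 B
        System.out.println("data shipping: " + data + " B, code shipping: " + code + " B");
    }
}
```

With these invented numbers the agent approach moves roughly three orders of magnitude less data, which is the point the paragraph above makes qualitatively.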
2. REQUIREMENTS FOR MOBILE AGENTS
(i) Support for legacy systems: An agent platform must ensure the privacy and integrity of agents and its own infrastructure.
For that, it needs means for encryption and decryption of agent code, and it must provide authentication, authorization, and
access control mechanisms.
(ii) Portability: Portability is a prerequisite for code mobility. Today's networks are heterogeneous and tend to become more and more complex. They are composed of diverse network elements based on vendor-specific platforms; very often, diverse hardware platforms and operating systems perform basically the same or similar tasks. For instance, one machine on a network might use a particular microprocessor and run its own proprietary operating system, while another machine uses a different processor and runs its own proprietary operating system. Mobile agents must be able to adapt to any such environment.
(iii) Persistent State: A mobile program may need to keep its persistent state during mobility. The program may run on one host, pause for a certain period of time and continue on another host. The program should store its state during the pause and restore it when it continues to execute. Java supports serialization, which allows the state of an object to be stored in the file system or sent through an output stream, so the object can be reconstructed with the same state as before it was serialized.
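The serialization mechanism just mentioned can be sketched as follows. `AgentState` here is a hypothetical example class, not an API from any mobile agent platform; the sketch only shows how standard Java serialization freezes an object's state into bytes (which could travel to the next host) and rebuilds it with the same values.

```java
import java.io.*;

// Minimal sketch of preserving an agent's state between hops via Java
// serialization. "AgentState" is an invented illustrative class.
public class PersistAgent {

    static class AgentState implements Serializable {
        private static final long serialVersionUID = 1L;
        int nodesVisited;
        String lastNode;
        AgentState(int nodesVisited, String lastNode) {
            this.nodesVisited = nodesVisited;
            this.lastNode = lastNode;
        }
    }

    // Freeze the state into bytes (what would travel to the next host).
    static byte[] save(AgentState s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(s);
        }
        return bos.toByteArray();
    }

    // Rebuild the state on the receiving side.
    static AgentState restore(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (AgentState) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        AgentState before = new AgentState(3, "router-17");
        AgentState after = restore(save(before));
        System.out.println(after.nodesVisited + " " + after.lastNode); // 3 router-17
    }
}
```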
(iv) Security: Security has always been a major issue in distributed computing. Downloading and executing an untrusted, unsecured program on an end user's computer may expose private resources to malicious attacks. Taking into account the global nature of networks such as the Internet, security should be a serious concern on every connected computer. Untrusted programs should be authenticated and validated before they are allowed to execute. There are four major security services: authentication, authorization, data integrity and data privacy [8].
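The data-integrity service listed above can be illustrated with a simplified check: the receiver recomputes a SHA-256 digest of the incoming code and compares it with a digest announced through a trusted channel. Real agent platforms use full digital signatures for authentication; this sketch (with invented class and method names) covers only the integrity part.

```java
import java.security.MessageDigest;
import java.util.Arrays;

// Simplified integrity check for incoming mobile code. A real platform
// would verify a digital signature; here we only compare SHA-256 digests.
public class CodeIntegrity {

    static byte[] digest(byte[] code) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(code);
    }

    static boolean verify(byte[] receivedCode, byte[] expectedDigest) throws Exception {
        // MessageDigest.isEqual performs a time-constant comparison.
        return MessageDigest.isEqual(digest(receivedCode), expectedDigest);
    }

    public static void main(String[] args) throws Exception {
        byte[] code = "hypothetical agent bytecode".getBytes("UTF-8");
        byte[] expected = digest(code);
        System.out.println(verify(code, expected));     // true
        byte[] tampered = Arrays.copyOf(code, code.length);
        tampered[0] ^= 1;                               // flip one bit
        System.out.println(verify(tampered, expected)); // false
    }
}
```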
(v) Access to Resources of Visited Systems: The ability of a mobile agent to access the managed resources of visited systems is clearly necessary for network management. Accessing data on a heterogeneous system may be complex due to non-standard naming and procedures for managing them. Therefore, there must be a standard way, understandable to the mobile code, of performing its tasks without prior knowledge of the underlying system. The framework provides a template that we call a Virtual Managed Component (VMC); the mobile agent accesses managed resources indirectly through the VMC.
(vi) Communication support: Agents should be able to communicate with other locally residing agents, but also with
remote agents and with their owner or creator. For that, the agent environment should support standard communication
mechanisms and protocols [9].
2.1 INFRASTRUCTURE FOR MOBILE AGENTS
In an agent-based computing scenario, hosts must provide a platform which acts as a local environment, or agent platform. Such a platform is responsible for launching, receiving, and providing residence to agents, and it has to provide the necessary services, resources, and runtime support. It may also act as a meeting point for agents or even provide a trusted computing base for secure execution. This environment is portrayed in the figure below.
Mobile agents may need to collaborate with each other to perform the assigned tasks, so inter-mobile-code communication is a necessity. The infrastructure can be built on Java [11]: Java addresses several critical issues, such as security, portability, persistent state through serialization, and networking, which is why Java can be selected for the development of the infrastructure.
Here we present some details of the infrastructure and the terms used in the figure above. Every network component runs a Mobile Code Daemon (MCD) within a Java Virtual Machine (JVM) environment. The MCD provides a number of services that allow mobile code to execute and perform its tasks: a runtime environment, a migration facility to transport mobile code to the next destination, a communication facility for mobile code residing not only on the same virtual machine but also on a different virtual machine, and an interface to access the managed resources of the network component. The MCD listens on both UDP and TCP connections, ready to accept mobile code. The mobile code is instantiated as a thread within the same JVM. The MCD is also responsible for authentication and data integrity checks on a visiting netlet, to make sure that it instantiates a trusted agent. Moreover, it keeps track of, and maintains handlers for, the instantiated mobile code. Mobile agents communicate not only with each other but also with the environment; they need to access the managed resources to perform network management tasks.
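The daemon role described above can be sketched in a highly simplified form: accept a piece of mobile code, run it as a thread inside the same JVM, and keep a handle to it. Network transport over UDP/TCP, class loading and authentication are omitted, and all names here (`MiniDaemon`, `accept`, and so on) are illustrative, not the actual MCD API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified sketch of the MCD role: instantiate arriving mobile code as
// a thread and maintain a handle to it. Transport and security omitted.
public class MiniDaemon {
    private final Map<String, Thread> handles = new ConcurrentHashMap<>();

    // "Instantiate" the arriving agent as a thread and track it by id.
    public void accept(String agentId, Runnable agentBody) {
        Thread t = new Thread(agentBody, agentId);
        handles.put(agentId, t);
        t.start();
    }

    public boolean isTracked(String agentId) {
        return handles.containsKey(agentId);
    }

    public void join(String agentId) throws InterruptedException {
        Thread t = handles.get(agentId);
        if (t != null) t.join();
    }

    public static void main(String[] args) throws Exception {
        MiniDaemon mcd = new MiniDaemon();
        mcd.accept("netlet-1", () -> System.out.println("collecting local stats"));
        mcd.join("netlet-1");
        System.out.println(mcd.isTracked("netlet-1")); // true
    }
}
```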
Even though certain international organizations have defined standards to be used by manufacturers in implementing their products, there are still many offerings that do not adhere to the standards, and some vendors implement their products with proprietary attributes. This leads to problems when accessing the managed resources of diverse implementations of network components. To address this issue, mobile code communicates with the managed resources of a network component indirectly, through the so-called Virtual Managed Component (VMC). The VMC provides uniform access to the managed resource no matter what the underlying platform and implementation are. Therefore, the same mobile code can be used on a variety of vendor-specific network components.
2.2 ADVANTAGES OF USING MOBILE AGENTS.
Now we want to describe some major advantages of mobile agents and try to explain why they will meet the demands of
future distributed systems.
(i) Delegation of tasks: Because mobile agents are simply a more specific type of software agent, a user can employ a mobile agent as a representative to which the user may delegate tasks [12]. Instead of using computer systems as interactive tools that are able to work only under direct control of a user, autonomous software agents aim at taking care of entire tasks and working without permanent contact and control. As a result, the user can devote time and attention to other tasks. Thus, mobile software agents are a good means to cope with the steady information overload we experience.
(ii) Asynchronous processing: Once mobile agents have been initialized and set up for a specific task, they physically leave their owner's computer system and from then on roam freely through the Internet; a network connection must be established only for this first migration. This feature makes mobile agents suitable for nomadic computing, where mobile users start their agents from mobile devices that offer only limited bandwidth and volatile network links. Because the agent is less dependent on the network, it will be more stable than client-server based applications.
(iii) Code shipping may be better than data shipping: This is probably the most cited advantage of mobile agents, and it stands in close relationship to adaptable service interfaces. When a server offers only fixed, low-level services, a single call can result in a huge amount of data being sent back to the client because of the lack of precision in the request. Instead of transferring data to the client, where it will be processed, filtered, and probably cause a new request (data shipping), the processing code can be transferred to the location of the data (code shipping) by means of mobile agents. In the latter case, only the relevant data (i.e., the results after processing and filtering) is sent back to the client, which reduces network traffic and saves time if the code for filtering is smaller than the data that must be processed.
3. CODE MOBILITY AND ITS IMPLICATIONS
3.1 Mobile agents can physically travel across a network and perform tasks on machines that provide agent hosting capability. This allows processes to migrate from computer to computer, to split into multiple instances that execute on different machines, and to return to their point of origin. Unlike remote procedure calls, where a process invokes procedures of a remote host, process migration allows executable code to travel and interact with databases, file systems, information services and other agents.
3.2 Mobile agents have been the focus of much speculation and hype in recent years. Their appeal is quite alluring: mobile agents roaming the Internet could search for information, find us great deals on goods and services, and interact with other agents that also roam networks (and meet in a gathering place) or remain bound to a particular machine. Significant research and development on mobile agents has been conducted in recent years, and there are many mobile agent architectures available today. Some architectures use custom scripting languages, while others use Java, which helps to solve the problem of portability.
3.3 FEW EXISTING SYSTEMS BASED ON MOBILE TECHNOLOGY
Here are some examples of mobile agent based systems:
• Aglets from IBM
• AgentBuilder from IntelliOne Technologies
• GrassHopper from IKV++
• D'Agents from Dartmouth University
• Mobile Code Toolkit from Carleton University
• Hive from MIT
• JATLite from Stanford University
• JADE from CSELT, Italy
• FarGo from Israel Institute of Technology
• Ajanta from Univ. of Minnesota
• MAgNET from UCSB
4. USABILITY OF AGENTS FOR DISTRIBUTED NETWORK MANAGEMENT.
4.1 Network management systems based on SNMP and CMIP are largely static and require many procedures to describe new management plans. SNMP manages and monitors only network elements, and SNMP agents provide a limited and fixed set of functions [13]. Existing network management systems basically use the client/server method for their functionality. These systems regularly suffer from poor scalability, because the growing amount of communication generates too much traffic in the network and increases the number of failures in nodes and channels. For managing a network system, the network administrator sometimes needs to locally observe and control components on multiple nodes in the system. The traditional network management architecture is inefficient, expensive and difficult to change; hence we need to increase the level of automation to improve the effectiveness of management operations and reduce costs [14].
4.2 There is therefore a need to employ mobile agents as autonomous entities in network management and to transfer administration tasks to them. In this situation the network management tasks and computational load are distributed instead of being centralized on the manager host. One of the important goals of network management is to have balanced and reliable loading on the network, so that connections in the network can be established quickly, without noise or several retries. Network management also aims to organize networks so that they work efficiently, adjust successfully to changes, and react to problems such as shifting traffic patterns [15]. An important function in the area of network management is performance measurement, which involves gathering statistical information about network traffic and methods to condense and present data. Measuring the performance of networks using centralized SNMP-based management is very difficult due to reasons like network delays and the information bottleneck at the central management station [16].
4.3 It is now widely recognized that the use of decentralization in this kind of application potentially solves most of the problems that exist in centralized client/server solutions. Applications can thus be more scalable and more robust, can be easily upgraded or customized, and reduce the traffic in the network.
In a distributed network, the network operator monitors the trend of network flow to assess network performance and identify unusual conditions. The data for this analysis can be obtained from the management information base, which preserves various data objects for network management. The information in the management information base is organized in clusters and maintained in a tree-like structure; thus the management information base supports the complex network tasks in the distributed network management environment [17]. The management of heterogeneous networks requires the capability to combine different types of data and to account for events occurring on different time scales.
5. CONCLUSION
In this paper we discussed the use of mobile agents for managing networks. Mobile agents are a promising technology, providing a new viewpoint for implementing applications for distributed systems in widely distributed, heterogeneous open networks. We have seen that many applications exist which, by employing the mobile agent paradigm, show many advantages in coping with today's infrastructure of heterogeneous computers connected by communication systems of varying speed and quality. Network management systems based on SNMP and CMIP use more bandwidth and create network traffic; they cannot satisfy the various requirements of heterogeneous networks or maintain an essential level of quality of service and reliability for the end user and multimedia applications. Therefore, mobile agents offer a solution for the flexible management of today's telecommunication networks. Agents are autonomous entities, and their usage in network management reduces the number of necessary human interactions. Furthermore, mobile agent based network monitoring and management can overcome the shortcomings of SNMP and CMIP by decentralizing network monitoring and management.
REFERENCES
[1] A. Sahai, C. Morin, "Towards Distributed and Dynamic Network Management", in Proceedings of the IFIP/IEEE Network Operations and Management Symposium (NOMS), New Orleans, U.S.A., Feb. (1998).
[2] M. Ghanbari, D. Gavalas, D. Greenwood, M. O'Mahony, "Advanced network monitoring applications based on mobile/intelligent agent technology", Computer Communications Journal, 23(8), pp. 720-730, April (2000).
[3] W. Stallings, "SNMP and SNMPv2: The Infrastructure for Network Management", IEEE Communications Magazine, vol. 36, no. 3, pp. 37-43, March (1998).
[4] J. M. Steinberg, J. Pasquale, "Limited Mobile Agents: A Practical Approach", Technical Report No. CS2000-0641, January (2000).
[5] I. Satoh, "Building Reusable Mobile Agents for Network Management", IEEE, (2003).
[6] A. Bieszczad, B. Pagurek, T. White, "Mobile Agents for Network Management", IEEE Communications Surveys, (1998).
[7] N. Amara-Hachmi and A. El Fallah-Seghrouchni, "Towards a generic architecture for self-adaptive mobile agents", Proceedings of the 5th European Workshop on Adaptive Agents and Multi-Agent Systems (AAMAS'05), Paris, (2005).
[8] C. Tsatsoulis, L. K. Soh, "Intelligent Agents in Telecommunication Networks", Computational Intelligence in Telecommunications Networks, W. Pedrycz and A. V. Vasilakos (Eds.), CRC Press, (2000).
[9] Stefan Fünfrocken, Friedemann Mattern, "Mobile Agents as an Architectural Concept for Internet-based Distributed Applications", Proceedings of the KiVS, (1999).
[10] Gatot Susilo, Andrzej Bieszczad, Bernard Pagurek, "Infrastructure for Advanced Network Management based on Mobile Code", IEEE Network Operations and Management, (1998).
[11] J. Gosling, "The Java Language Environment", White Paper, Sun Microsystems, Mountain View, Calif., (1996); http://www.javasoft.com.
[12] H. Sanneck, M. Berger, B. Bauer, "Application of Agent Technology to Next Generation Wireless/Mobile Networks", WWRF WG3: Going Wireless-New Technologies, (2001).
[13] A. Tripathi, T. Ahmed, S. Pathak, M. Carney, P. Dokas, "Paradigms for Mobile Agent Based Active Monitoring of Network Systems", IEEE, (2002).
[14] D. Gurer, V. Lakshminarayan, A. Sastry, "An Intelligent Agent Based Architecture for the Management of Heterogeneous Networks", in Proceedings of the IFIP/IEEE, (1998).
[15] T. C. Du, E. Y. Li, A. P. Chang, "Mobile Agents in Distributed Network Management", Communications of the ACM, vol. 46, no. 7, pp. 127-132, (2003).
[16] P. Simoes, L. M. Silva, F. B. Fernandes, "Integrating SNMP into a Mobile Agent Infrastructure", in Proceedings of the Tenth IFIP/IEEE International Workshop on Distributed Systems: Operations and Management (DSOM'99), Zurich, Switzerland, Oct. (1999).
[17] Shamila Makki, Subbarao V. Wunnava, "Application of Mobile Agents in Managing the Traffic in the Network and Improving the Reliability and Quality of Service", IAENG International Journal of Computer Science.
ISSN No.: 0975-3389
POWER DISSIPATION & DELAY IN 6T- SYMMETRIC SRAM
Dr. Rahul Rishi*
Professor & HOD, Department of Computer Science & Engineering
Technological Institute of Textile & Sciences, Bhiwani-127021, India
E-mail: [email protected]
Dr C.Ram Singla
Advisor (R&D) & Professor, Department of Electronics and Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Ashish Siwach
Lecturer, Department of Computer Science & Engineering
Technological Institute of Textile & Sciences, Bhiwani-127021, India
E-mail: [email protected]
____________________________________________________________________________________
ABSTRACT
In recent years, rapid growth has been noticed in mobile and hand-held communication devices, battery operated devices and the demand for fast data transfer; such systems should have larger memory capacity, low power consumption and minimum operational delays. Since memory constitutes a large part of these systems, nearly fifty percent, reducing the power and delay in memories has become a burning issue. Almost half of the total CPU (central processing unit) dissipation is due to memory operations. It is necessary to identify the sources of power consumption and delay in memory blocks so that they can be reduced, allowing for better overall performance of the system. Today's microprocessors are very fast and require fast caches with low power dissipation and low delay. This paper presents the simulation results of 6T SRAM (six-transistor static random access memory) cells, which are the main choice for today's cache applications. The stability of this cell is the best among all existing memory cell configurations.
Keywords: Static Random Access Memory, Power Dissipation, Simulation, Delay.
____________________________________________________________________________________
1. INTRODUCTION
Fast and low power SRAMs have become a critical component of many VLSI (very large scale integration) chips. This is especially true for microprocessors, where the on-chip cache sizes are growing with each generation to bridge the increasing divergence between the speeds of the processor and the main memory [1-2]. Simultaneously, power dissipation has become an important consideration due to the increased integration and operating speeds, as well as the explosive growth of battery operated appliances [3]. While process [4-5] and supply [6-11] scaling remain the biggest drivers of fast low power designs, this paper investigates some circuit techniques which can be used in conjunction with scaling to achieve fast, low power operation.
1.1 Architecture of SRAM
An SRAM has the structure shown in Fig 1. It consists of a matrix of 2^m rows by 2^n columns of memory cells. Each memory cell in an SRAM contains a pair of cross-coupled inverters which form a bi-stable element. These inverters are connected to a pair of bitlines through nMOS (n-type metal oxide silicon) pass transistors which provide differential read and write access. An SRAM also contains column and row circuitry to access these cells.
*Corresponding Author
Fig 1: Elementary SRAM structure
The [m + n] bits of address input, which identify the cell to be accessed, are split into m row address bits and n column address bits. The row decoder activates one of the 2^m word lines, which connects the memory cells of that row to their respective bitlines. The column decoder sets a pair of column switches which connect one of the 2^n bitline columns to the peripheral circuits. In a read operation, the bitlines start precharged to some reference voltage, usually close to the positive supply. When the word line goes high, the access transistor connected to the cell node storing a '0' starts discharging the bitline, while the complementary bitline remains in its precharged state, thus resulting in a differential voltage being developed across the bitline pair. SRAM cells are optimized to reduce the cell size, which results in very small cell currents and a slow bitline discharge rate. To speed up the RAM access, sense amplifiers are used to amplify the small bitline signal and drive it to the external world. During a write operation, the data is transferred to the desired columns by driving it onto the bitline pairs, grounding either the bitline or its complement. If the cell data differs from the write data, the '1' node is discharged when the access transistor connects it to the discharged bitline, causing the cell to be written with the bitline value. The SRAM can be significantly optimized for minimum delay and power at the cost of some area. The optimization starts with the design and layout of the cell, which is undertaken in consultation with the process technologists [3].
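The row/column address split described above is plain bit arithmetic and can be sketched as follows. The widths m = 7 and n = 5 and the sample address are arbitrary illustrative choices, not parameters from this paper.

```java
// Bit-level sketch of the [m + n]-bit address split: the upper m bits
// select one of 2^m word lines, the lower n bits select one of 2^n
// bitline columns. Sizes here (m = 7, n = 5) are invented for illustration.
public class AddressSplit {

    static int rowAddress(int address, int n) {
        return address >>> n;            // upper m bits
    }

    static int columnAddress(int address, int n) {
        return address & ((1 << n) - 1); // lower n bits
    }

    public static void main(String[] args) {
        int n = 5;                        // m = 7, n = 5: 128 rows x 32 columns
        int address = 0b1010011_10110;    // 12-bit address, underscore marks the split
        System.out.println(rowAddress(address, n));    // 83
        System.out.println(columnAddress(address, n)); // 22
    }
}
```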
2. SOURCES OF POWER DISSIPATION
In CMOS technology, power dissipation can be categorized in two ways:
i. Dynamic power dissipation
   a) Short circuit power dissipation
   b) Power dissipation due to switching
ii. Static power dissipation
2.1 Dynamic Power Dissipation
(a) Short Circuit Power: The finite slopes of the input signals cause direct-path currents to flow through the gate for a short period during switching. During this short duration a direct path is established between VDD and GND, and the circuit consumes a large amount of power [9].
Fig: 2 Short Circuit Power Dissipation
From the figure, the energy consumed per switching event is calculated as:

E_dp = (Vdd · I_peak · t_sc)/2 + (Vdd · I_peak · t_sc)/2 = Vdd · I_peak · t_sc

where
Vdd = supply voltage,
t_sc = duration of the short-circuit (direct-path) current,
I_peak = peak current.

The average short-circuit power at switching frequency f is then
P(short ckt) = Vdd · I_peak · t_sc · f
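Plugging illustrative numbers into the short-circuit energy formula makes its magnitude concrete. All values below (supply, peak current, overlap time, frequency) are invented for illustration, not taken from the paper's simulations.

```java
// Numeric illustration of E_dp = Vdd * Ipeak * tsc and the corresponding
// average power at switching frequency f. Example values are hypothetical.
public class ShortCircuitEnergy {

    static double energyPerSwitch(double vdd, double iPeak, double tSc) {
        return vdd * iPeak * tSc;
    }

    // Average short-circuit power if the gate switches at frequency f.
    static double power(double vdd, double iPeak, double tSc, double f) {
        return energyPerSwitch(vdd, iPeak, tSc) * f;
    }

    public static void main(String[] args) {
        double vdd = 1.8;      // V
        double iPeak = 100e-6; // 100 uA peak direct-path current
        double tSc = 50e-12;   // 50 ps of rise/fall overlap per transition
        double f = 1e9;        // 1 GHz switching
        // 1.8 * 1e-4 * 5e-11 = 9e-15 J per switch; times 1e9 = 9e-6 W
        System.out.printf("E_dp = %.3e J, P_sc = %.3e W%n",
                energyPerSwitch(vdd, iPeak, tSc), power(vdd, iPeak, tSc, f));
    }
}
```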
Matching the rise and fall times of the gate results in reduced short-circuit power. In practice, however, the times are not matched, since optimizing for propagation delay can result in unmatched times. Short-circuit power is therefore also a major source of power consumption in digital circuits.
(b) Power Consumption during Switching: During and after the input transition, charge is moved from Vdd to the output of the inverter, thereby pulling Vout to Vdd. The lumped capacitance CL results from parasitic wire capacitances and from the gate capacitances of the logic gates driven by the inverter. In this transition, CLoad gets charged from the VDD supply.
Fig: 3 Dynamic Power Dissipation
E_transition = C_L · Vdd²

Power = E_transition · f = C_L · Vdd² · f

On the opposite transition of the input, the PMOS transistor switches off and the NMOS transistor switches on; the charge stored on CL is now dissipated through the ground terminal. The power consumed by the output capacitance during switching is known as dynamic power dissipation, and it is the largest source of energy dissipation in CMOS circuits. The energy drawn from the supply per charging transition is

E = C_Load · Vdd²
Where
CLoad is the load capacitance,
Vdd is the supply voltage.
If f is the clock frequency and the average number of low-to-high or high-to-low transitions (the switching activity) of the node is denoted by α, then the power consumption due to capacitive switching is given by:
Pswitching = α CLoad (Vdd )² f
Where
α = The activity factor,
CLoad = The effective capacitance of the output load,
Vdd = The voltage swing of the output node,
f = Switching frequency
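The switching-power formula can be sketched numerically as below. The activity factor, load capacitance and frequency are assumed illustrative values, not parameters extracted from the paper's design.

```python
# Illustrative sketch of P_switching = alpha * C_load * Vdd^2 * f.
# All numeric values are assumptions, not figures from the paper.

def switching_power(alpha, c_load, vdd, f):
    """Dynamic power from charging/discharging the output capacitance."""
    return alpha * c_load * vdd**2 * f

alpha = 0.5      # activity factor (assumed)
c_load = 10e-15  # effective load capacitance [F] (assumed)
vdd = 1.8        # supply voltage [V]
f = 100e6        # clock frequency [Hz]

p = switching_power(alpha, c_load, vdd, f)  # -> 1.62e-6 W
print(p)
```

Because Vdd enters quadratically, halving the supply voltage cuts switching power by a factor of four, which is why supply scaling is the most effective lever against dynamic power.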
2.2 Static Power Dissipation
Traditionally, the static component of power consumption has been negligible in static CMOS. But now a days a number of
leakage mechanisms begin to gain significance. Most of these mechanisms are due to the small device geometries.
Fig: 4 Leakage Mechanisms in (NMOS) Transistors
Fig. 4 illustrates six different leakage mechanisms in the MOS transistor. The current components responsible for static power consumption are:
(a) Irev is the reverse-bias p-n junction leakage, caused by minority carriers drifting and diffusing across the edge of the depletion region and by electron-hole pair generation in the depletion region of the reverse-biased junction.
(b) Isub is the subthreshold leakage current, caused by the low Vth needed to maintain drive strength in processes with low Vdd. As a result, Ids can be considerable even when Vg < Vth.
(c) Igate is the gate oxide tunneling caused by thin gate oxides. Unlike the other effects, it occurs in both ON and OFF state of
the transistor.
(d) Ihot is the gate current due to injection of hot carriers from substrate to gate oxide. It is caused by electrons or holes
gaining enough energy to enter the gate oxide layer. This current can occur in OFF state, but more typically it occurs during
transitions of the gate voltage.
(e) IGIDL is gate induced drain leakage. The high field effects below the gate cause holes to accumulate at the silicon surface.
This narrows the depletion edge at the drain and causes further increase in the electric field across the junction. Tunneling
allows minority carriers to cross the gate and exit through the body terminal.
(f) IPT is the channel punch-through leakage, caused by the small distance between source and drain. Due to the small geometries and the doping profiles, the depletion regions of source and drain can merge below the surface, causing carriers to cross.
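Of these components, Isub usually dominates standby power in deep-submicron processes. As a rough illustration of why a lowered Vth raises leakage, the classic exponential subthreshold model can be sketched; this is the standard textbook model, not one given in the paper, and the parameters I0 and n are assumed illustrative values rather than process data.

```python
import math

# Hedged sketch of the standard exponential subthreshold model:
#   I_sub ~ I0 * exp((Vgs - Vth) / (n * V_T)) * (1 - exp(-Vds / V_T))
# I0 (reference current) and n (slope factor) are assumed values.

V_T = 0.026  # thermal voltage kT/q at room temperature [V]

def subthreshold_current(vgs, vth, vds, i0=1e-7, n=1.5):
    """OFF-state drain leakage for a transistor with threshold vth."""
    return i0 * math.exp((vgs - vth) / (n * V_T)) * (1.0 - math.exp(-vds / V_T))

# With the gate off (Vgs = 0), lowering Vth from 0.5 V to 0.3 V
# multiplies leakage by exp(0.2 / (n * V_T)), roughly 170x here.
i_high_vth = subthreshold_current(0.0, 0.5, 1.8)
i_low_vth = subthreshold_current(0.0, 0.3, 1.8)
print(i_low_vth / i_high_vth)
```

The exponential dependence on (Vgs - Vth) is what makes threshold-voltage scaling so costly in standby power.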
3. SYMMETRIC 6T SRAM CELL
3.1 Cell Structure
The 6T SRAM is built up of two cross coupled inverters and two access transistors, connecting the cell to the bit lines. The
inverters make up the storage element and the access transistors are used to communicate with the outside world. The cell is
symmetrical and has a relatively large area. No special process steps are needed and it is fully compatible with standard
CMOS processes [19].
3.2 Read Operation
The 6T SRAM cell has a differential read operation: both the stored value and its inverse are used in evaluation to determine the stored value. Before the onset of a read operation, the word line is held low (grounded) and the two bit lines connected to the cell through access transistors M5 and M6 are precharged high (to VCC). Since the gates of M5 and M6 are held low, these access transistors are off and the cross-coupled latch is isolated from the bit lines. If a '0' is stored on the left storage node, the gates of the latch to the right are low, which means that transistor M3 is initially turned off. In the same way, M2 will also be off initially, since its gate is held high. The capacitors represent the capacitances on the bit lines, which are several orders of magnitude larger than the capacitances of the cell. The cell capacitance has here been represented only through the value held by each inverter (Q=0 and Q=1 respectively).

The next phase of the read operation is to pull the word line high and at the same time release the bit lines. This turns on the access transistors and connects the storage nodes to the bit lines. The right storage node (the inverse node) has the same potential as its precharged bit line, so no charge transfer takes place on that side. The left storage node, on the other hand, holds a '0' (low) while BL is precharged to VCC. Since transistor M5 has now been turned on, a current flows from Cbit to the storage node, discharging BL while charging the left storage node. As mentioned earlier, the capacitance of BL (Cbit) is far greater than that of the storage node. Charge sharing alone would therefore lead to a rapid charging of the storage node, potentially destroying the stored value, while the bit line would remain virtually unchanged. However, M1 is also turned on, which provides a discharge current from the storage node down to ground. By making M1 stronger (wider) than M5, the current flowing out of the storage node is large enough to prevent the node from being charged high.

After the bit line has discharged for some time, a specialized detection circuit called a sense amplifier is turned on. It detects the difference between the bit-line potentials and gives the resulting output. Initially the sense amplifier is turned off (sense enable, SE, is low). At the same time as the bit lines of the 6T cell are being precharged high, so are the cross-coupled inverters of the sense amplifier. The bit lines are also equalized (EQ is low) so that any mismatch between the precharge levels of the two bit lines is evened out. When the word line of the memory cell is asserted, EQ and PC are raised and the precharge of the sense amplifier is discontinued. The column selector CS is then lowered to connect the bit lines to the latch of the sense amplifier. After some time, when a voltage difference of about 50-100 mV (for a 0.18 µm process) has developed between the two inverters of the sense amplifier, the sensing is turned on. This is done by raising SE, thereby connecting the sources of the NMOS transistors in the latch to ground. Since the internal nodes were precharged high, the NMOS transistors are on and current is drawn from the nodes. The side with the higher initial voltage makes the opposite NMOS (which is connected to its gate) draw current faster. This makes the lower node fall faster and in turn shuts off the NMOS drawing current from the higher node. An increasing voltage difference develops and eventually the nodes flip to a stable state. The Out node is then connected to a buffer to restore the signal edge and to facilitate driving larger loads. It is also connected to an inverter of the same size as the first inverter in the buffer, to make sure that the two sense-amplifier nodes have the same load and therefore remain fully symmetric. The performance depends mainly on the constellations M1-M5 and M3-M6 and their ability to draw current from the bit line.
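The time the read path needs to develop a sensable differential follows directly from the bit-line capacitance and the cell's read current, since ΔV = I·t/Cbit. The sketch below rearranges this for the sensing delay; the capacitance and current values are assumed illustrative numbers, not sizes taken from the paper's design.

```python
# Hedged sketch: time for the M1-M5 read path to develop the
# 50-100 mV differential the sense amplifier needs.
#   delta_V = I_read * t / C_bit   ->   t = C_bit * delta_V / I_read
# C_bit and I_read below are assumed illustrative values.

def sense_develop_time(c_bit, delta_v, i_read):
    """Time to discharge the bit line by delta_v with read current i_read."""
    return c_bit * delta_v / i_read

c_bit = 500e-15   # bit-line capacitance [F] (assumed)
i_read = 50e-6    # cell read current [A] (assumed)

t_sense = sense_develop_time(c_bit, 100e-3, i_read)  # -> 1.0e-9 s
print(t_sense)
```

This is why read performance hinges on the drive strength of the M1-M5 (or M3-M6) stack: a larger read current shortens the time to reach the sense-amplifier threshold proportionally.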
3.3 Write Operation
For a standard 6T SRAM cell [20], writing is done by lowering one of the bit lines to ground while asserting the word line. To write a '0', BL is lowered, while writing a '1' requires the inverse bit line to be lowered. As in the read operation, the cell has a '0' stored, and for simplicity the schematic has been reduced in the same way as before. The main difference now is that the bit lines are no longer released. Instead they are held at VCC and ground respectively. If we look at the left side of the memory cell (M1-M5), it is virtually identical to the read operation. Since both bit lines are now held at their respective values, the bit-line capacitances have been omitted. During the discussion of the read operation, it was concluded that transistor M1 had to be stronger than transistor M5 to prevent accidental writing. In the write operation this feature actually prevents a wanted write operation: even when transistor M5 is turned on and current is flowing from BL to the storage node, the state of the node will not change. As soon as the node is raised, transistor M1 sinks current to ground, and the node is prevented from coming even close to the switching point. So instead of writing a '1' to the node, we are forced to write a '0' to the inverse node. Looking at the right side of the cell, we have the constellation M4-M6. In this case the inverse bit line is held at ground. When the word line is raised, M6 is turned on and current is drawn from the inverse storage node to the bit line. At the same time, however, M4 is turned on and, as soon as the potential at the inverse storage node starts to decrease, current flows from VCC to the node. In this case M6 has to be stronger than M4 for the inverse node to change its state. The transistor M4 is a PMOS transistor and inherently weaker than the NMOS transistor M6. Therefore, making both of them minimum size, according to the process design rules, will ensure that M6 is stronger and that writing is possible. When the inverse node has been pulled low enough, transistor M1 will no longer be open and the normal storage node will also flip, leaving the cell in a new stable state.
4. SIMULATION RESULTS
For simulating all the SRAM cells, 0.18 µm CMOS technology parameters were taken for the NMOS and PMOS transistors.
4.1 Stand By
During standby, the word line is off and both bit lines are precharged to VCC. In the first waveform, a leakage current causes a maximum voltage drop of 0.95 V across the word line and a minimum drop of 0.2 V across the cell. The two output nodes showing the stored values are shown next in Fig. 5. The voltage stored at node vout2 is logic '1', i.e. 1.8 V, and the voltage stored at node vout1 approaches logic '0', i.e. 0 V. The middle waveform shows the voltage at node vout2 and the last waveform shows the voltage at node vout1 [21].
Fig. 5 Standby Operation
4.2 Write Operation
Fig. 6 Write Operation
The cell is first written into and the stored value is then read back. During the write operation, the word line is set high; bit line BIT is not precharged while bit line BIT_BAR is precharged to VCC. The first waveform shows the negative-edge-triggered pulse of 1.8 V across the word line (Fig. 6). The second waveform shows the writing of a logic '0' at node vout2 and the last waveform shows the writing of a logic '1' at node vout1. The last two waveforms show spikes. In the writing of logic '0', the spikes are neglected. In the last waveform showing logic '1', the spike magnitude at the falling and rising edges of the word line is very low, i.e. 0.002 V, and hence also negligible. The main benefit of the negative-edge-triggered pulse is that it significantly reduces delay and power dissipation in the cell [22].
4.3 Read Operation
Fig. 7 Read Operation
During the read operation, the word line is set high and both bit lines are precharged to VCC. The values written into the cell are confirmed in the read operation. The first waveform shows the pulse applied to the word line. The second waveform shows that the value stored at node vout2 is logic '0' and the last waveform shows that the value stored at node vout1 is logic '1'. Again we can see the spike in the last waveform; its magnitude is about 2 mV, which is very low. A significant amount of power is saved and the speed of the cell is also increased.
4.4 Power dissipation and Delay
(A) Power Dissipation

6T SRAM Cell - Power Dissipation
Operation           Power Dissipation
Stand by            7.515x10⁻⁶ W
Write operation     6.842x10⁻¹² W
Read operation      1.366x10⁻⁹ W

Table 1: Power dissipation in 6T SRAM cell
(B) Delay

6T SRAM Cell - Delay
Operation           Delay
Stand by            40 ns
Write operation     0.1 ns
Read operation      0.1 ns

Table 2: Delay in 6T SRAM cell
REFERENCES
[1] P. Barnes, "A 500MHz 64b RISC CPU with 1.5Mb On-Chip Cache", IEEE International Solid State Circuits Conference, Digest of Technical Papers, pp. 86-87, 1999.
[2] S. Hesley, et al., "A 7th-Generation x86 Microprocessor", IEEE International Solid State Circuits Conference, Digest of Technical Papers, pp. 92-93, 1999.
[3] Special issue on low power electronics, Proceedings of the IEEE, vol. 83, no. 4, April 1995.
[4] S. Subbanna, et al., "A High-Density 6.9 sq. µm Embedded SRAM Cell in a High-Performance 0.25 µm-Generation CMOS Logic Technology", IEDM Technical Digest, pp. 275-278, 1996.
[5] G.G. Shahidi, et al., "Partially-Depleted SOI Technology for Digital Logic", ISSCC Digest of Technical Papers, pp. 426-427, Feb. 1999.
[6] A.P. Chandrakasan, et al., "Low-Power CMOS Digital Design", IEEE Journal of Solid State Circuits, vol. 27, no. 4, pp. 473-484, April 1992.
[7] W. Lee, et al., "A 1V DSP for Wireless Communications", IEEE International Solid State Circuits Conference, Digest of Technical Papers, pp. 92-93, 1997.
[8] M. Izumikawa, et al., "A 0.25-µm CMOS 0.9-V 100-MHz DSP Core", IEEE Journal of Solid State Circuits, vol. 32, no. 1, pp. 52-61, Jan. 1997.
[9] K. Ishibashi, et al., "A 1V TFT-Load SRAM Using a Two-Step Word Voltage Method", IEEE International Solid State Circuits Conference, Digest of Technical Papers, pp. 206-207, 1992.
[10] H. Yamauchi, et al., "A 0.5V/100MHz Over-Vcc Grounded Data Storage (OVGS) SRAM Cell Architecture with Boosted Bit-Line and Offset Source Over-Driving Schemes", IEEE International Symposium on Low Power Electronics and Design, Digest of Technical Papers, pp. 49-54, 1996.
[11] K. Itoh, A.R. Fridi, A. Bellaouar and M.I. Elmasry, "A Deep Sub-V, Single Power-Supply SRAM Cell with Multi-Vt, Boosted Storage Node and Dynamic Load", Symposium on VLSI Circuits, Digest of Technical Papers, pp. 132-133, 1996.
[12] D. Pivin, "Pick the Right Package for Your Next ASIC Design", EDN, vol. 39, no. 3, pp. 91-108, February 3, 1994.
[13] C. Small, "Shrinking Devices Put the Squeeze on System Packaging", EDN, vol. 39, no. 4, pp. 41-46, February 17, 1994.
[14] D. Manners, "Portables Prompt Low Power Chips", Electronics Weekly, no. 1574, p. 22, November 13, 1991.
[15] J. Mayer, "Designers Heed the Portable Mandate", EDN, vol. 37, no. 20, pp. 65-68, November 5, 1992.
[16] A. Agarwal, B.C. Paul, H. Mahmoodi, A. Datta and K. Roy, IEEE Transactions on VLSI Systems, vol. 13, no. 1, January 2005.
[17] V. Degalahal, L. Li, V. Narayanan, M. Kandemir and M.J. Irwin, IEEE Transactions on VLSI Systems, vol. 13, no. 10, October 2005.
[18] M. Margala, "Low Power SRAM Circuit Design", Dept. of Electrical and Computer Engineering, University of Alberta, IEEE, 1999.
[19] H. Tran, "Demonstration of 5T SRAM and 6T Dual-Port RAM Cell Arrays", Symposium on VLSI Circuits, pp. 68-69, June 1996.
[20] E. Seevinck, F.J. List and J. Lohstroh, "Static-Noise Margin Analysis of MOS SRAM Cells", IEEE Journal of Solid-State Circuits, vol. SC-22, no. 5, pp. 748-754, Oct. 1987.
[21] J. Lohstroh, E. Seevinck and J. de Groot, "Worst-Case Static Noise Margin Criteria for Logic Circuits and Their Mathematical Equivalence", IEEE Journal of Solid-State Circuits, vol. SC-18, no. 6, pp. 803-807, Dec. 1983.
[22] I. Carlson, S. Anderson, S. Natarajan and A. Alvandpour, "A High Density, Low Leakage, 5T SRAM for Embedded Caches", Division of Electronic Devices, Department of Electrical Engineering (ISY), Linköping University, IEEE, 2004.
[23] T-H. Joubert, E. Seevinck and M. du Plessis, "A CMOS Reduced-Area SRAM Cell", ISCAS 2000 - IEEE International Symposium on Circuits and Systems, Geneva, Switzerland, May 28-31, 2000.
ENTREPRENEURSHIP DEVELOPMENT IN PUNJAB AND
HARYANA: A STUDY WITH SPECIAL REFERENCE TO
ENTREPRENEURSHIP DEVELOPMENT PROGRAMME
Sarabjit Singh*
Associate Professor and Head
Department of Humanities and Management
National Institute of Technology, Jalandhar (Punjab)
E mail- [email protected]
Dr. H. L. Verma
Professor, Haryana School of Business,
Guru Jambeshwar University of Science and Technology,
Hissar-125001, India
E mail- [email protected]
____________________________________________________________________________________________________________________________
ABSTRACT
Many developing countries, including India, are in a state of transition. They are striving to move from a subsistence-oriented, tightly integrated, inward-looking local economy to a surplus-seeking, market-led, outward-looking economy. Entrepreneurs can play a pivotal role in the economic development of the country. However, capital alone is not sufficient for the success of entrepreneurship. Besides economic aspects, intangibles such as knowledge have become increasingly important in modern times. Therefore, there is an emphasis on entrepreneurship training and development, and many organizations/institutions have taken proactive measures in this direction. The study has been carried out with a view to analyzing the role of Entrepreneurship Development Organizations in entrepreneurship development, and the problems faced by EDPs in developing and promoting entrepreneurship. Twenty-two entrepreneurship development organizations of Punjab and Haryana responded to the study. Achievement motivation training, management inputs, opportunity guidance, project report preparation, support system exposure, market survey, field visits and labour laws were the major inputs covered in EDPs. Non-availability of suitable candidates for training, limited availability of guest faculty, financial institutions' preference for collateral securities, inadequate finance for EDP trainees, delay in assistance from supporting organizations, inflexible rules prevailing in training institutions, problems in follow-up and non-seriousness on the part of EDP trainees were indicated as the major problems in promoting entrepreneurship.
Keywords: Entrepreneurship, Organizations, Models, Problems.
____________________________________________________________________________________________________________________
1. INTRODUCTION
1.1 Many developing countries including India are in a state of transition. They are striving to move from a subsistence-oriented, tightly integrated, inward-looking local economy to a surplus-seeking, market-led, outward-looking economy. Such a move is possible only with the emergence of a multitude of small-scale and rural enterprises in all walks of life. This
requires building up of a wider base of population capable of entrepreneurial behavior. If we take India as an example in the
context of development, we find that the initial build up of entrepreneurial activity took place in urban sector. This was
followed by a trickledown effect in rural communities over time. Development strategy today, however, seeks a more
proactive and immediate change in India. While much of policy making in this regard treats enterprise creation as a function
of appropriate economic conditions (made possible through institutional and economic interventions), others have
emphasized training and attitude change as vital elements in the process. But this requires systematic observation and research into the process through which entrepreneurship emerges and sustains itself. Enterprises and entrepreneurs have been at the
center stage of modernization since the days of Industrial Revolution. Economists, sociologists, psychologists and
anthropologists have studied this concept, usually within the frontiers of their respective disciplines.
1.2 The training and development of entrepreneurs has gained a tremendous amount of significance in the recent past, for it is purported to contribute to the socio-economic development of the country. The first seed in the direction of entrepreneurial development perhaps dates back to the early sixties, when the Small Industry Extension Training Institute was established by the Government of India. Since then there has been a widespread growth of entrepreneurial development activities. It will not be an exaggeration to say that not a day passes without some kind of entrepreneurial training activity going on in the country.
*Corresponding Author
The number of organizations participating in the development of entrepreneurship has increased enormously. A nationwide survey undertaken by NIESBUD (National Institute for Entrepreneurship and Small Business Development) identified more than 1600 organizations participating fully or partially in entrepreneurial development and promoting self-employment
in the country. Entrepreneurship Development Programmes (EDPs) have emerged as an important strategy for development
of human resources for promoting small and medium enterprises in developing and underdeveloped countries which are
characterized by lack of entrepreneurship, pockets of urban industrial concentration and large scale unemployment. Besides
the initiative by both Union and State Ministries of Industry, many financing institutions, educational institutions and
voluntary organizations have come forward to take part in developing self-employment through entrepreneurship. The initial
emphasis on the manufacturing sector is now shifting to encompass comparatively wide areas of service, processing, agro and rural industries on the one hand and high-tech on the other. The Entrepreneurship Development Programme is a popularly accepted method for developing entrepreneurs. It is regarded as an organized and systematic method of development.
2. REVIEW OF LITERATURE
2.1 Entrepreneurship is multidisciplinary in nature. The entrepreneur is the individual who ‘unites all means of production
and who finds in the value of the products, the reestablishment of the entire capital he employs, and the value of the wages, the interest, and the rent which he pays, as well as the profits belonging to himself’ (Say, 1816).
Mill (1848) considered entrepreneurship as the direction, supervision, control and risk taking, with risk being the main
distinguishing feature between the manager and the owner–manager. Schumpeter (1928, 1934) focused on the instability of
capitalism and on the entrepreneur’s function as an innovator. Fraser (1937) associated entrepreneurs with the management of
a business unit, profit taking, business innovation and risk bearing. Risk is an integral part of entrepreneurship (Oxenfeldt, 1943; Cole, 1959). Other definitions emphasize other characteristics. As summed up by Cochran (1968), ‘There are
some unresolved differences in the definitions of entrepreneurship, but there is agreement that the term includes at least a part
of the administrative function of making decisions for the conduct of some type of organization’. An entrepreneur is someone
who specializes in taking judgemental decisions.
2.2 ‘Entrepreneurship should not be defined on the basis of opportunity, but rather the cultural perception of opportunity’ (Dana, 1995). Entrepreneurs are seen as risk-takers and innovators who reject the relative security of employment in large
organizations to create wealth and accumulate capital (Scase and Goffee, 1987). Entrepreneurship is not about objective
monetary considerations in isolation; Penrose (1959) explains the importance of non-monetary considerations: ‘The fact that
businessmen, though interested in profits, have a variety of other ambitions as well, some of which seem to influence (or
distort) their judgment about the best way of making money, has often been discussed primarily in connection with the
controversial subject of profit maximization’. Value is a function of perceived need, and this is a function of cultural values,
as acquired from the nuclear family, the extended family and society. Much has to do with attitudes and perceptions that are
culturally coloured (Dana, 1995).
3. METHODOLOGY
3.1 The study has been carried out with a view to (i) analyze the role of Entrepreneurship Development Organizations in entrepreneurship development, and (ii) examine the problems faced by EDPs in developing and promoting entrepreneurship. The states of
Punjab and Haryana have been selected for the study as these are primarily agrarian states, and their future lies in the development of industry by harnessing the synergies between agriculture and industry, through the development of agro-based industry and the further strengthening of the base of manufacturing industries.
The data needed for determining present practices of entrepreneurship development were collected with the help of questionnaires. The required information was gathered by personally visiting various training institutions and organizations.
3.2 The sample included 32 ED organizations, but owing to the diverse nature of activities in offering entrepreneurship training, the scope of this study did not provide for inclusion of those organizations which had not conducted/sponsored programmes of at least two weeks full-time or one month part-time duration. It was, therefore, considered appropriate to drop such organizations from the purview of this study. Further discussion on entrepreneurship in this study is therefore confined to ED organizations which conducted EDPs of duration longer than 15 days and carried out follow-up. This resulted in a sample size of 22 ED organizations. The various ED organizations in the states of Haryana and Punjab are: Engineering Colleges and Training Institutions, District Industries Centers, Technical Consultancy Organizations, Small Industry Service Institute Branch Offices, Banks and Financial Institutions, Science and Technology Entrepreneurs Parks, Government Departments, Non-Government Organizations, and the Regional Center for Entrepreneurship Development (RCED) Regd.
4. DATA PRESENTATION AND ANALYSIS
Major findings with regard to entrepreneurship development programmes are as follows:
4.1 Pre-training Promotional Work
One of the important factors in the success of an EDP is considered to be the pre-training promotional work done by the
training organization. It was, therefore, planned to identify various activities involved in pre-training promotional work and
the extent to which such activities were carried out by training organizations. Based on the information made available by the
above ED organizations, the major activities during pre-training period and the extent to which they were carried out by these
organizations have been shown in Table 1:
Sr. No.  Activities Carried out by ED Organizations                  No. of EDOs
1.       Programme announcement in local newspapers                  22
2.       Identification of faculty for EDPs                          15
3.       Visit to various organizations by course faculty            11
4.       Programme publicity through supporting organizations         8
5.       Circular letters to various organizations                    6
6.       Radio announcements                                          6
7.       Hand bills and pamphlets                                     5
8.       Identification of viable venture opportunities               4
9.       Preparation of instructional material                        3
10.      Arranging infrastructure for conduct of EDPs                 3
11.      Finalizing course curriculum                                 2
12.      Visit to technical institutions and Employment Exchanges     2
13.      Visit to industrial areas                                    1

Source: Compiled from data provided by ED Organizations.
Table 1: Pre-training Activities carried out by ED Organizations
All 22 organizations which conducted EDPs gave programme advertisements in local newspapers. The second most important pre-training promotional activity was identification of resource persons for the conduct of the course; out of 22 ED organizations, 15 reported undertaking this activity.
Very few organizations (2) finalized the course curriculum prior to the start of the programme, 2 visited technical institutions and employment exchanges with a view to spreading the message about EDPs, and only 1 organization deputed its officers to an industrial area for the same purpose.
4.2 Response of EDPs Beneficiaries
Sr. No.  Particulars of EDPs Beneficiaries                                                        No. of EDP Beneficiaries
1.       Total no. of sample respondents                                                          150
2.       No. of sample respondents who did not need help in project/product selection             105
3.       No. of sample respondents who needed help in project/product selection                    45
4.       No. of respondents who got needed help in project/product selection                        6
5.       No. of respondents who felt to have been assisted to some extent in project/product selection    7
6.       No. of respondents who could not benefit in product selection                             32

Source: Compiled from data provided by ED Organizations.
Table 2: Beneficiaries Showing Benefits of Opportunity Guidance
Many EDP beneficiaries, i.e. 105 out of 150, did not require any help in project/product selection. Only 6 beneficiaries agreed that they got the needed help in project/product selection.
4.3 Nature of Programmes Offered
Entrepreneurship development programmes varied in terms of target groups, programme inputs, duration, methodology and whether the programme was part-time or full-time. Such variations were noticed from place to place and from organization to organization. The 22 ED organizations reported in this study also included two EDP sponsoring organizations. Types of programmes offered and their duration are given in Table 3. The table shows that most of the organizations conducted EDPs on a full-time basis and part-time programmes were not tried much.
Sr. No.  Types of Programmes Offered                            No. of EDOs
1.       Full-time                                              17
2.       Part-time and full-time both                            2
3.       Programme on week ends, i.e. Saturdays and Sundays      1
4.       EDPs Sponsoring Organizations                           2
         TOTAL                                                  22
Table 3: Type of Programmes Offered by ED Organizations
Neither the participants of part-time programmes found it inconvenient nor did participants of full-time programmes indicate
any special usefulness of such programmes. Evidently, full-time programmes were practiced by most of the training
organizations (17), Part-time and Full-time both by 2 ED organizations and programme on weekends by 1 ED organization.
4.4 Duration of EDPs
All ED training organizations were asked to furnish information with regard to duration of their programmes. Distribution of
ED organizations according to duration of their programmes is given in Table 4.
Sr. No.  Duration of EDPs in Weeks                              No. of EDOs
1.       3 weeks                                                 2
2.       4-6 weeks                                               8
3.       4-6 weeks full-time and 12-14 weeks part-time           2
4.       6-8 weeks                                               6
5.       9 weeks full-time                                       1
6.       12 weeks full-time                                      1
7.       Non-respondents                                         2
         TOTAL                                                  22
Table 4: Distribution of ED Organizations According to Programmes Duration
The 8 organizations offering programmes of 4-6 weeks duration included 3 branch offices of the Small Industry Service Institute, 3 training institutes and a bank (120-150 hrs). Out of these 8 organizations, 6 worked on a five-days-a-week basis and another on a six-days-a-week basis. 2 organizations offered full-time programmes and alternatively also offered 12-week part-time programmes (120-150 hrs). The 9-week full-time weekend programme was tried out only by Regional Engineering College, Kurukshetra (now NIT, Kurukshetra) (100-110 hrs). The 12-week full-time programme, the longest in duration conducted in the region, was offered by Punjab Engineering College, Chandigarh (minimum 240-260 hrs). The faculty associated with the conduct of the 3-month programme felt that the duration was indeed more than required.
4.5 EDP Models in Practice
EDP models developed by National Institute for Small Industry Extension Training, Hyderabad; National Institute for
Entrepreneurship and Small Business Development, New Delhi; Entrepreneurship Development Institute of India,
Ahmedabad; and State Bank of India, are some of the widely accepted models in India. Through this study, an effort was
made to find out which of the above models was practiced most by ED training institutions. Information pertaining to EDP
models practiced by various ED Organizations has been Compiled and given in Table 5.
It was interesting to note that all those institutions, which practiced EDII model, received funds for conduct of EDPs from
one or more than one of the following organizations:
1. National Science and Technology Entrepreneurship Development Board, Department of Science and Technology, Government of India, New Delhi.
2. Industrial Development Bank of India, Bombay.
3. Industrial Finance Corporation of India, Bombay.
4. Industrial Credit and Investment Corporation of India, Bombay.
Sr. No.   EDP Model                                           No. of EDOs Practicing the Model
1.        EDII Model                                            8
2.        EDII Model with minor modifications                   2
3.        NIESBUD Model                                         2
4.        SBI Model                                             1
5.        Own Model                                             7
6.        EDP Sponsoring Organization not conducting            2
          EDPs but favoring a model
          TOTAL                                                22
Table 5: Distribution of ED Organizations According to EDP Model
These EDP sponsoring organizations considered EDII an apex institute of entrepreneurship. 8 organizations were found practicing the EDII model as such, and another 2 practiced the EDII model with some modifications. Only 2 organizations were found practicing the NIESBUD model. One organization was practicing the SBI model and the remaining seven were practicing their own models of EDP.
4.6 Core Inputs Provided in EDPs
With a view to ascertaining the core inputs in EDPs, the training organizations were asked to indicate the inputs which they considered most important/critical in the EDPs which they had organized. The information provided by ED organizations has been tabulated in Table 6.
Sr. No.   Inputs                                                    No. of ED Organizations
                                                                    Considering it Critical
1.        Achievement Motivation/Entrepreneurial Motivation          18
          Training
2.        Management                                                 16
3.        Opportunity Guidance / Project Selection                   12
4.        Project Report Preparation                                 11
5.        Exposure to Support System                                 11
6.        Market Survey                                               4
7.        Field Visits                                                4
8.        Labour Laws                                                 4
9.        Marketing                                                   2
Table 6: Distribution of ED Organizations According to Programme Inputs
Out of a sample of 150 persons who underwent EDPs, 85 per cent said that Achievement Motivation Training was something they still remembered and had found genuinely useful.
Out of 22 organizations, the maximum number, i.e. 18, considered Achievement Motivation Training critical. Knowledge of marketing was considered critical by only 2 organizations.
4.7 Expenditure on EDPs
An attempt was made to find out the amount of money spent on EDPs by various institutions. Information pertaining to
amount spent per EDP by ED organizations is shown in Table 7.
Sr. No.   Range of Expenditure (Rs.)                          No. of Organizations
1.        Less than 20,000                                     0
2.        20,000-40,000                                        6
3.        40,000-60,000                                        5
4.        60,000-80,000                                        0
5.        80,000-1,00,000                                      4
6.        More than 1,00,000                                   3
7.        Non-respondents and EDP Sponsoring Organizations     4
          TOTAL                                               22
Table 7: Distribution of ED Organizations According to Expenditure Incurred per EDP
6 organizations reported expenditure in the range of Rs. 20,000-40,000 per EDP, 5 in the range of 40,000-60,000, 4 in 80,000-1,00,000 and three spent more than 1,00,000.
Budgets of various programmes were analysed in detail. It was noticed that the major part of the money went into programme advertisement; TA/DA to faculty, staff and guest speakers; salary of faculty and staff; boarding and lodging; cost of instructional material; honorarium to speakers; and post-training follow-up.
4.8 Relationship of Classroom Inputs and Field Work
It is believed that classroom training alone is not enough for potential entrepreneurs; they also need to do a lot of field work while establishing and managing their enterprises. Finding out the relationship between the time devoted to classroom inputs and field work was, therefore, considered necessary. Training institutions/organizations were asked to look back at their experience and provide the total time spent on classroom inputs and field work. Data obtained from 18 organizations conducting comparable programmes have been tabulated in Table 8.
Sr. No.   Ratio of Classroom Inputs to Field Work             No. of Organizations
1.        85:15                                                03
2.        80:20                                                10
3.        75:25                                                01
4.        70:30                                                02
5.        60:40                                                02
6.        Non-respondents & ED Sponsoring organizations        04
          Total                                                22
Table 8: Distribution of ED Organizations According to Time Spent on Classroom Inputs and Field Work
The ratio of classroom to field work time for the 18 organizations which provided the required information varied between 85:15 and 60:40. Applying a weighted average, approximately 74 per cent of the total time was spent by training organizations on imparting classroom inputs and the remaining 26 per cent on field work.
It can be concluded that approximately ¾ of EDP time was spent on classroom inputs and ¼ on field work. The mechanism for assisting trainees in making effective use of the time provided for field work, and whether the present approach of spending almost ¼ of the time on field work is justified, has not been studied in this work.
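The weighted-average calculation can be sketched from Table 8. The minimal Python sketch below weights each classroom share by the number of organizations reporting it, the simplest weighting available since per-programme hours are not tabulated:

```python
# Classroom share (%) -> number of organizations reporting that ratio (Table 8).
ratios = {85: 3, 80: 10, 75: 1, 70: 2, 60: 2}

total_orgs = sum(ratios.values())  # the 18 responding organizations
classroom = sum(pct * n for pct, n in ratios.items()) / total_orgs
fieldwork = 100 - classroom

print(f"classroom {classroom:.0f}% : field work {fieldwork:.0f}%")
```

Note that a pure organization-count weighting gives roughly 77:23; the 74:26 figure reported in the study presumably also weights programmes by their duration in hours, which Table 8 does not provide.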
4.9 Post Training Follow-Up Activities
The 22 ED organizations covered in this study include 2 EDP sponsoring organizations and 3 organizations which did not carry out any follow-up as such, leaving only 17 ED organizations. Responses of the 17 organizations which claimed to have carried out post-training follow-up are summarized in Table 9. Most of the trainees expressed that such meetings largely reviewed the status of participants and hardly any real help/guidance was provided. It was also revealed that not more than four meetings were organized by any training organization, and the duration of a meeting usually varied between one and two hours.
Follow-up meetings were organized by 13 out of the 17 organizations. Only 2 organizations claimed to have provided escort services to needy trainees, and 2 claimed to have helped their trainees in the preparation of project reports during the post-training period. In the case of 2 ED organizations, the programme coordinator stayed at the EDP center after training with a view to assisting the trainees in taking concrete steps towards establishing their ventures.
Sr. No.   Follow-up Activity                                       No. of ED Organizations which
                                                                   carried out this activity
1.        Follow-up meetings                                        13
2.        Correspondence with trainees and various organizations    11
3.        Free guidance and counseling to trainees                  10
4.        Personal visits by trainers                                4
5.        Monitoring committee/Problem-solving committee meetings    4
6.        Escort services to trainees                                2
7.        Assistance in project report preparation                   2
8.        Extended stay of coordinator at training center            2
Table 9: Post-training Follow-up Activities
4.10 Suggested Criteria for Evaluation of EDPs
Like all other developmental projects and programmes, EDPs also need to be evaluated. Training Institutions were requested
to suggest ways and means of evaluating EDPs. Out of 22 ED organizations, 5 did not suggest any evaluation criteria.
Sr. No.   Suggested Evaluation Criteria                                  No. of EDOs
1.        No. of enterprises established                                  14
2.        No. of enterprises operating successfully after a few years      4
3.        Employment generated                                             3
4.        Competencies developed                                           3
5.        Time taken in establishing enterprises                           1
6.        Better management practices                                      1
7.        Growth and development of enterprises established                1
8.        Contribution of entrepreneurs to gross national product          1
9.        Demand for more EDPs                                             1
10.       Response to follow-up meetings                                   1
Table 10: Distribution of ED Organizations According to EDP Evaluation Criteria
14 out of 17 ED organizations suggested that the criterion for evaluating EDPs should be the number of enterprises established by trained entrepreneurs. Very few organizations suggested criteria such as trainees establishing their enterprises faster, adoption of better management practices in comparison with untrained entrepreneurs, growth and development of the enterprises established, contribution of entrepreneurs to GNP, demand for more EDPs, and response to follow-up meetings.
4.11 Problems Faced by Training Organizations
Training institutions were asked to narrate the major problems faced by them in the conduct of EDPs. Out of 22 ED organizations, 3 reported that they did not face any problem in conducting EDPs, and the 2 EDP sponsoring organizations were excluded. Responses of the other 17 organizations have been tabulated in Table 11.
Sr. No.   Nature of Problem                                  No. of Organizations
                                                             Facing the Problem
1.        Inadequate availability of suitable candidates      7
2.        Limited availability of guest faculty               6
3.        Collateral securities were insisted upon            6
4.        Inadequate finance for EDPs                         3
5.        Delay in assistance by support system               3
6.        No flexibility in rules                             3
7.        Post-training follow-up                             3
8.        Non-serious candidates                              3
9.        No separate cell for EDPs                           2
10.       No separate faculty and staff for EDPs              2
11.       No regular sanction of programmes                   2
12.       Inadequate infrastructure for training              2
13.       Inadequate infrastructure for entrepreneurs         2
14.       Low level of entrepreneurial awareness              2
15.       Unable to satisfy trainees                          2
16.       Heterogeneous background of trainees                1
17.       Trainees expect VIP treatment                       1
18.       Trainees not prepared to take risk                  1
Table 11: Distribution of ED Organizations according to the Problems Faced
Table 11: Distribution of ED Organizations according to the Problems Faced
The problem of inadequate availability of suitable candidates for training was faced by the maximum number of organizations, i.e. 7 out of 17. The problem of non-availability of competent faculty from outside was faced by 6 organizations. 6 training institutions stated that collateral securities for the grant of loans were invariably insisted upon by banks and financial institutions. 1 stated that the heterogeneous background of trainees posed a problem. An organization which had conducted residential EDPs faced the problem of trainees expecting VIP treatment in terms of boarding and lodging. Yet another organization stated that most of the trainees avoided taking the needed risks.
5. DISCUSSION AND CONCLUSION
5.1 The entrepreneurship training movement in the northern region of India is largely being carried out by Technical Consultancy Organizations, Small Industries Service Institutes, Science and Technology Entrepreneur Parks, Engineering Colleges, Specialized Training Institutes, District Industries Centers, National and State Level Financial Institutions, Government Departments and Voluntary Organizations like the Regional Centre for Entrepreneurship Development (RCED).
5.2 Programme announcements in local newspapers, identification of faculty and visits to concerned departments constituted the major activities involved in pre-programme preparation. Identification of viable venture opportunities before the start of the programme was yet another pre-training activity. Out of 150 EDP beneficiaries, 45 needed help in product selection. However, out of these 45 beneficiaries, only 6 got the needed guidance and 7 felt they got assistance only to some extent. The overall impression of EDP beneficiaries about this pre-training activity was poor.
Out of the 22 organizations involved in conducting EDPs of more than one week's duration, 17 were found conducting full-time programmes, 2 were conducting both full-time and part-time programmes, and another organization offered its programme on weekends.
5.3 EDP duration in terms of classroom hours was found to be in the range of 108 to 150 hours; most EDPs had a duration of 120 to 135 hours. Keeping in view the feedback provided by EDP trainees, an EDP duration of 150 hours should be considered adequate. Out of 22 organizations, 8 were following the EDII model and 2 were found practicing EDII's model with minor modifications. The NIESBUD model of EDP was practiced by only 2 organizations. One was practicing the SBI model, and 7 organizations, most of them taking cues from EDII's model, developed their own EDP models. There were 2 EDP sponsoring organizations not conducting EDPs but favouring the EDII model.
5.4 Achievement motivation training, management inputs, opportunity guidance, project report preparation, support system exposure, market surveys, field visits and labour laws were the major inputs covered in EDPs. The amount spent per EDP was in the range of Rs. 20,000-1,00,000. No uniform approach was adopted by training organizations in working out budgets for EDPs. The way EDPs have been funded in the past reveals a need for standardizing the cost of programmes, and ED institutions need to be more realistic in sanctioning budgets for EDPs.
The post-training follow-up was mostly carried out in the form of follow-up meetings, counseling of trainees, visits by trainers to trainees and the constitution of monitoring committees. Feedback from trainees indicated the ineffectiveness of the follow-up programme. Predominantly, the number of enterprises established by trained entrepreneurs was suggested as the criterion for evaluating EDPs. The number of enterprises operating successfully after a few years, employment generated, and competencies developed in trainees were other criteria worth mentioning suggested by training institutions for the evaluation of EDPs.
5.5 Non-availability of suitable candidates for training, limited availability of guest faculty, financial institutions' insistence on collateral securities, inadequate finance for EDP trainees, delay in assistance from supporting organizations, inflexible rules prevailing in training institutions, problems in follow-up and non-seriousness on the part of EDP trainees were indicated as the major problems in promoting entrepreneurship.
6. RECOMMENDATIONS
6.1 In view of massive unemployment, education and training institutions need to be encouraged to take up the task of training potential entrepreneurs. Identification of the right type and an adequate number of persons who wish to attend EDPs has been a problem. ED organizations should develop and validate an entrepreneur identification tool which could really help them in identifying EDP aspirants with latent entrepreneurial traits.
Product/project selection continues to be a serious problem area for EDP trainees. ED organizations, in collaboration with the entrepreneurial support system, should identify venture opportunities which trainees can consider taking up as a career option.
6.2 Most of the ED organizations did not provide adequate instructional material to programme participants. As a part of pre-programme preparation, ED organizations should prepare adequate and useful course material which can be given to trainees during EDPs. Standardization of programme inputs, training methodology and costing of EDPs is required so as to avoid wide variations in programmes.
ED organizations need to clearly state the extent to which they can meet the expectations of participants. In fact, a clear message to programme aspirants about what the programme organizers can do for them could help in weeding out persons with unreasonable expectations. Training organizations and programme sponsoring organizations need to arrive at a logical consensus on which expectations should be considered reasonable or unreasonable.
6.3 In light of the broad objectives of entrepreneurship development, i.e. employment creation and income generation, ED organizations need to bring more promising target groups within their fold. EDPs need to be conducted for employees of government departments, corporations and public sector companies; harnessing the experience and maturity of such people would give a boost to entrepreneurship promotion.
Entrepreneurial orientation, continuous updating of knowledge relating to entrepreneurship promotion, and development of skills and capabilities in identifying latent entrepreneurial traits amongst EDP aspirants are extremely important for successful entrepreneur trainers. Training and retraining of entrepreneur trainers/motivators is therefore required for sustaining entrepreneurship development. The immediate need is to train the trainers in the entrepreneur identification process.
6.4 Programme sponsoring organizations need to develop criteria for the accreditation of entrepreneurship training organizations. The ad hoc approach to entrepreneurship must come to an end by institutionalizing entrepreneurship in resourceful organizations.
REFERENCES
[1] Cochran, Thomas C., 'Cultural factors in economic growth', The Journal of Economic Growth, 20(4), December, 515–30, (1960).
[2] Dana, Léo-Paul, 'Entrepreneurship in a remote Sub-Arctic community: Nome, Alaska', Entrepreneurship: Theory and Practice, 20(1), Fall, 55–72; reprinted in Norris Krueger (ed.), Entrepreneurship: Critical Perspectives on Business and Management, vol. IV, London: Routledge, 2002, pp. 255–75, (1995).
[3] Dana, Léo-Paul, 'Self-employment in the Canadian Sub-Arctic: an exploratory study', Canadian Journal of Administrative Sciences, 13(1), March, 65–77, (1996).
[4] Fraser, Lindley M., Economic Thought and Language, Black, (1937).
[5] Mill, John S., Principles of Political Economy with Some of their Applications to Social Philosophy, London: Longmans, Green, (1848).
[6] Oxenfeldt, Alfred R., New Firms and Free Enterprise: Pre-War and Post-War Aspects, Washington, DC: American Council of Public Affairs, (1943).
[7] Penrose, Edith Tilton, The Theory of the Growth of the Firm, Oxford: Basil Blackwell, (1959).
[8] Say, Jean Baptiste, Catechism of Political Economy: Or, Familiar Conversations of the Manner in Which Wealth is Produced, Distributed, and Consumed by Society, London: Sherwood, (1816).
[9] Scase, Richard and Robert Goffee, 'Introduction', in Robert Goffee and Richard Scase (eds), Entrepreneurship in Asia: The Social Processes, London: Croom Helm, pp. 1–11, (1987).
[10] Schumpeter, Joseph Alois, 'The instability of capitalism', Economic Journal, 38, 361–86, (1928).
[11] Schumpeter, Joseph Alois, The Theory of Economic Development: An Inquiry into Profits, Capital, Credit, Interest, and the Business Cycle, trans. Redvers Opie, Cambridge, Massachusetts: Harvard University Press, (1934).
ISSN No.:0975-3389
STRATEGIC MANAGEMENT PRACTICES:
A STUDY OF SELECTED INDIAN COMPANIES
Mani Shreshtha (Corresponding Author)
Assistant Professor, NC College of Engineering
Panipat -132107, India
E-mail: [email protected]
Dr H L Verma
Professor, Haryana School of Business
Guru Jambheshwar University of Science and Technology, Hissar-125001, India
E-mail: [email protected]
______________________________________________________________________________________________________________________________
ABSTRACT
In this age of cutthroat competition it has become indispensable for business organizations to adopt practices which give them competitive advantage. The changing business environment resulting from economic liberalization and globalization has made things more difficult for those responsible for running organizations. This puts emphasis on taking strategic decisions in a manner that ensures organizational success not only for the present but also for the years to come. Such strategic decisions can be taken by following the structured path of the strategic management process. Many organizations intend to adopt the strategic management process for taking strategic decisions, but how successful they are in doing so is an issue of concern. This paper investigates the presence and procedure of strategic management practices in Indian organizations.
Keywords: Strategic Management Practices, Phases of Strategic Management, Strategic Issues, Indian Companies
______________________________________________________________________________________________________________________________
1. INTRODUCTION
1.1 Today's competitive world demands performing better than others while keeping stakeholders satisfied. Every
organization is striving hard to gain sustained competitive advantage. The near future may find companies competing not
only among industry players but also with players of unrelated industries. The landscape of today’s business is rapidly
changing and can be characterized by considering various elements like Time, Core competence, Value generation,
Innovation & Technology, Globalization, Human Resource Development, Organizational Restructuring, Corporate
Governance, Knowledge Management, Transformational Leadership, Influence of internet, and Brand building for strategic
success. With the internationalization of markets, the way of doing business has completely changed. In order to achieve economies of scale, and thus the low costs and lower prices needed to be competitive, companies are now thinking of a global market instead of a national market. With more and more companies going global, it becomes more important to keep track of international developments and to position the company for long-term competitive advantage. Managerial tasks have now shifted from performing the basic functions towards the formulation and implementation of a well-crafted strategy, and hence acting proactively. Crafting and executing strategy have therefore become core management functions.
1.2 It becomes imperative to understand the development of the strategic management field in order to grasp its relevance for modern-day organizations. The history of strategic corporate management is one of cumulative learning, and it has progressed stage-wise since the Second World War. The concept of strategic management has been expanded and refined by business practitioners and academic researchers over time. The field of strategic management has evolved over four decades [1]. Basic Financial Planning was first recognized in the 1950s; it largely focuses on the budget of an organization for one or two years, with the aim of developing control mechanisms for costs and the internal environment. It is under the domain of financial executives and generally excludes people from other functional areas. Due to the myopic view of this approach, a need arose
of a more effective system addressing the business from a long-term view. This was followed by Forecast-based Planning, introduced in the 1960s. Managers propose projects spanning more than one year, using both internal and external data, with extrapolation applied to current trends for five years into the future. The chief issue is how to allocate financial resources between the businesses while encompassing other functional areas. The time horizon is usually three to five years.
Strategic Planning enlarged the scope of forecast-based planning while taking care of its limitations. In this stage, top
management takes control of the planning process by initiating strategic planning. Its focal area is the outside players,
mainly the customers and the competitors, with key emphasis on formal strategy formulation, leaving implementation issues to lower management levels. Strategic Management incorporates the concept of strategic thinking, and the field was well recognized in the 1980s. Top management forms planning groups of managers and key employees from various departments and work groups at many levels. The sophisticated annual five-year strategic plan is replaced with strategic thinking at all levels throughout the year, and strategic information is available at all levels. Planning is considered an interactive activity at all levels and no longer top-down.
1.3 The concept of strategic management also considers the aspect of stakeholder satisfaction and the process to be followed,
and can be defined as the process through which organizations analyze and learn from their internal and external
environments, establish strategic direction, create strategies to achieve established goals, and execute these strategies, all in
an effort to satisfy the stakeholders [2]. The elements of strategic management can be understood by considering them as a stream of decisions taken one after another, popularly known as the stages or phases of strategic decision-making. The strategic management process is the full set of commitments, decisions and actions required for a firm to achieve strategic
competitiveness and to earn above-average returns. The strategic decision-making process is concerned with defining the company's purpose of existence, the qualitative and quantitative projection of its objectives and goals, and determining the most profitable way to deploy resources to exploit present and future opportunities. It also includes implementing, monitoring, recycling and reformulating plans in view of the changing environment.
1.4 For good management, the formulation and execution of good strategy is imperative. This is one reason for examining the strategic management practices followed by the Indian corporate sector at the formulation and implementation phases. The first phase of strategic decision-making concerns the planning and establishment of strategic intent, which lays the foundation for strategic management. The second phase focuses on environmental scanning: due to the presence of uncertainty and risk in the general and immediate environment of the organization, it is necessary to identify the strategic factors. The third phase focuses on strategic alternatives and choices, developing alternative strategies out of many options and choosing the one most suited in the light of environmental opportunities and threats and corporate strengths and weaknesses. The fourth phase is related to the implementation of the selected strategic alternative. The fifth and final phase includes the evaluation and control process and ensures that the company is achieving what it set out to achieve [3].
2. REVIEW OF LITERATURE AND OBJECTIVES OF THE STUDY
2.1 On the relevance of strategic management practices, Schraeder [4] is of the view that organizations can benefit from strategic plans, which should be viewed as a tool that evokes action within the organization: a living document that guides the activities of the organization in a purposeful manner. Similarly, Sharma [5] stated that all firms, large or small, have to go in for strategic management: for small companies the reward for adopting strategic management is improved economic performance, while for bigger ones the reward takes the form of improved practices and better response to an uncertain external environment. He also observes that strategic management should encourage creativity and adaptability in a firm, with a swift response to shifts in the environment. Lyles [6] states that the most important issues for strategic management in the 1990s are strategic decision making, organization change, global competition/operation, technology innovation, government policy, restructuring and leadership. Studies conducted by Karger and Malik [7], Stagner [8], and Thune and House [9] emphasize that firms using strategic management methods outperformed those that did not, as shown by their performance measurements. Dincer et al. [10] observed that firms appear to have a greater commitment to the formulation aspects of strategy and relatively less to the implementation and evaluation of strategy.
2.2 The review pointed to the need to further examine strategic management practices from an Indian perspective. The specific research objectives of the study are to identify the issues that require the strategic attention of management and to assess the strategic management practices in the companies.
3. RESEARCH METHOD
3.1 A questionnaire was designed to collect primary data on the respondents' views in the area of concern under study. The first part of the questionnaire, on Strategic Issues, was designed to determine the issues in terms of their requirement for the strategic attention of an organization. The second part, pertaining to Strategic Management Practices [11], was designed to ascertain the steps in the strategic decision-making process being followed by the organizations. The strategic decision-making process has been examined through a 46-item scale, developed considering the major phases of the strategic management process. Respondents were asked to rate their agreement with these items on a five-point Likert-type scale (where 1 = Strongly Disagree to 5 = Strongly Agree).
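As an illustration of how responses on such a scale are typically summarized, the sketch below computes the mean, standard deviation and percentage of agreement (Agree plus Strongly Agree, i.e. ratings of 4 or 5) for one item; the ratings shown are hypothetical, not data from this survey:

```python
from statistics import mean, pstdev

# Hypothetical ratings for one questionnaire item on the five-point scale
# (1 = Strongly Disagree ... 5 = Strongly Agree); not actual survey responses.
ratings = [5, 4, 4, 3, 5, 4, 2, 5, 4, 4]

item_mean = mean(ratings)
item_sd = pstdev(ratings)
# "Percentage of agreement" counts both Agree (4) and Strongly Agree (5).
pct_agreement = 100 * sum(r >= 4 for r in ratings) / len(ratings)

print(f"Mean = {item_mean:.2f}, SD = {item_sd:.2f}, Agreement = {pct_agreement:.0f}%")
```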
3.2 The respondents comprise company officials at the upper managerial level, including people who have exposure to the strategic management practices of their organization and who are part of its strategic decision-making process. In this study, the method of random sampling has been used. For the purpose of the study, any organization operating in India is considered an Indian organization. The organizations have been randomly selected from different sectors of the economy. Responses from 119 organizations have been considered for data analysis, adopting relevant statistical methods. The selected 119 organizations are in a wide range of industries; a brief profile of the organizations and respondents is presented in Table 1:
Sr. No.   Profile Group                 Sub Groups                         N     %
1         Total Sample Size                                               119   100
2         Sectors                       i.   Textile                       14   11.76
                                        ii.  Pharmaceutical & Chemical     14   11.76
                                        iii. Manufacturing                 24   20.17
                                        iv.  Engineering                   23   19.33
                                        v.   Services                      19   15.97
                                        vi.  Software & IT                 25   21.01
3         Education Level               i.   Graduation                    43   36.13
                                        ii.  Post Graduation               37   31.09
                                        iii. Professional                  39   32.77
4         Total Experience (in yrs.)    i.   Upto 15                       23   19.33
                                        ii.  15-20                         40   33.61
                                        iii. 20-25                         34   28.57
                                        iv.  Above 25                      22   18.49
5         Age Group (in yrs.)           i.   Upto 40                       28   23.53
                                        ii.  40-50                         62   52.10
                                        iii. Above 50                      29   24.37
Source: Primary Survey
Table 1: Respondent Profile
Table 1 indicates that the respondents are well placed to answer the questions asked.
4. ANALYSIS OF DESCRIPTIVE STATISTICS
The analysis has been carried out in two parts. The first part takes care of the various issues requiring strategic attention among different industrial sectors, and the second part focuses on a phase-wise analysis of the strategic decision-making process.
4.1 Issues Requiring Strategic Attention
In the present section, data about issues that require the strategic attention of management are analyzed. On the basis of a pilot survey conducted by the researcher, seven such issues were identified. These seven issues were then put before the respondents in order to identify the issues that generally require the most attention of strategists in their organizations. The following results have been observed on the basis of the data analysis:
Strategic Issue                 Rank   Mean   SD
Leadership                       1     4.66   0.54
Quality of Goods or Services     2     4.50   0.56
Innovation                       3     4.46   0.62
Customer Focus                   4     4.45   0.68
Competent Human Capital          5     4.23   0.64
Ethical Practices                6     4.11   0.59
Organization Culture             7     3.73   0.71
Source: Primary Survey
Table 2: Ranking of Issues Requiring Strategic Attention
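The rank column follows directly from sorting the issues by their mean ratings. A short Python sketch using the means reported in Table 2:

```python
# Mean ratings of the seven strategic issues, as reported in Table 2.
issues = {
    "Leadership": 4.66,
    "Quality of Goods or Services": 4.50,
    "Innovation": 4.46,
    "Customer Focus": 4.45,
    "Competent Human Capital": 4.23,
    "Ethical Practices": 4.11,
    "Organization Culture": 3.73,
}

# Rank in descending order of mean rating.
ranked = sorted(issues, key=issues.get, reverse=True)
for rank, issue in enumerate(ranked, start=1):
    print(f"{rank}. {issue} (mean {issues[issue]:.2f})")
```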
From Table 2 it can be inferred that all sectors under study agree that Leadership (Mean = 4.66, SD = 0.54) is the most crucial issue requiring strategic attention in organizations in the current situation. The issues of quality of goods or services, innovation, customer focus, competent human capital and ethical practices have been ranked second, third, fourth, fifth and sixth respectively. Respondents have ranked Organization Culture (Mean = 3.73, SD = 0.71) as the least strategic-attention-seeking issue. The low values of Standard Deviation (SD) on all the issues highlight the unanimity in the viewpoint of respondents across the Indian corporate sector. Further, the ranking of these strategic issues has been analyzed sector-wise and is given in Table 3 as under:
Strategic Issue                 Textile   Pharma & Chem.   Mfg.   Engg.   Services   Soft. & IT
Leadership                         1            1            1      4        2           1
Quality of Goods or Services       3            4            3      1        4           2
Innovation                         5            2            4      2        1           4
Customer Focus                     2            6            2      3        3           3
Competent Human Capital            4            3            4      5        5           5
Ethical Practices                  6            5            7      6        6           6
Organization Culture               7            7            6      7        7           7
Source: Primary Survey
Table 3: Ranking of Strategic Issues by Different Sectors
Table 3 examines how the ranking of strategic issues varies across sectors. Apart from the Engineering and Services sectors,
all sectors rank Leadership as the strategic issue most in need of attention. All sectors agree that Organization Culture is a
less critical strategic issue than the others.
4.2 Strategic Management Practices
This section presents an analysis of the Strategic Management Practices (SMP) followed by the selected organizations. Five
phases covering all important issues of the strategic decision-making process, namely Institutionalizing Planning Function,
Establishing Strategic Foundation, Conducting Strategic Situational Diagnosis, Developing Strategic Plans and Managing
Strategic Plan Implementation, have been considered for the analysis.
4.2.1 Institutionalizing Planning Function: Phase I
The analysis of the first phase highlights how seriously companies emphasize the planning function in the strategic
decision-making process. To get responses on institutionalizing the planning function, five statements (SMP 1 - SMP 5) were
used. The respondents were asked to rate their level of agreement with these statements on a five-point scale. For the purpose
of reporting a strategic management practice, the percentage of agreement includes both Agree and Strongly Agree
responses. More than 90 percent of the managers support the practice that top executives take formal responsibility for the
organization's strategic business planning (SMP 1) (Mean = 4.50) and that strategic planning is a top priority activity
performed on a regular basis (SMP 2) (Mean = 4.30). More than 75 percent of the respondents confirm that a proper resource
allocation is carried out for the strategic planning activity (SMP 3) (Mean = 3.97). More than 75 percent of the respondents
agree that in their organization the strategic planning activity is performed in a systematic manner (SMP 4) (Mean = 3.97).
63 percent of the respondents have shown their agreement with the practice that key people at all levels are consulted for the
strategic planning activity (SMP 5) (Mean = 3.68).
4.2.2 Establishing Strategic Foundation: Phase II
The Institutionalizing Planning Function phase is followed by the strategic management practices related to the Establishment
of Strategic Foundation, termed the second phase for the purpose of our study. The analysis of this phase highlights the
organizations' emphasis on establishing strategic intent. To get responses on the Establishment of Strategic Foundation
phase, nine statements (SMP 6 - SMP 14) were used. The respondents were asked to rate their level of agreement with these
statements on a five-point scale. For the purpose of reporting a strategic management practice, the percentage of agreement
includes both Agree and Strongly Agree responses. More than 85 percent of the respondents support the statement that their
organization has a written mission statement (SMP 6) (Mean = 4.19), whereas 75 percent of the managers confirm that all
management and higher-level staff are aware of the mission and understand it properly (SMP 7) (Mean = 4.01). 90 percent
(Mean = 4.23) of the respondents agree with the practice of
quantifying goals for proper measurement (SMP 9). 70 percent of the respondents say that their organization has written
long-term and short-term goals (SMP 8) (Mean = 3.95), whereas more than 73 percent support the practice that actual
performance is systematically measured keeping in view the goals of the organization (SMP 13) (Mean = 4.08). Almost 75
percent of the strategic managers have given their consent regarding the participation in goal setting of management and
higher-level staff whose responsibilities are affected (SMP 14) (Mean = 3.93). A relatively smaller percentage, 67 percent
(Mean = 3.79), of the respondents have shown their agreement with the practice that the goals appear realistic yet challenging
based upon experience and/or research (SMP 12).
4.2.3 Conducting Strategic Situational Diagnosis: Phase III
The third phase of the strategic decision-making process encompasses the strategic management practices related to internal
and external environmental analysis. To get responses on the Conducting Strategic Situational Diagnosis phase, eighteen
statements (SMP 15 - SMP 32) were used. The respondents were asked to rate their agreement with these items on a five-point
scale. In this phase, statements SMP 15 - SMP 22 reflect the practices followed under external environmental analysis, while
internal environmental analysis practices are covered by the remaining statements (SMP 23 - SMP 32). For the purpose of
reporting a strategic management practice, the percentage of agreement includes both Agree and Strongly Agree responses.
Under external environmental analysis, 88 percent of the strategic managers agree most strongly with the practice that their
organization assesses institutional factors such as the cost and availability of capital, government regulations and the
economy (SMP 21) (Mean = 4.25). 62 percent of the respondents agree with the practice (SMP 17) of including a detailed
analysis of the market or other geographic and/or demographic and/or psychographic segments (Mean = 3.70). 80 percent of
the responding strategic managers support the practice that the organization assesses the industry as a whole in terms of new
competitors, new concepts, innovative technologies, procurement practices, price trends, labor practices, etc. (SMP 20)
(Mean = 4.02). 69 percent of the strategic managers agree with the practice that internal analysis identifies key strengths and
weaknesses in the organization (SMP 24) (Mean = 3.93). More than 83 percent of the respondents support the practices of
analyzing their own business objectively and of including profitability factor trends such as after-tax earnings and return on
assets in the analysis (SMP 23 and 25) (Mean = 4.13 and 4.14 respectively). More than 67 percent of the strategic managers
agree with the practices of including marketing/advertising and pricing strategy and its effects on customer behavior in the
analysis (SMP 26 and 27) (Mean = 3.77 and 3.87 respectively). Under the diagnosis phase, 72 percent (Mean = 3.82) of the
respondents follow the practice that, after completing its external and internal analyses, the organization reviews the mission
and goals in light of the apparent threats/opportunities and strengths/weaknesses (SMP 31).
4.2.4 Developing Strategic Plans: Phase IV
The fourth phase of strategic management practices focuses on the development of strategic plans. To get responses on
Developing Strategic Plans, seven statements (SMP 33 - SMP 39) were used. The respondents were asked to rate their level of
agreement with these statements on a five-point scale. For the purpose of reporting a strategic management practice, the
percentage of agreement includes both Agree and Strongly Agree responses. The analysis shows that 72 percent of the
respondents agree with the practice of using the strategic (situational) diagnosis to formulate strategic plan options (SMP 33)
(Mean = 3.92). More than 70 percent of the respondents have shown their agreement towards considering restructuring and
product enhancement options at the time of developing strategic plans (SMP 36 and 37) (Mean = 3.89 and 3.90 respectively).
80 percent (Mean = 4.06) of the strategic managers have given their nod to the practice that the organization decides its
strategic plan(s) based on feasibility and risk/return criteria (SMP 39).
4.2.5 Managing Strategic Plan Implementation: Phase V
Managing Strategic Plan Implementation has been denoted as the fifth phase of the strategic decision-making process. This
phase highlights strategic management practices at the implementation stage. To get responses on Managing Strategic Plan
Implementation, seven statements (SMP 40 - SMP 46) were used. Respondents were asked to rate their level of agreement with
these statements on a five-point scale. Table 4 reports sample views of the respondents on adopting the practices related to
managing strategic plan implementation in their organizations. For the purpose of reporting a strategic management practice,
the percentage of agreement includes both Agree and Strongly Agree responses. 77 percent of the respondents (Mean = 4.00)
support the practice of clearly assigning lead responsibility for action plan implementation to a person or, alternately, to a
team (SMP 41), while almost 80 percent have given their agreement on the practice that sufficient resources are provided for
proper implementation (SMP 42) (Mean = 3.79). More than 70 percent of the strategic managers have given their consent to
the practice of setting clearly defined and measurable performance standards for each plan element (SMP 43) (Mean = 3.84).
As far as monitoring and evaluation is concerned, 72 percent of the respondents (Mean = 3.88) agree with the practice that the
organization reviews monitoring data regularly and revises strategic decisions as appropriate (SMP 44). The practice of
continuous monitoring is followed by 75 percent of the respondents (SMP 45) (Mean = 3.87). A relatively smaller proportion,
60 percent of the respondents (Mean = 3.59), support the practice that individuals responsible for strategic planning and
implementation are rewarded for successful performance (SMP 46).
A sample of descriptive statistics for all the phases of the strategic decision-making process is presented in Table 4:

Phase I: Institutionalizing Planning Function (SMP 1 - SMP 5)

SMP 1   Top executives take formal responsibility for the organization's strategic business planning.
        Strongly Disagree: -, Disagree: 1 (0.84), Neutral: -, Agree: 57 (47.90), Strongly Agree: 61 (51.26), Mean = 4.50, Std. Dev. = 0.55

SMP 2   Strategic planning is a top priority activity, performed on a regular basis, e.g., each year.
        Strongly Disagree: 1 (0.84), Disagree: 3 (2.52), Neutral: 7 (5.88), Agree: 56 (47.06), Strongly Agree: 52 (43.70), Mean = 4.30, Std. Dev. = 0.76

Phase II: Establishing Strategic Foundation (SMP 6 - SMP 14)

SMP 6   The organization has a written mission statement.
        Strongly Disagree: -, Disagree: 1 (0.84), Neutral: 16 (13.45), Agree: 61 (51.26), Strongly Agree: 41 (34.45), Mean = 4.19, Std. Dev. = 0.69

SMP 7   All management and higher-level staff are aware of the mission and understand it.
        Strongly Disagree: 1 (0.84), Disagree: 1 (0.84), Neutral: 24 (20.17), Agree: 63 (52.94), Strongly Agree: 30 (25.21), Mean = 4.01, Std. Dev. = 0.75

Phase III: Conducting Strategic Situational Diagnosis (SMP 15 - SMP 32)

SMP 15  The organization periodically gathers and analyzes data about market and other external factors which affect the business.
        Strongly Disagree: -, Disagree: 5 (4.20), Neutral: 26 (21.85), Agree: 54 (45.38), Strongly Agree: 34 (28.57), Mean = 3.98, Std. Dev. = 0.82

SMP 16  The external/market analysis identifies key threats and key opportunities to the business.
        Strongly Disagree: -, Disagree: 5 (4.20), Neutral: 20 (16.81), Agree: 51 (42.86), Strongly Agree: 43 (36.13), Mean = 4.11, Std. Dev. = 0.83

Phase IV: Developing Strategic Plans (SMP 33 - SMP 39)

SMP 33  The organization uses the strategic (situational) diagnosis to formulate strategic plan options.
        Strongly Disagree: -, Disagree: 8 (6.72), Neutral: 25 (21.01), Agree: 54 (45.38), Strongly Agree: 32 (26.89), Mean = 3.92, Std. Dev. = 0.86

SMP 34  It considers business performance options, e.g., cost reduction, alternative suppliers, production improvements, etc.
        Strongly Disagree: 1 (0.84), Disagree: 5 (4.20), Neutral: 18 (15.13), Agree: 72 (60.50), Strongly Agree: 23 (19.33), Mean = 3.93, Std. Dev. = 0.76

Phase V: Managing Strategic Plan Implementation (SMP 40 - SMP 46)

SMP 40  The organization makes strategic decisions (implementation action plans) based upon the strategic plan.
        Strongly Disagree: 1 (0.84), Disagree: -, Neutral: 23 (19.33), Agree: 71 (59.66), Strongly Agree: 24 (20.17), Mean = 3.99, Std. Dev. = 0.65

SMP 41  The organization clearly assigns lead responsibility for action plan implementation to a person or, alternately, to a team.
        Strongly Disagree: 3 (2.52), Disagree: -, Neutral: 24 (20.17), Agree: 62 (52.10), Strongly Agree: 30 (25.21), Mean = 4.00, Std. Dev. = 0.74

Figures in parentheses depict the level of agreement in percentage.
Source: Primary Survey
Table 4: Sample Responses on Strategic Management Practices (N = 119)
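The reported statistics in Table 4 follow from the Likert counts themselves. As an illustrative sketch, the figures for SMP 1 (percentage of agreement = Agree plus Strongly Agree, with frequency-weighted mean and standard deviation on the five-point scale) can be derived as:

```python
# Sketch: deriving the reported statistics for one practice (SMP 1) from its
# Likert counts in Table 4, ordered Strongly Disagree .. Strongly Agree.
from math import sqrt

counts = [0, 1, 0, 57, 61]   # SMP 1 responses, N = 119
scores = [1, 2, 3, 4, 5]     # five-point scale
n = sum(counts)

# Percentage of agreement = Agree + Strongly Agree responses.
agreement_pct = 100 * (counts[3] + counts[4]) / n

# Frequency-weighted mean and standard deviation on the 5-point scale.
mean = sum(f * x for f, x in zip(counts, scores)) / n
sd = sqrt(sum(f * (x - mean) ** 2 for f, x in zip(counts, scores)) / n)

print(f"Agreement: {agreement_pct:.2f}%, Mean = {mean:.2f}, SD = {sd:.2f}")
# Matches the reported Mean = 4.50 and SD = 0.55 for SMP 1.
```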
5. MAJOR FINDINGS OF THE STUDY
The analysis concerning strategic issues and practices under various phases of strategic decision-making in Indian corporate
sector has revealed the following major findings:
• Three of the top four issues, i.e. leadership, innovation and customer focus, ranked by the strategic managers are in
conformance with the issues identified by earlier studies of strategic issues by Lyles [6], Amitabh and Sahay [12] and Lu and
Chiang [13]. These results emphasize the importance of the leadership aspect for strategic managers. The present study,
therefore, confirms the results of previous studies.
• One fourth of the respondents have shown neutrality towards all the strategic management practices.
• Indian companies treat strategic planning as a high-priority function practiced through a systematic process. Suitable
resources are provided for the strategic planning activity, but only selected people at the top level take part in the strategic
planning process, which is in contrast with the basic premise of strategic management that emphasizes the involvement of
key people at all levels in strategic planning.
• Strategic managers have a positive outlook towards establishing strategic intent. Goal setting, which is the end activity of
strategic intent, is practiced at a good level. Performance measurement is done on the basis of comparison with quantifiable
and measurable goals. The practice of setting realistic goals, however, still has a long way to go; it is important to strike a
balance between realistic and challenging goals.
• In comparison to the internal environment, more emphasis is given to the analysis of the external environment by the Indian
corporate sector. Financial and legal factors are also of more concern in the analysis than demographic factors. The practice
of reviewing mission and goals after environmental analysis is prevalent in the Indian corporate sector.
• Indian companies develop strategic plans only after getting strategic cues from environmental scanning. Strategic plans are
not developed in haste, and due consideration is given to business performance, market penetration, organization and
management, and product/service enhancement options. Risk-return analysis is the topmost practice while developing
strategic plans.
• Strategic managers of Indian companies are adopting the strategic management practices related to managing strategic plan
implementation to a reasonable extent. A proper resource allocation is carried out for executing strategic plans. Due
consideration is also given to monitoring plans on a continuous basis, with scope for altering the strategy adopted. The
practice of rewarding the effort of the people involved in planning and implementing strategy, however, has been somewhat
neglected.
• Strategic management practices are adopted as a series of decisions in which a strategic decision in one phase is based upon
the findings of the previous phase; for example, the developing-strategic-plans phase draws on the strategic diagnosis phase.
• A downward trend persists in the effort of Indian companies in following strategic management practices: more stress is
given to the strategic planning phase than to the strategy execution phase. This result supports the findings of the study of
Dincer et al. [10].
• No variation was found across respondent profile elements such as age group, total work experience and education level in
following the strategic management practices. This finding contradicts the results of a similar study conducted by Hitt and
Tyler [14], which found that strategic decision models vary by industry and by the executive characteristics of age,
educational degree type, and amount and type of work experience.
CONCLUSION
The results of the survey show that, in general, strategic management practices are being followed to a good extent by
Indian companies. This is a positive sign for the sustainability of the Indian corporate sector. A serious effort is required by
organizations to strike a balance between effectively formulating and implementing their strategies.
REFERENCES
[1] Glueck, W.F., Kaufman, S.P., and Walleck, A.S., "The Four Phases of Strategic Management", Journal of Business
Strategy, Winter, pp. 9-21, (1982).
[2] Harrison, J.S., and St. John, C.H., "Strategic Management of Organizations and Stakeholders", SWC Publishing,
Cincinnati, Ohio, p. 4, (1998).
[3] Wheelen, T.L., and Hunger, J.D., "Concepts in Strategic Management and Business Policy", 9th Edition, Pearson
Education, Delhi, p. 272, (2005).
[4] Schraeder, M., "A Simplified Approach to Strategic Planning: Practical Considerations and an Illustrated Example",
Business Process Management Journal, Vol. 8, No. 1, pp. 8-18, (2002).
[5] Sharma, R.A., "Strategic Management in Indian Companies", 8M, pp. 17-24, (1994).
[6] Lyles, M.A., "A Research Agenda for Strategic Management in the 1990s", Journal of Management Studies, p. 30, (1990).
[7] Karger, D.W., and Malik, Z.A., "Long Range Planning and Organizational Performance", Long Range Planning, Vol. 8,
No. 6, pp. 60-64, (1975).
[8] Stagner, R., "Corporate Decision Making", Journal of Applied Psychology, Vol. 53, No. 1, pp. 1-13, (1969).
[9] Thune, S., and House, R., "Where Long Range Planning Pays Off", Business Horizons, August, pp. 81-87, (1970).
[10] Dincer, O., Tatoglu, E., and Glaister, K.W., "Strategic Planning Process: Evidence from Turkish Firms", Management
Research News, Vol. 29, No. 4, pp. 206-219, (2006).
[11] Adapted from Strategic Futures, 113 South Washington Street, Alexandria, VA 22314.
[12] Amitabh, M., and Sahay, A., "Strategic Thinking: Is Leadership the Missing Link? An Exploratory Study", MDI,
Gurgaon, (2005).
[13] Lu, Z., and Chiang, D., "Strategic Issues Faced by Ontario Hotels", International Journal of Contemporary Hospitality
Management, Vol. 15, No. 6, pp. 343-345, (2003).
[14] Hitt, M.A., and Tyler, B.B., "Strategic Decision Models: Integrating Different Perspectives", Strategic Management
Journal, Vol. 12, No. 5, pp. 327-351, July (1991).
HANDLING IMPRECISION IN SOFTWARE ENGINEERING
MEASUREMENTS USING FUZZY LOGIC
Dr. Pradeep Kumar Bhatia*
Professor, Department of Computer Science
Guru Jambheshwar University of Science & Technology, Hisar, Haryana, India
E-mail- [email protected]
Harish Kumar Mittal
Lecturer, Department of Information Technology
Vaish College of Engineering, Rohtak-124001, India
E-mail- [email protected]
Kevika Singla
Senior Software Engineer
Royal Bank of Scotland, Gurgaon-122001, India
E-mail: [email protected]
______________________________________________________________________________________________________________________________
ABSTRACT
The measurement of software is recognized as a fundamental topic in software engineering research and practice. Before initiating any project, a prior
estimate of the time, cost and manpower involved must be made to ensure the project's success. Software effort and quality estimation has become an
increasingly important field due to the increasingly pervasive role of software in today's world. Traditional estimation approaches can have serious
difficulties when used on software engineering data, which is usually scarce, incomplete, and imprecisely collected. As such uncertainties are best handled
using fuzzy logic, the emphasis here is on the quantitative estimation of various software attributes using fuzzy techniques. As a rule of thumb, wherever
decision-making or human communication is involved in the development process, fuzzy logic can be used to improve software development processes
and products.
Key words: Fuzzy Logic, Effort Estimation, Software Quality, Software Maintainability, Software Testing, Software Productivity.
__________________________________________________________________________________________________________________________
1. INTRODUCTION
Software metrics are measurements of the software development process and product that can be used to indicate the
performance of the software product and to build software quality models. Before initiating any project, a prior estimate of
the time, cost and manpower involved must be made to ensure the project's success. Software effort and quality estimation
has become an increasingly important field due to the increasingly pervasive role of software in today's world.
Software cost estimation is the process of predicting the amount of effort required to build a software system and its duration.
With the use of new technologies, the present cost estimation formulae are not giving good estimates. Moreover, the
estimated size of a project is in effect a fuzzy number, yet many of these formulae do not take this fuzziness into account.
In order to develop high quality, reliable software, various quality attributes need to be quantified. In the present study we
have focused mainly on software effort, quality and reliability assessment using fuzzy logic.
The paper is divided into four sections. Section 2 gives some basics of fuzzy logic along with a variety of criteria used to
validate the accuracy of measurement techniques. Section 3 discusses some prevalent techniques for software engineering
measurements using fuzzy logic. Section 4 concludes the paper and describes promising topics worthy of further research.
*Corresponding Author
2. RELATED TERMS
2.1 Fuzzy Logic
Fuzzy logic is a superset of conventional (Boolean) logic that has been extended to handle the concept of partial truth, i.e.,
truth values between completely true and completely false. Fuzzy logic is a methodology for solving problems which are too
complex to be understood quantitatively. It is based on fuzzy set theory, introduced in 1965 by Prof. Zadeh in the paper
Fuzzy Sets [ZADE65]. The use of fuzzy sets in logical expressions is known as fuzzy logic. A fuzzy set is characterized by a
membership function, which associates with each point in the fuzzy set a real number in the interval [0, 1], called the degree
or grade of membership.
Fuzzy logic systems (FLSs) are one of the main developments and successes of fuzzy sets and fuzzy logic. An FLS is a
rule-based system that implements a nonlinear mapping between its inputs and outputs. The fuzzy logic process consists of
the following steps:
• Input as a crisp number
• Fuzzification
• Fuzzy inference (rule evaluation)
• Defuzzification
• Crisp output
Fig. 1: Fuzzy Logic Process
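As a rough illustration of these steps, the following sketch fuzzifies a crisp input, evaluates two rules, and defuzzifies by a weighted average to produce a crisp output. The linguistic terms ("small"/"large" size), their boundaries, and the rule outputs are invented for illustration and do not come from any model in this paper.

```python
# Minimal sketch of the crisp-in / crisp-out pipeline in Fig. 1.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate(size):
    # Fuzzification: degree to which 'size' is small or large (hypothetical terms).
    mu_small = tri(size, 0, 10, 30)
    mu_large = tri(size, 10, 30, 60)
    # Rule evaluation: IF size is small THEN output ~ 20; IF large THEN ~ 80.
    rules = [(mu_small, 20.0), (mu_large, 80.0)]
    # Defuzzification: weighted average of rule outputs gives a crisp number.
    total = sum(mu for mu, _ in rules)
    return sum(mu * out for mu, out in rules) / total if total else 0.0

print(estimate(10))   # fully 'small' -> 20.0
print(estimate(20))   # halfway between the terms -> 50.0
```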
2.2 Fuzzy Number
Fuzzy numbers are special convex and normal fuzzy sets, usually with a single modal value, representing uncertain quantitative
information. A fuzzy number is a quantity whose value is imprecise, rather than exact as in the case of ordinary single-valued
numbers. Any fuzzy number can be thought of as a function, called its membership function, whose domain is specified, usually
as the set of real numbers, and whose range is the closed interval [0, 1]. Each value of the domain is assigned a degree of
membership, where 0 represents the smallest possible degree and 1 the largest.
In many respects fuzzy numbers depict the physical world more realistically than single-valued numbers. Suppose that we are
driving along a highway where the speed limit is 80 km/hr. We try to hold the speed at exactly 80 km/hr, but our car lacks
cruise control, so the speed varies from moment to moment. If we note the instantaneous speed over a period of several
minutes and then plot the result in rectangular coordinates, we may get a curve that looks like one of the curves shown in
Figure 2. However, there is no restriction on the shape of the curve. The curves in Figure 2 show a triangular fuzzy number, a
trapezoidal fuzzy number, and a bell-shaped fuzzy number.
[Figure: triangular, trapezoidal and bell-shaped membership functions µ(x) plotted against x]
Fig. 2: Membership Functions
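The three shapes in Fig. 2 can be written down directly as functions. A minimal sketch follows; the parameter values (feet, shoulders, center, width) are illustrative only, and the bell curve is modeled as a Gaussian:

```python
# Sketch of the three membership-function shapes in Fig. 2.
from math import exp

def triangular(x, a, b, c):
    """Rises linearly from foot a to peak b, falls linearly to foot c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    """Flat top of full membership between shoulders b and c."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def bell(x, center, width):
    """Gaussian-style bell curve centered at 'center'."""
    return exp(-((x - center) ** 2) / (2 * width ** 2))

# Degrees of membership of a speed of 78 km/hr in "about 80 km/hr":
print(triangular(78, 70, 80, 90))      # 0.8
print(trapezoidal(78, 70, 75, 85, 90)) # 1.0
```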
2.3 Criteria for validation of estimation models
The following criteria are frequently used by researchers for the validation of estimation models:

Variance Accounted For (VAF)
A model which gives a higher VAF is better than one which gives a lower VAF:

    VAF (%) = [1 - var(E - Ê) / var(E)] * 100                    (0.1)

where E = measured value, Ê = estimated value, f = frequency, and the frequency-weighted variance is

    var(x) = Σ f (x - x̄)² / Σ f                                  (0.2)

where x̄ = mean of x.

Mean Absolute Relative Error (MARE)

    MARE (%) = [Σ f (R_E) / Σ f] * 100                           (0.3)

where the absolute relative error is

    R_E = |E - Ê| / E                                            (0.4)

A model which gives a lower Mean Absolute Relative Error is better than one which gives a higher Mean Absolute Relative
Error.

Variance Absolute Relative Error

    var R_E (%) = [Σ f (R_E - mean R_E)² / Σ f] * 100            (0.5)

A model which gives a lower Variance Absolute Relative Error is better than one which gives a higher Variance Absolute
Relative Error.

Root Mean Square Error (RMSE)
RMSE is a frequently used measure of the differences between the values predicted by a model or estimator and the values
actually observed from the thing being modeled or estimated. It is the square root of the mean square error:

    RMSE = sqrt[ (1/n) Σ_{i=1..n} (E_i - Ê_i)² ]                 (0.6)

Prediction at level n, pred(n)
Prediction at level n is defined as the percentage of projects that have an absolute relative error less than n [EMIL05]. A model
which gives a higher pred(n) is better than one which gives a lower pred(n).
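These criteria are straightforward to compute. The sketch below implements them for the common unweighted case (all frequencies f = 1); the sample effort values are invented purely for illustration:

```python
# Sketch implementing the validation criteria above (equations 0.1-0.6),
# assuming equal frequencies (f = 1 for every observation).
from math import sqrt

def var(xs):
    """Population variance, eq. (0.2) with f = 1."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def vaf(measured, estimated):
    """Variance Accounted For, eq. (0.1): higher is better."""
    resid = [e - eh for e, eh in zip(measured, estimated)]
    return (1 - var(resid) / var(measured)) * 100

def mare(measured, estimated):
    """Mean Absolute Relative Error, eqs. (0.3)-(0.4): lower is better."""
    re = [abs(e - eh) / e for e, eh in zip(measured, estimated)]
    return sum(re) / len(re) * 100

def rmse(measured, estimated):
    """Root Mean Square Error, eq. (0.6)."""
    return sqrt(sum((e - eh) ** 2 for e, eh in zip(measured, estimated)) / len(measured))

def pred(measured, estimated, n=0.25):
    """pred(n): percentage of projects with absolute relative error below n."""
    re = [abs(e - eh) / e for e, eh in zip(measured, estimated)]
    return 100 * sum(1 for r in re if r < n) / len(re)

actual = [10.0, 20.0, 40.0]       # measured effort values (illustrative)
predicted = [12.0, 18.0, 41.0]    # model estimates (illustrative)
print(rmse(actual, predicted), mare(actual, predicted), pred(actual, predicted))
```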
3. FUZZY LOGIC IN SOFTWARE ENGINEERING MEASUREMENTS
3.1 Fuzzy logic in Software Effort and Cost Estimation
Effective estimation of effort is the most challenging activity in software development. Software effort estimation is not an
exact science; the effort estimation process involves a series of systematic steps that provide an estimate with acceptable risk.
Various models have been derived by studying large numbers of completed software projects from various organizations and
applications to explore how project size maps into project effort and project cost.
• Chuk Yau and Raymond H.L. Tsoi, in 1994 [CHUK94], introduced a Fuzzified Function Point Analysis (FFPA) model using
TFNs (triangular fuzzy numbers) to help software size estimators express their judgment. Through a case study of in-house
software, the paper presents the experience of using FFPA to estimate software size and compares it with conventional FPA.
• Ryder, J., in 1998 [RYDE98], investigated the application of fuzzy modeling techniques to two of the most widely used
software effort prediction models: the Constructive Cost Model (COCOMO) and the Function Points model.
• W. Pedrycz and others [PEDR99] found that the concept of information granularity and fuzzy sets, in particular, plays an
important role in making software cost estimation models more user-friendly.
• Ali Idri, Alain Abran and Laila Kjiri, in 2000 [IDRI00], proposed the use of fuzzy sets rather than classical intervals in
the COCOMO’81 model. For each cost driver and its associated linguistic values, they defined the corresponding fuzzy
sets using trapezoidal-shaped membership functions.
• Musilek, P., Pedrycz, W. and others [MUSI00] fuzzified the basic COCOMO model at two different levels of detail,
proposing the f-COCOMO model using fuzzy sets. They claim that the fuzzy set methodology giving rise to f-COCOMO is
sufficiently general to be applied to other software cost estimation models such as the function point method.
• Nonika Bajaj, Alok Tyagi and Rakesh Aggarwal, in 2006 [BAJA06], discussed the bottom up approach of cost
estimation using fuzzy logic. Trapezoidal fuzzy numbers were used to represent various linguistic terms of bottom-up
estimation.
• Harish Kumar Mittal and Pradeep Kumar Bhatia, in 2007 [HARI07], proposed two models, viz. Model 1, Effort
Estimation using Fuzzy Technique (without methodology), and Model 2, Effort Estimation using Fuzzy Technique (with
methodology). Rather than using a single number, they took software size (KLOC) as a triangular fuzzy number. The
estimated effort can be optimized for any application type by varying the arbitrary constants of these models. The developed
models were tested on ten NASA software projects on the basis of four criteria for the assessment of software cost
estimation models. A comparison with existing leading models found that the developed models provide better estimates.
• Harish Kumar Mittal and Pradeep Kumar Bhatia [MITT07], in 2007, introduced a rectified model based on function point
analysis for effort estimation. Fuzzy function points are first evaluated, and the result is then defuzzified to get the crisp
function points and hence the estimate in person-hours. The developed model was tested on published experimental data,
and its results were compared with the conventional FP estimate.
• Avner Engel and Mark Last, in 2007 [ENGE07], modelled software testing costs and risks using a fuzzy logic methodology.
They estimated the quality costs occurring during the development of software for an avionics suite in a fighter aircraft and
demonstrated that applying the fuzzy logic methodology yields results comparable to estimations based on models using the
probabilistic paradigm. Quality costs are defined as money spent on verification, validation and testing plus all costs
stemming from software and system failures. They also compared actual quality cost measurements with the model's output
using the data of [ENGE03].
• Alaa F. Sheta and others, in 2008 [ALAA08], presented a fuzzy logic based model for effort estimation using the LOC
approach. They used the PSO (particle swarm optimization) method for tuning the COCOMO parameters. The NASA SEL
dataset was used for validation of their model, with RMSSE as the evaluation criterion. Part of the data was used for tuning
and the rest for implementation.
• Harish Mittal and Pradeep Bhatia [MITT08] introduced a fuzzy logic based approach to precisely quantify the cost of
software testing and its risks. Most VVT (verification, validation and testing) cost data and relevant parameters are not
available in precise form; fuzzy modelling therefore has the distinct advantage of deriving realistic information from
imprecise knowledge. The proposed study gives better results than some earlier models, and its calculation process is
simpler. The fuzzy logic methodology used in the proposed study is sufficiently general to be applied to other areas of
quantitative software engineering.
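To make the idea behind these surveyed models concrete, the sketch below pushes a triangular fuzzy size (l, m, u) through the basic COCOMO effort equation. The coefficients are the standard basic-COCOMO organic-mode constants, but treating size this way is only an illustration in the spirit of the fuzzified models above, not any particular author's method:

```python
# Sketch: effort estimation with software size as a triangular fuzzy number
# (l, m, u) KLOC instead of a single crisp value.

def cocomo_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO effort in person-months: E = a * KLOC^b (organic mode)."""
    return a * kloc ** b

def fuzzy_effort(l, m, u):
    """Propagate a triangular fuzzy size through the (monotonic) effort model."""
    return (cocomo_effort(l), cocomo_effort(m), cocomo_effort(u))

def defuzzify(l, m, u):
    """Centroid of a triangular fuzzy number: (l + m + u) / 3."""
    return (l + m + u) / 3

# Size known only roughly: somewhere around 20 KLOC (illustrative bounds).
low, mode, high = fuzzy_effort(18.0, 20.0, 24.0)
print(f"Effort between {low:.1f} and {high:.1f} PM, "
      f"crisp estimate {defuzzify(low, mode, high):.1f} PM")
```

Because the effort model is monotonic in size, evaluating it at the three vertices of the triangular number yields a triangular effort estimate, which is then defuzzified to a single crisp figure.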
3.2 Fuzzy Logic in Software Quality and Reliability Estimation
Quality, simplistically, means that a product should meet its specification. In order to develop a software quality prediction
model, one must first identify the factors that strongly influence software quality and the number of residual errors. It is
extremely difficult to accurately identify the relevant quality factors. Furthermore, the degree of influence is imprecise in
nature. Due to its natural ability to model the imprecise and fuzzy aspects of data and rules, fuzzy logic is an attractive
alternative in such situations.
• Houari A. Sahraoui and Others, in 2001 [SAHR01], provided an approach for Quality estimation using Fuzzy Threshold
Values. They used a fuzzy logic based approach to investigate the stability of a reusable class library interface, using
structural metrics as stability indicators.
• Zhiwei Xu, in his Ph.D. thesis at Florida Atlantic University in 2001 [ZHIW01], studied the use of fuzzy logic in software reliability engineering, applying fuzzy expert systems to early risk assessment, software quality models and software cost estimation. He used commercial software systems and the COCOMO database to demonstrate the usefulness of the concepts.
• Sun Sup So and others [SUN02], in 2002, proposed a fuzzy logic based approach to predicting error-prone modules using inspection data. An empirical evaluation of the proposed system was carried out on published inspection data. They claim that this approach offers several advantages over others. First, the interpretation of much inspection data is fuzzy in nature, and this model provides a natural mechanism for modelling such data; the rules used in determining error-prone modules are fuzzy, too. Second, a prototype system can be developed without extensive empirical data, and the system's performance can be continuously tuned as more inspection data become available. Finally, using the system imposes no extra cost on the development team, since the analysis is based on inspection data and is automated.
• K. K. Aggarwal and Yogesh Singh [AGGA05], in 2005, explored the following metrics for estimating software maintainability using fuzzy logic:
- Average number of live variables (ALV)
- Average life span of variables (ALS)
- Average cyclomatic complexity (ACC)
- Comment ratio (CR)
They used TFNs, with the MATLAB Fuzzy Logic Toolbox and a Mamdani inference system. Empirical results show that the integrated maintainability measure obtained from this model correlates strongly with maintenance time.
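A triangular fuzzy number (TFN) is fully described by three breakpoints, so the membership computation underlying such models is short. The sketch below is a generic TFN membership function in Python (rather than the MATLAB toolbox); the breakpoints chosen for the ACC metric are hypothetical illustrations, not the cut-offs used in [AGGA05]:

```python
def tri_mf(x, a, b, c):
    """Membership of x in the triangular fuzzy number (a, b, c):
    0 outside the interval (a, c), rising linearly to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy sets for average cyclomatic complexity (ACC).
acc = 5.0
memberships = {
    "low": tri_mf(acc, 0, 2, 6),
    "medium": tri_mf(acc, 2, 6, 10),
    "high": tri_mf(acc, 8, 12, 20),
}
```

A Mamdani system then combines such memberships for several input metrics through fuzzy rules and defuzzifies the aggregated output into a single maintainability score.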
• N. Raj Kiran and V. Ravi [KIRA07], in 2007, developed models to accurately forecast software reliability. The ensembles presented comprise various statistical techniques (multiple linear regression and multivariate adaptive regression splines) and intelligent techniques (back-propagation trained neural networks, a dynamic evolving neuro-fuzzy inference system and TreeNet). Three linear ensembles and one non-linear ensemble were designed and tested. Based on experiments performed on software reliability data obtained from the literature, they observed that the non-linear ensemble outperformed all the other ensembles as well as the constituent statistical and intelligent techniques.
• Harish Kumar Mittal and Pradeep Kumar Bhatia [HARI08], in 2008, presented a fuzzy logic based approach to precisely quantify software quality. Software is assigned one of ten quality grades on the basis of two metrics, inspection rate per hour and error density, which makes quality prediction straightforward. Modules with a low grade are considered the most error prone, while those with a higher quality grade are considered satisfactory. Triangular fuzzy numbers were used for inspection rate and errors/kLOC. The fuzzy logic methodology used in the study is sufficiently general and can be applied to other areas of quantitative software engineering.
• Harish Kumar Mittal and Pradeep Kumar Bhatia [MITT09], in 2009, proposed a model for predicting the maintainability of software based on the combined effect of the various factors that affect it. The model was validated on several software projects to check the usefulness of the approach. Low maintainability values indicate a need for improvement in the software, so that maintenance costs can be reduced. The fuzzy logic methodology used in the study is sufficiently general and can easily be applied to other areas of quantitative software engineering.
• Harish Kumar Mittal, Pradeep Kumar Bhatia and J. P. Mittal [HARI09], in 2009, proposed a fuzzy logic based approach to precisely quantify software productivity, which makes the estimation of productivity straightforward. Triangular fuzzy numbers were used for cyclomatic complexity density. The fuzzy logic methodology used in the study is sufficiently general and can be applied to other areas of quantitative software engineering. The model is evaluated on published data for a small pilot project using actual maintenance data; however, the technique is quite general and may be tested on medium and large projects.
CONCLUSION
Conventional estimation approaches can have serious difficulties when used on software engineering data that is scarce, incomplete and imprecisely collected. Even though effort has been made to propose fuzzy based models, there is still vast scope to find better ones; better cost estimation and techniques to increase software reliability are always needed. The large set of alternatives makes it possible to identify the best software measurement approach using the validation criteria discussed in the paper.
REFERENCES
[1] [AGGA05] Aggarwal, K. K., Singh, Y. & Puri, M., "Measurement of Software Maintainability Using a Fuzzy Model", Journal of Computer Science, U.S.A., 1(4), pp. 538-542, (2005).
[2] [ALAA08] Alaa F. Sheta, David Rine, Aladdin Ayesh, "Development of Software Effort and Schedule Estimation Models Using Soft Computing Techniques", IEEE Congress on Evolutionary Computation, pp. 1283-1289, (2008).
[3] [BAJA06] Nonika Bajaj, Alok Tyagi, Rakesh Agarwal, "Software Estimation: A Fuzzy Approach", ACM SIGSOFT Software Engineering Notes, 31(3), pp. 1-5, (2006).
[4] [CHUK94] Chuk Yau, Raymond H. L. Tsoi, "Assessing the Fuzziness of General System Characteristics in Estimating Software Size", IEEE, pp. 189-193, (1994).
[5] [EMIL05] E. Mendes, S. Counsell, N. Mosley, "Towards a Taxonomy of Hypermedia and Web Application Size Metrics", Proceedings of the International Conference on Web Engineering (ICWE 2005), pp. 110-123, (2005).
[6] [ENGE07] Avner Engel, Mark Last, "Modeling Software Testing Costs and Risks Using Fuzzy Logic Paradigm", Journal of Systems and Software, 80(6), pp. 817-835, (2007).
[7] [HARI07] Harish Mittal, Pradeep Bhatia, "Optimization Criterion for Effort Estimation using Fuzzy Technique", CLEI Electronic Journal, Vol. 10, No. 1, Paper 2, (2007).
[8] [HARI08] Harish Mittal, et al., "Software Quality Assessment Based on Fuzzy Logic Technique", International Journal of Software Computing Applications (IJSCA), Issue 3, pp. 105-112, ISSN: 1453-2277, (2008).
[9] [HARI09] Harish Mittal, Pradeep Bhatia, J. P. Mittal, "Software Maintenance Productivity Assessment using Fuzzy Logic", ACM SIGSOFT Software Engineering Notes (accepted).
[10] [IDRI00] Ali Idri, Alain Abran, Laila Kjiri, "COCOMO Cost Model Using Fuzzy Logic", 7th International Conference on Fuzzy Theory & Technology, Atlantic City, New Jersey, March (2000).
[11] [KIRA07] Raj Kiran, N., Ravi, V., "Software Reliability Prediction by Soft Computing Techniques", Journal of Systems and Software, (2007).
[12] [MITT07] Harish Mittal, Pradeep Bhatia, "A Comparative Study of Conventional Effort Estimation and Fuzzy Effort Estimation Based on Triangular Fuzzy Numbers", International Journal of Computer Science & Security, Vol. 1, Issue 4, pp. 36-47, ISSN: 1985-1533, (2007).
[13] [MITT08] Harish Mittal, Pradeep Bhatia, "Estimation of Software Testing Costs and Risks", Proc. of the International Conference on Software Engineering Research & Practice (SERP'08), WORLDCOMP'08, Las Vegas, Nevada, USA, (2008).
[14] [MITT09] Harish Mittal, Pradeep Bhatia, "Software Maintainability Assessment Based on Fuzzy Logic Technique", ACM SIGSOFT Software Engineering Notes, Volume 34, Issue 3, ISSN: 0163-5948, May (2009).
[15] [MUSI00] Musílek, P., Pedrycz, W., Succi, G. & Reformat, M., "Software Cost Estimation with Fuzzy Models", ACM SIGAPP Applied Computing Review, 8(2), pp. 24-29, (2000).
[16] [PEDR99] Pedrycz, W., Peters, J. F., Ramanna, S., "A Fuzzy Set Approach to Cost Estimation of Software Projects", Proceedings of the 1999 IEEE Canadian Conference on Electrical and Computer Engineering, Shaw Conference Center, Edmonton, Alberta, Canada, (1999).
[17] [PRES05] Pressman, Roger S., "Software Engineering: A Practitioner's Approach", McGraw-Hill International Edition, Sixth Edition, (2005).
[18] [RYDE98] Ryder, J., "Fuzzy Modelling of Software Effort Prediction", IEEE Information Technology Conference, pp. 53-56, Sep. (1998).
[19] [SAHR01] Houari A. Sahraoui, Mounir A. Boukadoum, Hakim Lounis, "Building Quality Estimation Models with Fuzzy Threshold Values", L'Objet, 17(4), pp. 535-554, (2001).
[20] [SUN02] So, S. S., Cha, S. D., Kwon, Y. R., "Empirical Evaluation of a Fuzzy Logic-based Software Quality Prediction Model", Fuzzy Sets and Systems, 127(2), pp. 199-208, April (2002).
[21] [ZHIW01] Zhiwei Xu, Taghi M. Khoshgoftaar, "Fuzzy Logic Techniques for Software Reliability Engineering", Florida Atlantic University, Boca Raton, FL, (2001).
[22] [ZADE65] Zadeh, L. A., "Fuzzy Sets", Information and Control, 8, pp. 338-353, (1965).
ISSN No.:0975-3389
HIGHLY EFFICIENT MOTION VECTOR ESTIMATION FOR
VIDEO COMPRESSION USING BACTERIAL FORAGING
OPTIMIZATION ALGORITHM
Dr. Navin Rajpal*
Professor, School of Information Technology
Guru Gobind Singh Indraprastha University, New Delhi-110006, India
E-mail: [email protected]
Deepak Gambhir
Ph.D Research Scholar, School of Information Technology
Guru Gobind Singh Indraprastha University, New Delhi-110006, India
E-mail: [email protected]
__________________________________________________________________________________________________________________________
ABSTRACT
We propose an efficient video compression technique based on the bio-inspired approach of Bacterial Foraging Optimization, yielding a more efficient and less complex search algorithm. Block matching techniques are popular motion estimation methods used to obtain the motion-compensated prediction: each frame is split into macro blocks, and the motion vector of each macro block is obtained using a block matching (motion estimation) algorithm. To obtain the motion vector of each macro block, we propose the use of the Bacterial Foraging Algorithm, which models bacterial behavior: chemotaxis, swarming, reproduction, elimination and dispersal. Experimental results show an improvement in efficiency over existing algorithms such as Full Search, Three Step Search, New Three Step Search, Efficient Three Step Search, Novel Four Step Search, Simple and Efficient Search, Diamond Search and Adaptive Rood Pattern Search.
Keywords: Bacterial Foraging Optimization Algorithm (BFOA), Motion Compensation, Motion Vectors, Encoder, Decoder, Reconstruction.
_____________________________________________________________________________________________________________________________
1. INTRODUCTION
Video compression is vital for efficient storage and transmission of digital videos. The hybrid video coding techniques [9]
based on predictive and transform coding are adopted by many video coding standards such as ISO MPEG-1/2 and ITU-T
H.261/263, owing to their high compression efficiency. Motion compensation is a predictive technique that exploits the
temporal redundancy between successive frames of a video sequence.
The computational complexity of any general block matching [10] motion estimation technique [12] [13] [15] is determined by three factors:
• Search algorithm
• Cost function (block matching criterion)
• Quality (PSNR)
We can reduce the complexity of the motion estimation algorithms by reducing the complexity of the applied search
algorithm and/or the complexity of the selected cost function.
Optimization problems [4] are made up of three basic ingredients:
1. An objective function which we want to minimize or maximize.
2. A set of unknowns or variables which affect the value of the objective function.
3. A set of constraints that allow the unknowns to take on certain values but exclude others.
The optimization problem is then:
Find values of the variables that minimize or maximize the objective function while satisfying the constraints.
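As a toy instance of these three ingredients, the sketch below minimizes a quadratic objective over two integer unknowns constrained to a box, by exhaustive search; the objective and bounds are arbitrary illustrative choices, not anything from the paper:

```python
# Objective: f(x, y) = (x - 3)^2 + (y + 1)^2   (what we minimize)
# Unknowns:  integer x, y                      (the variables)
# Constraints: -5 <= x <= 5 and -5 <= y <= 5   (the allowed values)
def objective(x, y):
    return (x - 3) ** 2 + (y + 1) ** 2

best = min(
    ((x, y) for x in range(-5, 6) for y in range(-5, 6)),
    key=lambda v: objective(*v),
)
```

Exhaustive search is exactly what the Full Search block matcher does over its search window; the BFOA replaces it with a guided stochastic search of the same kind of problem.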
This work is arranged into six sections. Section II discusses foraging theory in general; Section III explains the Bacterial Foraging Optimization Algorithm; Section IV describes the proposed method; Section V shows the results of using BFO in video compression; and finally Section VI suggests applications of BFO in other areas.
*Corresponding Author
2. FORAGING
2.1 Elements of Foraging Theory
Foraging theory [2] is based on the assumption that animals search for and obtain nutrients in a way that maximizes their energy intake E per unit time T spent foraging. Hence, they try to maximize a function like E/T (or to maximize their long-term average rate of energy intake).
2.2 E. coli Bacterial Swarm Foraging for Optimization
Suppose J(θ), θ ∈ ℜp, is to be minimized, where neither measurements nor an analytical description of the gradient ∇J(θ) are available. Here, ideas from bacterial foraging are used to solve this non-gradient optimization problem. First, suppose that θ is the position of a bacterium and J(θ) represents the combined effects of attractants and repellants from the environment, with, for example, J(θ) < 0, J(θ) = 0 and J(θ) > 0 representing that the bacterium at location θ is in a nutrient-rich environment (where its behavior seeks increasingly favorable surroundings), a neutral environment (where its behavior is similar to previous searches), or a noxious environment (which its behavior seeks to escape), respectively. Basically, chemotaxis is a foraging behavior that implements optimization: bacteria try to climb up the nutrient concentration (find lower and lower values of J(θ)), avoid noxious substances, and search for ways out of neutral media (avoid positions where J(θ) ≥ 0). That is, it implements a type of biased random walk.
2.3 Chemotaxis, Swarming, Reproduction, Elimination and Dispersal
Define a chemotactic step to be a tumble followed by a tumble, or a tumble followed by a run. Let j be the index of the chemotactic step, k the index of the reproduction step, and l the index of the elimination-dispersal event. Let
P(j, k, l) = {θi(j, k, l) | i = 1, 2, ..., S}
represent the position of each member of the population of S bacteria at the jth chemotactic step, kth reproduction step and lth elimination-dispersal event. Here, let J(i, j, k, l) denote the cost at the location of the ith bacterium θi(j, k, l) ∈ ℜp. Let Nc be the length of the lifetime of the bacteria, as measured by the number of chemotactic steps they take during their life. Let C(i) > 0, i = 1, 2, ..., S, denote a basic chemotactic step size that we will use to define the lengths of steps during runs. To
represent a tumble, a unit length random direction, say φ (j), is generated; this will be used to define the direction of
movement after a tumble. In particular, we let
θi (j+1, k, l) = θi (j, k, l) + C(i) φ (j)
so that C(i) is the size of the step taken in the random direction specified by the tumble. If at θi(j + 1, k, l) the cost J(i, j + 1, k, l) is better (lower) than at θi(j, k, l), then another step of size C(i) in the same direction will be taken; again, if that step results in a position with a better cost value than the previous step, another step is taken. This swim continues as long as it reduces the cost, but only up to a maximum number of steps, Ns. This represents that the cell will tend to keep moving if it is headed in the direction of increasingly favorable environments.
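The tumble-then-swim behavior described above can be sketched in a few lines. This is an illustrative fragment, not the authors' implementation: the cost function J, step size C and swim limit Ns are parameters, and the swim logic is simplified to "keep stepping while the cost improves":

```python
import random

def tumble_direction(p):
    """Unit-length random direction in R^p (the phi(j) of the text)."""
    d = [random.uniform(-1, 1) for _ in range(p)]
    norm = sum(x * x for x in d) ** 0.5 or 1.0
    return [x / norm for x in d]

def chemotactic_step(theta, J, C, Ns):
    """One tumble followed by a swim of at most Ns further steps (a sketch)."""
    phi = tumble_direction(len(theta))
    # Move: take one step of size C in the tumble direction.
    theta = [t + C * d for t, d in zip(theta, phi)]
    cost = J(theta)
    # Swim: keep stepping in the same direction while the cost improves.
    for _ in range(Ns):
        candidate = [t + C * d for t, d in zip(theta, phi)]
        cand_cost = J(candidate)
        if cand_cost < cost:
            theta, cost = candidate, cand_cost
        else:
            break
    return theta, cost
```

Because the swim steps are collinear, each chemotactic step moves the bacterium by an integer multiple of C (between one and Ns + 1 steps).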
The above discussion is for the case where no cell-released attractants are used to signal swarming to other cells. Here, we also model cell-to-cell signalling via an attractant, represented by Jcc_i(θ, θi(j, k, l)), i = 1, 2, ..., S, for the ith bacterium. Let dattract = 0.1 be the depth of the attractant released by the cell (a quantification of how much attractant is released) and Wattract = 0.2 be a measure of the width of the attractant signal (a quantification of the diffusion rate of the chemical). The cell also repels a nearby cell, in the sense that it consumes nearby nutrients and it is not physically possible for two cells to occupy the same location. To model this, we let hrepellant = dattract be the height of the repellant effect (the magnitude of its effect) and Wrepellant = 10 be a measure of the width of the repellant. The values of these parameters are simply chosen to illustrate general bacterial behaviors, not to represent a particular bacterial chemical signalling scheme. For instance, the depth and width of the attractant are small relative to the nutrient concentrations. Let

Jcc(θ, P(j, k, l)) = ∑i=1..S Jcc_i(θ, θi(j, k, l))
                   = ∑i=1..S [ -dattract exp( -Wattract ∑m=1..p (θm - θim)^2 ) ]
                     + ∑i=1..S [ hrepellant exp( -Wrepellant ∑m=1..p (θm - θim)^2 ) ]
denote the combined cell-to-cell attraction and repelling effects, where θ = [θ1, ..., θp]T is a point on the optimization domain and θim is the mth component of the ith bacterium position θi. For the case of S = 2 and the above parameter values, the two sharp peaks of this function mark the cell locations, and as one moves radially away from a cell the function decreases and then increases (modelling the fact that cells far away will tend not to be attracted, whereas cells close by will tend to climb down the cell-to-cell nutrient gradient toward each other and hence try to swarm). Note that as each cell moves, so does its Jcc_i(θ, θi(j, k, l)) function; this represents that the cell releases chemicals as it moves. Due to the movements of all the cells, the Jcc(θ, P(j, k, l)) function is time-varying: if many cells come close together, there will be a large amount of attractant and hence an increasing likelihood that other cells will move toward the group. This produces the swarming effect. When we want to study swarming, the ith bacterium, i = 1, 2, ..., S, will hill-climb on
J(i, j, k, l) + Jcc(θ, P)
(rather than the J(i, j, k, l) defined above), so that the cells try to find nutrients, avoid noxious substances and, at the same time, move toward other cells, but not too close to them. The Jcc(θ, P) function dynamically deforms the search landscape as the cells move, representing the desire to swarm (i.e., we model the mechanisms of swarming as a minimization process).
After Nc chemotactic steps, a reproduction step is taken. Let Nre be the number of reproduction steps to be taken. For convenience, we assume that S is a positive even integer. Let
Sr = S/2
be the number of population members who have had sufficient nutrients so that they will reproduce (split in two) with no mutations. For reproduction, the population is sorted in order of ascending accumulated cost (a higher accumulated cost represents that a bacterium did not get as many nutrients during its lifetime of foraging, and hence is not as "healthy" and is unlikely to reproduce); the Sr least healthy bacteria then die, and the other Sr healthiest bacteria each split into two bacteria, which are placed at the same location. Other fractions or approaches could be used in place of Sr = S/2; this method rewards bacteria that have encountered a lot of nutrients and allows us to keep a constant population size, which is convenient in coding the algorithm.
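A minimal sketch of this reproduction step, assuming an even population size S and a precomputed list of accumulated costs, might look like:

```python
def reproduce(population, health):
    """Reproduction step sketch: the Sr = S/2 healthiest bacteria (lowest
    accumulated cost) each split in two; the least healthy half die."""
    S = len(population)          # assumed to be a positive even integer
    Sr = S // 2
    # Sort positions by ascending accumulated cost (healthiest first).
    ranked = [pos for _, pos in
              sorted(zip(health, population), key=lambda t: t[0])]
    survivors = ranked[:Sr]
    # Each survivor splits into two bacteria placed at the same location.
    return survivors + [list(pos) for pos in survivors]
```

The returned population has the same size S as the input, which is what makes the constant-population bookkeeping convenient.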
Let Ned be the number of elimination-dispersal events, and for each elimination-dispersal event each bacterium in the
population is subjected to elimination-dispersal with probability Ped. We assume that the frequency of chemotactic steps is
greater than the frequency of reproduction steps, which is in turn greater in frequency than elimination-dispersal events (e.g.,
a bacterium will take many chemotactic steps before reproduction, and several generations may take place before an
elimination-dispersal event).
Clearly, we are ignoring many characteristics of the actual biological optimization process in favor of simplicity and
capturing the gross characteristics of chemotactic hill-climbing and swarming. For instance, we ignore many characteristics
of the chemical medium and we assume that consumption does not affect the nutrient surface (e.g., while a bacterium is in a
nutrient-rich environment, we do not increase the value of J near where it has consumed nutrients), where clearly in nature
bacteria modify the nutrient concentrations via consumption. A tumble does not result in a perfectly random new direction
for movement; however, here we assume that it does. Brownian effects buffet the cell so that after moving a small distance, it
is within a pie-shaped region with its start point at the tip of the piece of pie. Basically, we assume that swims are straight,
whereas in nature they are not. Tumble and run lengths are exponentially distributed random variables, not constant, as we
assume. Run-length decisions are actually based on the past 4 s of concentrations, whereas here we assume that at each
tumble, older information about nutrient concentrations is lost. Although naturally asynchronous, we force synchronicity by
requiring, for instance, chemotactic steps of different bacteria to occur at the same time, all bacteria to reproduce at the same
time instant, and all bacteria that are subjected to elimination and dispersal to do so at the same time. We assume a constant
population size, even if there are many nutrients and generations. We assume that the cells respond to nutrients in the
environment in the same way that they respond to ones released by other cells for the purpose of signaling the desire to swarm.
Clearly, other choices for the criterion of which bacteria should split could be used (e.g., based only on the concentration at
the end of a cell’s lifetime, or on the quantity of noxious substances that were encountered). We are also ignoring conjugation
and other evolutionary characteristics. For instance, we assume that C(i), Ns, and Nc remain the same for each generation. In
nature it seems likely that these parameters could evolve for different environments to maximize population growth rates.
The intent here was simply to come up with a simple model that only represents certain aspects of the foraging behavior of
bacteria.
3. BACTERIAL FORAGING OPTIMIZATION ALGORITHM
For initialization, one must choose p, S, Nc, Ns, Nre, Ned, Ped and the C(i), i = 1, 2, ..., S [1]. If swarming is used, the parameters of the cell-to-cell attractant functions must also be chosen; here we use the parameters given above. Also, initial values for the θi, i = 1, 2, ..., S, must be chosen. Choosing these to be in areas where an optimum value is likely to exist is a good choice; alternatively, they may simply be distributed randomly across the domain of the optimization problem.
The algorithm that models bacterial population chemotaxis, swarming, reproduction, elimination, and dispersal is given here
(initially, j=k=l=0). For the algorithm, note that updates to θi result in updates to P (position of each member in the
population of the S bacteria). Clearly, a more sophisticated termination test can be added.
1) Elimination-dispersal loop: l = l + 1
2) Reproduction loop: k = k + 1
3) Chemotaxis loop: j = j + 1
a) For i = 1, 2,.., S take a chemotactic step for bacterium i as follows.
b) Compute J(i, j, k, l). Let J(i, j, k, l) = J(i, j, k, l) + Jcc(θi(j, k, l), P(j, k, l))
(i.e., add the cell-to-cell attractant effect to the nutrient concentration).
c) Let Jlast = J(i, j, k, l) to save this value since we may find a better cost via a run.
d) Tumble: generate a random vector ∆(i) ∈ ℜp with each element ∆m(i), m = 1, 2, ..., p, a random number on [-1, 1].
e) Move: let
θi(j + 1, k, l) = θi(j, k, l) + C(i) ∆(i) / √(∆T(i) ∆(i))
This results in a step of size C(i) in the direction of the tumble for bacterium i.
f) Compute J(i, j + 1, k, l), and then let J(i, j + 1, k, l) = J(i, j + 1, k, l) + Jcc(θi(j + 1, k, l), P(j + 1, k, l)).
g) Swim (note that we use an approximation since we decide swimming behavior of each cell as if the bacteria
numbered {1, 2, ……, i} have moved and {i + 1, i + 2, ……, S} have not; this is much simpler to simulate than
simultaneous decisions about swimming and tumbling by all bacteria at the same time) :
i) Let m = 0 (counter for swim length).
ii) While m < Ns (if we have not climbed down too long):
Let m = m + 1.
If J(i, j + 1, k, l) < Jlast (if doing better), let Jlast = J(i, j + 1, k, l) and let
θi(j + 1, k, l) = θi(j, k, l) + C(i) ∆(i) / √(∆T(i) ∆(i))
and use this θi(j + 1, k, l) to compute the new J(i, j + 1, k, l) as in f).
Else, let m = Ns. This is the end of the while statement.
h) Go to next bacterium (i + 1) if i ≠ S (i.e., go to b) to process the next bacterium).
4) If j < Nc, go to step 3. In this case, continue chemotaxis, since the life of the bacteria is not over.
5) Reproduction:
a) For the given k and l, and for each i = 1, 2, ..., S, let
Jhealth(i) = ∑j=1..Nc+1 J(i, j, k, l)
be the health of bacterium i (a measure of how many nutrients it got over its lifetime and how successful it was at avoiding noxious substances). Sort the bacteria and the chemotactic parameters C(i) in order of ascending cost Jhealth (higher cost means lower health).
b) The Sr bacteria with the highest Jhealth values die and the other Sr bacteria with the best values split (and the copies that
are made are placed at the same location as their parent).
6) If k < Nre, go to step 2. In this case, we have not reached the specified number of reproduction steps, so we start the next generation in the chemotactic loop.
7) Elimination-dispersal: for i = 1, 2, ..., S, with probability Ped, eliminate and disperse each bacterium (this keeps the number of bacteria in the population constant). To do this, if a bacterium is eliminated, simply disperse a new one to a random location on the optimization domain.
8) If l < Ned, then go to step 1; otherwise end.
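Steps 1)-8) can be condensed into a compact sketch. The version below is an illustration of the control flow, not the authors' implementation: it omits the swarming term Jcc, uses a single step size C for all bacteria, and simplifies the swim bookkeeping; all default parameter values are arbitrary:

```python
import random

def bfo_minimize(J, p, S=10, Nc=20, Ns=4, Nre=4, Ned=2, Ped=0.25, C=0.1,
                 bounds=(-5.0, 5.0)):
    """Minimal BFOA sketch (swarming term omitted) following steps 1)-8)."""
    lo, hi = bounds
    rand_pos = lambda: [random.uniform(lo, hi) for _ in range(p)]
    theta = [rand_pos() for _ in range(S)]
    best, best_cost = None, float("inf")
    for _l in range(Ned):                      # 1) elimination-dispersal loop
        for _k in range(Nre):                  # 2) reproduction loop
            health = [0.0] * S
            for _j in range(Nc):               # 3) chemotaxis loop
                for i in range(S):
                    # d) Tumble: random direction Delta(i)/sqrt(Delta^T Delta)
                    d = [random.uniform(-1, 1) for _ in range(p)]
                    n = sum(x * x for x in d) ** 0.5 or 1.0
                    # e) Move one step of size C in that direction.
                    theta[i] = [t + C * x / n for t, x in zip(theta[i], d)]
                    cost = J(theta[i])
                    # g) Swim: up to Ns more steps while the cost improves.
                    for _m in range(Ns):
                        cand = [t + C * x / n for t, x in zip(theta[i], d)]
                        cand_cost = J(cand)
                        if cand_cost < cost:
                            theta[i], cost = cand, cand_cost
                        else:
                            break
                    health[i] += cost          # accumulated cost = "health"
                    if cost < best_cost:
                        best, best_cost = list(theta[i]), cost
            # 5) Reproduction: the healthiest half split, the rest die.
            order = sorted(range(S), key=lambda i: health[i])
            theta = [list(theta[order[i % (S // 2)]]) for i in range(S)]
        # 7) Elimination-dispersal with probability Ped per bacterium.
        theta = [rand_pos() if random.random() < Ped else t for t in theta]
    return best, best_cost
```

On a smooth test function such as the 2-D sphere, this sketch reliably descends well below the cost of a random starting point within the default budget.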
4. PROPOSED METHOD
The objective here is to apply the above Bacterial Foraging Optimization Algorithm to video sequences and to compare the results with the existing Full Search method.
Flow Diagram:
Fig 2: BFO oriented Video Compression Process Flow
Here the difference between the current frame and the previous frame is used to generate the motion vectors using the BFOA. These motion vectors are used to generate the motion-compensated frames, which are then encoded by an image encoder and are ready to transmit.
At the decoding side, the BFO motion vectors and the reference (previous) frame are used to generate the predicted image, to which the decoded motion vectors are added to obtain the resulting frame.
Fig. 3: Search window For Block Matching Algorithms [10]
Algorithms have been developed to ensure that as few candidate blocks as possible are examined while maintaining acceptable quality. These are called block matching algorithms, or BMAs.
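A minimal full-search BMA with the sum-of-absolute-differences (SAD) cost function might look like the sketch below; the frame representation (lists of pixel rows), block size N and window radius w are illustrative assumptions, not the paper's settings:

```python
def sad(block_a, block_b):
    """Sum of absolute differences, a common block-matching cost function."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
                          for a, b in zip(row_a, row_b))

def full_search(cur, ref, bx, by, N=4, w=2):
    """Exhaustively search a +/-w window in the reference frame for the best
    match to the NxN block of the current frame at (bx, by); a sketch."""
    block = [row[bx:bx + N] for row in cur[by:by + N]]
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            x, y = bx + dx, by + dy
            if 0 <= x <= len(ref[0]) - N and 0 <= y <= len(ref) - N:
                cand = [row[x:x + N] for row in ref[y:y + N]]
                cost = sad(block, cand)
                if cost < best_cost:
                    best_mv, best_cost = (dx, dy), cost
    return best_mv, best_cost
```

Full search is the quality baseline here because it examines every candidate in the window; faster BMAs (and the proposed BFOA) examine only a subset of these candidates.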
5. EXPERIMENTAL RESULTS
I. Image Sequence (CALTRAIN (400X352))
Fig. 4: PSNR performance of Full Search (ESpsnr) and BFOA (BFOpsnr) for the Caltrain video sequence (PSNR in dB vs. frame number).
BLOCK MATCHING ALGORITHM    AVERAGE PSNR
FULL SEARCH                 26.8796
BFOA                        17.9163
Table 1: Average PSNR for FS and BFO for the Caltrain video sequence
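For reference, the PSNR figures reported in these tables are computed from the mean squared error between the original and reconstructed frames. A minimal sketch, assuming 8-bit pixels (peak value 255) and frames given as flat lists of pixel values:

```python
import math

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size frames."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    if mse == 0:
        return float("inf")   # identical frames: PSNR is unbounded
    return 10.0 * math.log10(peak * peak / mse)
```

A higher PSNR means the motion-compensated reconstruction is closer to the original frame, which is why the tables use average PSNR as the quality criterion.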
BLOCK MATCHING ALGORITHM    AVERAGE COMPUTATIONS
FULL SEARCH                 200.7202
BFOA                        192.4680
Table 2: Average computations for FS and BFO for the Caltrain video sequence
II. Image Sequence (Lecture Based Real Time Video (576X352))
Fig. 5: PSNR performance of BMAs (ESpsnr vs. BFOpsnr) for the lecture-based real-time video sequence (PSNR in dB vs. frame number).
BMA       AVERAGE PSNR        AVERAGE PSNR        AVERAGE PSNR
          FRAME DISTANCE 2    FRAME DISTANCE 4    FRAME DISTANCE 6
1. FS     23.9576             21.3240             20.1810
2. BFA    19.0901             17.8946             16.8261
Table 3: Average PSNR for FS and BFO for the lecture-based real-time video sequence
BLOCK MATCHING ALGORITHM          AVERAGE COMPUTATIONS
1. FULL SEARCH                    203.0987
2. BACTERIA FORAGING ALGORITHM    195.9567
Table 4: Average computations for FS and BFO for the lecture video sequence
Sr. No.    Image Sequence        BFO TO FS (%)
1          Video sequence        0.7
2          Caltrain sequence     0.6
Table 5: PSNR lag of BFO relative to FS
6. FUTURE SCOPE
From the results we see that bacterial foraging comes fairly close to the PSNR results of exhaustive search (ES). To further increase the PSNR, or to achieve more computational gain, BFO could be hybridized with other evolutionary techniques such as Genetic Algorithms and Particle Swarm Optimization. We could also make use of motion vector prediction from previous blocks, i.e., a hybrid of adaptive search and bacterial foraging.
REFERENCES
[1] K. M. Passino, "Biomimicry of Bacterial Foraging for Distributed Optimization and Control", IEEE Control Systems Magazine, vol. 22, no. 3, pp. 52-67, June (2002).
[2] Stephens, D. W. and Krebs, J. R., "Foraging Theory", Princeton University Press, Princeton, New Jersey, (1986).
[3] Bell, W. J., "Searching Behavior: The Behavioral Ecology of Finding Resources", Chapman and Hall, London, England, (1991).
[4] M. Tripathy and S. Mishra, "Bacteria Foraging-Based Solution to Optimize Both Real Power Loss and Voltage Stability Limit", IEEE Transactions on Power Systems, vol. 22, February (2007).
[5] Tridib K. Das and Ganesh K. Venayagamoorthy, "Bio-inspired Algorithms for the Design of Multiple Optimal Power System Stabilizers: SPPSO and BFA", IEEE Trans., February (2006).
[6] Dong Hwa Kim and Jae Hoon Cho, "Advanced Bacterial Foraging and Its Application Using Fuzzy Logic Based Variable Step Size and Clonal Selection of Immune Algorithm", IEEE International Conference on Hybrid Information Technology, (2006).
[7] D. P. Acharya, G. Panda, S. Mishra and Y. V. S. Lakshmi, "Bacteria Foraging Based Independent Component Analysis", IEEE International Conference on Computational Intelligence and Multimedia Applications, (2007).
[8] M. Ghanbari, "Video Coding: An Introduction to Standard Codecs", London: The Institution of Electrical Engineers, Ch. 2, 5, 6, 7 & 8, (1999).
[9] A. M. Tekalp, "Digital Video Processing", Prentice Hall, (1995).
[10] M. Bierling, "Displacement Estimation by Hierarchical Block Matching", Proc. SPIE, Visual Communications and Image Processing, Vol. 1001, pp. 942-950, (1988).
[11] H. G. Musmann, M. Hotter and J. Ostermann, "Object-oriented Analysis-Synthesis Coding of Moving Images", Signal Processing: Image Communication, Vol. 1, No. 2, pp. 117-138, Oct. (1989).
[12] N. Diehl, "Object Oriented Motion Estimation and Segmentation in Image Sequences", Signal Processing: Image Communication, Vol. 3, No. 1, pp. 23-56, Feb. (1991).
[13] Muhammad Ahmad, Dong Kim, Kyoung Roh and Choi, "Motion Vector Estimation Using Edge Oriented Block Matching Algorithm for Video Sequences", IEEE Trans. Circuits and Systems for Video Technology, vol. 8, pp. 860-863, Sept. (2000).
[14] T. Koga, K. Iinuma, A. Hirano, Y. Iijima and T. Ishiguro, "Motion Compensated Interframe Coding for Video Conferencing", in Proc. Nat. Telecommun. Conf., New Orleans, LA, pp. G5.3.1-G5.3.5, Nov. (1981).
[15] Jianhua Lu and Ming L. Liou, "A Simple and Efficient Search Algorithm for Block-Matching Motion Estimation", IEEE Trans. Circuits and Systems for Video Technology, vol. 7, no. 2, pp. 429-433, April (1997).
ISSN No.:0975-3389
EVALUATION OF INCUBATION CENTRES IN INDIA
Dr. Onkar Singh*
Professor & Dean Academics
Dronacharya College of Engineering, Gurgaon -123506, India
E mail: [email protected]
Dr H L Verma
Professor, Haryana School of Business
Guru Jambheshwar University of Science and Technology
Hissar-125001, India
E mail: [email protected]
______________________________________________________________________________________________________________________________
ABSTRACT
An incubation centre works with a stated objective and has a life cycle. In our country it is financed through public and private funds. To ensure optimum utilization of resources and to achieve maximum productivity, its performance needs to be appraised. An incubatee's objectives and aspirations, and the centre's selection criteria, should integrate with the objectives of the incubation centre. Developments in science, management and technology have allowed incubation centres to innovate new operating models that suit incubatees' needs. The success of any incubation centre lies in the performance of its incubatees and in the support services it provides; both aspects need to be made effective to ensure the success of the incubation centre as well as of the incubatees. It is therefore important to appraise the performance of the incubation centre and its incubatees periodically. To this end, the authors throw light on the factors that an incubatee and an incubation centre must keep in mind before selecting each other.
Keywords: Problems in performance evaluation, private sector initiative, state sponsorship, public / private partnership, research institute, creating a
benchmark, job creation potential, length of incubation period, success rate of graduates, composite advantage, evaluating incubation services.
______________________________________________________________________________________________________________________________
1. INTRODUCTION
Incubation Centres are intended to guide starting enterprises through their growth process with a nurturing environment and
hence reflect a strong endeavor to promote innovation and entrepreneurship with dedicated policy interventions. These are
also called business incubators (BIs). Incubation centers provide risk-taking, innovative minds a platform to start their venture with minimum risk and maximum support. The main objective of business incubators is to stimulate
economic development that benefits the country or region in terms of jobs and tax revenues to the government. Governments
consider them as a part of the business infrastructure and believe that the future accrued taxes and other benefits from new
economic development will more than offset the initial state investments. The distinguishing characteristics of the incubator
can be summarized as follows [1], [2], [3]:
1.1 A managed work space providing shared facilities, focused advisory services and interaction among tenants. It also helps in mobilizing financial resources.
1.2 A small management team with core competencies to provide early diagnosis and treatment or referral for
business threats and opportunities through a wide network of professionals and friends in the local community.
1.3 Careful selection of start-up groups entering the incubator, followed by nurturing, growth and graduation after two to three years. The selection and focused help, of course, account for the greater survival rate (two or three times greater compared to those not incubated).
1.4 The incubation centre itself runs as a business, with the perspective of becoming self-supporting once operations are fully established.
1.5 Initial support, however, is almost always provided by the central or state government in the form of a low-rent (or rent-free) vacant building and an operating subsidy, until rents and fees from tenants match operating expenses.
1.6 In addition to nurturing tenants within the incubator, outside assistance may also be provided to businesses in
their own premises.
*Corresponding Author
1.7 An important point to note is that if an incubator has no tenants within its walls to benefit from interaction and focused attention, then it is like a traditional small business and lacks the defining features of an incubator.
2. NEED FOR EVALUATION
The Government of India wants the culture of entrepreneurship to spread. Hence, most incubators have been sponsored by the Department of Science and Technology, Government of India. The Department of Information Technology has also started sponsoring incubation centers. The bulk of the sponsorship comes out of government funds. To ensure that public money is being utilized in the desired manner and put to optimal use, it is necessary to evaluate the performance of these incubation centers. There is no point spending capital and providing support services to new incubatees if they are not able to survive once they graduate from the incubation center; the endeavours of the government and the incubators would then be wasted. Therefore, it is necessary that each step, from the establishment of the incubation centre to the time an incubatee graduates and becomes self-reliant enough to face market competition, should be optimized and goal oriented. The incubation process consists of the incubatee, the business proposal and the incubation center. The effectiveness of all three leads to a successful enterprise; to ensure that they are effective, they need to be evaluated.
3. REVIEW OF LITERATURE ON RELEVANT ISSUES IN EVALUATION OF INCUBATION CENTERS
So far, active discussions and voluminous studies have been made on incubation centers. These studies can be divided into
four major categories: (i) The definition, organizational structure, function, performance and positioning of incubation
centers; (ii) the kinds of resources provided and the mode of operational management of incubation centers; (iii) the
relationship between the incubation centers and its tenants; and (iv) the characteristics of the tenants and incubation
performance. However, little work has been devoted to the study of the criteria used to admit tenants to incubation centres, to evaluate the business proposal and to evaluate the incubatee.
3.1 MacMillan, Siegel and Narasimha [4] suggested that the business plan should show as clearly as possible that the entrepreneur has the capacity to survive, an established track record, the ability to react well to risk and familiarity with the target market. Venture capitalists generally assess ventures systematically based upon the following six categories of risk: (i) loss of the entire investment; (ii) inability to bail out if necessary; (iii) failure to implement the venture idea; (iv) competitive risk; (v) management failure; and (vi) leadership failure.
3.2 There are many assessment criteria that can be used. However, the following ten criteria are referred to most often: (i) Ability to concentrate on the task for a prolonged period; (ii) Having a good understanding of the target market; (iii) At least a ten-fold return on investment in the first 5 to 10 years; (iv) Demonstrated leadership in past experience; (v) Ability to evaluate and respond to risks properly; (vi) High liquidity of capital; (vii) A targeted market with a high growth rate; (viii) Availability of relevant data on starting up a new business; (ix) Having a good grasp of the company; and (x) Having ownership of the product.
3.3 Liu, Fong-show and Song, Shyang-guey [5] state that, based on an on-site interview conducted during a visit to the innovation incubation center in Austin, Texas, the criteria adopted for screening applicants for admission to the center were: (i) An innovative technology; (ii) Capability to start a business; (iii) Capability to organize and lead the team; (iv) Having a sound business plan; (v) Having development potential; and (vi) Ability to create employment opportunities.
3.4 According to B. Wang, J. Shyu and G. Tzeng [6], the appraisal model for admitting new tenants to the incubation center at the High-Tech Development Center in Georgia, U.S., uses the following criteria: (i) Having a high-technology conception or prototype product; (ii) The product, the manufacturing process or the services of the business; (iii) A clear market opportunity; (iv) A qualified management team; (v) The growth potential of the company; (vi) Attractiveness to latent investment; and (vii) A strategic business plan.
3.5 The criteria adopted for admitting tenants to the Innovation Center at Rensselaer Polytechnic Institute (RPI), New York (1993) [7] are: (i) Having the aspiration to develop the start-up company into a science and technology based enterprise; (ii) Being able to survive in the existing or latent market-place: the venture must not only have the technology but must also be needed by the market-place; (iii) During the founding stage, the start-up company is financially self-sufficient; (iv) There is a good team; and (v) Affordability of rent and willingness to let RPI take part in 2% of the total interest of the new start-up.
3.6 This study also reviews the relevant interview literature in the field. From this, five aspects for screening applicants for admission to the incubation center can be sketched out: (i) technology and products; (ii) market and marketing; (iii) capital and finance; (iv) management team; and (v) risk and policy.
4. NEED FOR A FLEXIBLE AND MULTI-CRITERIA APPROACH
As discussed and highlighted in the preceding paragraphs, different organizations use different models to evaluate incubation centers. This is because they all have different parameters to measure success. A few of these are given below:
4.1 Incubation centers are being sponsored by different promoters. They have different objectives to achieve. Central and
state governments aim at achieving social and economic uplift of the area whereas private sector promoters are looking at
profits and returns on their investments.
4.2 The level of success that may be termed good or exceptional may vary from area to area. For example, even small profits generated by incubation centers located in small and under-developed regions may be termed a success, whereas higher profits generated by incubators located in bigger and more developed areas may still be termed a failure.
4.3 The majority of studies base their assessment on a single indicator or a few indicators, given that in many cases the available data does not allow for the consideration of multiple criteria. However, a single indicator is insufficient to capture the performance of incubation centers, since it may cover only one dimension of the complex support process. Moreover, this imposes boundaries on the explanatory power of the evaluation outcomes. For instance, with respect to venture survival rates as an indicator of incubator success, it has to be kept in mind that firms may induce improvements (e.g. in regional employment, improved competitiveness, acceleration of structural change) even if they fail; therefore, survival rates alone (like any other single indicator) may be unable to provide a complete picture of the performance of an incubation centre.
4.4 The requirement for multi-criteria analyses is further strengthened by the fact that, although the superior economic goals of incubators are widely comparable between most incubator organizations, the actual appropriateness of a particular indicator may vary between locations. For example, in so-called high-tech regions, where the support of technology-based firms and the commercialization of academic research might be the primary incubator objective, different success measures might be appropriate compared to incubators located in economically depressed or lagging regions, where the focus is more on general economic development processes (e.g. improvement of local business infrastructure, improvement of the general climate for entrepreneurship).
4.5 These different priorities within the same main goal categories also point to potential trade-off conflicts, meaning that some objectives might only be achieved by implicitly neglecting others. One could think of an incubator that reduces average incubation time, and therefore exhibits a high fluctuation and produces masses of graduates each year, but only a few of whose graduates survive after leaving the incubator facilities because of insufficient support during the incubation period.
4.6 Given the multiplicity of underlying objectives and the set of different measures that reflect different dimensions of incubator success, there is normally no single incubation centre that can be considered effective with regard to all relevant variables. The broad range of major BI objectives and the need to include a heterogeneous set of evaluation variables lead to considerable complexity, which is the major cause of the difficulty in developing uniform evaluation approaches.
4.7 Even if there existed a generally accepted set of evaluation criteria, there is another problem: benchmarking. In most cases it is not possible or meaningful to define adequate target values for particular indicators. For instance, it is difficult to specify what survival rates after graduation from the BI are acceptable, how many graduates incubators should generate per year, or which growth rates (e.g. in terms of employment) are satisfactory. For incubation centers, sufficiently specified and quantifiable evaluation criteria hardly exist. Neither incubator organizations and their management, nor local decision makers, define such criteria; they remain vague and verbalized, and are therefore difficult to control retrospectively or on an ongoing basis.
4.8 The selection of performance measures is largely dependent on the actual unit of analysis. Hackett and Dilts [8] differentiate between six different units of analysis when measuring the success of incubators: (i) the community in which the incubator operates, (ii) the incubator as an enterprise, (iii) the incubator manager, (iv) incubatee firms, (v) incubatee management teams, and (vi) the innovations being incubated. Two broad categories can be derived. On the one hand, indicators are needed that reflect the success of BIs as organizations, their development and growth, their effectiveness in providing value-added support and their long-term contribution to regional development objectives. On the other hand, variables have to be considered that
measure the success of the incubated ventures (especially after they graduate) and the impact of incubator support on their development paths. Therefore, it is felt that the evaluation of incubator performance must include both of these levels.
4.9 These explanations clearly highlight the need for multidimensional evaluations of incubators, and suggest that evaluators should not base their judgments on one single indicator or only a few, and that they should perform a combined examination of both the incubator level and the incubator-incubatee level. If available, a broad range of indicators should be used. Following this approach not only reduces the danger of excluding valuable information, but also increases the explanatory power of the evaluation results. Comparing and ranking incubator performance while taking multiple criteria into account is complex, but much more reliable.
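As an illustration of such a multi-criteria comparison, the following sketch ranks two incubators by a weighted composite score. The indicators, weights and figures are hypothetical, chosen only to show the mechanics; they are not drawn from this paper or from real incubator data.

```python
# Illustrative sketch only: the indicators, weights and values below are
# hypothetical, not taken from the paper or from any real incubator records.

def composite_score(indicators, weights):
    """Weighted sum of normalized indicator values (each in [0, 1])."""
    assert set(indicators) == set(weights)
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(indicators[k] * weights[k] for k in indicators)

# Each indicator is first normalized to [0, 1], e.g. against the best
# value observed across the incubators being compared.
incubator_a = {"survival_rate": 0.80, "graduates_per_year": 0.60, "jobs_created": 0.40}
incubator_b = {"survival_rate": 0.55, "graduates_per_year": 0.90, "jobs_created": 0.70}

# Weights reflect the sponsor's priorities and would differ by sponsor type.
weights = {"survival_rate": 0.5, "graduates_per_year": 0.2, "jobs_created": 0.3}

ranking = sorted(
    {"A": incubator_a, "B": incubator_b}.items(),
    key=lambda kv: composite_score(kv[1], weights),
    reverse=True,
)
for name, indicators in ranking:
    print(name, round(composite_score(indicators, weights), 3))
```

Note that changing the weights (e.g. a state sponsor weighting jobs created more heavily than survival) can reverse the ranking, which is exactly the sponsor-dependence discussed above.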
5. DIFFERENT ANGLES OF EVALUATION OF INCUBATION PROCESS
As discussed earlier, the important constituents of a successful incubation process are the incubatee, the business proposal and the incubation center. Through an effective incubation process implemented at the incubation center, an incubatee is able to convert a business proposal into a successful enterprise. Since the stakes are high and success depends on the effectiveness of all components, it becomes necessary that they evaluate each other. The phases in which evaluation shall be carried out are: (i) Evaluation of the incubator by sponsors; (ii) Evaluation of the business proposal; and (iii) Evaluation of the incubator by the incubatee.
6. EVALUATION OF INCUBATOR BY SPONSORS:
Incubation centres are created by different stakeholders, all of whom have different objectives. When appraising an incubation centre, each sponsor needs to evaluate it according to its own perceived objectives. Incubation centres are usually of the following types, based on their area of focus, and the expectations of the sponsors shape the desired goals of the incubators. The criteria used for evaluation may be as follows:
(i) Technical university: Innovation, faculty/graduate student involvement
(ii) Research institute: Research commercialization
(iii) Public/private partnership: Investment, employment, other social benefits
(iv) State sponsorship: Regional development, poverty alleviation, equity
(v) Private sector initiative: Profit, patents, spin-offs, equity in client, image
(vi) Venture capital-based: Winning enterprises, high portfolio returns.
7. EVALUATION OF INCUBATOR BY INCUBATEE
A prospective entrepreneur must prepare a good business plan. He must also know the types of incubation centers present in the area of his interest and which among them would suit him the most. The aspiring incubatee must do some homework on those incubation centers that suit his needs, based on the incubator's previous track record, the minimum and maximum financial assistance it has provided in the past, its board of directors, its alumni network and the number of incubatees it takes in at any time. A well-established incubation center tells its story without any narration: demand exceeds capacity, it contemplates expansion, and it can provide a good platform to any start-up. Only after doing all this research should he apply to these incubation centers.
8. One more important aspect to consider here is to know the limitations of an incubation center. Incubators have their own pros and cons. It has been argued that business incubators are:
• Selective: Most incubators cater to a selected group of potential winners. This leads to stringent criteria for the selection of incubatees.
• Dependent on government support: Most incubation centres, especially non-profit ones, are supported by government in terms of initial funding, formulation of policies and infrastructure facilities.
• Limited in out-reach: Initially they have limited reach to certain industry professionals. The majority of support comes from universities and educational institutes. Thus, they make only a marginal contribution to job creation in the short term. With increasing reputation in the market, the network can grow beyond its initial limits.
• Expensive: To support themselves, incubators often charge rent for their services and assistance. This makes them expensive in the initial stages of a start-up. However, most incubators charge a minimum in the start-up phase and increase their charges as the business grows and starts to sustain itself.
• Skill-intensive: They require experienced management teams.
• Creating dependency: The protective shell that these incubators provide often shields incubatees from the extremes of the severe competition faced in the open market by other businesses. This might lead to delays in the incubation process or difficulties in surviving the challenges of the open market after graduation from the incubation centre.
• In need of good business infrastructure: It is very important for a new venture to minimize cost and serve the client's needs in minimum time to outface the competition. Thus, start-ups need modern infrastructure facilities and a good location (preferably near the market). New incubation centres, especially government-funded ones, lack this advantage.
• Reliant on external subsidy: Some incubatees look for subsidies from the incubator or government, which may lend financial stability to start-ups until they become self-sustainable. However, the incubator may or may not provide these subsidies to new incubatees in the initial period of their growth.
9. GUIDELINES TO THE INCUBATEE FOR EVALUATION OF AN INCUBATION CENTRE:
The following questions shall prove useful for aspiring incubatees in assessing an incubation centre [10]. They will help the incubatee assess the strengths and weaknesses of the incubation centre. The check-list can be used by both incubator managers and prospective tenants in their decision-making process.
9.1 Facilities: (i) Offices; (ii) Office equipment; (iii) Communication facilities; (iv) Laboratory / prototyping / testing equipment; and (v) Meeting rooms.
9.2 Business Development Services: (i) Provides coaching / e-coaching on business skills and business model development; (ii) Provides business extension services (accounting, legal, secretarial support, etc.); (iii) Provides assistance in the preparation of business plans; (iv) Provides assistance in building the business management team; (v) Organizes business development training programs; (vi) Provides milestone-based operational guidance and technical assistance; and (vii) Provides market research and product marketing assistance.
9.3 Assistance in Fund Raising: (i) Has its own seed investment fund; (ii) Facilitates access to public business development funds; (iii) Has established a network of private investors (business angels, venture capitalists); (iv) Helps tenants prepare their projects for start-up venture financing; and (v) Organizes presentations of tenants' projects to prospective investors.
9.4 Networking and Building Partnerships: (i) Has established a network of critical business service providers and negotiated special arrangements with them; (ii) Provides training and advisory services on building strategic business partnerships; and (iii) Organizes regular (e.g. weekly / bi-weekly) networking meetings for tenants and investors / prospective business partners.
10. EVALUATION OF BUSINESS PROPOSAL
Once an incubatee has decided upon the incubator that may meet his requirements, he shall approach it seeking acceptance and admission. All incubators have certain criteria of their own to ensure that the business proposal has a reasonable chance of success. The incubatee should have a very good business plan or business idea so that he will be selected easily by whichever incubation center he approaches. The following criteria may be used to assess a business proposal for admittance to an incubation centre [11]:
10.1 Technology and product: (i) Innovation in technology; (ii) Potential of technology; (iii) Legitimate source of technology; (iv) Difficulty in converting technology into a product; and (v) Probability of success in product research and development.
10.2 Market and marketing: (i) Needs in the market; (ii) Market entrance barrier; (iii) Quantity and value of products;
(iv) Main market; and (v) Substitutes, etc.
10.3 Capital and Finance: (i) Structure of shareholders; (ii) Quality of capital; (iii) Return on investments; (iv) Break-even point; and (v) Relation between owners and management.
10.4 Management Team: (i) Contribution of major shareholders to utility; (ii) Ambition of senior management; (iii) Leadership; (iv) Cohesiveness of management team; and (v) Adaptability of senior management.
10.5 Risk and policy: (i) Support of government policy; (ii) Probability of success in mass production; (iii) Probability of success in market development and marketing; (iv) Probability of emergence of potential competitors; and (v) Degree of influence on the industrial sector.
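To make the use of these five aspects concrete, the sketch below screens a proposal with a floor on every aspect and a bar on the total score. The 1-5 rating scale, the thresholds and the sample ratings are invented for illustration; actual admission committees will score and weight differently.

```python
# Hypothetical screening sketch: the aspect names follow the five aspects in
# the text; ratings, thresholds and sample scores are invented, not taken
# from any real incubation centre's admission procedure.

ASPECTS = ["technology_and_product", "market_and_marketing",
           "capital_and_finance", "management_team", "risk_and_policy"]

def screen_proposal(ratings, min_per_aspect=3, min_total=18):
    """Admit only if every aspect meets a floor AND the total meets a bar.

    Each rating is an integer from 1 (weak) to 5 (strong).
    """
    assert set(ratings) == set(ASPECTS)
    # A single very weak aspect rejects the proposal outright...
    if any(ratings[a] < min_per_aspect for a in ASPECTS):
        return False
    # ...otherwise the overall strength decides.
    return sum(ratings.values()) >= min_total

proposal = {"technology_and_product": 4, "market_and_marketing": 4,
            "capital_and_finance": 3, "management_team": 5, "risk_and_policy": 3}
print(screen_proposal(proposal))  # total 19, every aspect at least 3
```

The per-aspect floor captures the observation above that, for example, a strong technology cannot compensate for a management team judged unfit to execute it.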
CONCLUSION
Whether an individual incubatee graduates successfully from the incubation center or fails, the success of an incubator depends on the success of all the incubatees that took admission to that center, not on a single failure. The average number of graduates per year measures the overall effectiveness of an incubation center with respect to the underlying incubator function and the acceleration of the entrepreneurial process. A higher number of graduate firms also reflects a healthy fluctuation of new firms, meaning that more start-ups can be supported by the incubator, which contributes better to regional development objectives. However, we cannot underestimate the importance of those who failed to establish themselves. They affect the prestige of an incubator and also play an important part in the feedback and control mechanism. A minimal company mortality rate reflects the strength of the incubator in supporting new ventures, and also serves as an indicator of how effectively the resources are being utilized.
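The two headline indicators named in this conclusion can be computed as below. The cohort figures are made up purely to show the arithmetic; real evaluations would use an incubator's multi-year records.

```python
# Sketch of the two indicators from the conclusion. The sample numbers are
# hypothetical, not data from any actual incubation centre.

def graduates_per_year(graduates_by_year):
    """Average number of graduate firms per year over the recorded years."""
    return sum(graduates_by_year.values()) / len(graduates_by_year)

def mortality_rate(admitted, still_operating):
    """Share of incubated firms that failed; lower reflects a stronger incubator."""
    return (admitted - still_operating) / admitted

cohort = {2007: 4, 2008: 6, 2009: 5}
print(graduates_per_year(cohort))                       # 5.0
print(mortality_rate(admitted=30, still_operating=24))  # 0.2
```

As argued in section 4, neither figure should be read alone: a high graduation rate paired with a high post-graduation mortality rate signals the trade-off conflict described in 4.5.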
REFERENCES
[1] Lalkaka, Rustam, "Technology Business Incubation: Role, Performance, Linkages, Trends", National Workshop on Technology Parks and Business Incubators, Isfahan, Iran, (2003).
[2] Lavrow, Marina and Sample, Sherry, "Business Incubation: Trend or Fad? Incubating the Start-up Company to the Venture Capital Stage: Theory and Practice", August (2000).
[3] Eduardo, Carlos, "The Incubation Process", InfoDev Incubator Support Center (iDISC), September (2003).
[4] MacMillan, I. C., P. N. Siegel and S. Narasimha, "Criteria used by venture capitalists to evaluate new venture proposals", Journal of Business Venturing, pp. 119-128, (1985).
[5] Liu, F. S. and S. G. Song, The Report of Technology Transfer Meeting in U.S., pp. 1-2, (1993).
[6] Brochure of the Advanced Technology Development Center, GIT, Atlanta, Georgia, (1993).
[7] Buckley, J. J., "Fuzzy hierarchical analysis", Fuzzy Sets and Systems, vol. 17, pp. 233-247, (1985).
[8] Hackett, S. M. and Dilts, D. M., "Inside the black box of business incubation: study B-scale assessment, model refinement, and incubation outcomes", Journal of Technology Transfer, vol. 33, pp. 439-471, (2008).
[9] Lalkaka, Rustam, "'Best Practices' in Business Incubation: Lessons (yet to be) Learned", International Conference on Business Centers: Actors for Economic & Social Development, Brussels, 14-15 November (2001).
[10] Kotelnikov, Vadim, "Efficiency Evaluation Checklist for Business Incubators", Ten3 Business.
[11] Wang, Benjamin and Shyu, Joseph (Institute of Management of Technology, National Chiao Tung University, 1001 Ta Hsueh Rd., Hsinchu 300, Taiwan; [email protected], [email protected]) and Tzeng, Gwo-Hshiung (Department of Business Administration, College of Management, Kainan University, No. 1 Kainan Road, Luchu, Taoyuan 338, Taiwan; [email protected]), "Appraisal Model for Admittance of New Tenants to the Incubation Centre at ITRI".
INTERNET APPLICATIONS: A SOFT COMPUTING APPROACH
Dr. C. Ram Singla *
Advisor (R&D) & Professor of Electronics and Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Dr. B.M.K Prasad
Principal, Dronacharya College of Engineering
Gurgaon-123506, India
E-mail: [email protected]
Vinay Kumar Nassa
Associate Professor, Department of Electronics and Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
______________________________________________________________________________________________________________________________
ABSTRACT
The emergence of the Internet, together with ubiquitous, powerful PC systems, has created tremendous opportunities for a new generation of applications in areas such as education and business, which can reach millions of students and individuals through Rich Internet Applications (RIAs). Recent developments in technology are leading to a speedy convergence between marketing and technology in respect of two main characteristics: rich and reach.
Many Internet applications need to deal with large amounts of data collected from non-technical users, which is imprecise and incomplete in nature. Well-structured rules are hardly available in general applications, and the nature and patterns of the users can never be fully accounted for. Recently, the use of soft computing tools for solving Internet application problems has been gaining the attention of researchers because of their ability to handle imprecision and uncertainty in large and complex search spaces. This paper is an attempt to present recent Internet applications using these techniques.
Keywords: Soft computing paradigm, artificial neural network, Fuzzy logic, Rich Internet Applications (RIAs), World Wide Web (WWW), Ontology, and
eCommerce.
______________________________________________________________________________________________________________________________
1. INTRODUCTION
With the rapid growth of information technologies, a new era, the digital age, has arrived. These technologies make more and more innovative products and electronic/digital services possible. The digital revolution is happening much more quickly than the industrial revolution did, and our economic society and life are changing significantly in this digital age. The digital revolution in our world is spurring on facilities, hardware, software, services, and capital investment. Former Vice President of the USA, Albert Gore Jr., said: "We are on the verge of a revolution that is just as profound as the change in the economy that came with the industrial revolution. Soon electronic networks will allow people to transcend the barriers of time and distance and take advantage of global markets and business opportunities not even imaginable today, opening up a new world of economic possibility and progress." In order to remain competitive in the digital age, many countries and enterprises pay much attention to the development and application of information technologies.
1.1 According to the characteristics of the development of the digital revolution, Internet, Innovation and Internationalization (the 3 I's) are three major trends in the digital age. In the digital age, the Internet has introduced a new business model: electronic business. Electronic business implies that business transactions are conducted over computer-mediated networks. The Internet provides a two-way communication channel that lets enterprises fulfil the whole or part of traditional business activities. So far, many industries all over the world have lavished much attention on electronic business.
Innovation is the only way for an enterprise to survive in the digital age. For high-technology industries especially, much attention is paid to the development of innovative products and services.
1.2 Internet applications are also commonly known as Web applications. The World Wide Web (or the "Web") is a system of interlinked hypertext documents that runs over the Internet. With a Web browser, a user views Web pages that may contain text, images and other multimedia, and navigates between them using hyperlinks. The Web was created around 1990 by Tim Berners-Lee and Robert Cailliau, working at CERN in Geneva, Switzerland [1]. Since then, an ever-growing number of applications has emerged, ranging from e-commerce to information search.
* Corresponding Author
1.3 Typical Web applications are characterized by large amounts of data that grow in size and in number of categories on a daily basis. Unlike other engineering problems, where the issues and the size of the problem remain fairly constant, Web applications present a new challenge to researchers. As of 2005, it was estimated that there were more than 15 billion web pages, and the number is expected to be even higher and to keep growing today [1]. How to effectively retrieve information from the web and guide users through this maze-like information jungle can hardly be handled with a few ad hoc rules. Business transactions over the Internet have evolved from B2B (business-to-business) and B2C (business-to-consumer) to the current C2C (consumer-to-consumer) model, owing to the emerging popularity of P2P. eCommerce has new issues to deal with, ranging from personalized recommendation to product recommendation. From the web deployment perspective, classifying Internet users and their usage patterns is a must in ensuring that the proper content is delivered to the right user. Other related applications that will find soft computing techniques useful include network routing, Internet traffic classification, and Internet video games.
2. SOFT COMPUTING PARADIGM
Soft computing is a consortium of methodologies that work synergistically and provide, in one form or another, flexible information processing capabilities for handling real-life ambiguous situations. Its aim, unlike that of conventional (hard) computing, is to exploit the tolerance for imprecision, uncertainty, approximate reasoning and partial truth in order to achieve tractability, robustness, low solution cost and a close resemblance to human-like decision-making. The constituents of soft computing are:
Fuzzy Logic (FL), Artificial Neural Networks (ANN), Evolutionary Algorithms (EAs) (including genetic algorithms (GAs),
genetic programming (GP), evolutionary strategies (ES)), Support Vector Machines (SVM), Wavelets, Rough Sets (RS),
Simulated Annealing (SA), Swarm Optimization (SO), Memetic Algorithms (MA), Ant Colony Optimization (ACO) and
Tabu Search (TS).
In this paper, the applications to the Internet of the main constituents of soft computing, namely fuzzy logic and artificial neural networks, are briefly discussed.
2.1 Fuzzy Logic
Fuzzy logic is a comparatively recent technique (its first engineering control applications appeared in the 1970s) for solving control problems. It can easily be used to implement systems ranging from simple, small, or even embedded ones up to large systems. The key idea of fuzzy logic is that it uses a simple and intuitive way to obtain the output(s) from the input(s): the outputs are related to the inputs by if-then rules, and this is the secret behind the ease of the technique. The most fascinating thing about fuzzy logic is that it accepts the uncertainties that are inherent in realistic inputs and deals with them in such a way that their effect is negligible, resulting in precise outputs.
The concept of Fuzzy Logic (FL) was conceived by Lotfi Zadeh, a professor at the University of California at Berkeley, and presented not as a control methodology but as a way of processing data by allowing partial set membership rather than crisp membership or non-membership. FL provides a simple way to arrive at a definite conclusion based upon vague, ambiguous, imprecise, noisy, or missing input information. It mimics human control logic.
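The if-then mapping from fuzzy inputs to a crisp output can be sketched in a few lines. The Python fragment below is purely illustrative (the temperature breakpoints, the fan-speed consequents and the Sugeno-style weighted-average defuzzification are all assumptions for the example, not taken from the paper); it fuzzifies a temperature against three overlapping sets and combines three rules into one crisp fan speed:

```python
def tri(x, a, b, c):
    """Triangular membership: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c):
    """Map a crisp temperature to a crisp fan speed via three fuzzy rules."""
    # Fuzzification: degree of membership in each overlapping input set.
    cold = tri(temp_c, -10, 0, 18)
    warm = tri(temp_c, 10, 22, 32)
    hot = tri(temp_c, 25, 40, 55)
    # If-then rules, Sugeno style: IF temp is cold THEN speed is 0, etc.
    rules = [(cold, 0.0), (warm, 40.0), (hot, 100.0)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0     # weighted-average defuzzification
```

A temperature of 28 degrees, for instance, is partly "warm" and partly "hot", and the output blends the two rules smoothly instead of switching abruptly between them.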
2.2 Neural Networks
An Artificial Neural Network (ANN) is an information processing model that is able to capture and represent complex input-output relationships. The motivation for the development of the ANN technique came from a desire for an intelligent
artificial system that could process information in the same way as the human brain. Its novel structure is represented as
multiple layers of simple processing elements, operating in parallel to solve specific problems. ANNs resemble the human brain in two respects: learning and the storage of experiential knowledge. An artificial neural network learns and classifies a
problem through repeated adjustments of the connecting weights between the elements. In other words, an ANN learns from
examples and generalizes the learning beyond the examples supplied.
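Learning through repeated adjustment of connection weights can be illustrated with the simplest ANN, a single perceptron. The sketch below is not from the paper; the logical-AND data set is a standard toy example. The weights are nudged after every misclassified example until the four training examples are reproduced:

```python
import numpy as np

def train_perceptron(X, y, epochs=10, lr=1.0):
    """Repeatedly adjust connection weights after each misclassified example."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (target - pred) * xi      # no change when pred == target
    return w

# Learn logical AND from four labelled examples (a standard toy data set).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w = train_perceptron(X, y)
preds = [1 if np.append(x, 1.0) @ w > 0 else 0 for x in X]
```

The same update rule, scaled up to many units and layers, is what "repeated adjustments of the connecting weights" refers to in the text.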
Artificial neural network applications have recently received considerable attention. The methodology of modeling, or
estimation, is somewhat comparable to statistical modeling. Neural networks should not, however, be heralded as a substitute
for statistical modeling, but rather as a complementary effort (without the restrictive assumption of a particular statistical
model) or an alternative approach to fitting non-linear data.
Neural networks were earlier thought to be unsuitable for deduction because of their inherent black-box nature. This, however, is also a strength: no information in symbolic form is needed to train the neural network for subsequent
classification and/or deduction beyond its domain of training. There has also been active research aimed at extracting the
embedded knowledge in trained networks in the form of symbolic rules. This serves to identify the attributes that are needed
in performing classification. Many Internet applications that need to be classified do not have explicit symbolic rules. One
example of such is the Internet user classification problem, where abundant data is available although no explicit rules can be easily formed. Unlike fuzzy sets, the main use of neural nets in Internet applications is in the areas of rule extraction and clustering. Lately, game players have come to demand a more realistic and changing environment; they are no longer satisfied playing in situations generated from preset rules or variations thereof.
The adaptive ability of neural networks to react to new situations and to generalize from training sets is an ideal match for extending Internet games beyond traditional settings, and can help create a new generation of games that adapt to different play styles.
3. INFORMATION RETRIEVAL AND eCOMMERCE APPLICATIONS
Information retrieval has always been the key application of the Internet. It is a challenging problem, as the data structure is typically not available to the information seeker. Much of the work has focused on more effective search as well as on mining web pages. The need to recover information from systems with no clear structure is an ideal task for neural networks. The use of fuzzy semantics to extend the search scope for better results has also been investigated. eCommerce has been the main interest of Internet applications, and various new techniques have been researched in recent years to solve emerging problems such as automated help in financial processes, personalized product recommendation and self-guided shopping. Numerous neural network and fuzzy logic based designs have been shown to be helpful in all these eCommerce processes.
3.1 Web content mining and search
As one of the important fields in data mining, Web mining refers to the use of data mining techniques to retrieve, extract and evaluate information from Web documents and services. Web mining typically addresses semi-structured or unstructured data, e.g., Web pages and log files, which are often represented by imprecise or incomplete information. This makes neural networks useful instruments for mining knowledge from such data. Neural networks have been applied to the web page mining problem and the results appear promising, though high precision rates are still not quite attainable [2, 3]. The approach in Figure 1, which uses an RBF (Radial Basis Function) neural network for classification, is more efficient than other available classifiers. When tested with 1000 Web pages from several popular web sites, the RBF neural network classified Web documents into five categories - News, Sports, IT, Education and Other - with 80 percent accuracy, about 10 percent higher than the BP (Back Propagation) model.
Fig. 1: Web Classification Mining System Model based on RBF neural network
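The RBF idea sketched above can be condensed into a few lines: a hidden layer of Gaussian units centered at prototypes, followed by a linear output layer fitted by least squares. The two-dimensional "page feature" clusters, the prototype centers and the class labels below are illustrative assumptions; a real web classifier such as the one in [3] works on high-dimensional text features:

```python
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    """Hidden layer: one Gaussian activation per prototype center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy stand-in for page-feature vectors: two well-separated 2-D clusters
# playing the role of, say, "Sports" and "IT" pages.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(2.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

centers = np.array([[0.0, 0.0], [2.0, 2.0]])          # one prototype per class
H = np.hstack([rbf_features(X, centers), np.ones((len(X), 1))])
W, *_ = np.linalg.lstsq(H, np.eye(2)[y], rcond=None)  # linear output layer
pred = (H @ W).argmax(axis=1)                          # predicted class per page
```

Because only the linear output layer is fitted, training reduces to a single least-squares solve, which is one reason RBF networks can be faster to train than back-propagation models.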
Fuzzy logic has been used in web mining because it is an ideal tool for handling the approximate or vague notions inherent in many information retrieval (IR) tasks [4, 5].
Most search engines look up data in their databases using the exact words in the user query. Because many users cannot articulate the search with the intended keywords, search engines fail to return the desired results from time to time. It has been found that users usually skim only the first page of the search results; hence, it becomes crucial to present the best possible results to each user from their individual perspective. Fuzzy and neural network aided search engines can provide more relevant results for users' queries [6, 7]. This is especially true when semantics is taken into the design consideration. It is a natural consequence, as more relevant information is extracted and added to the process through the generalization features of fuzzy logic and neural networks. Fuzzy logic uses fuzzy rules to track users and make decisions about their interests. A neural network, once trained, can be used to produce an optimal search result. The objective is to retrieve documents that satisfy a given query subject to a specific topic.
Figure 2 shows a search engine architecture that uses fuzzy logic to interpret the semantics of the search keywords. This search engine enhances the search with Fuzzy Linguistic Search (FLS) or Fuzzy Numeric Semantics (FNS) search modules. The fuzzy linguistic search extends the search to semantically related words based on the keywords entered in the user query. The FNS search technique searches for data that contain keywords semantically related through a fuzzy linguistic-to-numeric mapping in the chosen application domain. For example, a search for high vitamin A foods is translated into a search for foods that contain more than a certain number of International Units of vitamin A. This increases the efficiency of the search engine by as much as 50% in many cases.
Fig. 2: Perception-based search engine with fuzzy semantic
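The linguistic-to-numeric mapping behind an FNS-style search can be sketched as below. The vitamin A breakpoints (1000 and 5000 IU) and the per-food IU values are illustrative assumptions, not figures from the paper or from any real food database:

```python
def high_vitamin_a(iu):
    """Membership of a food in the fuzzy set 'high in vitamin A'.
    The 1000/5000 IU breakpoints are illustrative assumptions."""
    if iu <= 1000:
        return 0.0
    if iu >= 5000:
        return 1.0
    return (iu - 1000) / 4000.0          # linear ramp between the breakpoints

# Hypothetical IU-per-serving values standing in for a food database.
foods = {"carrot": 10191, "spinach": 2813, "apple": 98}

# FNS-style retrieval: rank foods by their degree of membership in the
# fuzzy set instead of matching the literal keyword "high".
hits = sorted(foods, key=lambda f: high_vitamin_a(foods[f]), reverse=True)
```

The linguistic term "high" never has to appear in the indexed data; the fuzzy set turns it into a graded numeric condition, which is exactly what lets the engine return partially matching items instead of nothing.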
3.2 eCommerce
In the banking and financial industry, the management of accounts and information relies heavily on networked computerized data systems. In AEC (Architecture, Engineering and Construction) projects, a large number of members have to work together on the design and production of complex products. These team members may be geographically dispersed; hence computer-supported collaborative work is important in the design process, and IT tools are needed to enable collaboration among the team members. Risk assessment is an important area in such complex projects. It is often carried out individually by the various disciplines involved in the project, so it is important to have tools that enable risk assessment to be undertaken collaboratively. Neural networks and fuzzy logic can be used to help assess the risks and to correlate information from a variety of technological and database sources to identify suspicious account activity.
Fig. 3: Access Mechanisms to Account
In a typical finance process, the system needs to quickly recognize underlying complex patterns in data and use them to assign risk levels to particular sets of transactions and activities observed in the environment of a financial institution. The scheme illustrated in Figure 3 proposes an activity monitoring and analysis tool based on the abilities of neural networks. Such systems are major sources of information that can be used in the identification and prevention of financial fraud such as illegal or unauthorized transfers of funds by external and internal entities.
Finding a matching product in a large product space that satisfies the buyer's preferences is one of the essential activities in e-commerce. In an online market, a customer specifies his or her product requirements while purchasing a product. At the same time, the customer wants information about the popularity of the product. Neural networks and fuzzy logic can be used to compare and rank products based on the customer's own preferences and on information from the Internet about the products [4, 5, 8].
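A fuzzy preference-based ranking of this kind can be sketched as follows. The catalogue entries, the membership functions ("around $200", "popular") and the weights are all invented for illustration and are not from the cited work:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def score(product, prefs):
    """Weighted mean of the fuzzy satisfaction of each preference."""
    total = sum(w * mu(product[attr]) for attr, (mu, w) in prefs.items())
    return total / sum(w for _, w in prefs.values())

# Hypothetical catalogue: price in dollars, average user rating out of 5.
products = {
    "cam-A": {"price": 250, "rating": 4.6},
    "cam-B": {"price": 480, "rating": 4.9},
    "cam-C": {"price": 120, "rating": 3.1},
}
prefs = {
    "price": (lambda p: tri(p, 0, 200, 400), 2.0),   # "around $200", weight 2
    "rating": (lambda r: min(1.0, r / 5.0), 1.0),    # "popular", weight 1
}
ranking = sorted(products, key=lambda n: score(products[n], prefs), reverse=True)
```

Here the expensive, highly rated camera loses to the one that better matches the buyer's vague price preference, which is the behavior a crisp price filter cannot reproduce.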
4. OTHER APPLICATIONS
4.1 Video Game
The market for video games has grown steadily in recent years, and gaming has become a facet of many people's lives. This market continues to expand. In addition to their large player base, video games carry perhaps the least risk to human life of any real-world application, which makes them an excellent test bed for techniques in artificial intelligence and machine learning. Machine learning is one of the most compelling and yet least exploited technologies in the video game industry. This technology, using neural networks and fuzzy logic, can greatly enhance video games, make them more interesting and realistic, and help build entirely new genres. Figure 4 describes a novel game built around a real-time enhancement of the NeuroEvolution of Augmenting Topologies (NEAT) method.
Real-time NEAT (rtNEAT) is able to evolve neural networks as the game is played, making it possible for agents to evolve increasingly sophisticated behaviors in real time. Robot game agents (represented as small circles) are depicted playing a game in the large box. Every few ticks, two high-fitness robots are selected to produce an offspring that replaces another robot of lower fitness. This cycle of replacement operates continually throughout the game, creating a constant turnover of new behaviors.
Fig. 4: The main replacement cycle in rtNEAT
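The replacement cycle itself can be sketched schematically. The fragment below is not real NEAT (the genomes are plain lists, and there is no topology mutation, speciation, or in-game fitness evaluation); it only mirrors the select-breed-replace loop described above:

```python
import random

def replacement_tick(population, rng):
    """One schematic rtNEAT-style step: two high-fitness agents produce an
    offspring that replaces the current lowest-fitness agent."""
    ranked = sorted(population, key=lambda a: a["fitness"])
    parent_a, parent_b = ranked[-1], ranked[-2]          # two fittest agents
    child = {
        "genome": [rng.choice(pair)                       # toy crossover only
                   for pair in zip(parent_a["genome"], parent_b["genome"])],
        "fitness": 0.0,                                   # earned by future play
    }
    population[population.index(ranked[0])] = child       # replace the worst
    return population

rng = random.Random(42)
pop = [{"genome": [rng.random() for _ in range(4)], "fitness": float(i)}
       for i in range(5)]
replacement_tick(pop, rng)   # the lowest-fitness agent is replaced
```

Because only one agent is swapped out per tick, the population evolves without the visible "generation gaps" of conventional evolutionary algorithms, which is what makes the approach usable during live play.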
4.2 Internet Traffic Identification
Internet traffic identification is an important tool for network management. It allows operators to better predict future traffic matrices and demands, security personnel to detect anomalous behavior, and researchers to develop more realistic traffic models. A sophisticated Bayesian-trained neural network is able to classify flows, based on header-derived statistics and no port or host (IP address) information, with up to 99% accuracy for data trained and tested on the same day, and 95% accuracy for data trained and tested eight months apart [10]. Further, the neural network produces a probability distribution over the classes for a given flow. By providing high accuracy without access to packet payloads or sophisticated traffic processing, this technique offers good results as a low-overhead method with potential for real-time implementation [9, 10, 11].
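Two ingredients of such a classifier are easy to sketch: header-derived flow statistics, and a probability distribution over the classes. The particular feature list, the class scores and the class names below are illustrative assumptions, not the features or trained weights of the cited system:

```python
import math

def flow_features(pkt_sizes, iat_ms):
    """Header-derived statistics only: no ports, addresses or payload bytes.
    The particular feature set here is an illustrative choice."""
    n = len(pkt_sizes)
    mean_sz = sum(pkt_sizes) / n
    var_sz = sum((s - mean_sz) ** 2 for s in pkt_sizes) / n
    mean_iat = sum(iat_ms) / len(iat_ms)
    return [n, mean_sz, var_sz, mean_iat]

def softmax(scores):
    """Turn per-class scores into a probability distribution over classes."""
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Features for one hypothetical flow, and hypothetical class scores
# (in the cited work these would come from the trained Bayesian network):
features = flow_features([1500, 1500, 40], [10.0, 12.0])
probs = softmax([2.0, 0.5, -1.0])        # e.g. WWW / MAIL / P2P likelihoods
```

Reporting a full distribution rather than a single label is what lets an operator attach a confidence to each classification and defer low-confidence flows for further inspection.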
4.3 Miscellaneous
An ontology is the conceptualization of a domain into a human-understandable, machine-readable format consisting of entities, attributes, relationships and axioms. It is used as a standard knowledge representation for the semantic web. However, the conceptual formalism supported by a typical ontology may not be sufficient to represent the uncertain information commonly found in many application domains, owing to the lack of clear-cut boundaries between the concepts of those domains. One possible solution is to incorporate fuzzy logic into the ontology to handle uncertain data. Traditionally, fuzzy ontologies are generated and used in text retrieval and search engines. However, the manual generation of a fuzzy ontology from a predefined concept is a difficult and tedious task that often requires expert interpretation, so the automatic generation of concept hierarchies and fuzzy ontologies from the uncertain data of a domain is highly desirable. To this end, fuzzy logic researchers have proposed the Fuzzy Ontology Generation Framework, which automatically generates a fuzzy ontology from uncertain information [12].
The volume of e-mail that we receive is constantly growing. We spend more and more time filtering e-mails and organizing them into folders to facilitate later retrieval. The rate of unsolicited (spam) e-mail is also rapidly increasing, and it varies significantly in content, from get-rich-quick schemes and sales offers to offensive e-mails and pornographic sites. Most modern e-mail software packages provide some form of programmable filtering, typically in the form of rules that organize mail into folders or dispose of spam based on keywords detected in the header or body. However, most users avoid customizing the software. In addition, manually constructing robust rules is difficult, as users are constantly creating, deleting and reorganizing their folders. Even if the folders remain the same, the nature of the e-mails within a folder may well drift over time, and the characteristics of spam e-mail also change over time. Hence, the rules must be constantly tuned by the user, which is not only time consuming but also error-prone.
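The kind of keyword rule filtering described above can be sketched in a few lines. The rules, keywords and folder names below are invented for illustration and do not come from any real mail client; the last call shows why hand-written rules drift out of date:

```python
# Hypothetical keyword rules of the kind a user must write and keep re-tuning.
RULES = [
    ({"viagra", "lottery", "winner"}, "spam"),
    ({"invoice", "meeting"}, "work"),
]

def route(message, rules, default="inbox"):
    """File a message into the folder of the first rule whose keywords match."""
    words = set(message.lower().split())
    for keywords, folder in rules:
        if words & keywords:
            return folder
    return default

route("Lottery results: you are a winner", RULES)   # filed as "spam"
# Spam that avoids the listed keywords slips straight through, so the
# rules must be re-edited by hand whenever the spammers' wording drifts.
route("cheap meds online", RULES)                   # filed as "inbox"
```

An adaptive classifier, by contrast, would re-learn such boundaries from the drifting mail stream instead of waiting for the user to edit the keyword lists.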
With the rapid development of Internet technologies, remotely and ubiquitously accessing and operating medical applications is becoming increasingly possible. Virtual medical advice may therefore be planned for the future, using an Internet link and a fuzzy logic algorithm in place of the direct doctor-patient relation. In this type of application, uncertainty can arise at various levels, and it can be handled using a fuzzy logic algorithm. Thus the representation of uncertainty and the decision process through the use of fuzzy algorithms, with the aim of replacing a medical consultation, can be addressed [13].
5. CONCLUSION
There has been growing interest in using fuzzy logic and neural networks to solve Internet problems over the past 10 years. The inherent capability of neuro-fuzzy techniques to handle vague, large-scale, and unstructured data is an ideal match for Internet-related problems. Earlier research tended to focus on how to extract the needed information from unstructured Internet data. Lately, we have seen the use of neuro-fuzzy methodologies in building a structured web; the semantic web is one example. The notion of a structured web becomes more realistic when the concept of fuzzy logic is employed, since web data tend to be fuzzy in nature. An integration of soft computing techniques into semantic web methodologies is expected in the near future. Genetic algorithms for Internet applications should also become more popular as Internet applications grow in scale.
REFERENCES
[1] A. Gulli and A. Signorini, "The indexable web is more than 11.5 billion pages", International World Wide Web Conference, pages 902-903, (2005).
[2] Chen, Chih-Ming, "Incremental Personalized Web Page Mining Utilizing Self-Organizing HCMAC Neural Network", Web Intelligence and Agent Systems, v2, n1, pages 21-38, (2004).
[3] Chen Junjie and Huang Rongbing, "Research of Web classification mining based on RBF neural network", 8th Control, Automation, Robotics and Vision Conference (ICARCV 2004), Vol. 2, pages 1365-1367, December (2004).
[4] B. K. Mohanty and K. Passi, "Web based information for product ranking in e-business: a fuzzy approach", Proceedings of the 8th International Conference on Electronic Commerce (ICEC '06), ACM Press, August (2006).
[5] World Wide Web definition, http://en.wikipedia.org/wiki/World_Wide_Web#Publishing_Web_pages.
[6] Agarwal, S. and Agarwal, P., "A Fuzzy Logic Approach to Search Results' Personalization by Tracking User's Web Navigation Pattern and Psychology", Tools with Artificial Intelligence, pages 318-325, 14-16 Nov (2005).
[7] Cheng-Zhong Xu and Tamer I. Ibrahim, "A Keyword-Based Semantic Prefetching Approach in Internet News Services", IEEE Transactions on Knowledge and Data Engineering, Volume 16, Issue 5, pages 601-611, May (2004).
[8] Ananta Charan Ojha and Sateesh Kumar Pradhan, "Fuzzy Linguistic Approach to Matchmaking in E-Commerce", 9th International Conference on Information Technology (ICIT '06), pages 217-220, December (2006).
[9] Alandjani, G. and Johnson, E. E., "Fuzzy routing in ad hoc networks", Performance, Computing, and Communications Conference, pages 525-530, 9-11 April (2003).
[10] Auld, T., Moore, A. W. and Gull, S. F., "Bayesian Neural Networks for Internet Traffic Classification", IEEE Transactions on Neural Networks, pages 223-239, January (2007).
[11] Bin Zhou and Mouftah, H. T., "Adaptive Shortest Path Routing in GMPLS-based Optical Networks Using Fuzzy Link Costs", Canadian Conference on Electrical and Computer Engineering, Volume 3, pages 1653-1657, 2-5 May (2004).
[12] Chang-Shing Lee, Zhi-Wei Jian and Lin-Kai Huang, "A Fuzzy Ontology and its Application to News Summarization", IEEE Transactions on Systems, Man and Cybernetics, pages 859-880, September (2004).
[13] A. Taleb-Ahmed, A. Bigand, V. Lethuc and P. M. Allioux, "Visual acuity of vision tested by fuzzy logic: An application in ophthalmology as a step towards a telemedicine project", Information Fusion, Volume 5, Issue 3, pages 217-230, September (2004).
ISSN No.:0975-3389
SELF-ASSEMBLY OF A 3D SUPRAMOLECULAR ARCHITECTURE
WITH GUANIDIUM LIGANDS AND DECAVANADATE UNITS
Dr. Katikaneani Pavani*
Assistant Professor, Department of Applied Sciences and Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Sangeeta Singla
Assistant Professor, Department of Applied Sciences and Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Reshu Sharma
II Semester, Department of Electronics and Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Pratima Sharma
II Semester, Department of Electronics and Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
______________________________________________________________________________________________________________________________
ABSTRACT
A new decavanadate supramolecular architecture with guanidium ligands, [C(N+H2)(NH2)2]5{HV10O28} (1), has been synthesized by the solution evaporation method and characterized by single-crystal X-ray diffraction, IR, and thermogravimetric analysis. The structure of 1 consists of guanidium ligands acting as counter cations to decavanadate anions, resulting in the formation of a 3D framework based on hydrogen-bonding contacts between the protonated guanidine cations and the decavanadate anions.
Keywords: decavanadates, crystal structure, hydrogen bonding, Vanadium
______________________________________________________________________________________________________________________________
1. INTRODUCTION
1.1 The role of vanadium in living organisms
Vanadium is an ultratrace metal essential to life, detected in quantities of about 15x10^-3 g per 75 kg in the human body [1]. Ultratrace elements are usually involved in enzymes or have some effect on their activity. Vanadium's biological effects result mainly from the ease with which it moves through its several oxidation states (-3 to +5) by one-electron transfer steps, acting as a cofactor in redox enzymes and in oxo-transfer enzymes [2]. The oxidation state +4 is the most stable under anaerobic conditions (such as the cellular cytoplasm), in particular because of the high stability of the vanadyl ion (VO^2+), while the state +5, in the form of vanadates (VO4^3-, VO3^- and VO2^+), is more stable under aerobic conditions [2].
1.2 Decavanadates
The very soluble vanadate ion (VO2^+) can, in acid conditions (pH below 6.3), in the proper concentration range, and after a series of hydrolysis-polymerisation reactions, precipitate as an orange crystalline decavanadate, (V10O28)^6- [3, 4]. The decavanadate anion can be represented as in Fig. 1, with six vanadium atoms in a plane, two above and two below, displaying a cage-like structure [5, 6].
*Corresponding Author
Fig. 1: Decavanadate anion
1.3 Decavanadates in biological systems and material sciences
Decavanadate and several other polyoxovanadates (not only of V(V) but also displaying mixed valences over a large range of V(V)/V(IV) ratios) have been prepared because of the growing interest in studying the biological properties of these systems. These properties can result from three different species: (a) the polyoxovanadate itself; several vanadates have been associated with the activity of certain human-body enzymes [1, 2]: the vanadate dimer H2V2O7^2- influences the activity of hydrogenases, isomerases and phosphatases [7], while the tetramer V4O12^4- is an inhibitor of dehydrogenases and aldolases [7, 8]. Decavanadate itself has been proposed as the active species in the photocleavage of myosin at the phosphate binding sites [9], as well as an activator of a 5'-nucleotidase from rat kidney [10] and a strong inhibitor of phosphofructokinase and other kinases, of Ca^2+-ATPase [11] and of muscle phosphorylase. (b) The vanadate ion H2VO4^-, produced by the hydrolysis of the polyoxovanadates under biological conditions [2]; its chemical and structural similarity to H2PO4^- explains the inhibitory effect of vanadate on many phosphate-metabolising enzymes [12]. (c) The ion VO^2+, resulting from the reduction of vanadate, also under biological intracellular conditions; it shows chemical similarities with Fe^3+ in storage and transport proteins [1, 2].
Given this scenario of instability under physiological conditions, biological activity of the polyoxovanadates (and decavanadates in particular) as such seems almost impossible. The half-life of V(V) in the intracellular medium has been estimated at approximately half an hour [13]. Another problem arises from the fact that, at physiological vanadium concentrations (about 0.5 mM), decavanadate should only be found in trace amounts. However, its presence in significant amounts can be explained by the existence of special cell compartments where vanadium can accumulate, and by the protection from decomposition that can result from hydrogen-bonding interactions with surrounding molecules such as proteins or macrocyclic ligands [2]. These interactions can result in the formation of a protective 'cage' of molecules that prevents the destruction of the decavanadates.
Decavanadate (V10O28^6-) has been shown to be an inhibitor of catalysis by bovine pancreatic ribonuclease A (RNase A) [14]. The interaction between RNase A and decavanadate has a coulombic component, as the affinity for decavanadate is diminished by NaCl, and binding is weaker to variant enzymes in which one (K41A RNase A) or three (K7A/R10A/K66A RNase A) of the cationic residues near the active site have been replaced with alanine. Decavanadate is thus the first oxometalate to be identified as an inhibitor of catalysis by a ribonuclease (Fig. 2). Surprisingly, decavanadate binds to RNase A with an affinity similar to that of the pentavalent organovanadate, uridine 2',3'-cyclic vanadate.
Fig. 2: Decavanadate is identified as an inhibitor of catalysis by a ribonuclease.
1.4 Guanidine
Guanidine is a strongly alkaline crystalline compound formed by the oxidation of guanine. It is used in the manufacture of plastics and explosives, and it is found in urine as a normal product of protein metabolism. The molecule (Fig. 3) was first synthesized in 1861 by the oxidative degradation of an aromatic natural product, guanine, isolated from Peruvian guano. Despite the provocative simplicity of the molecule, its crystal structure was first described only 148 years later. The guanidine group defines the chemical and physicochemical properties of many compounds of medical interest, and guanidine-containing derivatives constitute a very important class of therapeutic agents suitable for the treatment of a wide spectrum of diseases. Notable guanidinium salts include guanidine hydrochloride (GuHCl), which has chaotropic properties and is used to denature proteins. Empirically, guanidine hydrochloride is known to denature proteins, with a linear relationship between its concentration and the free energy of unfolding.
1.5 It is well established that when decavanadate-based solids are crystallized from aqueous solution (under either ambient or hydrothermal conditions) in the presence of organic amines (commonly referred to as structure directors), organic/inorganic hybrid solids are invariably formed [15, 16]. In many cases, the organic amines exert considerable structural influence, giving rise to a range of structures from zero-dimensional discrete clusters to 1D, 2D and 3D networks [17-19]. In this paper, we describe an interesting solid, [C(N+H2)(NH2)2]5{HV10O28}, 1, crystallized in the presence of guanidine.
Fig. 3: Guanidine
2. EXPERIMENTAL
Vanadium pentoxide (V2O5), sodium hydroxide (NaOH) and guanidine were obtained from Aldrich and used without further purification.
2.1 Synthesis. 1 was synthesized from a mixture of V2O5 (0.4547 g, 2.5 mmol), NaOH (1.2 g, 30 mmol) and guanidine hydrochloride (0.477 g, 5 mmol) dissolved in 40 mL of distilled water. The pH of the solution was adjusted to 5.0 with dilute HCl, and the orange solution was kept at room temperature (25°C) for crystallization. After a few weeks, orange rod-shaped crystals were filtered from the solution, washed with water followed by acetone, and dried in air.
2.2 Characterization. Room-temperature X-ray powder diffraction data were collected on a Bruker D8 Advance diffractometer equipped with a curved graphite single-crystal monochromator and a scintillation detector. TG analyses were carried out with a Perkin-Elmer TGA7 system on well-ground samples under a nitrogen flow at a heating rate of 10°C min^-1. 1 loses its organic groups in multiple steps. The weight loss up to 500°C corresponds to the loss of the organic groups and compares well with the composition determined from single-crystal X-ray analysis. In all cases the phase purity of the samples was established by simulating powder X-ray diffraction patterns on the basis of the single-crystal structure data.
2.2.1 X-ray crystallographic studies:
Single-crystal diffraction studies were carried out on a Bruker AXS SMART Apex CCD diffractometer with a MoKα (0.71073 Å) sealed tube at 28°C for 1. The software SADABS was used for absorption correction and SHELXTL for space group and structure determination and refinement [20, 21]. The vanadium atoms were located first, and the remaining atoms were deduced from subsequent difference Fourier syntheses. The hydrogen atoms were located using geometrical constraints. All atoms except H were refined anisotropically. Least-squares refinement cycles on F2 were performed until the model converged. Crystal data are provided in Table 1.
Formula: [C(N+H2)(NH2)2]5{HV10O28}, 1
Space group: P2(1)/n
a, Å: 11.837(4)
b, Å: 20.460(7)
c, Å: 14.752(5)
α, °: 90.00
β, °: 94.669(6)
γ, °: 90.00
V, Å^3: 3561(2)
Z: 4
dcalc, g·cm^-3: 1.972
µ(MoKα), cm^-1: 2.861
Diffractometer: Bruker Smart Apex CCD
Radiation: MoKα
T, K: 300(2)
Crystal size, mm: 2.31 x 1.25 x 0.77
2θmax, °: 56.60
No. measured reflections: 40976
No. unique reflections: 8694
No. observed reflections (I>2σI): 7694
No. refined parameters: 523
R1 (I>2σI): 0.0610
wR2 (all): 0.1458
Table 1: Crystallographic data of 1
3. RESULTS AND DISCUSSION
3.1 Crystal structure of 1.
The crystal structure of 1 reveals the presence of a discrete cluster anion, [HV10O28]^5-, of the type reported in the literature [15-19]. The framework contains a central {V6O12} core built of six edge-sharing VO6 octahedral units arranged in a 2 x 3 rectangular array; two VO6 units from above and two from below share the equatorial oxygens at the apices of the octahedra in the rectangle. The molecular symmetry of the idealized framework is D2h. Each V(V) atom in the V10O28^6- cluster has a distorted octahedral geometry, with V-O bond lengths in the range 1.592(3)-1.597(2) Å for terminal oxygens and 1.752(2)-2.340(2) Å for bridging oxygen atoms. The O-V-O angles for cis and trans bonds are in the ranges 74.8(1)-106.4(1)° and 155.1(1)-175.7(1)°, respectively. All bond angles and distances are consistent with the {V10O28}^6- clusters observed in previous reports [15-19]. Crystallographic studies reveal that each unit cell contains four decavanadate cluster anions, and each decavanadate anion is surrounded by five guanidium cations (Fig. 4). The double-bonded nitrogen atom of guanidine is protonated, since the lone pairs of the other two nitrogen atoms are involved in resonance. On the basis of the number of counter cations present per decavanadate cluster, it is inferred that the cluster anion is monoprotonated. We did not succeed in locating these protons either from the difference Fourier map or by a riding-atom model. Interestingly, the nitrogens of the organic molecule are directed towards the cluster oxygens, forming strong hydrogen bonds and resulting in a supramolecular 3D framework (Fig. 5).
Fig. 4: Decavanadate anion surrounded by guanidium cations
Fig. 5: Strong H-bonding between the organic cations and decavanadate anions, resulting in a supramolecular 3D framework
3.2 Chemistry of Formation of Decavanadates
The crystal structure of solid 1 reported here demonstrates the influence of supramolecular interactions in dictating a particular crystal packing. Rationalizing such network structures in terms of covalent bonds alone is difficult. However, we have made a systematic a posteriori analysis [22] in terms of reacting synthons (chemically reasonable molecules) participating in a supramolecular reaction (Fig. 6). In aqueous vanadate solution, the dominant building blocks for the formation of polyoxovanadate clusters are tetrahedral VO4, square-pyramidal VO5 and octahedral VO6. Above pH 7, metavanadate and cyclic metavanadate solids based on tetrahedral VO4 dominate; at pH ~5, the {V15O42} cluster occurs, as it requires tetrahedral VO4 in addition to VO5 and VO6 species as building blocks. Under ambient conditions in acidic medium, octahedral VO6 species predominate in the reaction medium. Consequently, {V10} cluster based solids are isolated in a large number of cases, similar to 1 observed in this study. At this stage, an a posteriori analysis of examples such as 1 only suggests the way molecules aggregate close to nucleation to optimize crystal packing.
Fig. 6: Supramolecular aggregation of vanadate species in solution to form decavanadate cluster
CONCLUSION
To conclude, the formation of the decavanadate-based salts isolated in the presence of organic templates is driven by the pH of the
crystallization medium. The high symmetry in the crystal packing suggests that nucleation is probably symmetry driven.
Molecular aggregation in solution is initially dominated by electrostatic interactions between cations (guanidinium) and anions
({V10} cluster). However, supramolecular interactions dictate the organization of ions such that favorable bonding takes
place, taking crystal packing effects into account as well. Crystal engineering will be more meaningful if we consider 'chemically
reasonable molecules' as building blocks and correctly address the weak interactions. Most interestingly, decavanadate is
found to play an important role in biological systems, and it is also well known that guanidine-containing derivatives
constitute a very important class of therapeutic agents suitable for the treatment of a wide spectrum of diseases. A
compound synthesized from these two components may therefore have a prominent role to play. We have succeeded in synthesizing the material; its
biological activity is yet to be explored.
REFERENCES
[1] D.E. Fenton, Biocoordination Chemistry, Oxford University Press (1995).
[2] D. Wang, Vanadium Complexes and Clusters for (Potential) Industrial and Medicinal Application, Ph.D. Thesis, University of Hamburg (2002).
[3] N.N. Greenwood, A. Earnshaw, Chemistry of the Elements, second ed., Butterworth-Heinemann, Leeds, UK (1998).
[4] F.A. Cotton, G. Wilkinson, C.A. Murillo, M. Bochmann, Advanced Inorganic Chemistry, sixth ed., Wiley, New York (1999).
[5] H.T. Evans, Inorg. Chem. 5, 967 (1966).
[6] V. Arrieta, Polyhedron 23, 3045 (1992).
[7] D.C. Crans, K. Sudhakar, T.J. Zamboborelli, Biochemistry 31, 6812 (1992).
[8] (a) D.C. Crans, E.M. Willing, S.R. Butler, J. Am. Chem. Soc. 112, 427 (1990). (b) D.C. Crans, in: A. Müller, M.T. Pope (Eds.), Polyoxometalates: From Platonic Solids to Anti-Retroviral Activity, Kluwer Academic Publishers, Dordrecht, The Netherlands, 399 (1993).
[9] (a) C.R. Cremo, J.C. Grammer, R.G. Yount, Meth. Enzymol. 196, 442 (1991). (b) I. Ringel, Y.M. Peyser, A. Muhlrad, Biochemistry 29, 9091 (1990).
[10] M. Hir, Biochem. J. 273, 795 (1991).
[11] D.C. Crans, C.M. Simone, R.C. Holz, L. Que, Jr., Biochemistry 31, 11731 (1992).
[12] P.J. Stankiewiez, A.S. Tracey, D.C. Crans (Eds.), Vanadium and its Role in Life, vol. 31 of Metal Ions in Biological Systems, chapter 9, Marcel Dekker, New York (1995).
[13] N.D. Chasteen, J.K. Grady, C.E. Hoolway, Inorg. Chem. 25, 2754 (1986).
[14] J.M. Messmore, R.T. Raines, Archives of Biochemistry and Biophysics 381, 25 (2000).
[15] J. Livage, Coord. Chem. Rev. 178, 999 (1998).
[16] W. Wang, F.L. Zeng, X. Wang, M.Y. Tan, Polyhedron 15, 265 (1996).
[17] S. Sharma, A. Ramanan, P.Y. Zavalij, M.S. Whittingham, Cryst. Eng. 4, 601 (2002).
[18] J. Thomas, M. Agarwal, A. Ramanan, N. Chernova, M.S. Whittingham, Cryst. Eng. Comm. 11, 625 (2009).
[19] K. Pavani, S. Upreti, A. Ramanan, J. Chem. Sci. 118, 159 (2006).
[20] G.M. Sheldrick, Acta Cryst. A46, 467 (1990).
[21] G.M. Sheldrick, SHELXTL-NT, version 6.12, Reference Manual, University of Göttingen, Germany (2000).
[22] A. Ramanan, M.S. Whittingham, Cryst. Growth Des. 6, 2419 (2006).
IMPLEMENTING “SYN” BASED PORT SCANNING IN
WINDOWS ENVIRONMENT
Vishal Bharti*
Assistant Professor, Department of Computer Science & Engineering
Dronacharya College of Engineering, Gurgaon -123506, India
Email: [email protected]
Hardik Suri
VIII Semester, Department of Computer Science & Engineering
Dronacharya College of Engineering, Gurgaon -123506, India
Email: [email protected]
_____________________________________________________________________________________________________________________________
ABSTRACT
One of the primary stages in penetrating/auditing a remote host is port scanning. It is one of the most popular techniques used to discover and
map services that are listening on a specified port. This paper presents a real-life scenario of implementing SYN-based port scanning, along with the various
issues involved either directly or indirectly with it. We have implemented the scheme in a Windows environment, whereas to date it had been implemented only in
UNIX/Linux environments. The results of our implementation are depicted in the form of screenshots. Finally, the paper also gives some
screenshots in the Behind the Scene section, showing the ICMP ECHO REQUEST packets sent to the target machine to see whether the target is alive, along with
the requests sent to get the target's MAC/physical address in the form of an ARP REQUEST packet.
Keywords: Port, Host, SYN, FIN, ICMP and ECHO.
______________________________________________________________________________________________________________________________
1. INTRODUCTION
A scan is a reconnaissance technique in which an attacker checks for an exploitable target by trying to determine attributes of
the target host, such as whether it is running, what services it is running, what the operating system is, and whether there are
exploitable vulnerabilities. Using this method an attacker can create a list of potential weaknesses and vulnerabilities in
the open ports and then use an exploit particular to the service offered on a port to gain access as 'root' or as a non-privileged user,
leading to exploitation and compromise of the remote host [4, 5].
1.1 Port Scanning
Port scanning can be performed using the following three methods:
• Open scanning method
• Half-open scanning method
• Stealth scanning method
1.1.1 Open Scanning Method
The following scan employs the open scanning method:
• TCP CONNECT() SCAN
This is the easiest and least stealthy port scanning technique, and it is detected even by the most poorly configured
firewalls/IDS, since a complete connection is established with each port being scanned. A full three-way handshake is
performed; once the handshake has taken effect, the connection is terminated by the client, allowing a new socket to be
created so that the next port can be checked. The TCP scanning technique employs the open scanning method [6].
1.1.2 Half Open Scanning Method
The following technique employs half open method:
* Corresponding Author
• SYN SCANNING
The half-open scan method is similar to the open scanning method except that, instead of a full three-way handshake, only the first two steps
are performed. Instead of the final ACK packet, the scanning machine closes the connection by sending an RST packet.
Elevated (root/administrator) privileges are needed, as the method requires creation of a custom SYN packet; the final RST packet is
generated automatically by the kernel. It is stealthier than the TCP connect() scan because logs saved by firewalls/IDS record SYN scanning as a failed
connection attempt; hence it can bypass a basic firewall/IDS.
1.1.3 Stealth Scanning Method
A number of techniques employ the stealth scanning method. It is generally a scanning method involving flags other than
SYN. Thus stealth scanning involves:
1. Setting individual flags (ACK, FIN, RST, ...)
2. No flags set (NULL)
3. All flags set
Some of the scanning techniques employing the stealth method are FIN, NULL, ACK, XMAS and Window scanning
[6]. More stealthy techniques were introduced to bypass firewalls; stealth scanning experiments with different flag combinations. These
techniques are effective in scanning UNIX/Linux machines but fail to scan Windows systems.
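As a concrete illustration of the flag settings listed above, a sketch in C++ (the SCAN_* names are ours; the individual flag bit values are the standard TCP ones from RFC 793):

```cpp
#include <cstdint>

// Standard TCP flag bits (RFC 793), one bit each in the flags byte.
constexpr uint8_t TCP_FIN = 0x01;
constexpr uint8_t TCP_SYN = 0x02;
constexpr uint8_t TCP_RST = 0x04;
constexpr uint8_t TCP_PSH = 0x08;
constexpr uint8_t TCP_ACK = 0x10;
constexpr uint8_t TCP_URG = 0x20;

// Flag bytes a scanner would place in its probe for each technique above.
constexpr uint8_t SCAN_SYN  = TCP_SYN;                      // half-open probe
constexpr uint8_t SCAN_FIN  = TCP_FIN;                      // single-flag stealth probe
constexpr uint8_t SCAN_ACK  = TCP_ACK;                      // single-flag stealth probe
constexpr uint8_t SCAN_NULL = 0x00;                         // no flags set
constexpr uint8_t SCAN_XMAS = TCP_FIN | TCP_PSH | TCP_URG;  // "all lit up"
```

Combining bits this way also yields the reply values used later in the paper: SYN|ACK = 0x12 and RST|ACK = 0x14.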
2. THE WORKING STEPS OF SYN PORT SCANNER
Following are the steps depicting the working of a SYN-based port scanner:
Step 1: An ICMP ECHO REQUEST packet is generated and sent to the target; in other words, we ping the target first.
An ICMP ECHO REPLY packet tells the scanning machine that the target is active, hence port scanning can be done.
One major point to keep in mind is that most computers running firewalls/IDS block ICMP traffic [7, 8, 9]; in such
scenarios Step 1 should be skipped, as our ICMP request packet would never reach the target.
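The checksum field of the ICMP header built in Step 1 is the standard Internet checksum of RFC 1071. A self-contained sketch (the 12-byte sample packet is illustrative, not the paper's actual buffer layout):

```cpp
#include <cstdint>
#include <cstddef>

// RFC 1071 Internet checksum, as used in the ICMP header of an ECHO REQUEST:
// sum the buffer as 16-bit big-endian words, fold carries, return the complement.
uint16_t inet_checksum(const uint8_t* data, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i + 1 < len; i += 2)
        sum += (static_cast<uint32_t>(data[i]) << 8) | data[i + 1];
    if (len & 1)                       // odd trailing byte, padded with zero
        sum += static_cast<uint32_t>(data[len - 1]) << 8;
    while (sum >> 16)                  // fold carries back into the low 16 bits
        sum = (sum & 0xFFFF) + (sum >> 16);
    return static_cast<uint16_t>(~sum);
}

// A receiver verifies by checksumming the datagram *including* the checksum
// field; a valid packet yields 0 (the folded sum is 0xFFFF before complement).
bool checksum_ok(const uint8_t* data, size_t len) {
    return inet_checksum(data, len) == 0;
}

// Self-check: fill the checksum of a sample ICMP ECHO REQUEST and verify it.
bool icmp_demo() {
    // type 8 (echo request), code 0, checksum 0, id 0x1234, seq 1, 4-byte payload
    uint8_t pkt[12] = {8, 0, 0, 0, 0x12, 0x34, 0x00, 0x01, 'p', 'i', 'n', 'g'};
    uint16_t c = inet_checksum(pkt, sizeof(pkt));
    pkt[2] = static_cast<uint8_t>(c >> 8);    // checksum stored big-endian
    pkt[3] = static_cast<uint8_t>(c & 0xFF);
    return checksum_ok(pkt, sizeof(pkt));
}
```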
Step 2: An ARP REQUEST packet is broadcast over the network to get the MAC/physical address corresponding to the target's IP
address. The system with the matching IP address sends an ARP REPLY packet to the scanning machine.
Step 3: After the MAC address is received, a TCP SYN packet is generated and the MAC/physical address is copied into the
Ethernet header of the TCP SYN packet.
Step 4: The TCP SYN packet is then sent over the network to the target's port. Receiving an ACK and SYN packet tells the
scanning machine that the port is open; an ACK and RST packet signifies a closed port; if no reply is received, the port is
filtered, in other words blocked by the firewall [12].
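The decision rule of Step 4 can be sketched as a small classifier over the flag byte of the reply (constants as in Table 1 of this paper; the function name is ours):

```cpp
#include <cstdint>
#include <string>

constexpr uint8_t ACK_SYN = 0x12;  // SYN+ACK reply: port open
constexpr uint8_t ACK_RST = 0x14;  // RST+ACK reply: port closed

// Step 4 decision rule: classify a port from the flag byte of the reply,
// or report "filtered" when no reply arrived before the timeout.
std::string classify_port(bool got_reply, uint8_t flags) {
    if (!got_reply)       return "filtered";   // silently dropped by a firewall
    if (flags == ACK_SYN) return "open";
    if (flags == ACK_RST) return "closed";
    return "unknown";
}
```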
3. IMPLEMENTATION
3.1 Technologies Used
Predefined C++ functions from the following header files are used in the algorithm:
• WINSOCK2.H
• WINDOWS.H
• STDIO.H
• PCAP.H
3.2 Important Functions and Terms
The following table gives a brief idea of the various terms and functions used in the implementation of our algorithm. The
description of the variables/functions, along with their values (sizes in bytes, unless stated otherwise) and the tasks they perform, is presented in Table 1.
VARIABLE      VALUE          USAGE
TCP_SYN       0x02           To initiate the connection
ACK_RST       0x14           Tells the client that the port is closed
ACK_SYN       0x12           Tells the client that the port is open
pSendBuffer   60             Buffer containing ICMP header values along with the payload
PAYLOAD       32             Garbage data sent along with the ICMP ECHO packet
ICMP[]        8              Buffer containing ICMP header values
N             user defined   Number of ports to be scanned
DESTMAC       6              MAC address field in the Ethernet header of the TCP SYN packet
destmac       6              Stores the target's MAC address
ARP[]         28             Buffer to store ARP header values
TCP[]         20             Buffer to store TCP header values

Table 1: Variables and Functions Used
3.3 Algorithm of the Half-Open/SYN Port Scanner
The implementation of our scheme is based on the following algorithm. The final outcome, in the form of screenshots, is shown in
the results section.
Algorithm:
START
Step 1: function PING(SOCKET x, SOCKADDR_IN y)
{
    Initialize the values of SOCKET(a, b, c).
        /* a = AF_INET, b = SOCK_RAW, c = IPPROTO_ICMP */
    Initialize the values of SOCKADDR_IN.
        /* y.sin_addr.S_un.S_addr = inet_addr((const char *)DestinationIP);
           y.sin_family = AF_INET;
           y.sin_port = rand(); */
    Initialize pSendBuffer[ICMP + PAYLOAD].
        /* memcpy(pSendBuffer, ICMP_HEADER + PAYLOAD, sizeof(ICMP_HEADER) + PAYLOAD) */
    Repeat the next step until N = 5:
        send the pSendBuffer packet.
        /* sendto(x, pSendBuffer, sizeof(ICMP_HEADER) + PAYLOAD, 0, (SOCKADDR *)y, sizeof(SOCKADDR_IN)) */
    Initialize recvfrom().
        /* function which decodes the incoming packet:
           recvfrom(sock, pRecvBuffer, 1500, 0, 0, 0) */
    If recvfrom() := FALSE, then
        { return EXIT. }
    else continue.
}
Step 2: function ARP_PACKET()
{
    Initialize ARP[] with ARP_HEADER values.
        /* memcpy((void *)ARP, (void *)htons(0x0001), 2)
           copies the hardware-type value into the header, and so on */
    function ARP_SEND(pcap_if_t *Device1)
    {
        Broadcast the ARP packet using pcap_sendpacket().
        /* pcap_open(Device1name, 65535, PCAP_OPENFLAG_PROMISCUOUS, 1, NULL, Error1)
           pcap_sendpacket(ARP, 60) */
    }
    Retrieve values from function pcap_next_ex().
        /* function to receive the ARP reply */
    Copy DESTMAC into destmac.
}
Step 3: function TCP_PACKET()
{
    Initialize TCP[] with TCP_HEADER values.
        /* memcpy((void *)TCP, (void *)TCP_SYN, 1) creates a TCP SYN packet */
    Copy destmac into TCP[].
        /* memcpy((void *)TCP, (void *)destmac, 6) */
    function TCP_SEND(pcap_if_t *Device2, N)
    {
        for I to N
            TCP_SEND(pcap_if_t *Device2, N - 1)
    }
    If recv_tcp() := TRUE, then
    {
        If FLAG := ACK_SYN, then print "Port open at: N"
        If FLAG := ACK_RST, then print "Connection refused at: N"
    }
}
STOP
4. OUTCOME OF SYN PORT SCANNER
Based on the implementation of our algorithm in the C++ language, we obtained several outputs, which are shown with the help of
screenshots.
1. The port scanner first shows all the available adapters/interfaces.
2. The selected adapter/interface is shown.
3. The source port through which the request packets are sent is displayed. For each port scan the source port is incremented randomly.
4. The target's IP address, on which the port scan is to be done, is typed in.
5. The port range to be scanned is entered.
6. Finally the port scan is carried out. The port scanner prints "open at port:" if an ACK-SYN packet is received, signifying
an open port, or prints "connection refused on port:" if an ACK-RST packet is received, signifying a closed port. If no
reply is received, the port scanner prints "port filtered", which signifies that the port is blocked by a firewall.
5. BEHIND THE SCENE
1. An ICMP ECHO REQUEST packet is sent to the target machine to see whether the target is alive, in other words to see
whether the target is online or not.
2. The target replies with an ECHO REPLY packet, which tells the scanning machine that the target is online.
3. Since the target's IP address, i.e. 192.168.1.1, is not enough to send a TCP packet, an ARP REQUEST packet is broadcast
over the network to get the target's MAC/physical address.
4. The target replies with its MAC/physical address to the scanning machine, i.e. 192.168.1.2.
5. The MAC address is then copied into the TCP-SYN packet generated by the scanning machine and sent over the network to the
target.
6. The target, i.e. 192.168.1.1, replies to the SYN packet by sending an ACK-RST packet, telling the scanning machine that
the port is closed.
SUMMARY
This paper is a step towards implementing SYN-based port scanning in a Windows environment, which to date had been
implemented only in UNIX/Linux environments. The result of our implementation of the scanning in a Windows
environment is a clear indication that our goals have been achieved. We have also enclosed some of the outcomes showing
what actually happens with the ICMP ECHO REQUEST and with the ARP REQUEST packet that is broadcast over the
network.
REFERENCES
[1] Bharti Vishal, Snigdh Itu, "Practical Development and Deployment of Covert Communication in IPv4", Journal of Theoretical and Applied Information Technology [JATIT], Vol. 4, No. 6, pp. 466-473, www.jatit.org (June 2008).
[2] Bharti Vishal, Bedi Harish, "A Novel Algorithm Based Design Scheme for Embedding Secret Message onto a Steganographic Channel", International Journal of Electronics Engineering [IJEE], Vol. 1, No. 2.
[3] Bharti Vishal, Solanki Kamna, "A Frequency Domain Manipulation based Approach towards Steganographic Embedding in Digital Images for Covert Communication", International Journal on Applied Engineering Research [IJAER], Vol. 4, No. 7.
[4] Bennieston Andrew J., "NMAP - A Stealth Port Scanner", http://www.nmap-tutorial.com
[5] Arpit Aggarwal, Ranveer Kunal, "A Comparison of Various Port Scanning Techniques".
[6] "Examining Port Scan Methods - Analyzing Audible Techniques", whitepaper by [email protected], Synnergy Networks.
[7] Vivo M., E. Carrasco, G. Isern, G.O. Vivo, "A Review of Port Scanning Techniques", ACM Computer Communications Review, 29(2):41-48 (Apr. 1999).
[8] Maimon Uriel, "Port Scanning without the SYN flag, TCP port Stealth Scanning", Phrack Magazine, Issue 49, article 15 (1996).
[9] Sanfilippo S., "New TCP Scan Method", Bugtraq mailing list archives, 18 Dec 1998.
[10] "Quantifying Computer Security", The Institute for Systems Research, v.5.29.07.
[11] Safley Dave, 2005-03-31, http://www.iptoolbox.net/index.php?action=artikel&cat=331846&id=30&artlang=en
[12] Afzal Naveed, "Host Fingerprinting and Firewalking With hping", APEC GIKI (2005).
[13] Cukier Michel, "Quantifying Computer Security", The Institute for Systems Research.
THEORY OF FLUORESCENCE AND PHOSPHORESCENCE DECAY
IN MAGNESIUM OXIDE (MgO) CRYSTALS
Dr. Smita Srivastava*
Associate professor, Department of Applied Sciences and Humanities
Dronacharya College of Engineering, Farrukh Nagar, Gurgaon, India
Email: [email protected]
Dr. S. K. Gupta
Professor and HOD, Department of Applied Sciences and Humanities
Dronacharya College of Engineering, Farrukh Nagar, Gurgaon, India
Email: [email protected]
Rashmi Verma
Lecturer, Department of Physics
Department of Physics, Bangalore City College, Bangalore, India
Email: [email protected]
______________________________________________________________________________________________________________________________
ABSTRACT
A theoretical approach is made to understand the processes of fluorescence and phosphorescence in magnesium oxide (MgO) crystals. The luminescence data
show that the decay kinetics of excited F and F+ centers are dominated by ionization from the excited state and charge recapture from traps, which include the H–
ion in thermochemically reduced MgO. The 390 nm (3.2 eV) band, obtained from the electronic decay of the F+ center, decays rapidly. The lifetime of the
540 nm (2.3 eV) band is material dependent and has been observed to vary from a fraction of a second to minutes. The cause of the long lifetime near room
temperature for the F-center bands is H– ions, which are present substitutionally for O2– ions. A good agreement is found between theoretical and
experimental results.
Keywords: Fluorescence, Phosphorescence, Luminescence
1. INTRODUCTION
The alkaline earth oxides MgO, CaO, SrO and BaO are the divalent cousins of the alkali halides. They have the face-centered
cubic rock-salt structure and their binding is dominantly ionic. MgO is an important material for understanding the electronic
structure of defects in ionic materials [1−2]. During the 1960s, single crystals of MgO with relatively low impurity levels
became increasingly available. Since then, the luminescence properties, the mechanism of defect creation and the formation of
color centers in these crystals have been matters of interest [3, 7−11]. This paper reports the theory of fluorescence and
phosphorescence decay in alkaline earth oxide crystals.
2. THEORY
Consider the schematic configuration-coordinate diagram for the F, F+ (F*) and F2+ (F+*) centers in MgO
crystals reported in [3], shown in Fig. 1. Jeffries et al. [5] observed that the cause of the long lifetime near room temperature is
the presence of H– ions, which serve as metastable traps for electrons excited out of F and F+ centers. The trap and defect
levels for the MgO crystal are reported in [6] and shown in Fig. 2. In the present case, consider three states: the first is the
ground state, very near the valence band, in which the F and F+ centers are present; the second is the metastable state
at the H– center near the conduction band; and the third is the excited state, very near the conduction band.
Here the ground state is denoted by 1, the metastable state by 2 and the excited state by 3. The probabilities
of the non-radiative transitions are P13, P12 and P23, and of the radiative transitions P32, P21 and P31. The level populations
satisfy the balance equations:
*Corresponding Author
Fig. 1: Schematic configuration-coordinate diagrams for the F, F+ and F2+ centers in MgO.
Fig. 2: Schematic representation of the defect and trap levels (e.g. F, F+, H–) for F and F+ luminescence.
\[ \frac{dn_1}{dt} = -n_1(P_{12} + P_{13}) + n_2 P_{21} + n_3 P_{31} \tag{1} \]
\[ \frac{dn_2}{dt} = -n_2(P_{21} + P_{23}) + n_1 P_{12} + n_3 P_{32} \tag{2} \]
\[ n_1 + n_2 + n_3 = n \tag{3} \]
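A quick numerical sanity check of the balance equations (the transition probabilities below are illustrative values, not parameters fitted to MgO): forward-Euler integration of (1), (2) and the corresponding equation for level 3 must conserve the total population, which is the content of constraint (3).

```cpp
#include <array>

// Forward-Euler integration of the three-level balance equations.
// P[i][j] is the transition probability from level i to level j (1-based
// indices, so index 0 is unused); values are illustrative only.
struct ThreeLevel {
    double P[4][4] = {};

    std::array<double, 4> evolve(std::array<double, 4> nlev,
                                 double dt, int steps) const {
        for (int s = 0; s < steps; ++s) {
            double d1 = -nlev[1] * (P[1][2] + P[1][3]) + nlev[2] * P[2][1] + nlev[3] * P[3][1];
            double d2 = -nlev[2] * (P[2][1] + P[2][3]) + nlev[1] * P[1][2] + nlev[3] * P[3][2];
            double d3 = -nlev[3] * (P[3][1] + P[3][2]) + nlev[1] * P[1][3] + nlev[2] * P[2][3];
            nlev[1] += d1 * dt;  // every gain term in one rate appears as a
            nlev[2] += d2 * dt;  // loss term in another, so d1 + d2 + d3 = 0
            nlev[3] += d3 * dt;  // and n1 + n2 + n3 stays equal to n
        }
        return nlev;
    }
};
```

Starting from n = (1, 0, 0) with any non-negative probabilities, the total population remains 1 to machine precision at every step, while n2 and n3 build up through the 1 → 3 → 2 pathway.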
On solving these, we find that
\[ n_1 = C_1\, e^{-t/\tau_1} + C_2\, e^{-t/\tau_2} + \frac{\alpha_1}{\Delta}\, n \tag{4} \]
\[ n_2 = C_1\, \frac{P_{12} + P_{13} + P_{31} - 1/\tau_1}{P_{21} - P_{31}}\, e^{-t/\tau_1} + C_2\, \frac{P_{12} + P_{13} + P_{31} - 1/\tau_2}{P_{21} - P_{31}}\, e^{-t/\tau_2} + \frac{\alpha_2}{\Delta}\, n \tag{5} \]
\[ n_3 = C_1\, \frac{1/\tau_1 - P_{21} - P_{12} - P_{13}}{P_{21} - P_{31}}\, e^{-t/\tau_1} + C_2\, \frac{1/\tau_2 - P_{21} - P_{12} - P_{13}}{P_{21} - P_{31}}\, e^{-t/\tau_2} + \frac{\alpha_3}{\Delta}\, n \tag{6} \]
Here C1 and C2 are integration constants, and we have used the notation

\[ \Delta = P_{12}(P_{23} + P_{31} + P_{32}) + P_{21}(P_{13} + P_{31} + P_{32}) + P_{13}(P_{23} + P_{32}) + P_{23}P_{31} \]
\[ \alpha_1 = P_{21}(P_{31} + P_{32}) + P_{31}P_{23}, \quad \alpha_2 = P_{12}(P_{31} + P_{32}) + P_{32}P_{13}, \quad \alpha_3 = P_{23}(P_{12} + P_{13}) + P_{21}P_{13} \tag{7} \]

The quantities τ1 and τ2 are related to the transition probabilities by the formulae

\[ \frac{1}{\tau_1} = b + \sqrt{b^2 - \Delta}, \qquad \frac{1}{\tau_2} = b - \sqrt{b^2 - \Delta} \tag{8} \]

where \( 2b = P_{12} + P_{21} + P_{13} + P_{31} + P_{23} + P_{32} \).
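A small consequence of (8) worth making explicit, since it is used implicitly later when 1/(τ1τ2) is replaced by Δ: the decay rates 1/τ1 and 1/τ2 are the two roots of the quadratic x² − 2bx + Δ = 0, so their sum and product follow at once.

```latex
\frac{1}{\tau_1}\cdot\frac{1}{\tau_2}
  = \left(b + \sqrt{b^2-\Delta}\,\right)\left(b - \sqrt{b^2-\Delta}\,\right)
  = b^2 - (b^2-\Delta) = \Delta,
\qquad
\frac{1}{\tau_1} + \frac{1}{\tau_2} = 2b .
```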
To obtain the equilibrium distribution over the levels, all the external exciting radiation fluxes must be equated to zero and
the value of t must be so large that all the time-dependent terms vanish; we then find from equations (4) to (8) that

\[ n_1^0 = \frac{n}{1 + e^{-h\nu_{21}/kT} + e^{-h\nu_{31}/kT}} \tag{9} \]
\[ n_2^0 = \frac{n\, e^{-h\nu_{21}/kT}}{1 + e^{-h\nu_{21}/kT} + e^{-h\nu_{31}/kT}} \tag{10} \]
\[ n_3^0 = \frac{n\, e^{-h\nu_{31}/kT}}{1 + e^{-h\nu_{21}/kT} + e^{-h\nu_{31}/kT}} \tag{11} \]
Here we have used the relation \( P_{13}^0 = P_{31}\, e^{-h\nu_{31}/kT} \), and the superscript 0 denotes level populations in
thermodynamic equilibrium. In the presence of external excitation, the constant terms in (4) to (6) give the stationary distribution over the levels:
\[ n_1^{st} = \frac{n}{\Delta}\,\left[P_{21}(P_{31} + P_{32}) + P_{31}P_{23}\right] \tag{12} \]
\[ n_2^{st} = \frac{n}{\Delta}\,\left[P_{12}(P_{31} + P_{32}) + P_{32}P_{13}\right] \tag{13} \]
\[ n_3^{st} = \frac{n}{\Delta}\,\left[P_{23}(P_{12} + P_{13}) + P_{21}P_{13}\right] \tag{14} \]
The level populations do not vary exponentially immediately after the excitation has been switched on or off; for two levels or
states, the mean lifetime of the excited state is equal neither to τ1 nor to τ2. If τ1 and τ2 are of the same order of magnitude,
the decay cannot be separated into distinct components; conversely, if τ2 >> τ1, the luminescence after-glow of the system can be naturally divided into two components, namely,
a short-period luminescence which falls off practically as \( e^{-t/\tau_1^0} \) and a long-period luminescence which falls off as \( e^{-t/\tau_2^0} \). In
such cases one speaks of fluorescence and phosphorescence, with \( \tau_1^0 \) and \( \tau_2^0 \) the corresponding time constants.
0
Determining the value of
0
τ1
and
0
τ2
0
for the electrons, present in the metastable state, we consider the case.
P210 + P230 << P310 + P320
In the absence of excitation (external) and when we assume that P210 + P230 = 0 so that from equation (9) and (10). We have
1
τ
0
1
1
τ 20
≈ P310 + P320
≈ P210 + P230 (1 − P320 τ 10 )
(15)
It is evident from these formulae that the duration of fluorescence is determined by the probabilities of transition from the third to
the first and second levels. The duration of phosphorescence depends on all the transition probabilities and is of the order of
\( 1/(P_{21}^0 + P_{23}^0) \).
It should be noted that while the steady state is being established, the quantities τ1 and τ2 may be complex. In particular, when

\[ P_{12} = P_{23} = 0, \qquad P_{21} = P_{31} + P_{32} + P_{13}, \]

the square root is purely imaginary,

\[ \sqrt{b^2 - \Delta} = i\,\sqrt{P_{13} P_{32}}, \]

and τ1 and τ2 are complex. This means that under certain conditions of excitation the level populations may undergo periodic
pulsations.
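To see why complex τ implies pulsations (this step is our gloss on the text, not part of the original): when b² − Δ < 0 the two rates become 1/τ = b ± iω with ω = √(Δ − b²), and the time-dependent terms in (4)–(6) oscillate under an exponential envelope.

```latex
e^{-t/\tau} = e^{-(b \pm i\omega)t}
            = e^{-bt}\left[\cos(\omega t) \mp i\,\sin(\omega t)\right],
\qquad \omega = \sqrt{\Delta - b^{2}} .
```

The physically relevant real linear combinations of the two terms therefore relax as damped oscillations of angular frequency ω, which is the periodic pulsation of the level populations referred to above.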
The model considered here is a three-level model in which one of the levels is metastable, very
similar to Jablonski's model. In this case, the emission produced as a result of the 1 → 3 (F → F*) and 3 → 1 (F* → F)
transitions occurring in immediate succession is called fluorescence. Since it is rapidly damped out after the excitation is
switched off, it can also be referred to as short-period emission.
An electron occupying the metastable level can return to the normal state in two ways: if the probability of the 2 → 3
transition is not zero, it may return to the labile level and then undergo the 3 → 1 transition, or it can undergo a direct transition
from the metastable to the normal state. Two forms of phosphorescence correspond to these two possible ways in
which the electron returns to the normal state; they are called α and β phosphorescence, respectively.
After excitation has been switched off, the third and the second level populations are given by
\[ n_3^{dec}(t) = (1 - \rho)\, n_3^{st}\, e^{-t/\tau_1^0} + \rho\, n_3^{st}\, e^{-t/\tau_2^0} \tag{16} \]
\[ n_2^{dec}(t) = -\frac{\tau_1^0}{\tau_2^0 - \tau_1^0}\, n_2^{st}\, e^{-t/\tau_1^0} + \frac{\tau_2^0}{\tau_2^0 - \tau_1^0}\, n_2^{st}\, e^{-t/\tau_2^0} \tag{17} \]
\[ \rho = \frac{\tau_1^0 \tau_2^0}{\tau_2^0 - \tau_1^0} \cdot \frac{\left(\dfrac{1}{\tau_1^0} - P_{31}^0\right)\left(P_{23}^0 + P_{21}^0\right) - P_{32}^0 P_{21}^0}{P_{23}^0 + P_{21}^0} \approx \frac{\tau_1^0\, P_{23}^0 P_{32}^0}{P_{23}^0 + P_{21}^0} \tag{18} \]
The numbers of electrons in the second and third levels under stationary conditions can readily be found from equations (13)
and (14) by setting P12 = 0 and recalling that excitation is carried out at the frequency ν31 (P13 = B13 u31), where uij denotes the
density of radiation.
\[ n_3^{st} = \frac{n}{\Delta}\,\left(P_{23}^0 + P_{21}^0\right) B_{13} u_{31} \tag{19} \]
\[ n_3^{st} = n\,\tau_1 \tau_2\,\left(P_{23}^0 + P_{21}^0\right) B_{13} u_{31} \tag{20} \]
\[ n_2^{st} = \frac{n}{\Delta}\, P_{32}^0\, B_{13} u_{31} = n\,\tau_1 \tau_2\, P_{32}^0\, B_{13} u_{31} \tag{21} \]

where
\[ \frac{1}{\tau_1 \tau_2} = \Delta = P_{31}^0\left(P_{23}^0 + P_{21}^0\right) + P_{21}^0 P_{32}^0 + \left(2P_{23}^0 + 2P_{21}^0 + P_{32}^0\right) B_{13} u_{31} \]
0
The quantities τ 1 and τ 2 determine the duration of florescence and phosphorescence and are related to the transition
probabilities by (8) or approximately by (15).
Usually, τ 1 is of the order of 10–8 sec. While τ 2 is of the order of 1/10 sec of few minutes. It follows that the first terms of
equation (16) and (17) vanishes a time of 10–6 sec and thereafter number of photons emitted as α and β phosphorescence per
unit time is given by.
\[ N_{phos}^{\alpha} = \rho\, n_3^{st} A_{31}\, e^{-t/\tau_2^0} = n\, P_{23}^0 P_{32}^0\, \tau_1^0\, \tau_1 \tau_2\, A_{31} B_{13} u_{31}\, e^{-t/\tau_2^0} \tag{22} \]
\[ N_{phos}^{\beta} = \frac{\tau_2^0}{\tau_2^0 - \tau_1^0}\, n_2^{st} A_{21}\, e^{-t/\tau_2^0} = n\, P_{32}^0\, \tau_1 \tau_2\, A_{21} B_{13} u_{31}\, e^{-t/\tau_2^0} \tag{23} \]

where Aji and Bij are the emission and absorption coefficients.
It is easy to see that the time constants for the α and β phosphorescence after-glow are equal and are determined by the mean
lifetime of electrons in the metastable level. In Fig. 3, the emission spectra of F and F+ centers are shown [4]. The present
theoretical discussion of lifetimes is consistent with these experimental results. Fig. 4 shows the graph of normalized emission
intensity versus time, i.e. the luminescence decay at 260 K of the 2.3 eV band in thermochemically reduced MgO.
Fig. 3: Emission spectra of F and F+ centres
Fig. 4: Luminescence decay (phosphorescence) for the 2.3 eV band in thermochemically reduced MgO.
3. DISCUSSION
The above theoretical model can be discussed with respect to the excitation of F, F+, F2+ and H– centers. Most of the F and F+
centers are situated at energy level 1. After absorbing an energy of 5 eV, the F center is excited as F → F*; the F* center
can easily be ionized, releasing an electron and forming an F+ center. This F+ center is now at the ground state (1); the F+ center
also absorbs 5 eV of energy and goes to the excited state F+*, and again F+* releases an electron and an F2+ center is formed. At this stage
a hole is formed near the valence band, which is responsible for the V-center near the valence band. There are two possibilities
for the transition of electrons in the conduction band:
Some of the electrons may pass directly from state 3 to state 1, which gives rise to a short-lifetime luminescence,
called fluorescence. The color of this luminescence is green, with an energy of the order of 2.3 eV. Its lifetime is
approximately 10⁻⁸ s, calculated from the formula

\[ \frac{1}{\tau_1^0} = P_{31}^0 + P_{32}^0 \]

When electrons jump from state 3 to state 2, the emitted energy is taken up by lattice vibrations in the form of heat. But when the
electrons jump from state 2 to state 1, luminescence is produced whose lifetime is of the order of 0.1 s to a few minutes.
This is due to the presence of the H– ion in the case of MgO. The color of this luminescence is blue and its energy is 3.2 eV.
This process is called phosphorescence, and the mean lifetime in this case can be calculated from the formula
\[ \frac{1}{\tau_2^0} = P_{21}^0 + P_{23}^0\left(1 - P_{32}^0 \tau_1^0\right) \]
and the numbers of photons emitted as α and β phosphorescence per unit time are given by

\[ N_{phos}^{\alpha} = \rho\, n_3^{st} A_{31}\, e^{-t/\tau_2^0} = n\, P_{23}^0 P_{32}^0\, \tau_1^0\, \tau_1 \tau_2\, A_{31} B_{13} u_{31}\, e^{-t/\tau_2^0} \]

and

\[ N_{phos}^{\beta} = \frac{\tau_2^0}{\tau_2^0 - \tau_1^0}\, n_2^{st} A_{21}\, e^{-t/\tau_2^0} = n\, P_{32}^0\, \tau_1 \tau_2\, A_{21} B_{13} u_{31}\, e^{-t/\tau_2^0} \]
CONCLUSIONS
The three-energy-level model of luminescence is discussed theoretically. It is found that there may be two decay times for the
luminescence, one short and the other comparatively longer. The theory is
compared with the experimental observations of luminescence decay in MgO crystals, where one decay time is 10⁻⁸ s
and the other is 0.1 s to a few minutes, depending on the impurity present in the MgO crystal. A good
agreement is found between the theoretical and experimental observations.
REFERENCES
[1] Henderson B., CRC Crit. Rev. Solid State Mater. Sci. 9, 1 (1981).
[2] Henrich V.E., Rep. Prog. Phys. 48, 1481 (1985).
[3] Wang R.S. and Holzwarth N.A.W., Phys. Rev. B 5, 3211 (1990).
[4] Ballesteros C., Piqueras J. and Gonzalez Z., Phys. Stat. Sol. (a) 83, 645 (1984).
[5] Jeffries B.T., Gonzalez R., Chen Y. and Summers G.P., Phys. Rev. B 25, 2077 (1982).
[6] Rosenblatt G.H., Rowe M.W., Williams G.P. Jr, Williams R.T. and Chen Y., Phys. Rev. B 39, 10309 (1989).
[7] Chandra B.P., in D.R. Vij (Ed.), Luminescence of Solids, Plenum Press, New York (1998).
[8] Molotskii M.I., Sov. Sci. Rev. Chem. 13, 1 (1989).
[9] Xu C.N., Zheng X.G., Akiyama M., Nonaka K., Watanabe K., Appl. Phys. Lett. 76, 179 (2000).
[10] Akiyama M., Xu C.N., Liu Y., Nonaka K., Watanabe T., J. Lumin. 97, 13 (2002).
[11] Verma Smita, "Theoretical Studies on the Deformation Induced Electronic Excitation in Alkaline Earth Oxide Crystals", Ph.D. Thesis, Rani Durgawati University, Jabalpur (1996).
OPTIMIZING FINANCIAL TRADING SYSTEMS BY INTEGRATING
COMPUTER BASED MODELLING IN VIRTUAL REALITY
ENVIRONMENT
Kevika Singla*
Sr. Software Engineer
Royal Bank of Scotland, Gurgaon-122001, India
Email: [email protected]
Aakash Gupta
Associate Consultant
Mckinsey & Company, Gurgaon-122001, India
Email: [email protected]
Dr. B.M.K Prasad
Principal, Dronacharya College of Engineering
Gurgaon-123506, India
E-mail: [email protected]
______________________________________________________________________________________________________________________________
ABSTRACT
The financial industry is witnessing major changes. The financial market is shifting from an old to a new trading model that introduces major structural
changes to the market and new roles for market participants. In all these developments, there is a central role for human intelligence that can potentially
influence the pattern of change and direct appropriate decisions in adapting to change. The principal aim of this paper is to introduce new principles for
computer-based modelling – Empirical Modelling (EM) – that can potentially address these concerns. The key idea in EM is to establish a more intimate
relationship between the computational activity and the human action and interpretation associated with each situation. This paper discusses the prospects for
developing new environments for Virtual Trading that combine Virtual Reality (VR) modelling with a new approach to computer-based modelling that
engages the participants with electronic components and the external world simultaneously, operating through interfaces that complement each other. It also
addresses the problem of developing software that takes into account the human factor, the integration of the social and technical aspects, human insight, the
experiential and situated aspects, and group social interaction. The aim is to complement the power of the computer to automate action by enabling
participants to intervene intelligently when human interpretation and discretion is required as input, or singular conditions arise.
Keywords: Empirical Modelling (EM), Virtual Reality (VR), Trading, Interactive situation models (ISMs), LSD
____________________________________________________________________________________________________________________________
1. INTRODUCTION
1.1 Traditional stock exchanges are witnessing major structural changes due to increased competition from alternative
trading systems and Electronic Communication Networks and rising investors’ demand and financing needs. The old trading
model adopted by traditional exchanges is no longer adequate and new trading models are being introduced, revolutionising
old execution, clearing and settlement processes. These developments impact on the behaviour of all market participants
(investors, brokers, dealers, and market makers) and are reshaping the financial market microstructure in terms of transaction
cost, bid-ask spread, price volatility, trading volume, information effect, and best execution price. Exploring an ideal trading
system minimizing transaction cost and increasing market efficiency is a major concern in the area of financial market
microstructure. Interaction in a trading environment is particularly subtle and complex and traditional mathematical models
are not sufficient for such applications, where human behaviour is of paramount importance.
1.2 Virtual Reality, with its orientation towards immersing the human actor in a computer-generated environment, is potentially much better suited to modelling state where human activity is central. VR's capacity to handle objects and their
properties, to allow user immersion, and to emulate observation of the real-world using a 3D graphical display, make it an
obvious candidate for application in this field. This paper proposes new principles for the development of environments for
virtual trading to deliver VR using an approach to computer-based modelling known as Empirical Modelling (EM). The
potential applications of EM to virtual collaboration [1] are described with reference to illustrations drawn from the context of
an agent-oriented analysis of Online Trading. First, the paper overviews EM and VR and their role in constructing environments for virtual financial trading. The paper then discusses the challenges of adopting VR technology to model complicated
social environments (such as virtual trading) and proposes the merging of the conceptual framework of the Empirical
Modelling approach with the VR design and construction by discussing an Options Trading case study of a monopoly dealer.
The paper concludes with our findings about the use of VR for modelling a social context such as virtual financial trading.
2. EM BACKGROUND
2.1 Empirical Modelling (EM) is a novel approach to computer-based modelling based on the concepts of Observation, Dependency and Agency [8]. The EM project has developed various software tools to support the modelling activity. Currently, the main tool is tkeden, an implementation of Eden written in C and Tcl/Tk.
Main features:
• contributes principles for software integration and virtual collaboration in the financial enterprise;
• offers a novel modelling approach adapted to the new trading model in the financial market, with computer-based support for distributed financial engineering;
• contributes principles for a closer integration of software system development and financial research development activities;
• represents and analyses systems in a way that can address the complexity of the interaction between programmable components and human agents.
The central concepts behind EM are definitive (definition based) representations of state, based on the use of families of
definitions called definitive scripts, and agent oriented analysis and representation of state-transitions. Empirical Modelling
techniques involve an analysis that is concerned with explaining a situation with reference to agency and dependency, and the
construction of a complementary computer artefact – an interactive situation model (ISM) – that metaphorically represents
the agency and dependency identified in this process of construing. The modelling activity is open-ended in character, and an
ISM typically has a provisional quality that is characteristic of a current – and in general partial and incomplete – explanation
of a situation. The identification of agency and dependency often exploits previous knowledge and experience, but can be
thought of as being derived in essence through observation and experiment. Unlike a closed-world computer model with a
fixed interface and preconceived human interaction, an ISM is always open to elaboration and unconstrained exploratory
interaction. Empirical Modelling notations include LSD – a special-purpose notation that has been introduced to classify
observables and dependency in agent interaction.
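The idea of a definitive script, in which each observable is given by a definition and dependent values update when a definition changes, can be sketched in Python. This is a toy illustration only, not the tkeden tool itself; all names are hypothetical:

```python
# Toy definitive script: observables are defined either as constants or as
# formulas over other observables, so dependent values stay up to date.
class Script:
    def __init__(self):
        self.defs = {}  # observable name -> constant or formula (callable)

    def define(self, name, formula):
        """Add or redefine an observable; dependents see the change."""
        self.defs[name] = formula

    def value(self, name):
        d = self.defs[name]
        # A formula receives the script so it can read other observables.
        return d(self) if callable(d) else d

s = Script()
s.define("bid", 51.0)
s.define("ask", 53.0)
s.define("spread", lambda sc: sc.value("ask") - sc.value("bid"))
print(s.value("spread"))  # 2.0
s.define("bid", 51.5)     # a redefinition, as performed through an agent's handle
print(s.value("spread"))  # 1.5 -- the dependency propagates automatically
```

Redefining `bid` plays the role of an agent action; the dependency `spread` is never assigned directly, mirroring the indivisible relationships the text calls derivates.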
2.2 An LSD account is a classification of observables from the perspective of an external observer, detailing where
appropriate: the observables whose values can act as stimuli for an agent[6] (its oracles); which can be redefined by the agent
in its responses (its handles); those observables whose existence is intrinsically associated with the agent (its states); those
indivisible relationships between observables that are characteristic of the interface between the agent and its environment
(its derivates); and what privileges an agent has for state-changing action (its protocol).
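For illustration, the five LSD categories enumerated above can be recorded as a simple classification structure. This is a hypothetical sketch; the field names follow the paragraph, and the sample entries for a broker are invented:

```python
# Hypothetical record of an LSD account: each field lists the names of
# observables (or actions) playing that role for one agent.
from dataclasses import dataclass, field

@dataclass
class LSDAccount:
    agent: str
    oracles: list = field(default_factory=list)    # stimuli the agent can respond to
    handles: list = field(default_factory=list)    # observables the agent may redefine
    states: list = field(default_factory=list)     # observables tied to the agent's existence
    derivates: list = field(default_factory=list)  # indivisible dependencies at the interface
    protocol: list = field(default_factory=list)   # privileged state-changing actions

broker = LSDAccount(
    agent="broker",
    oracles=["info_requested"],
    handles=["quote_request"],
    states=["stage_in_retail_trade"],
    derivates=["stage_in_retail_trade = F(info_requested, ...)"],
    protocol=["if info_requested: request quotes from quote information system"],
)
print(broker.agent, broker.oracles)
```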
3. VR BACKGROUND
3.1 Virtual reality (VR) [9] is a term that applies to computer-simulated environments that can simulate places in the real
world as well as in imaginary worlds. Most current virtual reality environments are primarily visual experiences, displayed
either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory
information, such as sound through speakers or headphones. Users can interact with a virtual environment or a virtual artefact
(VA) either through the use of standard input devices such as a keyboard and mouse, or through multimodal devices. The
simulated environment can be similar to the real world. Virtual reality tools [5] and technologies supply virtual environments that have key characteristics in common with our physical environment: a combination of computer and interface devices (goggles, gloves, etc.) presents the user with the illusion of being in a three-dimensional world of computer-generated objects. Viewing and interacting with 3D objects is closer to reality than abstract mathematical and 2D representations of the
real world. In that respect virtual reality can potentially serve two objectives: (a) reflecting realism through a closer
correspondence with real experience, and (b) extending the power of computer-based technology to better reflect “abstract”
experience (interactions concerned with interpretation and manipulation of symbols that have no obvious embodiment e.g.
share prices, as contrasted to interaction with physical objects).
The main motivation for using VR to achieve objective (a) is cost reduction (e.g. it is cheaper to navigate a virtual
environment depicting a physical location such as a theatre, a road, or a market, than to be in the physical location itself), and
more scope for flexible interaction (e.g. interacting with a virtual object depicting a car allows more scope for viewing it from
different locations and angles). Objective (b) can be better targeted because the available metaphors embrace representations in 3D space. The user's body and mind integrate with this scene. This frees the intuition, curiosity and intelligence of the user
in exploring the state of the scene. In a real context, agents intervene to change the state of current objects/situations (e.g. a
dealer acts as an agent in changing bid/ask quotes and so affects the flow of buyers and sellers). In virtual reality, however, we participate actively in a non-linear story; we are part of the plot. How the story evolves depends on what we do and when we do it. Other names for the concept of virtual reality include "artificial reality", "augmented reality", and "telepresence".
3.2 Virtual Reality Applications: Today and Tomorrow [10]
• Architecture and construction
Virtual reality is already showing its potential in the architecture and construction industries. A building can be created as a
navigable, interactive, and immersive experience while still being designed, so that both architect and client can experience
the structure and make changes before construction begins.
• Art
Virtual reality will change the conception of what constitutes art. One can travel into a virtual painting, interact with its
elements, change them, enter a sculpture gallery and interact with the art pieces.
• Business
Those companies trading on various stock markets globally will require this virtual-reality application to identify trends and
make trades more rapidly. Their work will be much like playing a large and complex video game.
• Disabilities
Virtual reality is being used experimentally to assess the accessibility of buildings for people with disabilities. People with disabilities will be able to visit new areas virtually before they visit them in the everyday world.
• Education and training
Interactive computing and communications technology are ushering in a new era of education in which students and teachers,
separated by a distance, are conducting research and performing experiments through high-speed connections that will
eventually incorporate VR.
• Engineering
Engineers of all descriptions are already using virtual reality simulations to create and test prototypes. In the future, nearly
every engineering pursuit will use virtual-reality prototypes so that designs can be shared, evaluated, and modified with input
from both co-workers and customers.
• Entertainment
Virtual reality is already being applied in entertainment. Location-based entertainment centres are cropping up in major cities
around the globe and travelling virtual-reality entertainment shows are on the road.
• Marketing
Virtual reality is just beginning to be used by companies who want customers to experience their products and to understand
them better. They've found that a new technology, such as virtual reality, draws people to their exhibits and involves them
with a product much more than standard displays.
• Medicine
Virtual reality has been used to study blood circulation in a simulated heart, treating the heart wall as a set of fibers immersed in fluid and responding to both fluid forces and tension forces.
• Military
One of the first applications of virtual reality was in flight simulators. Today, these applications are used not only for aircraft
simulation, but also for ships, tanks, and infantry maneuvers.
• Religion
At present, religion does not seem to be making much use of virtual reality. In the future, we can expect to see an array of
religious experiences via virtual reality. Even more profound mystical experiences, such as the prophecies of Ezekiel or a
revelation from eastern religions, could be created in virtual worlds.
• Manufacturing and industrial design
Major companies are maintaining their global competitiveness by designing products better and faster with virtual reality.
General Motors is a good example here.
• Environmental monitoring
For a glimpse of the complex circulation dynamics and ecosystems beneath the sea surface, researchers are simulating
estuarine systems of the Chesapeake Bay.
• Weather forecasting
Atmospheric scientists are also turning to advanced computing tools and virtual environments to calculate the behaviour of
more local disturbances, particularly thunderstorms that spawn tornados.
• Cosmology
One of the largest, 3-dimensional simulations of the universe is helping scientists refine theories about the origins of galaxies.
By digitally altering the mix of stellar gas, ordinary matter, and dark matter created soon after the Big Bang, cosmologists are
searching for the correct formula for replicating the universe as it exists.
• Material sciences
Researchers are modelling the more than 400 different hydrogen-nitrogen chemical reactions in an internal combustion
engine to design cooler, more efficient car engines.
3.3 The motivation for integrating EM with VR
The main objective in using VR for virtual trading environments is enhanced cognition of financial market phenomena; in computer-aided assembly [7], the main objective is to minimise the need for building physical prototypes. From
an EM viewpoint, sharing explanations and understanding is the key to effective virtual collaboration. Our paper aims to
indicate ways in which EM can be used to investigate how human and automatic agents can cooperate through patterns of
work flow and in decision support. Current technologies for Empirical Modelling can help in constructing financial situations
and in representing state and the analysis of agency in state change, whilst VR offers enhanced visualisation and scope for
user immersion and experience of state.
4. CASE STUDY: CONSTRUCTING VIRTUAL ONLINE TRADING SYSTEM WITH THE
USE OF EMPIRICAL MODELLING
4.1 Simulation of a dealer using VR trading system: An EM model for virtual online trading
Trading in a sufficiently liquid and cost efficient market is becoming a major concern for investors and this motivates a better
understanding of the trading environment and the layers of intermediation. The support of a large range of instruments, the
quality and timeliness of information feed, the functionality of the front-end, and the scalability and performance of the
system are important factors in designing digital information resources for an investor. Technology is opening up new
avenues for investors to cut out the layers of intermediation and talk to one another directly. This places a question mark over
what value can be added to the trading process by the stock exchanges and their constituent brokerages. Current online
trading networks provide a huge amount of static information for the investor to interpret and analyse.
4.1.1 NYSE Virtual Trading Floor
The physical NYSE trading floor [12] is a riot of tossed papers and traders shouting orders and furiously pounding computer keys. Its virtual counterpart consists of a flat 6-by-4-foot display panel showing a computer-generated model of all the trading activity.
With a click of a mouse, the dealer can zero in on icons depicting real-time changes in the price and volume of a particular
stock or group of stocks and instantly spot suspicious price changes or trading patterns. The colourful graphics and symbols
of the virtual trading floor allow the dealer to monitor the pulse of the Exchange as never before. It is a kind of trading
simulator that is designed for traders to experience the market and trade in a virtual-reality environment. The design of the
virtual trading floor (3DTF) [11] began as a reinterpretation and transformation of the existing physical trading environment.
The NYSE floor was idealized and refined for eventual virtual deployment. This was accomplished by developing a
wireframe model that corresponded to the layout of the "real" trading floor and its constituent elements, their relative
placement and geographic location on the floor. The architectural idealization had to provide absolute flexibility; particularly
to accommodate the data feeds that would eventually be programmed into it. The modelling also needed to provide for
constant shifts in scale, enhanced levels of detail, and the insertion of numerous other kinetic virtual objects. Thus the actual
trading floor had to be reconfigured for several reasons: the model had to function in real time, which produced high
technological demands; and an economy of form was necessary to process and animate extremely large quantities of data.
The virtual-reality environment allows users to monitor and correlate the stock exchange's daily trading activity and present
the information within a fully interactive, multi-dimensional environment.
Fig 1: A synthetic stock exchange turns dry data into dramatic graphs of daily activity (above) and intuitively displays
the information in a virtual replica of the real trading floor.
The above virtual views can incorporate the patterns of work flow in decision support. Those workflow patterns can be best
understood by using a modelling technique called Empirical Modelling [4]. Online security trading [3] is a classic example of a collaborative environment. In modelling an online trading environment, collaboration can be viewed as a workflow of interdependent tasks undertaken by human and electronic agents. A simple case study will be used to illustrate how the principles of agent-oriented analysis using LSD and the development of an ISM in EM can be applied to workflow in an online trading context. The case study involves straight-through processing, which refers to the fully automated, hands-free processing of security transactions from the fund manager's decision right through to settlement, reconciliation and reporting.
4.1.2 Associated ISM and LSD account
An embryonic ISM and the associated LSD account below simulate the case study of Call Options trading by a dealer.
Fig. 2: An ISM for a retail trade in New York Stock Exchange
In the online trading context, the social network comprises investors, brokers, dealers, arbitrageurs, and boards of trade. The
trading marketplace may be a physical trading floor or an electronic system. In the retail trade situation in NYSE-listed stock
[2], the relevant agents in the model are identified as: the investor, the broker, the quote information system, the order entry
system, the order routing system, the floor specialist, and the information reporting system. Constructing an ISM for an
NYSE retail trade is a way of modelling an external observer’s explanation of the retail trade process (RTP). As Figure 2
illustrates, the major roles in a retail trade are played by the investor and the broker. The investor/client requests information
on a particular stock from the broker, puts a trading order, confirms his order, pays for his transaction, and acquires or
releases share ownership following the execution of his order. The broker requests quotes from the quote information system,
returns this information to the investor, enters any received order in the order entry system, reviews the order details prior to
its release in the order entry system, reports the trade execution to the investor, receives payment including charges fees, and
mediates the exchange of share ownership. Each of these actions on the part of investor and broker is performed at a specific
stage in the Retail Trade Process. The study is the simulation of trading in a virtual market in which there is only one dealer
(the user of the simulation model). The user’s task (the sole dealer) is to set and adjust bid and ask quotes (raise, lower
quotes, or narrow and widen the spread) to maximize his trading profits. The computer model simulates traders arriving at
random times to trade with the dealer (user) at his quoted prices. The simulation uses abstract representations for the buy/sell
orders flow, true price, type of investor (informed/uninformed); keyboard press for dealer (user) actions; and mathematical
computation of true/realised profit. The aim of the simulation is to raise the awareness of its user (playing the role of a dealer)
to the trading behaviour of different types of investors (informed/uninformed), and the true value of the security (changing
through time and known to informed traders).
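The simulation loop described above can be sketched in Python. This is a minimal illustration under assumed parameters: the arrival probability, the random walk of the true price, and the informed/uninformed split are all invented for the example, and profit is measured against the true value:

```python
import random

def simulate(steps=1000, bid=99.0, ask=101.0, seed=42):
    """Toy monopoly-dealer market: traders arrive at random times and trade
    at the dealer's quotes. Informed traders trade only when the quotes are
    mispriced against the true value; uninformed traders trade at random."""
    rng = random.Random(seed)          # local RNG keeps the run reproducible
    true_price = 100.0
    profit = 0.0
    for _ in range(steps):
        true_price += rng.gauss(0, 0.2)      # true value drifts over time
        if rng.random() < 0.3:               # a trader arrives this step
            if rng.random() < 0.4:           # informed trader
                if true_price > ask:
                    profit += ask - true_price   # sold too cheap: dealer loses
                elif true_price < bid:
                    profit += true_price - bid   # bought too dear: dealer loses
            else:                            # uninformed trader, random side
                side = rng.choice(("buy", "sell"))
                profit += (ask - true_price) if side == "buy" else (true_price - bid)
    return profit

print(simulate())  # dealer's simulated profit for these quotes
```

Re-running with different `bid`/`ask` values lets the user explore how widening or narrowing the spread trades off losses to informed traders against earnings from uninformed ones.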
A possible account of the broker's response to an information request might be:
a) check the status of the investor's information request;
b) get the investor's information request;
c) direct the request to the quote information system;
d) update the current RTP status;
e) return the information to the investor;
f) enter any received order in the order entry system;
g) review the order details prior to their release in the order entry system;
h) report the trade execution to the investor;
i) receive payment, including charges and fees, and mediate the exchange of share ownership.
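The broker's response steps amount to a linear workflow over a shared Retail Trade Process status. A hypothetical sketch (the stage names are invented for the example):

```python
# Hypothetical straight-through workflow for the broker's side of a retail
# trade: each call to advance() moves the shared RTP status one stage on,
# mirroring the sequence of broker actions listed above.
RTP_STAGES = [
    "info_requested", "quotes_fetched", "info_returned",
    "order_entered", "order_reviewed", "trade_executed",
    "payment_received", "ownership_transferred",
]

def advance(status):
    """Move the RTP to the next stage; the final stage is absorbing."""
    i = RTP_STAGES.index(status)
    return RTP_STAGES[i + 1] if i + 1 < len(RTP_STAGES) else status

status = "info_requested"
while status != "ownership_transferred":
    status = advance(status)   # broker handles the current stage
print(status)                  # ownership_transferred
```

In an ISM the current stage would be an observable, and each `advance` a redefinition performed by the broker agent.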
The roles of the various agents in the NYSE have to be understood in terms of the relevant observables. Some of these observables (such as the current status of a BUY/SELL order) are particular to the retail trade situation, but the actions of agents also relate to observables generic to the online trading context. Here we take the example from the point of view of one agent, the dealer (broker).
In the retail trade situation, the relevant observables for the participating agents comprise:
• Order information, including: investor name, ID, BUY/SELL order, share name and symbol, quantity of shares, type of order (such as market, stop loss, limit order, etc.), price (if needed), expiry date of the order, and the date and time of the order.
• Stock quotes, including: stock symbol, bidder, BID/ASK, price, size, time and date.
• Stock information, including: stock symbol, stock name, last trade price, change from previous day close, time last traded, place last traded, highest day price, lowest day price, and day volume.
• Order indication from dealers and brokers, including: the stock name, the name of the broker/dealer, the time, and the date.
To formulate this simple explanation in LSD, it suffices to interpret the current stage reached in the trade process as an
observable for the participating agents, and to formulate each agent action in terms of re-definitions of observables. For
instance, in the initial stages of the retail trade process, the broker requests quotes from the quote information system when an
investor has requested information on a particular stock. The animation in Figure 2 can be derived from such an LSD
account. In reality, the possible scenarios that can arise in the retail trade process are much more subtle than the workflow
alone indicates.
4.1.3 Trading Call Options - Example explained
A study is presented here of Call Options trading by the dealer in this virtual trading environment of the NYSE. Suppose an options dealer is considering buying 100 shares of Company A from the investor (the client) at a price of $52.
The LSD template for the dealer takes the following form:
Agent Dealer {
state
info_requested, inventory, bid, ask, spread, actual profit, buyers/sellers flow, current status and history of transactions, time clock, his estimated true value of the security
oracles
info_requested, flow of orders, order side (buy/sell), order quantity, inventory level, actual profit, his estimated true value of the security, his knowledge of trader type (informed/uninformed)
handles
bid, ask, spread, quotes_info_requested = 0
derivate
stage_in_retail_trade = F(info_requested, ...)
protocol
if (stock price > $52) -> raise 'buy signal' for the shares from the investor (the client) at $52, in which case the dealer gains by simply buying from the investor at $52 and selling in the market at a price above $52;
if (stock price < $52) -> do not raise 'buy signal' for the shares from the investor; after all, the dealer had an 'option' to buy those shares from him, not a commitment;
if (deal is fair from the investor's viewpoint) -> raise the signal to pay the premium to the investor, $2 per share, i.e. $200 in total; this is the premium the dealer pays the investor for the risk the investor is willing to take.
}
The state of the model is captured in a script of dependencies such as:
exerciseOption_per_time_unit is (stock_price - $52) > 0 ? buy_Call_Option : null;
nonexerciseOption_per_time_unit is ($52 - stock_price) > 0 ? null : buy_Call_Option;
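These dependencies encode the exercise rule for a call with a $52 strike. The same rule, together with the $2 premium from the example, can be paraphrased in Python (an illustrative sketch; the function names are invented):

```python
STRIKE = 52.0    # exercise price from the case study
PREMIUM = 2.0    # premium per share paid to the investor, as in the example

def exercise(stock_price):
    """A call is worth exercising only when the stock trades above the strike."""
    return stock_price > STRIKE

def dealer_profit_per_share(stock_price):
    """Dealer's profit per share: buy at the strike and sell at market when
    exercised, minus the premium paid to the investor either way."""
    gain = (stock_price - STRIKE) if exercise(stock_price) else 0.0
    return gain - PREMIUM

print(exercise(55.0), dealer_profit_per_share(55.0))   # True 1.0
print(exercise(50.0), dealer_profit_per_share(50.0))   # False -2.0
```

At $55 the dealer exercises, gaining $3 per share less the $2 premium; at $50 the option lapses and the premium is the dealer's only cost, just as the protocol above specifies.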
The sole dealer's role is to set and adjust bid and ask quotes (raise or lower quotes, or narrow and widen the spread) to maximize his trading profits. In the above LSD account, the investor requests information on a particular stock (here, Options) and puts in a trading order. An LSD account is a classification of observables from the perspective of an observer, detailing the following where appropriate. The items listed under "State" are the observables whose existence is intrinsically associated with the agent (the dealer); they include the current status of the transaction flow. Recording info_requested as a state potentially admits discrepancies between what the broker believes or recalls and what the investor has declared; the consequences of such discrepancies are implicit in the interpretation of the broker's protocol. "Oracles" are the observables whose values can act as stimuli for the dealer. As an oracle, info_requested refers to an observable that is associated with an investor; this can be interpreted as saying that the broker is, or at any rate can be, aware that an investor is requesting information. "Handles" comprise the observables that can be redefined by the dealer in his responses. The definition stage_in_retail_trade = F(info_requested, ...) indicates that the current stage in the RTP can be construed as functionally dependent on the status of transactions; such definitions are derivates, indivisible relationships between observables that are characteristic of the interface between the dealer and its environment. "Protocols" are the privileges the dealer has for his state-changing actions: if he sees that the stock price is greater than the quoted price, he raises a buy signal for the shares from the investor; otherwise, he does not. In this way, the computer model simulates traders arriving at random times to trade with the dealer (user) at his quoted prices. Empirical Modelling emphasizes modelling states and the role of agency in changing state. Agent actions initiate state change. A state is represented in a script of definitions linking observables through dependencies, and agent actions are modelled by redefinitions. In constructing environments for virtual financial trading, EM principles can be useful in construing a situation in the financial market context, and in capturing the state of this situation in a definitive script that can be used to realize and explore different possible construals. In this way, in constructing a VR scene for the monopoly dealer simulation, the EM analysis was imported. Virtual trading is a prime example of an activity in which the impact of technology upon human cognition is prominent, and the character of its agencies and observables is captured through the Empirical Modelling approach.
5. RESULTS AND DISCUSSION
This paper describes and illustrates how an Empirical Modelling approach can be applied to environments for several
different forms of virtual collaboration. The work carried out for this paper points to the following conclusions:
• It raises the awareness of its user (playing the role of a dealer) to the trading behaviour of different types of investors (informed/uninformed), and the true value of the security (changing through time and known to informed traders). Based upon the state-changing actions, the trading behaviour for the deal is determined.
• The research reported in this paper indicates that, in principle, the EM approach can deliver more powerful distributed
environments for collaboration than alternative technologies.
• The paper concludes with our findings about the use of VR for modelling a social context such as virtual financial
trading.
• Virtual trading is a prime example of an activity in which the impact of technology upon human cognition is prominent, and the character of its agencies and observables is accordingly hard to capture in objective terms. Empirical Modelling
supplies an appropriate framework within which to address the ontological issues raised by various applications of VR.
The pre-construction phase for a VR scene can benefit greatly from concepts drawn from the Empirical Modelling
literature such as modelling state, state change, and the initiators of state change.
• VR technology needs to be better adapted for the representation of multiple agents acting to change the state and
corresponding visualisation in a VR scene.
• EM provides a conceptual framework in which to examine issues of feasibility and human-computer integration.
• EM exploits communication that is centred on artefacts rather than documents.
6. FUTURE SCOPE OF WORK
Some further investigations are suggested:
• As an additional exercise, a proper visualization needs to be found for abstract numeric indicators, dealer actions, and the human (user) role in the scene, and sound support added to produce warning messages to the dealer.
• We propose to apply quantitative and qualitative metrics to our case study to assess the potential benefits of VR in
modelling a social context. The profitability of the dealer’s position with reference to a particular scenario can be used as
a quantitative metric to evaluate the simulation.
• Merge EM and VR in a web-based framework for developing more promising technology.
• The successful application of VR technology in modelling social and data-intensive environments relies upon integrating VR with other programming paradigms such as databases and definitive programming.
REFERENCES
[1] Harasim, L., "A Framework for Online Learning: The Virtual-U", IEEE Computer Society, Computer, (September 1999), pp. 45-49.
[2] Harris, L.E., "Trading and Exchanges", draft copy of the textbook, (December 4, 1998), University of Southern California, Marshall School of Business.
[3] Langton, L.J., "The impact of the electronic marketplace: The transformation of securities trading", Proceedings of On-Line Securities Trading 1999, AIC Worldwide Ltd, (September 1999), London.
[4] Sun, P.H., Beynon, W.M., "Empirical Modelling: A New Approach to Understanding Requirements", Proceedings of ICSEA Conference, Vol. 3, (1998), Paris.
[5] Earnshaw, R.A., Gigante, M.A., and Jones, H., "Virtual Reality Systems", Academic Press, (1993).
[6] Beynon, W.M., "Empirical Modelling and the Foundations of Artificial Intelligence", in Computation for Metaphors, Analogy and Agents, Lecture Notes in Artificial Intelligence 1562, Springer, pp. 322-364, (1999).
[7] Garbaya, S. and Coiffet, P., "Generating Operation Time from Virtual Assembly Environment", Proceedings of the 7th UK VR-SIG Conference, (19th September 2000), University of Strathclyde, Glasgow, Scotland.
[8] http://en.wikipedia.org/wiki/Empirical_modelling
[9] http://en.wikipedia.org/wiki/Virtual_reality
[10] http://project.cyberpunk.ru/idb/virtualreality_promise.html
[11] http://www.archphoto.it/IMAGES/asymptote/rashid.htm
[12] http://www.cs.unc.edu/Research/stc/inthenews/pdf/Discover_1999_Sep.pdf
APPROACH OF SIX SIGMA IN LEAN INDUSTRY
Achin Srivastav*
Associate Professor, Department of Mechanical Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail:[email protected]
D.S. Sharma
Professor and Head, Department of Mechanical Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Nidhi Srivastav
Assistant Professor, Department of Computer Science & Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
_______________________________________________________________________________________________________________________
ABSTRACT
Globalization, growing competition, the emergence of new technologies, the changing economic profile of nations and the rise of knowledge-driven economies have created a scenario in which quality is no longer merely a desirable strategy; it has become a survival strategy. Six Sigma is widely used in manufacturing and service organizations to enhance quality and reduce cost, but the time has come to move one step further, to a Lean Six Sigma approach, to achieve rapid transformational change at lower cost. This paper is a modest attempt to describe the Six Sigma approach towards Lean and the benefits attained.
Keywords: Lean, Six Sigma, Scale of Defects, Quality Improvement
____________________________________________________________________________________________________________________________
1. Introduction
The role of continuous improvement within organizations has changed and matured throughout history. From the first
improvements made through the invention of machines that sped up production to using empirical or statistical methods to
analyze processes, individuals and organizations have pursued improved operating methods. Certain industries, such as
healthcare and pharmaceuticals, focus the majority of their continuous improvement efforts on maximizing the quality of
their products and services. For others, continuous improvement is viewed as a mechanism for driving down cost. In addition
to cutting costs and improving quality, successful continuous improvement initiatives ultimately change the culture of an
organization. The culture change focuses on the motivation and desire of the organization’s members to continually improve
business processes and policies. This fundamental change in operating and managing processes requires the stimulus of a
structured method or program of continuous improvement. Lean Six Sigma is a combination of two popular continuous
improvement methodologies: Lean and Six Sigma. Lean and Six Sigma focus typically on improving the production and
transactional processes of an organization. Although each uses different methodologies and principles to effect the
improvement, both have complementary effects. Each of these methodologies has been individually popularized by
successful implementations at companies such as Toyota, General Electric, and Raytheon. Many companies are now
recognizing the powerful synergy that is produced when these two methodologies are combined and have successfully
implemented Lean or Six Sigma. However, these implementations were not without some difficulty. The experiences of the
first implementations of Lean and Six Sigma methodologies are unique based on leadership and culture. Subsequent
implementations of Lean and Six Sigma have benefited from the literature and experiences produced by these pioneering
companies. The combination of Lean and Six Sigma is a recent continuous improvement development and the experiences of
companies implementing it are fresh areas for research. This paper focuses on identifying the barriers
and challenges surrounding the deployment and implementation of Lean Six Sigma.
*Corresponding Author
2. Six Sigma
2.1 Six Sigma
Increasing competition, changing business conditions, globalization and more quality-conscious customers have created a scenario where companies need Lean Six Sigma to help drive product costs down without compromising on quality. It is now well established that traditional performance measures based on accounting figures such as sales turnover, profit, debt and ROI do not match entirely with the competencies and skills companies require to face today's challenging business environment. Many experts have also pointed out that operational measures are the drivers of future financial performance, and that financial success is the logical consequence of doing the fundamentals well. It is thus necessary for any organization to consider performance measurement of the entire supply chain and all of its entities as a strategic issue. Six Sigma is a new, emerging approach to quality assurance and quality management with an emphasis on continuous quality improvement. The main goal of this approach is to reach a level of quality and reliability that will satisfy and even exceed the demands and expectations of today's demanding customer [1]. The term Sigma Quality Level is used as an indicator of process goodness: a lower sigma quality level means a greater chance of defective products, while a higher sigma quality level means a smaller chance of defective products within the process. If the sigma quality level equals six, the chance of a defective product is 3.4 ppm. Achieving Six Sigma quality involves leadership, infrastructure, and appropriate tools and methods, while quality has to become a part of the corporate business plan [2, 3]. The main objective of a Six Sigma initiative is to aggressively attack the cost of quality. The overall cost of quality is usually divided into a tangible and an intangible part. The tangible or visible part, e.g. inspection and warranty costs, scrap, rework and rejects, accounts for only about 10-15% of the overall cost of quality. The remaining 85-90% is usually intangible and therefore overlooked and neglected in companies' quality-cost analyses. The present study attempts to understand the Lean Six Sigma approach. In this paper we discuss the utility and effectiveness of the six-sigma metrics in evaluating the performance of the entire supply chain and its entities. Three aspects are dealt with: What is Six Sigma? What is Lean? How can they be integrated? Lastly, the benefits of implementing Six Sigma in a lean industry are discussed. Six Sigma is a quality improvement program with the goal of reducing defects to as low as 3.4 parts per million. Six Sigma was developed at Motorola in the mid-1980s to reduce defects in its processes.
Table 1: The Six Sigma Scale of Defects
Fig. 1: Relationship between complexity and process quality.
2.2 Application of Six Sigma
Six Sigma works on a simple equation Y = f(x), where Y is the product or service that has to be improved and 'x' is a set of factors that influence 'Y'. f(x) is the function that defines the relationship between 'Y' and 'x'. Six Sigma is all about finding the critical 'x' which affect 'Y', the output of the process, product or service. Traditionally, quality professionals have focused on the output of the process, 'Y', to improve it, but Six Sigma focuses on the 'x' to improve the process and reduce the defects or errors [4]. Six Sigma counts the number of defects per million opportunities (DPMO) at six levels. If a company works at the 'one' sigma level it is making about 690,000 defects per million opportunities, whereas if it works at the 'six' sigma level it is making about 3.4 defects per million opportunities.
Fig. 2: Comparison of six sigma and three sigma
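The DPMO-to-sigma conversion described above can be sketched in a few lines of Python using the standard normal distribution. The 1.5-sigma shift added back is the usual Six Sigma convention (not stated explicitly in this paper), and the function name is ours:

```python
from statistics import NormalDist

def sigma_level(dpmo: float, shift: float = 1.5) -> float:
    """Convert defects per million opportunities to a short-term sigma level.

    The conventional 1.5-sigma long-term shift is added back, so that
    3.4 DPMO corresponds to the well-known 'six sigma' level.
    """
    # Fraction of opportunities that are defect free.
    yield_fraction = 1.0 - dpmo / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + shift

print(round(sigma_level(3.4), 2))      # 6.0 -- the six-sigma benchmark
print(round(sigma_level(690_000), 2))  # 1.0 -- roughly the one-sigma level
```

The two test values reproduce the figures quoted in the text: 3.4 DPMO maps to sigma level six, and about 690,000 DPMO maps to sigma level one.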
2.3 The Six-Sigma Metrics and their Computation
There is a substantial volume of literature available on descriptions of the six-sigma metrics and their computation. Here we briefly explain these measures and emphasize, with a few examples, the key benefit of using them. Consider a manufacturing process in which the task is to manufacture a chemical with a minimum purity of L (e.g. 99%). Corresponding to n batches of the chemical produced in a week, let x1, x2, . . ., xn denote the purity values. Let m and s denote respectively the mean and standard deviation of x1, x2, . . ., xn. Then the z-value or sigma value of the manufacturing process with respect to purity is defined as z = (m − L)/s. If the data pertain to a short period, this z-value is termed 'short-term sigma' and is denoted by zst. The 'long-term sigma' is computed as zlt = zst − 1.5, considering a probable 1.5 s shift in the process setting in the
long run. The probability of manufacturing a batch with purity less than the specified value can be obtained as the area to the left of 99% in the normal curve. This may be called defects per opportunity (dpo) if the manufacture of each batch is considered as an opportunity of getting a defect, or defects per unit (dpu) if each batch is considered as a unit. If we consider the task of filling a purchase order, each entry may be considered as an opportunity for a defect. Suppose there are n entries to be made in each purchase order, and during a week k such purchase orders are filled. An inspection reveals that the total number of defective entries is d. Then the total number of opportunities is kn and the defects per opportunity is given by dpo = d/kn.
The yield, i.e. the probability of filling a purchase order free of defects, is given by (1 − dpo)^n. For large n, using the Poisson distribution, this may be approximated by e^(−dpu), where dpu = d/k denotes the defects per unit. Using the table for the standard normal distribution, the z-value or sigma-value corresponding to the yield (i.e. the value on the standard normal scale, the area to the right of which is 1 − e^(−dpu)) can be obtained. Thus, the beauty of the six-sigma measurement method is that one can compare a chemical manufacturing process (from the point of view of output quality) and a purchasing process on the same scale. In the context of the former, a sigma level of 3 would mean producing 66,807 defective batches per million, whereas in the latter case the same level would mean 66,807 mistakes per million entries made while filling purchase orders. The concept of rolled throughput yield is also very useful in the context of supply chain management. Rolled throughput yield is defined as the probability of being able to pass a unit of product or service through the entire process (or chain) defect free. Thus, if we consider a sequential chain of n entities as shown in Fig. 3, the rolled throughput yield is given by RTY = ∏Yi, i.e. the product of the yields Y1, Y2, . . ., Yn for the n individual entities (given by e^(−dpu) for each). The rolled throughput yield is much better correlated with other business success measures, compared with the conventional notion of yield, since it unearths the 'hidden factory' in terms of in-process losses, rework, increased cycle time etc. For n parallel entities, the product of individual yields is also a useful index of process quality, without an interpretation as a percentage of some obvious count of defects. Since a supply chain is essentially a combination of sequential and parallel structures, the notion of rolled throughput yield is extremely relevant in its context.

Fig. 3: A sequential chain of entities.
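The yield and rolled-throughput-yield computations above can be sketched as follows; the three-stage defect counts are hypothetical, chosen only for illustration:

```python
import math

def unit_yield(defects: int, units: int) -> float:
    """Poisson approximation of first-pass yield: Y = exp(-dpu)."""
    return math.exp(-defects / units)

def rolled_throughput_yield(yields: list[float]) -> float:
    """RTY = product of the individual entity yields in the chain."""
    return math.prod(yields)

# Hypothetical three-stage chain: 2, 5 and 1 defects per 100 units.
stage_yields = [unit_yield(d, 100) for d in (2, 5, 1)]
print(round(rolled_throughput_yield(stage_yields), 4))  # 0.9231
```

Because each stage yield is e^(−dpu), the RTY of a sequential chain is simply e raised to minus the total dpu, here e^(−0.08) ≈ 0.9231, noticeably lower than any single stage's yield, which is exactly the 'hidden factory' effect the text describes.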
3. Lean
Lean is an approach that seeks to improve flow in the value stream and eliminate waste. It is about doing things quickly and with more efficiency. Lean means "using less to do more" by "determining the value of any given process by distinguishing value-added steps from non-value-added ones and eliminating waste so that ultimately every step adds value to the process". Hopp and Spearman define lean very precisely: "production of goods and services is lean if it is accomplished with minimal buffering costs".

Fig. 4: Approach of Lean

Here buffering costs include:
a) Inventory buffers (stocking more WIP or finished goods)
b) Capacity buffers (excess capacity)
c) Time buffers (increased lead time)

Lean production was pioneered by Taiichi Ohno at Toyota in the decades after World War II, and was described by him as the "Toyota Production System".
4. Integration of Lean & Six Sigma
Lean and Six Sigma both developed in manufacturing environments and have grown separately, but in today's scenario each requires the other for solving quality problems and creating rapid transformational improvement at lower cost.
Lean Focus                      | Six Sigma Focus
Material, effort and time waste | Process variation
Balance flow in manufacturing   | Identify root cause of problems
Reduce cycle times              | Create uniform process output
Critical to productivity        | Critical to product and process quality

Table 2: Comparison of Lean and Six Sigma Approach
A combination of the two methodologies can provide the philosophy and the effective tools to solve problems and create rapid transformational improvement at lower cost. Potentially this could increase productivity, improve quality, reduce cost, improve flow and meet customer expectations [5].

Fig. 5: Evolution of Lean Six Sigma

The theory of the Lean methodology is to reduce waste, and it focuses on flow, while the Six Sigma theory is to reduce defects, and it focuses on problems [3]. Six Sigma is a data-driven methodology which reduces variation among practices, subsequently reducing defects. It consists of five phases: Define, Measure, Analyse, Improve and Control (DMAIC). This methodology can be used in lean manufacturing for any activity that is concerned with cost, timeliness and quality.
Fig. 6: Lean Six Sigma (Best of both worlds)
5. Development of Lean Six Sigma Methodology
Fig. 7: Phases and content of Integrated Lean Six Sigma Methodology
A process may also start out as capable but change over time to have more variability. In addition, the process mean may shift, placing the process too close to one of the specification limits. Both an increase in process variability and a shift of the mean may result in a once-capable process becoming incapable [6].
5.1 Implementation of Six Sigma in Lean
Process mean:    μ = (x1 + x2 + . . . + xn)/n    (1)

Process spread (standard deviation):    σ = sqrt[ Σ (xi − μ)² / n ]    (2)

Process capability index:    Cp = (USL − LSL)/(6σ)    (3)

Capability index (2-sided specification limits):    Cpk = min[ (USL − μ)/(3σ), (μ − LSL)/(3σ) ]    (4)

Process capability index (1-sided specification limits):    Cpu = (USL − μ)/(3σ)    (5)
                                                            Cpl = (μ − LSL)/(3σ)    (6)
Sigma Level | Cp   | Cpk
1           | 0.33 | -0.167
2           | 0.67 | 0.167
3           | 1.00 | 0.5
4           | 1.33 | 0.834
5           | 1.67 | 1.167
6           | 2.0  | 1.5

Table 3: Determination of Sigma Level
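The Cp and Cpk definitions of equations (3) and (4) can be checked against Table 3 with a short sketch; the 'six sigma' row corresponds to a specification width of 12σ with the mean shifted 1.5σ off centre (the standard convention, not an assumption of this paper alone):

```python
def cp(usl: float, lsl: float, sigma: float) -> float:
    """Process capability index for a two-sided specification."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl: float, lsl: float, mu: float, sigma: float) -> float:
    """Capability index accounting for how well the process is centred."""
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

# A process whose spec width is 12 sigma, with the mean shifted
# 1.5 sigma off centre -- the 'six sigma' row of Table 3.
print(cp(12.0, 0.0, 1.0))        # 2.0
print(cpk(12.0, 0.0, 7.5, 1.0))  # 1.5
```

With the mean perfectly centred (mu = 6.0) the same process would give Cpk = 2.0; the 1.5σ shift is what drops Cpk to the 1.5 shown in the last row of Table 3.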
6. A Case Study of an Automotive Industry
This section illustrates the application of the proposed framework through a case example of an automotive company. The company provides an excellent opportunity for implementing Six Sigma. We demonstrate through this case example how performance characteristics can be identified, defects defined and sigma ratings calculated. The scope of this study was limited to one manufacturing division of the organization. Lumax Automotive Systems Limited is one of the companies of the Lumax group. They are one of the leading manufacturers of automotive components such as head lamps and plastic components. Lumax has seven ultra-modern manufacturing plants in India: two in Gurgaon and Dharuhera in the state of Haryana, near New Delhi; three plants in Pune, near Mumbai in Maharashtra; and one plant near Chennai. These facilities have been laid out to match the world's best plant-engineering standards and produce automotive lighting products in large quantities to customers' exacting standards. Lumax has been following TQM practices since the end of 1998. They have maintained their quality standards to achieve customer satisfaction and to make their organization one of the most modern facilities in India. We have implemented the Six Sigma quality tool on one of their processes: the manufacturing of the oil catch cover.
6.1 Determination of Sigma level
Before starting our observations it was necessary to find out the sigma level of the organization for the particular part to which we are applying the Six Sigma tool.
(1) Selection of part (OIL CATCH COVER)
(2) Total production of that part = 15796
(3) Total rejection of that part = 7
Defects per million opportunities (DPMO) = (7 / 15796) × 1,000,000 ≈ 443
Parts per million = 443, i.e. a Six Sigma level of about 4.7
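The baseline calculation above can be reproduced with a small helper; the function is a sketch, assuming one defect opportunity per part:

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int = 1) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Oil catch cover baseline: 7 rejections out of 15,796 parts produced.
print(round(dpmo(7, 15_796)))  # 443
```

If a part offered more than one opportunity for a defect (e.g. several inspected features), `opportunities_per_unit` would scale the denominator accordingly.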
6.2 Implementation of Lean Six Sigma
6.2.1 Define
Problem: Burn mark on oil catch cover
The defect of a burn mark on the oil catch cover was noticed during the process of injection molding. This defect was one of the most important problems noticed because of the high rate of customer complaints and rejections.
During initial analysis and troubleshooting of the problem the following causes were identified:
Lack of ventilation;
High clamping force;
High injection speed;
High injection pressure
6.2.2 Measure
The next phase of the Six Sigma methodology comprises measuring the defined problem, to get exact values of rejection. For this phase the process flow of the component is studied thoroughly from its initial step to its final step, so that any possibility of faulty measurement can be ruled out. It was decided to verify all the identified causes by closely analyzing the production process for each piece of oil catch cover for one shift. The various process parameters were measured and the following ranges were computed, as shown in Table 4.
Zone Temperature (°C): Z1 (Nozzle Zone) = 230 ± 10; Z2 (Front Zone) = 220 ± 10; Z3 (Middle Zone) = 210 ± 10; Z4 (Rear Zone) = 190 ± 10
Injection Speed (cc/sec): (15, 15, 15, 15) ± 5
Injection Time (sec): 5 ± 2
Cooling Time (sec): 25 ± 5
Injection Pressure (Kg/cm²): 25 ± 5
Clamping Pressure (tons): 90 ± 10
Total Cycle Time (sec): 45 ± 5
Standard Production per Hr: 80 pcs

Table 4: Range of Injection Molding Parameters

The process flow is as follows:
RAW MATERIAL PURCHASE
MASTER BATCH MIXING
INJECTION MOULDING
SETTING INJECTION MOULDING PARAMETERS: zone temperature; injection speed (cc/sec); injection time (sec); cooling time (sec); injection pressure (Kg/cm²); clamping pressure (tons); total cycle time (sec); standard production per hr

Fig. 7: Process Flow chart
6.2.3 Analyse
Analysis is the third step of the Six Sigma methodology. In this step the data measured for all the potential causes of the defect are analysed, so that a conclusion can be drawn about the actual cause of the problem. As soon as a conclusion is drawn, steps are taken to counter the problem so that it does not recur in future. Total rejections from Oct 2008 to Jan 2009 can be seen in Table 5.
S.No. | Month-Year | Total Production | Total Rejection | PPM
1     | Oct 2008   | 15796            | 10              | 633
2     | Nov 2008   | 15796            | 8               | 506
3     | Dec 2008   | 15796            | 8               | 506
4     | Jan 2009   | 15796            | 7               | 443

Table 5: No. of Rejections in Oil Catch Cover
The individual parameters can be analysed from the data of Table 6.

S.No. | Parameter               | Observation / Reading                                                                                   | Remarks
1     | Raw Material            | As per test certificates & specification sheets                                                         | OK
2     | Master Batching         | As per test certificates & specification sheets                                                         | OK
3     | Zone Temperature        | Z1 = 227, 225, 233, 232, 228; Z2 = 223, 227, 215, 218, 231; Z3 = 217, 219, 211, 204, 208; Z4 = 182, 192, 194, 190, 191 | OK
4     | Injection Speed         | 22, 21, 10, 23, 12: HIGH                                                                                | NOT OK
5     | Injection Time          | 2.5, 2.7, 6.8, 2.2, 5.8: LESS                                                                           | I R/L
6     | Cooling Time            | 30, 32, 27, 33, 30: HIGH                                                                                | NOT OK
7     | Injection Pressure      | 31, 30, 32, 33, 29: HIGH                                                                                | NOT OK
8     | Clamping Pressure       | 92, 95, 87, 82, 89                                                                                      | OK
9     | Total Cycle Time        | 40, 40, 53, 41, 51: FLUCTUATING                                                                         | I R/L
10    | Standard Production/Hr. | 90, 90, 68, 88, 70: FLUCTUATING                                                                         | I R/L

Table 6: Observation Table for Different Parameters
Fig. 8: Manufacturing of Oil Catch Cover
The manufacturing of the oil catch cover through injection moulding can be seen in Fig. 8. The various causes of the defect can be seen in the Cause and Effect Diagram in Fig. 9.
Fig. 9: Cause and Effect Diagram
6.2.4 Improve
The next step in the Six Sigma methodology is the improvement process. This process counters the actual problem and rectifies it, i.e. actually minimizes the defects. The process by which this is done is given below.
As stated before, Six Sigma works on a mathematical formula:
Y = f(X)
or
Y = f(X1, X2, X3, . . ., Xn)
From this mathematical representation it can be said that Y is a function of X, i.e. Y depends on X. Traditionally, for any irregularity one would focus on Y to find the solution. But in Six Sigma the reverse of this rule is followed: rather than continuing to focus on Y, one focuses on the problem area to get the realistic and actual solution, i.e. one inspects X in place of Y.
Y           | X
DEPENDENT   | INDEPENDENT
OUTPUT      | INPUT
EFFECT      | CAUSE
SYMPTOM     | PROBLEM
MONITORED   | CONTROLLED (we can reduce the variability in Y by controlling X)

Table 7: Dependent and independent parameters
Now, as we apply this method to our problem of burn marks, it is very clear from the observations collected in the analysis phase that there are three main depending variables for this problem:
1. Injection speed (linked to injection time, cycle time and standard production)
2. Injection pressure
3. Cooling time (due to improper venting)
6.2.5 Action Plan
Now, to control and remove this defect, all focus shifted to controlling the depending factors. Trial runs were carried out at a set frequency: each piece was checked during the first hour of production and thereafter every alternate piece was checked for the problem-causing factors, and all observations were noted on the observation sheet. Trials were conducted by slowly reducing the injection speed and checking the part for burn marks and net weight. The injection pressure was reduced with every trial and the correct pressure recorded. The pressure was checked visually by the operator for every trial and recorded in the record book.
Date (Jan '09) | PPM | Rejection | Six Sigma Level
15             | 443 | 7         | 4.7
16             | 379 | 6         | 4.8
17             | 379 | 6         | 4.8
19             | 443 | 7         | 4.7
20             | 379 | 6         | 4.8
21             | 379 | 6         | 4.8
22             | 443 | 7         | 4.7
23             | 379 | 6         | 4.8
24             | 317 | 5         | 4.94
27             | 317 | 5         | 4.94
28             | 317 | 5         | 4.94
29             | 317 | 5         | 4.94
30             | 317 | 5         | 4.94
31             | 317 | 5         | 4.94

Table 8: Reduction in PPM in Jan '09 after implementing Six Sigma
Cooling time was another factor for the burn mark; this was due to improper venting. To overcome this problem the venting path was cleaned regularly after every 2nd shift, and the venting depth was increased where there were more blockages.
Trial No. | Injection Speed (cc/sec) | Injection Pressure (Kg/cm²) | Cooling Time (sec)              | Observations
1         | 20                       | 29                          | 28 (after vents cleaned)        | N wt 19 gms; burn mark
2         | 19                       | 27                          | 26 (ditto, vent depth increased)| N wt 19 gms; burn mark reduced
3         | 18                       | 26                          | 26 (vent depth increased)       | N wt 19 gms; burn mark reduced
4         | 16                       | 25                          | 25 (ditto)                      | N wt 19 gms; no burn mark
5         | 15                       | 25                          | 25 (ditto)                      | N wt 19 gms; no burn mark

Table 9: Reduction in Defects
6.2.6 Results
It was found that the burn mark was basically caused by the venting problem; the other factors were not much involved. It has now been established that it is the operator's job to check all these parameters before starting production of the part, especially the condition of the vents, and to bring it to the notice of the quality engineer if there is any sort of irregularity in the operation of the machine or if the quality of the part is not up to the standards.
6.2.7 Control
The final step of DMAIC methodology is to institutionalize product/process improvement and to monitor ongoing measures
and actions to sustain improvements. Accordingly, the key actions taken during control Phase were:
• A final confirmation of process capability
• Stakeholders' issues revisited
• Agreement on line inspection frequency
To control the defect, the decided action plan is followed and the improvement in PPM is shown with the help of control charts.
                 | Oct 08 | Nov 08 | Dec 08 | Jan 09
Total Production | 15796  | 15796  | 15796  | 15796
Rejection        | 10     | 8      | 8      | 7
PPM              | 633    | 506    | 506    | 317

Table 10: Total Rejection of Specified Period
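A control chart of the kind mentioned in the control phase can be sketched as a p-chart. The 3-sigma binomial limits below are the standard construction; treating the monthly counts of Table 10 as equal-size samples of 15,796 parts is our simplifying assumption:

```python
import math

def p_chart_limits(defectives: list[int], sample_size: int):
    """3-sigma control limits for the fraction defective (p-chart)."""
    p_bar = sum(defectives) / (len(defectives) * sample_size)
    width = 3 * math.sqrt(p_bar * (1 - p_bar) / sample_size)
    # The lower limit is clipped at zero, since a fraction cannot be negative.
    return max(p_bar - width, 0.0), p_bar, p_bar + width

# Monthly rejections from Table 10, each out of 15,796 parts.
lcl, centre, ucl = p_chart_limits([10, 8, 8, 7], 15_796)
print(round(centre * 1e6))  # centre line, ~522 PPM
print(round(ucl * 1e6))     # upper control limit, ~1068 PPM
```

All four monthly PPM values in Table 10 fall inside these limits, so the chart would show a process in statistical control while the improvement steadily pulls the points toward the lower limit.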
With the Six Sigma implementation, the PPM level of the company was reduced from 443 ppm to 317 ppm (shown in Table 10), and the cost of poor quality was also reduced.
Fig. 10: Total Rejection during Oct-08 to Jan-09 (control chart of Total Production, Rejection and PPM)
Implementation of Six Sigma drastically reduced the total cost of the oil catch cover from Rs. 2165.68 in Dec-08 to Rs. 1356.76, as shown in Table 11.
           | Oct 08  | Nov 08  | Dec 08  | Jan 09
Rejection  | 10      | 8       | 8       | 5
PPM        | 633     | 506     | 506     | 317
Cost (Rs.) | 2709.24 | 2165.68 | 2165.68 | 1356.76

Table 11: Total Cost of Oil Catch Cover from Oct-08 to Jan-09
Fig. 11: Bar chart showing the decline of Rejections, PPM and Cost from Oct-08 to Jan-09
Six Sigma has proven that its application brings a big boost in achieving improvement targets in big as well as small industries. This example shows the practical application of Six Sigma which we performed at a lean industry, Lumax Automotive Systems Limited, Manesar, Gurgaon. During the process of implementing Six Sigma in one of their processes we managed to reduce their defects, PPM and cost of poor quality, improved their Six Sigma level, and provided them with check lists for carrying out production of that particular part so that defects can be controlled in future.
7. Benefits of Implementing Six Sigma in Lean
Demanding customers, higher demand variability and the inability to balance push and pull environments pose challenges to which an integrated Lean Six Sigma approach responds. The full benefits of Lean Six Sigma will only be realized when it is applied at both the strategic and the operational level. The Six Sigma approach in Lean makes manufacturing operations more efficient by eliminating waste and reducing variation, thus enhancing value.
Fig. 12: Lean Six Sigma Cycle
CONCLUSION
The Six Sigma approach is integral to any successful Lean manufacturing implementation. Once Lean manufacturing techniques eliminate wasteful activities, Six Sigma offers a sequential problem-solving procedure to continuously measure, analyse, improve and control processes. Working together, the two methodologies can deliver dramatic improvement in productivity. It is expected that the suggested approach will help companies create a framework for effective measuring and monitoring of process performance. The method will provide management with clear-cut knowledge regarding the strengths and weaknesses of the process.
REFERENCES
[1]
Hendricks, C. & Kelbaugh, R., Implementing Six Sigma at GE, the Journal of Quality and Participation, vol.21, No.4,
pp. 48-53,(1998).
[2]
Vincent W.S, Six Sigma Paradigm Shift, International Journal of Six Sigma and Competitive Advantage,Vol.3, Issue
4,pp.317-332, (2007).
[3]
Dasgupta, A, Going the six sigma way, India Management, 42 (3) 24-31,(2003)
[4]
Henderson, K.M. & Evans, J.R., Successful Implementation of Six Sigma: Benchmarking General Electric Company,
Benchmarking: An International Journal, Vol.7, No.4, pp. 260-281,(2000).
[5]
George M L, Lean Six Sigma: Combining six sigma quality with lean speed, McGraw-Hill,(2002)
[6]
Bossert J, Lean and Six Sigma – Synergy Made in Heaven, Quality Progress, vol. 36, no. 7, pp. 31-32, (2003).
[7]
Hoerl, R.W., Six Sigma and the Future of Quality Profession, Quality Progress, June, pp 35-42, (1998)
[8]
Lahiri, S The Enigma of six sigma, Business Today, (18), 60-69,(1999)
[9]
Snee, R.D., Getting better all the time: the future of business improvement methodology, International Journal of Six
Sigma and Competitive Advantage, Vol.3, Issue 4, pp. 305-316, (2007).
[10] Mcmanus, H. L.: Value stream Analysis and Mapping for product development, 23rd ICAS Congress, Toronto Canada,
(2002)
[11] Roy R K, A Primer on the Taguchi Approach, Van Nostrand Reinhold,(1990)
[12] Smith B, Lean and Six Sigma – A One-Two Punch, Quality Progress, vol. 36, no. 4,pp. 37-41,(2003)
[13] Taguchi G, Taguchi Methods: Design of Experiments, ASI Press, (1993).
[14] Black J T and Hunter S L, Lean Manufacturing System and Cell Design, SME,(2003).
[15] Bossert J, Lean and Six Sigma-Synergy Made in Heaven, Quality Progress, vol.36, no.7, pp. 31-32, (2003).
ISSN No.:0975-3389
CRYSTALLIZATION KINETICS OF NEW SEALANT MATERIAL
FOR SOFC
Neha Gupta*
Assistant Professor, Department of Electronics & Communication Engineering
Dronacharya College of Engineering Gurgaon-123506, India
Email:[email protected]
Dimple Saproo
Assistant Professor, Department of Electronics & Communication Engineering
Dronacharya College of Engineering Gurgaon-123506, India
Email:[email protected]
Rita Yadav
VIII Semester, Department of Electronics & Communication Engineering
Dronacharya College of Engineering Gurgaon-123506, India
Email:[email protected]
____________________________________________________________________________________________________
ABSTRACT
The solid oxide fuel cell (SOFC) is very efficient and is in demand as a futuristic energy source. The planar design of the SOFC requires a sealant at the edges of the cell to prevent fuel leakage (H2, CH4) and air mixing at its working temperature. Glasses and glass-ceramics are the most suitable and compatible sealing materials for the planar SOFC design due to their chemical and thermal inertness.
At the working temperature they become viscous and seal the edges properly without reacting with the other cell materials. These glasses must be properly compatible with the other components and must maintain their stability at the working temperature of the cell. In the present study the synthesis and structural analysis of a 45SiO2 4Al2O3 40MgO 5B2O3 6Y2O3 glass were carried out. The glasses were made by taking appropriate proportions of each oxide constituent. These were mixed properly with a mortar and pestle in an acetone medium for two hours to ensure homogeneity. The mixed constituents were placed in an alumina crucible and melted at 1550 °C in a molybdenum disilicide resistance-heating furnace. The glasses were splat-quenched after melting. Some pieces of the sample were heat treated at 1000 °C for 1 hr, 9 hr and 100 hrs to observe structural transformation. The samples were observed under optical and scanning electron microscopes (SEM). The crystal structure was analysed by the X-ray diffraction technique. The present paper describes in detail the structural analysis of this glass.
Keywords: Solid oxide fuel cell, Sealants, Scanning electron microscope, X-ray diffraction
1. INTRODUCTION
Solid oxide fuel cells convert the chemical energy of a fuel such as hydrogen into electricity by electrochemical oxidation of the fuel at around 800 to 1000 °C [1, 2]. Such fuel cells consist of three electrochemically active ceramics: a porous, strontium-doped lanthanum manganite (LSM) cathode; a dense, yttria-stabilised zirconia (YSZ) electrolyte; and a porous nickel-YSZ cermet anode. Several such cells are stacked in series to build a planar solid oxide fuel cell (SOFC). The cells are separated from each other by fuel and oxidant flow fields and bipolar plates of dense strontium- or calcium-doped lanthanum chromite (LC) or a corrosion-resistant metal alloy [3].
In such planar SOFCs, gas-tight seals must be formed along the edges of each cell stack and the gas manifolds. Conventional sealing materials cannot meet the above-mentioned requirements due to the high working temperature (800-1000 °C) of the SOFC and the steep change in the partial pressure of oxygen (2 × 10^-1 to 10^-13 Pa) during fuel cell operation, and thus there is a need to develop new sealants.
An effective sealant must form gas-tight seals to the cell and stack components. It must hold the cell and the stack components together during thermal cycling [1]. Within the fuel cell stack, the sealant must be compatible with the thermal expansion behaviour of the fuel cell components, i.e. it must have a coefficient of thermal expansion in the range of 8-13 × 10^-6/°C. At the stack-to-manifold junction the much longer sealing distances require a compatible sealant that can tolerate a relatively large mismatch in coefficient of thermal expansion.
*Corresponding Author
A glass or glass-ceramic sealant that is a viscous fluid at the cell operating temperature can provide a mechanism for tolerating the
coefficient of thermal expansion mismatch. The glass must have a glass transition temperature (Tg) below the
cell operating temperature. When the structure cools to room temperature, significant stresses begin to develop only as the
temperature drops below Tg. Consequently, the total stresses due to thermal expansion mismatch are considerably less than if
the stresses began to develop at the operating temperature. To minimize the stresses produced, Tg should be as low as possible.
Conversely, the viscosity of the sealant at the fuel cell operating temperature must be greater than 10^13 Pa·s; below that
viscosity, materials flow readily and the sealant cannot form an effective barrier to gas flow.
Glasses and glass-ceramics, in principle, meet most of the requirements of an ideal sealant. By suitably choosing the constituents of the
glasses in stoichiometric proportion, a glass can be developed as a sealant material.
Glass-ceramics, which can be prepared by controlled crystallization of glasses, possess superior mechanical and thermal
properties. These glasses can also have very different thermal expansion coefficients (TEC) owing to the existence of
different crystalline phases in the matrix. In order to develop a suitable sealant, it is therefore necessary to understand the
crystallization kinetics of the glass-ceramics from the point of view of their sealing properties as well as their chemical
interaction when in contact with other components of the cell.
In the present study, a series of SiO2-Al2O3-Y2O3-MgO-B2O3 glasses of different compositions was synthesized. A small
amount of B2O3 is added to reduce the glass transition temperature Tg. Al2O3 is known to prevent the rapid crystallization
of glass during heat treatment, and also prevents the formation of the cristobalite phase [2, 4]. Y2O3 increases
the thermal expansion coefficient [5]. In the MgO-Al2O3-SiO2 system the formation of different phases takes place during
crystallization; one of the possible phases is cordierite (Mg2Al4Si5O18) [6]. The cordierite phase, however, is
detrimental for the SOFC stack since its thermal expansion coefficient (2 x 10^-6/°C) is very low compared to the other
components of the fuel cell [2]. The crystallization behavior of the above-mentioned glasses was investigated using thermogravimetric
analysis (TGA), X-ray diffraction (XRD), and scanning electron microscopy (SEM).
2. EXPERIMENTAL
The glass compositions selected for the present study are given in Table 1. The glasses were prepared by taking stoichiometric
amounts of the different constituents in the form of oxides and carbonates. Each batch was prepared by mixing the
appropriate mole fractions of the initial ingredients in a wet medium (acetone) using a mortar and pestle for two
hours. The mixed powders were dried and melted in an alumina crucible at 1550 °C. The melt was poured onto a
flat copper plate and quenched with another copper plate in air to obtain thick flakes. The prepared glasses were
characterized by X-ray diffraction to confirm their glassy nature as well as to identify the crystalline phases formed after heat
treatment at 1000 °C for 1, 9 and 100 hours. Glass stability and weight loss were checked by TGA (Mettler)
from room temperature to 1200 °C at a rate of 10 °C/min in an N2 atmosphere. Microstructural studies were carried
out on GS4 samples heat-treated at 1000 °C for 1, 9 and 100 hours using SEM.
Sample label   SiO2   Al2O3   Y2O3   MgO   B2O3
GS1            45     10      0      40    5
GS2            45     8       2      40    5
GS3            45     6       4      40    5
GS4            45     4       6      40    5
Table 1: Sample labels with constituents in mol %
3. RESULTS AND DISCUSSION
These prepared glasses of all the above four compositions were found to be amorphous in nature as all of them exhibited a
broad peak in the X-ray diffraction pattern. A typical XRD diffraction of GS4 sample is shown in Fig. 1
These glass samples were further subjected to heat treatment at 1000 °C for 1, 9 and 100 hours to understand the
nucleation and growth of the different phases with aging time at high temperature. The X-ray diffraction patterns of all the heat-treated glasses exhibit the formation of various crystalline phases, which were identified using powder diffraction files (PDF).
Fig. 1: XRD pattern of the glass sample GS4
All the glass-ceramics in which Y2O3 was added at the cost of Al2O3 exhibit only the formation of the MgSiO3 and Mg2SiO4
phases after heat treatment. However, apart from these phases, sample GS1 additionally exhibited the Mg0.6Al1.2Si1.8O6
solid solution phase [PDF file no. 75-1568]. The volume fraction of the Mg0.6Al1.2Si1.8O6 phase was observed to increase with
increasing heat treatment duration at the cost of the MgSiO3 phase, as is evident in Fig. 2.
This means that aging leads to growth of the Mg0.6Al1.2Si1.8O6 phase and a decrease of the MgSiO3 phase. According to
Zdaniewski, in MgO-SiO2 based glasses the early stage of crystallization occurs by formation of a SiO2-rich solid solution
phase, and subsequently, in the later stage, an isomorphic substitution of Mg2+ and Al3+ occurs at different sites of the unit cell [7].

Fig. 2: XRD pattern of glass sample GS1 after 1, 9 and 100 hours heat treatment

This substitution takes place in such a way that the composition approaches that of the Mg0.6Al1.2Si1.8O6 phase, apart
from the formation of the MgSiO3 and Mg2SiO4 phases. The XRD pattern of this particular sample shows a shift of the XRD lines
to lower angles with increasing heat treatment duration, indicating a change in lattice parameter (Fig. 2). This may be
attributed to the Mg0.6Al1.2Si1.8O6 phase not being a pure phase but a solid solution phase. Initially this phase nucleates as a Si-rich
phase, and at later stages of the heat treatment Al3+ ions occupy Si4+ ion sites. Since Al3+ ions are larger than Si4+ ions, this
would be responsible for the enhancement of the lattice parameter of the Mg0.6Al1.2Si1.8O6 phase. However, no peak shift
could be observed for the MgSiO3 and Mg2SiO4 phases, except for a variation in relative peak intensity. This can be
understood from the following chemical reactions:
3MgO + 2SiO2 → MgSiO3 + Mg2SiO4                      (Stage I)
3MgSiO3 + 3SiO2 + 2Al2O3 → MgSiO3 + Mg2Al4Si5O18     (Stage II)
It is obvious from the above reactions that in the beginning the nucleation of a silicon-rich solid solution phase takes place, and at
a later stage the migration of ions takes place and the composition shifts to Mg2Al4Si5O18. However, glasses containing
Y2O3 exhibit only the presence of the MgSiO3 and Mg2SiO4 phases (Fig. 3). Since the Mg2SiO4 phase is thermodynamically more stable
than MgSiO3, the volume fraction of the Mg2SiO4 phase keeps increasing at the expense of the MgSiO3 phase in
all these samples, as is evident in Fig. 3. The suppression of the cordierite phase in the GS2, GS3 and GS4 samples can be explained
on the basis of the presence of an extra cation, yttrium, which increases the competition among the other cations and may thereby
prevent the formation of the Mg0.6Al1.2Si1.8O6 phase and excessive crystallization [8]. However, the field strength of the Y3+
ion is less than that of the Al3+ ion in the glass composition, as shown in Table 2.
Fig. 3: XRD patterns of glass samples GS1, GS2, GS3 and GS4 after 9 hours heat treatment
The TGA spectra of all four glass samples are given in Fig. 4. The addition of Y2O3 significantly increases the stability of the
samples. In sample GS2, the addition of 2% Y2O3 at the cost of Al2O3 significantly decreases the percentage weight loss relative
to sample GS1. A further increase in the Y2O3 content was found to further improve the stability of the samples;
in the subsequent samples, the addition of Y2O3 is seen to stabilize the glasses and thereby
decrease the weight loss. This might be because of partial crystallization of the glasses during heat treatment. This
consideration is also supported by the fact that the width of the XRD peaks increases with increasing Y2O3 content, as
can be seen from Fig. 3.
Fig. 4: TGA spectra of GS1, GS2, GS3 and GS4 samples
Phase separation in these glasses occurs when a metal cation other than Si4+ has a certain degree of ionic field strength, as
indicated in Table 2.
Element   Valency (Z)   Ionic radius (Å)   Ionic distance rc+ra (Å)   Field strength Z/(rc+ra)^2
Si        4             0.39               1.71                       1.37
Y         3             1.06               2.38                       0.53
Al        3             0.57               1.89                       0.84
B         3             0.20               1.52                       1.29
Mg        2             0.78               2.10                       0.45
Table 2: Field strength of Y, Si, Al, B and Mg cations
However, it is known that if there are several kinds of cations having field strengths high enough to attract oxygen ions,
then phase separation does not occur, owing to competition between the cations themselves [9]. In the present case, however, phase
separation could not be observed even though the Al2O3 content is more than 5 mol %, as reported in earlier studies on similar
systems [7]. The tendency of Al2O3 to induce phase separation depends upon the occupancy of Al3+ cations in different
interstitial sites. Al3+ can take a coordination number of four or six with oxygen, giving rise to either tetrahedral
AlO4 or octahedral AlO6. When it is tetrahedrally coordinated, it takes part as a network former; when
the coordination number changes to six, it works as a network modifier [6]. In the present case, the Al3+ ions may be tetrahedrally
coordinated, so a second exothermic peak could not be observed in these samples. This indicates that phase separation depends not only
on the Al2O3 content but also on the processing conditions of the samples. It is very interesting to note that the addition of
Y2O3 to the starting material increases Tg and Tc with decreasing Al2O3/Y2O3 ratio. This might be attributed to the Y3+
cation coordinating tetrahedrally and acting as a glass former. Apart from this, the decrease in Al2O3 content from GS1
to GS4 is also responsible for an increase in Tg and Tc in all the glass samples studied. In addition, yttrium in the
hexavalent state acts as a network former. However, the field strength of the Y3+ ion is less than that of the Al3+ ion. Lahl et al. [2] suggested
that when a hexavalent cation is added to MgO-based aluminosilicate glasses, it enhances the activation energy of these
systems.
In order to understand the properties of these glasses during long exposure at high temperature, it is essential to
understand the variation of the microstructure of these samples during heat treatment. Of all the glasses, GS4 showed good
stability in the TGA analysis, so a detailed SEM study was carried out on this sample. Fig. 5 (a-f) shows SEM micrographs of
the GS4 sample after 1, 9 and 100 hours of heat treatment, respectively. The important feature in all these micrographs is the
uniformly distributed needle-like precipitates, which can be seen throughout the structure. The flow pattern existing in all the
samples, which is more pronounced at higher magnification, indicates that the glass has undergone a high degree of
undercooling. In general, all the micrographs exhibit the simultaneous growth of two phases during heat treatment, which is
evident throughout the structure in all the samples.
Fig. 5: SEM micrographs of the GS4 sample (a & b) after 1 hour, (c & d) after 9 hours and (e & f) after 100 hours of heat treatment at 1000 °C
Fig. 5 (a & b) shows SEM micrographs of the GS4 sample aged at 1000 °C for 1 hour. It is evident from the micrographs that two
types of phases appear during heat treatment: one a fine, long needle-type structure and the other a coarser, blunt needle, which is
clearer in the higher-magnification micrograph (Fig. 5b). Initially the MgSiO3 phase nucleates as a fine needle-like structure.
During aging, this finely nucleated structure starts collapsing and coarsens into a bigger structure. The structure appears to be of a
composite type, one part comprising MgSiO3 and the other Mg2SiO4. As the aging time proceeds, the volume fraction of the
Mg2SiO4 phase increases. Fig. 5 (c & d) shows micrographs of the sample aged for nine hours, where one can see the transition
of the MgSiO3 phase into the Mg2SiO4 phase; the fine needle-like structure represents the MgSiO3 phase and the coarse, blunt, bright
phase the Mg2SiO4 phase (Fig. 5d). The process of this growth is similar to the well-known Ostwald ripening
phenomenon [10].
In order to understand in detail the growth of one phase at the expense of another with the passage of time at the same heat treatment
temperature, the samples were further heat-treated at 1000 °C for 100 hours. Fig. 5 (e & f) shows typical
micrographs of such a sample. The low-magnification micrograph reveals a banded morphology [5, 11] which exists at the
edges as well as at the center of the sample. The micrographs further support our view that growth of the Mg2SiO4 phase occurs as
the conditions (time and temperature) become more favorable for it [12]. A high-magnification micrograph taken from a
brighter area (Fig. 5f) reveals the morphological features of the Mg2SiO4 phase, which gets rearranged. The existence of a more
pronounced banded structure (Fig. 5e) indicates that the volume fraction of the second phase (Mg2SiO4) increases
as the duration of heat treatment increases. The nucleation and growth of this typical structure all along the edges of the
sample, with a fine band, indicates that the second phase has nucleated from defect sites. An important
feature observed in the GS4 samples heat-treated for 9 and 100 hours is the two types of structural features, one
corresponding to a very fine (needle-like) structure and the other to a blunt type of structure (Fig. 5d). The coherent matching between
these two phases and the flow pattern observed indicate that the earlier-nucleated MgSiO3 phase is being transformed to
another phase, Mg2SiO4, of the same family, having the same crystallographic directions of growth [13, 14]. The X-ray
analysis indicates that a sample heat-treated for one hour contains a higher volume fraction
(relative intensity 99%) of the MgSiO3 phase. The microstructural features observed and the X-ray analysis done for the glasses
indicate that the Mg2SiO4 phase is more stable than MgSiO3. Since these two phases belong to the same category, the
growth of the thermodynamically stable Mg2SiO4 phase with increasing aging time at the expense of the MgSiO3 phase is obvious.
CONCLUSION
It is possible to develop a proper glass sealant material in the SiO2-Al2O3-Y2O3-MgO-B2O3 system by suitably adjusting the constituents
of the glasses. The structural analysis indicates that the glasses made were initially amorphous and subsequently transformed into
crystalline material with the formation of the MgSiO3 and Mg2SiO4 phases. The addition of Y2O3 completely suppresses the
formation of the cordierite phase in the Y2O3-containing glasses and also enhances their stability. The volume fraction of the
Mg2SiO4 phase was observed to increase with increasing heat treatment time. The SEM and XRD studies clearly indicate
that the stable Mg2SiO4 phase grows at the expense of the MgSiO3 phase. The stability of the samples increases
with the addition of Y2O3. The crystallization and glass transition temperatures are also found to increase with increasing
yttrium content.
REFERENCES
[1] Ley K.L., Krumpelt M., Meiser T.R., Bloom I., J. Mater. Res., Vol. 11, p. 1449 (1996).
[2] Lahl N., Singh K., Singheiser L., Hilpert K., Bahadur D., J. Mater. Sci., Vol. 35, p. 3089 (2000).
[3] Minh N.Q., J. Am. Ceram. Soc., Vol. 76, p. 563 (1993).
[4] Lahl N., Bahadur D., Singh K., Singheiser L., Hilpert K., J. Electrochem. Soc., Vol. 149, p. A607 (2002).
[5] Zimmermann M., Carrard M., Kurz W., Acta Metall., Vol. 37, p. 3305 (1989).
[6] Lara C., Pascual M.J., Durán A., J. Non-Cryst. Solids, Vol. 348, p. 149 (2004).
[7] Zdaniewski W., J. Am. Ceram. Soc., Vol. 58, p. 163 (1975).
[8] Sung Y.-M., J. Mater. Res., Vol. 17, p. 517 (2002).
[9] Bahadur D., Lahl N., Singh K., Singheiser L., Hilpert K., J. Electrochem. Soc., Vol. 151, p. A558 (2004).
[10] Madras G., McCoy B.J., J. Chem. Phys., Vol. 115, p. 6699 (2001).
[11] Gremaud M., Carrard M., Kurz W., Acta Metall. Mater., Vol. 38, p. 2587 (1990).
[12] Eichler K., Solow G., Otschik P., Schaffrath W., J. Eur. Ceram. Soc., Vol. 19, p. 1101 (1999).
[13] Zimmermann M., Carrard M., Gremaud M., Kurz W., Mater. Sci. Eng. A, Vol. 134, p. 1278 (1995).
[14] Schwickert T., Sievering R., Geasee P., Conradt R., Mat.-wiss. u. Werkstofftech., Vol. 33, p. 363 (2002).
IMPLEMENTATION OF FUNCTIONAL REPUTATION BASED
DATA AGGREGATION FOR WIRELESS SENSOR NETWORK
Manisha Saini*
Assistant Professor, Department of Computer Science & Engineering
Dronacharya College of Engineering, Gurgaon -123506 India
E-mail:[email protected]
Kevika Singla
Senior Software Engineer
Royal Bank of Scotland, Gurgaon-122001, India
E-mail: [email protected]
Deepak Gupta
VIII Semester, Department of Information Technology
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
______________________________________________________________________________________________________________________________
ABSTRACT
In wireless sensor networks, malicious sensor nodes send false data reports to distort aggregation results. Existing trust systems rely on a general reputation to
mitigate the effect of this attack. This paper presents a novel reliable data aggregation protocol, called RDAT, based on the concept of functional reputation. Functional
reputation enables data aggregators to evaluate each type of sensor node action using a respective reputation value, thereby increasing the accuracy of the
trust system. The objective is to implement a data aggregation protocol based on functional reputation that evaluates each type of sensor node action using a
respective reputation value.
Keywords: RDAT, Reputation and Trust
__________________________________________________________________________________________________________________________
1. INTRODUCTION
The emerging field of wireless sensor networks combines sensing, computation, and communication into a single tiny device.
Through advanced mesh networking protocols, these devices form a sea of connectivity that extends the reach of cyberspace
out into the physical world. As water flows to fill every room of a submerged ship, mesh networking connectivity will
seek out and exploit any possible communication path by hopping data from node to node in search of its destination. While
the capabilities of any single device are minimal, the composition of hundreds of devices offers radically new technological
possibilities.
The power of wireless sensor networks lies in the ability to deploy large numbers of tiny nodes that assemble and configure
themselves. Usage scenarios for these devices range from real-time tracking [8], to monitoring of environmental conditions,
to ubiquitous computing environments, to in situ monitoring of the health of structures or equipment. While often referred to as
wireless sensor networks, they can also control actuators that extend control from cyberspace into the physical world.
2. WIRELESS SENSOR ARCHITECTURE
Recent years have seen tremendous advances in the design and applications of wirelessly networked and embedded sensors.
Wireless sensor nodes are typically low-cost, low-power, small devices equipped with limited sensing, data processing and
wireless communication capabilities, as well as power supplies [1]. They leverage the concept of wireless sensor networks
(WSNs), in which a large (possibly huge) number of collaborative sensor nodes can be deployed. As an outcome of the
convergence of micro-electro-mechanical systems (MEMS) technology, wireless communications, and digital electronics,
WSNs represent a significant improvement over traditional sensors. In fact, the rapid evolution of WSN technology has
accelerated the development and deployment of various novel types of wireless sensors, e.g., multimedia sensors. Fulfilling
Moore's law, wireless sensors are becoming smaller and cheaper, and at the same time more powerful and ubiquitous.
*Corresponding Author
Fig. 1: Wireless sensor architecture: a sensing unit (sensors 1...N), a processing unit (microcontroller and storage), a communication unit (transceiver: transmitter and receiver), and a power supply
As shown in Fig. 1, there are typically four main components in a sensor node: a sensing unit, a processing unit, a
communication unit, and a power supply. The sensing unit may be composed of one or more sensors and analog-to-digital
converters (ADCs). Sensors are hardware devices that measure some physical data of the monitored system's state, such as
temperature, humidity, pressure, or speed. The analog signals produced by the sensors are digitized by the ADCs and sent to the
processing unit for further processing. Within the processing unit, there is a microcontroller associated with a small storage
unit including on-chip memory and flash memory. The processing unit is responsible for performing tasks, processing data,
and controlling the functionality of the other components of the sensor node. A wireless sensor connects with other nodes via the
communication unit, where a transceiver encompasses the functionality of both transmitter and receiver. The wireless
transmission medium may be radio frequency, optical (laser), or infrared. At present, the main type of power supply for
wireless sensor nodes is batteries, either rechargeable or non-rechargeable. Energy is consumed for sensing, data processing,
and communication. For small wireless sensor nodes (with limited computing capacity), data communication expends the
majority of the energy, while sensing and data processing are much less energy-consuming.
The concept of wireless sensor networks is based on a simple equation:
Sensing + CPU + Transceiver = Thousands of potential applications
3. RELIABLE DATA AGGREGATION PROTOCOL (RDAT)
The basic idea behind protocol RDAT [2] is to evaluate the trustworthiness of sensor nodes by using three types of functional
reputation, namely sensing, routing, and aggregation. Sensor nodes monitor their neighborhood to obtain first-hand
information regarding their neighboring nodes. For the sensing, routing, and aggregation tasks, each sensor node Ni records good
and bad actions of its neighbors in a table referred to as the functional reputation table. Functional reputation tables are
exchanged among sensor nodes to be used as second-hand information during trust evaluation. The functional reputation
tables are piggybacked on other data and control packets in order to reduce the data transmission overhead. When sensor
node Ni needs to interact with its neighbour Nj, Ni evaluates the trustworthiness of Nj using both first-hand and second-hand
information regarding Nj. The functional reputation for aggregation (R^aggregation_i,j) is needed by sensor nodes to evaluate the
trustworthiness of data aggregators. The functional reputations for routing (R^routing_i,j) and sensing (R^sensing_i,j) are used by data
aggregators to increase the security and reliability of the aggregated data. Functional reputation values are quantified using
beta distributions of node actions, as explained next.
3.1 Beta reputation system
Owing to the success of Bayesian formulations in detecting arbitrary misbehaviour of sensor nodes, we select a Bayesian
formulation, namely the beta reputation system, for trust evaluation. In this section, before giving the details of protocol RDAT,
we present brief information about the beta reputation system. Posterior probabilities of binary events can be represented as beta
distributions, indexed by the two parameters α and β. The beta distribution f(p|α,β) can be expressed using the
gamma function Γ as:
f(p|α,β) = (Γ(α+β) / (Γ(α)Γ(β))) p^(α−1) (1−p)^(β−1),
0 ≤ p ≤ 1, α > 0, β > 0
The probability expectation value of the beta distribution is given by E(p) = α/(α+β). To show how the beta function can be
employed in sensor networks, let us consider the task of target detection as an action with two possible outcomes, namely
“correct” and “false”. Let r be the observed number of “correct” target detections and s the observed number of “false”
target detections by a sensor node. The beta function takes the integer numbers of past observations of “correct” and “false”
target detections to predict the expected frequency of “correct” target detections by that sensor node in the future, which is
achieved by setting:
α = r + 1, β = s + 1, where r, s ≥ 0.
The variable p represents the probability of “correct” target detections, and f(p|α,β) represents the probability that p has a
specific value. The probability expectation value is given by E(p), which is interpreted as the most likely value of p. Hence, a
sensor node’s reliability can be predicted by the beta distribution function of its previous actions, as long as the actions are
represented in binary format.
3.2 Computing functional reputation and trust
The functional reputation value R^X_i,j is computed using the beta density function of sensor node Nj’s previous actions with respect
to function X. Trust T^X_i,j is the expected value of R^X_i,j. Let us take the routing task as an example. If sensor node Ni counts
the numbers of good and bad routing actions of Nj as α and β, respectively, then Ni computes the functional reputation
R^routing_i,j about node Nj as Beta(α+1, β+1). Following the definition of trust, T^routing_i,j is calculated as the expected value
of R^routing_i,j:

T^routing_i,j = E(Beta(α+1, β+1)) = (α+1) / (α+β+2)

This equation shows that the expected value of the beta distribution is essentially the fraction of events that have had a good
outcome. Hence, the functional reputation value for routing is given by the ratio of good routing actions to total routing actions observed.
This is an intuitive decision and it justifies the use of the beta distribution. In the above formula, R^routing_i,j represents node
Ni’s own observations about node Nj; in other words, it involves only first-hand information. Reputation systems that depend on
only first-hand information have a very large convergence time. Hence, second-hand information is desirable in order to
confirm first-hand information. In protocol RDAT, neighboring sensor nodes exchange their functional reputation tables to
provide second-hand information, and this information is included in the trust evaluation. Let us assume that sensor node Ni
receives second-hand information about node Nj from a set of nodes N, and that Sinfo(r_k,j) represents the second-hand
information received from node Nk (k ∈ N). Ni already has previous observations about Nj as α_i,j and β_i,j. Further assume
that, in a period of Δt, Ni records r_i,j good routing actions and s_i,j bad routing actions of Nj. Then, Ni computes the trust
T^routing_i,j for Nj as follows:

α^routing_i,j = ν·α^routing_i,j + r_i,j + Σ Sinfo(r_k,j)
β^routing_i,j = ν·β^routing_i,j + s_i,j + Σ Sinfo(s_k,j)
T^routing_i,j = E(Beta(α^routing_i,j + 1, β^routing_i,j + 1))
where ν < 1 is the aging factor that allows reputation to fade with time. The integration of first- and second-hand information into a
single reputation value has been studied by mapping it to Dempster-Shafer belief theory. We follow a similar approach and use
the reporting node Nk’s reputation to weight down its contribution to the reputation of node Nj. Hence, the second-hand
information Sinfo(r_k,j) is defined as:

Sinfo(r_k,j) = (2·α_i,k · r_k,j) / ((β_i,k + 2)·(r_k,j + s_k,j + 2) + 2·α_i,k)
Sinfo(s_k,j) = (2·α_i,k · s_k,j) / ((β_i,k + 2)·(r_k,j + s_k,j + 2) + 2·α_i,k)
The idea here is to give greater weight to nodes with high trust and never give a weight above 1 so that second-hand
information does not outweigh first-hand information. In this function, if αi,k = 0 the function returns 0, therefore node Nk’s
report does not affect the reputation update.
3.3 Secure and Reliable Data Aggregation
In protocol RDAT, data aggregation is performed periodically in certain time intervals. In each data aggregation session,
secure and reliable data aggregation is achieved in two phases. In the first phase, before transmitting data to data aggregators,
each sensor node Ni computes the R^aggregation_i,j value for its data aggregator Aj and evaluates the trustworthiness of Aj. If the
trustworthiness of Aj is below a predetermined threshold, then Ni does not let Aj aggregate its data. To achieve this, Ni
encrypts its data using the pairwise key that is shared between the base station and Ni and sends this encrypted data to the
base station along with a report indicating that Aj may be compromised. Based on the number of reports about Aj over time, the
base station may decide that Aj is a compromised node and should be revoked from the network. In the second phase of the data
aggregation session, the following Reliable Data Aggregation (RDA) algorithm is run by data aggregators. Algorithm RDA
depends on the R^sensing_i,j and R^routing_i,j functional reputation values to mitigate the effect of compromised sensor nodes on
aggregated data.
Algorithm RDA
Input: Data aggregator Aj; Aj’s neighboring nodes {N1, N2, ..., Ni}; trust values of the neighboring nodes computed by Aj, {T^sensing_j,1, ..., T^sensing_j,i} and {T^routing_j,1, ..., T^routing_j,i}.
Output: Aggregated data Dagg.
Step 1: Aj requests each Ni to send its data for data aggregation.
Step 2: Sensor nodes {N1, N2, ..., Ni} transmit data {D1, D2, ..., Di} to Aj.
Step 3: Aj updates the trust values T^sensing_j,i and T^routing_j,i of each Ni based on the first- and second-hand information regarding Ni.
Step 4: Aj weights the data Di of sensor node Ni using T^sensing_j,i and T^routing_j,i.
Step 5: Aj aggregates the weighted data to obtain Dagg.
Since compromised nodes send false sensing reports in order to deceive the base station, Algorithm RDA considers the
trustworthiness of sensor nodes with respect to the sensing function to increase the reliability of aggregated data. To achieve this,
Aj weights the data of each sensor node Ni with respect to the sensor node’s trust values T^sensing_j,i and T^routing_j,i. By
weighting sensor data based on trust levels, data aggregators reduce the compromised sensor nodes’ effect on the aggregated
data. The reason is that a compromised node Ni is expected to have low T^sensing_j,i and T^routing_j,i values.
3.4 Graph
[Graph: aggregated data (0-160) plotted against the aging factor (0.2-0.9)]

Aging factor vs aggregated data graph
3.5 Flowgraph
The flow of the protocol is as follows:
Step 1: Generate the functional reputation table for all the nodes.
Step 2: Divide the nodes into clusters.
Step 3: Select a data aggregator for each cluster.
Step 4: Each cluster head sends a request for data to its cluster.
Step 5: Each node checks the trustworthiness of the data aggregator and sends its data accordingly.
Step 6: The data aggregator calculates trust values for routing and sensing for each node and weights the data of that node accordingly.
Step 7: Each cluster head aggregates the data and sends it to the base station.
Step 8: Output.
4. IMPLEMENTATION
The protocol is implemented in C. A structure called NODE describes a node; it contains the functional reputation table as an array, along with the node id, the first and last nodes in the cluster, and the node's data. One node is made the base station and the remaining nodes are divided into clusters. A data aggregator for each cluster is selected using the Random() function. The Random() function is then used again to generate values for the functional reputation table and the data for each node. After this, the algorithm RDA is executed. Each data aggregator requests all other nodes in its cluster to send it their data. On receiving this message, each node in the cluster calculates the trustworthiness of the data aggregator; only if it is above the threshold level does the node send its data, otherwise it increments the value in the array of errors for that data aggregator.
The data aggregator gathers the second-hand value for each node and, combining it with the first-hand value, calculates the routing and sensing trust values for each node. Using these trust values, it weights the data received from each node accordingly. There is also a factor "v", the aging factor, which is used in the calculation of trust values; its value is varied from 0.2 to 0.9. The effect of changing 'v' on the aggregated data is plotted on the graph.
CONCLUSION
In wireless sensor networks, compromised sensor nodes can distort the integrity of aggregated data by sending false data reports and injecting false data during data aggregation. Since cryptographic solutions [7] are not sufficient to prevent these attacks, general reputation-based trust systems have been proposed in the literature. This paper has presented a novel reliable data aggregation and transmission protocol (RDAT) that introduces the functional reputation concept. The simulation results show that, in comparison with general reputation, protocol RDAT improves the security and reliability of the aggregated data by using the functional reputation concept.
REFERENCES
[1] "Wireless Sensor Technologies and Applications", School of Software, Dalian University of Technology, Dalian 116620, China, published 4 November (2009).
[2] Suat Ozdemir, "Functional Reputation Based Data Aggregation for Wireless Sensor Networks", IEEE International Conference on Wireless & Mobile Computing, Networking & Communication, (2008).
[3] Tamara Pazynyuk, JiangZhong Li, George S. Oreku, "Reliable Data Aggregation Protocol for Wireless Sensor Networks", IEEE (2008).
[4] R. Rajagopalan and P.K. Varshney, "Data aggregation techniques in sensor networks: A survey", IEEE Communications Surveys and Tutorials, vol. 8, no. 4, 4th Quarter (2006).
[5] Hong Luo, Qi Li, Wei Guo, "RDA: Data Aggregation Protocol for WSNs", Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, IEEE (2006).
[6] Sang-ryul Shin, Jong-il Lee, Jang-woon Baek, Dae-wha Seo, "Reliable Data Aggregation Protocol for Ad-hoc Sensor Network Environments", IEEE (2006).
[7] Yi Yang, Xinran Wang, Sencun Zhu, and Guohong Cao, "SDAP: A Secure Hop-by-Hop Data Aggregation Protocol for Sensor Networks", Department of Computer Science & Engineering, The Pennsylvania State University, ACM (2006).
[8] K. Wu, D. Dreef, B. Sun, and Y. Xiao, "Secure data aggregation without persistent cryptographic operations in wireless sensor networks", Ad Hoc Networks, vol. 5, no. 1, pp. 100-111, (2007).
[9] H. Çam, S. Ozdemir, P. Nair, D. Muthuavinashiappan, and H.O. Sanli, "Energy-efficient and secure pattern based data aggregation for wireless sensor networks", Special Issue of Computer Communications on Sensor Networks, Feb. (2006).
ISSN No.:0975-3389
NEW ADVANCED INTERNET MINING TECHNIQUES
Narendra Kumar Tyagi*
Assistant Professor, Department of Computer Science & Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Abhilasha Vyas
Assistant Professor, Department of Computer Science & Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Dr. S.V. Nair
Professor & HOD, Department of Computer Science & Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
______________________________________________________________________________________________________________________________
ABSTRACT:
Internet mining is the application of data mining techniques to discover patterns from the Internet. It is the process of mining Internet access logs or other information using browsing and access patterns on web localities. Despite its critical role, the management of this data, from collection and transmission to storage and its use within applications, remains disconcertingly ad hoc. Internet measurement data provides the foundation for the operation and planning of the networks comprising the Internet and is a necessary component of research for analysis, simulation, and emulation. This paper examines several of the challenges faced in collecting and archiving large volumes of network measurement data. It also outlines an architecture for an Internet data repository, designed to create a framework for collaboratively addressing these challenges.
Keywords: Data Mining, Networks, Data repository, Internet Usage Mining, Internet Content Mining, OLAP
1. INTRODUCTION
Communication networks produce vast amounts of data that both help network operators manage and plan their networks and enable researchers to study a variety of network characteristics. With the huge amount of information available online, the World Wide Web has become a major area for data mining research [1]. On short time scales, the data can facilitate network monitoring, troubleshooting, and reactive routing [9]. Over minutes or days, the data are useful for network traffic engineering, where operators attempt to shift the flow of traffic away from over-utilized links onto less busy paths. Over long durations, the data are useful for capacity planning and help researchers perform longitudinal studies. Network data are also vital for analysis, simulation, and emulation. Despite its critical role, the management of this data, from collection and transmission to storage and its use within applications, remains disconcertingly ad hoc, relying on techniques created and re-created by each corporation or researcher. This paper examines several challenges in collecting and archiving large volumes of network measurement data, and outlines an architecture for an Internet data repository designed to create a framework for collaboratively addressing these challenges. This paper assumes that the data repository will catalyze sharing of tools, formats, and data among researchers and operators. Two Internet processes are considered for the main research work in this paper: (i) Internet mining and (ii) Internet usage mining. Internet measurement data provides the foundation for the operation and planning of the networks comprising the Internet and is a necessary component of research for analysis, simulation, and emulation. Internet mining search tools face several problems [12]; Internet mining techniques can be used to address these critical problems even though they are not a direct tool for solving them.
2. METHODOLOGY
2.1 Internet content mining. Internet content mining is the process of discovering useful information from the content of Internet pages. The content may consist of text, image, audio or video data on the web. Internet content mining [2] is sometimes called Internet text mining, because text content is the most widely researched area.
*Corresponding Author
The study of the book by Mitchell [3] and of research papers reveals a list of methods, as given in Table 1:
The technologies used in Internet content mining are NLP (natural language processing) and IR (information retrieval). Some emerging tools and techniques are:
2.1.1 Agent-Based Approach: involves the development of sophisticated AI systems that can act autonomously or semi-autonomously on behalf of the user to discover and organize Internet-based information.
2.1.2 Intelligent Search Agents: search for relevant information using characteristics of a particular domain or user profile to organize and interpret the discovered information. Examples are agents like Harvest, FAQ-Finder, Information Manifold, OCCAM and ParaSite. They rely on pre-specified information about particular types of documents to retrieve and interpret documents. Other agents, such as ShopBot and ILA (Internet Learning Agent), attempt to interact with and learn the structure of unfamiliar information sources. ShopBot retrieves product information from vendor sites using only general information; ILA learns models of information sources and translates them into its own internal concepts.
2.1.3 Information Filtering/Categorization: web agents use various information retrieval techniques and characteristics of open hypertext web documents to automatically retrieve, filter, and categorize them. For example, HyPursuit uses semantic information in link structures and documents, while BO (Bookmark Organizer) combines hierarchical clustering techniques and user interaction to organize a collection of Internet documents based on conceptual information.
2.1.4 Personalized Internet Agents: these obtain or learn user preferences and discover Internet information sources that correspond to those preferences. For example, WebWatcher and Syskill & Webert are systems that utilize a user profile and learn to rate Internet pages using Bayesian classification.
2.1.5 Database Approach: this approach focuses on techniques for integrating and organizing the heterogeneous and semi-structured data on the Internet into a more structured, high-level collection of resources. Examples include the work of Han et al., Khosla, King & Novak, and Araneus.
2.2 Internet-usage mining:
Internet usage mining is the application of data mining techniques to analyse and discover interesting patterns in users' usage data on the web. The usage data records the user's behavior when the user browses or makes transactions on the web site. It is an activity that involves the automatic discovery of patterns from one or more web servers. Organizations often generate and collect large volumes of data; most of this information is generated automatically by web servers and collected in server logs. Analyzing such data can help these organizations determine the value of particular customers, cross-marketing strategies across products, the effectiveness of promotional campaigns, and so on.
3. PROPOSED METHODOLOGY:
The first Internet analysis tools simply provided mechanisms to report user activity as recorded in the servers. Using such
tools, it was possible to determine such information as the number of accesses to the server, the times or time intervals of
visits as well as the domain names and the URLs of users of the Web server. However, in general, these tools provide little or
no analysis of data relationships among the accessed files and directories within the Web space. Now more sophisticated
techniques for discovery and analysis of patterns are emerging. These tools fall into two main categories: Pattern Discovery
Tools and Pattern Analysis Tools.
Fig: Internet Access. A server's user activity feeds an Internet analysis tool, which reports (1) the number of accesses to the server, (2) the times or time intervals of visits, and (3) the domain names and URLs of the Web server's users.
Another interesting application of Internet usage mining is web link recommendation. One of the latest trends is the online monitoring of page accesses to render personalized pages on the basis of similar visit patterns. It involves the automatic discovery of user access patterns from one or more Internet servers. Most of the information is generated automatically by Internet servers and collected in server access logs. Other sources of information are referrer logs, which contain information about referring pages.
3.1 More sophisticated system and techniques:
3.1.1 Pattern Analysis Tools and Pattern Discovering Techniques:
It is necessary to develop a new framework to enable the mining process because of the many unique characteristics of the client-server model in the WWW.
3.1.2 Data cleaning:
Techniques to clean a server log eliminate irrelevant items. Elimination of irrelevant items can be accomplished by checking the suffix of the URL name; for example, all log entries with filename suffixes such as gif, jpeg, JPG and map can be removed.
Mechanisms such as local caches and proxy servers can severely distort the overall picture of user traversals through a web site. A page that is listed only once in an access log may in fact have been referenced many times by multiple users. This problem is overcome by using cookies, cache busting and explicit user registration.
3.1.3 Analysis of discovered patterns:
Administrators are interested in questions like:
• How are people using the site?
• Which pages are being accessed most frequently?
These questions require analysis of the structure of hyperlinks as well as the contents of pages. The end products of such analysis include:
• The frequency of visits per document.
• The most recent visit per document.
• Who is visiting which document.
• The frequency of use of each hyperlink.
• The most recent use of each hyperlink.
We observed the following techniques and tools for pattern discovery:
Visualization Techniques:
Pitkow developed the WebViz system for visualizing WWW access patterns, for which the web-path paradigm (web paths) was proposed.
OLAP Techniques:
OLAP can be performed directly on top of relational databases. An information model, DATACUBE, and techniques for its implementation have been developed.
Data & Knowledge Querying:
Relational database technology provides a high-level query language that allows applications to express the conditions to be satisfied.
Usability Analysis:
Research in human-computer interaction (HCI) has developed a systematic approach to usability studies by adapting the experimental method of computation.
CONCLUSION
There is a growing need to develop more tools and techniques to improve the usefulness of the Internet. The different kinds of techniques and tools encompass a broad range of issues and mean different things to different people. Though the role of the Internet is critical, the management of its measurement data, from collection and transmission to storage and its use within applications, remains disconcertingly ad hoc. Internet measurement data provides the foundation for the operation and planning of the networks comprising the Internet and is a necessary component of research for analysis, simulation, and emulation. This paper has examined several challenges in collecting and archiving large volumes of network measurement data, and has outlined an architecture for an Internet data repository (datapository) designed to create a framework for collaboratively addressing these challenges. This paper assumes that the datapository will catalyze sharing of tools, formats, and data among researchers and operators.
REFERENCES
[1] Cooley, R., Mobasher, B. and Srivastava, J. "Web Mining: Information and Pattern Discovery on the World Wide Web", In Proceedings of the 9th IEEE International Conference on Tools with Artificial Intelligence, (1997).
[2] J. Carbonell, M. Craven, S. Fienberg, T. Mitchell, and Y. Yang. "Report on the CONALD workshop on learning from text and the web", In CONALD Workshop on Learning from Text and the Web, June (1998).
[3] T. Mitchell. "Machine Learning", McGraw Hill, (1997).
[4] Baraglia, R., Silvestri, F. "Dynamic personalization of web sites without user intervention", In Communications of the ACM 50(2):63-67 (2007).
[5] Cooley, R., Mobasher, B. and Srivastava, J. "Data Preparation for Mining World Wide Web Browsing Patterns", Journal of Knowledge and Information Systems, Vol. 1, Issue 1, pp. 5-32, (1999).
[6] Kohavi, R., Mason, L. and Zheng, Z. "Lessons and Challenges from Mining Retail E-commerce Data", Machine Learning, Vol. 57, pp. 83-113 (2004).
[7] Lillian Clark, I-Hsien Ting, Chris Kimble, Peter Wright, Daniel Kudenko, "Combining ethnographic and clickstream data to identify user Web browsing strategies", Journal of Information Research, Vol. 11, No. 2, January (2006).
[8] Eirinaki, M., Vazirgiannis, M. "Web Mining for Web Personalization", ACM Transactions on Internet Technology, Vol. 3, No. 1, February (2003).
[9] P. Atzeni and G. Mecca. In Proceedings of the Sixteenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, May 12-14, Tucson, Arizona, pp. 144-153, ACM Press, (1997).
[10] Mobasher, B., Cooley, R. and Srivastava, J. "Automatic Personalization based on Web Usage Mining", Communications of the ACM, Vol. 43, No. 8, pp. 142-151 (2000).
[11] Mobasher, B., Dai, H., Kuo, T. and Nakagawa, M. "Effective Personalization Based on Association Rule Discovery from Web Usage Data", In Proceedings of WIDM 2001, Atlanta, GA, USA, pp. 9-15 (2001).
[12] S. Chakrabarti. "Data Mining for Hypertext: A Tutorial Survey", ACM SIGKDD Explorations, 1(2): 1-11, (2000).
ISSN No.:0975-3389
SWARM INTELLIGENCE
REVOLUTIONIZING NATURAL TO ARTIFICIAL SYSTEMS
Aditya Gaba*
VIII Semester, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail- [email protected]
Dr. H.S. Dua
Professor & HOD, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
________________________________________________________________________________________________________________
ABSTRACT
“Swarm Intelligence” is the term used to denote Artificial Intelligence systems where the collective behavior of simple agents causes coherent solutions or patterns to emerge. This has applications in swarm robotics. A swarm intelligence system is made up of a population of unsophisticated agents interacting with their environment and each other. Because there is no set of global instructions on how these units act, the collective interactions of all the agents within the system often lead to some sort of collective behavior or intelligence. This type of artificial intelligence is used to explore distributed problem solving without a centralized control structure, and is seen as a better alternative to centralized, rigid and preprogrammed control.
Keywords: Stigmergy, social insects, pheromone, self organization
____________________________________________________________________________________________________________________________
1. INTRODUCTION
Swarm Intelligence (SI) is a type of Artificial Intelligence (AI) based on the collective behavior of decentralized, self-organized systems. SI systems are typically made up of a population of simple agents interacting locally with one another and
with their environment. The agents follow very simple rules, and although there is no centralized control structure dictating
how individual agents should behave, local, and to a certain degree random, interactions between such agents lead to the
emergence of "intelligent" global behavior, unknown to the individual agents.
Natural examples of SI include ant colonies, bird flocking, animal herding, bacterial growth, and fish schooling.
The basic principles followed in the SI are:
1.1 Self organization
Self-organization is a set of dynamical mechanisms whereby structures appear at the global level of a system from
interactions of its lower-level components.
1.2 Stigmergy
Stigmergy: stigma (sting) + ergon (work) = ‘stimulation by work’
The application of swarm principles to robots is called swarm robotics, while 'swarm intelligence' refers to the more general
set of algorithms. 'Swarm prediction' has been used in the context of forecasting problems.
Fig. 1
*Corresponding Author
2. NEED FOR SWARM INTELLIGENCE
There are various reasons to adopt Swarm Intelligence in everyday computing.
Firstly, almost every model we use in day-to-day life is centralized. Currently, the general publication paradigm is centralized. By the general publication model we mean the Wall Street papers, CNN, etc., which distribute information widely. Basically, they have a small set of editors making all decisions about what content to publish. The paradigm is:
Fig. 2
Here, the editors at the top decide what we read today. What we read is based on their perspective, their collective information and their experiences. This is where the paradigm fails: we are limited to the information the editors choose, which limits how widely information spreads.
Solution: the above paradigm is now being challenged by a new paradigm enabled by modern devices and social networks, through which everybody is connected. Another example: when we are texting all the time, we are interconnected, and this paves the way for swarm intelligence to be used in a useful and meaningful way.
Fig. 3
So why is it better? Now you are getting information from a wide group instead of one or two editors. In this paradigm, swarm intelligence acts like an editor and decides what to publish and what not to.
2.1 Some problems cannot be tackled by traditional techniques
Though the computer revolution changed human societies in communication, transportation, industrial production, administration, etc., there are drawbacks that cannot be overlooked. Traditional techniques require problems to be well defined, fairly predictable, and computable in reasonable time on serial computers. Problems such as action-response planning (e.g. chess playing) and hard prediction problems (e.g. autonomous robotics) do not fit these requirements.
We therefore need an alternative that is:
• Decentralized
• Self-organized
• Redundant
3. HOW ARE NATURAL SYSTEMS USEFUL?
Analogies in IT and Social Insects
• distributed system of interacting autonomous agents
• goals: performance optimization and robustness
• self-organized control and cooperation (decentralized)
• division of labour and distributed task allocation
Boggled???
Let’s take an example from Natural System: ANTS (social insects)
Why are ants interesting?
• ants solve complex tasks by simple local means
• ant productivity is better than the sum of their single activities
• ants are ‘grand masters’ in search and exploitation
What mechanisms do we learn from these ants?
• cooperation and division of labour
• adaptive task allocation
• work stimulation by cultivation
• pheromones
Here comes the concept of Self Organization
3.1 Self Organisation
‘Self-organization is a set of dynamical mechanisms whereby structures appear at the global level of a system from
interactions of its lower-level components.’
The four bases of self-organization
• positive feedback (amplification)
• negative feedback (for counter-balance and stabilization)
• amplification of fluctuations (randomness, errors, random walks)
• multiple interactions
Self Organisation by ants
Cooperative Search by Pheromone Trails
STEP 1
2 ants start with equal probability of going on either path
STEP 2
The next ant takes the shorter route. The density of pheromone on the shorter path is higher because of 2 passes by the ant (as
compared to 1 by the other).
STEP 3
Over many iterations, more ants begin using the path with higher pheromone, thereby further reinforcing it.
STEP 4
After some time, the shorter path is almost exclusively used.
TERMITE (SOCIAL INSECT) SELF ORGANIZATION
Fig. 4
4. STIGMERGY “Stimulation by work”
Stigmergy is a mechanism of spontaneous, indirect coordination between agents or actions, where the trace left in
the environment by an action stimulates the performance of a subsequent action, by the same or a different agent. Stigmergy
is a form of self-organization. It produces complex, apparently intelligent structures, without need for any planning, control,
or even communication between the agents. As such it supports efficient collaboration between extremely simple agents, who
lack any memory, intelligence or even awareness of each other. Stigmergy is widely used in swarm robotics.
In simpler terms, it is stimulation by work. It is the second basic principle of Swarm Intelligence.
It is based on:
• work as a behavioral response to the environmental state
• an environment that serves as a work-state memory
• work that does not depend on specific agents
Stigmergy in termite nest building
Fig. 5
5. SWARM ROBOTICS
Modifying the robotics
Some tasks may be inherently too complex or impossible for a single robot to perform, such as carrying large objects. Using several robots can therefore increase speed. The robots are simple and more flexible, with no need to reprogram them, and hence reliable and fault tolerant: one of the several robots may fail without affecting task completion, although completion time may be affected by such a perturbation, unlike a single powerful complex robot.
Relatively simple individual rules can produce a large set of complex swarm behaviors. A key component is the communication between the members of the group, which builds a system of constant feedback. The swarm behavior involves constant change of individuals in cooperation with others, as well as the behavior of the whole group.
Unlike distributed robotic systems in general, swarm robotics emphasizes a large number of robots and promotes scalability, for instance by using only local communication. Local communication can be achieved, for example, by wireless transmission systems such as radio frequency or infrared.
Fig. 6
Swarm robots are a group of small robots where each member of the group performs a small task, almost like an assembly
line. Their combined efforts usually add up to a big task being completed. Since it is a lot easier to program several robots
with simple tasks than it is to program one robot with a complex task, the idea of swarm robots certainly seems plausible.
Both miniaturization and cost are key factors in swarm robotics. These are the constraints in building large groups of robots; therefore the simplicity of the individual team member should be emphasized. This motivates a swarm-intelligent approach to achieve meaningful behavior at the swarm level instead of the individual level.
Potential applications for swarm robotics include tasks that demand miniaturization (nanorobotics, microrobotics), like distributed sensing tasks in micromachinery or the human body. On the other hand, swarm robotics can be suited to tasks that demand cheap designs, for instance mining tasks or agricultural foraging tasks. Some artists also use swarm robotic techniques to realize new forms of interactive art.
6. FACTORS FOR SUCCESS AND FAILURE
Factors for current success of collective robots
• The classical Artificial Intelligence (AI) program failed because it relied upon 'classical' robots. Swarm-based robotics relies on the anti-classical AI idea that a group of robots may be able to perform without explicit representation of the environment and of the other robots; planning is thereby replaced by reactivity.
• Remarkable progress in hardware during the last decade has allowed many researchers to experiment with real robots, which have not only become more efficient and capable of performing tasks, but also cheaper.
Factor for lack of success
• Swarm-intelligent robots are hard to program.
Possible solution
• One path consists of studying how social insects collectively perform some specific tasks, modeling their behavior and using the model as a basis upon which artificial variations can be developed, either by tuning model parameters beyond the biologically relevant range or by adding non-biological features to the model.
7. FUTURE SCOPE
• Aerospace technology
• Environmental robots
• Industrial applications
• Ship maintenance and ocean cleaning
• Surgery
CONCLUSION
• Provides heuristics to solve difficult problems
• Has been applied to a wide variety of applications
• Can be used in dynamic applications
• Analytic proofs and models of swarm-based algorithms remain topics of ongoing research
Fig. 7
REFERENCES
[1] Reynolds, C. W. "Flocks, Herds, and Schools: A Distributed Behavioral Model", Computer Graphics, 21(4) (SIGGRAPH '87 Conference Proceedings), pages 25-34, (1987).
[2] James Kennedy, Russell Eberhart. "Particle Swarm Optimization", IEEE Conf. on Neural Networks, (1995).
[3] www.adaptiveview.com/articles/ipsop1
[4] M. Dorigo, M. Birattari, T. Stutzle, "Ant colony optimization: Artificial Ants as a computational intelligence technique", IEEE Computational Intelligence Magazine, (2006).
[5] Ruud Schoonderwoerd, Owen Holland, Janet Bruten, "Ant-like agents for load balancing in telecommunication networks", Adaptive Behavior, 5(2), (1996).
ISSN No.:0975-3389
“STEM CELL” FUTURE OF CANCER THERAPY
Jyotsna*
VI Semester, Department of Bio-medical Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Subhransh Pandey
VI Semester., Department of Bio-medical Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Dr. D.P. Singh
Professor & HOD, Department of Bio-Medical Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
______________________________________________________________________________________________________________________________
ABSTRACT
For over 30 years, stem cells have been used in the replenishment of blood and immune systems damaged by cancer cells or during treatment of cancer by chemotherapy or radiotherapy. Apart from their use in immuno-reconstitution, stem cells have been reported to contribute to tissue regeneration and to serve as delivery vehicles in cancer treatments. The recent concept of 'cancer stem cells' has directed scientific communities towards a wide new area of research and possible future treatment modalities for cancer. The aim of this review is primarily to focus on recent developments in the use of stem cells in cancer treatments, and then to discuss cancer stem cells, now considered a backbone in the development of cancer, their role in carcinogenesis, and their implications for the development of possible new cancer treatment options in future.
Keywords: cancer cells, stem cells, embryonic stem cells (ESC), adult stem cells (ASC), mesenchymal stem cells (MSC)
______________________________________________________________________________________________________________________________
1. INTRODUCTION
Cancer is among the most common causes of mortality and morbidity in India. Despite recent advances in cancer treatment, clinical outcomes still fall far short of expectations. The use of stem cells in immuno-modulation or immune reconstitution is one method that has been employed for decades in cancer therapy. Stem cells have self-renewal capacity, a high replicative potential and the capacity for multilineage differentiation.
Stem cells can be divided into three main categories: embryonic, germinal and somatic. Embryonic stem cells (ESCs) originate from the inner cell mass of the blastocyst. ESCs are pluripotent and have an indefinite replicative life span, which is attributable to their telomerase expression. Germinal stem cells are derived from the primary germinal layers of the embryo; they differentiate into progenitor cells that produce specific organ cells. Somatic (adult) stem cells are progenitor cells: they are less potent and have a shorter replicative life span than ESCs. They exist in mature tissues such as haematopoietic, neural, gastrointestinal and mesenchymal tissues. The adult stem cells (ASCs) most commonly derived from bone marrow are haematopoietic stem cells (HSCs) and other primitive progenitor cells, including mesenchymal stem cells (MSCs) and multipotent adult progenitor cells (MAPCs). MicroRNA expression has been reported as a requisite for bypassing the G1/S checkpoint, and thus for the self-renewal characteristic of stem cells. Figure 1 shows the hierarchy of stem cells with cell determination and differentiation. In this review, we highlight the potential of adult stem cells in cancer treatment and also focus on the new concept of the cancer stem cell.
*Corresponding Author
Fig. 1: Hierarchy of stem cells with cell determination, differentiation and maturation
2. THE CHOICE OF STEM CELLS FOR CANCER THERAPIES
Ideally, ESCs would be the source of stem cells for therapeutic purposes, given their greater potency and indefinite life span compared to ASCs, which have lower potency and a restricted life span. However, the use of ESCs raises ethical constraints (Department of Health, UK; National Institutes of Health; International Society for Stem Cell Research), and their use for research and therapeutic purposes is restricted or prohibited in many countries throughout the world. In addition, stem cells of higher potency have been shown to be more tumorigenic in mice. Thus, for their ease of availability, easier accessibility and lesser ethical constraints, ASCs are the stem cells most commonly used for research and therapeutic purposes. According to the literature, ASCs from bone marrow (HSCs and MSCs) are the most commonly studied stem cells. MSCs support HSCs in the bone marrow and have the ability to differentiate, both in vivo and in vitro, into different mesenchymal cell types such as bone, cartilage, fat, muscle, tendon and marrow stroma.
2.1 STEM CELL SOURCES
ESCs are derived from 5-day-old pre-implantation human embryos; their derivation, however, carries the risk of destroying the embryo. ASCs can be obtained from many tissues, including bone, synovium, deciduous teeth, adipose tissue, brain, blood vessels, blood and umbilical cord blood. For legal and ethical reasons, the use of ESCs is restricted in research and clinical fields, and ASCs remain the main source of stem cells. Although ASCs can be obtained from various sites, the ideal source is yet to be found. Most commonly, ASCs are acquired from the bone marrow and peripheral blood. Bone marrow (BM) aspiration is one of the common procedures performed to obtain ASCs, but it is associated with morbidity in the form of wound infection and septic complications. ASCs can also be obtained from adipose tissues such as abdominal fat and infra-patellar fat, a less invasive and less morbid procedure than bone marrow aspiration. It has been shown that there is no significant difference in cell growth kinetics, cell senescence, gene transduction of adherent stromal cells or yield between stem cells obtained from bone marrow and from adipose tissue. Peripheral blood also provides a safe and easily accessible route for isolating ASCs with minimal morbidity; ASCs from peripheral blood have been shown to induce more T and NK (natural killer) cells than bone marrow ASCs. Recently, stem cells have reportedly been obtained from amniotic fluid without any harm to mother or embryo.
2.2 CHOICE OF STEM CELL: BONE MARROW OR PERIPHERAL BLOOD
The source of stem cells is most commonly either the bone marrow or the peripheral blood. Bone marrow aspiration is invasive and is associated with potential complications including fracture, wound infection and sepsis, while the procedure for isolating peripheral blood stem cells (PBSCs) is much less invasive and less morbid. PBSCs have also been shown to induce higher numbers of CD4 T and NK cells than stem cells obtained from the bone marrow. Thus, peripheral blood is considered the preferred source of stem cells, although various clinical trials comparing PBSCs and BM stem cells have reported conflicting conclusions. The incidence of graft-versus-host reaction has also been noted to differ between PBSCs and BM stem cells. Double stem cell transplantation has been documented to improve overall survival compared to single stem cell transplantation. Granulocyte colony-stimulating factor (G-CSF) helps in the proliferation and differentiation of haematopoietic progenitor cells; it has also been reported to mobilise autologous peripheral blood stem cells and to preserve and increase telomere length. Various agents have been shown to enhance G-CSF activity in mobilising stem cells, including paclitaxel and docetaxel, recombinant human thrombopoietin, lithium and recombinant methionyl human stem cell factor (r-metHuSCF).
Table 1: Published human clinical trials comparing outcomes following isolation of stem cells from bone marrow vs peripheral blood.

TYPE OF CANCER | NO. OF PATIENTS | CONCLUSION
Haematological malignancies | 140 | PBSC yielded higher lymphocyte subset counts and was associated with fewer infections.
Intermediate- and high-grade non-Hodgkin's lymphoma | 12 | The CD4:CD8 and CD45RA:CD45RO ratios were higher in the PBSC group. Accelerated reconstitution of NK cell activity following PBSC compared to BM.
Chronic myeloid leukaemia | 116 | No statistically significant difference in acute and chronic GVHD, OS and disease-free survival.
Haematological malignancies | 61 | Statistically significant enhanced graft-versus-leukaemia effect in allogeneic PBSC.
Haematological malignancies | 228 | Faster haematological recovery and improved survival with PBSC but no difference in GVHD.

Abbreviations: PBSC – peripheral blood stem cells; NK – natural killer; CD8, CD4, CD45RA & CD45RO – different types of T cells; HLA – human leucocyte antigen; GVHD – graft-versus-host disease; OS – overall survival.
3. ROLE OF PURGING IN ISOLATION OF STEM CELLS
3.1 The isolation of stem cells from an allogeneic donor is the most preferred method; however, only about 30% of candidates are eligible, owing to the lack of donors and to age restrictions. Stem cells from an autologous source are easily available, but they carry the risk of coexistence of normal haematopoietic progenitors with their malignant counterparts and may lead to relapse of the cancer. In patients with breast cancer, PBSC transplantation has been associated with rapid and sustained haematopoietic engraftment and has been shown to be less contaminated than bone marrow stem cells; there was, however, no overall improvement in survival outcome.
3.2 Contamination of retrieved stem cells with tumour cells has been reported as a major problem by many studies, although its effect on clinical cell therapy has been less problematic. Purging procedures are used in an attempt to remove these contaminating cancer cells from the stem cells. Table 2 shows published clinical trials of various in vitro and in vivo techniques to purge stem cells, such as the use of monoclonal antibodies, continuous-flow immunoadsorption, dielectrophoretic field-flow fractionation, rituximab, pulsed electric fields and hyperthermia. Amifostine has been shown to protect normal haematopoietic progenitor cells from damage by the alkylating agents used for purging stem cells. A double procedure using 'positive CD34' and 'negative CD19' selection is reported to be better than a single procedure in poor-prognosis lymphoproliferative disorders, but it is associated with an increased risk of life-threatening infections.
Table 2: Published clinical trials with various in vitro and in vivo stem cell purging techniques.

IN VIVO/IN VITRO | TYPE OF CANCER (CELLS) | PURGING TECHNIQUE | CONCLUSION
IV & IT | Multiple myeloma | Two-step negative selection procedure with a combination of monoclonal antibodies | Safe procedure for purging stem cells.
IV | Breast cancer | WR-2721 (amifostine) added to 4-hydroperoxycyclophosphamide | Reduced time to engraftment.
IT | B-cell lymphoma | Rituximab | Rituximab can be used in stem cell purging.
IT | Acute myeloid leukaemia | Hyperthermia | Promising method for stem cell purging.

Abbreviations: IT – in vitro, IV – in vivo.
4. CANCER STEM CELL
4.1 Why does a tumour not respond to treatment? Why do tumours recur? Why do cancer cells develop resistance to treatment? These and many other questions may be answered by the new concept of "cancer stem cells". Cancer stem cells can be defined as cells within a tumour that possess tumour-initiating potential. Normal stem cells are characterised by three properties: (1) the capability of self-renewal; (2) strict control of stem cell numbers; and (3) the ability to divide and differentiate to generate all the functional elements of a particular tissue. In contrast to normal stem cells, cancer stem cells are believed to lack control over cell numbers. Cancer stem cells form only a very small fraction of the whole tumour, yet they are said to be responsible for the growth of the tumour.
Fig. 2
4.2 It is well known that, in order to induce a tumour in an animal model, hundreds of thousands of cancer cells need to be injected. This has been attributed to limitations of the assay in supporting tumour growth, or to a deficiency in tumour formation. The concept of cancer stem cells suggests another explanation: high numbers of cancer cells are needed to maximise the probability of injecting cancer stem cells into the animal model. At present, shrinkage of a tumour is considered a response to treatment; however, a tumour often shrinks in response to treatment only to recur again. Cancer stem cells may explain this: treatment targeting the bulk cancer cells may fail to target the cancer stem cells.
4.3 A fundamental problem in cancer is the identification of the cell type capable of sustaining neoplastic growth. There is evidence that the majority of cancers are clonal and that cancer cells represent the progeny of one cell; however, it is not clear which cells possess the tumour-initiating cell (TIC) function (cancer stem cells) and how to recognise them. Though the idea of cancer stem cells is considered a new concept, it was first proposed almost 35 years earlier, in 1971, when such cells were called leukaemic stem cells. A small subset of cancer cells capable of extensive proliferation in leukaemia and multiple myeloma was found and named leukaemic stem cells (LSCs). Two possibilities were proposed: either all leukaemia cells had a low probability of proliferation and therefore all leukaemia cells behaved as LSCs, or only a small subset was clonogenic. The latter theory was favoured by Dick and colleagues, who were able to isolate LSCs as CD34+CD38- cells from patients' samples. Despite being small in number (0.2%), these were the only cells capable of transferring acute myeloid leukaemia from patients to NOD-SCID (non-obese diabetic, severe combined immunodeficiency) mice.
Fig. 3
4.4 Recently, cancer stem cells have also been demonstrated in solid tumours such as breast cancer and brain tumours. Like normal stem cells, cancer stem cells have been shown not only to be capable of self-renewal but also to generate a wide spectrum of progeny. In paediatric brain tumours, including medulloblastomas and gliomas, a subset of cells, called neurospheres, has been shown to have self-renewal capability. Under conditions that promote differentiation, these neurospheres gave rise to neurones and glia in proportions that reflect those in the tumour.
5. ORIGIN OF CANCER STEM CELLS
5.1 The cancer stem cell concept may answer some of the questions related to cancer growth; however, the origin of cancer stem cells is yet to be defined. To recognise the origin of cancer stem cells, two important factors need to be considered: (1) a number of mutations are required for a cell to become cancerous; and (2) a stem cell needs to overcome the genetic constraints on both self-renewal and proliferation. It is unlikely that all the necessary mutations could occur within the lifespan of a progenitor or mature cell. Therefore, cancer stem cells should be derived either from self-renewing normal stem cells or from progenitor cells that have acquired the ability to self-renew through mutation.
Fig. 4: A simplified model of the suggested hypotheses about the origin of cancer stem cells. Cancer stem cells may develop when self-renewing normal stem cells acquire mutations and are transformed by the alteration of proliferative pathways alone. It is also possible that cancer stem cells originate through multiple oncogenic mutations in restricted progenitor cells, which then acquire the capability of self-renewal.
The hypothesis that cancer stem cells are derived from normal stem cells rather than from more committed progenitor cells has been addressed in AML, where leukaemia-initiating cells (LICs) from various subtypes of AML at different stages of differentiation have been shown to share the same cell-surface markers as normal haematopoietic stem cells. However, some studies have suggested that cancer stem cells can be derived from normal stem cells as well as from committed, short-lived progenitors, giving rise to tumours with comparable latencies, phenotypes and gene expression profiles. In solid tumours, the lack of markers with which to characterise tumour-initiating cells (TICs) has made it difficult to study the origins of cancer stem cells.
5.2 However, cell-surface markers have been identified in the lung, brain and prostate which may allow the separation of the stem or progenitor cells with tumour-initiating function.
6. IMPLICATIONS FOR CANCER TREATMENT
6.1 At present, cancer treatment is targeted at the tumour's proliferative potential and its ability to metastasise; hence the majority of treatments are directed at rapidly dividing cells and at molecular targets representing the bulk of the tumour. This may explain the failure of treatments to eradicate the disease, and the recurrence of cancer. Although current treatments can shrink the tumour, these effects are transient and usually do not improve the patient's survival. For tumours in which cancer stem cells play a role, three possibilities exist. First, the mutation of normal stem cells or progenitor cells into cancer stem cells can lead to the development of the primary tumour. Second, during chemotherapy most of the primary tumour cells may be destroyed, but if the cancer stem cells are not eradicated they persist as refractory cancer stem cells and may lead to recurrence of the tumour. Third, cancer stem cells may migrate to sites distant from the primary tumour and cause metastasis. In theory, identification of cancer stem cells may allow the development of treatment modalities that target cancer stem cells rather than the rapidly dividing cells of the cancer; this may cure the cancer, as the remaining cells in the tumour have limited proliferative capability. Conversely, if cytotoxic agents spare TICs, the disease is more likely to relapse. TICs have been shown to have different sensitivities to chemotherapeutic agents; for example, TICs in leukaemia are less sensitive to daunorubicin and cytarabine.
Fig. 5: Conventional therapies may shrink the size of the tumour; by contrast, therapies directed against the cancer stem cells are more effective in eradicating the tumour.
6.2 Although the idea of therapies focused on cancer stem cells may look exciting, targeting cancer stem cells may not be easy. Cancer stem cells are relatively quiescent compared to other cancer cells and do not appear to have hyper-proliferation signals, such as tyrosine kinases, activated. This makes cancer stem cells resistant to the toxicity of anti-cancer drugs, which traditionally target rapidly dividing cells. In addition, the tumour suppressor gene PTEN, the polycomb gene Bmi1, and signal transduction pathways such as Sonic Hedgehog (Shh), Notch and Wnt, all crucial for normal stem cell regulation, have been shown to be deregulated in the process of carcinogenesis. These deregulated signalling pathways and gene expression patterns may affect the response to cancer therapy. One approach to targeting cancer stem cells may be the identification of markers specific to cancer stem cells as distinct from normal stem cells; for example, haematopoietic stem cells express Thy-1 and c-kit, whereas leukaemic stem cells express the IL-3 (interleukin-3) receptor α-chain.
6.3 Much research is now focused on targeting the genes and pathways crucial for cancer development through cancer stem cells, with possible therapies directed against TICs. One example is the use of Gleevec® in chronic myeloid leukaemia, which targets the ATP-binding domain of the Abl kinase; most patients achieved complete cytogenetic responses, although the therapy may not be curative owing to the reported persistence of the fusion transcript. A comparison of the pathways that regulate stem cell homing with those responsible for metastasis may prove useful in minimising the toxic effects of drugs. Treatment of mice with a Hedgehog (Hh) pathway inhibitor such as cyclopamine inhibits the growth of medulloblastomas in mouse models without apparent toxicity; the Hh pathway appears to be inactive in most normal adult tissues, which would minimise the toxicity of such inhibitors. The concept of cancer stem cells has thus opened new areas of research into carcinogenesis and future treatment options.
CONCLUSION AND FUTURE PROSPECTS
Cancer therapy has entered an exciting new era, with traditional therapies such as chemotherapy, radiotherapy and surgery on one side and stem cells on the other. Apart from their well-known role in immuno-reconstitution, stem cells have attracted much attention, especially with new gene technologies such as gene incorporation into eukaryotic cells, which allows more focused delivery of anti-cancer agents. Cancer may now be considered a disorder of cancer stem cells rather than simply of rapidly growing cells. Although the origin of cancer stem cells is yet to be defined, the concept may open new treatment options in the possible cure of cancer. However, further research is required to identify and separate the cancer stem cells of various cancers from normal stem cells and other cancer cells. Further work is also required to delineate the genes and signalling pathways by which carcinogenesis proceeds from cancer stem cells, so that new therapies can be developed with the eventual goal of eliminating residual disease and recurrence.
REFERENCES
[1] Reya T, Morrison SJ, Clarke MF, Weissman IL: Stem cells, cancer, and cancer stem cells.
[2] Soltysova A, Altanerova V, Altaner C: Cancer stem cells.
[3] Jiang Y, Jahagirdar BN, Reinhardt RL, Schwartz RE, Keene CD, Ortiz-Gonzalez XR, Reyes M, Lenvik T, Lund T, Blackstad M, Du J, Aldrich S, Lisberg A, Low WC, Largaespada DA, Verfaillie CM: Pluripotency of mesenchymal stem cells derived from adult marrow.
[4] Kim CF, Jackson EL, Woolfenden AE, Lawrence S, Babar I, Vogel S, Crowley D, Bronson RT, Jacks T: Identification of bronchioalveolar stem cells in normal lung and lung cancer.
[5] Hatfield SD, Shcherbata HR, Fischer KA, Nakahara K, Carthew RW, Ruohola-Baker H: Stem cell division is regulated by the microRNA pathway.
[6] Thomson JA, Itskovitz-Eldor J, Shapiro SS, Waknitz MA, Swiergiel JJ, Marshall VS, Jones JM: Embryonic stem cell lines derived from human blastocysts.
[7] Serakinci N, Guldberg P, Burns JS, Abdallah B, Schrodder H, Jensen T, Kassem M: Adult human mesenchymal stem cell as a target for neoplastic transformation.
[8] Sylvester KG, Longaker MT: Stem cells: review and update.
[9] Simonsen JL, Rosada C, Serakinci N, Justesen J, Stenderup K, Rattan SI, Jensen TG, Kassem M: Telomerase expression extends the proliferative life-span and maintains the osteogenic potential of human bone marrow stromal cells.
[10] Awad HA, Wickham MQ, Leddy HA, Gimble JM, Guilak F: Chondrogenic differentiation of adipose-derived adult stem cells in agarose, alginate, and gelatin scaffolds.
[11] Lee OK, Kuo TK, Chen WM, Lee KD, Hsieh SL, Chen TH: Isolation of multipotent mesenchymal stem cells from umbilical cord blood.
[12] Miura M, Gronthos S, Zhao M, Lu B, Fisher LW, Robey PG, Shi S: SHED: stem cells from human exfoliated deciduous teeth.
[13] Sottile V, Halleux C, Bassilana F, Keller H, Seuwen K: Stem cell characteristics of human trabecular bone-derived cells.
[14] Pittenger MF, Mackay AM, Beck SC, Jaiswal RK, Douglas R, Mosca JD, Moorman MA, Simonetti DW, Craig S, Marshak DR: Multilineage potential of adult human mesenchymal stem cells.
[15] Dragoo JL, Samimi B, Zhu M, Hame SL, Thomas BJ, Lieberman JR, Hedrick MH, Benhaim P: Tissue-engineered cartilage and bone using stem cells from human infrapatellar fat pads.
[16] Huang JI, Zuk PA, Jones NF, Zhu M, Lorenz HP, Hedrick MH, Benhaim P: Chondrogenic potential of multipotential cells from human adipose tissue.
[17] De Ugarte DA, Morizono K, Elbarbary A, Alfonso Z, Zuk PA, Zhu M, Dragoo JL, Ashjian P, Thomas B, Benhaim P, Chen I, Fraser J, Hedrick MH: Comparison of multi-lineage cells from human adipose tissue and bone marrow.
POST MODERN SENSIBILITY IN JOHN UPDIKE’S WORKS
Dr. Neetu Raina Bhat*
Assistant Professor, Department of Applied Sciences and Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Dr. Sunil K. Mishra
Associate Professor, Department of Applied Sciences and Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Suchitra Deswal
Assistant Professor, Department of Applied Sciences and Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
Puneet Mehta
II Semester, Department of Mechanical Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
E-mail: [email protected]
______________________________________________________________________________________________________________________________
ABSTRACT
With the advent of the phenomenon called postmodernism, usually defined as the paradoxical aftermath of modernism, every form of writing across the globe underwent radical changes in form, content, symbolism and imagery, and above all in the very nature of the message conveyed to the reader. In the United States of America, fiction written during the 1960s, 70s, 80s and beyond came to reflect a disturbing, at times unnerving, sociological ethos, especially in the context of human relationships. Among the crop of celebrity-status American novelists who gained remarkable prominence at the beginning of the 1960s, John Updike stands out as a virtuoso who weaves in his works the ruptured fabric of modern and postmodern American society and culture. From the earliest novels, such as The Poorhouse Fair (1959), Rabbit, Run (1960) and The Centaur (1963), to later ones such as Roger's Version (1986) and S. (1988), the reader is treated to a well-orchestrated "symphony", a kind of sterile "music" emanating from a "stringless lyre".
The present research paper attempts to analyze the spectrum of ways, means and nuances which Updike employs in projecting a debased, dismembered and dehumanized cosmos of humans, who are shown living, interacting, desiring and manoeuvring their existence in ways which at best can only be labelled a tapestry of the tempting, the tasty and the tumultuous. The study leads to the conclusion that the ethos of the mass society we are experiencing these days encapsulates men and women only to make them the willing agents and instruments of sociological dismemberment, coupled with an amalgam of the demonic and the ecstatic.
Keywords: postmodernism, American society, family, disintegration, dismemberment
____________________________________________________________________________________________________________________________
1. INTRODUCTION
The novel written in the United States after the Second World War came to reflect a host of changes in content, thematic thrust, imagery and symbolism, as well as in form and structure. Such changes have often been cumulatively categorized as the contemporary writer's response to the dismantling "ethos of mass society" [1], a society which encroaches upon the lives of humans with clinical and ruthless efficiency.
1.1 The fictional works of leading American novelists who wrote their best-known works during the late 1960s, the 70s and the 80s embody within their narrative matrix a disturbing and unsettling sociological scenario. The society of modern man and his culture are projected as a human supernova on the brink of explosion and chaotic extinction. Among modern and postmodern American writers, John Updike has attained a position of eminence as a novelist who, "despite the atomism, discontinuities, of his world, no longer needs to fear the rawness of American experience; ... Barbaric frenzy and Alexandrian virtuosity constitute the extremes, not the alternatives, he faces" [2]. In fact, not only Updike but every modern or postmodern American writer, be he a novelist, a poet or a dramatist, has felt compelled, and indeed encapsulated, by a kind of quicksand society in which tradition, custom and ritual, be it the family, the institution of marriage or, above all, the man-woman relationship, have become alarmingly fractured, destabilized, even dismembered.
*Corresponding Author
2. WRITING STYLE
As an astute writer, Updike "refrains from committing himself to any of the philosophies which he presents" [3], though he admits that "he is too tired to attempt to draw philosophy from the scenes which he creates". It becomes quite obvious that, as one of America's most cerebral writers, John Updike treads a path in his works which a careful, cautious and discerning reader too has to tread in order to conceptualize the writer's sensibility, choices and preferences. Updike's distinct prose style, an essential feature of his fiction and discursive writings, is characterized by vividly descriptive passages, carefully wrought in a striking, allusive and often esoteric vocabulary, revealing an infatuation with language itself. Often placed within the realist tradition (a literary mode that favours precise description of the real world over imaginative or idealized representations), much of Updike's fiction is presided over by a wry, intelligent authorial voice that conscientiously portrays the physical world and everyday life in lucid detail.
2.1 A list of his published works indicates his prolific range and fairly rapid rate of output. The chronological sequence spans more than four decades: The Carpentered Hen and Other Tame Creatures (1958), The Poorhouse Fair (1959), The Same Door (1959), Rabbit, Run (1960), Pigeon Feathers and Other Stories (1962), The Centaur (1963), Telephone Poles and Other Poems (1963), Of the Farm (1965), Couples (1968), Rabbit Redux (1971), Rabbit Is Rich (1981), The Witches of Eastwick (1984), Roger's Version (1986), S. (1988), Self-Consciousness: Memoirs (1989), Rabbit at Rest (1990), Memories of the Ford Administration (1992), Brazil (1994), In the Beauty of the Lilies (1996), Toward the End of Time (1997), Gertrude and Claudius (2000), Seek My Face (2002), Villages (2004) and Terrorist (2006).
3. CRITICAL OPINIONS
Critical opinions on the work of John Updike are diverse and abundant, which is hardly surprising for a prolific writer with a career spanning more than four decades. Updike's creative acumen as one of America's leading contemporary writers rests on the exciting and innovative expanse of his novels, numbering more than twenty, from the inaugural The Poorhouse Fair (1959) to the later Villages (2004). All the same, Updike's multidimensional genius can also be witnessed in a good number of short stories and poems, besides essays, criticism and the memoir titled Self-Consciousness. Needless to say, it is the fictional works which have attracted maximum scholarly attention, as these offer a challenging and highly involving spectrum of investigation to any intelligent and committed researcher.
3.1 It has been rightly pointed out that Updike's works, like Faulkner's, reveal increasing unease with structure, whether in form or content; in Updike's case this shows particularly in the concentration on perception as a last desperate remedy for the problem of meaninglessness. Again and again, like so many modern novelists, he returns to describe and evoke experience, no matter what that experience may be, for, in the face of increasing social and personal collapse, the feeling of the moment is the only positive reality man has [4].
The "feeling of the moment" can be defined in terms of desperation to find some cure for the existential problems afflicting Updike's fictional protagonists. Cumulatively, what the reader encounters in the novels of Updike is a fractured, dystopian, deconstructive and disruptionist scenario of the contemporary American family, human relationships and, above all, the institution of marriage. His work as a novelist "can appear realistic and local, but its resonances are greater; his essential concern is with transcendent form and the pressure against it of a compelling but disquieting history; his novels of domesticity are really novels of social anxiety and secular unease" [5]. The factor of "social anxiety and secular unease" obviously relates to the chronic recurrence of adultery, infidelity and other marital aberrations which afflict the fictional cosmos Updike weaves in his novels.
3.2 Norman Podhoretz comments that Updike's "prose was overly lyrical, bloated like a child who had eaten too much candy" [6]. Podhoretz goes on to add that Updike "seems a writer who has very little to say and whose authentic emotional range is so narrow and thin that it may without too much exaggeration be characterized as limited to a rather timid nostalgia for the confusions of youth" [7]. This opinion seems somewhat lop-sided and extremist: Updike's concern for the existential aberrations of the postmodernist generation in the United States is construed as a mere fondness for behavioural vacillations among the young men and women who populate his novels. Another interesting opinion focuses on the fact that for Updike the "subject matter has always been contemporary American middle-class life; the life-styles of his characters are close to those of the country at large, and his fiction could hardly escape the radical changes in those lives" [8]. Primarily because of the instability of society itself, human nature and relationships within such a social environment chronically reflect the irreconcilable gap between the American middle-class ideal and ground-level experience in a turbulent world. The postmodern American novel, with such fictional
men and women in its narratives, does not concern itself with high and lofty ideals and goals, but increasingly and rather
obsessively deals with the burden, the pain, the anguish, and even the ambivalence of personal relationships. Such
relationships in a demonic human world leave no scope for anything but defeats and victims on the sociological plane. Such
fictional narratives, Updike's included, reveal ironic American Adams and Eves hopelessly, and in an absurdist manner,
striving for existential redemption. More often than not, the climactic product comes in the form of what could be termed
“adamic falls and quixotic redemptions".
3.3 John Updike “uses texture and a new sort of pattern in place of linear action”, [9] and even his “style is a view of the
world through a lens of alienation” [10], which is the recurrent problem inherent in the contemporary American novel. And
not only alienation but human relationships as a whole become “an ever changing act of apprehension, belonging in the
contemporary world of changing thought, changing history, changing ways of naming experience ...” [11], something
witnessed repeatedly across the firmament of Updike’s fictional cosmos. Another critical consensus maintains that Updike
has “matured and developed as a writer concurrently with the birth of a new American culture. His methods have been to
grow with that culture, while maintaining a basic artistic conservatism which forms a helpful bridge from our present times to
the recent, but aesthetically remote, past” [12]. Ostensibly, as a realistic socio-cultural diagnostician, Updike, in spite of his
artistic compulsions, keeps his creative stance in the novels that of a fabulator, successfully synthesizing contemporary
ills with past forgotten glories.
4. WHY DISMEMBERMENT AND DISSOLUTION?
The family, a sacrosanct institution inherited as a pious acquisition in any given civilization and culture, now, in the works of
writers like Updike and implicitly in actual reality, undergoes fracture and fission, dismemberment and dissolution. After all,
why? The answer is not far to seek: an unnerving disequilibrium in the gender-equation. The man-woman relationship as
projected in the novels of Updike and his contemporaries like Kurt Vonnegut, Joseph Heller, John Hawkes and Jerzy
Kosinski, to name a few, undergoes demonic changes and becomes an exercise in anguish, bondage, pain and confusion. The
reader is treated to a kaleidoscopic spectrum of a Kafkaesque scenario in which both men and women get entangled in a
Sisyphus-like struggle against the crippling ethos of a mass society and a hostile social environment in which human dignity
and ethical values seem doomed to extinction.
4.1 In the current postmodern urban culture in developed as well as developing countries, the preceding observation has
since become a hard reality and in this context John Updike becomes "in worldly terms a successful writer" [13], who has
"always taken pride in the professionalism of his work,” [14] as reflected in his novels. Perhaps, such a creative contingency
earns for Updike an antithetical connotation as a novelist who is "Christian on the one hand, yet twentieth century skeptic on
the other" [15]. The fact also remains that Updike is a “humanist, believing strongly in the worth of the individual, yet finding
him often defeated by the forces beyond his control” [16]. These “forces” get epitomized by the intrusion of illusory entities
like an irresistible desire for individual satisfaction and a workable identity amidst destabilized, sterile and dismembered
lives. In novel after novel, this existentially operative scenario is presented by Updike as the stark truth about the personages
who represent the actuality of American society and culture in his novels. A natural corollary to this formulation comes
when Updike's novelistic protagonists, with their inflated egos, make desperate attempts to propitiate their dreams, perversions
and fantasies.
5. AUTHOR’S SENSIBILITIES
5.1 Coming back to the author’s sensibility, Updike’s “ideologies are a composite of too many for definition” [17] because
he is “his own best example moving one step up on the foundations of his own accomplishment each time a new book
appears” [18], which can be seen in the novels as well. As Updike “tends to be a social realist” [19], he depicts the feeling of
the absurd in the relationships between men and women, besides illustrating a kind of a frustrating search for truth. To
borrow an expression from Ihab Hassan, this type of dispersal and dehumanization, perversion of human values and fracture
of relationships, witnessed among the male and female personages in the contemporary American novel, reminds one of a
“lyre without strings,” [20] reminiscent of Orpheus, the legendary Greek musician who was brutally slaughtered by the
Maenads. It is said that the severed head of Orpheus was thrown by his murderers into the ocean. The dismembered body of
Orpheus becomes the metaphor of a debased, dehumanized and deconstructive society which strengthens itself on the
degeneration of human values and morals. Metaphorically speaking, the modern as well as the postmodern assault on human
culture and relationships becomes a hard reality, thereby not only dehumanizing art but also deforming the creative energy.
Such a sociological vision creates a kind of an anxiety regarding society and human consciousness. A sense of ironic
contradiction and waste with human individuals becoming victims of society and certain processes also forms the main
concern in Updike’s novels of manners and morals. The novels have been appearing regularly, with a maintained focus on
the sociological aspect, earning Updike the label of “the most prolific major American writer of his generation” [21].
5.2 It has been frequently pointed out that a kind of compromise has been forged “between the conflicting realistic and
romantic traditions of the American novel” [22]. This realistic-romantic dichotomy continues to exist in the criticism of
American fiction in many forms and guises. Among these forms, one can logically include the form of marital relationships
as a part of sexual politics. Both the sexes have been shown “in their real complexity of temperament and motives” [23],
which means that the man-woman relationship reflects a kind of explicable equation vis-à-vis nature, each other, and the
social classes involved. Well-known practitioners of the contemporary American novel have with metonymical
regularity tried to redefine and restructure symbolism and realism in order to support their respective visions regarding the
impossibility of maintaining human equilibrium within a hostile social environment and Updike is no exception.
CONCLUSION
The spectacle of social dismemberment becomes the structure of reality, the structure of language and the structure of logic.
Whatever Updike is doing as a novelist holds quite true and realistic even in the Indian context. In the highly urbanized
metropolitan culture of our country, the man-woman relationship is no different: even average middle-class Indian
housewives crave extra-marital relationships. In surveys conducted by ‘India Today’ and ‘Outlook’ magazines in 2000
and 2004, a huge chunk of young unmarried women in their late 20s and early 30s preferred much older and mature men for
any kind of relationship. In fact, according to a recent issue of India Today, middle-class housewives crave extra-marital
relationships, and Outlook magazine in 2004 carried the cover caption “Women on Top”, with an ironic parody of the famous
episode of Lord Krishna taking the clothes of bathing milkmaids and sitting pretty on a tree to taunt and tease them. Here, as
shown by Outlook, it is the woman who has taken away the clothes of a number of men bathing in the river, thus earning the
sobriquet of “Woman on Top”. In the context of all these realist assertions it can be
justifiably said from the reader’s point of view that whatever John Updike portrays in his novels vis-à-vis American society
and the culture can apply to any given socio-cultural ethos in the postmodern world today. The “stringless lyre” of the
demonic Orpheus goes on reverberating with the dismembering music of dehumanization, fracture and fission.
REFERENCES
[1] Hassan, Ihab, “The Pattern of Fictional Experience”, Modern American Fiction: Essays in Criticism, ed. A. Walton Litz (New York, Oxford: Oxford University Press, 328) (1963).
[2] Hassan, Ihab, 325.
[3] Galloway, David, The Absurd Hero in American Fiction (Austin and London: University of Texas Press, 23) (1960).
[4] Miles, Donald, The American Novel in the Twentieth Century (Vancouver: David and Charles, 91) (1978).
[5] Bradbury, Malcolm, The Modern American Novel (Oxford, New York: Oxford University Press, 147) (1983).
[6] Podhoretz, Norman, Doings and Undoings, 251.
[7] Podhoretz, Norman, 257.
[8] Klinkowitz, Jerome, The Practice of Fiction in America: Writers from Hawthorne to the Present (Ames: Iowa State University Press, 85) (1980).
[9] Walcutt, Charles Child, Man’s Changing Mask: Modes and Methods of Characterization in Fiction (Minneapolis: University of Minnesota Press, 326) (1968).
[10] Finkelstein, Sidney, Existentialism and Alienation in American Literature (New York: International Publishers, 24) (1967).
[11] Bradbury, Malcolm, The Modern American Novel, 186.
[12] Klinkowitz, Jerome, The Practice of Fiction in America, 86.
[13] Miles, Donald, The American Novel in the Twentieth Century, 91.
[14] Miles, Donald, The American Novel in the Twentieth Century, 91.
[15] Burchard, Rachael, John Updike: Yea Sayings (Carbondale: Southern Illinois University Press, 3) (1971).
[16] Burchard, Rachael, 3.
[17] Burchard, Rachael, 4.
[18] Burchard, Rachael, 4.
[19] Neary, John, Something and Nothingness: The Fiction of John Updike and John Fowles (Carbondale and Edwardsville: Southern Illinois University Press, 3) (1992).
[20] Hassan, Ihab, The Dismemberment of Orpheus: Toward a Postmodern Literature (New York: Oxford University Press, 5) (1971).
[21] Hunt, George W., S.J., John Updike and the Three Great Secret Things: Sex, Religion, and Art (Grand Rapids: William B. Eerdmans Publishing Company, 1) (1980).
[22] Werner, Craig Hansen, Paradoxical Resolutions: American Fiction Since James Joyce (Urbana: University of Illinois Press, 1) (1982).
[23] Werner, Craig Hansen, 1.
ISSN No.:0975-3389
EMERGING APPLICATIONS: BLUETOOTH TECHNOLOGY
Y. P. CHOPRA*
Professor, Department of Electronics and Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Rohit Khanna
VIII Semester, Department of Electronics & Communication Engineering
Dronacharya College of Engineering Gurgaon-123506, India
Email: [email protected]
Meenu Rathi
VIII Semester, Department of Electronics & Communication Engineering
Dronacharya College of Engineering Gurgaon-123506, India
Email: [email protected]
______________________________________________________________________________________________________________________________
ABSTRACT
Bluetooth technology is a low-cost, low-power and short-range wireless radio communication technology for ad hoc wireless communication of voice and data
anywhere in the world. It has seen phenomenal growth during the last 10 years. During 2008 more than 4 billion Bluetooth-enabled products were
marketed. Bluetooth Version 3.0 + HS, launched in July 2009, has enhanced the data speed to 24 Mbps. A new Facebook tool named ‘CityWare’
has been invented; it uses the Bluetooth unique ID to build networks. Bluetooth is also being tried for mobile payments. A number of Bluetooth-enabled
devices are under development for adoption by the automotive industry, which will enhance the quality of in-car voice communication systems. In the very near
future users will be able to stream their choice of music from devices such as MP3 players to the car entertainment system. Security concerns have been taken
care of by the adoption of encryption techniques. The future of Bluetooth lies in enhancing security and data transfer speed and in avoidance of interference.
Keywords: Bluetooth, Adaptive Frequency Hopping, Facebook, m-commerce, MP3 Player
______________________________________________________________________________________________________________________________
1. INTRODUCTION
Bluetooth is a low-cost, low-power, short-range wireless radio technology. It offers a uniform structure
for a wide range of devices to connect and communicate with each other while maintaining a high level of security. Its
fundamental strength is the ability to simultaneously handle both data and voice transmission. Bluetooth technology was
originally developed by Ericsson, the Swedish phone manufacturer, as a method to allow electronic devices such as a mobile
phone or a computer to use short-range radio waves to connect to each other without the use of cables or wires. Bluetooth was
named after the late 10th-century king Harald Bluetooth, king of Denmark and Norway. He is known for his unification of
previously warring tribes from Denmark (including Scania, in present-day Sweden) and Norway. Bluetooth likewise was
intended to unify different technologies, such as computers and mobile phones. The Bluetooth logo shown in Fig. 1 merges the
Nordic runes analogous to the modern Latin H and B, ᚼ (Hagall) and ᛒ (Bjarkan) from the Younger Futhark, forming a bind rune [1].
Fig.1: Bluetooth Logo
*Corresponding Author
2. DEVICES OPERABLE WITH BLUETOOTH
Fig. 2 shows the different types of devices that can be linked by wireless personal area network communication [2]
Fig. 2: Devices in Bluetooth operation
Bluetooth technology has been successfully used and incorporated in most electronic devices like keyboards, mice,
monitors, speakers and microphones. To date, even presentation projectors have been successfully used with Bluetooth.
2.1 Radio Spectrum
Bluetooth devices operate in the 2.4 GHz ISM (Industrial, Scientific and Medical) band, an unlicensed, free band in most
countries. The bandwidth is sufficient to define 79 1-MHz physical channels. Gaussian FSK modulation is used, with a
binary one represented by a positive frequency deviation and a binary zero by a negative frequency deviation
from the center frequency; the minimum deviation is 115 kHz.
2.2 Power of Transmission
Bluetooth is power class dependent. The three power classes which cover effective ranges are shown in the table-1 below:
Class
Maximum permitted power Range
mW/dBm
Approx.
100mW(20 dBm)
~ 100 meters
2.5 mW( 4dBm)
~10 meters
1mW(0 dBm)
~ 1 meters
Class-I
Class-II
Class-III
Table 1: Bluetooth Power Transmission Classes
It has been seen that in most cases the effective range of Class 2 devices is extended if they connect to a Class 1 transmitter,
compared to a pure Class 2 network. This is accomplished by the higher sensitivity and transmitter power of the Class 1 device.
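The mW and dBm columns of Table 1 are related by the standard decibel formula P(dBm) = 10·log10(P(mW) / 1 mW). A minimal sketch of the conversion, offered only as an illustration (the function names are ours, not from any Bluetooth library):

```python
import math

def mw_to_dbm(p_mw: float) -> float:
    """Convert power in milliwatts to dBm (reference level: 1 mW)."""
    return 10 * math.log10(p_mw)

def dbm_to_mw(p_dbm: float) -> float:
    """Convert power in dBm back to milliwatts."""
    return 10 ** (p_dbm / 10)

# The three Bluetooth power classes of Table 1:
for cls, p_mw in [("Class 1", 100.0), ("Class 2", 2.5), ("Class 3", 1.0)]:
    print(f"{cls}: {p_mw} mW = {mw_to_dbm(p_mw):.0f} dBm")
```

Running this reproduces the 20 dBm, 4 dBm and 0 dBm figures of the table.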
3. SPECIFICATIONS & FEATURES
Bluetooth specifications were developed in 1994 by Jaap Haartsen (joined six months later by Sven Mattisson), who were
working for Ericsson Mobile Platforms in Sweden. The specification was based on frequency-hopping spread spectrum
technology. The specifications were formalized by the Bluetooth Special Interest Group (SIG), formed on May 20, 1998 by
Ericsson, IBM, Intel, Toshiba and Nokia and later joined by many other companies. Various versions, namely Bluetooth 1.1,
Bluetooth 1.2 and Bluetooth 2.0, were developed and used. The current versions in use and those under development are given
below [3,4].
3.1 Bluetooth 2.1 + EDR
This version, code-named Lisbon, was adopted by the Bluetooth SIG on Aug 1, 2007. It supports data transfer speeds up to 3 Mbps and
includes the following features:
- Fully backward compatible with 1.1
- Extended inquiry response: provides more information during the inquiry procedure to allow better filtering of devices
before connection. This information includes the name of the device and a list of services the device supports, as well as
other information.
- Sniff subrating: reduces the power consumption when devices are in sniff low-power mode, especially on links
with asymmetric data flows. Human interface devices (HID) are expected to benefit the most, with mouse and keyboard
devices increasing battery life by a factor of 3 to 10.
- Secure Simple Pairing (SSP): radically improves the pairing experience for Bluetooth devices while increasing the use
and strength of security.
- Near Field Communication (NFC): enables automatic creation of a secure Bluetooth connection when an NFC radio
interface is also available. This function is a part of SSP, where NFC is one way of exchanging pairing information. For
example, a headset can be paired with a Bluetooth 2.1+EDR phone that includes NFC just by bringing the two devices
close to each other (a few cm). Another example is automatic uploading of photos from a mobile camera or phone to a
digital picture frame by just bringing the camera or phone close to the frame.
3.2 Bluetooth 3.0 + HS
The 3.0+HS specification was adopted on April 21, 2009 by the SIG. It supports data transfer speeds up to 24
Mbps. Its main features include:
- It builds on the 2.1 + EDR version, including Secure Simple Pairing and built-in security.
- Alternate MAC/PHY: enables the use of alternate medium access control (MAC) layers and PHYs for transporting Bluetooth
profile data. The Bluetooth radio is still used for device discovery, initial connection and profile configuration; however,
when lots of data need to be sent, the high-speed alternate MAC/PHY (typically associated with Wi-Fi) is used to
transport the data.
- Unicast Connectionless Data: permits data to be sent without establishing an explicit L2CAP channel. It is
intended for use by applications that require low latency between user action and reconnection/transmission of data. It
will be used only for small amounts of data.
- Read Encryption Key Size: a standard HCI command has been introduced for a Bluetooth host to query the encryption
key size on an encrypted link.
- Enhanced Power Control: updates the power control feature by removing open-loop power control as well as
ambiguities in power control introduced by the new modulation scheme added for EDR. This feature adds closed-loop
power control. Additionally, a “go straight to maximum power” request has been introduced. This will deal with the
headset link-loss issue typically observed when the user puts his phone into a pocket on the opposite side to the headset.
4. TECHNOLOGIES
The key technologies used for transmission of data/text are Frequency Hopping Spread Spectrum and Adaptive Frequency
Hopping [5,6].
4.1 Frequency Hopping
Frequency hopping spread spectrum is a process where a message or voice communication is sent on a radio channel that
regularly changes frequency (hops) according to a predetermined code. The receiver of the message or voice information also
receives on the same frequency, using the same frequency-hopping sequence. This technique provides, firstly, resistance to
interference and multipath effects and, secondly, a form of multiple access among co-located devices in different piconets.
Fig. 3: Frequency Hopping Spread Spectrum
Fig. 3 above shows a simplified diagram of how a Bluetooth system uses frequency hopping to transfer information (data) from
a transmitter to a receiver using 79 communication channels, each of 1 MHz bandwidth [7]. The hop rate is 1600 hops
per second, so that each physical channel is occupied for a duration of 0.625 ms. Each 0.625 ms time period is referred to as a slot, and
these are numbered sequentially. A time-division duplex (TDD) discipline is used.
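The slot arithmetic above, and the idea of a seeded pseudo-random hop sequence over the 79 channels, can be sketched as follows. Note that the hop-selection function here is a toy stand-in using Python's PRNG, not the actual Bluetooth hop-selection kernel (which derives the sequence from the master's address and clock):

```python
import random

HOP_RATE_HZ = 1600          # channel hops per second
SLOT_S = 1 / HOP_RATE_HZ    # 1/1600 s = 0.625 ms per slot
NUM_CHANNELS = 79           # 1-MHz channels in the 2.4 GHz ISM band

def toy_hop_sequence(seed: int, n_slots: int) -> list[int]:
    """Illustrative stand-in for the real hop-selection kernel:
    a seeded PRNG picks one of the 79 channels for each 625-us slot.
    Transmitter and receiver sharing the seed hop in lockstep."""
    rng = random.Random(seed)
    return [rng.randrange(NUM_CHANNELS) for _ in range(n_slots)]

print(f"slot duration: {SLOT_S * 1e3:.3f} ms")   # 0.625 ms
print("first 8 hop channels:", toy_hop_sequence(seed=0xC0FFEE, n_slots=8))
```

The key property the sketch demonstrates is that two devices with the same seed generate identical sequences, which is what lets the receiver follow the transmitter across hops.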
4.2 Adaptive Frequency Hopping
Bluetooth specification 1.2 introduced adaptive frequency hopping (AFH), which can reduce the effects of interference between
Bluetooth and other types of devices. AFH adapts the channel-sharing method so that transmission does not
occur on channels that have significant interference.
Fig. 4: Bluetooth device changing its hopping pattern
By using interference avoidance, devices that operate within the same frequency band and within the same physical area can
detect the presence of each other and adjust their communication to reduce the amount of overlap caused by each
other.
Fig. 4 shows how a Bluetooth device changes its hopping pattern to avoid interference to and from other devices that operate
within its frequency band. This example shows that, after detecting the presence of a continuous signal being transmitted by the
video camera in the 2.46 GHz band, the Bluetooth device automatically changes its frequency-hopping pattern to avoid
transmitting on the frequency band used by the video camera signal transmission.
This results in more packets being successfully sent by the Bluetooth device and reduced interference from the Bluetooth device
to the transmitted video signal.
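The interference-avoidance step can be sketched as re-deriving the hop set from a channel map that excludes the interfered channels. This is a deliberate simplification: real AFH classifies channels continuously from measurements such as packet-error statistics, whereas here the set of interfered channels is simply given:

```python
NUM_CHANNELS = 79  # 1-MHz Bluetooth channels in the 2.4 GHz ISM band

def build_channel_map(interfered: set[int]) -> list[int]:
    """Return the usable channels after excluding those overlapping a
    detected interferer (e.g. a continuous video-camera signal).
    Simplified sketch of the AFH channel-map idea."""
    return [ch for ch in range(NUM_CHANNELS) if ch not in interfered]

# Suppose channels 60-65 overlap a continuous signal near 2.46 GHz:
good = build_channel_map(interfered=set(range(60, 66)))
print(f"hopping over {len(good)} of {NUM_CHANNELS} channels")  # 73 of 79
```

Subsequent hops are then drawn only from `good`, which is why more Bluetooth packets get through and the interferer sees less overlap.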
5. COMMUNICATION AND CONNECTION
The basic unit of networking in Bluetooth is a piconet: an ad hoc network consisting of a master and from one to seven active
slave devices (Fig. 5).
Fig. 5: Piconet
Fig. 6: Scatternet
The master determines the channel (FH sequence) and phase (timing offset, i.e., when to transmit) that shall be
used by all devices in the piconet. One master can interconnect with up to seven active slave devices because a three-bit
MAC address is used. Up to 255 further slave devices can be inactive, or parked, which the master device can bring into
active status at any time. At any given time data can be transferred between the master and one slave. The master rapidly
switches from slave to slave in a round-robin fashion. Either device may switch the master-slave roles at any time.
The Bluetooth specification allows connecting two or more piconets together to form a scatternet (Fig. 6), with some devices acting as
bridges by simultaneously playing the master role in one piconet and the slave role in another [8]. These devices have yet to come,
though they were supposed to appear in 2007.
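The seven-slave limit (a consequence of the three-bit active member address) and the master's round-robin polling can be sketched as a toy model; the class and its method names are ours, purely for illustration:

```python
from itertools import cycle, islice

MAX_ACTIVE_SLAVES = 2**3 - 1   # 3-bit address space; one value reserved

class Piconet:
    """Toy model: one master polls up to seven active slaves in turn."""

    def __init__(self, master: str):
        self.master = master
        self.slaves: list[str] = []

    def add_slave(self, name: str) -> None:
        if len(self.slaves) >= MAX_ACTIVE_SLAVES:
            raise ValueError("only 7 active slaves per piconet")
        self.slaves.append(name)

    def poll_order(self, n_slots: int) -> list[str]:
        """Round-robin: which slave exchanges data in each successive slot pair."""
        return list(islice(cycle(self.slaves), n_slots))

net = Piconet("phone")
for s in ["headset", "keyboard", "mouse"]:
    net.add_slave(s)
print(net.poll_order(5))  # ['headset', 'keyboard', 'mouse', 'headset', 'keyboard']
```

Attempting an eighth `add_slave` raises, mirroring the fact that further devices must be parked rather than active.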
5.1 Setting Up Connection
Any Bluetooth device will transmit the following sets of information on demand: device name, device class, list of services,
and technical information (for example, device features, manufacturer, Bluetooth specification used and clock offset).
Any device can perform an inquiry to find other devices to which to connect, and any device can be configured to respond to
such inquiries. However, if the device trying to connect knows the address of the other device, it always responds to direct
connection requests and transmits the information listed above if requested. Use of a device's services may require pairing or
acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Every
device has a unique 48-bit address. These addresses are generally not shown in inquiries; instead, friendly Bluetooth names
are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices.
6. SECURITY CONCERNS OF BLUETOOTH
A security protocol prevents an eavesdropper from gaining access to confidential information exchanged between two
Bluetooth devices. For maintaining security at the link layer, four different entities are used:
a) A 48-bit device address (BD_ADDR), unique for each Bluetooth device.
b) A 128-bit random number (RAND).
c) A private device key of 128 bits for authentication.
d) A private device key of 8 to 128 bits for encryption.
In addition, link-level encryption and authentication and a personal identification number (PIN) for device access are
employed. The keys themselves are not transmitted over the wireless link; other parameters are transmitted which, in
combination with certain information known to the device, can generate the keys.
A number of steps are carried out sequentially by the two devices to implement authentication and encryption.
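The authentication step among those entities is a challenge-response exchange: the verifier sends a RAND, and the claimant proves knowledge of the shared key by returning a keyed function of the RAND and its BD_ADDR. The sketch below is heavily simplified; Bluetooth's actual E1 algorithm is SAFER+-based, and the HMAC-SHA-256 used here is only a stand-in to show the shape of the protocol:

```python
import hashlib
import hmac
import os

def respond(link_key: bytes, bd_addr: bytes, rand: bytes) -> bytes:
    """Claimant's response: keyed hash over the challenge and its address.
    Stand-in for Bluetooth's E1 function, NOT the real algorithm."""
    return hmac.new(link_key, rand + bd_addr, hashlib.sha256).digest()

# Both sides share a 128-bit link key, which is never sent over the air.
link_key = os.urandom(16)
claimant_addr = bytes(6)        # placeholder 48-bit BD_ADDR
challenge = os.urandom(16)      # 128-bit RAND, sent in the clear

# The verifier computes the expected response locally and compares it
# with what the claimant returns; only the RAND crossed the air.
expected = respond(link_key, claimant_addr, challenge)
received = respond(link_key, claimant_addr, challenge)
print("authenticated:", hmac.compare_digest(expected, received))
```

The design point carried over from the real protocol is that only the random challenge is transmitted; the key stays local on both ends, matching the note above that keys are generated, not exchanged.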
7. BLUETOOTH APPLICATIONS
In order to use Bluetooth, a device has to be compatible with certain Bluetooth profiles.
The prevalent applications are given below:
a. Wireless control of, and communication between, a cell phone and a hands-free headset or car kit (this was one of the earliest applications to become popular).
b. Wireless networking between PCs in a confined space where little bandwidth is required.
c. Wireless communication with PC input and output devices, the most common being the mouse, keyboard and printer.
d. Transfer of files, contact details, calendar appointments and reminders between devices via OBEX.
e. Replacement of traditional wired serial communications in test equipment, GPS receivers, medical equipment, bar-code scanners and traffic control devices.
f. Low-bandwidth applications where higher (USB) bandwidth is not required and a cable-free connection is desired.
g. Controls where infrared was traditionally used.
h. Sending small advertisements to other discoverable Bluetooth devices.
i. Wireless control of game consoles: Sony's PlayStation 3 uses Bluetooth technology for its wireless controllers.
j. Sending commands and software to the LEGO Mindstorms NXT instead of using infrared.
k. Wireless bridging between two industrial Ethernet networks (e.g. PROFINET).
l. Dial-up internet access on personal computers or PDAs using a data-capable mobile phone as a wireless modem, like the Novatel MiFi.
8. BENEFITS OF BLUETOOTH TECHNOLOGY
Bluetooth technology has come to stay and is being adopted more and more all over the globe. It offers the following benefits:
a. Globally available free of cost.
b. Easy to use: it is an ad hoc technology that requires no fixed infrastructure and is simple to install.
c. Globally accepted specification: the Bluetooth Special Interest Group, formed by leading communication and software companies, has ensured that all manufacturers follow the same specifications in their products.
d. Secure connections: it incorporates adaptive frequency hopping together with built-in 128-bit encryption and a PIN-code authentication procedure. This ensures that security is maintained between the two Bluetooth devices in communication with each other.
e. Low cost: a Bluetooth chip that enables a device to become Bluetooth-enabled costs just under Rs 150. Over 4000 companies have now become members of the Bluetooth SIG.
9. FUTURE OF BLUETOOTH TECHNOLOGY
The futuristic trends in Bluetooth technology and its applications are summarized as under:
(a) Bluetooth Low Energy: Bluetooth Low Energy has replaced the Wibree and Ultra Low Power efforts. Expected uses
include watches displaying caller-ID information and sports sensors monitoring the wearer's heart rate during exercise. The
medical devices working group is also developing devices with a battery life of up to one year.
(b) Broadcast Channel: will enable Bluetooth information points. This will drive adoption of Bluetooth into mobile
phones and enable advertising models based around users pulling information from information points, rather than the
object-push model that is used in a limited way today.
(c) Topology Management: will enable the automatic configuration of piconet topologies, especially in scatternet
situations that are becoming more common today. It will all be invisible to users of the technology, while also making the
technology “just work”.
(d) Bluetooth Application for Facebook: researchers at Britain's Bath University have invented a tool which uses the Bluetooth
unique ID to build new networks. Users register with the Facebook tool, called CityWare; their real-life encounters are
then tracked by Bluetooth. Trials are under way in the UK and USA, where nodes have been set up. The nodes scan for
Bluetooth-enabled devices and then send the information back to servers, which compare the IDs of the gadgets with any
enabled Facebook profile. This will enable cell phones to alert each other when two Facebook users who share
common interests or common friends are close to each other.
(e) Bluetooth as a Channel for M-commerce Transactions: trials are under way to make use of Bluetooth as a channel for
m-commerce transactions. Secure card payment is being tried that allows roaming up to 100 m: the customer does not have to
be in the line of sight of the card reader in order to access the card, which will avoid time wasted queuing. Bluetooth
will scan at the gate instead of a bar-code reader. When fully developed, it will be a very secure technology, except if the
phone is lost [9].
(f) Bluetooth in the Automotive Industry: a number of device manufacturers are designing products for vehicles [9]. These
include:
• Bluetooth stereo headset: the BlueCore 3-Multimedia chip integrates an on-chip battery charger and DSP to improve audio
quality and battery life. It will enable passengers to listen to stereo music without distracting the driver.
• Combined navigation, radio and CD/MP3 player: it will use Bluetooth technology to link to mobile phones, have a voice-controlled
function feature and be able to store address and telephone directories, which can be exchanged with those in the
mobile phones via the Bluetooth link.
• Advanced Audio Distribution Profile devices: these will enable Bluetooth-equipped joysticks to be used with gaming
consoles and allow audio streaming.
It is expected that in the very near future, users will be able to stream their choice of music from devices such as MP3 players to
the car's central entertainment system. They will then be able to listen to their precompiled playlist via the vehicle's audio
system and use the system to control MP3 player operation.
CONCLUSION
Bluetooth technology has proved very effective in replacing cables for short-range applications with very low power, and that
too at low cost; the Bluetooth chip costs just under Rs 150. It has made tremendous progress, which can be gauged from the
fact that over 4 billion Bluetooth-enabled devices were marketed in 2008. Security concerns have been duly taken care of.
The future of Bluetooth technology lies in enhancing the data speed over longer ranges with a minimum of interference.
Bluetooth technology will find use in m-commerce transactions and, in a big way, in the auto industry.
REFERENCES
[1] Kumar, Sanjeev, Wireless & Mobile Communication, (2008).
[2] Prabhu, C.S.R. and Reddi, A.P., (2004).
[3] Stallings, William, Wireless Communications & Networks, (2008).
[4] “Bluetooth”, Wikipedia, the free encyclopedia.
[5] Bluetooth Technology & its Applications with Java & J2ME.
[6] Muller, Nathan J., Bluetooth Demystified, (2001).
[7] Luhar, D.R., Introduction to Bluetooth, Sigma Publishing, (2004).
[8] Nain, Zulkhar, Bluetooth: A Failed Technology for Mobile Payment, (2006).
[9] Murray, Anthony, Bluetooth Trends and their Adoption in Automotive Applications.
ISSN No.:0975-3389
FUTURE SCOPE OF GLOBAL NETWORKING OF ELECTRIC
POWER
Seema Das*
Assistant Professor, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Chandra Shekhar Singh
Assistant Professor, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Gaurav Chugh
VIII Semester, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
______________________________________________________________________________________________________________________________
ABSTRACT
One day there will be a global electricity network that links all countries of the world. It will take decades to complete, but the first step has already been
taken and the principle is deemed workable. A key benefit will be a reduction in the number of power stations needed globally. During the night, for
example, any given country's generators can be kept running efficiently by supplying the extra needs of other countries during their working hours
further round the globe. It will also help balance the different types of power generation indigenous to different countries. Nuclear power stations can take
several days to switch on or off, so they are best at providing continuous base-load electricity. A hydropower generator can be started in minutes, making it
ideal for meeting surges. For example, Switzerland imports base-load electricity from French nuclear power plants, but exports power from its Alpine dams
in short bursts to meet France's peak needs. No single country then has to cater for all contingencies, because it can draw on resources from elsewhere.
A global network will enable easier access to major sources of energy that are currently uneconomical to reach. The Himalayan kingdom of Nepal, for
example, is a remote region with large hydroelectric potential. The capacity of its national system is about 300 MW, according to its Water and Energy
Commission, but it could generate more than 40 GW of hydroelectricity in its steep valleys. Such a grid will increase security of supply, reduce the need for
new power plants and cut back on the primary electricity reserve requirements within each country. This includes the use of spinning reserve, where a station
is semi-powered so it can take over very quickly if another station fails or if demand rises sharply.
Keywords: HVDC, CCC, GENI, UHV, UCTE
______________________________________________________________________________________________________________________________
1. INTRODUCTION
Thirty years ago, electric power could be transmitted efficiently over only about 600 kilometers. Breakthroughs in materials science
extended this transmission distance to 2500 kilometers. This allowed utilities to interconnect across time zones and
compensate for variations in seasonal demand. The buying and selling of power is now common, because utilities want to
level the peaks of energy demand.
Today, research shows the efficient distance of ultra-high-voltage (UHV) transmission to be 7000 kilometers for direct
current and 4000 kilometers for alternating current. This would allow for power interchange between the Northern and Southern
hemispheres, as well as East and West. Because of electricity's link to a quality standard of living, the interconnection of
regional power grids became the highest-priority objective of the World Game.
Expanding power grids has proven to be both economically and environmentally desirable. Presently, 80% of all power
generation is non-renewable, causing many of the world's environmental ills -- greenhouse gases, acid rain, toxic wastes. Yet,
enormous potential for hydro, tidal, solar, wind and geothermal sites exist around the world. These are oftentimes in remote
locations, but within economic transmission reach. Today, as peak power is often purchased from a neighboring utility, the
most inefficient, expensive and polluting generators are being phased out. Billions of dollars are presently being saved
through shared power, and much of the future demand can be met from wheeled electricity, rather than constructing the next
power plant. These savings are reflected in reduced customer costs, while expanding markets for each power producer -- a
massive win-win situation. In most developed countries, end-use efficiency is the priority. However, demand-side
management is difficult for developing countries whose energy demand is rapidly increasing. One does not become
environmentally concerned until survival is handled. Efficiency savings are important, yet they are only part of the solution.
*Corresponding Author
2. INTERCONNECTED SYSTEMS
"There is no energy shortage, there is no energy crisis, there is a crisis of ignorance."
These words, spoken by Buckminster Fuller over 20 years ago, ring even more true today. His answer to what is seen as an
"energy crisis" was to prove there is enough energy for everyone to enjoy a quality standard of living without destroying our
planet. Fuller's solution — a global electric network.
The linking of electrical transmission across time zones, east to west, and seasonal variations, north to south, currently allows
utilities to level demand patterns and make the most efficient and economic use of remote energy sources. Fuller projected
that interconnected grids between countries and continents would be possible with today's technology, transcending political
and geographic boundaries. He was right. The Union for the Co-ordination of Transmission of Electricity (UCTE) is the
association of transmission system operators in continental Europe. It provides a reliable market base by creating efficient
and secure electric power highways. It has 50 years of experience in the synchronous operation of interconnected power
systems, and its networks supply some 500 million people in 22 countries, from Portugal to Romania and from the
Netherlands to Greece, with about 2300 TWh of electricity. Part of its network is the Baltic Ring, recently developed by the
Baltic Ring Electricity Co-operation Committee (BALTREL) which has created a common electricity market in Latvia,
Lithuania, and Estonia. It expects this will strengthen economic development in the region, increase reliability of supply and
help the environment.
Currently, UCTE is investigating the feasibility of a synchronous interconnection between the Baltic States, Russia and many
countries of eastern Europe as far as Mongolia. This would create an electricity system with an installed generation capacity
of some 800 GW, spanning 13 time zones and serving about 800 million people.
Today, Europe is linked to North Africa by ac cable between Spain and Morocco, Algeria and Tunisia – known as the
Maghreb, or western, countries. Further interconnection will bring in Tunisia and Libya, already forming a synchronous
block with Egypt, Jordan and Syria – known as the Mashreq, or eastern, countries. This is the basis of the Mediterranean
Ring, which could eventually include Turkey.
The project will increase energy security in the entire region, and enable more efficient power flows at lower costs. It will
also reduce the need for more power plants to meet rapidly increasing demand for electricity in the southern and eastern
Mediterranean regions. From Turkey the ring would then link back into the European grid via Greece or through the newly
interconnected Eastern European country grids.
Apart from the economic and technical hurdles to be overcome, there are two quite different outlooks to reconcile. European
networks are highly meshed, consist of high-voltage lines, and serve a high consumption and a high density of consumers with
predictable load patterns. Grids in the southern Mediterranean region, by contrast, are typically non-redundant, serve fewer
loads concentrated in highly urbanised areas, and are strung out through the countryside at lower voltages.
3. HVDC TRANSMISSION SYSTEM
Siemens Power Transmission and Distribution Group is one company keeping an eye on these developments. As one of the
two world-leading suppliers in the HVDC market, it expects to contribute to discussions regarding technical realisation of the
project.
The group is already very active in HVDC around the world. Currently, it is working with local companies to construct a link
in southeast China. The US$121 million contract was awarded by China Southern Power Grid Company in Guangzhou, and
the project is expected to be connected in 2007. The new HVDC transmission line will eventually carry electricity from the
hydro and coal-fired power plants in the west of the country to the industrial districts of Guangdong.
India's largest power transmission project, the East-South HVDC Interconnector II, was completed by the Siemens PTD Group
ahead of schedule. It links the states of Karnataka and Orissa over a distance of 1450 km – the second-longest HVDC link in the
world – with a bulk power capacity of up to 2000 MW.
Siemens says HVDC is the only technically and economically feasible solution for interconnection of asynchronous grids and
for power transmission over large distances between generation and load centres.
Today, although most grids are ac, more dc lines are being installed, and the backbone of a global grid will probably be
HVDC. Such links are less costly than ac versions because they need only two main conductors while an ac line needs three,
and the losses are lower. But HVDC converter stations cost more than ac terminal stations, so HVDC may not be
economical over short distances unless earth return can be used to further reduce transmission-line costs.
The major advantage of HVDC is its controllability. There is no need to synchronise power stations and grids with each other,
and there are no problems with phase change over distance, so stability is not a problem.
The basic power control is achieved through a scheme in which one converter controls its dc voltage and the other
converter controls the current through the dc circuit. The control system acts through firing-angle adjustments of the thyristor
valves and through tap-changer adjustments on the converter transformers.
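As a rough numerical sketch of this voltage/current split (all function names and figures below are illustrative, not taken from Siemens or any real control scheme), the standard idealised thyristor-bridge relation Vd = Vd0·cos(α) − Rc·Id shows how firing-angle adjustments move the dc voltage, and hence the dc current:

```python
import math

def rectifier_dc_voltage(v_do, alpha_deg, r_c, i_d):
    """Idealised thyristor-bridge relation: Vd = Vdo*cos(alpha) - Rc*Id."""
    return v_do * math.cos(math.radians(alpha_deg)) - r_c * i_d

def adjust_alpha(alpha_deg, i_d, i_set, gain=0.5):
    """One crude control step: raising alpha lowers Vd and hence the dc
    current, so increase alpha when the current is above its setpoint."""
    return alpha_deg + gain * (i_d - i_set)

# Illustrative numbers only: 500 kV no-load voltage, 15 deg firing angle,
# 5 ohm equivalent commutation resistance, 2 kA dc current.
vd = rectifier_dc_voltage(v_do=500.0, alpha_deg=15.0, r_c=5.0, i_d=2.0)
alpha_next = adjust_alpha(15.0, i_d=2.2, i_set=2.0)
```

Raising the firing angle reduces cos(α) and thus Vd; a real controller closes this loop continuously, with the tap changer handling the slow, coarse corrections.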
A back-to-back HVDC station can be used to link two ac grids. This system isolates each grid from fault conditions and
disturbances on the other and eliminates the need for synchronisation while allowing two-way power transmission.
4. COMMERCIAL SYSTEMS
The latest from ABB is its HVDC2000 system, based on thyristor-switching at converter stations. Its key feature is the use of
capacitor commutated converters (CCC) in conjunction with its development of continuously tuned ac filters (ConTune).
These filters can be built to generate small quantities of reactive power but still provide good filtering.
Commutation capacitors are connected between the thyristor valve bridge and the converter transformers. With a CCC there
is no need to switch filter banks or shunt capacitor banks in and out to follow the reactive consumption when the active
power is changed.
The ConTune AC filter has electromagnetic tuning that adjusts to the inherent frequency variations and temperature
variations of the filter components. It uses a filter reactor with variable inductance based on an iron core with a control
winding round it.
By feeding a corrective direct current into the control winding, the total magnetic flux in the reactor is influenced, so
changing the inductance, which tunes the filter to the correct frequency of the harmonic.
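The tuning action can be sketched with the standard series-LC resonance formula f = 1/(2π√(LC)); the component values below are made up for illustration and do not describe the ConTune product:

```python
import math

def resonant_frequency(l_henry, c_farad):
    """Series-LC resonance: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

def inductance_for_harmonic(c_farad, base_hz, harmonic):
    """Inductance that tunes a fixed capacitance to the n-th harmonic."""
    f = base_hz * harmonic
    return 1.0 / (((2.0 * math.pi * f) ** 2) * c_farad)

# Example: tune a 2 uF filter branch to the 11th harmonic of a 50 Hz grid.
L = inductance_for_harmonic(2e-6, 50.0, 11)
```

Varying the reactor's inductance (here, simply solving for it) is what shifts the resonant frequency onto the harmonic to be filtered, which is the role the control winding plays in the ConTune reactor.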
HVDC converters produce current harmonics on the ac side and voltage harmonics on the dc side. For good performance,
low impedance tuned filters often need to be provided for the lowest characteristic harmonics. Detuning of conventional
filters is caused by network frequency excursions and component variations such as capacitance changes due to temperature
differences.
The outdoor air-insulated thyristor valve is a new component, made possible by the development of high-power thyristors. It
gives increased flexibility in the station layout; eliminates the need for a valve hall, including its subsystems; reduces the
equipment size; and makes it easier to upgrade existing stations. Future relocation of an HVDC station will also be simpler
when outdoor HVDC valves are used.
5. GLOBAL ENERGY NETWORK INSTITUTE (GENI)
The Global Energy Network Institute (GENI) is a tax-exempt IRS Sec. 501(c)(3) organization in the United States of America. It
conducts research and educational activities related to the international and inter-regional transmission of electricity, with a
specific emphasis on tapping abundant local and remote renewable energy resources. With the increased awareness of climate
change, growing energy demand, renewable resource solutions and smart technology over the past three years, GENI's strategic
position and activities have expanded as well.
Integrated resource usage is currently limited without interconnections and high-voltage transmission. Our research to date
finds that, using today's technology, the interconnection of large-scale renewable energy resources is an economic and
environmentally sustainable solution.
In considering the decision making processes of the global electricity industry, our position for the past 20 years has been that
there exist three areas of activity that would accelerate the attainment of optimal sustainable energy solutions:
First, we have said that the industry needs to be convinced that interconnection of renewable energy sources via high-voltage
transmission networks is a financially compelling, reliably secure and highly desirable forward energy option. In the United
States and in many other regions of the world, this awareness is now established and numerous projects are being financed
and developed.
Second, the general public and their representative organizations need to be aware of sustainable global energy options. A
major shift has occurred over the last three years (since 2006), as witnessed by the surge in websites, public campaigns
advocating renewable energy use and smart technology, and important policy changes favoring the use of renewables.
Third, policy makers need to be aware of global, sustainable energy options when determining their regional policy
direction and legislation. This awareness is growing, especially with the public's support, and it will continue to encourage
clean energy and energy efficiency policies until such policies are commensurate with the need.
In addition to these long-standing areas of focus, there are three new strategic areas that will also accelerate the attainment of
optimal energy solutions:
First, given the interconnected nature of our highly complex global issues, what is needed is a place for face-to-face
decision-making where global leaders from business, governments, education and NGOs can meet in cooperation and collaboration
(outside their specialized silos) to make informed and sustainable choices for humanity as a whole in the shortest possible
time. A state-of-the-art facility is needed that can access the inventory of world resources, where guests could visualize and
analyze historical and projected trends, study best practices and identify solutions to current and anticipated problems.
Second, we recognize that we live in a world driven by money, the marketplace and investment. Moving renewable
energy to the marketplace requires a massive shift of investment from fossil fuel to the renewable, clean-tech sectors.
Third, current realities make it clear that in the next decades 'the grid' will not reach rural areas where most of the 1.6 billion
people of the world (25% of humanity) live without electricity. These people live on less than $1 per day, most of them just
surviving. There is a clear and documented relationship between a livable standard of living and access to electricity, whether
delivered via an electric power grid or a stand-alone device.
Increasing electricity consumption per capita can directly stimulate faster economic growth and indirectly achieve enhanced
social development, especially for medium and low human development countries. The threshold for moving from a low to
medium human development economy appears to be crossed when 500 kWh per capita is attained. Electricity plays a key role
in development. Many of the large renewable resources are located in developing countries. With optimal global resource
development, excess power can be exported to developed nations. This would provide income for developing countries and
energy to drive their economies.
Fig. 1: Projected Global Energy Demand
The interconnection of renewable energy resources was the highest priority objective. It was revealed that the standard of
living is a function of sufficient kilowatt-hours per capita. There appears to be a threshold, reached at about 2000 kWh per
capita per year, that moves a country from developing to developed status.
Increasing availability and use of electricity is generally associated with a higher "quality of life." While different people and
cultures disagree on how to define quality of life, several measures are commonly used. Here we will examine four of these:
life expectancy, infant mortality (the number of children per 1000 live births who die in their first year), adult literacy rate,
and availability of safe drinking water.
Life expectancy definitely increases with energy consumption. Once a nation reaches 2000 kWh per capita, the average life
expectancy is about 75 years. China's emphasis on controlling birth rates and improving health care has resulted in a higher-than-average
life expectancy relative to its energy consumption. The Asian nations could significantly improve life
expectancy with increased availability of energy.
CONCLUSION
Utility grid system planners are facing an increasingly complex world. The current problems stopping expansion of existing
local or regional electricity grids appear to be financial, social and political considerations, rather than technical
ones.
As noted by Yuri Rudenko and Victor Yershevich of the Russian Academy of Sciences, the creation of a unified electrical
power system would not be an end in itself. Rather, it was their view that a unified system would be the natural result of
systems that demonstrated benefits in terms of economics, ecology and national priorities.
Possibly the most encouraging endorsement of the linking of renewable resources came on the heels of the 1992 Earth Summit
in Rio de Janeiro. Noel Brown, North American Director of the United Nations Environment Programme, stated that
tapping remote renewable resources is "one of the most important projects furthering the cause of environmental protection
and sustainable development."
REFERENCES
[1] Fuller, R. Buckminster, "Critical Path", St. Martin's Press, p. 206, (1981).
[2] World Energy Council Commission, "Energy for Tomorrow's World - the Realities, the Real Options and the Agenda for
Achievement", Draft Summary Global Report, 15th WEC Congress, Madrid, Spain, September, Appendix 9, Table I:
Regional Fuel Mix 1990, (1992).
[3] Hubbert, M. K., "The Energy Resources of Earth", Scientific American, Sept. (1971); and United Nations World Energy
Data Sheet, 1978. Earth's daily receipt of solar energy remaining after reflection and re-radiation = 3.160x10^17 kWh;
daily human energy consumption = 0.00104x10^17 kWh.
[4] Johansson, Thomas B. [et al.], "Renewable Energy - Sources for Fuels and Electricity", Island Press, p. 121, (1993).
Wind turbines generate power at $.053/kWh in areas of good wind resources; mature technology goal is $.029/kWh.
Also "Energy from the Sun", Carl J. Weinberg & Robert H. Williams, in Energy for Planet Earth, Readings from
Scientific American, W.H. Freeman & Co., 1991, p. 108: a new coal-fired power plant costs about $.05/kWh.
[5] Wolfe, Michael Hesse, "International Cooperation for Renewable Energy Transfer", IEEE Power Engineering Review,
June, pp. 17-18, (1992). Remote location of major renewable energy resources.
[6] Paris, L. [et al.], "Present Limits of Very Long Distance Transmission Systems", CIGRE International Conference on
Large High Voltage Electric Systems, Section 8, Conclusions, (1984): ". . . transmission systems can be set up over a
distance of as much as 7000 km in d.c. and 3000-4000 km in a.c. . . . as to make advantageous the exploitation of those
sources . . ."
[7] Remondeulaz, Jean, "East-West Europe Power Interconnection", Modern Power Systems, Vol. 8, Issue 8, Wilmington
Publishing, Ltd., August, (1992).
[8] Bohin, S., Eriksson, K., & Flisberg, G., ABB Power Systems AB, "Electrical Transmission", Conference - World
Energy Coalition, p. 507, (1991).
[9] Alam, M. S., Bala, B. K., Huq, A. M., Matin, M. A., "A Model for the Quality of Life as a Function of Electrical Energy
Consumption", Energy, Vol. 16, No. 4, p. 740, (1991).
[10] Hammons, T. J., Vedavalli, R., Abu-Alam, Y., deFranco, N., Drolet, T., McConnach, J., "International Electric Network
History and Future Perspectives on the United Nations and World Bank", IEEE Power Engineering Review, Vol. 13
(7), (1993).
ISSN No.:0975-3389
TECHNOLOGY AND TERRORISM
Dr. H.S. Dua*
Professor & HOD, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Meha Sharma
Assistant Professor, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Rahul Gupta
VIII Semester, Department of Electronics & Communication Engineering
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
__________________________________________________________________________________________________________________________
ABSTRACT
In today's world, hardly any issue gets greater attention and concern from the international community than the question of how to respond to terrorist
attacks and the security threat they pose to society. The world around us is shrinking in technological time and distance. Electronics, Information and
Communication Technology (EICT) has made rapid advances, bringing teledensity to more than 40% today compared to just 0.8% in 1995. However, the
same technology is being used by negative forces to perpetrate terrorism and create chaos in society. This paper brings out the problems facing society
regarding the menace of terrorism, the issues involved, the need for command and control, and the role of R&D, broadcasting and surveillance networks. The
challenges for EICT professionals in countering terrorism are also enumerated.
Keywords: Cyber Terrorism, Validy Technology, Deep Packet Inspection, Network Forensics, Terrorist Information Awareness, Electronic Surveillance,
Interactive Mobile TV.
___________________________________________________________________________________________________
1. INTRODUCTION
"Managing terrorism is also a war; it may be more serious than a conventional war on the border, because the enemy is
not visible, and neither the time of strike nor the place of strike is known." The world around us is shrinking in technological
time and distance. The national connectivity figures are very encouraging. The teledensity of the country, which
was 0.8% in 1995 (all fixed lines, no mobiles), had crossed 40% as of September 2009 [1]. The wireless (GSM & CDMA) segment adds
almost 10 to 12 million new subscribers each month, India being the 2nd largest cellular mobile phone network in the world.
12 million people in the country have access to the Internet today, with 6.4 million broadband internet users. We expect
broadband connections to reach the 10 million figure by the end of 2010.
With a very large teledensity and rural connectivity increasing rapidly, we are marching towards a reasonably connected
nation, but there is need for caution in the light of what the country has recently witnessed: havoc perpetrated by
terrorists and antisocial elements. The terrorists have used modern information and communication technologies as a
conduit for their nefarious activities. There is concern for security throughout the world. E-safety and security systems and
devices are more relevant today than ever before, when cyber crimes are on the rise. Research in the fields of sensors,
explosive detectors and CCTV has become important. Likewise, modern communication systems and interactive multimedia
devices are the need of the hour. Hence there is an urgent need to correctly channel our Electronics, Information and
Communication Technology (EICT) capabilities to face these natural and man-made challenges.
1.1 The Problem: Terrorism - a Global Threat
The world of terrorism is engaged in destroying mankind and its creations. World attention was focused on
this issue after the devastating terror attack of 9/11 in the USA. Statistics show that no two terrorist attacks are the same. After
every terrorist incident we hear the oft-repeated lament about the lack of intelligence. Data and intelligence are available in
plenty; the drawback is in their effective analysis and dissemination to the right stakeholders. In fact, there is an information
overflow resulting in an intelligence white-out. Very often different organizations work wastefully on the same problems,
and plan and take decisions without access to up-to-date or adequate knowledge.
*Corresponding Author
2. CYBER TERRORISM
Cyberterrorism, in general, can be defined as an act of terrorism committed through the use of cyberspace or computer
resources (Parker 1983). As such, simple propaganda on the Internet claiming that there will be bomb attacks during the holidays can
be considered cyberterrorism. A cyberterrorist is someone who intimidates or coerces a government or organization to
advance his or her political or social objectives by launching computer-based attacks against computers, networks, and the
information stored on them. Some time back, NASSCOM estimated that $7-8 billion worth of business may not have come to
India because of security concerns among clients.
2.1 Dealing with Cyber- Terrorists
A few salient issues need to be analyzed in this context: How secure are we in this information age?
How do we combat a computer crime or catch a computer criminal?
No universally accepted mechanism has yet been established for handling such crimes and situations in terms of
searching, seizing and analyzing the digital evidence, and finally bringing the computer criminals so identified to book.
However there can be two forms of defense: passive and active defense.
Passive defense is essentially target hardening [2]. It largely consists of the use of various technologies and products (for
example, firewalls, cryptography, intrusion detection) and procedures (for example, those governing outside dial-in etc) to
protect the information technology (IT) assets owned or operated by an individual or organization. Some forms of passive
defense may be dynamic, such as stopping an attack in progress, but by definition, passive defense does not impose serious
risk or penalty on the attacker.
Active defense by definition imposes serious risk or penalty on the attacker. Risk or penalty may include identification and
exposure, investigation and prosecution, or preemptive or counter attacks.
3. VALIDY TECHNOLOGY
Validy Technology (VT) is a system which protects software against piracy and ensures software integrity. It uses a
combination of software compilation techniques and a small, secure hardware device called a token.
Fig. 1: A co-processor inside the secure token executes the protected parts of the program.
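Validy's actual protection executes translated instructions inside the token's co-processor; as a loose sketch of the software-integrity idea only (the key, names and messages below are all hypothetical), a secret held "inside the token" can authenticate code before it is allowed to run:

```python
import hashlib
import hmac

# Hypothetical secret: in a real token the key never leaves the hardware,
# and the token executes instructions rather than merely checking a tag.
TOKEN_KEY = b"key-held-inside-the-token"

def sign(code: bytes) -> bytes:
    """Tag a code image with an HMAC-SHA256 keyed by the token secret."""
    return hmac.new(TOKEN_KEY, code, hashlib.sha256).digest()

def verify(code: bytes, tag: bytes) -> bool:
    """Accept the code only if its tag matches (constant-time compare)."""
    return hmac.compare_digest(sign(code), tag)

tag = sign(b"print('hello')")
ok = verify(b"print('hello')", tag)        # untampered code passes
tampered = verify(b"print('pwned')", tag)  # modified code is rejected
```

The design point this illustrates is that integrity rests on a secret the attacker cannot read, which is exactly what moving it into a hardware token provides.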
4. MONITORING AND SENSORS
It is understood that the terrorists in the Mumbai carnage extensively used VoIP communication and internet phone facilities for getting
directions from their masters. Such a facility could have been easily denied if we had had in place measures to monitor such
communication, along with the capability to deny this unauthorized connectivity. In this context it is relevant to note that
the Iranian government has developed web-spying services with the assistance of European telecommunication companies.
This is one of the world's most sophisticated mechanisms for controlling and censoring the internet, allowing it to examine
the content of individual online communications on a massive scale. This practice is called "Deep Packet Inspection", which
enables authorities not only to block communications but also to monitor them to gather information about individuals, as
well as to alter them for disinformation purposes. Such systems are required to be in place for use when required in the national
interest.
5. NETWORK FORENSICS
Network forensics is basically about monitoring network traffic, determining whether there is an anomaly in the traffic and
whether the anomaly could be an attack. If it is an attack, the nature of the attack is also determined. Important aspects include
traffic capture, preservation, analysis and visualization of the results. An incident response must be invoked depending upon
the results.
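A minimal sketch of the "determine if there is an anomaly" step, assuming nothing more than per-interval byte counts and a simple z-score test (the threshold and traffic figures are illustrative, not from any production tool):

```python
import statistics

def find_anomalies(samples, threshold=2.0):
    """Flag interval byte counts whose z-score exceeds the threshold."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [i for i, s in enumerate(samples)
            if abs(s - mean) / stdev > threshold]

# Bytes captured per interval; the spike at index 4 stands out.
traffic = [1200, 1180, 1250, 1190, 9800, 1210]
suspect = find_anomalies(traffic)  # -> [4]
```

Real forensic pipelines add the other aspects listed above (capture, preservation, visualization); this only shows how a statistical baseline separates "normal" from "worth investigating".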
6. TERRORIST INFORMATION AWARENESS
This system has been designed in US to intercept and analyze voluminous data and extract only that information which is
pertinent to law enforcement and useful to agencies such as Information Awareness Office (IAO), National Security Agency
(NSA) and the Federal Bureau of Investigation (FBI)[3]. Aim is to detect, classify and identify potential terrorists and
preempt their nefarious designs and hostile actions. This is done through software programmes for aggregation and
automated analysis of the data.
At the heart of ‘Terrorism Information Awareness’ is the conviction that by searching a vast range of databases, it will be
possible to identify terrorists even before they strike. The capability to locate enemy before he can succeed in his plot to
create chaos or mayhem is the acme of intelligence.
There is an abundance of data related to terrorist and other unlawful activities on the internet. It is all but impossible for humans
to manually search, sift and study it. In India, every fourth day you get a warning of a terrorist attack which never comes about.
The terrorists strike when the public and the security personnel get used to the "routine" and lower their guard. They then
strike at a place and time of their own choosing.
Only technology can obviate this human folly. Programmes are available which will sift through vast amounts of intercepted
internet traffic and identify and report to investigating agencies traffic considered interesting, whether through visits to websites
or communication with suspicious individuals or groups. Keywords are a special focus of the search.
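The keyword-sifting step can be sketched as a trivial watch-list filter (the watch-list and messages are invented for illustration; real systems use far richer analysis than exact word matching):

```python
# Hypothetical watch-list of terms an analyst might flag.
WATCHLIST = {"detonate", "consignment", "safehouse"}

def flag_messages(messages, watchlist=WATCHLIST):
    """Return (index, matched terms) for messages hitting the watch-list."""
    hits = []
    for i, text in enumerate(messages):
        matched = set(text.lower().split()) & watchlist
        if matched:
            hits.append((i, sorted(matched)))
    return hits

hits = flag_messages(["meet at the safehouse", "happy birthday"])
# -> [(0, ['safehouse'])]
```

Only the flagged items would then be routed to investigating agencies, which is the filtering role the paragraph above describes.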
Technological challenges include correlating and integrating information derived from heterogeneous data sources,
developing signal-detection algorithms, and ensuring privacy protection while correlating widely differing data and
sources. Some of the software programmes used for this purpose include 'Echelon' and 'CIPAV'.
6.1 CIPAV
It is software developed by the FBI, USA, to track and gather location data on suspects under electronic surveillance. The
software operates on the target computer much like spyware, while the operator is unaware that it has been installed and is
monitoring and reporting on their activities [4]. The CIPAV captures location related information such as the IP address,
running programmes, operating system and installed version information. Once the initial data is collected, the CIPAV slips
into the background and silently monitors all outbound communication, logging every IP address to which the computer
connects, with time and date.
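The logging behaviour described above, one timestamped entry per outbound destination IP, can be mimicked with a small ledger. This is a generic sketch of connection logging, not the FBI's actual tool; the record format is an assumption:

```python
from datetime import datetime, timezone

class ConnectionLog:
    """Minimal sketch of an outbound-connection ledger: one timestamped
    entry per destination IP (illustrative only, not the real CIPAV)."""
    def __init__(self):
        self.entries = []

    def record(self, dest_ip, when=None):
        # Store the destination and an ISO-8601 UTC timestamp.
        when = when or datetime.now(timezone.utc)
        self.entries.append((dest_ip, when.isoformat()))

log = ConnectionLog()
log.record("203.0.113.7", datetime(2010, 1, 5, 12, 0, tzinfo=timezone.utc))
log.record("198.51.100.2", datetime(2010, 1, 5, 12, 5, tzinfo=timezone.utc))
print(len(log.entries))  # 2 logged connections, each with a timestamp
```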
7. NARUS INSIGHT –PENETRATING THE INTERNET
It is a highly versatile cyber surveillance system that can penetrate the internet. It can track individual users and web
browsers, monitor e-mail contents and instant message conversations, and see how users’ activities are connected to each
other, e.g. by compiling lists of people who visit a certain website or use certain specific words or phrases in their e-mails.
Other capabilities include playback of VoIP and rendering of web pages.
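Compiling lists of people who visit a certain website, as described above, amounts to inverting a stream of (user, site) events into a site-to-visitors index. A minimal sketch, where the event format is an assumption:

```python
from collections import defaultdict

def visitors_by_site(events):
    """Invert (user, site) click events into a site -> set-of-users index."""
    index = defaultdict(set)
    for user, site in events:
        index[site].add(user)
    return index

events = [
    ("u1", "example.org"), ("u2", "example.org"), ("u1", "news.example"),
]
index = visitors_by_site(events)
print(sorted(index["example.org"]))  # ['u1', 'u2']
```

The same inverted index, keyed on keywords instead of sites, supports the phrase-based correlation mentioned in the text.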
8. DEDICATED AND SPECIAL SURVEILLANCE NETWORK
There is a need to set up a dedicated and special surveillance network. All strategic areas, critical infrastructure, sensitive data
banks and important cyber sections in the country should be brought under a special electronic surveillance network with
constant electronic as well as manual monitoring. The Kargil war of 1999 reminds us how devastating the result of a lack of
surveillance in sensitive areas can be. High altitude balloons providing lighter-than-air surveillance for law enforcement,
border security and facilities protection can be used. The heights of such balloons are generally 20-30 km. This is above the
air space ceiling of 20 km and thus will not interfere with air traffic. A balloon at a height of 20 km will have a footprint
diameter of about 800 km.
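The quoted footprint can be checked with spherical geometry. For a platform at height h, the coverage edge subtends a central angle ψ = arccos(R cos ε / (R + h)) − ε, where ε is the minimum usable elevation angle (ε = 0 gives the pure geometric horizon). A sketch of this check; the 1° minimum elevation is an assumption chosen to illustrate how the ~800 km figure can arise:

```python
import math

R = 6371.0  # mean Earth radius, km

def footprint_diameter_km(h_km, min_elev_deg=0.0):
    """Ground-footprint diameter for a platform at height h_km, limited
    by a minimum elevation angle (0 deg = pure geometric horizon)."""
    eps = math.radians(min_elev_deg)
    # Central angle from the sub-balloon point to the edge of coverage.
    psi = math.acos(R * math.cos(eps) / (R + h_km)) - eps
    return 2.0 * R * psi

print(round(footprint_diameter_km(20.0)))       # geometric horizon: ~1008 km
print(round(footprint_diameter_km(20.0, 1.0)))  # ~810 km at 1 deg min elevation
```

So the 800 km figure is roughly consistent with requiring about a degree of elevation above the horizon for a usable line of sight.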
A special surveillance network comprising high altitude balloons, surveillance satellites and land based surveillance
systems is required to be deployed to ensure constant, round the clock monitoring of sensitive border areas, the long
coastline and inland sensitive areas of the country.
The countrywide surveillance network evolved could be a hybrid system with linkage through ground based, satellite
based and high altitude balloon based systems.
9. AERIAL SURVEILLANCE
Aerial surveillance includes visual imagery or video from an airborne vehicle using digital imaging technology, miniaturized
computers, forward-looking infrared and high resolution imagery capable of identifying objects at extremely long distances.
Unmanned Aerial Vehicles (UAVs) and Micro Aerial Vehicles (MAVs) are being used for border surveillance, terrorist
shadowing and carrying weapons to kill enemy combatants. MAVs are capable of vertical take-off and landing.
Other technologies that merit cognizance are real-time image and video capture from UAVs; multisensor fusion, Ground
Image Exploitation System (GIES) and enhanced vision for situational awareness.
10. INTEGRATED GIS
India has its first integrated GIS and Image Processing Software (IGIS), developed by Scanpoint Geomatics, a software
development firm, in partnership with the Indian Space Research Organization (ISRO) [5]. The system encompasses a
geographical information system (GIS), image processing and their integration with real time information using the global
positioning system (GPS). The satellite and aircraft sensors are able to penetrate cloud cover, detect chemical traces and
identify objects in buildings and underground bunkers. These can provide real-time video at much higher resolutions than the
still images produced by programmes such as Google Earth.
In the US, the Communication Assistance for Law Enforcement Act (CALEA) requires that all telephone and VoIP
communications be available for real-time wire tapping by federal law enforcement and intelligence agencies. Two major
telecom companies in the US (AT&T and Verizon) have arrangements with the FBI requiring them to keep their phone call
records searchable and accessible by federal agencies.
11. ROLE OF BROADCASTING TO COUNTER TERRORISM
In the event of a terrorist attack, there may often be a requirement for urgent communication, sometimes interactive
communication, with the public in general. For such urgent mass communication there is no substitute for broadcasting.
Interactive Mobile TV (MTV) broadcasting is on the anvil for launch in the country. The system permits broadband
operation allowing delivery of streaming video signals, which will make MTV one of the best mass communication
networks in case of emergencies. Further, a communication control system can be devised by which, in the event of an
emergency arising out of a terrorist strike or disaster, an urgent public message can be instantly disseminated to the masses
countrywide using the broadcasting network.
12. ROLE OF R&D TO COUNTER TERRORISM
Telecommunications and tele-control using the internet, satellites, the GPS system and computer technology are the common
features of today’s Electronics, Information and Communication Technology (EICT). Exactly the same facilities are being
used by ‘negative forces’ for terrorism and destructive activities throughout the world. Terrorism is directly linked
with the safety and security of the country. Therefore the country’s highest intelligence analysis wings, like RAW and the
FBI, should have one of the strongest R&D wings, particularly in the field of ICT. Apart from routine operational activities
like monitoring, interception and intelligence analysis, the core R&D group should be engaged in finding out the latest
developments in EICT around the world and developing the required deterrents against all possible anti-development usage
of those new EICT developments. The R&D must be taken up on priority and results delivered in an objective manner,
without getting lost and mixed up in the usual daily routine and bureaucratic processes of govt. departments.
13. COMMAND & CONTROL SERVER SYSTEM TO COUNTER TERRORISM
It should be appreciated that any operation, whether conventional or counter-terrorism, involves a multi-layered hierarchy
for an effective command and control chain.
The situation is slightly more critical in the case of counter-terrorism, since it may involve different agencies - the armed
forces (Army, Navy, Air Force), special forces (NSG, BSF or CISF) and of course the state police.
In such a situation, access to information and its dissemination has to be based on a defined operational doctrine which
caters for various contingencies and the role of each of these agencies in such situations. The flow of information, and
access to it, should of course be on a need-to-know basis.
14. IDEAL SETUP OF ELECTRONICS, INFORMATION AND COMMUNICATION TECHNOLOGY (EICT)
What then is the EICT setup that will help us combat terrorism effectively? Some of the points are as under:-
i) Centralized Command And Control:
It has been observed that a lack of synergy among the different forces tackling an emergency can result in delay in
eliminating terrorists when they strike. In such situations, when different agencies work in isolation, it leads to
under-utilization of available capability.
ii) Involvement of law enforcement agencies :
Measures at individual level are not enough to prevent cyber crimes. Global law enforcement involving agencies like
INTERPOL through enhanced information sharing and connectivity is required.
iii) Communications:
It is necessary to have state of the art communication systems for providing speedy response and interactivity in handling
adverse situations to ensure safety of life and property.
iv) Intelligence Setup:
A number of intelligence agencies are at work, but very often in isolation. There is a need for synergy among all intelligence
gathering and disseminating agencies. After Kargil in 1999, a comprehensive review of the intelligence setup in the country
was undertaken. This review pointed to the gaping holes that existed in the country’s intelligence setup [6]. Several
recommendations to improve the system, including upgradation of technical imaging, signal and electronic
counter-intelligence capabilities and reforms in human intelligence gathering, were made. The report was accepted by the
Group of Ministers (GoM) in Feb 2001. However, it is unfortunate that its recommendations remain largely unimplemented.
15. CHALLENGES FOR EICT PROFESSIONALS
As we see a total convergence between electronics, telecommunication and IT the challenges enumerated below are
applicable to all EICT professionals.
i) Cyber Space
Our websites, e-business, e-commerce and online banking are always vulnerable to cyber space attack. Developing tools
guaranteeing security for computer networks, web content etc. should be the topmost priority of EICT professionals, so as
to frustrate the ever eager hacker.
ii) Sat Phones
Developing the ability to locate hostile satellite phones and to record conversations on sat phones, even when the
conversation is encrypted, is a challenge EICT professionals need to take on.
iii) Neutralizing Remotely Operated Explosive Devices
The remotes need to be neutralized at the right time and in the right sequence by generating optimum RF power in the
predetermined frequency band, identified through intelligence gathering. It should cover the entire frequency band in which
walkie-talkie sets generally operate, i.e. 27 MHz, 138 MHz to 172 MHz and 350-400 MHz [7].
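A jammer sweeping the bands listed above needs a quick membership test for an intercepted carrier frequency. A minimal sketch using the bands quoted in the text; treating the 27 MHz spot frequency as a narrow 26.5-27.5 MHz band is an assumption for illustration:

```python
# Walkie-talkie bands quoted in the text (MHz): 27 (modelled here as a
# narrow 26.5-27.5 band, an assumption), 138-172 and 350-400.
JAM_BANDS_MHZ = [(26.5, 27.5), (138.0, 172.0), (350.0, 400.0)]

def in_jam_band(freq_mhz):
    """True if the carrier frequency falls inside a band to be jammed."""
    return any(lo <= freq_mhz <= hi for lo, hi in JAM_BANDS_MHZ)

print(in_jam_band(145.8))  # True  (inside 138-172 MHz)
print(in_jam_band(300.0))  # False (between bands)
```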
iv) Electronics Warfare
This involves actions to use our own electromagnetic spectrum effectively and to deny the use of the electromagnetic
spectrum to terrorist organizations. This may include surveillance through SIGINT (signals intelligence) and COMINT
(communications intelligence), and the use of jamming, snooping, sniffing etc.
v) IP Phones
Internet telephony should be seriously looked into, ‘Skype’ and other freely available voice and video IP services being
favourites among terrorists. The ability to tap conversations on these services over Indian territory needs to be developed.
Alternatively, selective jamming of these virtual circuits can be considered.
vi) IP Cameras
Monitoring and surveillance of entry and escape routes at airports, sea ports, railway stations and bus termini can be effected
with IP devices connected to the net. Real time data can be auto-analyzed for operational intelligence. The technology can be
extended to important buildings, energy centers, hospitals and education centers [8].
CONCLUSION
Electronics, Computers, Information and Communication Technologies (EICT) have made the transfer of information and
ideas almost instantaneous. Because of the internet, any information residing on any computer in any corner of the world is
just a click away. The same advances in technology are being used by the world of terrorism to spread its nefarious
activities aimed at the destruction of mankind and its creations.
We have a long way to go if this menace of terrorism is to be wiped off the earth. Technology has to mesh with courage.
The latter we have in abundance among our defense forces; it is the former they seek from us. Let the scientists and
technocrats accept the challenge.
Further, many agencies in our country work in isolation, without sharing information with each other. This leads to
sub-optimum utilization of available resources and, in some cases, wasted effort. The activities of these agencies need to be
coordinated effectively, with a proper command and control mechanism in place, so as to maintain a 24×7 vigil. The aim is to
preempt the designs of terrorists so that terrorist activities are prevented before they occur, and to deal firmly with any
untoward incident once it happens. The Govt. should act on framing an action plan to exploit all the benefits of EICT in
fighting terrorism and to ensure the maximum possible safety and security of the country - its people, its critical
infrastructure and other national assets.
REFERENCES
[1] Business Line news article dated 13 Feb (2009).
[2] Indrash Babbar, Fundamentals of Radar, Sonar and Navigation, Faridabad: Manav Rachna Publishing Society (2008).
[3] Information Awareness Office, USA, website.
[4] “Technology vital to counter terror: PM”, Technology News, 22 Dec (2008). http://infowar.net
[5] www.hcltechnologies.html
[6] http://www.securityfocus.com
[7] http://www.snoopes.com
[8] http://hoaxbusters.ciac.org
ANALYSIS OF m = -1 LOW FREQUENCY BOUNDED WHISTLER
MODES
Dr. B. B. Sahu*
Assistant professor, Department of Applied Sciences and Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
Dr. K. Maharana
Professor, Department of Physics
Utkal University, Bhubaneswar, Orissa, India
Email: [email protected]
Dr. S. K. Gupta
Professor and HOD, Department of Applied Sciences and Humanities
Dronacharya College of Engineering, Gurgaon-123506, India
Email: [email protected]
ABSTRACT
This paper presents the dispersion relation for a cold, collisionless and uniform plasma column loaded in a conducting waveguide. This is then solved
numerically for the m = − 1 whistler mode at a frequency of 13.56 MHz for various plasma parameters. The wave dispersion characteristics and the
structures of the wave field patterns have been studied. It has been seen that the wave magnetic fields are much stronger than the wave electric fields
in this frequency range.
Keywords: Plasma sources, bounded whistler modes, dispersion relation, etc.
1. INTRODUCTION
The main motivation for our research on plasma is to advance the science of low-temperature plasma physics, particularly as
applied to commercial applications. Emphasis has been on the plasma sources used for this purpose: how they work and
how they can be improved. High-density plasma sources, which are important for plasma-based technologies, are attracting
considerable attention at present. One of the most promising sources is the low frequency whistler wave discharge, which has
the virtue of high ionization efficiency. Much of the present work in this field is aimed at understanding the properties of
these plasma sources so that devices with industrial applications can be designed with greater control over the plasma
characteristics. Bounded whistler sources have commanded most of our attention because they convert RF energy to plasma
density more efficiently than other sources and are the sources of the future. Unlike other RF sources, these devices
work on the principle of wave generation in plasma.
1.1 The low frequency whistler waves that propagate inside a cylindrical plasma column were initially discovered by
Thonemann and Lehane [1] and were first used for high-density plasma production by Boswell [2] in 1970. Extensive studies
in this area of plasma physics followed, and current work addresses industrial applications of gaseous plasmas produced by
low frequency bounded whistler waves. Klozenberg et al [3] initially carried out a detailed theoretical treatment of bounded
whistler waves. Their work, which has become known as the KMT theory, derives the dispersion relation of bounded whistler
waves in uniform plasma with a vacuum boundary. Using numerical methods they obtained dispersion relations and radial
wave fields for the m = 0, ± 1 mode bounded whistler waves. Chen [4] derived the dispersion relation for bounded whistler
waves in uniform, bounded plasma. Apart from the theoretical work, considerable progress has been made in the design of
excitation coils and in establishing the nature of the bounded whistler waves. Detailed characterization of the radial and axial
profiles of the discharge has been carried out with respect to the different azimuthal modes of the bounded whistler waves,
the applied magnetic field, gas pressure, different coil designs, etc. [4-10].
*Corresponding Author
1.2 For efficient RF-power coupling to the plasma through a suitable antenna, the field produced by the antenna should
match those of the bounded whistler wave modes in the plasma. To understand the nature of the modes, a theory has been
developed for plasma loaded inside a conducting waveguide. The theory involves solving the field equations and matching
the boundary conditions. In this paper a simple plasma loaded waveguide is considered.
1.3 The paper has four sections and is organised as follows. Section 2 describes the theory and the dispersion relation.
Detailed numerical computations and results, which include the wave dispersion characteristics and the bounded whistler
wave field structures, are presented in Section 3. The theory has an experimental basis; the relationship of the theory with
experiment is given in Section 4 as the concluding section.
2. THEORY
2.1 Wave fields
The physical picture of the plasma loaded waveguide system is illustrated in Fig. 1. We consider an infinitely long cylindrical
plasma column of radius rp placed coaxially inside an infinitely long waveguide of radius rw. The plasma is assumed to be
cold, collisionless and homogeneous. A uniform magnetic field B0 is applied along the axis of the system, i.e., the z-axis. In
Fig. 1 there are two distinct regions, plasma and vacuum, and we have to specify the wave electric and magnetic fields for
each separately. The general solution of the wave equation for cold plasma in cylindrical geometry is given in Franklin [11].
Only the axial field components are given below; the other field components can all be expressed in terms of these.
2.1.1 Plasma fields: Region I (0 < r < rp)
We consider propagation of small amplitude waves in this system, such that all amplitudes vary as ≈ exp[−i(ωt − kz + mφ)],
where m, the azimuthal mode number, is an integer. The general solution for the axial components of the electric and
magnetic fields may be written [4] as

Ez^I = A1 Jm(γ1 r) + A2 Jm(γ2 r)
                                                                        (1)
Hz^I = A1 h1 Jm(γ1 r) + A2 h2 Jm(γ2 r)

The expressions for Ez^I and Hz^I are obtained by solving the wave equation derived from Maxwell’s equations inside the
plasma region. The quantities h1 and h2 are related to γ1 and γ2, the two perpendicular propagation constants (radial wave
numbers). For simplicity the field-amplitude variation ≈ exp[−i(ωt − kz + mφ)] is not written along with the above field
expressions. The superscript ‘I’ represents region I. A1 and A2 are two arbitrary constants; Jm is the Bessel function of the
first kind and order m. Solutions of the form Ym are not allowed as these have a singularity at r = 0 (on axis).
Fig. 1: Plasma column loaded inside a cylindrical waveguide.
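The axial field profile of Eq. (1) can be evaluated numerically without special libraries: for integer m, Jm(x) = (1/2π)∫₀^{2π} cos(mθ − x sinθ) dθ, and the trapezoidal rule is spectrally accurate on this periodic integrand. The following sketch evaluates Eq. (1); the amplitudes and radial wave numbers in the example are arbitrary illustrative values, not the paper's computed roots:

```python
import math

def bessel_j(m, x, n=400):
    """Integer-order Bessel J_m(x) via its integral representation,
    J_m(x) = (1/2pi) * integral_0^2pi cos(m*t - x*sin(t)) dt.
    The trapezoidal rule converges spectrally on this periodic integrand."""
    total = sum(math.cos(m * (2 * math.pi * i / n) - x * math.sin(2 * math.pi * i / n))
                for i in range(n))
    return total / n

def ez_profile(r, A1, A2, g1, g2, m=-1):
    """Axial electric field E_z(r) = A1*J_m(g1*r) + A2*J_m(g2*r), as in Eq. (1)."""
    return A1 * bessel_j(m, g1 * r) + A2 * bessel_j(m, g2 * r)

# Sanity checks against known Bessel values:
print(round(bessel_j(0, 0.0), 6))                  # 1.0, since J_0(0) = 1
print(abs(bessel_j(0, 2.404825557695773)) < 1e-9)  # True: first zero of J_0
```

The vacuum fields of Eq. (2) can be handled the same way using integral representations of Im and Km.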
2.1.2 Vacuum fields: Region II (rp < r < rw)
It is straightforward to write out the vacuum fields [12], with field amplitudes varying as ≈ exp[−i(ωt − kz + mφ)]. Since we
are looking for slow waves (vph < c), we use the modified Bessel function representation for these fields:

Ez^II = A3 Im(α r) + A4 Km(α r)
                                                                        (2)
Hz^II = A5 Im(α r) + A6 Km(α r)

where the superscript ‘II’ represents region II. A3, A4, A5 and A6 are arbitrary constants; Im and Km are the modified Bessel
functions of the first and second kind, of order m; α² = k² − kv², where k is the axial propagation constant and kv the
propagation constant in vacuum. The other field components in this region may be expressed [4, 12, 19] in terms of Ez^II and
Hz^II.
Here, we have six arbitrary constants, A1 through A6. These can be eliminated by applying the appropriate boundary
conditions. The relevant boundary conditions are discussed in the following sections.
2.2 Boundary conditions
1. r = rw (waveguide wall)
At the metal boundary the tangential components of the wave electric field E must vanish. This leads to the boundary
conditions

(i) Eφ^II = 0,   (ii) Ez^II = 0.                                        (3)

2. r = rp (plasma-vacuum interface)
Across the plasma-vacuum interface the tangential components of the electric and magnetic fields must be continuous. This
gives

(iii) Ez^I = Ez^II,   (iv) Eφ^I = Eφ^II,   (v) Hz^I = Hz^II,   (vi) Hφ^I = Hφ^II.        (4)
2.3 Dispersion Relation
On applying the above boundary conditions we get six homogeneous equations. This system of equations can be written in
matrix form as

[B] [A] = [0]                                                           (5)

where

        | 0    0    B13  B14  0    0   |
        | 0    0    0    0    B25  B26 |
[B] =   | B31  B32  B33  B34  0    0   | ,   [A] = [A1  A2  A3  A4  A5  A6]^T
        | B41  B42  B43  B44  B45  B46 |
        | B51  B52  0    0    B55  B56 |
        | B61  B62  B63  B64  B65  B66 |

The dispersion relation for this case is obtained from the condition for a non-trivial solution of the homogeneous set of
equations (5). Here, the matrix elements Bij are the coefficients of the arbitrary constants obtained on applying the boundary
conditions; they are listed after Eq. (8).
There will be a non-trivial solution to the system defined by Eq. (5) if and only if

Det [B] = 0                                                             (6)

where ‘Det’ represents the determinant. One can evaluate Det [B] to solve for the arbitrary constants and the wave
parameter k. In order to do so we have to use the equations relating the axial wave number k to the perpendicular wave
numbers γ1, γ2 in the plasma. These are given by Paoloni [17] as follows:

k² = −(1/2)(S/P + 1)[γ1² − 2kv²PS/(P + S)]
     ± (1/2) sgn(1 − ω/ωci) sgn(1 − ω/ωce) √{(S/P + 1)²[γ1² − 2kv²PS/(P + S)]² − 4(S/P)(γ1² − kv²P)(γ1² − kv²RL/S)}   (7)

where the +ve (−ve) sign before the square root corresponds to fast (slow) waves, and

(γ1 γ2)² = (P/S)(kv⁴ L R − 2k² kv² S + k⁴).                             (8)
Here, R, L and P are the components of the cold plasma dielectric tensor (as defined in Stix [18]) in rotating coordinates.
The non-zero matrix elements are:

B13 = Im(α rw);   B14 = Km(α rw);   B25 = Im′(α rw);   B26 = Km′(α rw);

B31 = Jm(γ1 rp);   B32 = Jm(γ2 rp);   B33 = Im(α rp);   B34 = Km(α rp);

B41 = [1/(2ρ²δ²)] {[kS²m/rp + i kv m U²/rp] Jm(γ1 rp) − (kU² + i kv S² h1) γ1 Jm′(γ1 rp)};

B42 = [1/(2ρ²δ²)] {[kS²m/rp + i kv m U²/rp] Jm(γ2 rp) − (kU² + i kv S² h2) γ2 Jm′(γ2 rp)};

B43 = [mk/(rp α²)] Im(α rp);   B44 = [mk/(rp α²)] Km(α rp);

B45 = −(i kv/α) Im′(α rp);   B46 = −(i kv/α) Km′(α rp);

B51 = h1 Jm(γ1 rp);   B52 = h2 Jm(γ2 rp);   B55 = −Im(α rp);   B56 = −Km(α rp);

B61 = [1/(2ρ²δ²)] {[kS²m h1/rp − i k²U²m/(kv rp)] Jm(γ1 rp) − (kU² h1 − i kv T²) γ1 Jm′(γ1 rp)};

B62 = [1/(2ρ²δ²)] {[kS²m h2/rp − i k²U²m/(kv rp)] Jm(γ2 rp) − (kU² h2 − i kv T²) γ2 Jm′(γ2 rp)};

B63 = (i kv/α) Im′(α rp);   B64 = (i kv/α) Km′(α rp);   B65 = [mk/(rp α²)] Im(α rp);   B66 = [mk/(rp α²)] Km(α rp).
3. NUMERICAL COMPUTATIONS AND RESULTS
Based on the above theory, a suitable code was developed on the Matlab platform for studying the nature of the modes inside
the waveguide. The dispersion relation, Eq. (6), was solved numerically at the RF frequency for the m = 0, ± 1 modes of the
system. The waveguide radius rw was set equal to 7.35 cm (a standard waveguide radius in most experimental systems).
Computations were carried out for argon plasma in the density range 10^11 - 10^12 cm^-3 at an RF frequency of 13.56 MHz
(the standard frequency of the RF sources used in most experiments). The axial magnetic field was varied up to 400 gauss.
It may be noted that the solution of the dispersion equation within the plasma will yield two roots corresponding to the ±
sign in Eq. (7). In the terminology of plasma waves, these correspond to the fast plasma waves (FPW) and the slow plasma
waves (SPW) respectively. The numerical computations show that only slow wave solutions exist; no fast wave solutions
were seen during the computations. It turns out that the SPW are true slow waves (vph < c).
3.1 Dispersion Curves
The computations show that the dispersion relation can yield roots (real k values) for:
i) γ1 real and γ2 imaginary: labelled Type-I roots (or simply I);
ii) γ1 imaginary and γ2 imaginary: labelled Type-II roots (or simply II);
iii) γ1 real and γ2 real: labelled Type-III roots (or simply III).
No Type-II roots were seen in the computations; the roots are basically Type-I and Type-III. It is seen that the SPW
correspond to the Type-I and Type-III roots, and to the lower sign for k² in Eq. (7).
During the computations for the SPW, Type-III roots were computed first for simplicity. There may be one or more such
roots, and their number increases with increasing magnetic field; the roots are closely spaced. One has to pick a root and
follow its dispersion with respect to the magnetic field for a given set of plasma parameters. It may be noted that the ion
cyclotron frequency is much smaller than the wave frequency, so only the electron motion is taken into account for the
dispersion plots. In all dispersion plots the wave frequency ω is less than the electron cyclotron frequency ωce.
An examination of the dispersion relation reveals that there are certain values of γ1 and γ2 for which the dispersion equation
is satisfied trivially. This happens, for instance, when either γ1 = 0 or γ2 = 0, or when γ1 = γ2. For γ1 = γ2, the root is
either Type-III or Type-I.
Fig. 2 shows the phase velocity as a function of the frequency (vph/c versus ω/ωce, vph = ω/k) at frequency 13.56 MHz for
the m = −1 mode and a plasma column radius rp = 6.0 cm. One can observe that the phase velocity of the waves increases
continuously with magnetic field. Its magnitude decreases gradually as the plasma density increases, which implies that the
dielectric constant increases with increasing density. For the Type-I roots the plots are of a similar nature to the Type-III
roots and are not shown. It has been seen that the magnitude of the phase velocity for a Type-III root is greater than that of a
Type-I root for a given set of plasma parameters. It is also observed that at very low magnetic fields the magnitude of the
phase velocity is very small, which indicates that the wave approaches the resonance region (vph → 0, i.e., index of
refraction c/vph → ∞), ω ≈ ωce. In this region it is difficult to find the root.
[Figure: vph/c versus ω/ωce for ne = 10^11, 5×10^11 and 10^12 cm^-3; m = −1 mode, Type-III root, f = 13.56 MHz.]
Fig. 2: Dispersion plots showing the variation of the phase velocity as a function of the frequency.
Fig. 3 shows the dependence of the wavelength λ on the applied magnetic field B0 for the m = −1 SPW modes. In the graph,
the three lines correspond to three different plasma densities. The wavelength varies linearly with B0, ranging between
20−400 cm for Type-III roots as the magnetic field changes from 20−400 gauss at a plasma density of 10^11 cm^-3 and a
frequency of 13.56 MHz. The corresponding wavelengths at the higher plasma density (10^12 cm^-3) vary between
6−140 cm for Type-III roots. It has also been observed from the computations that the wavelengths for Type-III roots are
greater than those for Type-I roots (not shown). This is because the radial propagation constant γ1 for the latter type of roots
is much larger than for the former type.
[Figure: wavelength (cm) versus B0 (gauss) for ne = 10^11, 5×10^11 and 10^12 cm^-3; m = −1 mode, Type-III root, f = 13.56 MHz.]
Fig. 3: Variation of the wavelengths with axial magnetic fields.
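The linear λ-B0 trend and the decrease of λ with density are consistent with the simple unbounded helicon estimate kz·k⊥ ≈ ω μ0 n e / B0, with k⊥ ≈ 3.83/rp for the lowest radial mode. The following sketch is only an order-of-magnitude cross-check under that assumption, not a solution of Eq. (6); with the paper's parameters it gives λ of the same order as the computed 102.76 cm quoted in Fig. 4:

```python
import math

# Simple (unbounded) helicon estimate k_z * k_perp ~ w * mu0 * n * e / B0,
# with k_perp ~ 3.83/rp for the lowest radial mode of a bounded column.
# Order-of-magnitude cross-check only, not the full Eq. (6) solution.
MU0, E = 4e-7 * math.pi, 1.602e-19

def helicon_wavelength_cm(n_m3, B0_T, f_hz=13.56e6, rp_m=0.06):
    k_perp = 3.83 / rp_m
    k_z = 2 * math.pi * f_hz * MU0 * n_m3 * E / (B0_T * k_perp)
    return 100.0 * 2 * math.pi / k_z  # axial wavelength in cm

lam_100G = helicon_wavelength_cm(1e17, 0.010)   # ne = 1e11 cm^-3, 100 gauss
lam_200G = helicon_wavelength_cm(1e17, 0.020)
print(round(lam_100G))             # ~234 cm: same order as the computed 102.76 cm
print(round(lam_200G / lam_100G))  # 2 -> wavelength scales linearly with B0
```

The estimate also reproduces the density trend: doubling n halves λ at fixed B0, in line with the three curves of Fig. 3.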
3.2 Wave Field Amplitudes
The radial plots of the normalised electromagnetic fields for the m = −1 mode have been made in the z = 0 plane for a
plasma column of radius rp = 6.0 cm. In the figures the plasma surface and the waveguide wall are also shown, along with
the other parameters and the type of root. For ease of comparison all the electromagnetic field components are plotted in the
same graph. Computations at a magnetic field of 100 gauss and a plasma density of 10^11 cm^-3 for the m = −1 mode at a
wave frequency of 13.56 MHz are shown in Fig. 4 for Type-III roots. Similar field profiles are given in Fig. 5 for the Type-I
roots. Important observations that can be made from the figures for the various modes of this system are given below.
3.2.1 The m = -1 mode
(I) Type-III Root:
From Fig. 4, one can generally make the following observations for the Type-III root:
i) the m = −1 mode is a guided mode of the system;
ii) the wave electric field components Ez, Eφ and Er are very weak compared to their corresponding magnetic field
components; Hz and Hφ are the dominant components. Hz vanishes on axis whereas Hφ and Hr peak on axis;
iii) away from the axis, Hr decays continuously for this mode and vanishes near the waveguide wall;
iv) both Hz and Hφ have significant values even beyond the plasma surface and up to the waveguide wall (i.e., in the
vacuum region).
[Figure 4 panels: radial profiles of (a) the electric field components Ez, Eφ, Er and (b) the magnetic field components Hz, Hφ, Hr
(F/Fmax versus r from 0 to 8 cm, with the plasma surface and waveguide wall marked); m = −1 mode, Type-III root (γ1 and γ2
both real), λ = 102.76 cm, B0 = 100 G, f = 13.56 MHz, ne = ni = 10^11 cm^-3, rp = 6.0 cm, rw = 7.35 cm, F = |E| or |H|.]
Fig. 4: Radial profiles of the normalized electric field components (a) and magnetic field components (b) for the m = −1 mode of
the plasma loaded waveguide system at 13.56 MHz, for the Type-III root. rp: plasma column radius; rw: waveguide radius.
(II) Type-I Root:
[Figure 5 panels: radial profiles of (a) the electric field components Ez, Eφ, Er and (b) the magnetic field components Hz, Hφ, Hr
(F/Fmax versus r from 0 to 8 cm, with the plasma surface and waveguide wall marked); m = −1 mode, Type-I root (γ1 real, γ2
imaginary), λ = 24.52 cm, B0 = 100 G, f = 13.56 MHz, ne = ni = 10^11 cm^-3, rp = 6.0 cm, rw = 7.35 cm, F = |E| or |H|.]
Fig. 5: Radial profiles of the normalized electric field components (a) and magnetic field components (b) for the m = −1 mode of
the plasma loaded waveguide system at 13.56 MHz, for the Type-I root. rp: plasma column radius; rw: waveguide radius.
Fig. 5 for the Type-I root (ω > ωce) shows the following for the plasma loaded waveguide system:
i) this is also a guided mode of the system;
ii) the wave electric field components Eφ and Er are much stronger than the corresponding electric field components of the
Type-III roots, but smaller than the magnetic field components. A significant electrostatic Er component exists inside the
plasma. It can be seen from Fig. 5(a) that Ez is out of phase with Er in the plasma; away from the axis, Ez is in phase
with Eφ.
iii) Hφ and Hr peak on the axis. All components Hz, Hφ and Hr oscillate with varying amplitude inside the plasma region, and the number of nodes within the plasma is larger than for the Type-III root. Hz and Hφ are out of phase with each other; both retain finite values even in the vacuum region. Hr decays outside the plasma and vanishes near the waveguide wall.
iv) The electrostatic components Eφ and Er peak on the axis; their phase changes continuously and their amplitudes decrease gradually away from the axis.
CONCLUSION
We have studied the dispersion characteristics and wave field structures of low-frequency whistler modes in a bounded, cold, collisionless and uniform plasma. The computations show that the wave magnetic field components are much stronger than the electric field components. This suggests that the RF coupling or antenna structure should be designed to produce magnetic fields; a current-carrying loop can therefore serve this purpose in antenna systems for plasma-production experiments using bounded whistler modes.
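As a rough aside on the loop-antenna suggestion, the on-axis magnetic field of a circular current loop follows from the standard Biot-Savart result B(z) = μ0 I a² / 2(a² + z²)^(3/2). The sketch below evaluates it in Python; the loop current (10 A) is a hypothetical illustration value, and matching the loop radius to the plasma radius rp = 6 cm from the figures is an assumption, not a design taken from the paper:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def loop_axis_field(current_a, radius_m, z_m=0.0):
    """On-axis magnetic field (Tesla) of a circular current loop
    at axial distance z_m from the loop centre (Biot-Savart result)."""
    a = radius_m
    return MU0 * current_a * a ** 2 / (2.0 * (a ** 2 + z_m ** 2) ** 1.5)

# Hypothetical 10 A current in a loop matched to rp = 6 cm.
B_center = loop_axis_field(10.0, 0.06)
print(f"B at loop centre = {B_center * 1e6:.1f} uT")  # ~104.7 uT
```

At the loop centre the expression reduces to μ0 I / 2a, so larger currents or smaller loops give proportionally stronger near fields, which is the property the conclusion relies on for magnetic coupling.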
REFERENCES:
[1] Lehane, A. and Thonemann, P. C., Proc. Phys. Soc., 85, 301 (1965)
[2] Boswell, R. W., Phys. Lett., 33, 457 (1970)
[3] Klozenberg, J., McNamara, B. and Thonemann, P., J. Fluid Mech., 21, 545 (1965)
[4] Chen, F. F., Plasma Phys. Control. Fusion, 33, 339 (1991)
[5] Chen, F. F., J. Vac. Sci. Technol. A, 10, 1389 (1992)
[6] Chen, F. F., IEEE Trans. Plasma Sci., 23, 20 (1995)
[7] Chen, F. F., Phys. Plasmas, 3, 1783 (1996)
[8] Conrads, H. and Schmidt, M., Plasma Sources Sci. Technol., 9, 441 (2000)
[9] Chen, F. F. and Blackwell, D. D., Phys. Rev. Lett., 82, 2677 (1999)
[10] Stevens, J. E., Sowa, M. J. and Cecchi, J. L., J. Vac. Sci. Technol. A, 13, 2476 (1995)
[11] Franklin, R. F., Plasma Phenomena in Gas Discharges, Clarendon Press, Oxford (1976)
[12] Collin, R. E., Foundations of Microwave Engineering, McGraw-Hill, New York (1966)
[13] Pierce, J. R., Traveling Wave Tubes, Van Nostrand, New York (1950)
[14] Sensiper, S., Proc. IRE, 43, 149 (1955)
[15] Kraus, J. D., Antennas, McGraw-Hill, New York (1950)
[16] Ganguli, A. and Baskaran, R., Plasma Phys. Control. Fusion, 29, 729 (1987)
[17] Paoloni, F. J., Phys. Fluids, 18, 640 (1975)
[18] Stix, T. H., Theory of Plasma Waves, McGraw-Hill, New York (1962)
[19] Ganguli, A., Sahu, B. B. and Tarey, R. D., Phys. Plasmas, 14, 113503 (2007)
ISSN: 0975-3389
Guidelines for Authors to publish a paper in
Dronacharya Research Journal
(Bi-annual Journal focusing on Engineering, Technology, Management and Applied Sciences)
The Journal welcomes original research papers from academicians, researchers, research supervisors, students, etc. Papers should present good-quality research and must reflect its practical applicability.
Articles should be sent by e-mail, and the authors of selected papers will be intimated by e-mail. The name of the author(s) should not appear anywhere else in the paper.
The contributors must adhere to the following format for the submission of research papers:
1. Font Type: Times New Roman
2. Title of Manuscript: Font Size-16, Alignment-Center, Style-Bold, fully capitalized
3. Abstract: continuous text without any paragraphs/sections, approximately 250 words, Font Size-8
4. Author's Name: Font Size-12, Alignment-Center, Style-Bold
5. Author's Affiliation: Font Size-8, Alignment-Center
6. Mark "*": for the corresponding author (in case of more than one author)
7. Paragraph Headings in Text: Font Size-12, Alignment-Left, Style-Bold, Upper Case
8. Keywords (title only): Font Size-8, Alignment-Justify, Style-Italic
9. Text: Font Size-10, Alignment-Justify
10. Sub-Paragraph Heading: Font Size-10, Alignment-Justify, Style-Bold
11. Figure Name & Number: Font Size-10, Alignment-Center, Style-Bold, placed below the figure
12. Table Name & Number: Font Size-10, Alignment-Center, Style-Bold, placed below the table
13. References: Font Size-12, Alignment-Justify, format: [s.no.] surname, name, name of journal, vol/issue, page no. (year of publication)
14. Web Address for Reference: Font Size-10, Alignment-Justify, Style-Bold, topic searched, complete web address
15. Line Spacing: 1.0
16. Margins: Top-0.7", Bottom-0.7", Left-1.25", Right-1.25"
COPYRIGHT FORM & AUTHORS’ DECLARATION
I/We hereby declare that the research paper entitled "________________________________________
____________________________________________________________________________________
" is my/our original and unpublished work. It is neither under consideration for publication by any other journal nor has it been presented at any seminar/conference/workshop, and I/we fully abide by this declaration. I/we will be wholly and solely responsible for any legal actions that may arise in future. The undersigned also accept that all responsibility for the contents of published papers rests upon the authors.
Author/Corresponding Author:
Name __________________________________________ Contact No. _______________________________
Correspondence Address ____________________________________________________________________________
E-mail _________________________________________________ Signature _________________________________
Author-I:
Name __________________________________________ Contact No. _______________________________
Correspondence Address ____________________________________________________________________________
E-mail _________________________________________________ Signature _________________________________
Author-II:
Name __________________________________________ Contact No. _______________________________
Correspondence Address ____________________________________________________________________________
E-mail _________________________________________________ Signature _________________________________
Author-III:
Name __________________________________________ Contact No. _______________________________
Correspondence Address ____________________________________________________________________________
E-mail _________________________________________________ Signature _________________________________
Signature and Stamp of Director/HOD/Supervisor
For further details contact:
Dean Academics
E-mail: [email protected]
Mob: 09999908250
Advisor (R&D)
E-mail: advisor.r&[email protected]
Mob: 09873453922