CSIC 2012 (October) - Computer Society Of India

₹ 50/ISSN 0970-647X | Volume No. 36 | Issue No. 7 | October 2012
www.csi-india.org
Cover Story
AGTAB Automatic
Guitar Tabulation 5
Cover Story
Machine Recognition of
Karnatic Ragas 8
Cover Story
Cyber Music & Digital
Rights Management 9
Article
WiTricity - The
Wireless Future 22
CIO Perspective
Managing Technology »
CIO to CEO: Only a Vowel Change? 29
Security Corner
Information Security »
Internet Censorship in India 31
ReaderSpeak()
The issue of CSIC on History of IT in India evoked strong responses. We quote some excerpts below.
We humbly accept the bouquets as well as brickbats. We highlight once again that our editorial
for the concerned month did point out that the topic is too vast to be covered in one issue. We
will be bringing out another issue on the theme. In addition we have launched a new column to
address this aspect: IT.Yesterday(). We are happy to note that the subject is of great interest to the
CSI community.
The CSIC issue on History of IT in India was lopsided.
While Indian computing could have its own origins in Kolkata,
the history is about how it progressed. From that point of
view the issue failed to broad-base the events. I am sure there
were stalwarts in other cities as well and they could have been contacted: F.C. Kohli, Prof. Rajaraman, Prof. Mahabala, and Prof. Narasimhan.
Could not resist writing about the book "Homi Bhabha and the
Computer Revolution” briefly reviewed in the CSI eNL of Dec
2011 as detailed below:
Homi Bhabha and the Computer Revolution: This book traces
the evolution and growth of the country’s computer industry from
the time of Bhabha – who had played a visionary role in the
development of diverse branches of science and technology in
India. N.R. Narayana Murthy, Nandan Nilekani, Sam Pitroda,
F.C. Kohli, and M.G.K. Menon and several leading scientists,
policymakers, and industry leaders address indigenous efforts in
telecom revolution and how computer and IT can bring about
positive changes in our lives. From computerization of services
and e-governance to the computer’s impact on biotechnology,
IT for power infrastructure, and transport—the essays address
wide-ranging issues of tremendous topical relevance. In the
Foreword, Ratan N. Tata, highlighting the country’s journey towards self-reliance in IT and telecom, states that this volume
will appeal to students and teachers of computer science and
telecommunications, IT and telecom industry professionals,
policymakers, and anybody interested in knowing more about the
India story. Edited by: R.K. Shyamasundar & M.A. Pai. Published
by: Oxford University Press. Hardback. Pages: 366+xxxiv. Price:
Rs.695/- More about the book at http://goo.gl/2Z9qg
No doubt that Kolkata played an important role in the History
of Indian Computing. Inputs from some of the authors who penned the various chapters would have given a broader history of computing
in India.
Every place is special and has a rich history; some have more. If you know of similar stories, do write them up. We will definitely enjoy them.
Candidly, I must confess that this is one of the best issues of CSI Communications and is considered to be a Collector’s Item. I find there are eight persons who have taken some time to recollect old memories (part of oral histories) in Kolkata, and nostalgic experiences of five former students while studying at Jadavpur University. Even an article of 1985 was published. I could see four
articles on IBM360, Fishnet, MINSK and IIT/K systems. People
remember the issue of Resonance published in May 2008 that
contains many articles by Prof P V S Rao, Prof S Ramani, Editorial
by Prof V Rajaraman and even one paper by Prof R Narasimhan
published in 1960. I know these issues have been read by many
and will be re-read by many in the years to come.
Congrats for bringing to the notice of CSI community about
Kolkata's role in development of Computing in India! However
I feel that the history of CSE in India is not complete without
discussing the immense contribution of Prof A K Chaudhuri. He
played a key role in development of India's first analog computer,
started CSE education in Calcutta Univ, started CSE department
in IIT KGP, etc. He supervised about 50 PhD theses and his
students are spread all across the globe in very eminent positions.
In all fairness a special issue of CSIC can be brought out
with papers by his students. I think this would make the story complete.
The last issue of CSIC mainly covered the history of computing in
Kolkata. Now that we have started with history, let's do it well. May I
request Prof. R.K Shyamsundar to write about TIFRAC in the next
issue. Someone from TIFR should do it. Not only the history of
computing, why should we not write about the early stalwarts
and their contributions, about persons like PVS Rao, Mathai Joseph, S. Ramani, B. Nag, HN Mahabala, D. Dutta Majumdar, V. Rajaraman and many others, and about the work done at TIFR, the IITs and IISc. The present generation should know about all these.
They should also know about the early leaders in IT industry. They
should know about the contribution of FC Kohli in developing IT
industry in India. Would someone from TCS do the necessary
research and contribute. At this juncture, I should also admit that
in the present form CSIC is really good.
Debasish has done a commendable job in depicting the history of the Indian IT scenario. In India many stalwarts have helped
computerization and CSI. Some of the names I can think of are
Prof R Narasimham, Maj Gen A Balasubramanian, F C Kohli,
H N Mahabala, H S Sonawala, B Nag, V Rajaraman, D Dutta
Majumder, P V Rao, etc. It is impossible to name all. I think all of
us and the new entrants in IT field will be interested to know the
contributions these persons have made.
CSI Communications
Contents
Volume No. 36 • Issue No. 7 • October 2012
Editorial Board
Chief Editor
Dr. R M Sonar
Editors
Dr. Debasish Jana
Dr. Achuthsankar Nair
Resident Editor
Mrs. Jayshree Dhere
Published by
Executive Secretary
Mr. Suchit Gogwekar
For Computer Society of India
Design, Print and
Dispatch by
CyberMedia Services Limited
Please note:
CSI Communications is published by Computer
Society of India, a non-profit organization.
Views and opinions expressed in the CSI
Communications are those of individual authors,
contributors and advertisers and they may
differ from policies and official statements of
CSI. These should not be construed as legal or
professional advice. The CSI, the publisher, the
editors and the contributors are not responsible
for any decisions taken by readers on the basis of
these views and opinions.
Although every care is being taken to ensure
genuineness of the writings in this publication,
CSI Communications does not attest to the
originality of the respective authors’ content.
© 2012 CSI. All rights reserved.
Instructors are permitted to photocopy isolated
articles for non-commercial classroom use
without fee. For any other copying, reprint or
republication, permission must be obtained
in writing from the Society. Copying for other
than personal use or internal reference, or of
articles or columns not owned by the Society
without explicit permission of the Society or the
copyright owner is strictly prohibited.
Cover Story
5 AGTAB Automatic Guitar Tabulation
Ahsan Salim, Rahulnath H A, Gautham M D and Namsheed K S
8 Machine Recognition of Karnatic Ragas
Satish Babu
9 Cyber Music & Digital Rights Management
Aakash Goyal
10 Digital Restoration of Analog Audio
Hareesh N Nampoothiri
Articles
13 Hadoop Mapreduce: Framework for Parallelism
Pratik Thanawala
15 Challenges of Software Reliability
Dr. Pramod Koparkar
19 Optimization of Customer Data Storage
Balasaheb Ware, Vinod Kumar Garg, and Gopal Ranjan
22 WiTricity - The Wireless Future
Hema Ramachandran and Bindu G R
25 Why Do We Need the COBIT 5 Business Framework?
Avinash Kadam
Practitioner Workbench
28 Programming.Tips() » Fun with C Programs
Wallace Jacob
CIO Perspective
29 Managing Technology » CIO to CEO: Only a Vowel Change?
S Ramanathan
Security Corner
30 Hacking: Illegal but Ethical?
V Rajendran
31 Information Security » Internet Censorship in India
Adv. Prashant Mali
32 IT Act 2000 » Prof. IT Law in Conversation with Mr. IT Executive - Digital Signatures
Mr. Subramaniam Vutha
HR
33 Job Analysis: A Tool for Skill Building
Dr. Manish Godse and Dr. Mahesh Deshmukh
PLUS
35 IT.Yesterday(): Early Years of IT in a Tiny South Indian Town
Sugathan R P and T Mahalakshmi
37 Brain Teaser
Dr. Debasish Jana
38 Ask an Expert
Hiral Vegda
39 Happenings@ICT: ICT News Briefs in September 2012
H R Mohan
42 CSI News
Published by Suchit Gogwekar for Computer Society of India at Unit No. 3, 4th Floor, Samruddhi Venture Park, MIDC, Andheri (E), Mumbai-400 093.
Tel. : 022-2926 1700 • Fax : 022-2830 2133 • Email : [email protected] Printed at GP Offset Pvt. Ltd., Mumbai 400 059.
CSI Communications | October 2012 | 1
Know Your CSI
Executive Committee (2012-13/14)
»
President
Mr. Satish Babu
[email protected]
Vice-President
Prof. S V Raghavan
[email protected]
Hon. Treasurer
Mr. V L Mehta
[email protected]
Immd. Past President
Mr. M D Agrawal
[email protected]
Hon. Secretary
Mr. S Ramanathan
[email protected]
Nomination Committee (2012-2013)
Dr. D D Sarma
Mr. Bipin V Mehta
Mr. Subimal Kundu
Region - I
Mr. R K Vyas
Delhi, Punjab, Haryana, Himachal
Pradesh, Jammu & Kashmir,
Uttar Pradesh, Uttaranchal and
other areas in Northern India.
[email protected]
Region - II
Prof. Dipti Prasad Mukherjee
Assam, Bihar, West Bengal,
North Eastern States
and other areas in
East & North East India
[email protected]
Region - III
Mr. Anil Srivastava
Gujarat, Madhya Pradesh,
Rajasthan and other areas
in Western India
[email protected]
Region - IV
Mr. Sanjeev Kumar
Jharkhand, Chattisgarh,
Orissa and other areas in
Central & South
Eastern India
[email protected]
Region - V
Prof. D B V Sarma
Karnataka and Andhra Pradesh
[email protected]
Region - VI
Mr. C G Sahasrabudhe
Maharashtra and Goa
[email protected]
Region - VII
Mr. Ramasamy S
Tamil Nadu, Pondicherry,
Andaman and Nicobar,
Kerala, Lakshadweep
[email protected]
Region - VIII
Mr. Pramit Makoday
International Members
[email protected]
Regional Vice-Presidents
Division Chairpersons, National Student Coordinator & Publication Committee Chairman
Division-I : Hardware (2011-13)
Dr. C R Chakravarthy
[email protected]
Division-II : Software (2012-14)
Dr. T V Gopal
[email protected]
Division-IV : Communications
(2012-14)
Mr. Sanjay Mohapatra
[email protected]
Division-V : Education and Research
(2011-13)
Prof. R P Soni
[email protected]
Division-III : Applications (2011-13)
Dr. Debesh Das
[email protected]
National Student Coordinator
Mr. Ranga Raj Gopal
Publication Committee
Chairman
Prof. R K Shyamsundar
Important links on CSI website »
Structure & Organisation
National, Regional &
State Students Coordinators
Statutory Committees
Collaborations
Join Now Renew Membership
Member Eligibility
Member Benefits
Subscription Fees
Forms Download
BABA Scheme
Publications
CSI Communications*
Adhyayan*
R & D Projects
Technical Papers
Tutorials
Course Curriculum
Training Program
(CSI Education Products)
eNewsletter*
Current Issue
Archives
Policy Guidelines
Events
President’s Desk
ExecCom Transacts
News & Announcements archive
http://www.csi-india.org/web/csi/structure
http://www.csi-india.org/web/csi/structure/nsc
http://www.csi-india.org/web/csi/statutory-committees
http://www.csi-india.org/web/csi/collaborations
http://www.csi-india.org/web/csi/join
http://www.csi-india.org/web/csi/renew
http://www.csi-india.org/web/csi/eligibility
http://www.csi-india.org/web/csi/benifits
http://www.csi-india.org/web/csi/subscription-fees
http://www.csi-india.org/web/csi/forms-download
http://www.csi-india.org/web/csi/baba-scheme
http://www.csi-india.org/web/csi/publications
http://www.csi-india.org/web/csi/info-center/communications
http://www.csi-india.org/web/csi/adhyayan
http://csi-india.org/web/csi/1204
http://csi-india.org/web/csi/technical-papers
http://csi-india.org/web/csi/tutorials
http://csi-india.org/web/csi/course-curriculum
http://csi-india.org/web/csi/training-programs
http://www.csi-india.org/web/csi/enewsletter
http://www.csi-india.org/web/csi/current-issue
http://www.csi-india.org/web/csi/archives
http://www.csi-india.org/web/csi/helpdesk
http://www.csi-india.org/web/csi/events1
http://www.csi-india.org/web/csi/infocenter/president-s-desk
http://www.csi-india.org/web/csi/execcom-transacts1
http://www.csi-india.org/web/csi/announcements
CSI Divisions and their respective web links
Division-Hardware
http://www.csi-india.org/web/csi/division1
Division Software
http://www.csi-india.org/web/csi/division2
Division Application
http://www.csi-india.org/web/csi/division3
Division Communications
http://www.csi-india.org/web/csi/division4
Division Education and Research http://www.csi-india.org/web/csi/division5
List of SIGs and their respective web links
SIG-Artificial Intelligence
http://www.csi-india.org/web/csi/csi-sig-ai
SIG-eGovernance
http://www.csi-india.org/web/csi/csi-sig-egov
SIG-FOSS
http://www.csi-india.org/web/csi/csi-sig-foss
SIG-Software Engineering
http://www.csi-india.org/web/csi/csi-sig-se
SIG-DATA
http://www.csi-india.org/web/csi/csi-sigdata
SIG-Distributed Systems
http://www.csi-india.org/web/csi/csi-sig-ds
SIG-Humane Computing
http://www.csi-india.org/web/csi/csi-sig-humane
SIG-Information Security
http://www.csi-india.org/web/csi/csi-sig-is
SIG-Web 2.0 and SNS
http://www.csi-india.org/web/csi/sig-web-2.0
SIG-BVIT
http://www.csi-india.org/web/csi/sig-bvit
SIG-WNs
http://www.csi-india.org/web/csi/sig-fwns
SIG-Green IT
http://www.csi-india.org/web/csi/sig-green-it
SIG-HPC
http://www.csi-india.org/web/csi/sig-hpc
SIG-TSSR
http://www.csi-india.org/web/csi/sig-tssr
Other Links Forums
http://www.csi-india.org/web/csi/discuss-share/forums
Blogs
http://www.csi-india.org/web/csi/discuss-share/blogs
Communities*
http://www.csi-india.org/web/csi/discuss-share/communities
CSI Chapters
http://www.csi-india.org/web/csi/chapters
Calendar of Events
http://www.csi-india.org/web/csi/csi-eventcalendar
* Access is for CSI members only.
Important Contact Details »
For queries, correspondence regarding Membership, contact [email protected]
President’s Message
Satish Babu
From
: [email protected]
Subject : President’s Desk
Date
: 1st October, 2012
Dear Members
One of the areas in computing which has witnessed explosive
growth in the last decade has been social media. Amongst
the different uses to which the Internet has been put, social
media is particularly noteworthy in that it enables ordinary
users to project their points of view in the global space. This
ability to express opinion has resulted in numerous and diverse
viewpoints being articulated publicly. As this has been done at
very minimal costs, we have to acknowledge that computers and
the Internet have democratized the ability of citizens to project
their viewpoints with unprecedented flexibility.
Hand-in-hand with democratization of information and media,
we have also seen concerns from different stakeholders on
the possible misuse of these technologies that could result in
undesirable impact in society. In particular, Governments have
been concerned—and perhaps legitimately so—about abuse of
technology to create potential disaffection between sections
of society. One of the ways in which Governments have tried
to combat the problem has been to frame laws that prevent
such occurrences. Given the difficulty in identifying the content
generators—many of whom are anonymous individuals dispersed
geographically—most of these laws target intermediaries such
as ISPs and other service providers who make content available.
While the intent of these laws is benign, there is a significant
risk that their net effect would be to curtail the freedoms of the
civil society stakeholders—including content generators as well
as readers—which would be a retrograde step for society as a
whole. After all, in today's world, it is exceedingly difficult to
visualize a vibrant society without free and unrestricted access
to the Internet!
In India, the original IT Act of 2000, together with its amendments
of 2008 and the Intermediary Rules of 2011, are also open to
the same criticism. Different studies, such as those by Software
Freedom Law Center, Delhi, and the Center for Internet and
Society, Bangalore, as well as some of the initial applications of
these laws in actual practice, give rise to apprehensions that
these laws are too open in their scope, and may lead to arbitrary
and unnecessary curtailing of rights to freedom of speech and
expression of citizens.
In particular, the Intermediary Rules of 2011 are targeted at firms
that provide intermediary services such as ISPs, search engines,
blog hosts, and DNS providers. Generally, these firms do not have
control over content generated by their users, and are therefore
protected from liability by the ‘safe harbor’ provision in the Act.
However, to avail of this protection, these firms are required to
take down any content deemed as objectionable by any party
through a take-down notice, within 36 hours of such a notice.
Unfortunately, such a provision not only transfers the onus of
protection of the citizen's freedom of speech to a firm (who may
or may not be committed to the freedom of their users), but it
is also likely that firms may not want to risk losing their ‘safe
harbor’ protection, and so will take down content at the flimsiest
of complaints. Intermediary firms also cannot be expected to
do a proper enquiry or study to see if the complaint is genuine
or flippant. Moreover, the take-down does not require any
intimation to the original creator of the content.
The net consequence of such a provision would be the taking
down of any content that is even mildly annoying to anyone.
Instead of promoting tolerance of dissenting opinion as is
customary in a democratic nation, this is likely to promote
intolerance of diversity of opinions.
A more nuanced approach may address some of these concerns.
For example, take-down should be necessary only in case of
content which is illegal as per laws which are in force and not based
on the whims of any complainant. A counter notice mechanism,
wherein the creator of the content is first notified before takedown, should be instituted. Guidelines on what constitutes a
valid take-down notice should be provided to intermediaries
so that they are not obliged to respond to vague or frivolous
demands. Penalties for frivolous notices would also be desirable
to reduce such activity.
Another suggestion is that while a take-down notice can have
immediate effect, it should also require the complainant to obtain
a court order within a stipulated time. If such an order is not
provided, the content should be allowed to be put back. Further,
public disclosure on the take-down requests, as provided by
Google and Facebook recently, would be useful. Finally, sharing
of private information of users by intermediaries should only
be based on safeguards as mandated by existing laws. These
and several other suggestions are now being articulated by
interested individuals and organizations to ensure that there is a
balance between the interests of all stakeholders.
As Abbie Hoffman pointed out, “You measure democracy by the freedom it gives its dissidents”; the ability to tolerate differing points of view is an important prerequisite for a democracy. The
Internet, which is a remarkably accessible and free space today,
should be maintained in the same way, with the least amount of
restraint that would be feasible, so as not to stifle voices of dissent.
With greetings
Satish Babu
President
Editorial
Rajendra M Sonar, Achuthsankar S Nair, Debasish Jana and Jayshree Dhere
Editors
Dear Fellow CSI Members,
The Shakespearean play, Twelfth Night, opens with the famous festive
sentiments: If music be the food of love, play on, Give me excess of it;
that surfeiting… The bard, if alive today, would have said: If music be the food of cyberspace, play on, download excess of it; that surfing... Imagine what the web would be minus graphics and music! The fact that music also boils down to zeroes and ones integrates it completely with the Web.
Music oozes through all ports of the cyberworld gizmos and this issue
of CSIC with the theme Cyber Music is in recognition of this state of
affairs. The area of cyber music is of course vast and we have only a
few, nevertheless, interesting articles in this issue. We hope to catch
up with this theme sometime again in future.
There are four articles related to the theme of the current issue, i.e. Cyber Music. The first article is by four young authors Ahsan Salim, Rahulnath H A, Gautham M D and Namsheed K S of Government Engineering College, Thiruvananthapuram. They
describe an innovation in interfacing a guitar with software to produce musical tablature. It is about automatic guitar tabulation
where authors discuss features of AGTAB, which was designed to take
as input, from the default sound driver, the notes played using a guitar
(connected to the computer through the line-in port or microphone
port of the sound card), identify all received frequencies and their
amplitude values and use this information to figure out the actual note
played. The second article, by Satish Babu, talks about ‘Machine Recognition of Karnatic Ragas’.
The author describes his early experimentation with a raga recognition engine consisting of a keyboard and a PC, which was presented to the public as early as 1995. The third article is about Digital Rights Management in the context of Cyber Music and is contributed by Prof Aakash Goyal from Kurukshetra. The fourth
article related to the theme is by Hareesh Nampoothiri, who has
written about ‘Digital Restoration of Analog Audio’. In this article he describes how the analog audio stored in audio cassettes can be digitally recorded and its sound quality restored to acceptable levels using modern computer tools such as the free software Audacity.
The Articles section this time comes with a number of rich articles on a variety of topics. Mr. Pratik Thanawala, a lecturer from Ahmedabad, has written an article titled ‘Hadoop Mapreduce: Framework for Parallelism’, where he discusses a technique for processing massive amounts of data in parallel on large clusters. There is another article by
Dr Pramod Koparkar, who has written about ‘Challenges of Software
Reliability’. Pramod explains the concept of reliability in a very lucid
manner and then slowly introduces the complexities involved in
mathematically computing reliability.
Two authors Balasaheb Ware and Gopal Ranjan from Datamatics
Global Services along with Vinod Kumar Garg have written an article on
‘Optimization of Customer Data Storage’, where they write about the
techniques for meeting increasing data storage needs due to knowledge
process outsourcing activities. There is one more interesting article on
Wireless Future by Hema Ramachandran and Bindu GR where they talk
about WiTricity, which is short for wireless electricity.
Avinash W Kadam, Advisor, ISACA’s India Task Force has
contributed an article about COBIT (Control Objectives for Information
and Related Technologies) titled ‘Why do we need COBIT 5 Business
Framework?’ In this article he has written about the formal meaning of
the word framework and says that we need frameworks as they provide
a structure for consistent guidance. Later, Mr. Kadam explains the five principles of the COBIT 5 Framework of ISACA.
Prof. Wallace Jacob has contributed to the Practitioner Workbench column under the section Programming.Tips(); in this issue his contribution is about ‘Fun with C Programs’, wherein he provides solutions for some interesting problems. Programming.Learn(“Python”)
column is omitted for want of space.
CIO Perspective column comes with a thought provoking article
titled ‘CIO to CEO: Only a Vowel Change?’ by S. Ramanathan. He
talks about what hinders CIO’s progress towards becoming CEO and
how CIO to CEO transition can be achieved. Dr. Manish Godse and
Dr Mahesh Deshmukh have written about ‘Job Analysis: A Tool for
Skill Building’ for HR column. Here they write in detail about Skill
Analysis Process and Skill Gap Analysis and how Job Analysis can
help build skill as a result.
In the Information Security section of the Security Corner
column, Adv. Prashant Mali has written an article about his views
regarding “Internet Censorship In India”, which is one of the current
topics of debate. There is also an interesting article by Advocate
V. Rajendran on hacking, which provides techno-legal issues of
hacking and its treatment in IT Act 2000. Another section called
IT Act 2000 under Security Corner is enriched with a writeup by
Adv. Subramaniam Vutha, where he has focussed on digital signatures
and provided information in Q&A style.
The newly introduced column called IT.Yesterday() in place
of ICT@Society comes with an article by Sugathan R P and
T Mahalakshmi, wherein they describe early years of IT in a tiny south
Indian town. As usual there are other regular features such as Ask
an Expert, wherein Ms Hiral Vegda has provided expert answers for
some questions and Happenings@ICT, where HR Mohan writes about
latest news briefs. On the Shelf! column is omitted to accommodate
various other articles that were received. CSI Reports and CSI News
section provide event details of various regions, SIGs, chapters and
student branches.
We have also compiled some of the feedback – both positive as
well as negative - received from various readers for your information
under ReaderSpeak() on the inside of the front cover.
Please note that we welcome your feedback, contributions and
suggestions at [email protected].
With warm regards,
Rajendra M Sonar, Achuthsankar S Nair,
Debasish Jana and Jayshree Dhere
Editors
Cover
Story
Ahsan Salim, Rahulnath H A, Gautham M D and Namsheed K S
Govt. Engineering College Barton Hill, Thiruvananthapuram
AGTAB Automatic Guitar Tabulation
Introduction
Gone are the days when music was played
using musical instruments and sung by a
singer and recorded using a diaphragm and
a needle connected to it. Computerized
beats and computer-generated voices are
used so commonly in the entertainment
industry these days that they are barely
noticed. We, human beings, have advanced so much technologically that the day when computers will sing lullabies to babies isn't far away. Technologies for digital music creation have moved forward
so far, so quickly, that it is now possible to
fully automate the production and postproduction activities associated with
music creation. It has made creation of
music so easy that, even amateurs can
today create professional quality work
at their homes, provided they have the
right hardware. With technologies such
as autotuning and beat quantization,
even someone who isn’t a virtuoso can
create magic. However, there are still
many small, yet important, fields that
have not yet been fully explored. This is
mostly because a software engineer may not know what a musician wants, or because musicians may be so unaware of the growth of technology that they do not know the capabilities of software engineering. One such area is
musical tablature.
One might want to note down the
musical notes that he/she creates using
a musical instrument, so that another
person may repeat the music he/she
creates.
One such notation used to keep a
record of the notes on paper is the “Staff”
notation, which is a set of five horizontal
lines and four spaces that each represents
a different musical pitch or, in the case of
a percussion staff, they represent different percussion instruments.
Fig. 1: The staff notation for the Indian National Anthem
Appropriate
music symbols, depending upon the
intended effect, are placed on the staff
according to their corresponding pitch
or function. Musical notes are placed
by pitch, percussion notes are placed by
instrument, and rests and other symbols
are placed by convention. Fig. 1 shows the
staff notation of “Jana Gana Mana”- the
Indian National Anthem.
To someone who has limited
knowledge about music, the staff notation
would be rather difficult to comprehend
and also to create. Because of these
drawbacks, a simpler notation known
as Musical Tablature notation was
introduced. The tablature is a form of
musical notation that indicates instrument
fingering positions, rather than musical
pitches. Tablature is common for fretted
stringed instruments such as the lute,
vihuela, or guitar, as well as many free reed
aerophones such as the harmonica. On
the tab there is a series of numbers. These
represent the frets of the instrument. Fig. 2
shows a sample tab diagram with 3 notes.
In the figure the first “3” on the second
line from bottom represents third fret on
string 5 of the guitar. The numbers are read
from left to right, so the tab diagram in
Fig. 2 can be interpreted as, play the third
fret on the 5th string, followed by the fifth
fret on the same string, and back to the third
again on the same string. The tablature
can be used to show the notations in any
tuning (viz. Standard, Dropped D, E Flat
etc.). The tuning is usually written above
the actual tab diagram.
Fig. 2: A sample Tab diagram
As of now, tabs are manually written down on paper or typed into a computer using a keyboard by the musicians or
musical programmers. Writing tabs down
on paper or creating tabs using computers
is a painstaking task as it involves noting
down each note at the right position
depending on the timing of each note.
To code it using a computer, software like Guitar Pro[1] or Tab Pro[4] is used. But
they all require the person coding them
to enter each note into the computer.
It takes long hours of work and decent
computer skills to do this right. If a simpler
automatic computer aided generator of
tabs existed it would had made life easier
for the musician making the tabs. These
difficulties that we faced while creating
tabs using computers served as the
driving force to come up with a software
that could generate musical tablature
of anything played using a guitar or
keyboard. The result of such an attempt is
what we call “AGTAB”- Automatic Guitar
TABulator.
AGTAB was designed to take as
input, from the default sound driver, the
notes played using a guitar (connected
to the computer through the line-in port
or microphone port of the sound card),
identify all received frequencies and their
amplitude values and use this information
to figure out the actual note played.
AGTAB then shows the image of the
detected notes in a tab diagram. Each note
has more than one frequency associated
to it, but the note is identified based on
the fundamental frequency associated
with it. The first challenge was to develop
an algorithm that was accurate enough
to identify the fundamental frequency
and remove unwanted noise signals. A
single note can consist of a fundamental
frequency, its overtones, and other noise-level frequencies. Moreover, when a single
string is struck, the other strings also tend
to vibrate, producing their associated
frequencies.
CSI Communications | October 2012 | 5
The second problem was the
tabulation. A single note, let’s say, note
A3 can be played at two or more positions
on the guitar, viz. 5th fret of 6th string and
5th string played open are both A3. An
algorithm based on relative position of the
previous note played was used in AGTAB
which has been explained later on in detail.
Since accuracy of the whole note
detection and tab generation procedure
is dependent on frequency produced by
the guitar, proper tuning of the guitar is
necessary. Hence Digital Guitar Tuner
module, which aids in tuning the instrument
to the standard tuning, was incorporated
with the software design, for easy tuning of
the guitar while using the software.
Through experimentation during the
development phase of the software, it was
found that for a normal guitar tuned to
the standard (EADGBE) tuning, only
frequencies in the range of 80 Hz to
1080 Hz are needed for note detection.
But to keep the accuracy high and improve
note detection, we chose to use frequencies
up to 4000 Hz. Hence, by the
Nyquist-Shannon[6] sampling theorem,
we used an input audio sampling rate of
8000 Hz.
Detection of Frequency
Once a sample has been obtained, it is
required to find the fundamental frequency
of the sample. As mentioned above,
musical notes, though represented by a
single frequency on paper, are not so for
real and the component frequencies of the
same note may vary from Guitar A to Guitar
B, except the fundamental frequency. This
is why different guitars have different tones
based on the type of materials used to
manufacture them. To a listener, the extra
tones add to the aesthetics of the music
played using the instrument; but for us, the
designers of AGTAB, it only added to the
complexity of the design.
Fast fourier transform
A Fast Fourier transform (FFT)[5] is an
efficient algorithm to compute the Discrete
Fourier Transform (DFT) and its inverse.
AGTAB takes 2048 point FFT algorithm to
return 1024 values in the frequency domain.
This converts the audio signal obtained into
frequency domain, making the extraction
of frequency possible. From this frequency
and amplitude graph the detection of the
frequency was done using an algorithm we
call the AutoCorrelation Algorithm[3].
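The FFT stage can be sketched as below. This is an illustrative reconstruction, not the authors' C# code; the 2048-point frame, 1024-bin output, and 8000 Hz sampling rate follow the figures given in the article, while numpy, the Hann window, and the function name are our own assumptions:

```python
import numpy as np

SAMPLE_RATE = 8000   # Hz, chosen via Nyquist for a 4000 Hz ceiling
FFT_SIZE = 2048      # 2048-point FFT -> 1024 usable frequency bins

def spectrum(frame):
    """Return (frequencies, magnitudes) for one 2048-sample frame."""
    windowed = frame[:FFT_SIZE] * np.hanning(FFT_SIZE)   # reduce leakage
    mags = np.abs(np.fft.rfft(windowed))[:FFT_SIZE // 2]
    freqs = np.fft.rfftfreq(FFT_SIZE, d=1.0 / SAMPLE_RATE)[:FFT_SIZE // 2]
    return freqs, mags

# Example: a synthetic 220 Hz (A3) tone
t = np.arange(FFT_SIZE) / SAMPLE_RATE
freqs, mags = spectrum(np.sin(2 * np.pi * 220.0 * t))
peak = freqs[np.argmax(mags)]  # strongest bin; bin width is 8000/2048, about 3.9 Hz
```

The strongest bin alone is too coarse (and, as the next section explains, often not the fundamental at all), which is why a separate fundamental-detection step follows.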
AutoCorrelation algorithm
Though at first finding the fundamental
frequency seemed obvious to us (it would
be the frequency most pronounced in the
frequency-amplitude graph), we were
proved wrong. Experimenting with more
than one semi-acoustic guitar, we observed
that the lower strings, namely the first and
second strings, produced sounds in which
frequencies other than the fundamental
were the most pronounced in the
frequency-amplitude graph. But though
the fundamental frequency did not have
the highest amplitude value, it was the
frequency that remained alive for the
longest period of time; it died out only
after all the other frequencies died out.
This observation led us to using
an autocorrelation algorithm to find out
the frequency that lasted the longest,
namely the fundamental frequency.
Autocorrelation is the cross-correlation
of a signal with itself. Informally, it is the
similarity between observations as a
function of the time separation between
them. It is a mathematical tool for finding
repeating patterns, such as the presence
of a periodic signal which has been buried
under noise, or identifying the missing
fundamental frequency in a signal implied
by its harmonic frequencies. It is often
used in signal processing for analyzing
functions or series of values, such as time
domain signals. The above mentioned
method has been implemented in the
present version of AGTAB.
Fundamentally, this algorithm exploits
the fact that a periodic signal, even if it is
not a pure sine wave, will be similar from
one period to the next. This is true even
if the amplitude of the signal is changing
in time, provided those changes do not
occur too quickly.
To detect the frequency, we take a
window of the signal, with a length at
least twice as long as the longest period
that we might detect. In our case, this
corresponded to a length of 1200 samples,
given a sampling rate of 44,100 Hz.
Using this section of signal, we
generate the autocorrelation function,
defined as the sum of the point-wise
absolute difference between the two
signals over some interval, which we chose
to be 600 points, through experiment.
Other values may also be used, but 600
proved sufficient for the demonstration.
The present version of AGTAB
uses the Autocorrelation Algorithm for
detection of frequency. Due to the need
for an interval in time to perform the
autocorrelation of the signal, there is a
fixed time delay needed to detect a note;
for our implementation with 600 points,
it is 0.3 seconds. Only notes played after
an interval of 0.3 seconds are detected and
considered notes.
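The difference-function search described above can be sketched as follows. This is an illustrative reconstruction, not the original implementation; the 600-point comparison interval and the 80-1080 Hz note range come from the article, while the helper name and the 8000 Hz rate (used elsewhere in the article) are our assumptions:

```python
import numpy as np

RATE = 8000  # Hz, the input sampling rate used by AGTAB

def fundamental(signal, min_freq=80.0, max_freq=1080.0):
    """Estimate the fundamental as the lag whose point-wise absolute
    difference with the original signal is smallest, i.e. the period
    at which the signal best matches a shifted copy of itself."""
    window = 600                       # comparison interval from the text
    best_lag, best_score = None, float("inf")
    for lag in range(int(RATE / max_freq), int(RATE / min_freq) + 1):
        score = np.sum(np.abs(signal[:window] - signal[lag:lag + window]))
        if score < best_score:
            best_lag, best_score = lag, score
    return RATE / best_lag

t = np.arange(2400) / RATE                        # 0.3 s of audio
f0 = fundamental(np.sin(2 * np.pi * 110.0 * t))   # an open A2 string, ~110 Hz
```

The loop over candidate lags is what costs the fixed detection delay mentioned above: a full window of samples must be buffered before any comparison can run.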
Frequency patterns algorithm
As mentioned above, the Autocorrelation
Algorithm is slow. A more accurate
algorithm that we have tried is to analyze
the harmonic spectrum of the input signal,
create a database of the possible spectra
that a guitar can produce, and compare
against them.
Analyzing the peaks in the frequency
domain graph we can find certain patterns.
It has been noted after several trials that
a particular note has a unique pattern of
frequency peaks.
These frequency peaks consist of the
fundamental frequency, its harmonics,
and the overtones. We noticed that each
note always has a particular pattern
in which the frequencies appear for a
particular guitar. The amplitude values of
the component frequencies of each note
always maintain a particular ratio between
them, which remains constant (though
the exact amplitude values depend on the
level of the input). This can help identify
a particular note if a database of all
possible frequency patterns and their
corresponding note names is stored.
Identification then involves cross-checking
the detected peaks against the database.
So there is a learning phase to store
all possible sorts of patterns by the notes
produced by the particular guitar, as
each guitar has its own tone. The learning
phase accommodates the use of different
types of guitars (or any other stringed
instruments). The learned instruments
may be stored as a profile which can be
loaded when the instrument is used.
For this algorithm, not every peak is
necessary; only values above a particular
threshold are selected. But since the
overall amplitude level of the input can
vary depending on the force with which
the guitarist strikes each note, and also
on the string, the threshold needs to
be adjusted dynamically. The threshold
is selected according to the amplitude
level of the input, so that only relevant
frequencies are accepted.
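As a sketch of this (still design-phase) algorithm, the dynamic threshold and the pattern cross-check might look like this. The function names, the 0.2 threshold ratio, the tolerance, and the toy database entries are illustrative assumptions, not the authors' code or data:

```python
import numpy as np

def peak_pattern(freqs, mags, rel_threshold=0.2):
    """Keep only local maxima above a threshold that scales with the
    overall input level, then normalise amplitudes to level-independent ratios."""
    threshold = rel_threshold * mags.max()          # dynamic threshold
    idx = [i for i in range(1, len(mags) - 1)
           if mags[i] >= threshold and mags[i] > mags[i - 1] and mags[i] > mags[i + 1]]
    return [(freqs[i], mags[i] / mags.max()) for i in idx]

def match_note(pattern, database, tol=5.0):
    """Cross-check a measured peak pattern against stored patterns and
    return the note name of the closest match, or None."""
    best, best_err = None, float("inf")
    for name, stored in database.items():
        if len(stored) != len(pattern):
            continue
        if not all(abs(f1 - f2) < tol for (f1, _), (f2, _) in zip(pattern, stored)):
            continue
        err = sum(abs(f1 - f2) + abs(a1 - a2)
                  for (f1, a1), (f2, a2) in zip(pattern, stored))
        if err < best_err:
            best, best_err = name, err
    return best

# Toy spectrum with peaks at 110/220/330 Hz in a fixed amplitude ratio
freqs = np.arange(500, dtype=float)
mags = np.zeros(500)
mags[110], mags[220], mags[330] = 1.0, 0.6, 0.3
db = {"A2": [(110.0, 1.0), (220.0, 0.6), (330.0, 0.3)],
      "E2": [(82.0, 1.0), (164.0, 0.5)]}
note = match_note(peak_pattern(freqs, mags), db)
```

Because amplitudes are stored as ratios rather than absolute values, the same pattern matches whether the string is struck hard or softly, which is the point of the dynamic threshold.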
Cross checking
The input samples in the frequency
domain are now cross-checked against
the database, and if there is a matching
entry, the detected note is returned in the
form [note][octave], for example C4 or E2.
The above algorithm is not used in
present version of AGTAB, and is only in
the design phase. The present version
features Autocorrelation.
Note Duration
We have discussed how to obtain the
fundamental frequency of the sound every
300ms. A note usually lasts more than
that. So it is important to find the duration
of our input note or the algorithm will
interpret a sustained note as a new note.
This section describes a method to find
the note duration.
Here we have to analyze the Attack
Decay Sustain Release (ADSR) envelope.
An ADSR envelope is found in the time
domain. Fig. 3 shows what ADSR is. The
peaks found in the graph are the attacking
phase. It is the time when the strings
are plucked. The peak represents the
maximum volume of the note. The time
between two maxima is the duration of
the note. Fig. 3 depicts the duration of
a note.
Fig. 3: ADSR envelope of a signal
The simple idea that the note has the
highest amplitude at the end of the attack
phase can be used to know if the note
detected is the same note being sustained
or a new note. Once the decay phase starts
the value of the amplitude cannot increase
anymore, as it is clear from the diagram.
Now, if a new note is played the amplitude
value will usually be more than the present
amplitude value of the already detected
note, assuming all notes are played with
almost the same intensity and that there is
sufficient time interval between the notes
played. This is because the new note is
now in its attack phase while the old one
is in the decay or sustain or release phase.
Hence a threshold is set for the amplitude,
which is dynamically adjusted depending
on the present amplitude value of the
previous note. Signals with amplitude less
than threshold are not considered for note
detection.
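The amplitude-gating rule above can be sketched as follows; the 1.2 ratio, the 300 ms framing, and the helper name are illustrative assumptions rather than values from the article:

```python
def segment_notes(frame_amps, ratio=1.2):
    """Split a sequence of per-frame peak amplitudes (one value every
    300 ms, matching the detection interval) into notes. A frame starts
    a new note only if its amplitude exceeds a threshold derived from
    the present amplitude of the note already sounding."""
    notes = []           # each note = list of frame indices it spans
    envelope = 0.0       # present amplitude of the sounding note
    for i, amp in enumerate(frame_amps):
        if amp > ratio * envelope:   # attack: louder than the decaying note
            notes.append([i])
        elif notes:
            notes[-1].append(i)      # sustain/decay/release of the same note
        envelope = amp               # threshold tracks the previous note
    return notes

# A note that decays over three frames, then a fresh pluck
notes = segment_notes([0.9, 0.7, 0.5, 0.95, 0.8])
```

Frames 0-2 are grouped as one sustained note and frame 3 starts a new one, because 0.95 exceeds the decayed 0.5 level while 0.7 and 0.5 never exceed the threshold set by the frame before them.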
Implementation Details of AGTAB
The platform used is Visual C# on the .NET
framework. The audio library BASS[7] is used
for input, decoding, saving, and other low-level
activities in audio processing. BASS is also
used to obtain the frequency spectrum using
FFT, and to find the beat of the song, which is
later used in tabulation. The GUI is built using
Visual C#'s in-built libraries.
Tabulation
The final part of AGTAB is the tabulation
of the notes onto the tab diagram. The
output from the previous stage is the name
of the note computed using the
autocorrelation function, for instance "E3".
The major challenge here was the fact that
the same note can be played at multiple
positions on the guitar. For example, A2 can
be played either at the 5th fret of the 6th
string or as the open note of the 5th string.
This makes representation of notes a
difficult task.
During the literature survey, we noticed
that many audio recognition systems use
the Hidden Markov Model (HMM)[2] to
model such predictions involving probability.
But it seemed too complicated a procedure
to implement for a task that did not require
such complexity, especially considering that
guitar notes are usually played within the
same harmonic context and close to the
finger position where the last note was
played.
So we came up with a simple yet effective
solution to this problem. The distance
between the fret of the previously played
note and each possible fret of the current
note is calculated. For example, if the
previously played note was string 6, fret 2,
and A2 is played next (which has two
possibilities, viz. string 6 fret 5 and string 5
open), the distances d1 = |5-2| = 3 and
d2 = |0-2| = 2 are found. The position
closest to the previous note is assumed to
be the current note; in our example, since
d2 is smaller, string 5 open is chosen. If both
frets are at the same distance from the
previous note, the distance between the
strings is used as the metric for comparison.
This method proved effective in the tests
performed, with good accuracy.
Fig. 4: User interface of AGTAB
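A minimal sketch of this fret-selection rule follows. The article's implementation is in C#; the MIDI-style pitch numbering and function names here are our own illustrative choices:

```python
# Standard-tuning open-string pitches (MIDI numbers), 6th string to 1st:
# E2, A2, D3, G3, B3, E4
OPEN_STRINGS = {6: 40, 5: 45, 4: 50, 3: 55, 2: 59, 1: 64}

def positions(midi_note, max_fret=20):
    """All (string, fret) pairs that sound the given pitch."""
    return [(s, midi_note - base) for s, base in OPEN_STRINGS.items()
            if 0 <= midi_note - base <= max_fret]

def choose_position(midi_note, prev):
    """Pick the candidate closest in fret distance to the previous note;
    ties are broken by string distance, as described in the text."""
    prev_string, prev_fret = prev
    return min(positions(midi_note),
               key=lambda p: (abs(p[1] - prev_fret), abs(p[0] - prev_string)))

# A2 (MIDI 45) after string 6, fret 2: candidates are (6, 5) and (5, 0);
# d2 = |0-2| = 2 beats d1 = |5-2| = 3, so the open 5th string is chosen.
pos = choose_position(45, (6, 2))
```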
Test Results
The software was tested using a Pluto
HW10, semi-acoustic guitar. The testing
was done for accuracy mainly. More than
a thousand test notes were played and the
output of the software was verified. The
software gave 958 correct results out of
1000 test notes inputted, which is above
95% and much above the initial targeted
accuracy of 75%.
The major shortcoming of the
software is the inter-note interval that
must exist for detection of notes to
be possible. As mentioned earlier, the
autocorrelation function needs time to
check whether the detected note is the
sustained sound of the same note or really
the next note. Because of this, if the same
note is played again within 0.3 seconds,
it will not be detected as a new note. This
is probably the only shortcoming of
AGTAB. Also, guitar effects like bends,
tremolo, hammer-ons, pull-offs etc. are
not detected.
The tuner module using autocorrelation
proved to be as accurate as any
commercially available tuner
Continued on Page 27
Cover Story
Satish Babu
Email: [email protected]
Machine Recognition of Karnatic Ragas
A key concept of Indian music is the
concept of ragas, which are strictly-defined
melodic modes in which compositions are
created. While Western classical music
does have the somewhat-related concept
of modes or scales (such as Dorian,
Phrygian, Lydian, Mixolydian), these do
not match the depth and complexity of
ragas of Indian music.
While ragas seem to have originally
arisen from folk tunes, there have been
several attempts to classify and systematize
them in the process of evolution of classical
Indian music. The approaches taken for this
have been different in the Hindustani and
Karnatic streams (which otherwise share
several similarities). Today, the Hindustani
system follows the thaat system, while
Karnatic music uses the melakarta
classification proposed by Venkatamakhi
around 1640 CE.
Both schemes propose 'parent'
ragas, from which numerous 'child' ragas
are defined to have been derived. The
Venkatamakhi classification provides
72 parent ragas (melakarta ragas) from
which all the other ragas are derived. In
the process, the melakarta framework
also predicts many ragas that did not
exist earlier (some of which, despite their
'synthetic' nature, have since become
popular).
It must be noted, however, that it
is often very difficult to thus categorize
many ragas, as they contain not just notes
but also complex transitions between
notes (known as gamakas or meends) that
characterize them, and which defy easy
classification. Not only do these gamakas
constitute the core of ragas, but they also
cannot be notated in the current notation
system in use, which is the reason
tradition dictates that ragas can only be
learnt directly from the Guru.
Recognition of a raga by a human
being is a complex case of real-time
parsing followed by pattern matching, that
has to match not only the tones or notes
used, but also the inter-tonal transitions.
Considering further that Indian music is
microtonal, the remarkable complexity of
raga recognition is fully encountered when
we try to achieve machine recognition.
A related problem for machine
recognition of ragas is that of tonic
detection. The tonic (also known as the
base, fundament, 'sa' or the adhaar svara)
defines the 'reference framework' for
the melody. The tanpura, visible in most
concerts (although it is being superseded
by its electronic version), sets this base
note. Anyone with even a slight ear for
music can effortlessly identify this pitch,
on which the entire musical edifice of
ragas is built.
For a machine parser, the
identification of the tonic is a non-trivial
task (especially without the tanpura), as
there may be very few aural cues that
can be reliably assumed to point to the
tonic. While this challenge has not been
satisfactorily addressed, both statistical
and analytical approaches (such as pitch
histograms) seem promising.
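One such pitch-histogram heuristic can be sketched as follows. This is a toy illustration of the statistical idea, not a published algorithm, and it sidesteps the microtonal subtleties discussed above by folding rounded pitches onto a single octave:

```python
from collections import Counter

def tonic_candidate(pitches):
    """A simple pitch-histogram heuristic: fold detected pitches (in
    semitones, MIDI-style) onto one octave and take the most frequent
    pitch class as the tonic candidate. Real recordings need far more
    care (gamakas, pitch drift, weighting by note duration)."""
    histogram = Counter(round(p) % 12 for p in pitches)
    return histogram.most_common(1)[0][0]

# A phrase hovering around pitch class 2 (i.e. D taken as 'sa'):
pc = tonic_candidate([62, 62.1, 66, 69, 62, 74, 61.9, 69])
```

The heuristic rests on the observation that melodies dwell on and return to 'sa' far more often than chance, which a histogram captures even without the tanpura.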
My own experiments in raga
recognition started from a database of
Karnatic ragas that I was building in 1995
as a hobby. After creating a database
of over 300 of the more popular ragas, I
decided to try to create a raga recognition
engine. The first challenge, of course, was
tonic recognition—without which I could
not even extract the component notes.
Rather than get bogged down by
this challenge, I decided to side-step it by
using a keyboard to input the music. This
approach is quite imprecise and unsuited
for the more complex ragas, since a
fixed-pitch instrument such as a keyboard
(or piano or harmonium) cannot reproduce
with fidelity the microtones used in
Indian music.
Using a keyboard, however, significantly
reduces complexity. The MIDI (Musical
Instrument Digital Interface) input port
on modern sound cards provides music
not as sound but as data (such as middle
C, F#, or D♭), together with associated
parameters such as duration, note velocity,
or instrument. A melody played on a
keyboard connected to the PC through
the MIDI interface solves the tonic
identification problem (middle C is taken
as the tonic) and also eliminates manual
parsing/mapping of pitches to notes, as
MIDI provides the notes automatically.
My raga recognition software
consisted of a MIDI keyboard, a parser
(comprising a MIDI module that would
read the port using a C program), a
matching engine (another C program that
would compare the input with the patterns
in the database and identify matches), and
the database (held as a flat-file).
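The matching engine (originally a C program) can be sketched in outline as follows. The swara naming, the MIDI-to-swara mapping, and the toy database entries are simplified illustrations, not the author's actual database or code:

```python
# Swara names per semitone offset from the tonic; MIDI middle C (60) is 'sa'.
SWARAS = ["S", "R1", "R2", "G2", "G3", "M1", "M2", "P", "D1", "D2", "N2", "N3"]

def swara(midi_note):
    """Map a MIDI note number to a swara name, middle C as the tonic."""
    return SWARAS[(midi_note - 60) % 12]

# A toy flat-file-style database: raga name -> set of swaras used.
RAGAS = {
    "Hamsadhwani":     {"S", "R2", "G3", "P", "N3"},
    "Hindolam":        {"S", "G2", "M1", "D1", "N2"},
    "Sankarabharanam": {"S", "R2", "G3", "M1", "P", "D2", "N3"},
    "Bilahari":        {"S", "R2", "G3", "M1", "P", "D2", "N3"},
}

def match_ragas(midi_notes):
    """Return every raga whose swara set equals the notes heard. Ragas
    sharing the same notes (e.g. Sankarabharanam and Bilahari) all match,
    which is exactly the short-list behaviour described below."""
    heard = {swara(n) for n in midi_notes}
    return sorted(name for name, notes in RAGAS.items() if notes == heard)

# C, D, E, G, B around middle C -> a unique match
result = match_ragas([60, 62, 64, 67, 71])
```

Set equality is the crudest possible matcher: it ignores note order and phrase patterns entirely, which is precisely why ragas with identical note sets cannot be told apart by it.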
This recognition engine was capable
of easily identifying ragas with unique
notes (such as Hamsadhwani or
Hindolam), as the pattern matching was
straightforward in these cases.
It had difficulties in identifying ragas
that used the same notes in different
configurations (such as Sankarabharanam,
Bilahari, Devagandhari and Arabhi, all
of which used exactly the same notes
as defined by the 29th Melakarta,
Dheerasankarabharanam, which is also
the C-Major scale of Western music). In
this case, the engine would list all the four
as matches, and thus could provide only
a short-list and not an exact match. A
more nuanced search engine would also
need to statistically identify the
frequently-used patterns (called
sancharas) of each raga, and then
disambiguate cases such as the above.
However, this also meant that the
database needed to hold the common
sancharas of each raga as well as its notes.
The raga recognition engine,
consisting of the keyboard and the PC,
represented an early model of what the
computer could do, and had obvious
limitations. Despite its limited capabilities,
it was greeted with astonishment and
curiosity (and some skepticism!) in 1995,
when it was demonstrated to the public.
I had the good fortune to present it, in
front of a public audience, to Maestro M.
Balamuralikrishna, at Thrissur, in 1996.
As a passionate music-lover, I am
aware that this effort merely scratched
the surface of what is possible. It was
interesting for me to observe that what
was an effortless task for a trained human
being, was significantly complex for a
computer. However, given today's melody
detection capabilities of numerous
Smartphone apps, I am sure the day is
not far away when ragas would also be
identified as easily by machines.
Cover Story
Aakash Goyal
Email: [email protected]
Cyber Music & Digital Rights Management
Digital rights management (DRM)
technology is used to protect against
piracy and is also called copy protection.
With this service, the copyright holder can
remotely control installation, listening, and
duplication of files. In the context of cyber
music, it may mean a specific music player
for the files, a restricted number of copies,
a specific login ID and password, a limited
number of downloads, and so on. Putting a
DRM lock on music files involves several
stages:
•	Encryption of the music file using DRM keys;
•	Distribution through web server download or email to the purchasing party;
•	Authentication of the legitimate user, by authenticating their internet connection;
•	Decryption, requiring the purchasing party to get the encryption keys and decrypt the original music.
On the basis of the presence or absence
of DRM, music services can be classified
into two categories: DRM-free and
DRM-based. EMusic and the Amazon MP3
Music Store are two examples of the former.
EMusic is an online music store (it also
sells audiobooks). It is a subscription
service for which users pay as per their
need and use. It acts as a special store
providing music for the iPod without
digital rights restrictions. It does not
embed any purchaser information in the
track, but maintains the records on its
internal server. It uses the LAME encoder
to encode the music into lossy
variable-bit-rate MP3, with files analyzed
to maintain an average bit rate of
192 Kbit/s. It includes the following features:
•	It provides a 7-day trial before subscription.
•	It provides the capability of downloading multiple tracks.
•	It offers two types of package activation schemes, a monthly pack and a Booster Pack, for 30 and 90 days respectively, with unlimited access.
•	It is a download-to-own subscription service, i.e. the music file is transmitted from the store over a network and the user can keep the content indefinitely.
•	It is very popular because it supports the MP3 format and is DRM (Digital Rights Management) free.
The Amazon MP3 Music Store was
the first DRM-free cyber music service.
Amazon is subject to copyright law and
restrictions, but it imposes no enforcement
of these through DRM methods. It sells
music in partnership with Warner Bros.
and Sony Music, and also sells
self-published music.
iTunes and Napster are two examples
of DRM-based music services. iTunes
makes use of the FairPlay DRM system for
cyber music. It restricts Apple music to
iTunes only, as Apple does not share its
license with other companies. FairPlay is
based on the QuickTime multimedia
software used by all Apple products. All
songs are encoded with FairPlay, which
uses the AES and MD5 algorithms in
combination for encryption. A master key
is used to decrypt the audio, and is itself
decrypted using a "user key". With the
purchase of every new song, a new random
key is generated and stored in iTunes, so
that iTunes can retrieve the user key
required to decrypt the master key. Every
Apple device and medium, like iPods and
iPhones, has its own encrypted key bank.
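FairPlay's actual cipher internals are proprietary, so the sketch below only illustrates the two-level key idea described above (a per-song master key, itself stored encrypted under a user key). A toy SHA-256-based XOR stream stands in for real AES, and all names and data are hypothetical:

```python
import hashlib
import os

def stream_cipher(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher keyed via SHA-256 (a stand-in for AES here;
    being XOR-based, the same function both encrypts and decrypts)."""
    keystream, counter = bytearray(), 0
    while len(keystream) < len(data):
        keystream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, keystream))

user_key = hashlib.sha256(b"account-secret").digest()  # held in the device's key bank
song = b"...compressed audio frames..."

master_key = os.urandom(32)                    # fresh random key per purchased song
locked_song = stream_cipher(master_key, song)  # audio encrypted with the master key
locked_master = stream_cipher(user_key, master_key)  # master key wrapped by user key

# Playback: unwrap the master key with the user key, then decrypt the audio.
unlocked = stream_cipher(stream_cipher(user_key, locked_master), locked_song)
```

Wrapping the per-song master key under an account-level user key is what lets the store issue one purchase per song while revoking or authorising whole devices through their key banks.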
Napster was established as a peer-to-peer
file sharing internet service that enabled
users to share audio files. Napster ran free
from May 2006 to March 2010, allowing
users to download full-length songs. It
was acquired by Roxio in 2011 and became
an online music store. It mainly offers
three types of services:
1. Napster (basic subscription tier): provides access to music with two payment methods, one a pay-per-track option and the other $5-7 per month for unlimited listening.
2. Napster To Go (the company's portable subscription tier): provides access to music for $8-10 per month for unlimited listening. Users can also transfer their choices to any music player device with a supported format.
3. Napster Mobile: enables the user to search, browse, purchase, preview, and play music on a mobile phone with the help of its music player.
Fig. 1: Functional model of DRM for Cyber Music

About the Author
Aakash Goyal, B.Tech. (CSE), M.Tech. (CSE), is currently working as Assistant Professor at Jind Institute of Engineering and
Technology, Jind (under Kurukshetra University, Kurukshetra). He has published 2 national and 3 international papers in various
conferences. He has also attended several seminars and workshops. He is a member of the working committee of the international
journal JRPS (Journal for Research Publication and Seminar). His areas of interest are network security, cryptography &
information security, and mobile ad hoc networks.
Cover Story
Hareesh N Nampoothiri
Research Scholar, University of Kerala, Thiruvananthapuram
http://www.newnmedia.com/~haree
Digital Restoration of Analog Audio
The age of audio cassettes and cassette players is long gone. In today's digital world we are used to hearing audio in
digital formats with no hiss or hum. But what about our old collections of music on audio cassettes? Are we going to lose them
forever? Not really. It is possible to digitally record the analog audio stored on audio cassettes and restore the sound quality
to acceptable levels using modern computer techniques. All we need is a cassette player, an RCA to stereo cable, a
multimedia computer with the necessary software, and a little patience.
Source: Flickr.com/rockheim/.:elNico:.
I recollect the days when the stereo
cassette player in my home used to be
my only source of music. I used to enjoy
hearing Kathakali padams (the songs
rendered while performing Kathakali, a dance
form of Kerala), and my dad had a huge
collection of padams rendered by various
maestros in the field. I, just like the majority
out there, shifted to digital music when the
entire audio industry shifted to audio discs
and other digital audio formats. Along with
the medium, we also shifted from the music
of yesteryear maestros to the digitally
available music of new-generation artists.
Cassette decks are also becoming less
popular these days; most of us use
computers or digital audio players to listen
to music. But then, what happens to the old
music collections on our old audio tapes?
Often they were recorded from live concerts
or stage shows, and chances are very rare of
getting a digital version of the same. As
time passes, the tapes will become unusable
or the sound quality will degrade. Moreover,
it is not very easy to store these audio
tapes, especially when you have a huge
collection of recorded audio. Not to mention,
there is no way to get to a particular track as
quickly as we can with digital files. So, are
we doomed to lose all the music on our old
tapes? Fortunately, not. It is possible to
restore the recordings digitally and keep
them safe from now on.
Before We Start...
We need to arrange a few things before
we start restoring the old cassettes.
•	We need a good cassette player. Clean the play head and make sure it gives the best sound it can produce. If you are using very old cassettes, it is also advisable to clean the head in between recordings.
•	If a cassette has not been touched for a long time, it is good to fast forward and rewind it at least once before recording it.
•	Make sure the cassette deck has a stereo out. It will be marked Line Out. In most cases these will be female RCA connectors.
•	Sound cards (on-board as well as separate) often have 3.5 mm TRS connectors. So we need an RCA male to 3.5 mm TRS male cable to connect the audio system to the PC.
•	Sometimes old cassette players may not have an RCA out option. In that case you may use the headphone out. It can be a 3.5 mm stereo out (use a 3.5 mm stereo to 3.5 mm stereo cable) or a 6.35 mm stereo out (use a 6.35 mm stereo to 3.5 mm stereo cable). Pick an appropriate cable according to the connector available in your player.
•	Make the connections properly. The red RCA connector indicates Right and the white one Left. Insert the TRS plug into the line-in port of your sound card.
Now we are all set to begin the digital
restoration process. Pick a cassette,
put it inside the deck. Wait! We are not
yet ready to press the play button. We
need to do a few settings in the computer
as well.
Meet Audacity, Our Digital
Restoration Partner
To download Audacity, visit:
http://audacity.sourceforge.net/
There are many commercial as well as
free software packages available on the
Internet for digital recording. Here we are
using Audacity, an open-source application
for recording and editing sound. Audacity
can be installed on Microsoft Windows,
Apple Mac, and GNU/Linux-based PCs.
Fig. 1: Sound settings in Windows
Download Audacity from the official
website and install it in your PC. Go to
Control Panel > Sound (in Windows) and
double click the Line In device. In the
advanced Line In Properties window you
can select the quality settings. (See Fig. 1)
Now start Audacity and go to Edit >
Preferences (Ctrl + P). Select devices from
the items list on the left side and select the
device as Line In. Also change the value of
Channels: to 2 (Stereo). Alternatively you can
do it from the options bar on top. (See Fig. 2)
Now you can hit the Record button
(Fig. 3) available in the options bar and then
start playing the tape in the cassette deck.
Fig. 2: Choose the input device and channels
Fig. 3: Hit Record button to start recording.
You can continue recording the same track by
Shift clicking the button
The Audacity window during the
sound recording in progress is shown in
Fig. 4. We need to take care of a few things
during the process.
•	Keep the volume of the cassette player near maximum. This helps keep the inherent tape noise to a minimum level in the signal sent to Audacity.
•	Adjust the Input Volume using the slider (as shown in Fig. 4), so that the waveform remains well inside the limits. You can also monitor the Input Level indication (shown in red) and adjust the level so that the indicators do not touch the 0 limit.
•	Do not use other processor-intensive applications while recording, or any application which will use the sound card and driver.
•	Note the Project Rate (Hz) combo box in the bottom left corner. 44100 Hz or higher is recommended here.
•	Instead of recording the entire cassette at one stretch, it is advisable to pause in between and save the file. Audacity will save it as an Audacity project file (*.aup extension) and will create a directory of the same name to store the project files.
Cleaning Up
Once we complete recording an audio
tape, the next thing to do is to clean the
recorded tracks. Cleaning up primarily
involves cutting away the bad portions
of the audio spectrum and tweaking it in
such a way that the final audio will sound
good.
Normalizing
Notice the levels of the left and right
channels of a sample recording in Fig. 5.
The levels are not the same. In some cases
the waveform may not be centered on the
horizontal line at 0.0 amplitude (a DC
offset; this problem is not present in this
recording). Also, the amplitude level may
be too high, which is not very pleasant to
hear. These kinds of problems can be
corrected using the Effect > Normalize
option.
Fig. 5: A sample recording with separate
channels for Left and Right audio streams
Select the option as shown in
Fig. 6 and once we click the OK button, the
waveform will get normalized to -1.0 dB
amplitude. The track after normalization
can be seen in the background of the
Normalize dialogue box. It may be noticed
that the level difference between Left
and Right streams got reduced to a
certain extent.
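What Normalize does can be sketched in a few lines. This is an illustrative re-implementation of the idea (DC-offset removal plus peak scaling to a target level), not Audacity's actual code; the function name and sample values are our own:

```python
def normalize(samples, target_db=-1.0, remove_dc=True):
    """Peak-normalise a list of float samples (range -1.0..1.0) to the
    target level, optionally removing any DC offset first - the two
    fixes Effect > Normalize applies."""
    if remove_dc:
        offset = sum(samples) / len(samples)     # mean = DC offset
        samples = [s - offset for s in samples]  # re-centre on 0.0
    peak = max(abs(s) for s in samples)
    target = 10.0 ** (target_db / 20.0)          # -1 dB -> ~0.891 linear
    gain = target / peak if peak else 1.0
    return [s * gain for s in samples]

out = normalize([0.1, 0.45, -0.25, 0.3])         # a quiet clip with a DC offset
```

After the call, the waveform is centred on 0.0 and its loudest sample sits exactly at -1.0 dB, leaving a little headroom below clipping.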
Fig. 6: Normalized track is given in the
background
Fig. 4: Adjust the Input Volume in such a way that the waveform does not
touch the top and bottom boundaries.
To further correct the level difference
between right and left channels we can
use the Pan option available on the left
side of each track. By checking the output
monitor you can adjust the percentage
of panning. It will not get reflected in
the waveform. But, once we export the
track into a sound file, this change will be
applied. (See Fig. 7)
Increasing Volume
Often, after normalizing and applying other
filters to remove hiss/hum, the volume
becomes too low. By selecting Effect >
Amplify... you can increase the volume
to the desired level. Keep in mind that the
waveform after amplification should stay
within the top and bottom limits; otherwise
the amplified sound may not feel good.
Exporting Audio to MP3
By default Audacity will not allow you
to export the sound to MP3 format. You
need to install additional plug-ins for
this functionality. To do that go to Edit >
Preferences... menu. Select Libraries from
the list of items on the left. (See Fig. 10)
Hit the Download button and it will
take you to the download page of the
LAME MP3 Library. Download and install
the library. If you prefer to manually install
the library, you can press the Locate...
button and add the library file.
Once you add the library files, you can
select the MP3 file type from the window
opened by the File > Export... menu item.
Click on the Options button and you will get
a dialogue to set variables such as Bit Rate
Mode, Quality, Channel Mode, etc. (See Fig. 11)
Fig. 7: Adjusting Pan slider
Splitting Stereo Tracks
In some cases we may need to edit left
and right channels separately. In such
situations we may split a track in to two,
one representing the left channel and the
other representing the right channel. Now
we can edit them separately Click the
track name area (Audio Track in the given
example) to show the menu and select
Split Stereo track. (See Fig. 8)
Fig. 8: A stereo track split into two
separate tracks
Fig. 9: Here is the final output of the sample
recording after cleaning up
Cleaning recorded audio is mostly
about playing with the various available filters.
It is not possible to correct all sound
problems using a single filter. You
need to play around with different filters,
in different orders, to get a good result.
The final output of the sample recording
(after exporting to a *.WAV file) is given in
Fig. 9. Notice the difference in the
waveform as compared to the image given
in Fig. 5. Keep in mind, overdoing any of
these filters will affect the entire track and
we may feel a difference in the sound.
Removing Hiss/Hum
Hum is low-frequency noise, while hiss is
high-frequency noise. Almost all analog
formats will have some amount of hiss and
hum. The High Pass Filter... and the Low
Pass Filter... available in the Effect menu
of Audacity will help you to remove any
hiss and/or hum present in the recorded
file. The Noise Removal... option can also be
used for removing noise.
Fig. 10: Preferences: Libraries window
Fig. 11: MP3 Options dialogue box
Once you hit the Save button you will
get another window in which you can add
details of the sound file such as Artist
Name, Track Title, Album Title, Track
Number, etc. Hit the OK button and the
file will get saved in MP3 format. Always
remember to keep a backup of these
digitally restored audio files. Now we can
transfer the songs to our digital music
player and once again enjoy the music
from our old cassettes of favorite artists
from the past.
Happy Digitizing!
About the Author
Hareesh N Nampoothiri is a visual design consultant with more than a decade of experience and has worked
with government organizations like C-DIT, C-DAC, and the University of Kerala, as well as other private organizations.
Currently he is doing research at the University of Kerala on communication design with special reference to
the aesthetic principles of Indian art. He is the author of two books on graphic design and a regular columnist
in leading technology magazines including CSI Communications. Kathakali, blogging, and photography are
his passions. He has directed a documentary feature on Kathakali and also an educational video
production for IGNOU, New Delhi.
CSI Communications | October 2012 | 12
www.csi-india.org
Article
Pratik Thanawala
Lecturer, AES Institute of Computer Studies, School of Computer Studies, Ahmedabad University, Ahmedabad, Gujarat
Email id: [email protected], [email protected]
Hadoop Mapreduce: Framework for Parallelism
As the volume of data grows rapidly,
Hadoop MapReduce provides an effective
processing framework for handling
massive amounts of data in parallel on large
clusters. Hadoop provides the distributed
programming framework for large-scale
data processing, while MapReduce is
a programming model and associated
implementation for processing and generating
large data sets at web scale. Hadoop is an
open-source software framework that
supports data-intensive distributed
applications; it is used successfully by many
companies and is aimed at the analysis and
processing of large amounts of data. It is a
scalable and reliable system for shared storage
and analysis, and it automatically handles data
replication and node failure. It includes a
distributed file system, called the Hadoop
Distributed File System (HDFS), that stores
large amounts of data with high-throughput
access on the cluster.
MapReduce is the highly scalable key
algorithm that the Hadoop MapReduce
engine uses to distribute work across
many computers in a cluster. It was
originally proposed by Google to handle
large-scale web search applications and
utilizes the Google File System (GFS) as an
underlying storage layer to read input and
store output. The main idea of MapReduce
is to hide the details of parallel execution
and allow users to focus on data
processing strategies. It emphasizes
the use of many small machines to process
jobs that normally could not be processed
by a single large machine. It enables one to
exploit the massive parallelism provided by
the cloud and provides a simple interface to a
very complex distributed computing
infrastructure.
Simply put, MapReduce is organized as
a "Map" function and a "Reduce" function.
The map transforms a piece of data into
some number of <key, value> pairs. The
notion of map is as follows:
• Takes as input a <key, value> pair, where the key is a reference to the input value and the value is a data set on which to operate
• Processing is done by a function supplied by the user and applied to every value in the input:
map(input_rec) {
    emit(k1, v1)
    emit(k2, v2)
}
• Produces a set of <key, value> pairs as the output of the job, conceivably of different types
Each of these elements is then sorted
on its key and routed to the same node,
where the "reduce" function is used to merge
the values into a single result. The notion
of reduce is as follows:
• Done by a function supplied by the user:
reduce(key, values) {
    combined = initialize()
    while (values.hasNext()) {
        combined = merge(combined, values.next())
    }
    collect(key, combined)
}
• Starts with a large number of intermediate <key, value> pairs
• Ends with very few finalized <key, value> pairs
• The starting pairs are sorted by their key
• The values for a given key are supplied to the reduce function by an iterator
In short, Map returns information
that is accepted by Reduce, which reduces
it to a much smaller amount of data.
Fig. 1: Map and Reduce working together (Map
returns information; Reduce accepts it; the
amount of data is reduced)
Hadoop MapReduce has become
a powerful computation model for
processing massive amounts of data
on large clusters such as the cloud. Hadoop
is a widely used MapReduce framework
whose Java source code is freely available.
It divides the data set into independent
chunks which are processed by the map
tasks in a completely parallel manner.
The framework sorts the outputs of the
maps, which are then given as input to the
reduce tasks; both the input and the output
of a job are stored in the same file system.
The framework takes care of scheduling
tasks, monitoring them, and re-executing
failed tasks. It is this "MapReduce" engine
that distributes the work around the cluster.
Usually the compute nodes and the storage
nodes are the same; that is, the MapReduce
framework and HDFS run on the same set of
nodes in the cluster.
How It Works, in Brief
To run a MapReduce job, Hadoop requires
a few objects: a Client to submit the
job (i.e., the combination of all classes
and JAR files needed to run a map/reduce
program), and a single master JobTracker,
a Java-based application that schedules
the jobs on the slaves, monitors them,
re-executes them in case of failure, and
synchronizes them. The slaves are the
cluster nodes that run the TaskTracker,
again a Java-based application, to
execute the tasks directed by the master.
A Job describes all the inputs,
outputs, classes, and libraries used in a
map/reduce program, whereas a program
that executes the individual map and reduce
steps is called a Task. The tasks are
executed on TaskTracker nodes chosen by
the JobTracker. The Hadoop application uses
HDFS (the Hadoop Distributed File System)
as its primary storage; it constructs multiple
replicas of data blocks and distributes them
over the cluster nodes. It is the storage where
the input and output files of Hadoop
programs are kept. HDFS provides very
high input and output speed, and high
bandwidth, by storing portions of files
scattered throughout the Hadoop cluster.
The steps involved in the working
of Hadoop MapReduce include job
submission, job initialization, task
assignment, task execution, streaming
and pipes, progress and status updates,
and job completion.
Consider the example of word
count, which simply reads an input text file
containing a number of words and counts
the number of times each word
appears. As Hadoop is built on HDFS
and MapReduce, this word count example
executes as a Hadoop application.
Executing it requires downloading and
installing Hadoop, which comes as an
archive to be unpacked. Once this is
done, the input files need to be
put into HDFS. In some cases this requires
first formatting a file system as HDFS.
After the system is formatted, the input
files are put into the file system.
Hadoop gives better performance with
single larger files rather than smaller files.
The short files should then be copied to HDFS.
This data will then be processed by the
MapReduce program, a Java file that
contains the map and reduce
algorithms. Here each mapper takes a line
as input and breaks it into words. Each
mapper then emits <key, value> pairs of
the form <word, 1>. The emitted pairs
are then "shuffled", which means that
pairs with the same key are grouped
together and passed to one machine to
be reduced by the reduce function. In the
word count example, to count the number of
occurrences of each word, the reduce function
performs an aggregate (a sum) over the values
of the collection of <key, value> pairs
which share the same key. Fig. 2
depicts the working of MapReduce for
word count. The table that follows compares
MapReduce with MPI as parallel programming
paradigms.
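The flow just described (map each line to <word, 1> pairs, shuffle by key, reduce by summing) can be simulated in a few lines of plain Python. This is only a single-machine sketch of the logic, not Hadoop's actual Java API; the input lines are the ones shown in Fig. 2:

```python
from collections import defaultdict

def map_phase(line):
    # Mapper: emit a <word, 1> pair for every word in the input line.
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group all pairs sharing the same key, as if routing
    # them to the same reducer node.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reducer: aggregate (sum) the values collected for one key.
    return key, sum(values)

lines = ["Ram Tom Harry", "Sam Harry Tom", "Harry Ram Sam"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(sorted(counts.items()))  # [('Harry', 3), ('Ram', 2), ('Sam', 2), ('Tom', 2)]
```

In real Hadoop the mappers and reducers run as separate tasks on different nodes, and the shuffle moves data over the network; the division of labor, however, is exactly this.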
Fig. 2: MapReduce word count. Three input lines ("Ram Tom Harry", "Sam Harry Tom", "Harry Ram Sam") are split, mapped to <word, 1> pairs, shuffled so that pairs with the same key come together, and reduced to the final result: Harry, 3; Ram, 2; Sam, 2; Tom, 2.
Conclusion
Hadoop MapReduce is an effective
programming model and software
framework for writing data-intensive
applications that speedily process vast
amounts of data in parallel on large
compute clusters.
Bibliography
[1] Apache Hadoop, http://en.wikipedia.org/wiki/Apache_Hadoop
[2] Chen, W Y, et al. (2011). "Parallel Spectral Clustering in Distributed Systems". IEEE Trans. Pattern Anal. Mach. Intell., 568-586.
[3] Dean, J and Ghemawat, S (2008). "MapReduce: Simplified Data Processing on Large Clusters". USENIX Association OSDI '04: 6th Symposium on Operating Systems Design and Implementation; Communications of the ACM, 107-108.
Parameters | MapReduce (MR) | MPI
Part/Whole | MR can be seen as a subset of MPI | MPI can be seen as a super-set of MR
Operations | Includes fewer operations compared to MPI | Includes many operations
Iterative Algorithms | Implemented using a sequence of jobs | Efficiently implemented
Nature | Convenient abstraction model for better fault tolerance in "parallel" implementation | More general and higher-performance model
Suitability | For problems that require solving huge data-intensive tasks; good at data parallelism; appropriate for data-intensive tasks; suitable for non-iterative algorithms where nodes require little data exchange to proceed | For problems that require lots of inter-process communication; good at task parallelism; appropriate for computation-intensive tasks; suitable for iterative algorithms where nodes require data exchange to proceed
Data Interconnection | Less correlated | More connected
Simplicity | Easy to learn | Distinctly complex
Modes of Communication | Communication between nodes by disk I/O | Communication between nodes by message passing
Fault-tolerance Mechanism | Better fault tolerance: if one node fails, the task is restarted on another node | Poor fault tolerance: if one process fails, all processes exit
Table: Comparison of parallel paradigms (MapReduce vs. Message Passing Interface)
[4] Dörre, J, et al. (2011). "Static Type Checking of Hadoop MapReduce Programs". Proceedings of the International Workshop on MapReduce and its Applications (MapReduce), ACM Press, 17-24.
[5] Hadoop MapReduce Wiki, http://wiki.apache.org/hadoop/MapReduce
[6] How MapReduce works with Hadoop, http://answers.oreilly.com/topic/2141-how-mapreduce-works-with-hadoop/
[7] http://wiki.apache.org/hadoop/WordCount
[8] http://hci.stanford.edu/courses/cs448g/a2/files/map_reduce_tutorial.pdf
[9] http://java.dzone.com/articles/hadoop-basics-creating
[10] http://stackoverflow.com/questions/1530490/what-are-some-scenarios-for-which-mpi-is-a-better-fit-than-mapreduce
[11] Lee, K H, et al. (2011). "Parallel Data Processing with MapReduce: A Survey". ACM SIGMOD Record, 40(4), 11-20.
[12] Lin, J and Dyer, C (2010). Data-Intensive Text Processing with MapReduce (manuscript of a book in the Morgan & Claypool Synthesis Lectures on Human Language Technologies).
[13] MapReduce Tutorial, http://hadoop.apache.org/common/docs/r0.19.2/mapred_tutorial.pdf
[14] Singh, V (2011). Hadoop in Action, Manning Publications, http://cyclo.ps/books/HadoopinAction.pdf
[15] Rao, B T and Reddy, L S S (2011). "Survey on Improved Scheduling in Hadoop MapReduce in Cloud Environments". International Journal of Computer Applications, 34(9), 29-33.
[16] web.cs.wpi.edu/~cs4513/d08/OtherStuff/MapReduce-TeamA.ppt
Article
Dr. Pramod Koparkar
Senior Consultant
Challenges of Software Reliability
I call somebody in the US from India. The call gets through as
per my expectations. I feel great relief. I thank all the scientists,
engineers, manufacturers, workers, and service providers; they
have made such a 'reliable' system. When this happens repeatedly,
I am very pleased. I say that the telephones are quite reliable
now-a-days.
I said the system had worked reliably when it did not fail
during my call (1) at the particular time and (2) between the two
particular places (India, US). I have also compared it (indirectly)
with my earlier experiences. I then used the phrase ‘now-a-days.’
This refers to time.
My call from Mumbai to New York has gone all right. However,
I may get a different experience when I call some remote place
like Akranimahal in the mountains or Vazakkulam in a rural area. The
reliability experienced may not be the same. Reliability gets
affected as the environment changes.
Defining Reliability
Reliability can be defined as ‘the probability that a given item will
perform its intended function for a given period under a given set
of conditions’. This definition has five parts[11]:
1. Probability is the likelihood that some given event will occur.
A number between 0 and 1 expresses it. We estimate this
number by some explicit means. We can use percentage also
(84.9% instead of 0.849).
2. The given item can be hardware, software, a TV channel, a car,
or even a human.
3. The intended function has to be defined by the customer and
the reliability engineer. What is success to one person may be
failure to another.
4. The given period is the mission time. It can be clock hours or
number of cycles.
5. Conditions refer to the operating and environmental
conditions.
The reliability statement is complete when all parts have been
provided.
Another name for reliability (R) is the ‘probability of success
(S)’. We also speak of ‘probability of failure (F)’ or ‘unreliability
(U)’. Mathematically
S + F = 1 and
R + U = 1.
We want every given system to be unconditionally reliable.
Its unconditional reliability goes down if it fails at certain time or
in certain environment. Of course, we may compromise and still
accept such a system “conditionally”. Under those conditions, the
reliability is still acceptable. As an example, the telephone lines
sometimes do not work during the day. Still we buy a telephone.
We can use it reliably during night hours. Thus, even a less reliable
system can still be useful. Of course, we always strive to achieve
unconditional reliability.
Component Reliability and System Reliability
Large systems are composed of smaller components. For any
system to be reliable, its every component must be reliable. Failure of
any single component causes the whole system to fail. Just a single
component can put the whole system at stake.
Logically speaking, this (reliability of every component) is
only a necessary condition. It may not be a sufficient condition[6].
Just making the failing component reliable may not solve the
problem. There can be other less reliable components still existing
over there. For the whole system to be reliable, all components of
it must be highly reliable, and that too, simultaneously.
Recall that reliability is nothing but the probability of being
correct. It is a real number lying between 0.0 and 1.0. Reliability 1.0
indicates that the component is working fully and satisfactorily.
Reliability 0.0 indicates that the component is failing without fail!
Consider a small system with just two components, C1 and C2.
Let their respective reliabilities be R1 and R2. The system will work
correctly when both C1 and C2 simultaneously work correctly. Thus,
the reliability of the system would be R1 × R2. The probabilities
multiply to represent simultaneity of events[3]. In a similar fashion,
consider a system with n components C1 … Cn. Let their respective
reliabilities be R1 … Rn. The system reliability (SR) would then be:
(1) SR = R1 × R2 × … × Rn
Different components are unlikely to have the same reliability, and
so the above expression becomes complicated from a practical
point of view. However, we may focus on their minimal reliability.
Let R be the minimum of all the Ri; that is, every component
is at least that reliable. Then Ri ≥ R for every value of i, and the
equation becomes an inequality:
SR ≥ R^n
Since our focus is on the minimal expected reliability of the whole
system (Minimal System Reliability, or MSR, in short), we replace
the ≥ by = and arrive at
(2) MSR = R^n
This MSR is the quantity in which our whole interest lies.
Conditional and unconditional reliability
Earlier, in the opening paragraph of this article, I used the word
'reliability'. I was actually referring to 'conditional reliability':
during the given time and between the given places, the telephone call
was reliable. It is 'the reliability within a specified period in a
specified environment', be it the reliability of a telephone system, or of
anything else.
As opposed to this, we also have 'unconditional reliability'. If anything
works reliably at all (every) times in all (every) environments,
it is unconditionally reliable. A failure at any single time or in any
single environment cracks that status and reduces the unconditional
reliability. The context generally makes it clear which is meant,
'conditional' or 'unconditional'; consequently, 'reliability' is used for both.
Impact of MSR
Let me numerically illustrate the impact of equation (2). Suppose
R is 0.8 (=80%). That means, we have successfully created every
component with reliability = 0.8 or more.
The following table shows the values of MSR against those of
n when R is held constant = 0.8.
n          1     2     3     4     5     6     7     8     9     10
MSR = R^n  0.8   0.64  0.51  0.41  0.33  0.26  0.21  0.17  0.13  0.11
You can see that just three components would bring down
MSR, the minimal expected reliability of the whole system, to 1⁄2
or 50%. Five components are enough to bring it down to 1⁄3 or
33.33%. Ten components are enough to bring it down near 1⁄10
or 10%.
When we talk about systems, we are typically considering
large and very large systems. Surely not just 3, 5, or 10 components.
How about the components with slightly higher reliability of,
say, 0.9?
The following table shows the figures with R = 0.9.
n          1     2     3     4     5     6     7     8     9     10
MSR = R^n  0.9   0.81  0.73  0.66  0.59  0.53  0.48  0.43  0.39  0.35
Of course, now the situation is slightly better. Still just 6
components would bring down MSR to nearly 50%. Moreover,
just 10 components would bring it down near 33%.
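Both tables above follow from the single expression MSR = R^n; a few lines of Python (our own check, not from the article) regenerate them:

```python
# System reliability MSR = R**n collapses as the number of serially
# required components n grows, even when each component's R looks good.

def msr(r, n):
    return r ** n

for r in (0.8, 0.9):
    row = [round(msr(r, n), 2) for n in range(1, 11)]
    print(f"R = {r}: {row}")
# R = 0.8: [0.8, 0.64, 0.51, 0.41, 0.33, 0.26, 0.21, 0.17, 0.13, 0.11]
# R = 0.9: [0.9, 0.81, 0.73, 0.66, 0.59, 0.53, 0.48, 0.43, 0.39, 0.35]
```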
Recall that R is a reliability, that is, a probability. As such, it
cannot exceed the value 1. When it is exactly 1, then R^n = 1 for any
n, and the total system also has reliability MSR = 1. Actually, that is
what we wish we had, and what we are striving for.
Otherwise, whenever R < 1 (i.e. R ≠ 1), from calculus we have
the famous result[8]
(3) lim (n→∞) R^n = 0 whenever |R| < 1.
In other words, however close you try to make R to 1, some number,
say N_0.5, will always exist such that R^n < 0.5 whenever n ≥ N_0.5. The
same is the case even if you worry about making R^n < 0.33: some (other)
number N_0.33 will exist such that R^n < 0.33 whenever n ≥ N_0.33.
In fact, for any given value k (k > 0), there always exists
some number N_k such that R^n < k whenever n ≥ N_k. This is a
consequence of equation (3), obtained by following the usual
ε–δ arguments in calculus[8].
It may be the case that in a large system only 5 or 6 components
are bad, with reliability 0.8, while the other components have very
high reliability, either 100% or almost close to it, say 99.99999%
(or so). It would not matter much: the impact of those 5-6
components would typically overwhelm the others. They bring
the product
R1 × R2 × … × Rn
spiraling down.
An important aspect in improving reliability is the 80/20 rule.
If you have industry or management experience, you may perhaps
already be aware of it[12]. It says: for many events, roughly
80% of the results come from 20% of the efforts. An important
consequence is that you need to spend the (other) 80% of the effort
to achieve the (remaining) 20% of the results. Moreover, the 80/20
rule is found to hold in many different aspects of life[5]. Software
reliability is also subject to this 80/20 rule.
Now you can visualize that the 80/20 rule acts as an
obstruction to improving reliability. Enormous efforts are required
to raise reliability above 80% (and then above 90%, too).
Applying Reliability to Software
Software products are more vulnerable than other
industrial, manufactured products, and special care is needed to
achieve reliability in software. All the characteristics of reliability
discussed so far apply to software reliability as well. On top
of that, software reliability is a different, more intricate ballgame.
A software product has typically (1) very large size, (2) complex
structure, and (3) complex functionality. Owing to this fact,
software reliability needs certain special considerations. Let us
understand the repercussions in detail. This section addresses
the first two. The next section discusses the consequences of
functional complexity.
Very large size
A million-lines-of-code is common in software. Suppose a person
starts reading it, say 1 line per second, and does it for 8 hours
a day. He/she can read only 60×60×8 = 28,800 lines in a day.
That takes 35 working days, or roughly 1½ months just to read it.
Forget understanding it; forget debugging it; and of course, forget
achieving very high reliability for the entire code in a single shot!
‘One million lines of code’ is a simple phrase to utter, but
difficult to manage. It demands very tight quality control for
ensuring reliability. The complexity of such large software is
handled in three ways: using modules, using hierarchy of modules/
sub-modules, and using object-oriented approach. Software
management is not our topic here, but the following remark would
be in order.
Often the components of a system occur in a hierarchy, or in
a recursive fashion like this:
A system consists of components.
Each of these components itself is like a system, and thus,
further consists of its own components.
Moreover, each of these components itself is like a system,
and thus, further consists of its own components.
… And so on.
Thus, at each level, all the remarks about reliability apply. In fact,
we need to apply reliability considerations recursively.
Structural complexity
Structural complexity typically arises due to the application at
hand. As an example, CAD/CAM software is intrinsically more
complex, than a railway reservation system. A railway ticket is
typically a simple concept, understandable by its purchaser (the
traveler) as well as the issuing clerk. Both of them are common
people with no extra special skill. What all can happen to a ticket
are rather simple actions: cancellations, pending in a waiting list,
confirmation etc. On the contrary, CAD/CAM software may be
used to design some complex mechanical part: a piston of a car
engine, or a crankshaft, or a propeller for an aircraft. The user
typically is a trained mechanical or aerodynamic engineer. He/
she not only wants to see a picture of the part, but also wants
to know about other things such as the density of material,
moment of inertia, temperature tolerance, reaction with the
lubricant material etc. The CAD/CAM software must have all
the routines (functions, procedures, methods, macros etc.)
to do these kinds of tasks. These routines must be coherently
cooperative with each other. This we refer to as the ‘intrinsic’
structural complexity.
Once again, software management is not our topic here, but
the following remark would be in order:
Structural complexity is often handled through object-oriented
technologies. This allows an easy plug-in of some
(existing) software into another software (either existing or under
development). Of course, such acts of handling complexity also
increase the size of the software. That is,
Complexity ⇒ Largeness
However, converse of this may not be true. Largeness may not
imply complexity necessarily.
Functional Complexity
Functional complexity is more of a ‘combinatorial’ nature than
‘intrinsic’. You may write many —say 1,000, or even 10,000—
routines (functions, procedures, methods, macros etc.). However,
you follow the strict discipline that no routine contains more than
(say) 25 lines of code (excluding comments etc.). Correspondingly,
larger code portions are suitably divided into portions of 25 lines
each. You thus control the structural, intrinsic complexity. These
routines call each other. However, their calling sequence depends
on the total functionality of the software.
Effect of sequencing
Suppose the total software has n routines, out of which you are
using m routines. Then there are nPm, or n!/(n−m)!, possible calling
sequences. Different combinations of simple, short 25-line routines
make the overall software complex. This is functional complexity;
it is combinatorial in nature. Depending on the situation, different
executions of the program may choose a different sequence of
calls each time.
The reliability of the total software does not depend on
the structural complexity alone; it also gets
affected by the functional calling sequence. As a simple example,
consider first dividing a given number x by a very large number
y and then multiplying the result by another large number z, i.e.
(x/y)*z. Mathematically, the same answer would result even if
we first multiply by z and then divide by y, i.e. (x*z)/y. However,
numerically, the second sequence is more stable than the first,
because the intermediate result is of considerable size[1].
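Even with plain double-precision numbers the effect is easy to demonstrate: dividing first can underflow the intermediate result to zero, while multiplying first keeps it representable. A small Python illustration (the magnitudes are our own, chosen to force the underflow):

```python
# The same mathematical expression, computed in two call orders.
x, y, z = 1e-30, 1e300, 1e290

first = (x / y) * z    # x / y is 1e-330: below the smallest double, it underflows to exactly 0.0
second = (x * z) / y   # x * z is 1e260: representable, so the true result (about 1e-40) survives

print(first)   # prints 0.0
print(second)  # prints a value very close to 1e-40
```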
You may not appreciate this as an example of the impact of calling
sequence; then think of x, y, and z as some other kind of numbers.
Many examples of such quantities exist:
Complex[2]: x = x0 + ix1, etc.
Quaternion[10]: x = x0 + ix1 + jx2 + kx3, etc.
Interval[7]: x = [x0, x1], etc.
For these, suitable * and / operations are defined as routines
(typical functions multiply and divide). The above sequences to
calculate a result r using a temporary variable t would be written
like this:
First choice:
t = divide (x, y);
r = multiply (t, z);
Second choice:
t = multiply (x, z);
r = divide (t, y);
Consequently, while considering reliability, we must account for
the sequence of calls made to routines.
Effect of execution
Let us focus on another important aspect, namely, the effect of
software execution on reliability. Execution of one routine may
leave some effect on the state of the program (state means the values
of all variables considered together). This may change the course
of action of the following executions of that or any other routine.
As an example, consider any sorting algorithm. It compares
two entities at a time, in sequence. If they are not in order, it swaps
them using a specially written routine named 'swap'. When and
how the swap routine will be called cannot be known a priori; it all
depends on the current values of the entities. They may or may
not be swapped. In the extreme cases:
• The swap routine will never be called if the entities are already sorted in the proper order.
• The swap routine will be called for each pair if the entities are already sorted, but in the reverse order.
Now consider another example. Suppose we have a routine C with
reliability 0.8, and suppose it is called 247 times during the whole
execution of the program. Then its contribution to MSR would be
0.8^247. The effect is the same as if we had software with
247 routines, each with reliability 0.8.
This observation has an important consequence. We must
not count only the routines as components; we must
count every execution of every routine as a component. It is
useless to distinguish between the two.
Accordingly, we need to enlarge our definition of component.
In the rest of this article, I shall use the word 'component'
for a structural component (routine) as well as for a functional
component (an execution of a routine).
Achieving Software Reliability
Day by day we attempt to write bigger software. We also want
to use it to do things that are more intricate. Generating an
image on the screen of your computer is a common task. The
screen resolution typically exceeds a 1,000×1,000 pixel array.
Consequently, image generation demands millions of calls to the
routines that handle pixel composition. Thus, it is quite common
that the n in R^n goes into the millions.
Recall that achieving better reliability boils down to dealing
with R^n and not letting it go down too much from 1.0. Since the
largeness of n is inevitable, we are forced to keep R as close to
1.0 as possible. In fact, R should be very, very close to 1.0. Let me
again resort to numerical calculations. Earlier I discussed two
tables, which showed how R^n shoots down as n increases while R is
held constant. Now let me present two more tables. In these, the
desired total reliability of the system, MSR, is held constant at 0.9
or 0.8. Recall that R is the minimal reliability that we can achieve
for every component. For the given values of R, the tables
below evaluate the (lowest) n such that R^n sinks below the desired
total reliability MSR (= 0.9, or 0.8). This is done by solving for n in
terms of R and MSR, that is,
n = log MSR / log R.
Table for MSR = 0.9
R  0.9  0.99  0.999  0.999,9  0.999,99  0.999,999
n  1    10    105    1,053    10,535    105,360
Table for MSR = 0.8
R  0.9  0.99  0.999  0.999,9  0.999,99  0.999,999
n  2    22    223    2,231    22,314    223,143
Please do a careful examination of these tables. We may ensure
a very high reliability R of 0.999,999 for every routine. Still
we run the risk of dropping the total reliability MSR below 0.9 (or even
below 0.8). Of course, the number of components (i.e., executions
of routines) that makes this happen is 223,143. This looks large, but
such an n is quite possible in today's software.
Where are We in Light of Six-Sigma (6σ)?
The Six-Sigma movement has generated enormous interest in the business
world. What does the term signify exactly? Without going into
details of 6σ theory, let us see what it means for our purpose. The
standard table for σ values[9] reads like this:
The first row shows σ values from 6σ to 1σ. The second row
expresses how many failures are allowed out of 1,000,000 (1M)
trials. The third row expresses how many successes are required,
expressed in terms of fraction. This is nothing but the reliability by
definition. Fourth row expresses the same reliability as percentage.
σ            6            5         4       3       2        1
In 1 M       3.4          233       6,210   66,807  308,537  690,000
Reliability  0.999,996,6  0.999,77  0.9938  0.9332  0.6915   0.3085
%            99.999,66    99.977    99.38   93.32   69.15    30.85
The above table expresses σ in terms of the reliability. However,
for our purpose, we have certain values of R in mind, and the
question we ask is ‘what σ does it correspond to?’ The inverse
table, for reliability values of our interest, reads like this:
R as %   σ
95       3.2
90       2.8
80       2.3
70       2.1
60       1.75
Thus, it is quite evident that even achieving 95% reliability is not that
impressive on the 6σ scale; it corresponds to just 3.2σ. Therefore, we
anyway have to struggle for 5σ, or at least 4σ.
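The inverse table can be approximated from the standard normal distribution, assuming the conventional 1.5σ long-term shift used in Six-Sigma tables (my sketch, not from the article; values agree with the table above to within rounding):

```python
from statistics import NormalDist

def sigma_level(reliability: float) -> float:
    """Six-Sigma level for a given success rate, with the usual
    1.5-sigma long-term shift convention."""
    return 1.5 + NormalDist().inv_cdf(reliability)

for r in (0.95, 0.90, 0.80, 0.70, 0.60):
    print(f"R = {r:.0%}  ->  about {sigma_level(r):.2f} sigma")
```

As a sanity check, a reliability of 0.999,996,6 (3.4 failures per million) maps back to very nearly 6σ under this convention.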
The two examples below make the implications more
explicit. These are expressed in 6σ terminology. Note that the
word ‘component’ refers to structural component (routine) as well
as functional component (call to routine).
Example 6σ/5σ
Suppose we want to achieve 5σ for our total system. That is, we
set MSR = 0.999,77. Suppose we are able to achieve the minimal
component reliability R at 6σ level. That is R = 0.999,996,6. Using
the formula
n = log MSR/ log R,
we get
n = 67.6547244,
i.e. n = 68
That means, even though we have achieved 6σ level for R, just 68
components are sufficient to push our total reliability MSR below
5σ level.
Example 6σ/4σ
Suppose we want to achieve 4σ for our total system. That is, we
set MSR = 0.9938. Suppose we are able to achieve the minimal
component reliability R at 6σ level. That is R = 0.999,996,6.
We get
n = 1829.20,
i.e. n = 1830
That means, even though we have achieved 6σ level for R, just
1830 components are sufficient to push our total reliability MSR
below 4σ level.
Epilogue
Unfortunately, this whole article has an alarming and somewhat negative
tone. However, I am not the only one with a negative tone about software
reliability. In the opening of the 13th chapter of his famous book[4]
‘The Mythical Man-Month,’ even Prof. Frederick Brooks Jr. has this
interesting exchange:
“I can write programs that control air traffic, intercept ballistic
missiles, reconcile bank accounts, control production lines.” To which
the answer comes, “So can I, and so can any man, but do they work when
you do write them?”
I think this exchange sheds good-enough light on the reliability
of software.
References
[1] Acton, F S (1971). Numerical Methods that Work, Spectrum.
[2] Ahlfors, L V (1979). Complex Analysis: An Introduction to the Theory of Analytic Functions of One Complex Variable, 3rd Ed.
[3] Feller, W (1971). An Introduction to Probability Theory and Its Applications, Vol. 1, John Wiley.
[4] Frederick Brooks Jr. (1995). The Mythical Man-Month, Addison-Wesley, Chapter 13.
[5] Koch, R (2001). The 80/20 Principle: The Secret of Achieving More with Less, London: Nicholas Brealey Publishing.
[6] Logic: www.math.umn.edu/~jodeit/course/ACaRA01.pdf
[7] Mudur, S and Koparkar, P (1984). “Interval Methods for Processing Geometric Objects”. IEEE Computer Graphics and Applications, 4(2), 7-17.
[8] Narayan, S (1962). Differential Calculus, S. Chand and Co.
[9] Pande, P and Holpp, L (2002). What is Six Sigma? Tata McGraw Hill Publishing, New Delhi.
[10] Quaternions: http://en.wikipedia.org/wiki/Quaternion
[11] Reliability: http://kscsma.ksc.nasa.gov/Reliability/Documents/whatReli.pdf
[12] The 80/20 Rule: http://en.wikipedia.org/wiki/Pareto_principle
About the Author
Dr. Koparkar received his Ph.D. in Computer Science in 1985. Since then he has published over 20 research papers in prestigious international journals and conferences, mainly in the areas of Geometric Modelling, Image Synthesis, and Geometric Shape Processing in 2-D and 3-D. He has served on international journal editorial boards and international conference program committees. He has visited several organizations in different countries to deliver lectures, develop software, and present research papers.
He has been on various academic advisory committees at the university and government levels in India. He has worked in research institutes such as TIFR and NCST, and in corporations including Citicorp, Computer Vision, ADAC Laboratories (USA), and 3-dPLM/GSSL (India).
He has written four books: Unix for You, Pascal for You, Java for You, and C-DAC Entrance Guide. At present, he offers consultancy to corporate clients on various latest technologies.
Article
Balasaheb Ware,* Vinod Kumar Garg,** and Gopal Ranjan***
*Assist. General Manager, Datamatics Global Services
**Certified project management professional PMP®
***Global Head - Corporate Quality (Datamatics Global Services Ltd.)
Optimization of Customer Data Storage
Background
Knowledge process outsourcing, or KPO as it may be called, is the
current trend wherein customers, mostly in countries such as the USA
and Canada, outsource their core business activities to developing
countries like India, the Philippines, Brazil, etc. The reasons behind
KPO are skilled labor, additional value creation, and, most
importantly, cost reduction. KPO services include different types of
research and information gathering: equity, business, and market
research, transcription services, and consultancy. Mostly, this kind
of business requires the vendor to retrieve, process, store, and send
data back to the customer within a mutually set TAT (turnaround time).
The data, which resides centrally, needs to be retained for a
prescribed period of time as defined by customers in their SLAs.
Generic Procedure
✔ Customer’s data is received in soft copy format using various secure data transmission methodologies. These methodologies are as under:
   • SFTP (Secure File Transfer Protocol)
   • IPSEC (Internet Protocol Security)
   • VPN (Virtual Private Network)
✔ Data downloaded is then stored on a central location for processing.
✔ Users perform processing of this data using various in-house data conversion tools. Data processing is entirely based on the customer’s requirement. Processed data is kept back on the central repository. Most of the users need to retain a soft copy of raw data along with processed data on the central repository for some time, or at least for the period specified by the customer in their SOW (Statement of Work). In fact, in some cases it is required to be retained for more than a year.
✔ These data will be backed up on a daily as well as weekly basis. The backup tapes are kept at remote off-site locations. There are cases wherein customers provide data that requires references to historical data; in such cases, making historical data centrally available becomes mandatory, as it becomes easier to process current data by referring to historical data. Such data also needs to be retained on the central repository.
This detailed data flow is described in Fig. 1.
Fig. 1: Datamatics customer data storage architecture (customers connect over IPSEC VPN, SFTP access, and SSL through the ISP cloud to the central repository, which holds un-processed and processed data for the users; backups move on DLT tapes to an off-site location)

Pain Area
A survey conducted for one of the leading KPO organizations revealed
that the data stored on the central repository was increasing over a
period, as is evident from the graphical view in Fig. 2.
It was observed that the data storage needs have been consistently
growing due to the following:
1. Increase in customer base.
2. Bulk quantities of data being processed.
3. Variety of data stored on the central repository.
These have resulted in exhausting the space available on the central
repository.
It is evident from the graph that in a short span this would have
resulted in exhausting the entire space, thereby creating a requirement
for additional space. The existing repository had already crossed its
limits. Therefore, the only option left was to procure an additional
set of data processing resources comprising a new server, licenses, a
data backup device, additional tapes, and off-site space to store the
additional tapes. This would inevitably have resulted in investing
heavily in an additional set of IT resources and a subsequent huge
increase in operational cost.
A Six Sigma methodology comprising the phases Define, Measure, Analyze,
Improve, and Control (DMAIC) was followed to identify the root causes.

Root Cause Analysis

Cost Break-down Structure Diagram
Fig. 3 shows the various factors responsible for increasing the overall
cost of the data storage.

Improvement Strategy
The improvement strategy was based on identification of data patterns:
their longevity, their type, and whether they were required on the
central repository. Removing or taking such data away frees data
storage space. Data patterns were identified in terms of longevity,
type, and requirement on the central repository using certain tools,
and the data was taken away or deleted, which resulted in additional
free space.

Fig. 2: Survey showing data stored on the central repository increasing over a period of time for a leading KPO (weekly data size rising from about 3.5 TB to about 6.5 TB over weeks 1-45)
Fig. 3: Various factors responsible for increasing the overall cost of the data storage (a cost break-down diagram spanning hardware, software, people, environment, quality of data, data download, data handling, and awareness: server, backup device, tape, and storage costs; AMC, installation, and insurance costs; antivirus software, software licenses, and software assurance; user and technical training; housekeeping of data; customer contracts for data retention; and the cost of missing or accidentally deleted data)

In addition, the following strategies were also suggested:
• Re-configuration of the server to avoid unnecessary creation of dummy files and storage of data wasteful in nature.
• Limiting the directory structure to a certain level instead of the previous unlimited structure.
• Bringing up the required level of awareness among users processing the data and administering it.

For the same organization, the improvement strategy brought an overall
improvement in data storage that resulted in overall cost savings, as
is clearly evident from Fig. 5.

Statistics: Post-improvement Quantification of Benefits

Resultant Good Practices
The following good practices were identified in the areas of data
storage:
1. Consistent data monitoring, by the users processing the data and by the data administrators, in terms of its longevity will help in controlling data growth.
2. Control of unwanted data by the data owner and data administrator will certainly improve the overall processing speed and hence save human resources.
3. Optimizing the data storage architecture by tuning it at regular intervals, in line with business needs and requirements, will considerably help in maintaining maximum data storage.

Fig. 4: Storage space fishbone diagram (causes grouped under hardware/OS failure, nature of data, poor housekeeping, user awareness, attitude, and technical competencies; for example, old systems with no AMC in place, lack of antivirus scanning, privilege rights issues, high attrition, redundant data, budget constraints, and storage treated as a low priority)
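The longevity-based housekeeping described in this article relies on unspecified tools; a minimal sketch of such a tool follows, where the repository path and the 365-day retention period are hypothetical illustrative values, not Datamatics settings:

```python
import os
import time

def files_past_retention(root: str, retention_days: int) -> list[str]:
    """List files under root last modified before the retention cutoff."""
    cutoff = time.time() - retention_days * 86400
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                stale.append(path)
    return stale

# Candidates for archival to tape or deletion per the SOW retention period.
# The path and period below are illustrative only.
for path in files_past_retention("/data/central_repository", 365):
    print(path)
```

In practice such a sweep would archive candidates to tape before deletion, with the retention period taken from each customer's SOW.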
Fig. 5: Survey showing overall improvement in data storage for a leading KPO (weekly data size in TB declining after the improvement strategy was applied)

About the Authors
Balasaheb Ware, B.E. Electronics, Lead Auditor 27001, CEH, works with Datamatics Global Services as Assistant General Manager. He is responsible for overall information security at Datamatics Global Services Ltd globally.
Prof. V K Garg, PMP®, is a gold-medalist bachelor of engineering and M.Tech from IIT Delhi. He is a certified project management professional (PMP®). He carries 35 years of rich experience in the field of Information Technology and has been on the boards of various companies for more than 10 years.
Gopal Ranjan, B.Tech, MBA, Master Black Belt, works with Datamatics Global Services Ltd as Global Head - Corporate Quality and is responsible for Six Sigma, quality management, ISMS, and corporate initiatives.
Article
Hema Ramachandran* and Bindu G R**
*Research Fellow at the College of Engineering, Trivandrum
**Associate Professor in Electrical Engineering, College of Engineering, Trivandrum
WiTricity - The Wireless Future
Though communication systems have gone wireless, power cords still
occasionally tie down our modern gizmos with wires (during charging).
These power cords may also vanish soon. A real wireless world is
around the corner.
Imagine keeping your mobile phone on the
bedroom table (see Fig. 1) to get it charged
without any plugging of any sort! Imagine
driving your electric car into the garage
and its battery starting to get charged
(see Fig. 2), receiving power from the
mains without any tangible connections!
These are the promises of WiTricity the emerging technology of wireless
electricity.
Fig. 1: Mobile devices in handbag being
charged using wireless charging pad
(Courtesy: www.pocket-lint.com/ecoupled)
Fig. 2: Electric vehicle charging using WiTricity
(Courtesy: themonki.com/gadgets/wireless-electric-carcharging)
WiTricity is an unrealized dream of Nikola Tesla, the wizard of the
West who is well-known for the invention of AC power transmission and
the induction motor. Had Tesla lived to see the current times, it
would have been blissful for him. The wireless electricity which Tesla
dreamt of and worked hard to realize without success is taking wing
again, this time pet-named WiTricity. In the early twentieth century,
Tesla had to abandon his thoughts in spite of his extensive efforts,
which led to the construction of the huge Wardenclyffe Tower in Long
Island, USA, to implement his idea of exploiting the earth’s magnetic
field to generate power for wireless power transmission.
After the advent of mobile phones in
the 1990s, the dream of wireless power
has come back to life in a natural way. In
the modern IT era, wireless technology
is the in thing as demonstrated by
mobile phones, wireless Internet, and
Bluetooth technologies. A technology
to enable efficient wireless transfer of
power over midrange will enable devices
such as laptops and mobile phones to
go completely wireless as their charging
will be done in a wireless mode. A large
number of consumer electronic goods
will also be enhanced by such systems.
Household robots, toys, car batteries, and
other portable electronics can be equipped
to charge themselves without being
plugged. A surge in interest in wireless power was created in 2007
through a successful experiment at the Massachusetts Institute of
Technology led by Marin Soljacic. This experiment proved beyond doubt
that wireless power transmission is a reality, with certain
limitations. Though limited to midrange
transmission, devices like cell phones and
laptops which regularly need charging
from the power socket, can exploit this
safe and reliable technology. WiTricity can
perhaps wipe out the last wire of charging
devices of the wireless age. A number
of investigations are going all around
the globe to take forward this small
achievement that has been demonstrated
at MIT. We give here a basic introduction
to the concept of WiTricity, its behavior,
limitations, applications, and current
efforts to take it forward.
WiTricity systems basically consist of
a power transmitting device and a receiving
device, both specifically designed to achieve
power transfer through the phenomenon
of magnetic resonant coupling. Marin Soljacic, along with MIT physics
professors John Joannopoulos and Peter Fisher and a team of students,
devised a simple set-up to power a 60-watt light bulb (see Fig. 3).
Fig. 3: The MIT experiment (Courtesy: MIT website)
They
built two copper coils of 60 cm diameter
and hung them from the ceiling about two
meters apart. In one coil a high frequency
(10MHz) alternating current was passed
to create a magnetic field. The second
coil was tuned to resonate to the same
frequency as that of the first coil. A light
bulb connected to the second coil was lit
even when nonmetallic obstructions were
placed between the coils. The experiment
achieved 50% efficiency over a distance of
2m. It may be noted that at MHz frequency
range the coils would produce near and far
fields. WiTricity utilizes the nonradiative near field only. Many
other techniques under consideration for wireless power transfer rely
on radiative fields. The idea of space-based solar power stations, for
instance, considers directed microwave or laser beams to transmit
power to receivers on earth.
The strength of a near field would
diminish very fast with distance (inversely
proportional to the square of distance).
At this point of time, the prospects of
increasing the distance of transmission
seem to be dim in terms of acceptable
efficiency and cost. This however need
not mean that WiTricity has no potential
for real-life use. Even when power
transmission distances are extremely
small, the wireless effect can be exploited
to obtain wire-free charging in many
situations. This is already used on an
industrial scale such as in the case of
charging of electric tooth brush. The tooth
brush is placed in a cradle so that coils in
the toothbrush and cradle come within
a range of a few millimeters and power
transfer is realized.
www.csi-india.org
Fig. 4: Desktop WiTricity system in operation photographed without camera flash (left)
and photographed with camera flash (right). Transmitter coil wound on the rim of the cane
table is marked T and the receiver coil is marked R. The receiver coil powers the table lamp
as well as the decorative LEDs at the base of the lamp
The authors have experimented with a desktop WiTricity system. We set
a transmission coil right beneath the rim of a 1 m diameter coffee
table and had the receiver coil wound at the base of a table lamp
which used white LEDs. We calculated the self-inductances and parasitic
capacitances of the coils (which are significant at the high
frequencies at which WiTricity systems operate). Resonant coupling was
achieved by adding external capacitances to both the coils. As the
distance between the transmitter and receiver coils was negligible,
the table lamp could receive power efficiently from the transmitter
coil. The lamp could be moved around the table without any variation
in intensity. Even when it is lifted for a distance of a few
centimeters, the lamp maintains reasonable brightness. This is an
example of the utility of a WiTricity system for household
applications. Fig. 4 shows the desktop WiTricity system. Scaling up
this system to wireless powering of a whole room is worth
investigating.
The coils used in WiTricity systems typically operate from 1 MHz to
10 MHz. Even though we utilize the near field, a far field is also
produced by these antennae. These waves travel far, and it may be
prudent to reserve a frequency band for WiTricity systems.
A concern that may pop up naturally with WiTricity systems is about
health hazards. The near field is the powerful one, and it is in a
relatively safe frequency range (1-10 MHz). This need not raise a
major concern, as in the case of mobile phones. However, studies need
to be conducted to investigate the effects on human tissues in detail
if WiTricity systems are to go into large-scale use.
A small number of players are already in the field of commercializing
WiTricity. Intel (www.intel.com) has its version of WiTricity, called
Wireless Resonant Energy Link (WREL), and has demonstrated the
equivalent of the MIT experiment by lighting a 60-watt bulb. WiTricity
Corp (www.witricity.com) was born out of the MIT team which triggered
the recent attention on the subject. They are aiming at developing a
range of wireless devices with power delivered at high efficiency and
at distances over room scale. Wireless Power Consortium
(www.wirelesspowerconsortium.com), WildCharge (www.uk.wildcharge.com),
WiPower (www.wi-power.com), and Fulton Innovation
(fultoninnovation.com) are some of the other players in the field.
For Further Reading
[1] Karalis, A, et al. (2008). “Efficient wireless non-radiative mid-range energy transfer”. Annals of Physics, 323, 34-48.
[2] Kurs, A, et al. (2007). “Wireless power transfer via strongly coupled magnetic resonances”. Science, 317, 83-86.
[3] Ramachandran, H and Bindu, G R (2012). “Prototype of a Desktop WiTricity System for Powering Household Utility Equipments”. International Journal of Advanced Science, Engineering and Information Technology, 2, 49-52.
About the Authors
Hema Ramachandran, Speed IT Research Fellow at the College of Engineering, Trivandrum, holds an MTech
and a BTech in Electrical Engineering from the University of Kerala, and also an MPhil in Futures Studies from the
same university. She has taught at NSS College of Engineering, Palakkad and University College of Engineering,
Karyavattom, where she also served as Principal for a brief period. She also had a brief stint in the software
industry in Trivandrum Technopark. She has authored a book on Scilab, published by S Chand & Co, New Delhi in 2011.
G R Bindu, Associate Professor in Electrical Engineering, College of Engineering, Trivandrum, took her MTech
degree in 1992 and PhD in 2006 from the University of Kerala. She worked as an Engineer in KERAFED and also as
a faculty member in various engineering colleges. Her areas of special interest are electromagnetic field theory
and control and condition-monitoring of electric drives. She has a number of research publications to her credit,
including in the prestigious IEEE Transactions.
IT is complicated.
IT governance
doesn’t have
to be.
®
DOWNLOAD COBIT 5 TODAY!
Delivering thought leadership and guidance from business and IT leaders worldwide,
COBIT 5 takes the guesswork out of governing and managing enterprise IT.
It’s the most significant evolution in the framework’s 16-year history. COBIT 5 now
provides the business view of IT governance, reflecting the central role of both
information and technology in adding enterprise value. It also integrates other
approaches, such as ITIL® practices and ISO standards.
IT is getting more complex by the day. Who says IT governance has to?
Take advantage of the only business framework for the governance and
management of enterprise IT. To learn more about COBIT 5, please visit our web
page today at www.isaca.org/cobit5-CSI.
COBIT® is a registered trademark of ISACA. ITIL® is a registered trademark of the Cabinet Office.
All other trademarks and company names mentioned are the property of their respective owners.
Article
Avinash Kadam [CISA, CISM, CGEIT, CRISC]
Advisor to the ISACA India Task Force
Why Do We Need the COBIT 5 Business Framework?
Introduction
In today’s complex world, there are a
number of standards and frameworks
which are issued by various institutions
with their own specific objectives. Some
of the prominent ones among this plethora
of standards and frameworks are ITIL,
ISO27001, PMBOK and TOGAF. Each of
these is designed to meet the specific
requirement of the user community.
Additionally, each has a specific depth and
breadth of coverage in a specific focused
area. There was no single comprehensive framework that could serve as
the one overall holistic framework, integrating other standards and
frameworks, covering the enterprise end to end, and meeting the needs
of all stakeholders. The COBIT
released COBIT 5[1] is the comprehensive
business framework created by ISACA
for the governance and management of
enterprise IT. COBIT 5 is the one single,
integrated framework which integrates
and aligns with other frameworks and is
focused on enabling the goal of meeting
the business requirements. This article will
provide an overview of the five principles
of COBIT 5 and will explain why the COBIT
5 framework is indispensable for every
enterprise using IT for its business.
What Is a Framework?
“Framework is a real or conceptual structure
intended to serve as a support or guide for
the building of something that expands
the structure into something useful”[2].
We need frameworks as they provide a
structure for consistent guidance. So, if we
need guidance about information security,
we use ISO 27000 series of standards
that together constitute an information
security framework. If we need to design
IT-enabled services, we use ITIL to provide
guidance. Similarly, when it comes to
project management, we use PMBOK. For
enterprise architecture, we use TOGAF. All
these niche standards can be integrated
under the umbrella framework of COBIT 5.
COBIT 5 is a holistic business framework
for the governance and management of
the enterprise IT in its entirety. The COBIT
5 framework is based on five principles
which are explained hereafter.
Principle 1: Meeting Stakeholder Needs
An enterprise has a number of
stakeholders, both internal and external.
The five COBIT 5 principles are:
1. Meeting Stakeholder Needs
2. Covering the Enterprise End-to-end
3. Applying a Single Integrated Framework
4. Enabling a Holistic Approach
5. Separating Governance From Management
(Source: ISACA, COBIT 5, 2012, www.isaca.org/cobit. Used with permission)
For example, a bank has management
and employees who are the internal
stakeholders, and customers, partners,
suppliers, government and regulators
are the external stakeholders. These
stakeholders have different and sometimes
conflicting needs. Employees want job
security, management wants productivity,
customers want stability of the bank
and good returns on their investments
and regulators want strict adherence to
the regulations and laws. The decision
of the bank to invest in modernisation
of IT to provide online banking facilities
will have different meanings for different
stakeholders. Employees will be worried
about their jobs, management will be
concerned about the selection of the
right technology and quick returns on the
investment, customers will be happy that
they will get better service but, at the same
time, worried about security and privacy
of their information, and regulators will
be keenly watching whether the bank is
complying with all the regulations.
To meet the diverse requirements
of internal and external stakeholders,
it is critical to keep in mind not only
the management perspective, but also
the governance perspective, when
implementing IT. The objective of
governance is to make a balanced decision,
keeping all stakeholders’ interests in mind.
The governance team represents all the
stakeholders and is composed of the board
of directors headed by the Chairman. The
ultimate objective of governance is to
create value for the enterprise. This value
creation leads to benefit realisation for the
enterprise. Not all stakeholders can be
happy with every decision. Governance is
about negotiating and deciding amongst
different stakeholders’ value interests.
Every decision will have a different impact.
For example, adoption of cloud computing
for banks will reduce investment in
infrastructure, thereby reducing capital
investment and increasing profitability.
However, it will increase the security
concerns for customers. Regulators will be
concerned about the location of the data
and whether there is a cross-border flow
of customer information in breach of the
IT Act. So governance has to optimise not
only the resources but also the risks to
realise the benefits. At the same time, it
also has to do a balancing act of keeping
all the stakeholders’ needs in mind while
pursuing the goal of value creation.
How Is This Accomplished by
COBIT 5?
COBIT 5 has identified a large number
of stakeholders’ questions for such
situations. These questions lead us to
the selection of the enterprise goals.
How can a framework know what goals
an enterprise may have? COBIT 5, as a
business framework, uses the approach
of the balanced scorecard (BSC). As
per BSC principles, an enterprise has to
balance its goals in four dimensions: financial, customer, internal, and learning
and growth. An enterprise that has only
financial goals, but no goals from the
remaining three dimensions, might soon
fail as its goals are not balanced.
In our example of modernizing IT for
the bank, the enterprise goals could be:
Financial dimension:
1. Managed business risk (safeguarding
of assets)
2. Compliance with external laws and
regulations
Customer dimension:
1. Customer-oriented service culture
2. Agile response to a changing
business environment
3. Business service continuity and
availability
Internal dimension:
1. Optimisation of business process
functionality
2. Optimisation of business process
costs
3. Operational and staff productivity
Learning and growth:
1. Skilled and motivated people
2. Product and business innovation
culture
These enterprise goals are business
oriented and required for enterprise
governance. We need to convert these
into IT-related goals that can be pursued
for IT governance. COBIT 5 provides a
matrix to relate enterprise goals with IT-related goals. The
IT-related goals again are based on the BSC principle. Using the
matrix, we can identify the following IT-related goals.
Financial:
1. Alignment of IT and business strategy
2. IT compliance and support for
business compliance with external
laws and regulations
3. Managed IT-related business risk
4. Realised benefits from IT-enabled
investments and service portfolio
5. Transparency of IT costs, benefits
and risk
Customer:
1. Adequate use of applications, information and technology solutions
Internal:
1. IT agility
2. Security of information, processing infrastructure and applications
3. Optimisation of IT assets, resources and capabilities
4. Enablement and support of business processes by integrating applications and technology into business processes
Learning and growth:
1. Competent and motivated IT
personnel
2. Knowledge and expertise and
initiative for business innovation
It is not necessary to simultaneously
pursue each and every one of these goals.
Governance is also about prioritisation.
The bank can select specific goals to be
pursued on higher priority. Armed with
the selected IT-related goals, we can then
identify specific enabler goals from the
seven enablers identified by COBIT 5.
These enablers are listed under principle
no. 4 below. Specifically, the enabler no. 2,
“processes”, provides a detailed mapping
of IT-related goals with governance and
management processes. This helps in
selecting the right processes and practices
to achieve these IT-related goals. There are a total of 37 processes to guide us.
Principle 2: Covering the Enterprise
End to end
In the earlier days of adoption of
computers, the IT department was
responsible for the ‘IT function’. The
data was sent to the IT department
and processed reports were sent back. This is no longer the case. Information
has become one of the critical assets
of the organisation and it is rightly said
in the information age: information is
the currency of the enterprise. Every
action and decision depends on the
availability of the right information at
the right time. COBIT 5 has taken this
view and integrated governance of
enterprise IT into enterprise governance.
It not only focuses on the IT function,
but also treats information and related
technologies as assets like any other
asset for the enterprise. This enterprise-wide approach is possible by providing
enterprise-wide governance enablers
such as having a uniform framework,
principles, structures, processes and
practices. It also requires considering
the enterprise’s resources, e.g. service
capabilities, people and information.
Information itself is a key enabler. Every
stakeholder has different needs for
information. A bank customer will require
very specific information. The banker will require a different type of information to perform the task. COBIT 5 enables
every stakeholder to define extensive and complete requirements for information and
its life cycle. This helps the IT function
to identify and support all stakeholders’
needs for information.
COBIT 5 also provides detailed roles, activities and relationships between stakeholders, the governing body, management, and the operations and execution teams, to give a clear idea of accountability and responsibility and avoid any confusion.
This is done by providing RACI charts
(Responsible, Accountable, Consulted
and Informed) for each key governance
and management practice.
Principle 3: Applying a Single
Integrated Framework
ISACA, a non-profit global association of
100,000 IT professionals in 180 countries,
has always strived to create best practices
for the IT profession. It has been a
collaborative effort of numerous experts
and practitioners. The collective efforts
created a number of valuable frameworks
such as COBIT 4.1, Val IT 2.0, Risk IT
and the Business Model for Information
Security (BMIS). All these frameworks
and models have now been integrated
in COBIT 5, a comprehensive business
framework at a macro level. However, this
does not preclude the use of other niche
standards and frameworks dealing with
specialised areas which can be integrated
under COBIT. COBIT 5 aligns itself very
well with other relevant standards and
frameworks such as ISO 27000, ITIL, PMBOK and TOGAF so as to provide
guidance on governance and management
of enterprise IT keeping the overall focus
as a business framework. This is a very
important aspect as technical persons
may get too focused on detailed technical
activities and may ignore the main
business objective. COBIT 5 ensures
that you do not lose sight of the overall
enterprise goals to meet the stakeholders’
needs while pursuing IT-related goals.
Principle 4: Enabling a Holistic
Approach
ISACA believes that one cannot achieve
enterprise goals through technical
processes alone. To bring this thinking
in clear focus, COBIT 5 has defined 7
enterprise enablers.
1. Principles, policies and framework
2. Processes
3. Organisational structures
4. Culture, ethics and behaviour
5. Information
6. Services, infrastructure and applications
7. People, skills and competencies
These enablers were briefly explained
in the previous article published in
CSI Communications September 2012
issue[3]. Each enabler has four dimensions - stakeholders, goals, life cycle and
good practices. Enabler performance
can be managed by defining metrics for
achievement of goals as well as metrics
for application of practice. This helps us
to monitor if we are on the right track and
to measure the progress made toward
achieving these goals. For example, the
quality of information available to the bank
customer should improve substantially
by adopting modern IT infrastructure
and improved processes. This should be
measured to identify whether the enablers
have actually contributed toward better
information quality achieved through
effective governance and management of
enterprise IT.
Principle 5: Separating Governance
from Management
We discussed this principle in the September
article[3]. Governance responsibility is to
evaluate stakeholder needs, conditions and
options; decide on balanced, agreed-on
enterprise objectives; and set the direction
for the enterprise. This alone is not enough.
Governance also requires monitoring the
performance and compliance against
agreed-on direction and objectives. To help
governance of enterprise IT, COBIT 5 has
identified five distinct governance processes
under the domain of EDM (Evaluate, Direct
and Monitor). These processes make the
task of governance of enterprise IT very
well-organised.
Management of enterprise IT requires a number of processes to be applied. The four areas of responsibility for management are: Plan, Build, Run and Monitor. These have been further elaborated as below:
Plan - APO (Align, Plan and Organise)
Build - BAI (Build, Acquire and Implement)
Run - DSS (Deliver, Service and Support)
Monitor - MEA (Monitor, Evaluate and Assess)
These four domains together have a total of 32 management processes. Each process has a link with IT-related goals, clearly defined goals and metrics, RACI charts, management practices, inputs/outputs and activities.
To date, ISACA has published the following documents to help in understanding and implementing COBIT 5:
1. COBIT 5: A Business Framework for the Governance and Management of Enterprise IT
2. COBIT 5: Enabling Processes
3. COBIT 5 Implementation
4. COBIT 5 for Information Security
Other forthcoming publications are COBIT 5: Enabling Information and other enabler guides, COBIT 5 for Assurance, COBIT 5 for Risk and other practitioner guides.
There is also an India-specific document published by ISACA: Securing Sensitive Personal Data or Information: Using COBIT 5 for India's IT Act[4]. ISACA plans to bring out other India-specific publications to facilitate COBIT 5 implementation in Indian enterprises.
Conclusion
Governance is the need of the hour, as is amply demonstrated by the failure of various enterprises that did not have an effective governance framework. Research has confirmed that enterprises which have effective governance in place are more successful and command a higher premium in the market. COBIT 5 is not just another framework but a holistic business framework essential for the governance and management of enterprise IT. With the growing importance of IT in enterprises, huge investments being made in e-Business and e-Governance projects, and the e-way becoming the highway for all core business processes, it is essential that each one of us learns how to use COBIT 5 so that we become more effective and can contribute, in our chosen area of work, to achieving the enterprise business goals.
Avinash Kadam, CISA, CISM, CGEIT,
CRISC, is currently advisor to the
ISACA India Task Force. He is also a
past international vice president of
the association. He can be contacted
via e-mail [email protected]
Opinions expressed in this article are his personal opinions and do not necessarily reflect the views of ISACA.
References
[1] www.isaca.org/cobit
[2] http://whatis.techtarget.com/definition/
framework
[3] http://www.csi-india.org/web/csi/
(Printed version: CSI Communications,
ISSN 0970-647X |Volume No. 36 | Issue
No. 6 | September 2012)
[4] http://www.isaca.org/KnowledgeCenter/
Continued from Page 7
software. Tuners such as the Guitar Pro tuner use very simple algorithms and consider the most pronounced frequency to be the fundamental frequency, and hence can be inaccurate at times. The tuner module in AGTAB suffers no such flaw and proved to be 100% accurate in testing.
Conclusion
The idea of developing something like AGTAB started off when one of the teammates asked another why there wasn't a computer-based guitar-tabs generator. The aim of the team was not to make a 100% accurate, fully functioning tabs generator, but a tabs generator that proved tabulation could be automated and that automation had its advantages. The software has potential application in the music industry if developed and distributed commercially. Musicians would no longer have to waste time on tabulation, which is very tiresome for someone who isn't familiar with computers.
As mentioned above, AGTAB does have its flaws, being the first of its kind. A simple solution to overcome AGTAB's inability to detect guitar effects is to have the user specify these effects explicitly using buttons. However, this solution takes away the concept of the software being fully automated. So the designers dropped that idea and came up with a whole new algorithm called frequency pattern recognition (described earlier under "Detection of Frequency-B"), which is expected to have none of the shortcomings listed above. The algorithm stores patterns based on the amplitude vs. frequency graphs of the various notes and effects. These can be compared to the input to obtain the proper output.
Though AGTAB only deals with guitars and keyboards, it can, though not easily, be extended to other instruments such as drums. Recording drum beats requires costly recording hardware, which may not always be available outside high-end studios. So in other studios the drum beats are usually programmed, which takes a lot of time and effort. With the extension of AGTAB to drums, the drummer could simply play, and the software could automatically generate the programmed drum beats. This gives the drummer more freedom in the sort of beats he can create.
References
[1] Arobas Music - Guitar Pro, www.guitarpro.com
[2] Elliott, R J, et al. (1994). Hidden Markov Models: Estimation and Control, Springer eBooks.
[3] Broersen, P M T (2006). Automatic Autocorrelation and Spectral Analysis, Springer eBooks.
[4] Power Tab - www.powertab.net
[5] Rao, K R, et al. (2010). Fast Fourier Transform: Algorithms and Applications, Springer eBooks.
[6] Surhone, L M, et al. (2010). Nyquist-Shannon Sampling Theorem, Betascript Publishing.
[7] Un4seen Developments - BASS Audio Library, www.un4seen.com/bass
Practitioner
Workbench
Wallace Jacob
Sr. Asst. Prof. at Tolani Maritime Institute
[email protected]
Programming.Tips() »
Fun with C Programs
Problem. Is it possible for two structures (in C) to have the same members but require different amounts of storage?
Solution. The program below exemplifies:
Program listing one
#include<stdio.h>
struct ex1 {
    char var1;
    int num;
    char var2;
};
struct ex2 {
    char var1;
    char var2;
    int num;
};
int main() {
    printf("\nsizeof(struct ex1) = %zu", sizeof(struct ex1));
    printf("\nsizeof(struct ex2) = %zu", sizeof(struct ex2));
    return 0;
}
A sample output (platform dependent):
sizeof(struct ex1) = 12
sizeof(struct ex2) = 8
The structures ex1 and ex2 have the same members, but require different amounts of storage. Why? It is due to the concept of the slack byte. A slack byte is extra padding used to align members on appropriate word boundaries (or double-word or quad-word boundaries). For optimized storage, it is recommended to place members of the same data type adjacent to each other (not dispersed). To align the data it might be necessary to put some slack bytes between the structure members. The alignment rule for the members of a structure can be modified using the #pragma pack directive as well.
The program below uses the #pragma pack directive. Here, members of struct ex1 and struct ex2 are packed to 1-byte alignment.
Program listing two
#include<stdio.h>
#pragma pack (1)
struct ex1 {
    char var1;
    int num;
    char var2;
};
struct ex2 {
    char var1;
    char var2;
    int num;
};
int main() {
    printf("\nsizeof(struct ex1) = %zu", sizeof(struct ex1));
    printf("\nsizeof(struct ex2) = %zu", sizeof(struct ex2));
    return 0;
}
A sample output (platform dependent):
sizeof(struct ex1) = 6
sizeof(struct ex2) = 6
Problem. Is it possible to find the greater of two integers (assuming them to be unequal) without comparing them using the less than '<' or greater than '>' sign?
Solution. Let the two numbers be x and y. Let 's' denote the sum of the two numbers and 'd' denote the absolute value of their difference:
s = x + y
d = |x - y|   // absolute value taken
Then:
greater of the two numbers = (s+d)/2
smaller of the two numbers = (s-d)/2
For instance, let the two numbers be 37 and 202. Therefore, s = 239 and d = 165. Hence the greater of the two numbers = (239+165)/2 = 202, and the smaller of the two numbers = (239-165)/2 = 37.
The actual logic is based on simple reasoning.
Let the two integers be x and y.
If x > y, then (x+y+x-y)/2 = 2x/2 = x,
and if y > x, then (x+y-x+y)/2 = 2y/2 = y.
Since it is not known whether x is greater than y or vice versa, the absolute value of x - y is taken.
Program listing three
#include<stdio.h>
#include<stdlib.h>
int main() {
    int num1, num2;
    int s, d, g, l;
    printf("Enter two integers: ");
    scanf("%d %d", &num1, &num2);
    s = num1+num2;
    d = abs(num1-num2);
    g = (s+d)/2;
    l = (s-d)/2;
    printf("\nGreater integer= %d", g);
    printf("\nSmaller integer= %d", l);
    return 0;
}
A sample output:
Enter two integers: 237 891
Greater integer= 891
Smaller integer= 237
The program does not use the '<' and '>' symbols to determine the larger (or smaller) of the two integers. (Note that the sum num1+num2 can overflow for large inputs, so the technique is safe only when both the sum and the difference fit in an int.)
Problem. Generate all possible unique
permutations of the digits 1, 2, 3, and 4.
Solution. The logic (there are several methods) for generating all possible unique permutations of the digits 1, 2, 3, and 4 happens to be rather simple. Let the variables num1, num2, num3, num4 hold values from 1 to 4. At no instant should the values of any two of these variables be equal. Therefore, let each variable take the values 1 through 4, and when the values are pairwise distinct they can be printed. The program below serves as an illustration:
Program listing four
#include<stdio.h>
int main() {
    int num1, num2, num3, num4;
    for(num1=1; num1<=4; num1++) {
        for(num2=1; num2<=4; num2++) {
            if(num1!=num2) {
                for(num3=1; num3<=4; num3++) {
                    if((num1!=num3)&&(num2!=num3)) {
                        num4 = 10-(num1+num2+num3);
                        printf("\n%d%d%d%d", num1, num2, num3, num4);
                    }
                }
            }
        }
    }
    return 0;
}
A sample output:
1234
1243
1324
1342
1423
1432
2134
2143
2314
2341
2413
2431
3124
3142
3214
3241
3412
3421
4123
4132
4213
4231
4312
4321
An explanation for the statement:
num4 = 10-(num1+num2+num3);
It is ensured that num1, num2, and num3 are
all holding unique integers. The sum of integers
from 1 to 4 is 10. On subtracting num1, num2,
and num3 from 10 we will always get a unique
integer. For instance, if num1 = 2, num2 = 1,
and num3=4, then num4 will be 3. As another
example if num1 = 3, num2 = 4, and num3 = 2,
then num4 will be 1.
The above logic can be extended to produce the unique permutations of a string. In the C language a string can be considered an array of characters beginning at subscript [0] and terminated by the null character '\0'.
A sample program for generating the
unique permutations of the word ‘COMP’
follows:
Program listing five
#include<stdio.h>
int main() {
    int num1, num2, num3, num4;
    char s[] = "COMP";
    for(num1=1; num1<=4; num1++) {
        for(num2=1; num2<=4; num2++) {
            if(num1!=num2) {
                for(num3=1; num3<=4; num3++) {
                    if((num1!=num3)&&(num2!=num3)) {
                        num4 = 10-(num1+num2+num3);
                        printf("\n%c%c%c%c", s[num1-1], s[num2-1], s[num3-1], s[num4-1]);
                    }
                }
            }
        }
    }
    return 0;
}
CIO Perspective
S Ramanathan
Managing Technology »
CIO to CEO: Only a Vowel Change?
Story of Lone Brave CIO
Charles Wang, the charismatic CEO of
Computer Associates narrates a story in
his book ‘Techno Vision’[2]:
A CEO was throwing a party in his
opulent mansion, which had a huge pool
inside with hungry alligators. The CEO,
who was in search of a successor, told
his executives “courage is what made me
CEO. If anyone of you could jump into the
pool and swim through the alligators and
make it to the other side, the job is yours”.
Not many took it seriously, but then there
was a splash and what do you see, the CIO
in the pool, swimming for his life, finally
makes it to the edge of the pool. The CEO
was impressed beyond bounds and told
the CIO “You are really very brave. You
are my next CEO. Ask me whatever you
want”. Panting for breath, the CIO asked
“Who the hell pushed me into the pool?”
Why Are CIOs Not Preferred?
Wang narrates this story to impress upon us that not many CIOs show either the capability or the ambition for the top slot.
In how many organizations have we seen
CIOs making it to the top? It is always the
Marketing or the Finance man; sometimes
manufacturing and even HR personnel
have been preferred for the top slot, but
rarely the CIO. This is in spite of the fact
the IT head normally would have a very
detailed understanding of the processes
of the company across functions and
thus can be expected to grow beyond any
functional bias. Moreover, IT personnel,
by their training and practice can be
expected to show a higher analytical
ability than many of their counterparts
in other functions. Their environment
requires creative problem-solving skills.
Even in IT companies, the technical people often meekly surrender their claim to their marketing brethren.
Traditionally, the IT people have been
comfortable more with technology than
with people and business. Looking at the
business through a restricted functional
view is not the trait of IT alone. But in
other functions, as the people move
higher they make a sincere attempt to
have a cross-functional understanding.
This trend is not visible in IT. In fact,
there is ample justification for senior IT
people to move away from technology, as
technology changes so fast in this field.
A manager is well aware that he is no
match for his programmer in Java coding.
Instead of using this opportunity to move
to managerial role, most of the senior
people are bent on proving, unsuccessfully though, that they are adept with the nitty-gritties of coding. The behavior continues
even when a person reaches the Head of
IT position. You ask a CIO his contribution
to business: he would wax eloquent about
the network he installed, its protocols, transmission speeds, application environment, GUIs and so on and on, but
not a bit on how all these added up to the
performance of the business - mostly he
does not know, nor does he care! What he
does not recognize is that all his technical
jargons alienate him from the rest of the
managers. Our friend, who is an expert
in peer-to-peer networking, fails to
network with his peers. The loner is never
considered to be a leader!
CIO - CEO Transition
Intelligent and ambitious CIOs, if they
want to make it to the CEO position, have
to cultivate certain traits consciously:
• Accountability: “The buck stops here”
is the motto by which a CEO functions.
In IT we are used to blaming all failures
on users, data, network and if all fails,
God. How many times have we seen an IT manager honestly accepting programming errors or testing lapses?
• Profit orientation: Traditionally, IT departments have been cost centers, except a few which used to undertake data processing services. RoI has
always been a taboo word for CIOs,
who have been attributing their poor
services to budgetary constraints.
When companies outsource their IT services, the CIO has to become more business-oriented. Many organizations
have addressed this issue by assigning
the CIO role to a business manager or
another functional manager. They find
it easier to educate another functional manager on IT processes than to train an IT manager on business aspects!
• People skills: IT professionals are adept
in handling machines, but not men. As
team leaders and project managers they
pretend to manage the programmers,
but many times geeks are difficult to
manage and if a project manager has
mastered these skills, these are not
easily translatable into useful skills in a
general environment as the one in which
CEO operates.
• Action-orientation: Mark Polansky and
Simon Wiggins[1] argue that behavioral
style, rather than intellectual capability,
prevents many CIOs from donning the
mantle of CEOs. While analytical in
their approach, by virtue of having been
in a staff function for long, a bias for
action is missing in CIOs - a necessary
trait of a successful executive.
• Tolerance for ambiguity: Another trait
Polansky and Wiggins find CIOs lacking
is tolerance for ambiguity. The binary
logic is ingrained into an IT person’s
mind and ambiguity is anathema for him.
Senior level managers make decisions
with limited amount of data, based on
reasonable assumptions - again a trait
CIOs need to cultivate assiduously.
Knowledge areas such as Finance are also
important for a CEO. But the CIO, normally credited with higher-level learning skills (how many other functions can claim to learn continuously about new technologies?), should be able to succeed in these as well.
“I think there are lots of ways to reach
the CEO position. I am not sure that CIO to
CEO is any different than any other path.
The most important characteristics are
your abilities to learn and lead”, says Dawn
Lepore, a former CIO and Vice Chairman
of The Charles Schwab Company and
currently President, CEO and Chairman
of the Board at Drugstore.com. (Google
and find out about the meteoric rise of
this woman)
Any takers?
References
[1] Polansky, Mark and Simon Wiggins, 'CIO to CEO': Aspiring CIOs Should Focus on Critical Behavioral Skills, Korn/Ferry International, 2005.
[2] Wang, Charles B., Techno Vision:
the executive's survival guide to
understanding and managing information
technology, McGraw- Hill Inc., 1994.
Security Corner
V Rajendran
Advocate, High Court of Madras
Email: [email protected], [email protected]
Hacking: Illegal but Ethical?
Preface: This article discusses in brief the
techno-legal issues in the activity called
‘hacking’, its treatment in the Information
Technology Act 2000 (later amended
by the I.T. Amendment Act 2008), the
practice and the social acceptability of
ethical hackers and the responsibility of
information system security professionals.
The earlier Section 66 of Information
Technology Act in India stated as follows:
Sec. 66: Hacking with computer system.
(This section has since been amended)
(1) Whoever with the intent to cause
or knowing that he is likely to cause
wrongful loss or damage to the public
or any person destroys or deletes or
alters any information residing in a
computer resource or diminishes its
value or utility or affects it injuriously
by any means, commits hack;
(2) Whoever commits hacking shall be
punished with imprisonment up to
three years, or with fine which may
extend upto two lakh rupees, or with
both.
From this it is quite clear that hacking
per se is an offence with well-defined
punishments and unambiguous treatment
in the Act. But in practice we often come
across many academic institutions and training organizations offering training in 'hacking' and giving much publicity to such courses. There are quite a number of such institutions which offer theory
classes with hands-on training and
practical inputs too on the nuances of
hacking going in depth on issues like
tracing an IP address, tracing an email,
etc. A quick glance at the brochures and other material brought out by such organizations reveals much information and promises a great deal.
Teaching an illegality? Some institutes
advertise stating clearly that “while these
hacking skills can be used for malicious
purposes, this class teaches you how
to use the same hacking techniques to
perform a white-hat, ethical hack, on
your organization” (italics mine). Some
institutes also advertise like “this website
will help you gain entry into the minds of
seasoned computer criminals, so that you
can forestall their attempts and pre-empt
all harmful intents and activities". Sounds too good, quite the good Samaritan, isn't it? But is there any check on the syllabus taught, the admission criteria, the knowledge imparted and, above all, the purpose to which the knowledge so gained is to be put?
Comparison with other crimes: Crimes
such as murder, rape, robbery etc. are
all well defined and have been accepted
as crimes and offences which any one
would shun and are not just legally but
also morally and ethically treated as
crimes only. On the same plane, take
the offence called ‘hacking’. Here lies
the difference. While hacking itself is a
crime recognized as an offence with well-defined punishments for it, how can there be a course or training program called
‘ethical hacking’? The protagonists say
that like any other computer knowledge or
programming skill or software efficiency,
hacking too is a part of the knowledge
and at least to protect your computer
from being hacked, you should be taught
and trained in hacking. To protect ourselves
from robbery or cheating or chain-snatching
or eve-teasing, no institute conducts a
course called ‘ethical eve-teasing’ or ‘ethical
cheating’ or ‘ethical robbery’.
Besides, admission to such courses
is by advertisements and wide publicity
and in their eagerness to enroll more and
more candidates, such institutes admit semi-literate professionals, teenaged students, and inquisitive youngsters whose
antecedents are not known or verified.
Verification of antecedents is strictly taboo and a firm no-no, especially in the case of state-owned universities, as the author learnt during one of his interactions with one such university in its Board of Studies (syllabus drafting) meetings. Hence, with
no such restrictions on admission, and with
such deeper knowledge about the various
software forensic tools being imparted and
with an inherent inquisitive and exploratory
brain it is but natural that such youngsters
venture into the act of hacking calling it ‘an
ethical act’.
It is interesting to note that to put
an end to this anomalous situation of
an illegality being taught in academic
institutions, the lawmakers thought it
fit to remove the word ‘hacking’ from
the Section 66 of the IT Amendment
Act 2008, which came into effect from
27th Oct, 2009. Though the section still deals with the offence of unauthorized access to a computer resource and data theft, combining it with the civil offence of data theft dealt with in Section 43, the offence remains the same and the punishment for the act is stipulated as three years' imprisonment or a fine of five lakh rupees or both.
Perhaps by this amendment, the
government has avoided the issue,
not solved it. No doubt, hacking is still
an offence, though the academicians
and institutes teaching it may like to
differentiate that doing it with the
permission of the owner of the system (i.e.
for good purposes) is hacking and doing it
in an unauthorized manner i.e. malicious
intent (or mens rea, to use a legal term
meaning criminal intent of mind) will be
called cracking. The act per se ultimately
and ab initio, remains the same.
Do the governments or other
regulatory authorities have a role to play
in putting an end to the nomenclature to
such courses? Can the syllabus or the
coverage of such courses be streamlined
or regulated? Admitting that such
coverage is essential to spread awareness
on the vulnerabilities in the system and for
one’s own protection, does it not border
on spreading awareness of accessing others' data and others' computer systems (which again is a clear offence as per Sec. 43 of the I.T. Act)? Above all,
putting an end to such courses will be a
great boon to the cyber crime police and
other investigating agencies in the ever
increasing field of cyber crime.
Such spread of knowledge called
under the fancy names of “Ethical Hacking”
or "Knowledge of hacking tools" or "Hands-on sessions in hacking" etc. has led to an increase in cyber crimes in the country. It is
quite clear that the cyber crime police are
getting more and more cases of data theft,
hacking, attempted id theft, unauthorized
access to systems, often resulting in cyber-blackmailing and sometimes even in
e-publishing of obscene material (which
too is a cognizable offence even as per
the original Sec. 67 with its broadened
scope, additions and amendment as per I.T.
Amendment Act 2008).
Conclusion: It is time that the
governments, the I.T. Secretary and other
regulators like RBI, TRAI, and the Ministry
Continued on Page 31
CSI Communications | October 2012 | 30
www.csi-india.org
Security Corner
Adv. Prashant Mali [BSc (Physics), MSc (Comp Science), LLB]
Cyber Security & Cyber Law Expert
Email: [email protected]
Information Security »
Internet Censorship in India
A global report recently released on Internet freedom rated India 39th in 2012, a slip of two places from last year. The report, titled Freedom on the Net 2012 (FOTN): A global assessment of internet and digital media, by Freedom House, a Washington-based monitoring group, presented a comprehensive study of Internet freedom in 47 countries.
Quoting Bangalore-based Center for
Internet and Society, the report said 309
specific items (URLs, Twitter accounts,
img tags, blog posts, blogs, and a handful
of websites) have been blocked by the
government. But officially, the government
has admitted to blocking only 245 web
pages for inflammatory content and
hosting of provocative content.
Now let's look at Internet penetration in India. According to the International Telecommunication Union, Internet penetration was 10% - or about 120 million people - at the end of 2011. Among Internet users, 90 million were 'active', accessing it at least once a month (70 million urban and 20 million rural).
First of all, it is written in the
constitution that freedom of speech is
the right of citizens of India. This means
that our government cannot and should
not be making an attempt to restrict or
penalize speech because of its content or
viewpoint.
So, when there is talk of restricting
Internet content, my eyebrows go up
quizzically, wondering just what limits
would be placed on those restrictions.
Moreover, as electronic media becomes
the norm rather than the exception, how
does reading something on the Internet
differ from reading a book, a magazine, or
a printed newspaper?
Let's take a look at printed books. For years, various books have been censored in the schools. The BJP censors some religious matter, the Congress censors lessons on freedom fighters or older politicians, and the Shiv Sena censors writings on Shivaji; each has been censored at one time or another. The point is that at some point, some "authority" decided what was appropriate and what was not. Looking back on these specific issues, it seems silly, does it not?
Moving on to the Internet and
Internet content, do you really want
some “authority” to determine what is
appropriate and what is not? I don’t think
so. As a matter of fact, to me that smacks
of the regulation of thoughts and the
regulation of ideas. It means less freedom
and more mind control. Clearly, that is not
a good thing.
India lacks an appropriate legal
framework and procedures to ensure
proper oversight of Intelligence agencies’
growing surveillance and interception
capabilities, opening the possibility of
misuse and unconstitutional invasion of
citizens’ privacy.
I would like to suggest that
legislated Internet censorship by our
government goes too far. It translates into
the elimination of our right to express
individual ideas and opinions publicly on
blogs, in forums, and in online newsletters.
And it translates into the deprivation of
our right to “life, liberty and the pursuit of
happiness”.
I don’t know about readers, but I take
pride in the fact that I can say whatever
I want, read whatever I want, and think
whatever I want. As long as I am not
causing harm, there is no reason for our
government to stifle these acts. And they
most certainly should not do so by telling
me that it is for the “greater good”.
Let us call it what it is: government censorship is a way for the government to control society by shielding citizens from whatever it deems inappropriate.
And in recent times, let’s face it: the
Indian Government’s ability to use
sound judgment in determining what is
appropriate just plain stinks.
Today I encourage readers to contact
your elected representatives and tell them
that you oppose any attempt to censor
your right to read what you want and to
say what you want.
Tell them that censoring the Internet
will not advance freedom, but, instead,
will set us back to a time when we were
powerless to make our own decisions, and
powerless to have a say in the way we
lived our lives.
What the Indian Government is doing today is taking the power to decide what is offensive, what is objectionable, or what is against national security, and delegating this power to bureaucrats and police inspectors. The IT Act, 2000 has given them the power to decide what is objectionable, and they are now arm-twisting intermediaries into getting content blocked. I feel that, except for immediate and urgent national security concerns, the courts should decide what is objectionable and what is against the national interest.
I feel India should not follow the communist Chinese government and build a Great Indian Internet Firewall. That government tries to regulate everything, and if such a regime fails to control events, matters can reach the brink where it gets toppled. n
Continued from Page 30
…of I.T. and Telecom and CERT-In (under the control of the Ministry of I.T., with its legal position now recognized under Sec. 70-B of the I.T. Amendment Act, 2008) acted upon the whistle-blower signals and brought some regulations on these. For instance, CERT-In (Computer Emergency Response Team India) can take initiatives to ban courses with fancy or misleading names like "Ethical Hacking" and enforce regulations before knowledge about hacking tools is imparted to students. n
About the Author
V Rajendran, Advocate, High Court of Madras. His qualifications are M.A., B.L., M.Com., CAIIB, ISO-ISMS LA, STQC-CISP, Dip. I.T. Law, CeISB, and "Certified Forensic Examiner" conducted by IDRBT and the Directorate of Forensic Science, Govt. of India. He has over three decades of experience in a tech-savvy Public Sector Bank in various capacities as Manager and Senior Manager, and retired under the Voluntary Retirement Scheme as Chief Manager (Systems). He is an invited speaker and guest faculty at various universities, colleges, and police training colleges on cyber crimes and on security concerns in electronic delivery channels in banks such as ATMs and e-banking, and has authored many articles and appeared in print and electronic media on many occasions on issues regarding cyber crimes, banking frauds, etc.
CSI Communications | October 2012 | 31
Security Corner
Mr. Subramaniam Vutha
Advocate
Email: [email protected]
IT Act 2000 »
Prof. IT Law in Conversation with
Mr. IT Executive - Digital Signatures Issue No. 7
IT Executive: Hi Prof. IT Law! In our last session
you talked to me about electronic signatures.
I was quite amazed by the discovery that,
without being aware of it, I have been using
electronic signatures for years, in the form of
PINs, passwords, TINs and the like.
Prof. IT Law: Yes, and I also showed you
why the law recognizes electronic
signatures as being equivalent in
functionality to handwritten signatures.
IT Executive: I remember that. You said
electronic signatures are unique, are linked
to the signatory, are fixed to the medium
on which the document exists, and allow
detection of any unauthorized alterations.
Prof. IT Law: Just like handwritten
signatures. Therefore they are recognized
by the law.
IT Executive: Thanks. And what will we
discuss today?
Prof. IT Law: Today, we will look at a
specific type of electronic signature called
a digital signature.
IT Executive: That sounds interesting. What
is special about it?
Prof. IT Law: Digital signatures are based
on what we call public key infrastructure.
That will be explained in due course. For
a digital signature to work, the person
who uses it must first apply to a Digital
Signature Certification authority for a
digital signature and a digital signature
certificate.
IT Executive: What happens then?
Prof. IT Law: The Certification authority
seeks to establish the “identity” of
the person who applies for the digital
signature. And that is done by asking for
his/her proof of identification, proof of
residence etc.
IT Executive: Using a PAN card or a driving
license or passport?
Prof. IT Law: That’s right. Once the
details are presented and verified, the
certification authority issues a pair of keys
[each based on a unique algorithm - a
piece of software]. The 2 keys in a pair are
called the Public Key and the Private Key
respectively.
IT Executive: Which, working together, generate a digital signature?
Prof. IT Law: Yes, together they generate a digital signature. Here is how it works - the holder of the digital signature [the Signatory] uses his/her Private Key to "sign" the document that he/she wishes to sign and send electronically to another person.
IT Executive: What happens then? Does the digital signature look like a conventional signature?
Prof. IT Law: Not at all. There is a preliminary step. The Signatory uses a "hash function" to "hash" the document that he/she wishes to sign. The outcome is an encoded or encrypted version of the document that is called a "hash digest". Then, the Signatory uses his/her Private Key to further encode or encrypt that "hash digest". The outcome is called a "digital signature".
IT Executive: So a digital signature results in two-fold encryption or encoding of the document to be signed?
Prof. IT Law: Exactly. A digital signature is therefore a doubly encoded or encrypted document, which the Signatory sends to the intended recipient along with the original unencrypted or unencoded document.
IT Executive: Why does the recipient also get the original unencrypted or unencoded document?
Prof. IT Law: I will explain why. The recipient uses the same hash function that the Signatory uses, to create an identical "hash digest" from the original unencrypted document. He/she then uses the Signatory's Public Key to decode the digital signature received, recovering the Signatory's "hash digest" at his/her end.
IT Executive: Why does he/she have to follow all these procedures?
Prof. IT Law: Well, by comparing the "hash digest" recovered from the Signatory's digital signature with the "hash digest" generated in his/her own office, the recipient can ascertain whether the digitally signed document received by him/her is genuine and reliable.
IT Executive: And what if the two don't match?
Prof. IT Law: Then the recipient knows he/she cannot rely on the digitally signed document received.
IT Executive: One question. Where does the
recipient get the Public Key of the Signatory?
Prof. IT Law: From the Certification authority that issued the digital signature - all Public Keys are available from its website. The corresponding Private Keys are kept confidential by the respective Signatories.
IT Executive: Sounds a little complicated to
me.
Prof. IT Law: I agree, but it is the software
program that handles most of the
complexity and presents simple screens
to the Signatory and the recipient.
IT Executive: Thank you once again Prof. IT
Law. I think I now know how digital signatures
work from the legal perspective.
Prof. IT Law: I am glad. Bye for now. n
www.csi-india.org
HR
Dr. Manish Godse* and Dr. Mahesh Deshmukh**
* Ph.D. (IITB), Research Analyst, Infosys Limited, Email: [email protected], [email protected]
** Ph.D. (IITB), CEO, Maruma Consultancy Pvt Ltd., Email: [email protected]
Job Analysis: A Tool for Skill Building
Introduction
Every organization has multiple job roles,
such as Software Engineer, Project or
Program Manager, and Chief Information
Officer. These job roles can be commonly observed in every organization in the IT industry, and anyone who is familiar with or works in the IT industry can comprehend the roles and responsibilities attached to them. However, the roles and responsibilities for the same job role may vary within an organization and across the IT industry.
Since IT is a technical industry, the technical skills required for a job role may change depending upon the role and the technology in which the employee works. A database administrator and a programming engineer are both, by job role, software engineers; however, the technical skills expected of them differ. A database administrator should be well conversant with databases, while a programming engineer should have programming skills. Drilling down, a Database Administrator may need skills in database design, query optimization, and stored procedures, whereas a Programming Engineer should know object-oriented programming, web technologies, and more. Similarly, skills can be drilled down further by technology, such as Oracle/MySQL/MS-SQL for a Database Administrator and Java/.NET/Open source for a Programming Engineer. From this discussion, it can easily be observed that although the job role is similar, the technical skill requirements may vary.
A correct understanding of the job is essential for both employees and the organization to effectively meet organizational goals. Employee skills and job skills need to be matched before handing the job to an employee. If there is a mismatch, the employee may be unsatisfied; moreover, her/his efforts may not be in the direction expected by the organization. To align the efforts of the employee, job skills must be well-defined and communicated to the employee. It has been observed that the performance of an employee increases when she/he has an accurate understanding of the job she/he has to perform. The right knowledge of the job is also essential for managers, as they are responsible for assigning and monitoring the jobs of their teams.
Job Analysis
Job analysis provides a structured approach to capturing critical information about a person's work and the person's relationship to the job as a whole[1].
There are multiple uses of job analysis
such as skill building, developing training
course, selection procedure, performance
appraisal, organizational career path,
compensation plan and many more[2]. In
this article, we will be focusing only on
using job analysis for skill profiling.
Job Analysis for Skill Profiling
Skill profiling is a process to map skills of
employees required to perform present and
future tasks. Job analysis is undertaken to
clarify the roles and undertake a skill gap
analysis for unique jobs in an organization.
This exercise also helps to assess the current skill levels of the entire workforce in those roles. This paves the way for skill-development inputs for employees, so that the talent pipeline stays up to date on the skills the organization will require over the next few years. The aim is to undertake:
• Skill analysis for jobs, using a combination of the Critical Incident Technique and the Focus Group Technique.
• Mapping of jobs onto the skill requirements.
Process for Skill Analysis
Skill analysis involves the following steps
(Fig. 1):
1. Unstructured interviews are to be conducted with two to four ideal job holders at the same level to understand the tasks/activities they perform in their job role. Candidates should have worked in that position for at least two years.
Result: Listing of key tasks/activities performed by job holders.
2. Discussion with Subject matter
expert: Discuss with Subject Matter
Experts (SME) and identify the technical
skills required for those activities. These
SMEs will be internal to the organization
and will be selected on the basis of detailed
criteria to be arrived at the beginning of
the project.
Result: Refined list of activities, and first
set of technical skills.
3. Focus group discussion will be conducted on the lines of the outcome of Step 2. During this discussion, one or two managers of the target role, two or three representative role incumbents, and an SME will be involved. All discussions will be recorded on a voice recorder and transcribed later for detailed study.
Result: The lists of activities and skills obtained in Step 2 are further refined with the help of the SME.
4. Wrap-up with select SMEs and the senior technical team will be carried out to take their feedback on Step 3. This will be in a workshop format where each job profile will be discussed.
Result: Mapping of the final list of required technical skills.
Skill Gap Analysis
Skill is defined as proficiency, facility, or
dexterity that is acquired or developed
through training or experience to successfully
complete any given task. Another meaning
of skill is a developed talent or ability. In
both cases, the proficiency or the ability can
be acquired or developed through training.
Fig. 1: Skill analysis process
Area: Software engineering (Programming) / Requirement engineering

Proficiency for current job:
• Identify functional requirements from various sources such as business process documents, project documents, interviews of stakeholders, etc.
• Able to analyze the requirements to avoid any conflict in understanding.
• Document requirements in the form of use cases, system processes, user interfaces, etc.
• Able to understand system/non-functional requirements such as performance, scalability, coding standards, error handling, compatibility issues, messages, user interfaces, data handling, etc.
With this understanding in the background, it becomes imperative for all organizations, especially knowledge-based ones, to keep skills up to date across the organization. There are various reasons why skills become obsolete: changes in technology, changes in the environment, and changes in user needs. All of these call for skill upgradation. An organization that is agile and can foresee future changes can transition very effectively to the new requirements. These are the organizations that are seen as competitive and as responding effectively to customer needs - organizations that create customer needs rather than merely reacting to them. Skill gap analysis is an effective method to address the dynamic nature of business and is clearly a differentiator for identifying successful organizations. The process is explained in the earlier section. Here are a few issues that one needs to be mindful of.
Linkage to Capability Building
Skill gap analysis needs to address the
current as well as the future needs of
Proficiency for future job:
• Aware of software development best practices, life cycle, and methodologies such as Waterfall, Prototype model, Incremental, Iterative, V-Model, Spiral, Scrum, Cleanroom, Rapid Application Development (RAD), Dynamic Systems Development Method (DSDM), Rational Unified Process (RUP), eXtreme Programming (XP), Agile, Lean, Dual Vee Model, Test-Driven Development (TDD), etc.
• Able to understand the importance of security, reliability, availability, performance, safety, and interoperability of software.
• Have programming knowledge in one of the technologies such as Microsoft, Java, or Open source.
the organization. Drawing from the competency literature, the focus needs to be on how we can keep our workforce prepared for the future. Therefore, when we profile a job on its skills, it is advisable to create two columns: one for the current role and another for the future role or possible roles. This is one mode of profiling; here we need to define each skill set and pen down the requirements in detail for the current role (see the example). Another method is to create a profile of all required skill sets across the organization and define where each job holder fits in.
There are pros and cons of each of
these methods. In the first method, one
needs to spend more time with the job
analyst and the SMEs (usually an internal
person). And it takes longer to complete
the profiling as you will profile the same
person for two different jobs. The second
method is much quicker as you are only
examining the skill and not linking it to
any position holder. Whichever method you choose, it yields an excellent inventory of the technical as well as psychological skills your organization has and needs in the future, and it can easily be linked to all technical and management jobs. Employees become better informed about their job skills and what the organization expects from the job. This offers the organization a better handle on the profitability of each project and on meeting the timelines promised in the scoping. Another advantage of this approach is that each job holder can be sure of what he needs to upgrade for his career growth. The training/learning department, too, can be better equipped to build internal capability. A win-win situation for all.
References
[1] Busi, D (2012). “Creating value
through simple structured job
analysis”. Supervision, 73(7), 8-13.
[2] Garwood, M, et al. (1991).
“Determining job groups: Application
of hierarchical agglomerative cluster
analysis in different job analysis
situations”. Professional Psychology,
44, 743-762.
n
About the Authors
Dr. Manish Godse works as a Research Analyst at Infosys Limited on innovation and new-product initiatives. He has two decades of experience spanning roles as Business Leader, Entrepreneur, and Academician. His functional experience is focused on strategic partnership development, customer relationship management, pre-sales, and product management. Manish holds a PhD in Management from the SJM School of Management, Indian Institute of Technology, Bombay.
Dr. Mahesh Deshmukh is the Managing Director of Maruma Consultancy Pvt. Ltd. He has 19 years of HR experience in both
industry and consultancy. He is recognized as a change management, talent management & organization development
professional with extensive experience in the design and implementation of Assessment Centers, Development Centers,
Senior Level Executive Assessments, & Development, Job Analysis and Competency design projects, Leadership
Development and Executive Coaching.
IT.Yesterday()
Sugathan R P* and T Mahalakshmi**
*Principal of National Institute of Computer Technology, Kollam
**Principal of Sree Narayana Institute of Technology, Kollam
Early Years of IT in a Tiny South Indian Town
It is ironic that the nation that contributed
the pivotal digit ZERO to the Decimal
system should be ignominiously left
behind, when the machine called digital
computer started crunching ZEROs and
ONEs and ushered in the IT era in the
50s. Eventually, thousands of vacuum
tubes and miles of cables in these gigantic
computers were replaced by transistors
and ICs in the 60s, and they became
lighter, faster, and more powerful. And in
1969 the computer was compact enough
to go extraterrestrial when it guided
the Apollo 11 Lunar Module to land and
navigate the lunar landscape. Even at that
time the Indian scenario consisted of a
few govt. initiatives at Indian Statistical
Institute, TIFR and such other places.
The size and weight reduction enabled computers to make an early entry into military aircraft, where technological superiority, and not cost, was the deciding factor. In the late 60s one of the authors, then an aircraft engineer onboard an aircraft carrier, worked on naval aircraft whose fuel systems and Automatic Flight Control Systems were computer controlled. The job of maintaining the height of a submarine-hunting helicopter while hovering 50 feet above the sea surface was given to a computer, whose sensing and reaction are manyfold faster than a pilot's: while hovering near the surface, a helicopter can lose height and hit the water before the pilot could react. Had this dangerous aspect been known to my colleague and ship's engineer-turned-cine actor Jayan from Kollam, he wouldn't have attempted that fatal stunt of leaping at and grabbing a hovering helicopter.
In the 70s, the PCs were born
and companies like Intel, IBM, Apple,
Microsoft etc. were having a real party.
Indian computer landscape was still barren
and bereft of activities worth mentioning.
With the growth-stifling License Raj
firmly in place, there was no incentive for
innovation or venture. Sycophantism ruled
the roost.
Then, in mid-80s, a US Digital Signal
Processing company, Texas Instruments,
opened shop at Bangalore, recruited
Indian talents, trained them, developed
software and beamed it halfway around
the globe to the US. The License Raj did
not understand the new phenomenon
and they were beaten hollow by the
digital signals that respected no borders,
and flashed through under-sea cables,
landlines, and satellites. Indian companies
followed suit and Bangalore became the
silicon valley of India. Thus, unexpectedly, against all odds, India was in the global IT business despite the bungling bureaucracy and clueless politicians.
At about that time, in 1985, a Mathematics Professor-turned-industrialist, R P Lalaji, trained in computers abroad, had the courage to start an IT training and software development center at Kollam, a town known as the cashew capital of Kerala. When he invited his former colleagues, all professors at a prestigious college, to a demo of the 16-bit, IBM PC compatible HCL Busy Bee (see Fig. 1), they were incredulous. Most of them didn't have a clue what a computer was, and many thought it very exotic and elitist and doubted its relevance to society. He was prepared for such a reaction from the pedestrian public, but not from the elite academia. It was an eye-opener that the usually well-informed Keralites were unaware of the cataclysmic computer tsunami that was sweeping the world and heading to their shores.
Fig. 1: HCL BUSYBEE
In 1986, PC-XTs and ATs based on
Intel 80286 processor were in the market.
The much discussed virus attack was
awaited in tense anticipation. The first
attack in Kollam was in June 1987. And
the news of the virus attack went viral
due to its novelty at the time, and many
journalists wanted to see the biological
virus. The digital virus was beyond
their comprehension and they were
disappointed.
At the time, lack of vision and rank opportunism in some of the mainstream political parties spread the canard that computers would take away jobs, which led some to hate computerization. One instance: a leading bank in Kollam wanted two PCs installed on its premises on a rental basis to process foreign Demand Drafts. The firm was very happy that at last something was happening. Two 386 PCs, which were just then making their debut, were installed in the bank one fine morning in May 1988. The Deputy GM was happy and proud that his was the only bank in the town to use computing power. However, his joy was short-lived. The powerful bank employees' union rebelled, and in the afternoon the GM requested the firm to take away these two "enemies of the working class". The unions and their mentors did not realize that their
objection to computerization in the bank only helped to create Hiten Dalals and Harshad Mehtas and set the IT clock back for the State by several years. The unions finally capitulated because they could not stand in isolation while the financial sector globally was undergoing rapid computerization. If they had had their way, there would be no ATMs and no mobile or Internet banking.
The PC 386, which could support the super OS Unix, was introduced during 1988-89, and this mission-critical OS platform was in great demand. Students from as far away as Cochin flocked to Kollam to learn Unix. When an 80486-based Super Mini with 32 nodes came to Kollam in 1992, many a night was spent installing Novell NetWare from a set of 16 floppy diskettes (3.5") to set up a LAN, for lack of locally available expertise. But now, with the ubiquitous net at hand, the two-decade-old incident might appear frivolous.
By the late 80s the strong GUI in Windows and the exponentially expanding computing power empowered the PC to launch DTP and Multimedia. This started a revolution in the print industry. Till then, preparing artwork for printed matter was a manual and tedious job, but the introduction of DTP and Multimedia magically changed the printing and publishing landscape beyond recognition, saving time, money, and effort and imparting superb quality to picture and letter. A sequel to this was CADD, and AutoCAD from Autodesk enabled one to design and construct anything in cyberspace. Photography used to mean drawing with photons; now it is drawing with electrons. The information carried by the photons is directly converted to digital signals which can be stored, printed, or transmitted instantaneously. In tandem with new-generation printers, this ushered in digital photography, which banished two-century-old film-based photography. Now images of persons, places, and events circulate the globe as pixels at the speed of light.
The early 90s witnessed a paradigm shift in programming with the dawn of Object-Oriented Programming (OOP). Closely on the heels of COM (Component Object Model) came CORBA (Common Object Request Broker Architecture). The platform-independent Java was followed by Visual and .NET technologies, which brought in third-generation, event-driven programming.
By now computerization was gathering momentum in the govt., and NCERT purchased educational software from the local firm in 1993. The long years of hard work devoted to software development finally bore fruit. This enabled the firm to shift into high gear by importing an IBM AS/400 and relocating software development to the STPT, Trivandrum in 1994. It further expanded by opening a BPO and software development company at Technopark, Trivandrum in 1996.
Then the Internet was brought to Kollam in 1996, ahead of many places, because of the privilege extended by the STPT to the software-exporting firm. For those of us marvelling at that 10 Kbps link, broadband was not even on the horizon. Now the world is ensnared in this network of networks, fault tolerant and ever expanding. Its spidery web is always abuzz with the digital signals that keep people connected in all walks of human endeavor.
The epic, Mahabharatha, records
that Lord Krishna gave a boon to the
courtier Sanjay to witness the Great
War from the palace and describe it to
the blind king Dhritarashtra. That was probably the first recorded instance of Doordarshan. Now the same boon is available to anyone who possesses a 3G cell phone or a PC with services like Google+, Skype, or FaceTime (Apple). This enables one to do video conferencing from wherever he/she is, for peanuts. A decade back, videoconferencing was the exclusive preserve of the elite and the executive. And the ubiquitous cell phone, the most popular device - which, among other things, is also a PDA - is quietly doing something else: it is bridging the digital divide.
n
About the Authors
Sugathan R Padmanabhan has been the Principal of the National Institute of Computer Technology, Kollam, Kerala, India since its inception in 1985. Before that he worked as an aircraft engineer in Naval Aviation and was commended for achieving intensive, accident-free flying while serving in the Indian Naval Air Squadron 300. He is a Life Member of the Aeronautical Society of India.
Dr. T Mahalakshmi has been the Principal of Sree Narayana Institute of Technology, Kollam, Kerala, India since 2007. After postgraduation in computer science from the University of Minnesota, USA, she joined the National Institute of Computer Technology, Kollam as Systems Manager in 1986 and worked there till 2002.
Brain Teaser
Dr. Debasish Jana
Editor, CSI Communications
Crossword »
There is no new crossword set for October 2012 issue of CSI Communications.
Recently, some reputed members of the CSI fraternity suggested that the Editors should not contribute, and this created quite a bit of discussion. The verdict and final recommendation on the matter came rather late, and since no other contributions were received for the crossword puzzle, we are forced to hold the new crossword set for this month.
In last eighteen months, we have been observing a steady increase in the
number of solutions coming from many enthusiastic readers of this column.
Our sincere apology to all of them for such inadvertent omission of crossword.
Please bear with us.
Send your comments and feedback on this column to CSI Communications at
email address [email protected] with subject: Crossword Feedback.
Solution to September 2012 crossword
Congratulations to
Er. Aruna Devi (Surabhi Softwares, Mysore)
for getting ALMOST ALL correct answers to September month’s crossword.
INDIACom-2013
7th National Conference on
“COMPUTING FOR NATION DEVELOPMENT”
(07th – 08th March, 2013)
Organized by
Bharati Vidyapeeth’s Institute of Computer Applications and Management (BVICAM), New Delhi
Jointly with
Computer Society of India (CSI), Region - I
IEEE, Delhi Section
IE (I), Delhi State Centre
Institutions of Electronics and Telecommunications Engineers (IETE), Delhi Centre
Indian Society for Technical Education (ISTE), Delhi Section and
Guru Gobind Singh Indraprastha University (GGSIPU), New Delhi.
Announcement and Call for Papers
Information and communication technology plays an important role in enhancing the effectiveness, efficiency, growth, and development of education and health care and in the modernization of society. Foreseeing the importance and impact of the above, and encouraged by the resounding success of the past six editions of INDIACom since its inception in 2007, we hereby announce INDIACom - 2013, which aims to develop a strategic plan for balanced growth of our economy through IT in critical areas like E-Governance, E-Commerce, Disaster Management, GIS, Nano-Technology, Intellectual Property Rights, AI and Expert Systems, Networking, Software Engineering, and other emerging technologies.
Instructions for Authors: Original papers based on theoretical or experimental work related to the above-mentioned sub-themes are solicited for presentation at the conference. Each paper should begin with a title, a short abstract, and a list of keywords. The total length of the paper must not exceed six A4-size pages, including bibliography and appendices.
Important dates:
Submission of Full Paper: 10th December, 2012
Paper Acceptance Notification: 17th December, 2012
Detailed guidelines are available for download at www.bvicam.ac.in/indiacom. All submissions will be made online at www.bvicam.ac.in/indiacom. All correspondence related to INDIACom-2013 must be addressed to:
Prof. M N Hoda
Chief Convener, INDIACom - 2013,
Director, Bharati Vidyapeeth’s Institute of Computer Applications and Management (BVICAM),
A-4, Paschim Vihar, Rohtak Road, New Delhi - 63.
E-mails : [email protected], [email protected], [email protected]
Tel.: 011-25275055 Tel. / Fax: 011-25255056, 09212022066 (Mobile) For further details, Visit us at: www.bvicam.ac.in
CSI Communications | October 2012 | 37
Ask an Expert
Hiral Vegda
Lecturer at AES Institute of Computer Studies
Email: [email protected]
Your Question, Our Answer
[The following are different ways of finding whether a number is odd or even using bitwise operators, data types, and type casting; of declaring an array without specifying the size explicitly; and of storing and retrieving data to/from an Excel file using C programming.]
(1) Is it possible to find whether the entered number is odd or even without using the modulo operator?
Ans. The following programs find whether the number is odd or even using different techniques:
(i) With the use of integer and float data types:
#include <stdio.h>
int main()
{
    int a;
    float b, no = 8;
    a = no / 2;   /* fractional part truncated on conversion to int */
    b = no / 2;   /* fractional part preserved */
    if (a == b)
    {
        printf("%.0f is even number\n", no);
    }
    else
    {
        printf("%.0f is odd number\n", no);
    }
    return 0;
}
Ans.
8 is even number
(ii) With the use of type casting:
#include <stdio.h>
int main()
{
    float no1 = 7.0, no2, no3;
    no2 = no1 / 2;
    no3 = no2 - (int)no2;   /* fractional part: 0 for even, 0.5 for odd */
    if (no3 > 0)
    {
        printf("The number is odd\n");
    }
    else
    {
        printf("The number is even\n");
    }
    return 0;
}
(iii) With the use of bitwise operator:
#include<stdio.h>
int main()
{
int x=5;
if(x & 1)
{
printf("\n %d is odd number ",x);
}
else
{
printf("\n %d is even number " ,x);
}
return 0;
}
Ans.
5 is odd number
The result of the AND operation is 1 only if both bits have a value of 1; otherwise it is 0. Here the value of the variable x is 5, and the binary representation of 5 is 101.
  101
& 001
-----
  001
So, 5 is an odd number.
(2) Is it possible to declare an array without specifying the size?
Ans. The program below declares the array without specifying the size explicitly.
#include <stdio.h>
int main()
{
    int a[' '], i, limit;   /* ' ' has the value 32, so this is really an array of 32 ints */
    printf("Enter the limit of the array\n");
    scanf("%d", &limit);
    for (i = 0; i < limit; i++)
    {
        scanf("%d", &a[i]);
    }
    printf("Output is:\n");
    for (i = 0; i < limit; i++)
    {
        printf("%d ", a[i]);
    }
    return 0;
}
Ans.
Enter the limit of the array
10
1 2 3 4 5 6 7 8 9 10
Output is:
1 2 3 4 5 6 7 8 9 10
But the declaration a[' '] does not really leave the size unspecified: the character constant ' ' has the value 32, so this declares an array of 32 ints. If the limit exceeds 32, the program writes beyond the array; for a smaller limit, it wastes space unknowingly.
(3) The program below stores and retrieves data to/from an Excel (.csv) file using C programming.
#include <stdio.h>
int main()
{
    FILE *fptr;
    int c;   /* int, not char, so that EOF can be detected reliably */
    printf("Data Input\n");
    fptr = fopen("d:\\programs\\x3.csv", "w");
    if (fptr == NULL)
    {
        return 1;
    }
    while ((c = getchar()) != EOF)
    {
        fputc(c, fptr);
    }
    fclose(fptr);
    printf("Data Output\n");
    fptr = fopen("d:\\programs\\x3.csv", "r");
    while ((c = fgetc(fptr)) != EOF)
    {
        printf("%c", c);
    }
    fclose(fptr);
    return 0;
}
Ans.
Data Input
Roll No, Name
1,Ajay Patel
2,Krupa Shah
^z
Data Output
Roll No, Name
1,Ajay Patel
2,Krupa Shah
Here we kept the extension of the Excel file as .csv, which stands for comma-separated values (CSV). This file stores tabular data (numbers and text) in plain-text form. A CSV file consists of any number of records, separated by line breaks of some kind; each record consists of fields, separated by some other character or string, most commonly a literal comma or tab.
When the file is opened in Excel, the data appears as a two-column table with Roll No and Name columns.
Hiral Vegda, MCA from Gujarat University, is currently working as a Lecturer at AESICS, Ahmedabad University. She has 6 years of experience
in teaching at graduate and post-graduate level - BCA and MCA subjects. Her areas of interest are Procedure Oriented and Object Oriented
Programming, Data Structure, Algorithm Analysis and Development, Information Security, Web Applications development and Database
Applications. She has conducted and coordinated various workshops such as “Working with C#.NET” and “Web Developing and Hosting with
PHP”, under AESICS-CSI Student branch.
The Ask an Expert column used to be answered by our Editor, Dr. Debasish Jana. Since, from this issue, Editors are not contributing as authors, we invite contributions from our members for this column.
Happenings@ICT
H R Mohan
AVP (Systems), The Hindu, Chennai
Email: [email protected]
ICT News Briefs in September 2012
The following are the ICT news and headlines
of interest in September 2012. They have been
compiled from various news & Internet sources
including the financial dailies - The Hindu,
Business Line, Economic Times.
Voices & Views
• India and China account for 40% of the estimated 140 million net additions in mobile subscriptions across the world in Q2 of 2012. Globally, mobile subscriptions reached 6.3 billion.
• Indian marketing analytics industry may touch $1.2 billion in 2020 - Nasscom.
• PC sales grew 16% in 2011-12 and 10.8 million PCs were sold - MAIT.
• For every 10% increase in broadband penetration, the GDP in developing countries will increase 1.38% - World Bank.
• Internet subscriber base in India may reach 150 million by Dec 2012 - IAMAI and IMRB.
• Of the 13.8 million jobs created globally by 2015 because of cloud computing, 2 million will be in India - IDC.
• Telephone subscriber base declines for the first time. It fell by 20.71 million to 944.81 million in July 2012 from 965.52 million a month ago.
• Indian IT companies among 10 worst paymasters - MyHiringClub.com.
• It is estimated that over 42 million people in India fell victim to cybercrime in the past 12 months, involving $8 billion in direct financial losses - Norton Cybercrime Report 2012.
• India beats China on Internet user additions. Adds 18 million against 14 million in China.
• iPhone5 is a handy tool for scammers - McAfee.
• Public cloud computing market to be $109 billion globally - Gartner.
• Indian gaming industry to grow at 49% to touch $830 million by 2012, against $167 million in 2008 - Nasscom.
• Asia-Pacific BPO market set to touch $9.5 billion in 2016 - Gartner.
• Global firms will spend $13-22 million on mobile marketing this year - TCS study.
• 'Tape recorders, standard working hours, desktops heading for extinction' - Survey by LinkedIn.
• 'India's enterprise software market to grow by 13.7% and to reach $3.45 billion' - Gartner.
• Despite a ban, use of pen drives is the main threat to cyber security in defense forces and is responsible for over 70% of such breaches - Army.
Telecom, Govt., Policy, Compliance
• New radiation norms for mobile towers in place. Non-compliance to attract a penalty of Rs. 5 lakh.
• DoT panel wants foreign players to set up local servers.
• Tata Comm and Airtel, who own 85% of the landing points, under competition panel lens as the fee charged by them is much more than global rates.
• DoT may set up unified license regime in phases. The operators may have to pay a revenue share of 10% of their annual revenues under the 'Metro' category license, 8% for 'Circle B', and 6% for 'Circle C'.
• The Govt. has found 18,000 mobile phones having fake identity numbers.
• Consent for offering ISD services is a must. ISD facility for pre-paid mobile users only on demand - TRAI.
• Soon, the postman will knock with tablet in hand to carry out all transactions related to delivery of cash, banking activities, and a few more.
• Mobile operators will be forced to increase tariffs following the increase in the diesel price.
• Govt. evolves strategy to deal with social media misuse.
• 2G: BJP, Left firm on summoning PM, Chidambaram before JPC.
• Telecom rates may fall as TRAI cuts inter-operator fees.
• 15 gadgets (including video games and laptops/tablets) to be brought under BIS Act.
• National IT Policy gets Govt. nod; aims to make 1 per family e-literate.
• Vodafone says ready to pay Rs. 8,000 crore to settle tax dispute.
• Cabinet clears the policy which envisages the growth of the IT market from $100 billion to $300 billion and creation of an additional 10 million jobs by 2020.
• Pitroda launches video call services; PCOs to be upgraded.
• E-commerce firms with FDI cannot sell products online - Govt.
• Mobile roaming charges will go next year - Sibal.
• Pitroda fumbles at first Twitter press conference.
• FinMin opposes a cess, proposed by IT Ministry, on electronic products.

IT Manpower, Staffing & Top Moves
• This year could be worse than 2008 with 40-50% drop in campus recruitment over the previous year - placement officers in engineering colleges.
• Mahalingam to continue as TCS ED for one more year.
• Xchanging, an LSE-listed IT and BPO company, opens a new center in Shimoga SEZ and aims to hire 3,000 people by 2013.
• Ajuba to hire 400 by Nov 2012 for new Chennai unit.
• Vodafone to recruit visually impaired in call centers.
• Exterro plans to add 60-75 people in the next six months.
• Bill Moggridge, creator of the first laptop computer in 1979, has died.
• Microsoft pushes for 20,000 more H-1B visas.
• Serco to expand Agra facility; hire 850 people.
• Nasscom working with BPOs to weed out fake CVs.
• MindTree to hire students from tier-II, III cities.
• N R Narayana Murthy to be honored with the Global Humanitarian Award.
Company News: Tie-ups, Joint Ventures, New Initiatives
• TCS is official IT partner for Berlin Marathon.
• To tap the fast growing ICT market of West Asia, 40 Indian ICT companies will participate in GITEX Dubai 2012.
• Google intensifying digital ad services for small biz.
• Google Maps unveils live traffic updates, free voice navigation in India.
• Microsoft launches Windows Server 2012.
• iPhone5 launched in the US and a number of European countries; gets record 2 million iPhone 5 orders in 24 hours of its launch.
• 25 renewable energy firms bid for supplying power to tower firms.
• Oracle launches its Exalytics In-Memory Machine for analytical and performance management applications.
• Wipro featured on 'Carbon Disclosure Leadership Index'.
• Apple wins German patent case against Motorola.
• Asus announces PadFone, a device that combines a smartphone, a tablet, and a laptop.
• Exchange old mobile phone for cash at ATM soon.
• BSNL to launch 'Fiber to Home' for high speed Internet.
• Insurance cover to protect Facebook, Twitter accounts.
• Google+ user base crosses 400 million.
• Smiley turns 30!
• Google emerges as the world's most attractive employer, for the fourth consecutive year.
• 'In India, Windows 8 will be launched in 11 languages initially'.
• Developers achieve Guinness Record at Windows AppFest.
• Google's India hub is now the largest outside the US.
4TH NATIONAL E-GOVERNANCE KNOWLEDGE SHARING SUMMIT (KSS-2012)
RAIPUR, CHHATTISGARH, 5-6 NOVEMBER 2012
Organized by
GOVT. OF CHHATTISGARH in association with CSI-SIGeGOV
Theme: ‘e-Governance in emerging era’
INTRODUCTION
The 4th National e-Governance Knowledge Sharing Summit (KSS-2012) is an annual event organized by the Computer Society of India's Special Interest Group on e-Governance (SIGeGOV) since 2009. The summit aims to provide a discussion forum for Government officials, policy makers, practitioners, industry leaders, and academicians to deliberate, interact, and develop an actionable strategy for transparent and good governance in the Indian context. This year, the summit is being hosted by the Government of Chhattisgarh and is scheduled for 5 and 6 Nov 2012 at Raipur, Chhattisgarh. The details of the previous three summits, held at Ahmedabad (KSS-2011), Bhopal (KSS-2010), and Hyderabad, AP (KSS-2009), are available at www.csi-sigegov.org.
OBJECTIVES/THEME
To provide better citizen services, there is a need to respond proactively to the changing environment. This essentially calls for strengthening capabilities, seizing and exploiting opportunities, and exploring and sharing the success stories, best practices, and achievements spread across the country for better implementation of e-Governance initiatives by all governments. With 80% of India's population now connected by mobile devices, there is an immediate need to relook at e-Governance initiatives. Based on the theme 'e-Governance in emerging era', the summit will provide a platform for discussions among bureaucrats and policy implementers. It will enable delegates to share success stories, implementation challenges, and strategies emanating from states and countries which have already taken a lead in m-Governance.
CONTACT
The 4th e-Governance Knowledge Sharing Summit (KSS-2012) is being organized under the joint auspices of the Department of Information Technology, Government of Chhattisgarh (CEO, CHiPS as the nodal point) and the Computer Society of India's SIGeGOV. You may contact the following for more details:
❑ Shri A M Parial, CEO, CHiPS, Govt. of Chhattisgarh ([email protected]) OR
❑ Maj Gen (Retd) Dr R K Bagga, AVSM, Chairman, CSI-SIGeGOV ([email protected])
BY INVITATION ONLY!!!
9TH INTERNATIONAL CONFERENCE ON E-GOVERNANCE
(ICEG-2012)
29-30 December 2012, School of
Communication and Management Studies,
Cochin, Kerala, India
ICEG-2012 is the ninth event in the conference series. It aims to provide a forum for discussing research findings, strategies, policies, and technologies in the
growing field of e-governance.
ABOUT ICEG CONFERENCES
Honorable Former President Dr. Abdul Kalam inaugurated the first conference in 2003. In 2004, the Government of Sri Lanka, with the Prime Minister as the Chief
Guest, hosted the conference at Colombo. In 2005, Lahore University of Management Sciences hosted it. Since 2006, the special interest group on e-governance
of the Computer Society of India (CSI) has been actively supporting the conference. In 2006, it was hosted by IIT Delhi and in 2007, by University of Hyderabad.
In 2008, IIT Delhi hosted it again, and IIM Bangalore hosted the seventh event with a focus on Microfinance and Healthcare. In 2011, the conference was hosted by Nirma University, Gujarat, with the active support of the State Government of Gujarat. In 2012, ICEG is being hosted by the SCMS group of educational institutions at Cochin, in Kerala, famed as "God's Own Country".
FOR WHOM?
A significant annual event in the e-governance area, the conference promises to be a refreshing convergence of the best intelligentsia from the academia and the
industry. Scientists, faculty, and students from prestigious universities across the globe have expressed a keen desire to be part of this event. There is a strong
representation of NGOs/Government officers and volunteers as well as assured industry participation.
WHO DESIGNS THE CONFERENCE PROGRAM?
An International Advisory Committee (IAC), comprising international experts and representatives from the government as well as the industry, will guide the
development of the conference program.
A Program Committee (PC) will supplement the activities of IAC including identification of moderators and speakers, as well as the agenda for the conferences.
It will manage the parallel track sessions and drive the review process. The local Organizing Committee (OC) will ensure the smooth implementation of the
planned agenda on a day-to-day basis.
IMPORTANT MILESTONES
Last Date for Submission of Papers: November 15, 2012
Acceptance/Rejection Notification: November 30, 2012
Camera Ready Papers Submission: December 5, 2012
Conference Dates: December 29-30, 2012 @ SCMS Cochin, Kerala, India
CONTACT: ICEG 2012 SECRETARIAT
Program Co-Chairs:
Professor Dr. Raman Nair, Director, SCMS-Cochin, India, (Sponsorships/logistics/registration/accommodation/local administration)
Email: [email protected]
Professor Indu Nair, Director, SCMS-Cochin, India, e-mail: [email protected]
Address: SCMS Group of Institutions, Prathap Nagar, Muttom, Alwaye, Cochin, Kerala, India
Tel:+91-484-2623803, 2623804,2626153,2626154; Fax:+91-484-2623885
Chandana Unnithan, School of Information Systems, Faculty of Business and Law, Deakin University, Australia. E-mail: [email protected] (Paper
Submissions/Queries)
Bardo Fraunholz, School of Information Systems, Faculty of Business and Law, Deakin University, Australia
Email: [email protected].
Address: Deakin University, Faculty of Business and Law, 70, Elgar Road, Burwood, Melbourne, Victoria 3125, Australia.
CSI Elections 2013-2014/ 2013-2015
Dear CSI Members,
Under Byelaw 5.1.1 of the Computer Society of India, the Nominations
Committee (NC) is required to invite appropriate groups of members to
submit names of Voting Members for considering them for the various
elective offices of the ExecCom and the Nominations Committee as well
as Chapter Elections.
Members are accordingly invited to submit the names of candidates
who are valid Voting Members for the following elective offices:
For the Term 2013-2014 (April 1, 2013 - March 31, 2014)
1. Vice-President cum President Elect
2. Nominations Committee (3 members)
For the Term 2013-2015 (April 1, 2013 - March 31, 2015)
1. Hon. Treasurer
2. Regional Vice-President (Region I): Delhi, Punjab, Haryana, Himachal Pradesh, Jammu & Kashmir, Uttar Pradesh, Uttaranchal, and other areas in Northern India
3. Regional Vice-President (Region III): Gujarat, Madhya Pradesh, Rajasthan, and other areas in Western India
4. Regional Vice-President (Region V): Karnataka and Andhra Pradesh
5. Regional Vice-President (Region VII): Tamil Nadu, Pondicherry, Andaman and Nicobar, Kerala, and Lakshadweep
6. Divisional Chairperson (Division I) - Hardware
7. Divisional Chairperson (Division III) - Applications
8. Divisional Chairperson (Division V) - Education & Research
The Proposal for Nomination Should Be Accompanied by:
1. Signed letter/e-mail from at least 2 valid voting members proposing the Nominee.
2. A signed letter/e-mail from the Nominee confirming:
   i. Acceptance to stand for election to the nominated office.
   ii. Willingness to devote adequate time for the Society's work.
   iii. Commitment to attend at least 3 ExecCom Meetings in a year (not for Nominees to NC).
3. Passport size photograph (printed or digital) in high resolution.
4. Statement of Intent on how the nominee intends to serve the Computer Society of India.
5. Bio-data in the following suggested format:
   i. Name:
   ii. CSI Membership No.:
   iii. CSI Membership since:
   iv. E-mail address:
   v. Date of Birth (Age):
   vi. Postal Address:
   vii. Phone/Mobile/Fax Nos.:
   viii. Educational Qualifications:
   ix. Publications - relevant to the office being nominated for:
   x. Contribution to the IT profession:
   xi. Contribution made to CSI:
   xii. Experience - relevant to the position nominated for:
   xiii. Honors/Professional Recognition:
   xiv. Other Relevant Information:
   xv. In case of Nominees who are holding or have held an elected post in CSI ExecCom or NC in the last 3 years:
      a. Positions held
      b. Statements of Intent submitted for the above positions
      c. Results achieved/action taken against the
Note-1
If the name of any Nominee appears for more than one Office, the Nominations Committee will be empowered to decide on the office for which he/she should contest. The NC will take into consideration any e-mail or signed written preferences submitted by the nominee received prior to the last date of nominations.
Note-2
Nominees will NOT be considered in the following cases:
(a) Nominees with pending dues to CSI or
(b) Where Disciplinary action has been taken or
(c) Nominees with pending issues with the Disciplinary
Committee.
The Last Date for Receipt of Nominations Is
November 14, 2012.
The proposals must be sent to
The Chairman, Nominations Committee
C/o Executive Secretary, Computer Society of India,
Samruddhi Venture Park, Unit No 3, 4th Floor, MIDC Andheri (East),
Mumbai–400 093 with a copy to:
Dr. D D.Sarma, Chairman, Nominations Committee
(E-mail: [email protected])
All election related notices will be published on the CSI Homepage
www.csi-india.org. The date of publishing election related notices on
the CSI Homepage www.csi-india.org will be considered as the date of
publication. As per Section 4.6.4, “The word mail includes e-mail and the
word publication includes web publication”.
The Proposed Dates for Various Stages of the Above Elections Are:
• Call for Nominations to be published on CSI Homepage: 06.10.2012
• In CSI Communications October 2012 issue: 13.10.2012
• Last date for receipt of nominations: 14.11.2012
• Last date for withdrawal of nominations: 21.11.2012
• Communication of slate by NC to ExecCom: 28.11.2012
• Slate to be published on CSI Homepage & in CSI Communications December 2012 issue: 10.12.2012
• E-mail posting of passwords: 12.12.2012
• Opening election site: 15.12.2012
• Last date for receipt of ballots (Internet): 16.01.2013
• Declaration of results: 21.01.2013
The dates may be changed by the Nominations Committee, if required - by
suitable announcements on the CSI Homepage www.csi-india.org
We would urge all members to register/update their latest e-mail ids
with the CSI Headquarters. This will allow the members to derive full
benefits from Internet Balloting and to take CSI to a leadership position in
showcasing the use of IT for Elections.
Elections for CSI Chapter
As also intimated in the past, Chapter Elections will also be held
simultaneously with the National Elections. Nominations Committees
at the Chapters will invite nominations for these positions from their
respective members.
There is no need to send Chapter elections nominations to National
NC-CSI. A copy may please be sent to CSI HQ., for record.
1. Vice Chairman cum Chairman Elect (2013-2014)
2. Nominations Committee (3 Members) (2013-2014)
3. Hon. Treasurer (2013-2015)
4. Managing Committee (2013-2014) (Class A: 8, Class B: 6, Class C: 4 Members)
CSI Nominations Committee 2012-2013
Dr. D D Sarma (Chairman)
Bipin V Mehta
Subimal Kundu
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]
Dear CSI Members,
If you have shifted to a new location or changed your Telephone Number, Mobile Number, Email id, kindly send your revised/new
contact details immediately to CSI HQ at [email protected]
Please help us to serve you better.
Executive Secretary
CSI News
From CSI Chapters »
Please check detailed news at:
http://www.csi-india.org/web/csi/chapternews-October2012
SPEAKER(S)
TOPIC AND GIST
LUCKNOW (REGION I)
Mr. Arvind Kumar and Prof. Bharat Bhasker
20 May 2012: Organized Storage Technology Day
Integral University organized Storage Technology Day at its campus, under the aegis of the CSI Lucknow Chapter. Mr. Arvind Kumar and Prof. Bharat Bhasker presented keynote addresses on the theme, which were very well received by the students.
Ms. Namita Agrawal and Mr. Ankit Agrawal
CSI Lucknow Past Chairman and Chairman along with University VC and other
panel members
18 August 2012: An IT Quiz Event in Tech Vista Fest
Many schools of Lucknow participated in the IT Quiz Event at the Tech Vista Fest. The event started with a prelims round, followed by a stage quiz round. Ms. Namita Agrawal and Mr. Ankit Agrawal were the quiz masters of the event. "It was a great experience to interact with ingenious and quick-witted young minds," said quizmaster Namita Agrawal. Lucknow Public School was the winner of the event.
Proceedings of the IT Quiz. Quizmasters in the inset.
AHMEDABAD (REGION III)
Prof. Jayesh M Solanki and Mr. Vispi Munshi
4 August 2012: Free Lecture on “Business Intelligence and Data
Warehousing Concepts and Best Practices”
Prof. Jayesh M Solanki introduced the topic and its importance, along with the speaker's profile. The lecture provided exposure to real-world scenarios in Business Intelligence and Data Warehousing. Mr. Vispi Munshi covered a basic understanding of what a data warehouse is and how it can be used. He also explained various aspects of core data warehouse enablement techniques in practice.
Prof. Jayesh M Solanki gave the introduction about the basic understanding of Data Warehousing
BHOPAL (REGION III)
Dr. V D Garde and Shri Hariranjan Rao
29 August 2012: Workshop on “e-finance & e-payments solutions in
changing scenario”
CSI, in association with MAP_IT & ICICI Bank, organized the workshop, which was mainly attended by government and bank officials. From CSI, Dr. V D Garde was the chief speaker, focusing on the security aspects of e-transactions. Secretary IT & Secretary to CM Hariranjan Rao spoke about the current scenario of e-payments worldwide and their benefits. Senior officials from ICICI Bank highlighted their e-payment solutions with a flavor of applications and benefits.
Dr. V D Garde with Shri Hariranjan Rao and Shri Sandeep Kumar
BANGALORE (REGION V)
Mr. Vinay Krishna
25 August 2012: Workshop on “Object-oriented Design”
This workshop was a hands-on training for designing and developing
applications using OO concepts and OOD principles. The workshop was
divided into several sections which covered basics of Object-oriented
concepts, Object-oriented principles, Object-oriented Design Principles
(SOLID), and Other OO Design Principles. The participants got hands-on
practice on OO Design principles. They learnt how to apply these concepts
and principles and also got to know common mistakes that one must avoid.
Participants with speaker
NASHIK (REGION VI)
Shekhar Paranjpe
2 September 2012: E-mail broadcasting under “Knowledge Sharing Series”
completes 100 Weeks
Mr. Paranjpe acknowledged the guidance and support
received from Mr. Karode, Fellow, CSI and from
Mr. Kenge and Mr. Seth of CSI, Nashik.
Shekhar Paranjpe broadcast the 100th weekly mail under the illustrious "Knowledge Sharing Series". The activity completed 100 weeks of distribution of interesting IT-related articles along with apt e-cartoons. Every month, a link to an online IT Quiz used to be inserted in the e-mail for the recipients to participate. Starting with a few recipients, steady additions to the e-group over the last 100 weeks have now taken the number past 400. He expressed satisfaction about keeping recipients' interest alive and about their participation in the monthly online IT Quiz, the winner of which was declared and felicitated every month.
From Student Branches »
http://www.csi-india.org/web/csi/chapternews-October2012
SPEAKER(S)
TOPIC AND GIST
AES INSTITUTE OF COMPUTER STUDIES (AESICS), AHMEDABAD (REGION-III)
Mr. Jigar Pandya
26 July 2012: Seminar on “Cloud Computing using Salesforce.com”
Mr. Jigar Pandya explained the following topics in the seminar in an interactive way, with industry project case studies: different cloud services, Salesforce CRM, cloud architecture, cloud-based application development using force.com, and demos of various cloud applications. He also discussed case studies of business organizations benefiting from Salesforce.com cloud services.
Shri Madhukant Patel
Mr. Jigar Pandya conducting the Seminar
1 August 2012: Expert talk on “Interfaces & Challenges for software
application developers”
The expert talk motivated the "Future ICT Professionals" to address the various challenges in software application development. Shri Patel discussed various challenges and interface issues related to database interfaces, hardware interfaces, language interfaces, application interfaces, and user interfaces. He also shared various application ideas related to e-Learning, telemedicine, GIS, and mobile devices.
Shri Madhukant Patel addressing students during the expert talk
Prof. M L Saikumar
1 August 2012: Guest lecture on “Software Testing”
Prof. Saikumar provided exposure to the importance of software testing and its various aspects. He started his session with a practical demonstration of issues and challenges in software development and testing, explained different types of testing, test cases, and automated testing procedures, and concluded with real-life lessons. The lecture was attended by around a hundred and fifty students and faculty members.
Prof. Saikumar interacting with students during the lecture
GYAN GANGA INSTITUTE OF TECHNOLOGY AND MANAGEMENT (GGITM), BHOPAL, MP (REGION-III)
8 September 2012: Organized “Tech coder’s competition”
In the Tech Coder's Competition, students were given a real-time problem which they had to code using a programming language. The event was judged by faculty members of the college, and the winners were awarded prizes.
Students working on their projects
SARDAR VALLABHBHAI PATEL INSTITUTE OF TECHNOLOGY (SVIT), VASAD, GUJARAT (REGION-III)
Ms. Bharti Trivedi
16 August 2012: Seminar on “Green IT”
In the technical session on “Green IT”, students got a chance to understand
the need and importance of Green IT. The seminar also made the students
aware about E-waste Management. The Seminar proved to be a useful
source of information for the students, where they got to know about the
current scenario of Green IT as well as social issues regarding it.
Ms. Monali Salunke and Mr. Bhushan Narkhede
The students attending the "Green I.T." Seminar
30-31 August & 1 September 2012: Workshop on "Android Application Development"
As the Java-based Android platform is gaining popularity, Ms. Monali Salunke and Mr. Bhushan Narkhede introduced the Android user interface and programming. Each student was provided with resources to develop a Google Maps application on the Android interface, which was a wonderful learning experience for the students.
Students attending the Android Application Development Workshop by Ms. Monali Salunke
GMR INSTITUTE OF TECHNOLOGY (GMRIT), RAJAM (REGION-V)
Mr. K Venkata Rao
31 July 2012: Guest Lecture on “Advanced Image Processing Techniques
based on Fuzzy Systems”
Mr. Rao spoke about the Introduction to Image Processing, Components of an Image Processing System, and Image File Formats. He explained how medical image processing is useful for CT scan, MRI, Ultrasound, X-Ray, Nuclear Medicine, Cardiac Imaging, Interventional Radiology, and Virtual Colonoscopy. He elucidated Fuzzy Operators, Fuzzy Rules, Membership Functions, Applications of Fuzzy Logic, and Fuzzy Image Processing.
Dr. D Madhavi felicitating the speaker Mr. K Venkata Rao. The picture also shows students participating in the Guest Lecture
R.V. COLLEGE OF ENGINEERING, BANGALORE (REGION-V)
Mr. Amit Grover and Mr. Siddarth Goyal
17-18 July 2012: Entrepreneurship Workshop on theme “National
Technology Entrepreneurship Training Program”
The program started with a brief introduction to 'Entrepreneurship: Idea to Execution' by Mr. Amit Grover. He presented various aspects of idea generation, business plans, finance, marketing, and legal issues. Mr. Siddarth discussed WordPress, eCommerce, and Web marketing, with hands-on practice in the lab. This opportunity made the participants familiar with executing a simple Web marketing assignment using PHP & MySQL.
The participants with expert trainers of Nurture Talent Academy
REVA INSTITUTE OF TECHNOLOGY AND MANAGEMENT, BANGALORE (REGION-V)
Suman Kumar & T N SeethaRamu
25-26 August 2012: Two-day workshop on “ANDROID - The Future of
Mobile Computing”
On Day 1, Suman Kumar spoke about the introduction and system architecture of Android. The anatomy of an Android application was also covered, along with hands-on sessions on building, running, and debugging the user's application. On Day 2, he gave an introduction to event-handling programs. Binder Interprocess Communication (IPC) with AIDL, along with Android multimedia internals, was also covered.
Participants of the Android workshop along with the Guests
GODAVARI COLLEGE OF ENGINEERING, JALGAON (REGION-VI)
Prof. Umesh S Bhadade
21-22 March 2012: Two-day Workshop on “C-Graphics”
Prof. Umesh S Bhadade gave a brief introduction to the theory and practice
of computer graphics. The workshop emphasized understanding the various elements underlying computer graphics and acquiring the skills to generate marketable computer graphics.
Prof. Umesh S Bhadade delivering the lecture
K.K. WAGH INSTITUTE OF ENGINEERING EDUCATION AND RESEARCH (KKWIEER), NASHIK (REGION-VI)
Mr. Chinmay Saraswat
18-21 August 2012: Four-day Workshop on “IBM RAD”
The following topics were covered in the IBM RAD workshop: workbench basics, Java development, Web development basics, working with databases, packaging and deployment, debugging Web applications, and testing Web applications.
The trainer and participant students of IBM RAD
A.V.C. COLLEGE OF ENGINEERING, TAMILNADU (REGION-VII)
Dr. C Loganathan and Mr. K Anada Sithan
21 August 2012: Workshop on “Hardware and OS Installation”
Dr. C Loganathan delivered the presidential address and urged students to make full use of the workshop. The speaker, Mr. K Anada Sithan, delivered a lecture on hardware and OS installation, sharing his insights on hardware installation, troubleshooting, and OS installation. He also gave a demonstration of assembling the system parts.
Dignitaries on dais
MAR BASELIOS COLLEGE OF ENGINEERING AND TECHNOLOGY, (MBCET) TRIVANDRUM (REGION-VII)
Mr. Neelankantan G
7 September 2012: Organized “Bytes, the Technical Quiz competition”
Participation in Bytes was restricted to teams of two, with written prelims to find the five finalists. The written prelims were conducted on 17 August, with 32 teams taking part. The finals carried all the thrill and suspense of a real quiz show, consisting of 8 different rounds, each unique in its content and steering the quiz in unexpected directions. Cash prizes were awarded by the Principal to the 1st and 2nd positions.
Bytes 2012 Technical Quiz competition
M.A.M. COLLEGE OF ENGINEERING (MAMCE), TIRUCHIRAPALLI (REGION-VII)
Prof. K Velmurugan
18 July 2012: Seminar on “Web Services And Service Oriented Architecture
(SOA)”
Prof. Velmurugan started with the basics of Web services and the concepts of service provider, service consumer, and service contract. Services are units of business functionality: autonomous, discrete, reusable, and exposing their capabilities in the form of contracts. Web services are the technology that helps implement Service Oriented Architecture (SOA). The speaker explained the basic standards for Web services, such as XML (Extensible Markup Language), SOAP (Simple Object Access Protocol), WSDL (Web Services Description Language), and UDDI (Universal Description, Discovery, and Integration).
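The SOAP standard mentioned above can be sketched concretely: a client wraps the operation it wants to invoke in an XML "envelope". The snippet below (a minimal illustration, not from the seminar; the service namespace and "GetQuote" operation are hypothetical) builds such a SOAP 1.1 request envelope.

```python
import xml.etree.ElementTree as ET

# SOAP 1.1 envelope namespace, as defined by the standard.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_envelope(operation, service_ns, params):
    """Build a SOAP 1.1 request envelope invoking `operation`
    in the (hypothetical) service namespace, with the given parameters."""
    ET.register_namespace("soap", SOAP_NS)
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{service_ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{service_ns}}}{name}").text = str(value)
    return ET.tostring(env, encoding="unicode")

# Hypothetical stock-quote service: the operation and namespace are
# illustrative assumptions, not a real endpoint.
request = soap_envelope("GetQuote", "http://example.com/stock", {"symbol": "CSI"})
print(request)
```

In a full SOA stack, the WSDL document would describe the operation's signature and endpoint, and UDDI would let consumers discover the service.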
Lecturer conducting the Seminar on SOA
Dr. K Rameshwaran, Dr. K Kumar, and Mr. Adarsh Rajput
26-27 July 2012: Workshop on "Android Application Development"
Dr. Rameshwaran, in his inaugural address, enlightened students on the need to stay updated with current trends to face the competitive world. Dr. Kumar felicitated the gathering and, in his address, said that workshops are great tools for enhancing technical skills, fostering research-oriented activities, and preparing young graduates to face technical challenges. Mr. Adarsh Rajput briefed the audience on techniques for developing Android applications.
From left: Prof. H Parveen Begam, Dr. K Kumar, Dr. K Rameshwaran, Mr. Vaibhav Gupta, Mr. Adarsh Rajput, and Mrs. S Kavitha
MEPCO SCHLENK ENGINEERING COLLEGE, SIVAKASI, TAMILNADU (REGION-VII)
Dr. K Muneeswaran, Mr. Sairam, and Mr. Prithivi
21 April 2012: Workshop on "Android"
Mr. Sairam gave a presentation providing an overview of Android. He explained the role of OOP in Android, the history and background of Android, and the structure of an Android project. Mr. Prithivi then described various aspects of the world of mobile apps, Enterprise Resource Planning (ERP) basics, the job market for Android developers and freshers, and the future of mobile application development.
Mr. RamKumar K Ramamoorthy and Mr. Parthasarathy
Chinnachamy, Ms. M N Saroja and Ms. J Angela Jenefa
Sujana
Mr. RamKumar K Ramamoorthy and Mr. Parthasarathy Chinnachamy gave an introduction to cloud computing and virtualization, a demo on Windows Azure, and an account of the research challenges in the field. Ms. M N Saroja gave hands-on training using the HADOOP simulator tool. Mrs. J Angela Jenefa Sujana elucidated and demonstrated the EUCALYPTUS tool, which is used for cloud computing. Dr. T Revathi presented the CloudSim simulation tool.
Speaker conducting the workshop
12-13 September 2012: Two-day Workshop on "Research Issues in Cloud Computing and Its Simulation"
During the workshop
www.csi-india.org
NATIONAL COLLEGE OF ENGINEERING (NCE), MARUTHAKULAM, TIRUNELVELI (REGION-VII)
Mr. K Baskar and Mr. S Sivabalan
23–24 August 2012: Two-day National Level Workshop on "Free Open
Source Software (FOSS)”
Mr. K Baskar and Mr. S Sivabalan covered topics such as Kernel Configuration
& Compilation, Virtualization, Perl Programming, Python Programming and
PHP scripts, Package management etc. in detail.
(Speakers on the dais) (L to R): Prof. Dr. M Mohamed Sitheeq, Mr. A Mohamed Anwar, Mr. K Baskar, Mr. S Sivabalan, and Prof. Dr. S Kother Mohideen
NATIONAL ENGINEERING COLLEGE, KOVILPATTI (REGION-VII)
Dr. D Manimegalai, Mrs. M Stella Inba Mary, and Mr. L Jerart Jolous
9 August 2012: Technical Quiz
The quiz covered the basics of data structures, C, and websites. After five rounds of technical questions, the first and second prizes were bagged by Gomatheeswari S & Krithika K of II CSE-A and Prakash V & Muthukumar K of II CSE-B respectively.
Quiz conducted by Mr. R Vijay of III CSE & Ms. G Gayathri of III IT, and the participants taking part in it
VELAMMAL INSTITUTE OF TECHNOLOGY (VIT), PANCHETTI, CHENNAI (REGION-VII)
Mr. S Soundararajan, Dr. T Chandrashekar, and Mr. Y Kathiresan
24 July 2012: Motivational Seminar
Mr. S Soundararajan spoke about the importance of CSI. Dr. T Chandrashekar insisted on the students' role in taking part in CSI activities. Mr. Kathiresan inaugurated the Department Association and addressed students with a motivational presentation on the need for better placements and careers. He encouraged students to come up with innovative projects, advised them to enrich themselves with technical knowledge and to develop communication skills by participating in various programs such as seminars, symposia, and workshops, and urged them to emerge successful in campus placement drives.
From left: Mr. Kathiresan, Prof. Razak, Mr. S Soundararajan, and on mic Dr. T Chandrashekar
VICKRAM COLLEGE OF ENGINEERING, TAMILNADU (REGION-VII)
Y Kathiresan
31 July 2012: Lecture on "Your unique identity"
The session on "Your unique identity" revealed the real value of a person's identity in life. It was an engaging session, since every statement helped the students relate to the present-day scenario and showed them the path of excellence. The ideas delivered by Y Kathiresan were real-life formulas. To gain uniqueness, students should know their strengths and weaknesses, and this was formulated well using the SWOT technique.
Honoring the guest in the Orientation Program
Please send your event news to [email protected]. Low-resolution photos and news without a gist will not be published. Please send only 1 photo per event, not more. Kindly note that only news received on or before the 20th of a month will be considered for publishing in the CSIC of the following month.
CSI Membership = 360° Knowledge
WE INVITE YOU TO JOIN
Your membership in CSI provides instant
access to key career / business building
resources - Knowledge, Networking,
Opportunities.
Computer Society of India
Join us
India's largest technical
professional association
and
become a member
CSI provides you with 360°
coverage for your Technology goals
Learn more at www.csi-india.org
I am interested in the work of CSI. Please send me information on how to become an individual/institutional* member
Name ______________________________________ Position held_______________________
Address______________________________________________________________________
______________________________________________________________________
City ____________Postal Code _____________
Telephone: _______________ Mobile:_______________ Fax:_______________ Email:_______________________
*[Delete whichever is not applicable]
Interested in joining CSI? Please send your details in the above format on the following email address. [email protected]
Computer Society of India
IS LOOKING FOR
Director - Education
TO BE LOCATED AT ITS HEADQUARTERS PREMISES AT CHENNAI
Responsibilities:
☞ Academics is a thrust area of CSI and the Director is expected to spearhead this activity with emphasis on quality.
☞ Students form an important segment of CSI membership. The Directorate is responsible for enhancing membership and providing services to the large number of students spread across the country.
☞ Creation of new curricula and enabling delivery of courses and administration of examinations through chapters.
☞ Certification of courses offered by different institutions.
☞ Establishing relationships with academic and commercial institutions to serve the interests of CSI members.
The Right Candidate:
☞ A professional with a passion for academics and the ability to interact with academics, educational institutions, and the Government;
☞ An academic - serving or retired - will fit the role well; and
☞ Good administration and public relations capability are important requirements.
☞ The position is on a contract basis for a period of three years, extendable on the basis of performance.
☞ Remuneration commensurate with similar positions in other not-for-profit organizations.
☞ Interested candidates may send their profile to [email protected] before 15th Nov.
Profile of Prof. R P Soni
➢ Prof. R P Soni was Director of the Rollwala Computer Centre at Gujarat University for 32 years and is now Campus Director (Computer Education) at Gujarat Law Society, where he has managed the MCA and BCA programs for the last 12 years. Deeply involved in education, he took the lead in initiating and establishing Computer Science courses at several universities of the state. He has authored numerous textbooks for different levels and published the research findings of an international project with the University of Regina, Canada. He has been a pioneer of the CSI Ahmedabad Chapter ever since it was established in 1969 and has served the society in various capacities, including Chairman. He has also organized a range of state- and national-level seminars under CSI.
➢ He is Fellow of CSI and was Convention Chair for the 46th Annual Convention “CSI-2011” held at Ahmedabad.
www.csi-india.org
CSI Calendar 2012
Prof. S V Raghavan, Vice President & Chair, Conference Committee, CSI
Date | Event Details & Organizers | Contact Information
October 2012 Events
12-13 Oct. 2012 | 26th Annual Karnataka Student Convention on Green Computing - Challenges & Change, Siddaganga Institute of Technology, Tumkur, Karnataka | Prof. Sunita, [email protected]
20 Oct. 2012 | Communication Technologies & its Impact on Next Generation Computing (CTNGC-2012), I.T.S - Management & IT Institute, Mohan Nagar, Ghaziabad, U.P. | Prof. Umang, [email protected]; Prof. Ashish Seth, [email protected]; Prof. Alka Agrawal, [email protected]
November 2012 Events
5-6 Nov. 2012 | Fourth e-Governance Knowledge Sharing Summit (KSS-2012), Govt. of Chattisgarh in association with CSI-SIGeGOV, at Hotel V W Canyon, Raipur | Mr. A M Parial, [email protected]; Maj. Gen. (Retd.) Dr. R K Bagga, [email protected]
9-10 Nov. 2012 | FDP on Intelligent Computing, New Horizon College of Engineering, Bangalore | Prof. Ilango, [email protected]
9-10 Nov. 2012 | Cloud Computing, MVJCE, Bangalore | Prof. Avayamba, [email protected]
15-16 Nov. 2012 | International Conference on Demand Computing (ICODC-2012), Oxford College of Engineering, Bangalore | Prof. Jayaramiah, [email protected]
29 Nov.-1 Dec. 2012 | Third International Conference on Emerging Applications of Information Technology (EAIT 2012), CSI Kolkata Chapter event at Kolkata, URL: https://sites.google.com/site/csieait2012/ | D P Mukherjee / Debasish Jana / Pinakpani Pal / R T Goswami, [email protected]
December 2012 Events
1-2 Dec. 2012 | 47th Annual National Convention of CSI (CSI 2012), CSI Kolkata Chapter event at Kolkata, URL: http://csi-2012.org/ | Subimal Kundu / D P Mukherjee / Phalguni Mukherjee / J K Mandal, [email protected]
2 Dec. 2012 | CSI Nihilent eGovernance Awards 2011-12 & Release of eGov Case Studies Book, Venue: Science City, Kolkata | Surendra Kapoor, [email protected]; GSN Prabhu, [email protected]; P Harish Iyer, [email protected]; Nityesh Bhatt, [email protected]
6-8 Dec. 2012 | Second IEEE International Conference on PDG Computing (PDGC 2012), technically sponsored by the CSI Special Interest Group on Cyber Forensics, at Jaypee University of Information Technology, Waknaghat-Solan (HP), http://www.juit.ac.in/pdgc-2012/index1.php | Dr. Nitin, [email protected]; Dr. Vipin Tyagi, [email protected]
13-15 Dec. 2012 | International Conference on Multimedia Processing, Communication, and Computing Applications (ICMCCA), PES Institute of Technology, CSI Division IV & Bangalore Chapter, http://www.icmcca.in | Prof. Santosh Katti, [email protected]; Dr. Anirban Basu, [email protected]; Sanjay Mohapatra, [email protected]
14-16 Dec. 2012 | International Conference on Management of Data (COMAD-2012), SIGDATA, CSI Pune Chapter and CSI Division II | Mr. C G Sahasrabudhe, [email protected]
18-20 Dec. 2012 | Alan Turing Year India Celebrations - Teacher Training, Subject: "Simplification in Intelligent Computing Theory and Algorithms", Bangalore, http://www.csi-india.org/web/csi/division2; www.faer.ac.in/ | Dr. D K Subrahmanian, [email protected]; Dr. Rajanikanth, [email protected]; Dr. T V Gopal, [email protected]
19-21 Dec. 2012 | International Conference on Software Engineering & Mobile Application Modelling & Development (ICSEMA-2012), CSI Div IV (Communications), B S Abdur Rahman University, Chennai & Deakin University, Australia, http://icsema.bsauniv.ac.in/ | Dr. K M Mehata, [email protected], [email protected]; Mr. Sanjay Mohapatra, [email protected], [email protected]
20-22 Dec. 2012 | Futuristic Computing - ICFC 2012, RVCE, Bangalore | Prof. Sumitra Devi, [email protected]
January 2013 Events
29-31 Jan. 2013 | International Conference on Reliability, Infocom Technologies, and Optimization (Trends and Future Directions), Amity Institute of Information Technology, Amity University, CSI and IEEE | Prof. Sunil Kumar Khatri, [email protected]
February 2013 Events
19-20 Feb. 2013 | International Conference on Advance Computing and Creating Entrepreneurs (ACCE2013), SIG-WNs, Div IV and Udaipur Chapter CSI and GITS Udaipur, http://www.acce2013.gits.ac.in/ | Dr. Dharm Singh, [email protected]; Sanjay Mohapatra, [email protected]; Ms. Ridhima Khamesra, [email protected]
Registered with Registrar of News Papers for India - RNI 31668/78
Regd. No. MH/MR/N/222/MBI/12-14
Posting Date: 10 & 11 every month. Posted at Patrika Channel Mumbai-I
Date of Publication: 10 & 11 every month
icsema.bsauniv.ac.in
The international conference is the first of its kind to focus on software engineering principles applied to mobile application development, as cloud computing and mobility are among the fastest-growing technologies. The main objective of this international conference is to bring together members of the research community and industrial practitioners to explore the challenges, issues, and opportunities in software engineering and mobile application development using modern software engineering principles and practices.
The ICSEMA 2012 scope includes all areas in software engineering, cloud computing, mobile applications, and modelling and development. A student project contest is arranged as part of the conference, inviting projects developed by students from various institutions. A peer review committee will shortlist the best projects, and the selected projects will be presented during the conference.
One main highlight of this conference is a special U.K. session showcasing mobile technological developments and several products on mobile devices especially designed and developed in the U.K. About three to four industries from the U.K. are expected to participate in the conference.
All accepted papers will be included in IET Digital Library searchable
through IEEE Xplore and will be submitted for EI Compendex and ISI
indexing.
The conference is organized in partnership with Deakin University, Australia, and the Computer Society of India (CSI Division IV (Communication)). The conference is sponsored by the Defence Research Development Organization (DRDO), the Council of Scientific and Industrial Research (CSIR), UK Trade & Investment, the British Deputy High Commission, and Paavai Educational Institutions, and many more are expected to participate.
Leading international experts such as Tony Wasserman (USA), Dr. Hai Jin (China), Dr. Sajal K Das (USA), and many more renowned researchers will grace the event and deliver keynote lectures.
Papers must be submitted electronically to [email protected].
Conference Sponsor: CSIR & DRDO Technical Sponsor: CSI
If undelivered return to :
Samruddhi Venture Park, Unit No.3,
4th floor, MIDC, Andheri (E). Mumbai-400 093
International
Conference on Software
Engineering and Mobile
Applications Modelling
and Development
19-21 December 2012
B S Abdur Rahman University
General chair
Wanlei Zhou, Deakin University
V.M.Periasamy, BSAU
Programme Chair
Jingyu Hou, Deakin University
K.M.Mehata, BSAU
Publicity Chair
V N A Jalal, BSAU
Workshop/Tutorial Chair
Robin Doss, Deakin University
P.Sheik Abdul Khader, BSAU
Student Project Chair
Kok Leong Ong, Deakin University
Angelina Geetha, BSAU
Organizing committee chair
R.Shriram BSAU
W.Aisha Banu , BSAU
Important dates
Paper Submission
: 30.09.2012
Intimation to Authors: 15.10.2012
Final copy for proceedings: 1.11.2012
Registration Fees
Academic Staff: Rs. 2500
Research Scholar: Rs. 2000
Students: Rs. 1000
Foreigners: $400
For further details contact
Mr. Sanjay Mohapatra
Chairman, CSI Division IV (Communication)
Email: [email protected]
Dr. K M Mehata
Email: [email protected]
U.K. Industry Sponsor: British Deputy High Commission