4th International Conference on System Modeling & Advancement in Research Trends (SMART)
[2015]
College of Computing Sciences and Information Technology (CCSIT), Teerthanker Mahaveer University, Moradabad
EVALUATION OF ANTHOCNET PERFORMANCE WITH VARYING OFFERED LOAD AND NETWORK SIZE
Mohd. Salman Siddique1, Mrs. Sahana Lokesh R2
CCSIT, TMU, Moradabad
[email protected]
Abstract—A Mobile Ad hoc Network (MANET) is a collection of wireless mobile nodes forming a temporary network without using centralized access points or infrastructure. Ad hoc wireless multi-hop networks (AHWMNs) are communication networks of wireless nodes created without prior planning. All the nodes have routing capabilities and forward data packets for other nodes in a multi-hop fashion. AHWMN routing protocols are categorized as topology-based, bio-inspired and position-based routing protocols.
The main aim of this paper is to evaluate the performance of AntHocNet, which is based on the ant foraging behaviour, along with the Ad hoc On-demand Distance Vector (AODV), Dynamic Source Routing (DSR) and Dynamic MANET On-demand (DYMO) routing protocols, using the network simulator ns-2.34 at different numbers of nodes and at different data rates. All results have been analyzed based on packet delivery ratio, average end-to-end delay and routing overhead by varying the number of nodes and the data rates.
Keywords—AHWMN, AntHocNet, AODV, DSR, DYMO.
I. INTRODUCTION
Rapid growth and research have been seen in the field of Mobile Ad Hoc Networks (MANETs) due to their dynamic nature and infrastructure-less end-to-end communication.
Ad hoc wireless multi-hop networks (AHWMNs) [1] are collections of mobile devices which form a communication network with no pre-existing infrastructure. Routing in AHWMNs is
challenging since there is no central coordinator
that manages routing decisions. Routing is the task
of constructing and maintaining the paths that
connect remote source and destination nodes. This
task is particularly hard in AHWMNs due to issues
that result from the particular characteristics of
these networks. The first important issue is the fact that AHWMNs are dynamic networks. This is because of their ad hoc nature: connections
between nodes in the network are set up in an
unplanned manner, and are usually modified while
the network is in use. An AHWMN routing
algorithm should be adaptive in order to keep up
with such dynamics. A second issue is the unreliability of wireless communication. Data and control packets can easily get lost during transmission, especially when mobile nodes are involved, and when multiple transmissions take place simultaneously and interfere with one another. A
routing algorithm should be robust with respect to
such losses. A third issue is caused by the often
restricted capabilities of the AHWMN nodes.
There are limitations in terms of node processing
power, battery power, memory, network
bandwidth, etc. It is therefore important for a
routing algorithm to work in an efficient way.
Finally, the last important issue is the network size.
With the ever growing numbers of portable
wireless devices, several AHWMNs are expected
to grow to massive sizes. Routing algorithms
should be scalable to keep up with such evolutions.
Biology does provide solutions to scalability. Computer networking is one engineering field which has many parallels with biology, and hence solutions from biology can be used to solve problems in computer networks.
Swarm intelligence is the property of a system whereby the collective behaviours of unsophisticated agents interacting locally with their environment cause coherent functional global patterns to emerge. Swarm intelligence provides a basis with which it is possible to explore collective problem solving without centralized control or the provision of a global model.
II. MANET PROTOCOLS
In the following subsections we discuss the most commonly used standard routing protocols AODV, DSR and DYMO, along with AntHocNet.
A. Ad hoc On-demand Distance Vector (AODV):
The AODV routing protocol is a reactive routing protocol that uses some characteristics of proactive routing protocols. Routes are established on demand, as they are needed. Reactive routing protocols find a path between the source and the destination only when the path is needed.
In AODV, the network is silent until a connection is needed. At that point the network node that needs a connection broadcasts a request for connection. Other AODV nodes forward this message and record the node that they heard it from, creating an explosion of temporary routes back to the requesting node.
When a node receives such a message and already has a route to the destination, it sends a message backwards through a temporary route to the requesting node. The requesting node then begins using the route that has the least number of hops through other nodes. Unused entries in the routing tables are recycled after a time.
When a link fails, a routing error is passed back to the transmitting node, and the process repeats.
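The sketch below illustrates the reactive route discovery idea described above. It is a minimal model, not the ns-2 AODV implementation: a source floods a request, intermediate nodes remember the neighbour they first heard it from (a reverse route), and the destination's reply follows that reverse path, yielding a fewest-hop route.

```python
# Minimal sketch of AODV-style reactive route discovery (illustrative only):
# flood a RREQ, record reverse hops, and return the fewest-hop path found.
from collections import deque

def discover_route(graph, source, destination):
    """graph: dict node -> list of neighbour nodes (current connectivity)."""
    reverse_hop = {source: None}          # node -> neighbour it first heard the RREQ from
    queue = deque([source])               # broadcast front of the RREQ flood
    while queue:
        node = queue.popleft()
        if node == destination:           # destination reached: trace back the RREP path
            path, hop = [], node
            while hop is not None:
                path.append(hop)
                hop = reverse_hop[hop]
            return list(reversed(path))   # fewest-hop route (BFS order)
        for neighbour in graph[node]:
            if neighbour not in reverse_hop:   # ignore duplicate RREQs
                reverse_hop[neighbour] = node
                queue.append(neighbour)
    return None                           # no route: a RERR would be reported

if __name__ == "__main__":
    topology = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
    print(discover_route(topology, 0, 4))  # e.g. [0, 1, 3, 4]
```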
B. Dynamic Source Routing (DSR):
Dynamic Source Routing (DSR) is a routing protocol for wireless mesh networks. This protocol is truly based on source routing, whereby all the routing information is maintained at the mobile nodes.
It has only two major phases, which are Route Discovery and Route Maintenance.
A Route Reply is generated only when the message has reached the intended destination node. To return the Route Reply, the destination node must have a route to the source node. If the route is in the destination node's route cache, that route is used. Otherwise, the node will reverse the route based on the route record in the Route Request message header.
In the event of a fatal transmission error, Route Maintenance is initiated, whereby Route Error packets are generated at a node. The erroneous hop is removed from the node's route cache, and all routes containing that hop are truncated at that point.
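To complement the AODV sketch, the following is a minimal illustration of the source-routing idea behind DSR, not the full protocol: the Route Request accumulates the list of nodes it traverses, and the destination reverses that record so the Route Reply can travel back along the discovered route.

```python
# Minimal sketch of DSR-style source routing (illustrative, not the full DSR logic):
# the RREQ carries the route record it has traversed; the destination reverses it.
def propagate_rreq(graph, source, destination):
    """Flood that returns the first route record reaching the destination."""
    stack = [[source]]                       # each entry is the route record so far
    while stack:
        record = stack.pop()
        node = record[-1]
        if node == destination:
            return record                    # full source route carried in the packet header
        for neighbour in graph[node]:
            if neighbour not in record:      # avoid loops in the route record
                stack.append(record + [neighbour])
    return None

def build_rrep(route_record):
    """The destination reverses the accumulated record to reach the source."""
    return list(reversed(route_record))

if __name__ == "__main__":
    topology = {"S": ["A"], "A": ["S", "B"], "B": ["A", "D"], "D": ["B"]}
    record = propagate_rreq(topology, "S", "D")
    print(record, build_rrep(record))        # ['S','A','B','D'] ['D','B','A','S']
```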
C. Dynamic MANET On-demand Routing (DYMO):
DYMO is a purely reactive protocol in which routes are computed on demand, i.e. as and when required. It is a descendant of the AODV protocol, can operate both proactively and reactively, and extends AODV with the source-route path accumulation feature of DSR. DYMO is thus a combination of AODV and DSR: it is based on the AODV structure but works with the mechanism of DSR. The DYMO protocol has two basic processes, route discovery and route management. In the route discovery process, the source node initiates broadcasting of a Route Request (RREQ) packet all through the network to locate the sink node. When the sink node receives a RREQ, it replies with a Route Reply (RREP) message which is unicast towards the source. When the source receives the RREP, a bidirectional route is established between the nodes. In case of any link break or node failure, a Route Error (RERR) packet is sent to the source node to indicate the broken route. Once the source node receives an RERR, it re-initiates the route discovery process.
As the DYMO routing protocol is the successor to the popular Ad hoc On-demand Distance Vector routing protocol, it shares many of its benefits and has basic functions and operations similar to AODV. As a reactive protocol, DYMO does not explicitly store the network topology. The DYMO routing protocol offers good performance and is simple, compact, easy to implement and highly scalable, making it a very promising protocol.
D. ANTHOCNET:
AntHocNet is an adaptive routing algorithm for mobile ad hoc networks (MANETs) inspired by ideas from Ant Colony Optimization (ACO). In common MANET terminology, AntHocNet can be called a hybrid algorithm, as it combines both reactive and proactive routing strategies. The algorithm is reactive in the sense that it does not try to maintain up-to-date routing information between all the nodes in the network, but instead concentrates its effort on the pairs of nodes between which communication sessions are taking place. It is proactive in the sense that, for those ongoing communication sessions, it continuously tries to maintain and improve existing routing information. To gather routing information, the AntHocNet algorithm uses two complementary processes. One is the repetitive end-to-end path sampling using artificial ant agents. The other is what we call pheromone diffusion, an information bootstrapping process that allows routing information to be spread over the network in an efficient way. While the ant-based path sampling is the typical mode of operation of ACO routing algorithms, the pheromone diffusion process is in its working more similar to Bellman-Ford routing algorithms.
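The sketch below illustrates the ACO idea underlying AntHocNet rather than the actual AntHocNet implementation: sampled paths reinforce a pheromone table entry for the next hop in proportion to path quality, and packets pick their next hop stochastically in proportion to pheromone. The table layout and parameters (gamma, beta) are hypothetical.

```python
import random

# Hypothetical per-node pheromone table: destination -> {next_hop: pheromone value}.
pheromone = {"D": {"B": 1.0, "C": 1.0}}

def reinforce(destination, next_hop, path_quality, gamma=0.7):
    """Backward-ant style update: blend the old pheromone with the sampled path quality."""
    table = pheromone.setdefault(destination, {})
    old = table.get(next_hop, 0.0)
    table[next_hop] = gamma * old + (1.0 - gamma) * path_quality

def choose_next_hop(destination, beta=2.0):
    """Stochastically pick a next hop with probability proportional to pheromone**beta."""
    table = pheromone[destination]
    hops = list(table)
    weights = [table[h] ** beta for h in hops]
    return random.choices(hops, weights=weights, k=1)[0]

if __name__ == "__main__":
    reinforce("D", "B", path_quality=2.0)   # an ant found a good path through B
    reinforce("D", "C", path_quality=0.5)   # a worse path through C
    print(choose_next_hop("D"))             # usually "B", occasionally "C"
```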
III. SIMULATION ENVIRONMENT
To test and compare the performance of the AntHocNet protocol, the network simulator NS-2 [5], version 2.34, is used. The network model used in our simulation is composed of mobile nodes and links that are considered unidirectional and wireless. Each node is considered a communication endpoint (host) as well as a forwarding unit (router). In addition to NS-2, a set of tools, mainly Bash scripts and AWK filters, was developed to post-process the output trace files generated by the simulator. In order to evaluate the performance, multiple experiments were set up.
IV. PERFORMANCE METRICS
Different performance metrics are used in the evaluation of routing protocols. They represent different characteristics of the overall network performance. The metrics considered are routing overhead, packet delivery ratio and average end-to-end delay.
A. Packet Delivery Ratio
This is the ratio of the total number of packets successfully received by the destination nodes to the number of packets sent by the source nodes throughout the simulation.
B. Average End-to-End Delay
This is defined as the average delay in transmission of a packet between two nodes.
C. Routing Overhead
Routing overhead is the total number of routing packets. This metric provides an indication of the extra bandwidth consumed in addition to the data traffic.
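As an illustration of the post-processing mentioned in Section III, the sketch below computes the three metrics from per-packet send and receive records. The record layout is hypothetical and is not the exact NS-2 trace format.

```python
# Minimal sketch of the trace post-processing (record layout is hypothetical):
# compute PDR, average end-to-end delay and routing overhead.
def compute_metrics(data_sent, data_received, routing_packets):
    """data_sent / data_received: dicts packet_id -> timestamp (seconds)."""
    delivered = [pid for pid in data_received if pid in data_sent]
    pdr = len(delivered) / len(data_sent) if data_sent else 0.0
    delays = [data_received[pid] - data_sent[pid] for pid in delivered]
    avg_delay = sum(delays) / len(delays) if delays else 0.0
    overhead = routing_packets            # total routing packets transmitted
    return pdr, avg_delay, overhead

if __name__ == "__main__":
    sent = {1: 0.10, 2: 0.20, 3: 0.30}
    received = {1: 0.35, 3: 0.62}
    print(compute_metrics(sent, received, routing_packets=48))
    # -> (0.666..., 0.285, 48)
```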
V. RESULTS AND ANALYSIS
The following results show the packet delivery ratio, routing overhead and average end-to-end delay of AntHocNet, DYMO, AODV and DSR at
different data rates and different numbers of nodes with UDP as the transport protocol.
Fig. 1 No. of Nodes vs Routing Overhead
Fig. 2 No. of Nodes vs Packet Delivery Ratio
Fig. 3 No. of Nodes vs Avg. End-to-End Delay
Fig. 4 Data Rates vs Routing Overhead
Fig. 5 Data Rates vs Packet Delivery Ratio
Fig. 6 Data Rates vs Avg. End-to-End Delay

VI. CONCLUSION
The performance of AntHocNet is compared with the routing protocols AODV, DYMO and DSR using the performance metrics packet delivery ratio, average end-to-end delay and routing overhead. From the results it can be concluded that AntHocNet performs better at higher data rates and at higher numbers of nodes with UDP as the transport protocol. The performance of AntHocNet improves gradually as the data rates and the number of nodes in the network increase. When TCP is the transport protocol, the performance of AntHocNet is also better than that of DYMO, AODV and DSR at different data rates.

REFERENCES
[1] Traffic Investigation of AntHocNet with Varying Mobility, www.ijarcsse.com.
[2] Maahi Amit Khemchandani and B. W. Balkhande, "Comparative Analysis of AntHocNet, AODV, DSR Routing Systems for Improvising Loss Packet Delivery Factor".
[3] Frederick Ducatelle, Gianni A. Di Caro, and Luca M. Gambardella, "An analysis of the different components of the AntHocNet routing algorithm", Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA), Galleria 2, CH-6928 Manno-Lugano, Switzerland.
[4] Gianni Di Caro, "Ant Colony Optimization and its Application to Adaptive Routing in Telecommunication Networks", PhD dissertation, Brussels, September 2004.
[5] F. Ducatelle, "Adaptive Routing in Wireless Ad Hoc Multi-Hop Networks", 2007.
[6] R. M. Sharma, "Performance Comparison of AODV, DSR and AntHocNet Systems", Department of Computer Engineering, NIT Kurukshetra.
[7] F. J. Arbona Benet, "Simulation of Ant Routing System for Ad-hoc Networks in NS-2", 2006.
[8] D. B. Johnson, D. A. Maltz, and J. Broch, "DSR: The Dynamic Source Routing System for Multi-Hop Wireless Ad hoc Networks", Ad Hoc Networking, C. E. Perkins, Ed., Addison-Wesley, pp. 139-172, 2001.
[9] Ammar Odeh, Eman AbdelFattah and Muneer Alshowkan, "Performance Evaluation of AODV and DSR Routing Systems in MANET Networks", International Journal of Distributed and Parallel Systems (IJDPS), Vol. 3, July 2012.
[10] P. Chenna Reddy and T. Nishitha, "Bio-inspired Routing in Ad-hoc Networks", Journal of Engineering, Computers and Applied Sciences, ISSN: 2319-5606, Vol. 1, No. 1, October 2012.
[11] Gianni Di Caro, Frederick Ducatelle and Luca Maria Gambardella, "AntHocNet: An Adaptive Nature-Inspired Algorithm for Routing in Mobile Ad Hoc Networks", published online in Wiley InterScience (www.interscience.wiley.com), Vol. 16, pp. 443-455, May 2005.
[12] The VINT project, The ns Manual (formerly ns Notes and Documentation), http://www.isi.edu/nsnam/ns/ns-documentation.html, November 2011.
[13] J. Martins, S. Correia and J. Junior, "Ant-DYMO: A bio-inspired algorithm for MANETs", in Proc. 17th International Conference on Telecommunications, Qatar, Apr. 4-7, 2010, pp. 748-754.
Collaboration of Cloud Computing and IoT
in Smart Cities
Khan Atiya Naaz1, Dr. Harsh Kumar2, Ashish Bishnoi3
1,2 Department of CS, Dr K N Modi University, Newai, Rajasthan
3 CCSIT, TMU, Moradabad
[email protected]
[email protected]
[email protected]
Abstract—India is a developing nation and most of its population is moving to the cities for better opportunities. That is why P.M. Modi has announced the construction of 100 smart cities by 2022. This paper is intended to be a guide to information technology in smart cities, basically applying the concepts of cloud computing and the Internet of Things and their basic services and benefits.
Keywords— Internet, Web Services, Network, Technology
I. INTRODUCTION
Improvement in the quality of life experienced by citizens, with the help of emerging technology, is the main objective behind smart cities. India is on its way to develop and construct 100 smart cities to fulfil the needs of a rapidly growing and urbanizing population. This will require the construction of new cities and the renovation of existing cities using cutting-edge technologies. This paper discusses how technologies like cloud computing and IoT will ultimately deliver efficient, cheaper and better integrated services.
It also concentrates on vertical integration within independent infrastructure and services, made compatible in technology through common, consensus-based standards that ensure interoperability. We need both of these to improve the mobility of our citizens, ease traffic congestion, aid crime prevention, and support private and public transportation systems, commercial buildings, hospitals and homes. Such connectivity is necessary because integrated, collaborative approaches underpin a number of the ideas to come. Without this, cities will face serious repercussions for economic and social development.
II. OVERVIEW
A. Internet of Things
Currently, IoT is playing the most important role in the digital universe. This technology is used to interconnect embedded devices such as sensors and mobile devices and to establish connections between them without any human intervention.

Fig. 1 An urban IoT network based on web services

IoT has come to the forefront in the present era, in which the number of internet-connected devices has exceeded the number of human beings. The overall intention of setting up smart cities in India is to establish an IoT industry worth $15 billion throughout India in the upcoming 6 years, which will create new job opportunities in different industrial fields. IoT offers golden opportunities for telecom operators and system integrators. From the IT industry perspective, it will open up new gateways
to provide services, analytics and applications. The development of IoT-based platforms to achieve smart city initiatives cannot happen overnight. The creation of proper digital infrastructure is the first step for setting up an IoT platform in our country. In order to achieve this, the digital programme has been launched. The main objective of this programme is to transform India into a digitally empowered society and knowledge economy, and it will also provide the backbone for the development of the IoT industry in India.
The success of IoT technology in India lies in the development of scalable, low-cost, open platforms which will share data among the different government domains. It is very important for Indian citizens to understand the value of smart cities and the benefits they would offer. This will make the citizens support and adapt to the steps and changes which are taken in the direction of the development of smart cities.
B. Cloud Computing
Cloud computing has come forth as a new model for providing services over the internet. The emergence of cloud computing has had an awe-inspiring effect on the information technology (IT) industry over the past few years, with Google, Amazon and Microsoft making strenuous efforts to provide reliable, highly scalable and powerful cloud platforms, and business enterprises adopting business models to achieve benefits from cloud computing.
For example, cloud computing maximizes the utilization of resources, as the same resources can be used across more than one time zone with different peak hours. Through this advancement, multiple users can access a single server for their usage. This maximizes computing power while reducing environmental damage, air conditioning, rack space, etc.

C. Smart City
A smart city is a city in which information and communication technology is used to form the city's basic building blocks and which forms a platform on which a community, including administration, industries and people, can develop together. It is a city equipped with sensors, capable of providing the best possible services to every citizen by processing all the data collected from the sensor network.
In order to act smartly, these cities should have the artificial intelligence to understand the data and information acquired from sensors and cameras, store them, process the data using data centres and servers, and take action to resolve issues in real time.

III. APPLICATION OF CLOUD & IOT IN SMART CITY
Both cloud computing and IoT play a vital role in a smart city in developing its technology, networks, infrastructure, services, etc., because to make a city smart we need smart mobility, smart people, a smart economy, a smart environment and smart governance.
Everyone believes that the latest emerging technologies will shape the cities of the future. Transporting people easily from home to work or anywhere, and the best lifestyle imaginable, become possible with better cloud management merged with the Internet of Things (IoT); this grants individuals the ability to perform operations through smart applications and better opportunities to develop their personal and entrepreneurial potential through affordable services and infrastructure.
The technologies for converting any city into a smart city are already available. Technologies enabling urban efficiency and sustainable management exist and are continuously improving. For example, monitoring and sensor technologies, energy management systems and intelligent traffic systems are essential technologies for a smart city.

A. Street lights with daylight sensors
Normally in India street lights remain on even during the day, so to save energy we can install daylight sensors in them and program the lights to turn on only when visibility is low. This allows the street lights to turn on only at night, which saves energy.
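The following is a minimal sketch of the sensor-driven control loop just described. The sensor-reading function and the lux threshold are hypothetical placeholders; a real deployment would read an actual ambient-light sensor.

```python
import random
import time

LUX_THRESHOLD = 50          # hypothetical daylight level below which the lamp turns on

def read_daylight_sensor():
    """Placeholder for a real ambient-light sensor reading (in lux)."""
    return random.uniform(0, 200)

def control_street_light(lamp_on):
    """Turn the lamp on only when measured daylight is low, as described above."""
    lux = read_daylight_sensor()
    should_be_on = lux < LUX_THRESHOLD
    if should_be_on != lamp_on:
        print("lamp", "ON" if should_be_on else "OFF", f"(daylight = {lux:.0f} lux)")
    return should_be_on

if __name__ == "__main__":
    state = False
    for _ in range(5):                  # a few iterations of the control loop
        state = control_street_light(state)
        time.sleep(0.1)                 # a real controller would poll far less often
```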
B. Face recognition & thumbprint scanners around the city
The idea is to install and use applications like face recognition and thumbprint scanners to enable the citizens to access the facilities offered in the city, such as the city metro, intercity bus transportation and ATMs. This can help the governing administration and the citizens in a drastic way. In place of using an identification card or metro card, face recognition and thumbprint scanners would identify a person and allow them to use the facilities. This will increase security and confidentiality in the city; the administration will have information about the whereabouts of every citizen, which will decrease the crime rate in the city.
IV. CONCLUSION
India is developing day by day and is now planning to establish 100 smart cities across the country. From the point of view of information and communication technology, the most important aspect of a smart city is the implementation of the Internet of Things (IoT) and cloud computing.
In this paper, we discussed how we can implement these technologies and their benefits to the governing administration, entrepreneurs and citizens of the smart city.
Revolt P2P Technology Application of Bit
Torrent
Dr. Ambuj Kumar Agarwal1, Mohit Dwivedi2
1 Associate Professor, College of Computing Science and Information Technology, Teerthanker Mahaveer University, Moradabad, India
2 Department of Information Technology, B.Tech Graduate (IT), Babu Banarsi Das Educational Society Group of Institutions, BBD City, Faizabad Road, Lucknow, Uttar Pradesh 226028
[email protected]
[email protected]
Abstract—In this era of the digital world, every internet user wants to access data at high speed even when the available internet bandwidth is low, and Torrent is the simplest way to resolve this issue. If you want any movie, song or application, there are various websites such as kat.cr, torrentz and iptorrents that share it over BitTorrent. BitTorrent provides a system in which you can download and upload data at good speed at the same time.

I. INTRODUCTION
BitTorrent is a protocol that provides an interface between the user and the internet to download and upload files at higher speed with minimum bandwidth, and it also provides peer-to-peer connection sharing.
Bram Cohen, a former student of the University at Buffalo, developed the protocol in 2001 and released the first version of BitTorrent in July 2001.
In a survey in February 2009, peer-to-peer networks occupied 43% to 70% of all internet traffic. In November 2004, BitTorrent accounted for 35% of all internet traffic. As of February 2013, it occupied only 3.35% of overall bandwidth, more than half of the 6% of total bandwidth related to file sharing.

II. HOW DOES IT WORK?
As the DHT protocol specification says, "In effect, each peer becomes a tracker." This shows that a BitTorrent user does not need a central server; instead the system becomes a fully decentralized peer-to-peer file transfer system, while traditional trackers can also work side by side with DHT.

Fig. 1 Working of BitTorrent

To explain the working of BitTorrent, there are two important steps involved, namely:
• Client–server download processes.
• Peer-to-peer networking processes.

A. Client–Server Download Processes
A user must have a computer program that works on the BitTorrent protocol for sharing files. Some of the best programs that are BitTorrent clients are BitComet, uTorrent, Vuze and Deluge. The client provides a list of available files and also helps to transfer and reconstruct them.

Fig. 2 Client–server download process

As the name of the process indicates, it is a two-sided process between client and server, which
includes downloading a file over the internet: the user opens a link and clicks on it, the web browser plays the role of the client and requests the server to send a copy of the file to the user's computer. FTP (File Transfer Protocol) and HTTP (Hypertext Transfer Protocol) handle the transfer.
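For contrast with the BitTorrent swarm described later, here is a minimal sketch of the plain client–server download just described, where a single server supplies the whole file over HTTP. The URL and filename are placeholders for illustration.

```python
import urllib.request

# Plain client-server download: one server supplies the whole file over HTTP.
def http_download(url, filename):
    with urllib.request.urlopen(url) as response, open(filename, "wb") as out:
        while True:
            chunk = response.read(64 * 1024)    # read the file in 64 KB chunks
            if not chunk:
                break
            out.write(chunk)

if __name__ == "__main__":
    http_download("https://example.com/sample.bin", "sample.bin")
```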
B. Peer-to-Peer Networking Processes

Fig. 3 Peer-to-Peer Networking Processes

The peer-to-peer process had previously been used in many applications, and in 1999 its design was made renowned by a file-sharing system known as NAPSTER. The basic idea motivated new designs in the area of sharing human knowledge with one another.
A peer-to-peer network has interconnected nodes known as peers, which share files and information amongst each other without using a server. Peers make some of their resources, i.e.
• Processing power
• Network bandwidth
• Disk storage
available to other network users without the need for any server. These nodes work as both senders and receivers of information, which is the antithesis of the traditional client–server process.
The sharing is done as an exchange of data between computers, but file sharing from one client to another may cause harm to each other. Some users fetch the file and then disconnect without allowing other users to fetch the file from their systems; this is known as leeching. It reduces the number of clients holding the data that other users search for.

III. BIT TORRENT DOWNLOADING PROCESS
In the peer-to-peer downloading process, a torrent is the identity of the data which is to be shared between the clients. When any client needs data, it gets the torrent for that data from another client which has the original copy of that data. The original data taken from a client and stored in the system is called the seed. This seed can be split into various torrents according to the requirements of the other users.
When a user requests a file over the internet, a BitTorrent application is required to seed the information, and this information comes from other clients connected through the internet; these clients download the parts of the required file and send them to the appropriate user. If the file is already in the system, then the transfer of the data starts immediately from the other clients. In this case the most popular data will be downloaded sooner in comparison to ordinary data. In the whole process, the systems cooperating in the file sharing are known as the swarm.
Once the downloading of a file is finished, the user should remain online so that the number of seeds increases for other BitTorrent users; this process is known as seeding. If a user finishes downloading and immediately disconnects, the seeding stops, so the downloading speed for others becomes low; this type of user is called a leecher. If all users leech in this way, BitTorrent will stop working.

Fig. 4 Example of seeds and leechers

Complete process to download a file from a torrent:
• Open the web page and enter the link of the needed file.
• Now the tracker searches for the file among online BitTorrent users and checks whether the file is available among them.
• It then starts the seeding and leeching processes according to the file's availability.
• If the clients have all the parts of the required file and have already seeded them, the tracker traces the swarm.
• The transfer of data starts from the traced swarm.
• The tracker collects all the required parts of the file from different clients in the swarm and starts the transfers simultaneously.
• If the user keeps running BitTorrent, the ranking of that individual user improves in the "tit-for-tat" system.
• The pieces of information gathered by the computer are used in future transfers and help solve the common problems of all clients.
• BitTorrent is most useful for large, popular files, because then many clients hold common pieces of the file, which improves the downloading speed.
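To illustrate how a client decides which piece to request from the swarm, the sketch below shows the rarest-first piece-selection idea commonly used by BitTorrent clients. It is a hypothetical illustration, not the actual protocol implementation.

```python
from collections import Counter

def rarest_first(my_pieces, peers_pieces):
    """Pick the next piece to request: one we lack that the fewest peers hold."""
    availability = Counter()
    for pieces in peers_pieces:           # each peer advertises the piece indices it has
        availability.update(pieces)
    candidates = [p for p in availability if p not in my_pieces]
    if not candidates:
        return None                       # nothing new available in the swarm
    return min(candidates, key=lambda p: availability[p])

if __name__ == "__main__":
    mine = {0, 1}
    swarm = [{0, 1, 2, 3}, {2, 3}, {3}]   # piece 2 is held by 2 peers, piece 3 by 3
    print(rarest_first(mine, swarm))      # -> 2 (the rarest piece we still need)
```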
IV. INSTALLATION AND CONFIGURATION OF BITTORRENT
The following steps must be followed for the installation and configuration of a BitTorrent application:
• Download and install the BitTorrent client software.
• Configure and check the firewall or router for BitTorrent.
• Find files to download.
• Download and open the .torrent file.
• Let BitTorrent share pieces of the file.
After the download completes, the user has to stay connected in order to share their .torrent files with others.

V. BITTORRENT DISTRIBUTION SYSTEM
To download a file with the extension .torrent, the user needs access to a tracker and to a web server.
VI. ADOPTION
To handle an increasing number of users, individuals as well as organizations use BitTorrent to distribute their own licensed content. Some of these adopters report that without BitTorrent their networking and bandwidth capacity would not meet demand, so they could not distribute the files. Adopters include:
• Research
• Education
• Government
• Software
• Broadcasters
• Film and media
• Personal media
• Others
VII. DERIVED TECHNOLOGIES
BitTorrent is still under development and is being updated according to users' needs.
A. RSS feeds:
Broadcatching combines RSS with BitTorrent to create a content delivery system.
B. Web seeding:
In addition to the swarm, clients can download pieces from an HTTP source.
C. Distributed trackers:
Basically, trackerless torrents.
D. Multi-trackers:
Allows a torrent to use multiple trackers.
E. Decentralized keyword search:
Searching for torrents through third parties.
F. Throttling and encryption:
Encryption has been used to reduce throttling over the internet.
If a user wants to download a large file, BitTorrent helps to fetch that file on low bandwidth.
VIII. PROS AND CONS OF BITTORRENT
While BitTorrent delivers large amounts of data on very low bandwidth, some issues also occur.
[9] “A Comparative Study :Agent Oriented Software Engineering
[10]
Techniques” Technical Journal of LBSIMDS, Lucknow, Vol 2;Issue2;
Dec 12
Ambuj Agarwal, “Implementation of Cylomatrix Complexity Matrix”,
Journal of Nature Inspired Computing, Volume-1, Issue-2, Feb 2013
A. Pros
• Easy information sharing.
• Gives the best speed on slow connections.
• Peer-to-peer sharing enhances the quality of the data.
• All types of data are available in one click.
• Internet traffic is reduced.
• It is a more efficient system than the client–server model.
• Reduces the cost of releasing big files online.
B. Cons
• An easy way of piracy.
• Danger of malware attacks.
• Threat to secure and confidential information.
• There is always a big debate, as with cars and guns, but it is up to people to use it in a social way, not an anti-social one.
IX. CONCLUSION
To conclude this research paper, it can be said that using BitTorrent in a social way can provide easy and fast access to large amounts of information at lower cost. The BitTorrent protocol is still under development, which can further enhance its performance and the value of the information. So we can say that BitTorrent is a reliable and very fine way to share data without a client–server model.

X. ACKNOWLEDGMENT
I would like to thank Associate Professor Dr. Ambuj Agrawal for providing the relevant information and support.
Using Artificial Intelligence Techniques in Workflow Systems
Ashish Bishnoi1, Dr. Harsh Kumar2
1 Department of Computer Applications, CCSIT, TMU, Moradabad
2 Department of Computer Science, Dr. K N Modi University, Newai, Rajasthan
[email protected]
[email protected]
Abstract—Workflow systems are the backbone of any commercial organisation, as they are responsible for the specification, modelling and implementation of the work processes of the organization. Workflow systems give an organisation an edge in a cut-throat competitive environment by maintaining information about organizational processes and providing it to the employees of the organisation to carry out different work processes, in comparison with organisations using traditional tools to fulfil their information processing needs.
Artificial Intelligence is one of the emerging areas in computer science which provides different methods and technologies that can be used to improve the efficiency of existing systems in virtually any area and make them more user friendly. Artificial Intelligence technologies can be incorporated with existing workflow systems to enhance their efficiency and effectiveness in performing office work and to make them more user friendly. In this paper, we present (1) the role of Artificial Intelligence technologies in designing efficient and fault-tolerant models of workflow systems, (2) the use of Artificial Intelligence decision-making capabilities in workflow systems and (3) the use of Mobile Agents in workflow execution to improve productivity, efficiency and error-free processing of different activities. At the end of the paper a prototype of a workflow system using Artificial Intelligence technologies is proposed.
Keywords— Artificial Intelligence, Workflow systems, Intelligent Agents, Automation, WFM.

I. INTRODUCTION
Most business organisations around the world implement a wide array of processes to conduct their daily operations. These processes are well defined and implicitly used in the working of the organisation, but they are not explicitly represented, due to which it is difficult to track the progress of a process. The lack of transparency in the system makes it difficult to modify business processes or to add a new process for implementing a new business operation, which is required to meet the changing requirements of the business environment. Business organisations make changes in their working to increase their productivity, to optimize the use of their assets, to use new technologies and, overall, to provide greater satisfaction to their customers. The success of a business organisation depends not only on matching its products and services to market requirements, but also, more and more, on the processes and methods used to produce them. [1,2]
The availability of cheaper and more efficient computer hardware and software, and foremost the need to use cutting-edge technology to move ahead of business rivals, force business organisations to automate the management of their business processes. Automation provides for better management of processes by giving deeper insight into current and planned operations using tools that monitor the status and progress of the processes. Automation provides adaptive capabilities that ensure the agility of processes and optimize the use of resources. The automation of business process management led to the emergence of the field of "Workflow Management" (WFM). Workflow management proposes different models, architectures and specifications for the development of automated systems for the business community; such systems are known as Workflow Systems. The objective of a Workflow System is to represent different business processes using appropriate models and to dynamically manage the business processes using automated tools. Workflow Systems provide active support to business processes by controlling the flow of work around the organisation automatically. Workflow systems are commonly used for document management, image processing, Groupware Applications,
Project Support Software and Structured System
Design Tools, Transactional Workflow etc.
Though the foundations of Artificial Intelligence were laid thousands of years ago, it is still one of the newest fields of intellectual research. Winston defines Artificial Intelligence as "the study of computations that make it possible to perceive, reason and act". [3] The objective of the Artificial Intelligence community is to develop Intelligent Systems which are capable enough to achieve complex tasks in dynamic and uncertain environments, to learn the preferences of the user and to customize themselves according to the user preferences so obtained. Artificial Intelligence models and techniques are employed in domains such as robotics, satellites, computer science, medical science, etc. Workflow Systems are focused on business and manufacturing processes, whereas the Artificial Intelligence community works in domains that involve active control of computational entities and physical devices. Despite the differences from workflow technology, Artificial Intelligence techniques can be used to develop rich action representations and powerful reasoning engines for dynamically generating and repairing processes in business environments. [4]
In the last decade many researchers have tried to integrate Artificial Intelligence techniques with Workflow Systems. This paper provides an overview of this exciting research area, i.e. how Artificial Intelligence techniques can be combined with workflow technology to develop robust, reactive and adaptive workflow engines that manage complex processes. This paper discusses the management of business processes through Workflow Systems, the emergence of Artificial Intelligence, the integration of Artificial Intelligence and Workflow Technology, and finally gives a conclusion with some discussion of future advancements.
II. BUSINESS PROCESS MANAGEMENT
Workflow is defined as "systems that help organizations to specify, execute, monitor and coordinate the flow of work cases within a distributed environment" [Bul 92]. We can broadly classify workflow processes into a) administrative workflow systems, b) production workflow systems and c) ad hoc workflow systems.
A workflow management system is a system that defines, manages, and executes workflow processes through the execution of software whose order of execution is driven by a computer representation of the workflow process logic. [WMC]
With the help of the conceptual architecture of workflow systems, we can easily describe the different components of workflow systems and how they are associated with each other. The implementation of workflow systems, and the definition of which service or component will be provided to the user by what type of machine, is done with the help of the concrete architecture of workflow systems.
It has been found that some organizations are fully structured, with precisely defined processes and rules, whereas other organizations are loosely structured, due to which no one model of workflow systems can address all the needs of every organization. Therefore different models are required for specifying different types of resources and different levels of structure in an organisation.
Workflow Systems provide active support to business processes by controlling the flow of work around the organisation automatically. Workflow Systems require co-ordination between user and system to achieve the desired goals in a given time frame. The co-ordination involves transferring tasks to system agents in the correct sequence so that the agents can successfully complete their tasks. In case of exceptions (run-time errors), system operators are informed so that the actions required to resolve the problem can be triggered. Workflow models must not create barriers to employees doing specific jobs; rather, they should be flexible enough to provide different ways of doing the same job in the same work environment. Most workflow models fail because they are not able to recognize the unstructured nature of the work.

III. ISSUES BEFORE WORKFLOW SYSTEMS
Workflow systems implemented in various organizations have to conform to changing requirements while remaining operational with their previous performance and efficiency. The major challenges faced by different workflow models are in the areas of exception handling and dynamic organizational change.
A. EXCEPTION HANDLING
During a study of workflow processes in an organization, it was found that exceptions consume a large amount of the time of the employees working in the organization. Exceptions cannot be solved by the computer alone; they require the experience of people to solve them. Thus, to create a successful workflow design it is required that the computer be used as a collaborator and communication tool in solving exceptions. When a workflow system is designed, it must have the ability to handle any kind of exception occurring in the work environment. From the study of workflow systems, it has been found that exceptions are the main hindrance in the development of good workflow systems.
B. DYNAMIC CHANGE
To fulfil new requirements, meet new challenges and provide cutting-edge services to customers, business organizations have to change their structure. Therefore, workflow systems must provide support for these changes in the structure or procedures of the business organization. Another major problem faced in designing good workflow systems is support for dynamic changes, because it is not an easy task for designers or administrators to make large-scale changes in the workflow systems. The process of making changes in a workflow system is error-prone, time-consuming, ineffective and unproductive. In the current scenario, instead of solving the problem in the workflow system, most organizations either cope with it or evade it.

IV. INSIGHT INTO AI
John McCarthy, one of the founders of AI, defines it as "the science and engineering of
making intelligent machines". Various researchers provide different definitions of AI, but all the definitions say more or less the same thing, which includes the use of heuristic techniques to solve complex problems, the study of mental faculties through the use of computational models, and the art of creating intelligent machines which can learn, think and reason like human beings. In the current era, the impact of AI can be felt around us, as it is used in domains such as expert systems, robotics, computer games, space research, the automobile industry, defence, etc. AI systematizes and automates intellectual tasks and is therefore potentially relevant to any sphere of human intellectual activity [5].
The main areas of AI research include reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects [6]. Statistical methods, search and mathematical optimization, logic, and methods based on probability and economics are some common tools used in AI research. AI is divided into subfields on the basis of factors such as social and technical aspects and specialised use. Due to their technical and specialised nature, subfields of AI often fail to communicate with each other [7]. The AI community constantly explores areas where AI techniques can be used to improve the performance, behaviour and decision-making capabilities of an application or system. The Intelligent Agent is a commonly used AI technique and is easily employed in different areas.
V. INTELLIGENT AGENT
An intelligent agent is a set of independent software tools or components linked with other applications and databases running in one or several computer environments. Hewitt, in 1977, described the agent for the first time in his Actor Model. According to Hewitt, an agent is a self-contained, interactive and concurrently executing object having some internal state and communication capability.
Agents are linked with research areas like robotics, Artificial Intelligence, distributed systems and computer graphics. In simple words, an agent can be described as an entity that is able to carry out some task, usually to help a human user. [10] An intelligent agent (IA) is an autonomous entity which observes, i.e. learns from, its environment and uses its knowledge to act upon that environment, directing its activity towards achieving goals. [9] Intelligent agents are often described schematically as an abstract functional system similar to a computer program. In computer science, the term intelligent agent may be used to refer to a software agent that has some intelligence, even if it is not a rational agent by Russell and Norvig's definition. [8] An artificial intelligent agent should have properties like intelligence, autonomy, mobility, reliability, the ability to learn, cooperation, versatility, etc.
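The sketch below is a minimal illustration of the perceive–decide–act loop of a goal-directed agent as described above. The environment and the decision rule are hypothetical toy examples, not a framework from the literature.

```python
class ThermostatAgent:
    """Toy goal-directed agent: perceive a temperature, act to move it toward a goal."""
    def __init__(self, goal_temperature):
        self.goal = goal_temperature
        self.memory = []                      # the agent records what it has observed

    def perceive(self, environment):
        reading = environment["temperature"]
        self.memory.append(reading)
        return reading

    def decide(self, reading):
        if reading < self.goal - 1:
            return "heat"
        if reading > self.goal + 1:
            return "cool"
        return "idle"

    def act(self, environment, action):
        if action == "heat":
            environment["temperature"] += 1
        elif action == "cool":
            environment["temperature"] -= 1

if __name__ == "__main__":
    env = {"temperature": 16}
    agent = ThermostatAgent(goal_temperature=21)
    for _ in range(8):                        # run the perceive-decide-act cycle
        agent.act(env, agent.decide(agent.perceive(env)))
    print(env["temperature"])                 # converges toward the goal (about 21)
```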
VI. AI AND WORKFLOW
Reactive control, scheduling and planning are three subfields of AI which are directly related to workflow management (WFM). These fields are used for developing techniques and tools for managing processes, allocating resources and tasks, and synthesizing new processes and modifying existing ones in workflow systems intelligently. Besides them, knowledge acquisition and distributed AI are used to improve coordination between the multiple layers of workflow systems. Syntactic and semantic inconsistencies that may exist in workflow systems are identified with the help of the knowledge acquisition techniques of AI. Workflow systems with reactive control strategies are more adaptive to their changing environment, and as a result they respond dynamically whenever a problem occurs in the system. Such systems use forward recovery methods to identify the actions that reach a safe state from a failed state.
The AI scheduler is another technique developed for resource and task allocation in workflow systems. Schedulers have the ability to change their behaviour with changing requirements in the workflow system to meet new standards and the modified criteria of the business organization. Constraint-directed and constraint iterative repair are two
algorithms commonly used in prioritizing problems and identifying important modifications and goals for workflow systems. The scheduler also estimates the possibilities for efficient and non-disruptive scheduling of modifications to the workflow system.
Intelligent agents can also be used as an alternative to AI schedulers to manage the tasks in workflow systems. Agents use a distributed approach to complete tasks in the workflow system, where each agent competes and coordinates with the other agents present in the system. Intelligent-agent-based frameworks have been developed in the last decade which can manage complex tasks and execute them under dynamic and uncertain events. The Continuous Planning & Execution Framework (CPEF) [11] and Smart Workflow for ISR Management (SWIM) are two agent-based frameworks commonly used with workflow systems.
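The following is a minimal, hypothetical sketch of the agent-based task allocation idea just described: each agent bids on a pending workflow task and the least-loaded capable agent wins. It is an illustration of the general approach, not CPEF or SWIM.

```python
# Hypothetical sketch of agent-based task allocation in a workflow engine.
class WorkerAgent:
    def __init__(self, name, skills):
        self.name, self.skills, self.load = name, set(skills), 0

    def bid(self, task):
        """Return a bid (lower is better) or None if the agent lacks the skill."""
        return self.load if task["skill"] in self.skills else None

def allocate(task, agents):
    bids = [(agent.bid(task), agent) for agent in agents]
    bids = [(b, a) for b, a in bids if b is not None]
    if not bids:
        return None                      # exception: no capable agent, escalate to an operator
    _, winner = min(bids, key=lambda pair: pair[0])
    winner.load += 1
    return winner.name

if __name__ == "__main__":
    agents = [WorkerAgent("A1", ["invoice"]), WorkerAgent("A2", ["invoice", "audit"])]
    tasks = [{"skill": "invoice"}, {"skill": "audit"}, {"skill": "invoice"}]
    print([allocate(t, agents) for t in tasks])   # e.g. ['A1', 'A2', 'A1']
```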
VII. CONCLUSION
The AI community is working rigorously to develop intelligent systems and tools for workflow management in cooperation with the people associated with workflow management. AI techniques can be very useful for developing efficient, reliable and adaptive workflow systems. Workflow systems cover a wide domain of business organisations with different issues, priorities and working models, so no single framework or approach resolves all those issues. Finally, we can say that AI techniques are very useful for resolving issues in WFM, and more areas of linkage between the two should be identified.
REFERENCES
[1] Davenport, T. and Short, J., "The New Industrial Engineering: Information Technology and Business Process Redesign", Sloan Management Review, pp. 11-27, 1990.
[2] M. Hammer and J. Champy, Reengineering the Corporation, Harper Business Press, New York, 1993.
[3] Winston, P. H., Artificial Intelligence, 3rd ed. (repr. with corrections, 1993), Reading, Mass.: Addison-Wesley, ISBN: 0-201-53377-4, 1993.
[4] Bishnoi, A. and Kumar, H. (2012), "A Novel Designing Methodologies for Artificial Intelligent Agents", in Proceedings of the International Conference on Resurging India: Myths and Realities, pp. 45-48, TMU Moradabad, India.
[5] Joseph P. Bigus and Jennifer Bigus, Constructing Intelligent Agents Using Java, 2nd edition, Wiley.
[6] Luger and Stubblefield 2004, Nilsson 1998, Intelligent Traits.
[7] Pamela McCorduck, Rough Shattering of AI Fields, 2004, p. 424.
[8] Russell and Norvig, Dartmouth Conference, 2003, p. 17.
[9] Nick, M. M., Building and Running Long-Lived Experience-Based Systems, PhD thesis, Dept. of Computer Science, University of Kaiserslautern, Kaiserslautern, 2004.
[10] Thomas, S. R. (1993), PLACA, an Agent Oriented Programming Language, PhD thesis, Computer Science Department, Stanford University, Stanford, CA 94305 (available as technical report STAN-CS-93-1487).
[11] Myers, K. L., 1998, "Towards a framework for continuous planning and execution", in Proceedings of the AAAI 1998 Fall Symposium on Distributed Continual Planning, Menlo Park, CA: AAAI Press.
Printing and Publishing Trends with Workflow
Technologies
Priyank Singhal
Assistant Professor, College of Computing Sciences and Information Technology (CCSIT),
Teerthanker Mahaveer University, Moradabad
[email protected]
Abstract—In simple terms, 'Workflow' is basically the automation of business processes, in whole or part, using a computer. The implementation of effective workflow with a blend of upcoming technologies can be seen in a number of domain areas nowadays. The focus of this paper is the print and publishing domain. This paper discusses the basic concept of workflow and its reference model as proposed by the WfMC. The publishing sector is focused on, showing how the evolution has taken place from traditional printing to the current trends used for digital printing or e-publishing. Various advantages of digital publishing over traditional publishing are discussed. Automated workflow applications have given a boost to this sector, and it has been observed that, to stay at the cutting edge of competition, companies are adopting e-publishing strategies to offer the best to their customers. The paper looks into the shift in publishing trends through the application of the latest automated workflow applications for digital or e-publishing.
Keywords— Workflow, Workflow process, Digital Publishing, E-Publishing

I. INTRODUCTION
The term workflow is used to describe the tasks, procedural steps, organization involved, required inputs and outputs, and tools needed for each step in a business process. Workflow is concerned with the computerization of methods in which documents, information or tasks are passed between participants according to a defined set of rules to accomplish, or contribute to, an overall business goal.
Although a work process can be organized manually, in most cases workflows are composed within the setting of an IT framework to give electronic support to procedural activity, and it is to this area that the work of the Coalition is directed.
"Workflow is the computerized automation of a business process, in whole or part". [1]
Workflow is seen as a key integration technology, uniting business processes with the information to support them, and connecting legacy and desktop applications into an adaptable and versatile distributed infrastructure. These systems are based on the thought that, once the business process is defined, its automation simply requires the integration of a few simple tools. A key inspiration for workflow technology is that it ought to give the business process the adaptability to evolve with the least re-designing.
A workflow management system is the system that completely defines, manages and executes workflows through the execution of software whose order of execution is driven by a computer representation of the workflow logic. [2]
According to the Workflow Management
Coalition, workflow represents “the automation of
a business process, in whole or part, during which
documents, information or tasks are passed from
one participant to another for action, according to
a set of procedural rules”. [3]
Workflow management is a quickly embraced technology in an assortment of commercial enterprises. Its essential characteristic is the automation of different procedures involving a mix of human and machine-based activities, especially those interacting with information technology applications and tools.
II. INTEGRATING SOFTWARE WITH WORKFLOW
A workflow framework incorporates an organizational model enabling workflow procedures to be defined with respect to organizational roles and responsibilities. Workflow systems additionally require integration with process definition and modelling tools so that a proposed system can be completely specified and simulated prior to deployment.
Hence, integration with the underlying infrastructure (email, Object Request Broker domains, etc.) is an additional requirement. Figure 1 indicates some of the potential components and
points of integration of a typical workflow system.
[4]
At the highest level, all WFM systems are designed
to provide support in three functional areas (a brief
illustrative sketch follows this list). These are:
1. Build Time Functions: These are
concerned with defining and possibly
modelling workflow process and its
constituent activities.
2. Run Time Control Functions: These are
concerned with managing workflow
process in operational environment and
sequencing various activities to be handled
as part of each process.
3. Runtime Interactions: These are concerned
with human users and IT application tools
for processing various activity steps.
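To make the build-time/run-time split concrete, the minimal sketch below (not a WfMC-compliant engine; the class names, the toy publishing activities and the handler logic are hypothetical) defines a process at build time and then lets a small run-time engine sequence its activities.

```python
# Minimal illustrative sketch of build-time definition vs. run-time control.
# Everything here (names, activities, routing) is hypothetical, not WfMC-defined.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ProcessDefinition:
    """Build-time artefact: a named, ordered list of activities."""
    name: str
    activities: List[str] = field(default_factory=list)


@dataclass
class WorkflowEngine:
    """Run-time control: walks a definition and invokes the handler
    (a human task or an IT application) registered for each activity."""
    handlers: Dict[str, Callable[[dict], dict]] = field(default_factory=dict)

    def run(self, definition: ProcessDefinition, case: dict) -> dict:
        for activity in definition.activities:
            case = self.handlers[activity](case)  # run-time interaction
        return case


# Build time: define a toy publishing-approval process.
process = ProcessDefinition("publish-article", ["submit", "review", "publish"])

# Run time: register handlers and execute one case (one work item).
engine = WorkflowEngine({
    "submit":  lambda c: {**c, "status": "submitted"},
    "review":  lambda c: {**c, "approved": bool(c.get("text"))},
    "publish": lambda c: {**c, "status": "published" if c["approved"] else "rejected"},
})
print(engine.run(process, {"text": "draft of the article"}))
```

A real WFM system would additionally persist process state and route work items to human participants rather than calling in-process functions.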
Figure 1: Workflow Systems component
III. SYSTEM INTEGRATION MODEL
WFMC (Workflow Management Coalition)
reference model takes a larger view of workflow
management. This model is designed to
accommodate various implementation techniques
and operational environment.
The Reference Model is the guideline that
helps to create specific workflow models for
specific needs. The Print Industry has evolved
using number of print production workflow
models. When an effective Print Production
Workflow is designed and implemented well, then
quality results are produced.
The standardization programme is based upon the
Reference Model shown in Figure 2; five interfaces
are identified within the Reference Model, realized
by a combination of APIs, protocol and format
conventions [3].
Figure 2: The WFMC Reference Model
The WFM Coalition is a grouping of companies who
have joined together to address the above situation.
It has been recognized that all workflow management
products have some common characteristics, enabling
them potentially to achieve a level of
interoperability through the use of common standards
for various functions. The WFM Coalition has been
set up to identify these functional areas and develop
appropriate specifications for implementation in
workflow products. It is intended that such
specifications will enable interoperability between
heterogeneous workflow products and improved
integration of workflow applications with other IT
services, for example electronic mail and document
management, thereby improving the opportunities for
the effective use of workflow technology within the
IT market, to the benefit of both vendors and users
of such technology.
IV. TRADITIONAL PUBLISHING WORKFLOW
Publishing is the process of production and
dissemination of literature, music or information –
the activity of making information available to the
general public. Generally the term refers to the
distribution of printed works such as books and
newspapers.
In the traditional publishing workflow, every book
or document is created by one author using some type
of text editor or word processor. The document is
created, edited and published as one entity, or
possibly as a series of sections, and it does not
interact with anything else.
Traditional book publishing is when a publisher
offers the author a contract and in turn prints,
publishes and sells the book through booksellers and
retailers. The publisher essentially buys the right
to publish the book and pays the author royalties
from sales. To publish a book traditionally, writers
need to find an agent; in order to find one, they
must identify the right category for their writing.
Once these steps are accomplished, the writer
prepares a query letter containing a synopsis of the
book, a chapter summary, and a description of the
book and of the author. [5]
Figure 3: Traditional Publishing Workflow Process
Various stages in pre-production publishing:
 Editorial Stage: Editors usually choose or
refine titles.
 Design Stage: This includes confirmation
of layout.
 Sales and marketing stage: Companies
produce advanced information sheets that
are sent to customers to gauge possible
sales.
 Printing: Physical production of printed
work begins here.
 Binding: It involves folding printed sheets.
 Distribution: Final stage ensures product
available to general public.
There are several advantages of traditional
publishing, which represents a typical, familiar
workflow. Familiarity with traditional publishing
makes it simple for existing authors and editors to
use. EBooks are made by a technology partner skilled
in the transformation of print-oriented content into
usable eBooks.
The disadvantage is that traditional publishing
manages the file as a complete document, and
print-oriented content is converted rather than
specifically designed for eBooks.
V. ADVENT OF E-PUBLISHING/DIGITAL PRINTING
It is essentially a type of distributed in which
books, diaries and magazines are being created
and put away electronically as opposed to in print.
These productions have all characteristics of the
typical distributed like the utilization of hues,
design and pictures and they are much helpful
moreover. It is the procedure for creation of
typeset quality archives containing content,
representation, pictures, tables, mathematical
statements and so on. It is utilized to characterize
the generation of any that is digitized structure.
Electronic Publishing = Electronic Technology
+ Computer Technology + Communication
Technology + Publishing [6]
Print production is rapidly shifting from analogue
to digital technology as the infrastructure for
workflows. For production to be efficient,
digitisation of all steps and elimination of analogue
systems and materials from the process flow, apart
from the beginning and finishing stages, is required.
Across networks, printing will become a dial-tone
service which is simple, reliable, fast and
inexpensive. Accordingly, the combination of all
those aspects offers a significant competitive
advantage to printing and publishing firms.
E-publishing technology can be arranged into two
general categories: one, in which information is
stored in a centralised computer source and delivered
to users by telecommunications systems, including
online database services and videotext, represents
the most dynamic area in e-publishing today; and
another, in which the information is digitally stored
on a disc or other physically deliverable medium.
The publishing and printing market often suffers
from crises due to:
 Intense competition in a shrinking market: the
creation of small printing and publishing firms
offering questionable quality at low prices.
 The cost of offset printing machines.
Digital printing companies that invest are expected
to:
 Gain competitive advantage with the introduction
of new services.
 Overcome the problem of low investment in key
technologies: digital colour press cost is much
lower than that of an offset machine.
VI. E-PUBLISHING/DIGITAL PRINTING
The term E-publishing refers more precisely to
the storage and retrieval of information through
electronic communications media. It can employ a
variety of formats and technologies, some already
in widespread use by businesses and general
consumers and others still being developed.[7] In
broader sense, e-publishing includes whole
spectrum of e-communication during research
process while in narrower sense, e-publishing
refers only to final, peer reviewed release of data
as a culmination of research process. E-publishing
can be characterized as Type 1 and Type 2 e-publishing.
A. Type 1 E-publishing:
This is characterized by opening work to
colleagues thus improving collaboration and
quality. These have similar validity to papers
presented at conferences.
B. Type 2 E-publishing:
This aims to bring reasonably valid research
results into practice. The results have important
application and are expected to be acted upon on a
wider scale.
In e-publishing, Type 1 and Type 2 communication are
more difficult to distinguish from each other than in
the traditional publishing world, where publication
was inevitably linked with the notion of peer review
and quality control and was therefore immediately
recognizable as Type 2 communication. Unlike
traditional publishing, in these two types of
e-publishing the two processes of improving quality
and making a paper physically available are distinct.
In Type 2, data are filtered upstream, whereas in
Type 1, scientists need to be able to select and
filter relevant information downstream, which
requires labelling with computer-readable
meta-information [8]. E-publishing provides
alternative models – a paper auction model where
researchers could submit Type 1 e-papers to pre-print
servers for discussion and peer review, and journal
editors and publishers would pick and bid for the
best papers they want to see as 'Type 2 papers' in
their journals [9].
Digital printing/e-publishing companies that invest
are expected to gain competitive advantage with the
introduction of new services. In digital printing
there is an integration of digital colour presses and
DTP systems [10]. Digital printing satisfies:
 Price on demand: In publishing and printing terms
one can offer very competitive pricing for small
print jobs, as no initial cost is required and even
overprinting costs are avoided.
 Just-in-Time Printing: This allows direct linking
of DTP and the printing press, and printing can
occur immediately.
 Distributed Processing: Through interconnection of
digital presses to international networks it is
possible to introduce remote printing.
4
4th International Conference on System Modeling & Advancement in Research Trends (SMART)
College of Computing Sciences and Information Technology (CCSIT) ,Teerthanker Mahaveer University , Moradabad
 Personalised Printing: Personalised material for
direct marketing purposes can be easily produced
through database connections.
 Repurposing: All-over digital handling of data and
interconnection with a digital document archiving
system supports reuse through easy incorporation in
further production.
VII. DIGITAL PRINTING PROCESS
There are seven phases in the digital printing
process [11]. These are:
 Order Acquisition Phase: The publishing company
receives an order from the customer and prepares an
initial meeting to determine the basic concept
according to customer requests. Based on the basic
concept a proposal is prepared, and after a series
of negotiations with the client an agreement is
produced.
 Design Phase: The design team prepares a basic
design based on the proposal, which includes a
combination of text and the remaining artistic parts
such as graphics.
 Electronic Production Phase: All scanning and
e-image processing is performed. Then the final
e-prototype is developed.
 Film Production: The final e-prototype is processed
by film production and the final set of films is
produced.
 Printing Phase: Proper layout is done in this phase
and printing is performed.
 Finishing Phase: Cutting and binding activities are
performed in this phase.
 Delivery Phase: This phase includes all activities
related to final inspection, packaging and delivery.
Figure 4: Digital Workflow Process
VIII. ADVANTAGES OF E-PUBLISHING
The advantages of electronic publishing are
numerous which includes research results can be
disseminated faster and more cheaply, can be
distributed to a wider audience more fairly (it
offers equity of access, including the lay public
and scientists in developing countries) and authors
have virtually no space restrictions, and can
therefore include huge datasets or even
multimedia data.[16]
Electronic publishing is increasingly popular in
works of fiction as well as with scientific articles.
Electronic publishers are able to provide quick
gratification for late-night readers, books that
customers might not be able to find in standard
book retailers (erotica is especially popular in eBook format), and books by new authors that
would be unlikely to be profitable for traditional
publishers [15]. The greatest advantages of
e-publishing are the cost savings in printing and
paper, and better information storage and
maintenance. It is ideally suited for publications
like journals, research reports and newsletters. It
is additionally suited for all information that is
dynamic or constantly changing. E-publishing finds
great use and acceptance in
academics, in the online publishing of educational
books or tutorials. With an increase in distance
learning programs, the need for quality
educational material is on the rise. These e-books
and study material need to recreate an active
learning atmosphere as can be found in a class full
of students and a teacher. [6][17]
IX. PIRACY-AS A CHALLENGE
Piracy is certainly one of the biggest challenge
publishing industry faces in this digital age.
Storage and transmission security of e-book content
uses both encryption and compression. Each title is
encrypted with a single key. When delivered, the
content encryption key is encrypted again with a
specific key that relates back to the popular
e-reader device, and this encrypted content key is
stored on the customer's so-called bookshelf in a
secure database. When downloaded by the user, the
encrypted title and encrypted content key are
downloaded to the e-book device, whichever it may
be [12]. For example, EBX (E-Book Exchange) is
5
4th International Conference on System Modeling & Advancement in Research Trends (SMART)
College of Computing Sciences and Information Technology (CCSIT) ,Teerthanker Mahaveer University , Moradabad
a security system which doesn't allow consumers
to share e-books.
X. CURRENT TRENDS IN E-PUBLISHING/DIGITAL PRINTING
The economics of publishing on the web is prompting
a shift from ownership of a local print copy to
access to a remote electronic copy. It is more
efficient for publishers or vendors to host content
on the web, in a location that customers can access
from anywhere at any time, than to manage the print
model, where materials are printed, distributed,
bound and held locally. Libraries are questioning
the need to hold print copies locally when material
is reliably accessible online. Although e-books
have not yet enjoyed commercial success,
between 80 percent and 100 percent of academic
publishers are converting their titles into PDF,
XML and OEB standards that provide them with
great options for electronic distribution and print
on demand [13].
The publications can be distributed on the open
web or via apps. In 2013, as reading habits shift to
memory lite and cloud enabled mobile devices
such as iPads and large screen smartphones, this
approach to publishing became more prevalent
and important. The defining characteristic of
micropublishing is that it is light weight, putting
the focus on text-based stories while avoiding
pictures and rich media add-ons that require
more computer memory and download time. Also
according to tenets of micropublishing, digital
magazine issues or books can be short and
produced using free or very cheap software so
publishers don‟t need to invest as much in design,
distribution or marketing, freeing up budgets for
editorial [14].
XI. FUTURE ASPECTS
The digital age has spawned an e-publishing
revolution and cultivated the growing prevalence of
e-books.
 Digital publishing provides a variety of options
for writers.
 Enabling the feature of converting text to speech
– a future feature of the Kindle.
 Selling future books as e-books, print-on-demand
books, audio book apps, PDFs and podcasts.
 Growth of e-book reader applications.
 New types of print production processes.
 New forms of publishing and client communication.
 A different breed of business relationship between
companies involved in printing and publishing, as
well as between vendors of printing and publishing
technology and their distribution channels and
customers.
 Introduction of mobile publishing strategy.
 Evidence suggests that students also prefer to get
information electronically rather than from books.
 E-textbooks will become more accessible.
 Magazine and newspaper publishers will launch their
own apps and devices.
 100% market share for e-reader displays.
 Corporations in the printing and publishing
industry will rapidly deploy internet/intranet/
extranet communication and graphical computing
infrastructure that links customers, providers and
suppliers.
XII. CONCLUSION
Workflow is a built in series of steps that allows
users to submit their work to an approver who in
turn receives the content. Approver can either
approve it for publishing or send it back for
revision. Establishing a workflow in publishing for
your title at the very beginning will serve you well
and helps to increase not only the efficiency and
speed of what you do but also your capacity to deal
with change.
Workflow optimization is critical for global
companies as it identifies where you can cut costs
and eliminate unnecessary steps, as there are costs
hidden in so many areas of publishing that quickly
escalate the cost of doing business.
Traditionally printing or publishing process was
very tedious. Traditional workflow involves many
steps for publishing which focuses on publishing
of books of many authors. But this trend of
publishing has become outdated today. Instead
publishers are switching to e-publishing or digital
publishing which overcomes many of the
limitations of traditional publishing.
Digital publishing is really about exploring new
markets. Content is not changing; you can continue
to publish what you have always wanted to publish.
The only change lies in the way in which we deliver
that content and whom we are delivering it to.
The implementation of new printing industry
trends involving Workflow technologies such as
the workflow digitization, technology integration
and changing demands have transformed
companies into more efficient and effective
businesses. However, some niche pre-print
operations are now rendered obsolete because
desktop publishing has eliminated the need for
pre-press and film-based processes.
Service and innovative applications allow value
and profits to go up by considering applications of
existing technologies effectively. Successful
printers must see themselves as 'solution providers'
rather than manufacturers.
Digitized workflow implementation has further
automated the printing process and at the same
time freed the operation from labour-intensive
tasks that may hamper production schedules. In
spite of the growing significance of online-based
services, the printing industry remains strong and
enduring as it adopts new strategies to achieve
success in the future.
No doubt business benefits are foreseen with such
technological advancements. But the success will
depend upon using them with right approach in
this sector.
REFERENCES
[1] David Hollingsworth, Principal Architect, Skill Centre, Windsor, UK, "Workflow – A Model for Integration".
[2] Workflow Management Coalition, Document Number TC00-1000, Document Status – Issue 1.1, 19-Jan-95, 1993, 1994, 1995.
[3] Workflow Management Coalition: Glossary, 1996; The Workflow Reference Model, 1995; Workflow API Specification, 1995; Workflow Interoperability Specification, 1996; Process Definition Interchange Specification (draft), 1998.
[4] David Hollingsworth, Principal Architect, Skill Centre, Windsor, UK, "Workflow – A Model for Integration".
[5] The Publishing Process: How to Create a Successful Digital Publishing Plan, Peachpit, 2014.
[6] Electronic Publishing: Impact of ICT on Academic Libraries.
[7] Electronic encyclopaedia, Grolier Electronic Publishing, 1995.
[8] Eysenbach G, Diepgen TL: Towards quality management of medical information on the internet: evaluation, labelling, and filtering of information. Brit Med J 1998, 317:1496-1500.
[9] Eysenbach G: Challenges and changing roles for medical journals in the cyberspace age: electronic pre-prints and e-papers. J Med Internet Res 1999, 2:E9.
[10] Adam, E. E., Jr., et al. (1997). An international study of quality improvement approach and firm performance. International Journal of Operations & Production Management, 17(9), 842–873.
[11] Bevilacqua, R., & Thornhill, D. (1992). Process modelling. American Programmer, 5(5), 2–9.
[12] Waksman BH: Information overload in immunology: possible solutions to the problem of excessive publication. J Immunol 1980, 124:1009-1015.
[13] http://www.writersdigestshop.com/how_to_get_published
[14] Pando.com/2013/01/02/seven-publishing-trends-that-will-define-2013/
[15] Eysenbach G, Diepgen TL: Towards quality management of medical information on the internet: evaluation, labelling, and filtering of information. Brit Med J 1998, 317:1496-1500.
[16] Allen ES, Burke JM, Welch ME, Rieseberg LH: How reliable is science information on the web? Nature 1999, 402:722.
[17] Wright A. The Battle of Books. Wilson Q 2009, 33(4): 59-64.
Global Positioning System
Ms. Namami Varshney, Dr. Ambuj kumar Agarwal
CCSIT, TMU, BAGADPUR MORADABAD U.P. 244001
[email protected]
[email protected]
Abstract— Where am I? where am I going? where are you
going? what is the best way to get there? when will I get there?
GPS technology can answer all these questions. The Global
positioning system (GPS) is a space based navigation system that
shows you exact position on the earth any time, in any weather.
No matter where you are! GNSS technology has made impact on
navigation and positioning needs with the use of satellites and
ground stations the ability to track aircrafts, cars, cell- phones,
boats and even the individuals has become a reality. It uses the
constellation of between 24 and 32 earth orbit satellites that
transmit precise radio signals, which allow GPS receivers to
determine their current location, the time and the velocity. These
satellites are in high orbit, circling at 14,000 km/h about
20,000 km above the earth's surface. The signal, sent to the earth
at the speed of light, is picked up by GPS receivers that are now
commonplace worldwide.
Keywords: Navigation, Constellation, Speed of light, Tracking,
GNSS.
I. INTRODUCTION
(GPS) technology is a great boon to anyone who
has the need to navigate either great or small
distances. The Global Positioning System (GPS) is
a burgeoning technology, which provides
unequalled accuracy and flexibility of positioning
for navigation, surveying and GIS data capture.
This wonderful navigation technology was actually
first available for government use back in the late
1970s.
The Global Positioning System (GPS) is a radio
based navigation system that gives three
dimensional coverage of the Earth, 24 hours a day
in any weather conditions throughout the world.
The technology is beneficial to the GPS user
community in terms of obtaining data accurate to
about 100 meters for navigation. The GPS
technology has tremendous amount of applications
in Geographical Information System (GIS) data
collection, surveying, and mapping.
The first GPS satellite was launched by the U.S. Air
Force in early 1978. There are now at least 24
satellites orbiting the earth at an altitude of about
11,000 nautical miles. The high altitude ensures that
the satellite orbits are stable, precise and
predictable, and that the satellites' motion through
space is not affected by atmospheric drag. These 24
satellites make up a full GPS constellation. The
satellites orbit the Earth every 12 hours at
approximately 12,000 miles above the Earth. There
are four satellites in each of 6 orbital planes. Each
plane is inclined 55 degrees relative to the equator,
which means that
satellites cross the equator tilted at a 55 degree
angle. The system is designed to maintain full
operational capability even if two of the 24
satellites fail.
The GPS system consists of three segments: 1) The
space segment: the GPS satellites themselves, 2)
The control system, operated by the U.S. military,
and 3) The user segment, which includes both
military and civilian users and their GPS
equipment.
The Russian government has developed a system,
similar to GPS, called GLONASS. The first
GLONASS satellite launch was in October 1982.
The full constellation consists of 24 satellites in 3
orbit planes, which have a 64.8 degree inclination
to the earth's equator. The GLONASS system now
consists of 12 healthy satellites. GLONASS uses
the same code for each satellite and many
frequencies, whereas GPS uses two frequencies and a
different code for each satellite.
Galileo is Europe's contribution to the next
generation Global Navigation Satellite System
(GNSS). Unlike GPS, which is funded by the public
sector and operated by the U.S. Air Force, Galileo
will be a civil-controlled system that draws on both
public and private sectors for funding.
The GPS system is passive, meaning that the
satellites continuously transmit information towards
the Earth. If someone has a GPS receiver they can
receive the signal at no cost. The information is
transmitted on two frequencies: L1 (1575.42 MHz),
and L2 (1227.60 MHz). These frequencies are
called carrier waves because they are used primarily
to carry information to GPS receivers. The more
information a receiver measures the more expensive
the unit, and the more functions it will perform with
greater accuracy. When one receiver is tracking
satellites and obtaining position data, the
information received has traveled over 12,000 miles
and has been distorted by numerous atmospheric
factors. This results in accuracy of about 25 meters.
Moreover, the department of Defense (the agency
running the GPS) degrades receiver accuracy by
telling the satellites to transmit slightly inaccurate
information. This intentional distortion of the signal
is called Selective Availability (SA). With SA
turned on and one receiver is used, the greatest
accuracy a user can expect is 100 meters. To
improve the accuracy of GPS, differential, or
Relative Positioning can be employed. If two or
more receivers are used to track the same satellites,
and one is in a known position, many of the errors
of SA can be reduced, and in some cases
eliminated. Differential data can be accomplished
using common code or carrier data (L1 or L2).The
most accurate systems use differential data from a
GPS base station that continually tracks twelve
satellites and transmits the differential data to
remote units using a radio link.
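As a toy numerical illustration of this differential idea (not real GPS signal processing; the satellite coordinates and the shared error value below are invented), a base station at a known position derives a per-satellite range correction that a nearby remote unit then subtracts from its own measurements:

```python
# Toy differential-GPS sketch: a correlated range error common to the base
# station and the rover cancels once the base station's corrections are applied.
import numpy as np

sats = np.array([[15e6, 10e6, 20e6],      # invented satellite positions (metres)
                 [-10e6, 18e6, 21e6],
                 [5e6, -20e6, 19e6]])
base_true = np.array([1.00e6, 2.00e6, 6.00e6])   # surveyed base-station position
rover_true = np.array([1.01e6, 2.02e6, 6.00e6])  # unknown to the rover
common_error = 75.0                              # metres of shared error (e.g. SA, atmosphere)

def measured_ranges(position):
    """True geometric ranges plus the shared error, as both receivers would measure."""
    return np.linalg.norm(sats - position, axis=1) + common_error

# Base station: difference between what it measures and what it should measure.
corrections = measured_ranges(base_true) - np.linalg.norm(sats - base_true, axis=1)

# Rover: apply the broadcast corrections to its own measurements.
corrected = measured_ranges(rover_true) - corrections
print(np.allclose(corrected, np.linalg.norm(sats - rover_true, axis=1)))  # True
```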
II. EVOLUTION
Generations of Satellites
 Block I: Prototype (test) satellites. 10 launched between 1978 and 1985. All retired.
 Block II: Initial operational satellites. 9 launched between 1989 and 1990. 5 still functioning.
 Block IIA: Slightly modified Block IIs. 19 launched between 1990 and 1997. 18 still functioning.
 Block IIR: Replenishment satellites. 6 orbited to date. First in 1997. C/A code on L2 plus higher power on the last 12 satellites launched from 2003 onwards.
 Block IIF: Follow-on satellites. New civil signal at 1176.45 MHz. First launch expected in 2005.
 Block III: Conceptual.
GPS History
 1973 - Consolidation of several U.S. DoD developmental programs into the Navstar Global Positioning System
 1978 - First prototype satellites launched
 1983 - Korean Airlines Flight 007 shot down. President Reagan reaffirms U.S. policy on civil use of GPS
 1989 - First operational satellites launched
 1993 - Initial Operational Capability (24 satellites)
 1995 - Full Operational Capability
 2000 - Selective Availability turned off
Satellite launches
Block   Launch period   Success   Failure   In preparation   Planned   Currently in orbit and healthy
I       1978-1985       10        1         0                0         0
II      1989-1990       9         0         0                0         0
IIA     1990-1997       19        0         0                0         2
IIR     1997-2004       12        1         0                0         12
IIR-M   2005-2009       8         0         0                0         7
IIF     From 2010       8         0         2                0         10
IIIA    From 2017       0         0         0                12        0
IIIB    -               0         0         0                8         0
IIIC    -               0         0         0                16        0
Total                   66        2         2                36        31
III. COMPONENTS OF GPS SYSTEM
The current GPS consists of three major segments.
These are the space segment (SS), a control segment
(CS), and a user segment (US). The U.S. Air Force
develops, maintains, and operates the space and
control segments. GPS satellites broadcast signals
from space, and each GPS receiver uses these signals
to calculate its three-dimensional location
(latitude, longitude, and altitude) and the current
time.
Space Segment
The space segment (SS) is composed of the orbiting
GPS satellites, or Space Vehicles (SV) in GPS
parlance. The GPS design originally called for 24
SVs, eight each in three approximately circular
orbits, but this was modified to six orbital planes
with four satellites each. The six orbit planes have
approximately 55° inclination (tilt relative to the
Earth's equator) and are separated by 60° right
ascension of the ascending node (angle along the
equator from a reference point to the orbit's
intersection). The orbital period is one-half a
sidereal day, i.e., 11 hours and 58 minutes, so that
the satellites pass over the same locations or almost
the same locations every day. The orbits are arranged
so that at least six satellites are always within
line of sight from almost everywhere on the Earth's
surface. The result of this objective is that the
four satellites are not evenly spaced (90 degrees)
apart within each orbit. In general terms, the
angular difference between satellites in each orbit
is 30, 105, 120, and 105 degrees, which sum to 360
degrees.
Fig1
Control Segment
The control segment is composed of:
1. a master control station (MCS),
2. an alternate master control station,
3. four dedicated ground antennas, and
4. six dedicated monitor stations.
Fig2
User Segment
The user segment is composed of hundreds of
thousands of U.S. and allied military users of the
secure GPS Precise Positioning Service, and tens of
millions of civil, commercial and scientific users of
the Standard Positioning Service. In general, GPS
receivers are composed of an antenna, tuned to the
frequencies transmitted by the satellites, receiver-
processors, and a highly stable clock (often a crystal
oscillator). They may also include a display for
providing location and speed information to the
user. A receiver is often described by its number of
channels: this signifies how many satellites it can
monitor simultaneously.
IV. FEATURES
• 12 parallel satellite tracking channels.
• Supports NMEA-0183 data protocol & Binary data protocol.
• Direct, differential RTCM SC 104 data capability.
• Static navigation improvements to minimize wander due to SA.
• Active or Passive antenna to lower cost.
• Max accuracy achievable by SPS.
• Enhanced TTFF when in Keep-Alive power condition.
• Auto altitude hold mode from 3D to 2D navigation.
• Maximum operational flexibility and configurable via user commands.
• Standard 2x10 I/O connector.
• User selectable satellites.
Fig3
V. THE FUTURE OF GPS TECHNOLOGY
• Further miniaturization of the technology (smaller and smaller)
• Integration of GPS receivers into PDAs, cameras, sports equipment, etc.
• Pet, child, and disabled tracking systems and services
• Bluetooth (short range RF) connectivity between GPS receivers and other Bluetooth-equipped devices (GPS + Bluetooth = positioning inside buildings?)
• New GPS signals; higher power signals
• GPS + GLONASS + Galileo
VI. APPLICATION AREAS OF GPS TECHNOLOGY
MILITARY USE
 Navigation: Soldiers use GPS to find objectives,
even in the dark or in unfamiliar territory, and to
coordinate troop and supply movement. In the United
States armed forces, commanders use the Commanders
Digital Assistant and lower ranks use the Soldier
Digital Assistant.
 Target tracking: Various military weapons systems
use GPS to track potential ground and air targets
before flagging them as hostile. These weapon
systems pass target coordinates to precision-guided
munitions to allow them to engage targets
accurately. Military aircraft, particularly in
air-to-ground roles, use GPS to find targets.
 For use in 155-millimeter.
 Search and rescue.
Need of GPS Technology:
 Trying to figure out where you are is probably
man's oldest pastime.
 Finally the US Dept of Defense decided to form a
worldwide positioning system.
 Also known as NAVSTAR (Navigation Satellite Timing
and Ranging Global positioning system), it provides
instantaneous position, velocity and time
information.
Public Safety
Satellite navigation is fast becoming an industry
standard for location information used by
emergency and other specialty fleets. Location and
status information provided to public safety systems
offers managers a quantum leap forward in efficient
operation of their emergency response teams. The
ability to effectively identify and view the location
of police, fire, rescue, and individual vehicles or
boats means a whole new way of doing business.
Fig4
Communication:
The navigational signals transmitted by GPS
satellites encode a variety of information including
satellite positions, the state of the internal clocks,
and the health of the network. These signals are
transmitted on two separate carrier frequencies that
are common to all satellites in the network. Two
different encodings are used: a public encoding that
enables lower resolution navigation, and an
encrypted encoding used by the U.S. military.
Forestry & GPS/GIS:
As a forester, Sawchuck finds that GPS and GIS
technologies enable him to more rapidly collect and
geocode data and then present it in numerous
formats ranging from text-based tables to detailed
color maps. But the most valuable asset that the
GPS/GIS combination brings to this forester’s job is
its analytical power. "A lot of people view GIS as a
great mapmaking tool," Sawchuck notes. "It does
that really well, but the real power behind GIS is
the ability to do analysis of your information.”
Fig5
Canoeing, Kayaking & Boating:
GPS provides mariners with navigational and
positioning accuracy up to within 3 meters. There is
no easier or safer way to navigate on the open
waters.
Canoeing & Kayaking
Record your journey by saving points of particular
interest and beauty or patches of soft shore where
you can easily return on future trips. Marking
hazardous areas to avoid when canoeing or
kayaking at a rapid pace can be boat-saver and
lifesaver.
GPS allows you to easily communicate coordinates
to others or to find your way.
Fig6
Fig7
VII. WORKING OF GPS TECHNOLOGY
GPS signals do not contain positional data. The
position reported by the receiver on the
ground is a calculated position based on rangefinding triangulation. GPS positioning is
achieved by measuring the time taken for a signal to
reach a receiver. Almost one million times a second
the satellite transmits a one or a zero in a complex
string of digits that appears random. In actuality this
code is not random and repeats every 266 days. The
receiver knows that the portion of the signal
received from the satellite matches exactly
with a portion it generated a set number of seconds
ago. When the receiver has determined
this time, the distance to the satellite can be
calculated using simple trigonometry:
Distance to the satellite = c × (t_r − t_0)
where c is the speed of light in a vacuum
(299792.5 × 10³ m/s), t_0 is the time at the origin
and t_r is the time at the receiver. The DoD
maintains very accurate telemetry data on the
satellites and their positions are known to a high
level of precision. This simple operation allows the
distance to a satellite to be calculated accurately.
When the distance to three satellites is known then
there is only one point at which the user can be
standing.
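The sketch below illustrates the calculation just described with invented satellite positions: travel times are turned into distances using d = c (t_r − t_0), and the receiver position is then refined iteratively from those distances. It assumes a perfectly synchronised receiver clock; real receivers resolve the clock error by tracking an additional satellite.

```python
# Illustrative range-finding sketch; satellite coordinates and times are made up.
import numpy as np

C = 299_792_458.0  # speed of light in a vacuum, m/s

def ranges_from_times(t_transmit, t_receive):
    """Distance to each satellite: d = c * (t_r - t_0)."""
    return C * (np.asarray(t_receive) - np.asarray(t_transmit))

def solve_position(sat_positions, distances, guess, iterations=10):
    """Gauss-Newton refinement of the receiver position from three or more ranges."""
    x = np.array(guess, dtype=float)
    for _ in range(iterations):
        diffs = x - sat_positions                # vectors from satellites to the guess
        est = np.linalg.norm(diffs, axis=1)      # ranges implied by the guess
        J = diffs / est[:, None]                 # derivative of each range w.r.t. position
        dx, *_ = np.linalg.lstsq(J, distances - est, rcond=None)
        x += dx
    return x

sats = np.array([[15e6, 10e6, 20e6], [-10e6, 18e6, 21e6], [5e6, -20e6, 19e6]])
true_receiver = np.array([1.2e6, 2.3e6, 6.0e6])
travel_times = np.linalg.norm(sats - true_receiver, axis=1) / C   # simulated signals
d = ranges_from_times(0.0, travel_times)
print(solve_position(sats, d, guess=[0.0, 0.0, 6.4e6]))           # ~ true_receiver
```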
VIII. CONCLUSION
The Global positioning system (GPS) is a space
based navigation system that shows you exact
position on the earth any time, in any weather. No
matter where you are! GNSS technology has made
impact on navigation and positioning needs with the
use of satellites and ground stations the ability to
track aircrafts, cars, cell- phones, boats and even the
individuals has become a reality. It uses the
constellation of between 24 and 32 earth orbit
satellites that transmit precise radio signals, which
allow GPS receivers to determine their current
location, the time and the velocity. These satellites
are in high orbit, circling at 14,000 km/h about
20,000 km above the earth's surface. The signal,
sent to the earth at the speed of light, is picked
up by GPS receivers that are now commonplace
worldwide.
Reducing Maintenance Cost and Effort of Object Oriented Software – A Time Design Approach
Dr. Rakesh Sharma1, Rakhi Saha2
1 Associate Professor, CCSIT, Teerthanker Mahaveer University, Moradabad
2 Assistant Professor, Department of Management, IMT Ghaziabad
1 [email protected]
2 [email protected]
ABSTRACT—Software maintenance is the process of
improvement of the performance of software, or
rectification of errors if they exist. Recently it
has been observed that most of the budget and effort
of software is used during the maintenance phase.
This is because designers overlook the maintenance
phase, as they have to produce the software within a
stipulated time period.
Keywords—Cryptography, Encryption, Decryption, Genetic.
I. INTRODUCTION
Software maintenance is a set of activities performed
when software undergoes modification to code and
associated documentation due to a problem or the need
for improvement [1]. The laws of software evolution
aid maintenance decisions by helping us understand
how a system changes over time. We are interested in
changes in size, complexity, resources and ease of
maintenance. Software maintenance has become a major
activity in the industry. Surveys and estimates made
between 1988 and 1990 suggested that on average as
much as 75% of a project's software budget is devoted
to maintenance activity over the life of the software
[10]. Software maintenance cost is the greatest cost
incurred in developing and using a software system.
Maintenance costs vary widely from application to
application, but on average they seem to be between
2.0 and 4.0 times the development costs for a large
software system [3].
Software maintainability is the degree to which
software can be understood, corrected, adapted and/or
enhanced. Software maintenance accounts for more
effort than any other software engineering activity.
When changes in the software requirements are
requested during software maintenance, the impact
cost may be greater than 10 times the impact cost
derived
from a change required during the software design,
i.e. the cost to maintain one line of source code may
be more than 10 times the cost of the initial
development of that line [7]. Maintenance in the
widest sense of post-development software support is
likely to continue to represent a very large fraction
of the total system cost [3]. As more programs are
developed, the amount of effort and resources
expended on software maintenance is growing.
Maintainability of
software thus continues to remain a critical area
in the software development era. Verification and
Validation (V & V) for software maintenance is
different from planning V&V for development
efforts [4]
Maintenance may be defined by defining
four activities that are undertaken after a program
is released for use. The first activity is corrective
maintenance, which corrects uncovered errors after
the software is in use. Adaptive maintenance, the
second activity, is applied when changes in the
external environment precipitate modification to the
software. The third activity incorporates
enhancements that are requested by customers and
of the maintenance cost and efforts are spent.
The fourth and last activity is preventive
maintenance in anticipation of any future
problem. The maintenance effort distributions are
as follows [12]:
Activity        % Efforts
Enhancement     51.3
Adaptive        23.6
Corrective      21.7
Others           3.4
Maintainability has been defined as effort of
personnel hours, errors caused by maintenance
actions, scope of effort of the maintenance action
and program comprehensibility is subject to the
programmer experience and performance [15].
Cost factor is an important element for the
success of a project. Cost in a project is due to
the requirement of hardware, software and
human resources. Cost estimates can be based on
subjective opinion of some person or determined
through the use of models [2]. Reliability-Constrained
Cost Minimization minimizes cost subject to a system
reliability goal. The reliability of a system is
presented as a function of component failure
intensities, as well as operational profile and
component utilization parameters. Let n denote the
number of software components, let α denote the
system reliability target and let τ > 0 be the
mission time; the probability of failure-free
execution with respect to the time interval [0, τ]
is required to be at least α, where 0 < α < 1. The
problem is to minimize the total cost (TC) of
achieving failure intensities λ1, λ2, …, λn subject to
R(λ1, λ2, …, λn, τ) ≥ α and λi ≥ 0 for i = 1, 2, …, n. [23]
The purpose of a software cost model is to produce
the total development effort required to produce a
given piece of software, in terms of the number of
engineers and the length of time it will take to
develop the software. The general formula used to
arrive at the nominal development effort is [9]:
PM_initial = c · KLOC^k
where PM = person-months, KLOC = thousands of lines
of code, and c and k are constants given by the model.
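As a small worked illustration (not taken from the paper), the snippet below evaluates PM = c · KLOC^k using basic-COCOMO-style organic-mode constants c = 2.4 and k = 1.05 purely as example values.

```python
# Worked example of the nominal-effort formula; c and k are illustrative constants.
def nominal_effort(kloc: float, c: float = 2.4, k: float = 1.05) -> float:
    """Estimated development effort in person-months for a project of size `kloc`."""
    return c * kloc ** k

for size in (10, 50, 100):                 # project sizes in KLOC
    print(f"{size} KLOC -> {nominal_effort(size):.1f} person-months")
```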
Software metrics are numerical data related to
software development. Metrics strongly support
software project management activities. They relate
to the four functions of management, which are as
follows [6]:
 Planning: Metrics serve as a basis for cost
estimating, training planning, resource planning,
scheduling and budgeting.
 Organizing: Size and schedule metrics influence a
project's organization.
 Improving: Metrics are used as indicators of where
process improvement efforts should be concentrated
and to measure the effects of process improvement
efforts.
 Controlling: Metrics are used to report status and
track software development activities for compliance
to plan.
The first step in the maintainability analysis using
metrics is to identify the collection of metrics that
reflects the characteristics of the viewpoint with
respect to which the system is being analyzed, and to
discard metrics that provide redundant information
[13]. To understand the relationship between metrics,
we create and analyze correlation matrices for every
three-month interval (snapshot). Each correlation
matrix has a correlation coefficient (r) for each
possible metric pair, which can range from -1 to +1:
a value of +1 represents perfect positive correlation
and a value of -1 represents perfect negative
correlation. Any value between +0.70 and +1.00 is
accepted as a strong positive correlation, while any
value between -0.70 and -1.00 is accepted as a strong
negative correlation [26].
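A minimal sketch of that snapshot analysis is shown below; the metric columns and values are invented purely to show how a per-snapshot correlation matrix could be produced and read against the ±0.70 threshold.

```python
# Illustrative snapshot correlation analysis with made-up per-class metric values.
import pandas as pd

data = pd.DataFrame({
    "snapshot": ["2015Q1"] * 4 + ["2015Q2"] * 4,
    "loc":      [120, 340, 80, 410, 150, 300, 90, 380],   # lines of code
    "methods":  [10, 25, 6, 31, 12, 22, 7, 29],           # number of methods
    "changes":  [3, 9, 1, 12, 4, 8, 2, 11],               # maintenance changes

})

for snapshot, group in data.groupby("snapshot"):
    corr = group[["loc", "methods", "changes"]].corr()    # Pearson r for each metric pair
    print(f"--- {snapshot} ---")
    print(corr.round(2))
    # values with |r| >= 0.70 would be read as strong correlations
```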
Object-oriented technologies greatly influence
software development and maintenance through faster
development, cost saving and quality improvement, and
have thus become a major trend for methods of modern
software development and system modeling [15]. Class,
object, method, message, instance variable and
inheritance are the basic concepts of object-oriented
technology [8]. Object-oriented metrics are mainly
measures of how these constructs are used in the
design process. Classes and methods are the basic
constructs for object-oriented technology. The amount
of function provided by
object-oriented software can be estimated based
on the number of identified classes and metrics or
its variables.
Improving the quality and reducing the cost of
products are fundamental objective of any
engineering discipline. In the context of software,
as productivity and quality are largely determined by
the process, to satisfy the engineering objectives of
quality improvement and cost reduction the software
process must be improved. Cost is a crucial aspect of
project planning and managing. Cost overrun can
cause customers to cancel the project and cost
underestimate can force a project team to invest
much of its time without financial compensation.
II. MAINTENANCE: A DIFFERENT ASPECT
The maintenance of software is affected by
many factors, such as the availability of skilled
staff, the use of standardized programming
languages and inadvertent carelessness in design.
Implementation and testing has an obvious
negative impact on the ability to maintain the
resultant software. Additionally, some software
organization may become maintenance bound,
usable to undertake the implementation of new
projects, because all their resources are dedicated
to the maintenance of old software. The opinion
of Programmers, Managers and Customers are as
follows:
Programmer's opinion: According to the programmers'
opinion, a program with a high level of
maintainability should consist of modules with loose
coupling and high cohesiveness, and of simple,
traceable, well structured, well documented and
sufficiently commented code, with well defined
terminology for its variables. Furthermore, the
implemented
routines should be of a reasonable size,
preferably less than 80 lines of code with limited
fan-in and fan-out. Finally the declaration and the
implementation part of each routine must be
strictly separated.
Program Managers Opinion: Program Manager
always aim at the limitation of effort spent during
the maintenance process. They also focus on the
high reusability of one program.
Customers Opinion: Nowadays, because of the
high demand of the successful software systems
and external changes, a high level of
modification can be attributed to changes in
requirement.
III. DESIGN CONSIDERATION: A BETTER WAY TO REDUCE COST AND EFFORTS
Several elements affect and shape the
design of the application. Some of these elements
might be non-negotiable and finite resources,
such as time, money and workforce. Other
elements such as available technologies,
knowledge and skills are dynamic and vary
throughout the development life cycle [5].
Whenever development of a software system is
complete, it reaches the maintenance phase. During
this phase, defects arriving over time and customer
problems have to be fixed as soon as possible, with
excellent fix quality, to make the system more
reliable [27]. The high-level design of a software
system can be analyzed for the purpose of predicting
change difficulty from the point of view of the
testers and maintainers [16].
set in the context of a software engineering
environment [11]. Although these elements
influence the design of an application to some
extent, the business problems dictates the
capabilities application must have for a
satisfactory solution, such are as follows:
A. Design for Scalability
Scalability is the capability to increase
resources to produce an increase in the
service capacity. A scalable application
requires a balance between the software and
hardware used to implement the application.
The two most common approaches to
scalability are:
1) Scaling Up: Refers to achieving scalability by
improving the existing servers processing
hardware. Scaling up includes adapting more
memory, more or faster processes or
migrating the application to a powerful
computer. Typically, an application can be
scaled up without changing the source code. In
addition the administrative effort does not
change drastically. However, the benefit of
scaling up tapers off eventually until the
actual maximum processing capabilities of
the machine is reached.
2) Scaling Out: Refers to distributing the process
load across more than one server. This is
achieved by using multiple computers; the
collection of computers continues to act as
the original device configuration from the end
user perspective. The application should be
able to execute without needing information
about the server on which it is executing.
This concept is called location transparency.
It increases the fault tolerance of the
application.
Design has more impact on the scalability of
an application than the other three factors. As we
move up the pyramid, the impact of various
factors decreases:
To design for scalability, the following guidelines
should be considered:
 Design processes such that they do not wait.
 Design processes so that they do not compete for
resources.
 Design processes for commutability.
 Partition resources and activities.
 Design components for interchangeability.
B. Design for Availability
Availability is a measure of how often the
application is available to handle service requests
as compared to the planned run time. Availability
also takes into account repair time, because an
application that is being repaired is not available
for use. The measurement types for calculating
availability are:
Mean Time Between Failures (MTBF) = Hours / Failure Count: the average length of time the application runs before failing.
Mean Time To Recovery (MTTR) = Repair Hours / Failure Count: the average length of time needed to repair and restore service after a failure.
The formula for calculating availability is:
Availability = (MTBF / (MTBF + MTTR)) X 100
The MTBF for a system that has periodic maintenance
at a regular interval can be derived from R_T(t), the
reliability function assuming periodic maintenance
every T hours [16].
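For illustration only, with hypothetical values of MTBF = 400 hours and MTTR = 2 hours, the availability formula above gives:

```python
# Hypothetical MTBF/MTTR figures, used only to illustrate the availability formula.
mtbf = 400.0   # mean time between failures, hours
mttr = 2.0     # mean time to recovery, hours
availability = mtbf / (mtbf + mttr) * 100
print(f"Availability = {availability:.2f}%")   # Availability = 99.50%
```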
Designing for availability includes anticipating,
detecting and resolving hardware and software
failures before they result in service errors,
faults, or data corruption, thereby minimizing
downtime.
To design for availability of an application the
following guidelines should be considered:
 Reduce Planned downtime: Use rolling
upgrades, e.g. to update a component on a
clustered server, we can move the server's resource
group to another server, take the server offline,
update the component and then bring the server
online. Meanwhile, the application experiences no
downtime.
 Use Redundant array of independent disks
(RAID) : Raid uses multiple hard disks to
store data in multiple places. If a disk fails,
the application is transferred to a mirrored
data image and the application continues
running. The failed disk can be replaced
without stopping the application.
C. Design for Reliability
The Reliability of an application refers to the
ability of the application to provide accurate
results. Although software standard and
software engineering processes guide the
development of reliable or safe software,
mathematically sound conclusions that quantify
reliability from conformity to standards are hard to
derive [20]. Reliability
measures how long the application can
execute and produce expected results
without failing.
The following tasks can help to create a reliable application:
 Using a good architectural infrastructure.
 Including management information in the application.
 Implementing error handling.
 Using redundancy.
D. Design for Performance
Performance is defined by metrics such as
transaction throughput and resource utilization.
An application performance can be defined in
terms of its response time.
To define a good Performance the following
steps should be taken.
 Identify project constraints.
 Determine services that the application will
perform.
 Specify the load on the application.
E. Design for Interoperability
Interoperability refers to the ability to operate
an application independent of programming
language, platform and device. The application
needs to be designed for interoperability because it
reduces operational cost and complexity, uses
existing investment and enables optimal
deployment.
To design application interoperability the
following tasks should be considered:
 Network interoperability.
 Data interoperability.
 Application interoperability.
 Management interoperability.
F. Design for Globalization
Globalization is the process of designing
and developing an application that can
operate in multiple cultures and locales.
Globalization involves:
 Identifying the cultures and locales that must
be supported.
 Designing features that support those cultures
and locales.
Writing code that executes properly in all the
supported cultures and locales.
Globalization enables to create application that
can accept, display and output information in
different languages scripts that are appropriate
for various geographical areas.
To design for globalization the following
information should be kept in mind:
 Character classification.
 Date and Time formatting.
 Number, currency, weight and measure
convention.
G. Design for Clarity and Simplicity
Clarity and simplicity are enhanced by modularity and
module independence, by structured code, and by
top-down design and implementation, among other
techniques. The allocation of functional requirements
to elements of code represents an important step in
the design process that critically impacts
modifiability [19].
The following guidelines for the definition of modules will have an extremely positive impact on maintainability:
 Use a hierarchical module control structure whenever possible.
 Each module should do its own housekeeping as its first act.
 Each module should have only one entrance and one exit.
 Limit module size to roughly 200 statements.
 Reduce communication complexity by passing parameters directly between modules.
 Use 'go-to-less' or structured programming logic.
F. Design for Readability
Maintenance will ultimately result in changing the source code, and understanding thousands of lines of source code is almost impossible if the code is not well supported by meaningful comments. So, the readability of the source code can be estimated by finding the percentage of comment lines in the total code. A new factor, the comment ratio (CR), is defined as [14]:
CR = LOC / LOM
where LOC = total lines of code and LOM = total lines of comments in the source code.
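As an illustration only (not part of the original paper), a minimal Python sketch of this metric might count every non-blank line as LOC and every line beginning with '#' as LOM; the comment marker and the file name are assumptions:

    # Sketch: compute the comment ratio CR = LOC / LOM for one source file.
    # Assumes '#'-style line comments; "module.py" is a hypothetical file name.
    def comment_ratio(path):
        loc, lom = 0, 0
        with open(path) as src:
            for line in src:
                stripped = line.strip()
                if not stripped:
                    continue          # blank lines are ignored
                loc += 1              # every non-blank line counts towards LOC
                if stripped.startswith("#"):
                    lom += 1          # comment lines count towards LOM
        return loc / lom if lom else float("inf")

    print(comment_ratio("module.py"))

A lower CR (more comment lines per line of code) would then indicate better-documented, more readable code.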
IV. CONCLUSION
There are techniques based on different factors like confusion, permutation and shuffling. The decision for scalability is set in the context of a software engineering environment. Although these elements influence the design of an application to some extent, the business problem dictates the capabilities the application must have for a satisfactory solution.
V. REFERENCES
[1] Pressman, R., "Software Engineering: A Practitioner's Approach", 4th Edition, McGraw-Hill, 1997.
[2] Jalote, Pankaj, "An Integrated Approach to Software Engineering", 2nd Edition, Narosa Publishing House.
[3] Pfleeger, Shari Lawrence, "Software Engineering: Theory and Practice", 2nd Edition, Pearson Education Asia, 2002.
[4] Wallace, Dolores R., Daughtrey, Taz, "Verifying and Validating for Maintainability", IEEE Press, 1988, pp. 41-46.
[5] "Analyzing Requirements and Defining", Microsoft .NET Solution Architecture, Prentice-Hall of India Pvt. Ltd., New Delhi, 2004.
[6] http://sunset.usc.edu/classes/css77b-2001/metricsguide/metrics.html
[7] Somerville, Ian, "Software Engineering", 5th Edition, Addison Wesley, 1996.
[8] Kan, Stephen H., "Metrics and Models in Software Quality Engineering", 2nd Edition, Pearson Education, 2003.
[9] Ghezzi, Carlo, Jazayeri, Mehdi, Mandrioli, Dino, "Fundamentals of Software Engineering", Prentice-Hall of India Pvt. Ltd., New Delhi, 2002.
[10] Khan, R. A., Mustafa, K., "Assessing Software Maintenance - A Metric Based Approach", DeveloperIQ, March 2004.
[11] Joch, Alan, "Eye on Information", Oracle.com/OracleMagazine, January-February 2005, pp. 27-34.
[12] Sunday, A. David, "Software Maintainability - A New 'ility'", IEEE Proceedings Annual Reliability and Maintainability Symposium, 1989, pp. 50-51.
[13] Muthana, S., Kontogiannis, K., Ponnambalam, K., Stacy, B., "A Maintainability Model for Industrial Software Systems Using Design Level Metrics", IEEE Transactions on Software Engineering, 2002, pp. 248-256.
[14] Aggarwal, Krishan, Singh, Yogesh, Chabra, Jitender Kumar, "An Integrated Measure of Software Maintainability", IEEE Proceedings Annual Reliability and Maintainability Symposium, 2002, pp. 235-241.
[15] Chu, William C., Lu, Chih-Wei, Chang, Chih-Hung, Chung, Yeh-Ching, Huang, Yueh-Min, Xu, Baowen, "Software Maintainability Improvement: Integrating Standards and Models", IEEE Proceedings of the 26th Annual International Computer Software and Applications Conference, 2002.
[16] Briand, Lionel C., Morasca, Sandro, Basili, Victor R., "Measuring and Assessing Maintainability at the End of High Level Design", IEEE Press, 1993, pp. 88-97.
4th International Conference on System Modeling & Advancement in Research Trends (SMART)
College of Computing Sciences and Information Technology (CCSIT) ,Teerthkanker Mahaveer University , Moradabad
[2015]
Web Pen-Testing: A Case Study
Dr. Rakesh Shrama1, Rakhi Saha2
1 Associate Professor, CCSIT, Teerthanker Mahaveer University, Moradabad
2 Assistant Professor, Department of Management, IMT Ghaziabad
1 [email protected]
2 [email protected]
ABSTRACT-- This is the story of a penetration test I did against a medium-sized enterprise. Using only web-based vulnerabilities
I was able to compromise a company's security. The names
have been changed and some of the events are slightly
fictitious as I was not authorized to actually penetrate the
company. I am writing this to prove that with a number of
seemingly minor issues a company can be completely
compromised.
Keywords: pen testing, information security, cyber crime,
vulnerability assessment.
I. INTRODUCTION
The details include a case study of a penetration test that was undertaken against a medium-sized enterprise dealing with info-tech related services. I was able to compromise the
company's security, using only web based
vulnerabilities. The names have been changed
and some of the events are slightly fictitious as I
was not authorized to actually penetrate the
company. I am writing this to prove that with a
number of seemingly minor issues a company
can be completely compromised. I was not
authorized to DoS, and I was not authorized to
attack the network or the underlying server/OS
since it was hosted by Rackspace. My task was to
find as many significant issues with the corporate
website (anything at that domain name) as
possible in a realistic amount of time that an
attacker would spend attacking the system. The
company had many other websites, but this one
in particular was the most critical to them and
thus I was contracted to provide an external
black-box security assessment of the application.
This case study shows how 10 mostly
innocuous security issues were used together to
leverage a major attack against a company.
II. THE SECURITY ISSUES
First thing was first, I needed to see what was
actually on that domain. I began by performing a
number of web-queries to determine what servers
were available. The most interesting search I
performed was actually using Alexa. This yielded
the first issue: #1 - webmail is easily located.
company.com - 91%
webmail.company.com – 9%
The reason webmail showed up, I later found
out, was because the company had a number of
professional SEO people on contract, who used
webmail (Outlook Web Access). They had the
Alexa toolbar installed (which is essentially
spyware) and ultimately allowed disclosure of
that URL to the world. Since that time I wrote a
tool called Fierce that would have yielded the
same results, but with far less luck. Here's the
results from a Fierce scan:
Trying zone transfer first... Fail: Response
code from server: NOTAUTH
Okay, trying the good old fashioned way...
brute force:
DNS Servers for company.com:
ns.rackspace.com
ns2.rackspace.com
Checking for wildcard DNS... Nope. Good.
Now performing 359 test(s)...
123.123.123.145 www.company.com
123.123.123.145 ftp.company.com
123.123.123.250 support.company.com
222.222.222.194 webmail.company.com
123.123.123.100 mail.company.com
222.222.222.194 smtp.company.com
222.222.222.125 pop.company.com
222.222.222.125 pop3.company.com
123.123.123.104 blog.company.com
123.123.123.109 vpn.company.com
Subnets found (may want to probe here using
nmap or unicornscan):
123.123.123.0-255 : 6 hostnames found.
222.222.222.0-255 : 4 hostnames found.
1
4th International Conference on System Modeling & Advancement in Research Trends (SMART)
College of Computing Sciences and Information Technology (CCSIT) ,Teerthkanker Mahaveer University , Moradabad
Done with Fierce scan: http://ha.ckers.org/fierce/
Found 10 entries across 10 hostnames.
Have a nice day.
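For illustration only (this is not part of the original write-up), a stripped-down version of this kind of subdomain brute forcing could be sketched in Python as follows; the domain and the candidate names are hypothetical placeholders:

    # Sketch of Fierce-style subdomain enumeration by forward DNS lookups.
    import socket

    def brute_force_subdomains(domain, candidates):
        found = {}
        for name in candidates:
            host = "{}.{}".format(name, domain)
            try:
                found[host] = socket.gethostbyname(host)  # resolves only if the name exists
            except socket.gaierror:
                pass                                      # non-existent names are skipped
        return found

    for host, ip in brute_force_subdomains("company.com", ["www", "ftp", "webmail", "mail", "smtp", "vpn", "blog"]).items():
        print(ip, host)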
Sure, it seems like a small issue, most
companies have webmail servers, but should they
be known to the rest of the world? Theoretically
a brute force attack could have yielded some
results, but that's noisy. But just to be on the safe side,
I decided to see if I could uncover some email
addresses. This leads me to the second minor
issue: #2 - easily discoverable and plentiful email
addresses. After a handful of web-searches and
playing around with some social networking sites
I was able to uncover several dozen email
addresses of company employees. But brute force
is so blase. I decided to leave that one alone for
the moment - especially because one of my
mandates during the penetration test was to avoid
any potential of denial of service.
At this point it was time to start attacking the
application directly, looking for holes. I created
an account and then looked at the forgot
password policy. This leads me to the next minor
issue: #3 - forgotten passwords are sent in plain
text. I think everyone would agree that sending
your password in plaintext format is bad, but
most people don't really understand why it's bad.
It's not just dangerous because an attacker
could sniff the password, but it's also bad
because the password is stored in plaintext (or
otherwise recoverable). If the server were ever
compromised the entire list of usernames and
passwords would have been compromised. There
are other issues as well, but I'll get to that later.
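As a side note that is not part of the original case study, the usual remedy is to store only a salted, slow hash of each password rather than anything recoverable; a minimal Python sketch (the parameter choices are illustrative) is:

    # Store a salted PBKDF2 hash instead of the plaintext password.
    import hashlib, hmac, os

    def hash_password(password):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest  # persist both values

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)

With a scheme like this, a "forgot password" feature can only reset the password, never email it back.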
Upon inspecting the functionality of the site it
becomes clear that a user can modify their email
addresses to anything they want with no
verification that they are in fact the owner of the
account. That may not seem that bad, but it can
be, not to mention it could allow for spamming.
That brings us to our next seemingly minor issue:
#4 - system will allow users to change email
address to any email address they want (with no
verification). Next I began looking at the
[2015]
application for cross site scripting (XSS)
vulnerabilities. I did end up finding a number of
XSS vulnerabilities in the application. Some
might not consider this a minor issue, but in the
relative scheme of things, the XSS vulnerability
is not a method to compromise a server by itself.
So for the time being let's list it also as a minor
issue: #5 - XSS vulnerabilities in the application.
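Purely as an illustration (not the author's actual method), a naive reflected-XSS probe can be sketched in Python; the URL and parameter name are hypothetical:

    # Send a harmless marker containing HTML metacharacters and check whether
    # the application reflects it back unescaped (a likely XSS candidate).
    import requests

    def reflects_unescaped(url, param):
        marker = "xss-probe-<\">'"
        resp = requests.get(url, params={param: marker}, timeout=10)
        return marker in resp.text

    print(reflects_unescaped("https://www.company.com/search", "q"))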
I realized at some point that the application
uses email addresses as usernames. Many
websites do this for ease (users remember their
usernames easier, and there is no chance of
overlap/conflict). Still, it's a way to trace users
back to their email accounts, and as email
addresses are often used as a factor of
authentication I still list this as a minor issue and
later on this will become very useful: #6 - usernames are email addresses. Part of the
function of the site is a recommendation engine
to allow other people to see the same page you
are seeing in case they are interested in it.
Unfortunately the recommendation engine allows
you to customize the content of the email you
send, making it a perfect spam gateway. That's a
minor issue but could get the whole site put on
anti-spam lists if a spammer ever found it: #7 - recommendation engine sends custom emails.
A common function of many websites is to
require authentication and then redirect the user
to whatever page they originally intended to go
to upon authentication. Although it's a minor
issue the user is not necessarily aware of the page
they are going to. It's a very minor issue but can
be used to the attacker's advantage as I'll show
later: #8 - login redirects. After some work I
found a function that allows users to see if other
users are valid. In this case it's the change email
address function. The site is smart enough to
know that two people cannot have the same
email address and will warn you that the address
is taken if there is a valid user on the system with
that email address. These functions are common
on websites, and most people don't see them as
issues, but they do allow for information
disclosure. However, it's a minor issue most of
the time: #9 - function to detect valid users.
2
4th International Conference on System Modeling & Advancement in Research Trends (SMART)
College of Computing Sciences and Information Technology (CCSIT) ,Teerthkanker Mahaveer University , Moradabad
The last minor issue I found was that the
change email function was vulnerable to cross
site request forgery (CSRF). An attacker can
force a logged in user to change their email
address. Although this is an issue, typically sites
are full of CSRF issues, and are often
overlooked. This, however, was the final tool in
my arsenal by which to initiate the attack: #10 - change email function is vulnerable to CSRF.
III. THE ATTACK
Now, let's combine the issues and initiate an
attack. First, I take my list of corporate email
addresses (#2) and check to see which ones are
also user accounts (#6) using the function to
detect valid users (#9). This shows us which
users have corporate email accounts as well as
legitimate user accounts on the site in question.
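To make this first step concrete, here is a rough sketch that is not taken from the engagement; the endpoint, field names and response text are all hypothetical placeholders. It simply loops over candidate addresses and watches for the "address already taken" warning (#9) to learn which corporate addresses (#2) are also usernames (#6):

    # Abuse a "change email" endpoint that warns when an address is already in use,
    # thereby disclosing which corporate addresses are valid site users.
    import requests

    def find_valid_users(candidate_emails, session_cookie):
        valid = []
        for email in candidate_emails:
            resp = requests.post(
                "https://www.company.com/account/change-email",  # hypothetical URL
                data={"new_email": email},                       # hypothetical field name
                cookies={"session": session_cookie},
                timeout=10,
            )
            if "address is taken" in resp.text.lower():          # hypothetical warning text
                valid.append(email)
        return valid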
Now I change my email address to one of the
email addresses of a corporate user (#4) that's
NOT a user on the system. I found these users at
the same time as I found valid users using the
change email function (#9). It is still an email
address of an employee of the company (#2)
though. Then I send an email to one of the valid
users on the system (#2) using the
recommendation engine (#7). The email looks
like it's coming from one of their co-workers
(since I changed my email address to one of the
corporate email address) and contains a link,
asking them to see if they like the
recommendation or not.
The link is a link to the login function (#8) that
redirects the user to an XSS hole (#5). Now the
user has logged in and their browser is under our
control. While showing the user some page that
they are probably not interested in, I forward the
user invisibly to the change email function and
force them to change their email address through
CSRF (#10) to another email address that I've got
control over. Then I have their browser submit
the forgot password function (#3) which delivers
their password to my inbox.
Now I know their email address since I know
which email addresses I sent the original message
to (a one to one mapping of email addresses I've
[2015]
got control over to email addresses I want to
compromise helps with this) and I know their
password (#3). Since most users tend to use the
same password in multiple places there is a high
probability of the user having the same corporate
email password as corporate website password. I
quickly log in, and change their email address
back to their own account, to cover my tracks.
Then I log into the webmail server (#1) with their
username and password. I have now completely
compromised a legitimate user's corporate email.
IV. CONCLUSIONS
Minor issues are often overlooked, but in some cases even the smallest issues can mount into huge compromises in security. Of course I could
have used the XSS to scan the corporate intranet,
and compromise internal machines, but I wanted
to prove that through only minor issues alone (no
direct exploits against any servers) I was able to
steal corporate email - send email on the user's
behalf from their account and uncover other users
of the system. Not to mention enabling corporate
espionage, usernames and passwords sent over
email and access to the helpdesk who can give
me further access through social engineering.
Even minor issues that are regularly dismissed in
security assessments can be leveraged by a
determined attacker to compromise a corporation.
4th International Conference on System Modeling & Advancement in Research Trends (SMART)
College of Computing Sciences and Information Technology (CCSIT) ,Teerthanker Mahaveer University , Moradabad
[2015]
Big Data Analytics: Challenges and Solutions
Pranjal Agarwal1, Rupal Gupta2
1 MCA Student, CCSIT, TMU
2 Assistant Professor, CCSIT, TMU
1 [email protected]
2 [email protected]
ABSTRACT- 2015 is the digital world in which the data size has exceeded the zettabyte scale. Today's business requires this huge data to be processed extremely fast using optimization techniques. Bigdata is a collection of heavy, heterogeneous and complex data sets that contain massive data along with data management capabilities and categorized real-time data, which is expressed in volume, variety and velocity, the three major aspects of Bigdata. Bigdata analytics is all about discovering patterns and other meaningful information by collecting, analysing and organising this large amount of data. Big data analytics helps organizations to process only the information which is meaningful to their business, excluding the noisy data that is irrelevant to the business. This paper gives in-depth knowledge of Bigdata analytics, followed by the challenges of adopting Bigdata technology in business along with its solution using Hadoop for Bigdata analytics. Hadoop gives a better understanding of Bigdata analytics to business logic for the decision making process.
Keywords- Bigdata, Analytics, Hadoop, HDFS, MapReduce.
I. INTRODUCTION
Big data analytics is the process of examining large amounts of data to discover hidden patterns and unknown correlations, which helps business intelligence in the current era, so it can help in better decision making. It informs business decisions by enabling data scientists, predictive modellers and other analytics professionals to analyse large volumes of transaction data as well as other forms of data that may be untapped by conventional business intelligence programs. [6] Big data analytics also improves the efficiency and effectiveness of data warehousing. Nowadays data also comes from web blogs, emails, mobile-phone records, social media, surveys and machine data captured by sensors connected to the Internet.
Relational database management systems and
desktop statistics and visualization packages often
have difficulty handling big data. The work instead
requires "massively parallel software running on
tens, hundreds, or even thousands of servers". Big
Data Analytics Applications (BDA Apps) are a new
type of software applications, which analyse big
data using massive parallel processing frameworks
(e.g., Hadoop).
Developers of such applications typically develop
them using a small sample of data in a pseudo-cloud environment. Afterwards, they deploy the
applications in a large-scale cloud environment
with considerably more processing power and
larger input data. Working with BDA App
developers in industry over the past three years, we
noticed that the runtime analysis and debugging of
such applications in the deployment phase cannot
be easily addressed by traditional monitoring and
debugging approaches.
II. BIG DATA ARCHITECTURE
Big data architecture is a two-level architecture in which the first level is the client level, which is further divided into different layers. [2] The client-server architecture for Big Data is as follows:
A. CLIENT LEVEL ARCHITECTURE
The client level architecture consists of NoSQL databases, distributed file systems and a distributed processing framework. A NoSQL database is a distributed key-value database designed to provide highly reliable, scalable and available data storage
across a configurable set of systems that function as storage nodes. NoSQL databases provide distributed, highly scalable data storage for Big Data.
Fig1 - Client level architecture of big data.
The next layers consist of the distributed file system
that is scalable and can handle a large volume of
data, and a distributed processing framework that
distributes computations over large server
clusters.[2]
B. SERVER LEVEL ARCHITECTURE
The server level architecture contains parallel
computing platforms that can handle the associated
volume and speed. Clusters and grids are types of parallel and distributed systems, where a cluster consists of a collection of inter-connected stand-alone computers working together as a single
integrated computing resource, and a grid enables
the sharing, selection, and aggregation of
geographically distributed autonomous resources
dynamically at runtime [2].
Fig2 - Server level architecture of big data.
III. BIG DATA - CHALLENGES
In the distributed systems world, “Big Data”
started to become a major issue in the late 1990‟s
due to the impact of the world-wide Web and a
resulting need to index and query its rapidly
mushrooming content. Database technology
(including parallel databases) was considered for
the task, but was found to be neither well-suited nor
cost-effective for those purposes. The turn of the
millennium then brought further challenges as
companies began to use information such as the
topology of the Web and users‟ search histories in
order to provide increasingly useful search results,
as well as more effectively-targeted advertising to
display alongside and fund those results. Google’s
technical response to the challenges of Web-scale
data management and analysis was simple, by
database standards, but kicked off what has become
the modern “Big Data” revolution in the systems
world [3]. To handle the challenge of Web-scale
storage, the Google File System (GFS) was created
. GFS provides clients with the familiar OS-level
byte-stream abstraction, but it does so for extremely
large files whose content can span hundreds of
machines in shared-nothing clusters created using
inexpensive commodity hardware [5].
The main challenges with big data are:
1) HETEROGENEITY AND INCOMPLETENESS: When
humans consume information, a great deal of heterogeneity is
comfortably tolerated. In fact, the nuance and richness of
natural language can provide valuable depth. However,
machine analysis algorithms expect homogeneous data, and
cannot understand nuance. In consequence, data must be
carefully structured as a first step in (or prior to) data analysis.
Computer systems work most efficiently if they can store
multiple items that are all identical in size and structure.
Efficient representation, access, and analysis of semi-structured data require further work.
2) SCALE: The first problem with Big Data is its size. After all,
the word “big” is there in the very name. Managing large and
rapidly increasing volumes of data has been a challenging
issue for many decades. In the past, this challenge was
mitigated by processors getting faster, following Moore’s law,
to provide us with the resources needed to cope with
increasing volumes of data. But, there is a fundamental shift
underway now: data volume is scaling faster than compute
resources, and CPU speeds are static.
3) TIMELINESS: The flip side of size is speed. The
larger the data set to be processed, the longer it will take to
analyse. The design of a system that effectively deals with
size is likely also to result in a system that can process a given
size of data set faster. However, it is not just this speed that is
usually meant when one speaks of Velocity in the context of
Big Data. Rather, there is an acquisition rate challenge.
4) PRIVACY: The privacy of data is another huge
concern, and one that increases in the context of Big Data. For
electronic health records, there are strict laws governing what
can and cannot be done. For other data, regulations,
particularly in the US, are less forceful. However, there is
great public fear regarding the inappropriate use of personal
data, particularly through linking of data from multiple
sources. Managing privacy is effectively both a technical and
a sociological problem, which must be addressed jointly from
both perspectives to realize the promise of big data.
5) HUMAN COLLABORATION: In spite of the
tremendous advances made in computational analysis, there
remain many patterns that humans can easily detect but
computer algorithms have a hard time finding. Ideally,
analytics for Big Data will not be all computational rather it
will be designed explicitly to have a human in the loop. The
new sub-field of visual analytics is attempting to do this, at
least with respect to the modelling and analysis phase in the
pipeline. In today’s complex world, it often takes multiple
experts from different domains to really understand what is
going on. A Big Data analysis system must support input from
multiple human experts, and shared exploration of results.
These multiple experts may be separated in space and time
when it is too expensive to assemble an entire team together in
one room. The data system has to accept this distributed
expert input, and support their collaboration. [9]
6) VARIETY: Variety refers to the many sources and types of data, both structured and unstructured. We used to
store data from sources like spread sheets and databases. Now
data comes in the form of emails, photos, videos, monitoring
devices, PDFs, audio, etc. This variety of unstructured data
creates problems for storage, mining and analysing data.
The type of content is an essential fact that data analysts must know. This helps people who are associated with and
analyse the data to effectively use the data to their advantage
and thus uphold its importance.
7) VALIDITY: Like big data veracity, there is the issue of validity, meaning: is the data correct and accurate for the
intended use. Clearly valid data is key to making the right
decisions. Phil Francisco, VP of Product Management
from IBM spoke about IBM’s big data strategy and tools they
offer to help with data veracity and validity.
8) DATA INTEGRATION: The ability to combine data that is not similar in structure or source and to do so quickly
and at reasonable cost. With such variety, a related challenge
is how to manage and control data quality so that you can
meaningfully connect well understood data from your data
warehouse with data that is less well understood. [11]
9) SOLUTION COST: Since Big Data has opened up a
world of possible business improvements, there is a great deal
of experimentation and discovery taking place to determine
the patterns that matter and the insights that turn to value. To
ensure a positive ROI on a Big Data project, therefore, it is
crucial to reduce the cost of the solutions used to find that
value.[11]
Fig3 - Biggest Challenges for Success in Big Data and Analytics. Source: TM Forum, 2012. [11]
IV. SOLUTIONS
Big data analytics faces many issues such as data validity, volume, integrity and speed. To overcome these problems we have a mechanism named HADOOP.
A. HADOOP
Hadoop is a free Java-based programming
framework that supports the processing of large
data sets in a distributed computing environment. It
is part of the apache project sponsored by the
Apache Software Foundation.
The base Apache Hadoop framework is composed
of the following modules:
1) HADOOP COMMON: contains utilities that support the other Hadoop modules [4].
2) HADOOP DISTRIBUTED FILE SYSTEM (HDFS): a distributed file system that provides high-throughput access to application data.
3) HADOOP YARN: YARN is essentially a system for managing distributed applications. It consists of a central ResourceManager, which arbitrates all available cluster resources, and a NodeManager, which takes direction from the ResourceManager and is responsible for managing resources available on a single node [5].
4) HADOOP MAPREDUCE: a YARN-based system for parallel processing of large data sets.
V. HADOOP ARCHITECTURE
The layers found in the software architecture of a Hadoop stack are described below. At the bottom of the Hadoop
software stack is HDFS, a distributed file system in
which each file appears as a (very large) contiguous
and randomly addressable sequence of bytes. For
batch analytics, the middle layer of the stack is the
Hadoop Map Reduce system, which applies map
operations to the data in partitions of an HDFS file,
sorts and redistributes the results based on key
values in the output data, and then performs reduce
operations on the groups of output data items with
matching keys from the map phase of the job. For
applications just needing basic key-based record
management operations, the HBase store (layered
on top of HDFS) is available as a key-value layer in
the Hadoop stack. As indicated in the figure, the
contents of HBase can either be directly accessed
and manipulated by a client application or accessed
via Hadoop for analytical needs. Many users of the
Hadoop stack prefer the use of a declarative
language over the bare Map Reduce programming
model. High-level language compilers (Pig and
Hive) are thus the topmost layer in the Hadoop
software stack for such clients.[3]
Fig 4 - layer architecture of hadoop
a. MAP REDUCE
This is a programming model that is used for processing large amounts of data on a computer cluster. Any Map Reduce implementation consists of mainly two tasks:
 MAP - In the map task an input dataset is converted into a different set of key/value pairs, or tuples.
 REDUCE - In the reduce task several of the outputs of the map task are combined to form a reduced set of tuples.
It was originally developed by Google and built on well-known principles in parallel and distributed processing. Since then Map Reduce has been extensively adopted for analysing large data sets in its open source flavour Hadoop. [6]
MAP REDUCE ARCHITECTURE
Map Reduce is a simple programming model for processing huge data sets in parallel. Map Reduce has a master/slave architecture. The basic notion of Map Reduce is to divide a task into subtasks, handle the subtasks in parallel, and aggregate the results of the subtasks to form the final output. Programs written in Map Reduce are automatically parallelized: programmers do not need to be concerned about the implementation details of parallel processing. Instead, programmers write two functions: map and reduce. The map phase reads the input (in parallel) and distributes the data to the reducers. Auxiliary phases such as sorting, partitioning and combining values can also take place between the map and reduce phases. Map Reduce programs are generally used to process large files. The input and output for the map and reduce functions are expressed in the form of key-value pairs.
Fig 5: MapReduce Master/slave architecture
A Hadoop Map Reduce program also has a
component called the Driver. The driver is
responsible for initializing the job with its
configuration details, specifying the mapper and the
reducer classes for the job, informing the Hadoop
platform to execute the code on the specified input
file(s) and controlling the location where the output
files are placed.
Fig 6: MapReduce architecture
In most computation related to high data volumes, it
is observed that two main phases are commonly
used in most data processing components. Map
Reduce created an abstraction phases of Map
Reduce model called 'mappers' and 'reducers'
(Original idea was inspired from programming
languages such as Lisp). When it comes to
processing large data sets, for each logical record in
the input data it is often required to use a mapping
function to create intermediate key value pairs.
Then another phase called 'reduce' to be applied to
the data that shares the same key, to derive the
combined data appropriately.
Mapper
The mapper is applied to every input key-value pair
to generate an arbitrary number of intermediate
key-value pairs. The standard representation of this
is as follows:
Map(inKey,inValue)->list(intermediateKey,
intermediateValue)
The purpose of the map phase is to organize the
data in preparation for the processing done in the
reduce phase. The input to the map function is in
the form of key-value pairs, even though the input
to a MapReduce program is a file or file(s). By
default, the value is a data record and the key is
generally the offset of the data record from the
beginning of the data file.
The output consists of a collection of key-value
pairs which are input for the reduce function. The
content of the key-value pairs depends on the
specific implementation.
For example, a common initial program
implemented in MapReduce is to count words in a
file. The input to the mapper is each line of the file,
while the output from each mapper is a set of key
value pairs where one word is the key and the
number 1 is the value.
1) map: (k1, v1) → [(k2, v2)]
The input is the file_name and the file content, denoted by k1 and v1. Within the map function the user may emit any arbitrary key/value pair, as denoted in the list [(k2, v2)].
To optimize the processing capacity of the map phase, MapReduce can run several identical mappers in parallel. Since every mapper is the same, they produce the same result as running one map function.
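As an illustrative sketch only (not code from this paper), a word-count mapper in the Hadoop Streaming style could be written in Python as below; the script name is an assumption:

    # wordcount_mapper.py - reads text lines from standard input and
    # emits one "word<TAB>1" intermediate key-value pair per word.
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            print(word + "\t1")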
2) Reducer
The reducer is applied to all values associated with the same
intermediate key to generate output key-value pairs.
reduce(intermediateKey, list(intermediateValue)) -> list(outKey, outValue)
Each reduce function processes the intermediate values for a
particular key generated by the map function and generates
the output. Essentially there exists a one-one mapping
between keys and reducers. Several reducers can run in
parallel, since they are independent of one another. The
number of reducers is decided by the user. By default, the
number of reducers is 1.
Since we have an intermediate 'group by' operation, the input to the reducer function is a key-value pair where the key k2 is the one emitted from the mapper, together with a list of values [v2] that share the same key. [6]
reduce: (k2, [v2]) → [(k3, v3)]
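A matching Hadoop Streaming-style reducer sketch in Python (again illustrative only, assuming the framework has already sorted the mapper output by key) would be:

    # wordcount_reducer.py - sums the counts for each word emitted by the mapper.
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(current_word + "\t" + str(current_count))
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(current_word + "\t" + str(current_count))

The two scripts would typically be wired together through the Hadoop Streaming jar with its -mapper and -reducer options; the exact invocation depends on the cluster setup.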
b. HDFS
HDFS is a block-structured distributed file system that holds a large amount of Big Data. In HDFS the data is stored in blocks that are known as chunks. HDFS is a client-server architecture comprising a NameNode and many DataNodes. The NameNode stores the metadata of the file system and keeps track of the state of the DataNodes. The NameNode is also responsible for file system operations etc. When the NameNode fails, Hadoop doesn't support automatic recovery, but the configuration of a secondary node is possible. HDFS is based on the principle that "Moving Computation is cheaper than Moving Data".
VI. FUTURE PLANS
Business data is analysed for many purposes: a
company may perform system log analytics and
social media analytics for risk assessment, customer
retention, brand management, and so on. Typically,
such varied tasks have been handled by separate
systems, even if each system includes common
steps of information extraction, data cleaning,
relational-like processing, statistical and predictive
modelling, and appropriate exploration and
visualization tools [8]. Enterprise data security is also a challenging task to implement and calls for strong support in terms of security policy formulation and mechanisms. We plan to take up collection, pre-treatment, integration, map reduce and prediction using machine learning techniques. [7]
VII. CONCLUSION
In this paper we elaborate the challenges which businesses face in managing their large amount of data, which is growing exponentially day by day, for example the transactional data of the banking sector. We need to manage this big data, which is now treated as a challenge for business. Big data is a problem for business when it comes to taking efficient and effective decisions. There are a number of challenges and issues related to big data like time, speed, volume, variety and the cost of data. This paper focuses on Big Data processing problems. These technical challenges must be addressed for efficient and fast processing of Big Data. The challenges include not just the obvious issues of scale, but also heterogeneity, lack of structure, error-handling, privacy, timeliness, provenance, and visualization, at all stages of the analysis pipeline from data acquisition to result interpretation. [9] These technical challenges are common across many domains. Hadoop is an open source framework which is used to manage big data. Hadoop was developed from Google's MapReduce, which is a software framework where an application is broken down into various parts. The current Apache Hadoop ecosystem consists of the Hadoop kernel, MapReduce, HDFS and a number of components like Apache Hive, HBase and ZooKeeper. MapReduce is a programming model and an associated implementation for processing and generating large data sets. [1] The MapReduce programming model has been successfully used at Google for many different purposes. [12]
REFERENCES
[1] Shilpa Manjit Kaur, "BIG Data and Methodology - A Review", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 10, October 2013, ISSN: 2277 128X.
[2] Joseph O. Chan, "An Architecture for Big Data Analytics", Communications of the IIMA, Volume 13, Issue 2, 2013.
[3] Puneet Singh Duggal, Sanchita Paul, "Big Data Analysis: Challenges and Solutions", International Conference on Cloud, Big Data and Trust, Nov 13-15, 2013, RGPV.
[4] "Welcome to Apache Hadoop!", Hadoop.apache.org. Retrieved 2015-09-20.
[5] Murthy, Arun (2012-08-15), "Apache Hadoop YARN - Concepts and Applications", Hortonworks.com. Retrieved 2014-09-30.
[6] Dr. Siddaraju, Sowmya C L, Rashmi K, Rahul M, "Efficient Analysis of Big Data Using Map Reduce Framework", International Journal of Recent Development in Engineering and Technology (IJRDET), ISSN 2347-6435 (Online), Volume 2, Issue 6, June 2014.
[7] Bhawna Gupta and Dr. Kiran Jyoti, "Big Data Analytics with Hadoop to analyse Targeted Attacks on Enterprise Data", International Journal of Computer Science and Information Technologies (IJCSIT), ISSN 0975-9646, Vol. 5(3), 2014, pp. 3867-3870.
[8] "Challenges and Opportunities with Big Data", a community white paper developed by leading researchers across the United States.
[9] Harshawardhan S. Bhosale, Prof. Devendra P. Gadekar, "A Review Paper on Big Data and Hadoop", International Journal of Scientific and Research Publications, Volume 4, Issue 10, October 2014, ISSN 2250-3153.
[10] Renato P. dos Santos, "Big Data as a Mediator in Science Teaching: A Proposal".
[11] Webinar presentation, "Getting Value from Big Data", Doug Laney, Gartner, 2011.
[12] Jeffrey Dean and Sanjay Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters", Google, Inc.
4th International Conference on System Modeling & Advancement in Research Trends (SMART)
College of Computing Sciences and Information Technology (CCSIT) ,Teerthanker Mahaveer University , Moradabad
[2015]
5G Technology
Dhanushri Varshney1, Mohan Vishal Gupata2
1,2 CCSIT, TMU, Bagadpur, Moradabad, U.P. 244001
1 [email protected]
2 [email protected]
Abstract— 5G Technology stands for fifth generation mobile technology. 5G technologies will change the way most high-bandwidth users access their phones. Users have never experienced such a high-value technology before. This paper also focuses on the preceding generations of mobile communication along with fifth generation technology. The paper throws light on the network architecture of fifth generation technology. In the fifth generation, research is being made on the development of the World Wide Wireless Web (WWWW), Dynamic Adhoc Wireless Networks (DAWN) and the Real Wireless World. The fifth generation focuses on (Voice over IP) VoIP-enabled devices so that the user will experience a high level of call volume and data transmission. Fifth generation technology will fulfill all the requirements of customers who always want advanced features in cellular phones. Fifth generation technology will offer services like Product Engineering, Documentation, supporting electronic transactions (e-Payments, e-transactions) etc. The 5G technologies include all types of advanced features which make 5G technology the most dominant technology in the near future.
Keywords— 5G, WWWW, DAWN, Comparison of all Generations, What is 5G technology?
I. INTRODUCTION
Wireless communication started in the early 1970s. The fifth generation technologies offer various new advanced features which make it most powerful and in huge demand in the future. Nowadays different wireless and mobile technologies are present, such as third generation mobile networks (UMTS - Universal Mobile Telecommunication System, cdma2000), LTE (Long Term Evolution), WiFi (IEEE 802.11 wireless networks), WiMAX (IEEE 802.16 wireless and mobile networks), as well as sensor networks or personal area networks (e.g. Bluetooth, ZigBee). The reliable way of communication over mobile phones includes heterogeneous voice and data services such as e-mails, voice and text messages, and the most innovative one is the internet, which works on wireless technology. Today mobile phones are being used for 'ten tasks in one minute concurrently': we play games, download files, chat with friends and take pictures while hanging out with dear ones. Teenagers get mobile devices at an early stage of life, so they are highly advanced in technology. Mobile terminals include a variety of interfaces like GSM which are based on circuit switching. All wireless and mobile networks implement the all-IP principle, which means all data and signaling will be transferred via IP (Internet Protocol) on the network layer. The fifth generation wireless mobile multimedia internet networks can be completely wireless communication without limitation, which makes a perfect wireless real world - the World Wide Wireless Web (WWWW). The fifth generation is based on 4G technologies. The 5th wireless mobile internet networks are the real wireless world, which shall be supported by LAS-CDMA (Large Area Synchronized Code-Division Multiple Access), OFDM (Orthogonal Frequency-Division Multiplexing), MC-CDMA (Multi-Carrier Code Division Multiple Access), UWB (Ultra-wideband), Network-LMDS (Local Multipoint Distribution Service), and IPv6. The fifth generation should be a more intelligent technology that interconnects the entire world without limits. The world of universal, uninterrupted access to information, entertainment and communication will open a new dimension to our lives and change our life style significantly.
II. EVOLUTION
Mobile communication has become more popular in the last few years due to the fast revolution in mobile technology. This revolution is due to a very high increase in telecom customers. This revolution runs from 1G (the first generation), 2G (the second generation), 3G (the third generation) and 4G (the fourth generation) to 5G (the fifth generation).
A. First Generation (1G):
1G mobile networks used an analog system for the communication of speech services. Mobile telecommunication in 1G was first introduced in the 1980s and continued till 1990. Analog Mobile Phone Systems (AMPS) was first established in the USA in mobile networks. It has simple voice-only cellular telephone parameters. The first generation of analog mobile phones has a speed of up to 2.4 Kbps. It allows end users to make voice calls only within one country. 1G had limited advantages but major drawbacks such as poor voice quality, poor handoff reliability, short battery life, large size of phones, no security mechanism and many more.
Fig.1. 1G Mobile Phones
B. Second Generation (2G):
2G wireless technologies are based on GSM and use digital signals. The main difference between 1G and 2G is that the former uses analog signals whereas the latter uses digital signals. The 2G telecom networks were launched on the GSM standard in Finland by Radiolinja in 1991. This technology holds efficient security mechanisms for the sender and receiver. It enabled various high-level services such as messages, images, text SMS etc. 2G mobile network technologies are either time division multiple access (TDMA) or code division multiple access (CDMA). Both CDMA and TDMA have different platforms and access mechanisms. These networks have data speeds up to 64 Kbps. 2G communications is formerly linked with the isolated system for GSM services. 2G invented the concept of the short message service (SMS), which is a very cheap and fast way to talk with another person at any time. It proved to be beneficial for end users and mobile operators at the same time. There exist some drawbacks of 2G, i.e. it demands strong digital signals to assist connections of mobile phones and it is unable to hold complex data such as videos.
Fig.2. 2G Generation
C. Third Generation (3G):
3G technology refers to the 3rd cellular generation established in the 2000s. This network has the highest speed as compared with 1G and 2G, i.e. 144 Kbps - 2 Mbps. It is also known as International Mobile Telecommunications-2000. It is able to transfer packet-switched data at higher and better bandwidth. It offers technically advanced services to end users. There is extraordinary clarity in voice call services. Some advanced features of 3G technology have been found, as it provides faster communication, large broadband capabilities, video conferencing, 3D gaming, high speed web, more security methods and many more. There also exist some drawbacks like expensive fees for 3G licence services, the big size of mobile phones, its expensive nature, higher bandwidth requirements etc.
Fig.3. 3G Generation
D. Fourth Generation (4G):
The next generation of mobile technology gives higher data access rates and can enlarge multimedia processing services. 4G offers a downloading speed of 100 Mbps. 4G provides the same features as 3G and additional services like multimedia newspapers, watching TV programs with more clarity and sending data much faster than previous generations. LTE (Long Term Evolution) is considered a 4G technology. 4G is being developed to accommodate the QoS and rate requirements set by
forthcoming applications like wireless broadband access, Multimedia Messaging Service (MMS), video chat, mobile TV, HDTV content, Digital Video Broadcasting (DVB), minimal services like voice and data, and other services that utilize bandwidth.
Fig.4. 4G Generation
E. Fifth Generation (5G):
5G Technology stands for 5th Generation Mobile technology. 5G mobile technology has changed the means to use cell phones within very high bandwidth. Users have never experienced such a high-value technology before. Nowadays mobile users have much awareness of cell phone (mobile) technology. The 5G technologies include all types of advanced features which make 5G mobile technology the most powerful and in huge demand in the near future. A user can also hook their 5G technology cell phone up with their laptop to get broadband internet access. 5G technology includes camera, MP3 recording, video player, large phone memory, fast dialing speed, audio player and much more you never imagined. For children, rocking fun Bluetooth technology and Piconets have come into the market.
Fig.5. 5G Generation
III. COMPARISON OF ALL GENERATIONS OF TECHNOLOGIES
Feature | 1G | 2G | 3G | 4G | 5G
Start/Deployment | 1970-1980 | 1990-2004 | 2004-2010 | Now | Soon
Data Bandwidth | 2 Kbps | 64 Kbps | 2 Mbps | 1 Gbps | Higher than 1 Gbps
Technology | Analog cellular technology | Digital cellular technology | CDMA 2000 (1xRTT, EVDO), UMTS, EDGE | WiMax, LTE, Wi-Fi | WWWW (coming soon)
Service | Mobile telephony (voice) | Digital voice, SMS, higher capacity packetized data | Integrated high quality audio, video and data | Dynamic information access, wearable devices | Dynamic information access, wearable devices with AI capabilities
Multiplexing | FDMA | TDMA, CDMA | CDMA | CDMA | CDMA
Switching | Circuit | Circuit, Packet | Packet | All Packet | All Packet
Core Network | PSTN | PSTN | Packet N/W | Internet | Internet
IV. WHAT IS 5G TECHNOLOGY ?
This technology is the fifth generation of wireless mobile networks, which began in the late 2010s. It has almost no limitations, which makes it complete wireless communication. Mobile users have never had experience of such a highly advanced technology. An end user can also connect their 5G mobile phone with their desktop to have an internet connection. It totally supports the World Wide Wireless Web (WWWW). This communication technology merges all the enhanced benefits of mobile phones like dialing speed, MP3 recording, cloud storage, HD downloading in an instant of seconds and much more that you had never imagined.
V. WORKING CONCEPT OF 5G
As stated earlier, 5G will be completely user centric, i.e. nothing is hidden from the user. It will have new error prevention schemes that can be installed through the internet anytime, and modulation methods and software defined radios. 5G will be a collaboration of networks, and an individual network will handle user mobility. This network will be based on Open Wireless Architectures as it has a Physical Access Control Layer, i.e. an OSI layer.
VI. KEY CONCEPTS OF 5G TECHNOLOGY
 No zone issues in this network technology.
 No limited access; the user can access unlimited data.
 Several technologies such as 2G, 2.5G, 3G, and 4G can be connected simultaneously along with the 5G.
 All the technologies can use the same frequency spectrum in a very efficient manner. This is also called smart radio.
 New features such as online TV, newspapers and researches with advanced resolution.
 We can send data much faster than that of the previous generations.
VII. FEATURES OF 5G TECHNOLOGY
The following are the features of 5G that make it an extraordinary phone:
 5G technology offers high resolution for crazy cell phone users and bi-directional large bandwidth shaping.
 The advanced billing interfaces of 5G technology make it more attractive and effective.
 5G technology also provides subscriber supervision tools for fast action.
 The high quality services of 5G technology are based on policy to avoid errors.
 5G technology provides large broadcasting of data in Gigabits, supporting almost 65,000 connections.
 5G technology offers a transporter class gateway with unparalleled consistency.
 The traffic statistics by 5G technology make it more accurate.
 Through remote management offered by 5G technology a user can get a better and faster solution.
 Remote diagnostics is also a great feature of 5G technology.
 The 5G technology provides up to 25 Mbps connectivity speed.
 The 5G technology also supports virtual private networks.
 The new 5G technology will take all delivery services out of business prospect.
 The uploading and downloading speed of 5G technology touches the peak.
 HD quality picture.
VIII. FUTURE SCOPE OF 5G
The 5G technology is designed as an open platform on different layers, from the physical layer up to the application layer. Presently, the current work is in the modules that shall offer the best operating system and lowest cost for a specified service, using one or more wireless technologies at the same time from the 5G mobile. A new revolution of 5G technology is about to begin, because 5G technology is going to give tough competition to normal computers and laptops, whose marketplace value will be affected. The new coming 5G technology is available in the market at inexpensive rates, with high peak expectations and much more reliability than its foregoing technologies. 5G network technology will release a novel age in mobile communication. The 5G technology will have access to different wireless technologies at the same time, and the terminal should be able to merge different flows from different technologies. 5G technology offers high resolution for the passionate mobile phone consumer. We can watch an HD TV channel on our mobile phones and laptops without any disturbance. The 5G mobile phone will be a tablet PC. The most effective and attractive feature of 5G will be its advanced billing interfaces.
IX. CONCLUSION
In this paper, we conclude that the 5G network is very fast and reliable. The fifth generation is based on 4G technologies. The 5th wireless mobile internet networks are the real wireless world, which shall be supported by LAS-CDMA (Large Area Synchronized Code-Division Multiple Access), OFDM (Orthogonal Frequency-Division Multiplexing), MC-CDMA (Multi-Carrier Code Division Multiple Access), UWB (Ultra-wideband), Network-LMDS (Local Multipoint Distribution Service), and IPv6. Fifth generation technologies offer tremendous data capabilities, unrestricted call volumes and infinite data broadcast within the latest mobile operating system. The fifth generation should make an important difference and add more services and benefits to the world over 4G. The fifth generation should be a more intelligent technology that interconnects the entire world without limits. This generation is expected to be released around 2020. The world of universal, uninterrupted access to information, entertainment and communication will open a new dimension to our lives and change our life style significantly.