International Journal of Recent Research in Science,
Engineering and Technology
Vol. 1, Issue 1, April 2015
CROSSTAB FOR CLIENT-SIDE
JAVASCRIPT USING AUTOMATED
FAULT LOCALIZER
V. Elamathi1, G. Ilanchezhiapandian2
PG Scholar, Ganadhipathy Tulsi's Jain Engineering College, Vellore, Tamil Nadu, India1
Faculty of Computer Science Engineering, Ganadhipathy Tulsi's Jain Engineering College, Vellore, Tamil Nadu, India2
ABSTRACT- Fault localization is the problem of determining where in the source code changes need to be made in order to fix detected failures. A fault is the cause of a failure, an internal error commonly referred to as a "bug". Fault localization techniques isolate the root cause of a fault based on the dynamic execution of the application. To localize faults effectively in web applications written in PHP, the Tarantula, Ochiai, and Jaccard fault-localization algorithms have been enhanced with an extended domain for conditional and function-call statements and with a source mapping, but this enhancement is not applicable to JavaScript. Here, we examine the localization of output DOM-related JavaScript errors using the open-source tool AUTOFLOX. The proposed system performs testing to gain confidence in software dependability. Isolating errors manually takes considerable time and effort, so we implement an automated technique.
KEYWORDS: Fault localization, Debugging, Web application, Source mapping, Test generation.
I. INTRODUCTION
A fault is an incorrect step, process, or data definition in a computer program that causes the program to behave in an unexpected manner. Demand for automatic fault localization has led to the development and improvement of many techniques over recent years. Although these techniques share similar goals, they can be quite distinct from one another and often stem from ideas that originated in several different disciplines. Fault avoidance is the use of techniques and procedures that aim to prevent the introduction of faults during any phase of the safety lifecycle of a safety-related system. Fault tolerance is the ability of a functional unit to continue performing a required function in the presence of faults or errors.
Software testing and debugging occupy more than 50% of software development effort. Software testing attempts to discover deviations of a program from its expected behavior. Once such deviations are found, debugging corrects them. Katz and Anderson describe the debugging process of a program as a chain of three tasks: (1) finding the potential location of the bug, (2) fixing it, and (3) testing the program. This chain repeats until the program passes its tests. DeMillo coined the term "fault localization" for the first step. Clearly, precise and efficient fault localization can reduce the number of trial-and-error cycles in debugging and hence alleviate the debugging burden.
A good automated fault localization technique can save time and reduce the cost of software development and maintenance. Moreover, since most companies are under market pressure, they tend to release software with known and unknown bugs. By facilitating timely debugging, good automated fault localization techniques can reduce the known bugs and improve the dependability of software products. The goal of automated fault localization techniques is to provide information that helps developers discover the cause of program failure. Regardless of the effort spent on developing a computer program, it may still contain bugs. In fact, the larger and more complex a program, the higher the likelihood that it contains bugs. It is always challenging for programmers to effectively and efficiently remove bugs while not inadvertently introducing new ones. Furthermore, to debug, programmers must first be able to identify precisely where the bugs are, which is known as fault localization, and then find a way to fix them, which is known as fault fixing. Software fault localization is one of the most expensive activities in program debugging. It can be divided into two major parts. The first is to use a technique to identify suspicious code that may contain program bugs. The second is for programmers to actually examine the identified code and decide whether it in fact contains a bug. No article, regardless of breadth or depth, can hope to cover all such techniques. Consequently, while trying to cover as much ground as possible, we focus primarily on state-of-the-art techniques and discuss some of the key issues and concerns that pertain to fault localization.
II. RELATED WORK
A literature survey has been done in the area of software engineering and its testing process. The research done by various authors has been studied, and some of it is discussed in the following section.
A. An Evaluation of Similarity Coefficients for Software Fault Localization
The coefficients studied are taken from the systems diagnosis / automated debugging tools Pinpoint, Tarantula, and AMPLE, and from the molecular biology domain (the Ochiai coefficient). These coefficients are evaluated on the Siemens suite of benchmark faults, and their efficacy is assessed in terms of the position of the actual fault in the probability ranking of fault candidates generated by the diagnosis technique. Experiments indicate that the Ochiai coefficient consistently outperforms the coefficients currently used by the tools mentioned [1]. In terms of the amount of code that needs to be inspected, the coefficient improves 5% on average over the next best technique, and up to 30% in specific cases. The purpose of diagnosis is to locate faults; diagnosis applied to computer programs is known as debugging. The automated methods studied here have wider applicability, but in the context of the paper they fall into the category of automated debugging techniques.
The influence of different similarity coefficients was studied for fault diagnosis using program hit spectra at the block level. Specifically for the Tarantula system, experiments suggest that its diagnostic capabilities can be improved by switching to the Ochiai coefficient. Tarantula uses statement hit spectra, which give more detailed information than block hit spectra, but this only leads to a different analysis in the case of certain C constructs. The main causes of poor rankings are a faulty block that is always exercised in both passed and failed runs (for instance, the main function), deleted code, and dependent blocks.
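To make the spectrum-based setting concrete, the following TypeScript sketch (our illustration, not code from the cited work) shows how block hit spectra and test outcomes reduce to the four per-block counts from which similarity coefficients such as Ochiai are computed.

```typescript
// A hit spectrum records, per test run, which code blocks were executed.
// The four counts computed here are the shared basis of Tarantula,
// Jaccard, and Ochiai. All types and data are illustrative.

type Spectrum = boolean[];           // spectrum[b] = true if block b was hit

interface TestRun {
  spectrum: Spectrum;
  passed: boolean;                   // outcome of the run
}

function blockCounts(runs: TestRun[], block: number) {
  let failedHit = 0, passedHit = 0, failedMiss = 0, passedMiss = 0;
  for (const run of runs) {
    if (run.spectrum[block]) {
      if (run.passed) passedHit++; else failedHit++;   // block exercised
    } else {
      if (run.passed) passedMiss++; else failedMiss++; // block not exercised
    }
  }
  return { failedHit, passedHit, failedMiss, passedMiss };
}
```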
B. Framework for Automated Testing of JavaScript Web Applications
A framework was introduced for the automated testing of JavaScript web applications. The framework takes into account the idiosyncrasies of JavaScript, such as its event-driven execution model and its interaction with the Document Object Model (DOM) of web pages. The framework is parameterized by (i) an execution unit to model the browser and server, (ii) an input generator to generate new input sequences, and (iii) a prioritizer to guide the exploration of the application's state space. By instantiating these parameters appropriately, various forms of feedback-directed random testing can be performed [10]. Here, the idea is that once a test input has been generated, the system executes the application on that input and observes the effects, which can then be used to guide the generation of additional test inputs. The framework was implemented in a tool called Artemis, and several feedback-directed random test generation algorithms were obtained by instantiating the framework with different prioritization functions and input generators that employ simple feedback mechanisms. The basic algorithm, events, achieves surprisingly good coverage (69% on average) if enough tests are generated.
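As an illustration of that loop, here is a minimal feedback-directed sketch in TypeScript; the interfaces and the best-first worklist policy are placeholders of our own devising, not Artemis's actual API.

```typescript
// Sketch of a feedback-directed test generation loop: run an input,
// observe feedback (coverage), and let a prioritization function choose
// what to try next. All interfaces are illustrative, not Artemis's API.

interface TestInput { events: string[] }            // a sequence of UI events
interface Feedback  { coveredBranches: Set<string> }

interface ExecutionUnit {                           // models browser + server
  run(input: TestInput): Feedback;
}
interface InputGenerator {                          // proposes new inputs
  generate(input: TestInput, fb: Feedback): TestInput[];
}

function testLoop(exec: ExecutionUnit, gen: InputGenerator,
                  priority: (i: TestInput) => number,
                  budget: number): Set<string> {
  const worklist: TestInput[] = [{ events: [] }];   // start from the empty test
  const covered = new Set<string>();
  for (let i = 0; i < budget && worklist.length > 0; i++) {
    worklist.sort((a, b) => priority(b) - priority(a)); // best-first order
    const input = worklist.shift()!;
    const fb = exec.run(input);
    fb.coveredBranches.forEach(br => covered.add(br));
    worklist.push(...gen.generate(input, fb));      // feedback-directed growth
  }
  return covered;                                   // branches covered overall
}
```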
The first main ingredient of the combined approach is the Tarantula algorithm by Jones et al., which predicts statements that are likely to be responsible for failures. It does so by computing, for each statement, the percentages of passing and failing tests that execute that statement. From this, a suspiciousness rating is computed for each executed statement. Programmers are encouraged to examine the statements in order of decreasing suspiciousness.
Dynamic test generation has proved to be quite effective in experiments with the Siemens suite, consisting of versions of small C programs into which artificial faults have been seeded [5]. A variation on the basic Tarantula approach considers an enhanced domain for conditional statements, which enables errors due to missing branches in conditionals to be pinpointed more accurately. The second main ingredient of the approach is the use of an output mapping from statements in the program to the fragments of output they generate. This mapping, when combined with the report of an HTML validator, which indicates the components of the program output that are erroneous, provides a supplemental source of information about possible fault locations and is used to adjust the suspiciousness ratings provided by Tarantula.
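A rough sketch of how the output mapping can adjust the ratings follows; the data structures and the 0.5 boost weight are illustrative assumptions of ours, not the formula used in the cited work.

```typescript
// Sketch: statements whose emitted HTML fragments are flagged as erroneous
// by the validator get their Tarantula rating boosted. The 0.5 weight is
// an illustrative choice, not a value from the cited approach.

function adjustSuspiciousness(
  base: Map<number, number>,         // statement line -> Tarantula rating
  outputMap: Map<number, string[]>,  // statement line -> emitted fragments
  erroneous: Set<string>             // fragments flagged by the HTML validator
): Map<number, number> {
  const adjusted = new Map(base);
  for (const [line, fragments] of outputMap) {
    if (fragments.some(f => erroneous.has(f))) {
      const old = adjusted.get(line) ?? 0;
      // Move the rating toward 1 for statements that produced bad output.
      adjusted.set(line, Math.min(1, old + 0.5 * (1 - old)));
    }
  }
  return adjusted;
}
```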
C. Scalable Dynamic Test Generation
The search strategies have been implemented in CREST, a prototype test generation tool for C. CREST uses CIL to instrument C source files and to extract the control-flow graphs used in solving path constraints. The efficacy of the search strategies was evaluated experimentally by running CREST on replace, the largest program in the Siemens benchmark suite, and on two popular open-source applications, grep 2.2 (15K lines) and Vim 5.7 (150K lines). For each benchmark, the performance of the different search strategies was compared over a fixed number of iterations, i.e. runs of the instrumented program. Iterations are a convenient measure of the testing budget because, for larger programs, the cost of the strategies themselves is dominated by the cost of concrete and symbolic execution. Experiments were run on 2 GHz Core 2 Duo servers with 2 GB of RAM, running Debian GNU/Linux. All unconstrained variables are initially set to zero. The technique is experimentally validated on an OO system composed of over a thousand statements. Experiments will be carried out to evaluate whether classical test adequacy criteria can efficiently isolate DBBs.
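The loop below sketches the general shape of such a search strategy, assuming a concolic setup where a solver yields an input that flips a chosen branch; the program and solver here are trivial stand-ins of ours, not CREST's implementation.

```typescript
// Abstract sketch of a concolic search strategy: each run yields a path
// (a list of branch decisions), a strategy picks one branch to negate,
// and a solver produces an input driving execution down the other side.
// runProgram and solveForFlip are trivial stand-ins, not CREST internals.

interface Branch { id: string; taken: boolean }
type Path = Branch[];
type Strategy = (path: Path) => number;    // index of the branch to negate

function runProgram(input: number[]): Path {         // stand-in program
  return input.map((v, i) => ({ id: `b${i}`, taken: v > 0 }));
}

function solveForFlip(input: number[], idx: number): number[] | null {
  const next = input.slice();                        // stand-in "solver":
  next[idx] = next[idx] > 0 ? 0 : 1;                 // toggle one input
  return next;
}

const randomStrategy: Strategy = path =>
  Math.floor(Math.random() * path.length);           // uniform random branch

function search(strategy: Strategy, iterations: number): number[] {
  let input = [0, 0, 0];                 // unconstrained vars start at zero
  for (let i = 0; i < iterations; i++) {
    const path = runProgram(input);
    if (path.length === 0) break;
    const next = solveForFlip(input, strategy(path));
    if (next !== null) input = next;     // skip infeasible flips
  }
  return input;
}
```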
III. EXISTING SYSTEM
Jones et al. presented Tarantula, a fault-localization technique that associates with each statement a suspiciousness rating indicating the likelihood that the statement contributes to a failure. The suspiciousness rating for a statement at line l is a number between 0 and 1, calculated with the following similarity coefficient:

$$S_{tar}(l) = \frac{\mathit{Failed}(l)/\mathit{TotalFailed}}{\mathit{Passed}(l)/\mathit{TotalPassed} + \mathit{Failed}(l)/\mathit{TotalFailed}}$$

where Passed(l) is the number of passing executions that execute statement l, Failed(l) is the number of failing executions that execute statement l, TotalPassed is the total number of passing test cases, and TotalFailed is the total number of failing test cases. In the field of data clustering, other similarity coefficients have been proposed that can also be used to calculate suspiciousness ratings. These include the Jaccard coefficient used in the Pinpoint program,

$$S_{jac}(l) = \frac{\mathit{Failed}(l)}{\mathit{TotalFailed} + \mathit{Passed}(l)}$$

and the Ochiai coefficient used in the molecular biology domain,

$$S_{och}(l) = \frac{\mathit{Failed}(l)}{\sqrt{\mathit{TotalFailed} \cdot (\mathit{Failed}(l) + \mathit{Passed}(l))}}$$
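The three coefficients translate directly into code. A minimal TypeScript sketch, assuming the per-statement counts have already been collected from instrumented runs and that the suite contains at least one passing and one failing test:

```typescript
// Suspiciousness coefficients from the formulas above, computed for one
// statement l. Counts come from coverage instrumentation; statements that
// are never executed receive a rating of 0.

interface Counts {
  failed: number;       // failing executions that cover statement l
  passed: number;       // passing executions that cover statement l
  totalFailed: number;  // failing test cases in the whole suite
  totalPassed: number;  // passing test cases in the whole suite
}

function tarantula(c: Counts): number {
  const f = c.failed / c.totalFailed;
  const p = c.passed / c.totalPassed;
  return f + p === 0 ? 0 : f / (p + f); // 1 = covered only by failing runs
}

function jaccard(c: Counts): number {
  return c.failed / (c.totalFailed + c.passed);
}

function ochiai(c: Counts): number {
  const denom = Math.sqrt(c.totalFailed * (c.failed + c.passed));
  return denom === 0 ? 0 : c.failed / denom;
}
```

Statements are then examined in decreasing order of their rating.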
IV. PROPOSED SYSTEM
Having summarized the state of the art in fault localization, we now present the proposed methodology. We first describe the crosstab-based technique (CBT) in detail, followed by an example that walks the reader through how CBT is applied. We then present the primary criterion used to evaluate the effectiveness of the technique, which also serves as a means of comparing various fault-localization techniques with one another. Consider a statement ω in the program being debugged; the following notation is defined to facilitate the discussion of CBT. Crosstab analysis is used to study the relationship between two or more categorical variables. A crosstab is constructed for each statement such that it has two column-wise categories, covered and not covered, and two row-wise categories, successful and failed.
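Below is a minimal sketch of how such a crosstab and a chi-square statistic over it can be computed for one statement; the suspiciousness function is deliberately simplified here (chi-square gated by the direction of association with failure) and is not the exact statistic of the full crosstab-based technique.

```typescript
// Sketch of the per-statement crosstab: a 2x2 table of
// {covered, not covered} x {failed, successful} executions, its chi-square
// statistic, and a simplified signed suspiciousness. Assumes the suite has
// at least one failing and one passing run.

interface Crosstab {
  coveredFailed: number;    // failing runs that cover the statement
  coveredPassed: number;    // passing runs that cover the statement
  uncoveredFailed: number;  // failing runs that do not cover it
  uncoveredPassed: number;  // passing runs that do not cover it
}

function chiSquare(t: Crosstab): number {
  const obs = [t.coveredFailed, t.coveredPassed,
               t.uncoveredFailed, t.uncoveredPassed];
  const n = obs.reduce((a, b) => a + b, 0);
  const failed = t.coveredFailed + t.uncoveredFailed;
  const passed = n - failed;
  const covered = t.coveredFailed + t.coveredPassed;
  const uncovered = n - covered;
  // Expected cell counts if coverage were independent of the outcome.
  const exp = [covered * failed / n, covered * passed / n,
               uncovered * failed / n, uncovered * passed / n];
  return obs.reduce((sum, o, i) =>
    exp[i] > 0 ? sum + (o - exp[i]) ** 2 / exp[i] : sum, 0);
}

function suspiciousness(t: Crosstab): number {
  const failRate = t.coveredFailed / (t.coveredFailed + t.uncoveredFailed);
  const passRate = t.coveredPassed / (t.coveredPassed + t.uncoveredPassed);
  // Only statements positively associated with failure are suspects.
  return failRate > passRate ? chiSquare(t) : 0;
}
```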
A. PERFORMANCE EVALUATION
The crosstab method shows better performance compared to the existing system; the analysis is carried out on a PHP web application and the results are as follows.
TABLE 1: Statement Coverage Comparison with Various Algorithms
The table considers various web pages in the fault localization process, using the different algorithms to determine statement coverage. The total number of lines in each page is given, and line counts are plotted against the coverage of each algorithm.
Fault Localization Algorithm | Executed Statements (%) [LOC] | Localized Fault (%) [LOC]
Tarantula                    | 40%                           |
Ochiai                       | 55%                           | 75%
Crosstab                     | 70%                           | 90%
In general, as the number of statements examined increases, the statement coverage of the application also increases. Comparing the three algorithms, the crosstab method has the highest coverage rate: the total number of statements examined is highest, so the rate of locating faults also increases. Ochiai has the next highest coverage, as represented in the graph, and Tarantula has the minimum fault-finding rate, examining the fewest statements.
TABLE 2: Comparison of the Fault Localization Process with Various Algorithms
Table 2 lists, for each range of executed statements, the localized fault rating assigned by each algorithm. The localized fault rating of the executed statements is higher than in the existing system.
Page | Lines [LOC] | Tarantula | Ochiai | Crosstab
1    | 1-25        | 0         | 0      | 0
1    | 25-37       | 0.54      | 0.69   | 0.81
1    | 38-41       | 0.54      | 0.69   | 0.81
2    | 1-16        | 0.256     | 0.38   | 0.5
2    | 17-28       | 0.54      | 0.69   | 0.81
2    | 29-52       | 0.54      | 0.69   | 0.81
Fig 4.1 Comparison of Fault Localization Process for various Algorithms
Fig 4.1 shows the percentage of faults localized in the executed statements for the various algorithms. On average, the proposed approach localizes 30% more faults than the existing algorithms.
V. RESULTS AND DISCUSSION
FIG 5.1 Login Page
Figure 5.1 shows the login page of the website. The application uses three logins. The first is the administrator login, used to manage the application's information. The next is the user login, used to search for relevant information. Finally, the student login is used to post comments.
Fig 5.2 Execution result for dynamic web application
Figure 5.2 shows the successful and failed executions of a particular page of the web application, together with the error rate calculated using the crosstab. In this result, the execution of lines 1-25 succeeds, so the error rate is zero; the executions of lines 25-37 and 38-41 fail, with an error rate of 0.81.
FIG 5.3 Total Result
Figure 5.3 shows the total result for the particular web site: the number of lines present and the percentage of faults calculated using the Tarantula, Ochiai, and crosstab algorithms.
VI. CONCLUSION AND FUTURE WORKS
A better solution is to use a systematic and statistically well-defined method to automatically identify suspicious code that should be examined for possible fault locations. We presented a crosstab-based statistical method that uses the coverage information of each executable statement and the execution result (success or failure) of each test case. A crosstab is constructed for each statement, and a statistic is computed to measure its suspiciousness. Statements with higher suspiciousness are more likely to contain bugs and should be examined before those with lower suspiciousness. The results suggest that the crosstab-based method is effective in fault localization and performs better than other methods such as Tarantula and Ochiai, in terms of the smaller percentage of executable statements that must be examined before the first statement containing the fault is reached. The difference in efficiency (computational time) between these methods is very small. As future work, we propose applying a modified radial basis function neural network to software fault localization, in which the relationship between the coverage information of test cases and their execution results is learned by a neural network.
ACKNOWLEDGEMENT
First and foremost, I thank the Lord Almighty, who paved the path for my walk, lifted me to pluck the fruits of success, and is the torch for all my endeavours and engagements. I record my deep sense of indebtedness and wholehearted gratitude to my friends for their active involvement, encouragement, care, and valuable suggestions all the way, and also in shaping the project work and report. Home is the backbone of every success, and our humble salutations go to our beloved parents R. Venkatesan and V. Kalaichelvi, my lovable sister V. Prathipa, and my sweet brother V. Mothil Raj, who inspired, motivated, and supported us throughout the course of the project. I also extend my sincere thanks to my guide G. ILANCHEZHIAPANDIAN and the faculty members of the CSE Department, and to the friends who have rendered their valuable help in completing this project successfully.
REFERENCES
[1] R. Abreu, P. Zoeteweij, and A.J.C. van Gemund, "An Evaluation of Similarity Coefficients for Software Fault Localization," Proc. 12th Pacific Rim Int'l Symp. Dependable Computing, pp. 39-46, 2006.
[2] C. Cadar, V. Ganesh, P.M. Pawlowski, D.L. Dill, and D.R. Engler, "EXE: Automatically Generating Inputs of Death," Proc. Conf. Computer and Comm. Security, 2006.
[3] B. Baudry, F. Fleurey, and Y. Le Traon, "Improving Test Suites for Efficient Fault Localization," Proc. 28th Int'l Conf. Software Eng., L.J. Osterweil, H.D. Rombach, and M.L. Soffa, eds., pp. 82-91, 2006.
[4] P. Arumuga Nainar, T. Chen, J. Rosin, and B. Liblit, "Statistical Debugging Using Compound Boolean Predicates," Proc. Int'l Symp. Software Testing and Analysis, S. Elbaum, ed., July 2007.
[5] J. Burnim and K. Sen, "Heuristics for Scalable Dynamic Test Generation," Proc. IEEE/ACM Int'l Conf. Automated Software Eng., pp. 443-446, 2008.
[6] S. Artzi, A. Kieżun, J. Dolby, F. Tip, D. Dig, A. Paradkar, and M.D. Ernst, "Finding Bugs in Dynamic Web Applications," Proc. Int'l Symp. Software Testing and Analysis, pp. 261-272, 2008.
[7] S. Artzi, J. Dolby, F. Tip, and M. Pistoia, "Practical Fault Localization for Dynamic Web Applications," Proc. 32nd ACM/IEEE Int'l Conf. Software Eng., vol. 1, pp. 265-274, 2010.
[8] S. Artzi, A. Kieżun, J. Dolby, F. Tip, D. Dig, A. Paradkar, and M.D. Ernst, "Finding Bugs in Web Applications Using Dynamic Test Generation and Explicit State Model Checking," IEEE Trans. Software Eng., vol. 36, no. 4, pp. 474-494, July/Aug. 2010.
[9] S. Artzi, J. Dolby, F. Tip, and M. Pistoia, "Directed Test Generation for Effective Fault Localization," Proc. 19th Int'l Symp. Software Testing and Analysis, pp. 49-60, 2010.
[10] S. Artzi, J. Dolby, S. Jensen, A. Møller, and F. Tip, "A Framework for Automated Testing of JavaScript Web Applications," Proc. Int'l Conf. Software Eng., 2011.
[11] R. Abreu, P. Zoeteweij, and A.J. van Gemund, "On the Accuracy of Spectrum-Based Fault Localization," Proc. Testing: Academic and Industry Conf. Practice and Research Techniques, pp. 89-98, Sept. 2007.
[12] H. Agrawal, J.R. Horgan, S. London, and W.E. Wong, "Fault Localization Using Execution Slices and Dataflow Tests," Proc. Int'l Symp. Software Reliability Eng., pp. 143-151, 1995.
[13] P. Arumuga Nainar and B. Liblit, "Adaptive Bug Isolation," Proc. 32nd ACM/IEEE Int'l Conf. Software Eng., pp. 255-264, 2010.
[14] G.K. Baah, A. Podgurski, and M.J. Harrold, "Causal Inference for Statistical Fault Localization," Proc. 19th Int'l Symp. Software Testing and Analysis, pp. 73-84, 2010.
[15] M.Y. Chen, E. Kiciman, E. Fratkin, A. Fox, and E. Brewer, "Pinpoint: Problem Determination in Large, Dynamic Internet Services," Proc. Int'l Conf. Dependable Systems and Networks, pp. 595-604, 2002.
BIOGRAPHY
AUTHOR 1
ELAMATHI.V
M.E.(CSE)
GTEC,VELLORE,
TAMILNADU,
INDIA
AUTHOR 2
G.ILANCHEZHIAPANDIAN,
HOD OF CSE,
GTEC,VELLORE,
TAMILNADU,
INDIA.