Oviatt Library Catalog
Author Nataraj, Anusha, author.
Title Design and implementation of test case prioritization in iValidator / by Anusha Nataraj.
Published [Northridge, California] : California State University, Northridge, 2011.
LOCATION: Electronic Book
CALL #: QA76 .Z95 2011 N38eb
STATUS: ONLINE
Description 1 online resource (x, 62 pages) : charts, graphs, some color.
Content Type still image
text
Format online resource
File Characteristics text file PDF
Thesis M.S. California State University, Northridge 2011.
Bibliography Includes bibliographical references (pages 57-58).
Summary The objective of this work is to facilitate early detection of software defects using a test-ordering technique based on test case prioritization. The proposed technique assigns a priority to each test case in a suite and then executes the test cases in descending priority order. The priority values are computed using an algorithm developed as part of this work; the algorithm computes priorities as a function of defects detected in previous test runs (the test history), the presence of error-prone code constructs, and McCabe's complexity. The technique is intended for use during integration testing and regression testing. The concepts developed in this project have been implemented and integrated into an open-source test tool called iValidator. The iValidator tool can automatically execute a suite of test cases in a specified order; it can also report the test results and maintain a test history. In the iValidator nomenclature, test cases are called test steps. A test step is a composite entity that includes one or more software units to be tested and the associated unit test descriptions. A test step can relate to the testing of a use case or of a sequence within a use case; it can also represent a collection of test cases used in functional testing of a software component. A collection of test steps makes up a test description. Typically, each test description is associated with a System under Test (SuT) representing the higher-level software application being tested. In this work, the iValidator tool has been extended with capabilities for performing static code analysis to detect error-prone code constructs and for computing McCabe's complexity values; the results are expressed in XML. The enhanced tool then uses these computed values, together with the previously recorded test history, to compute the test priority for each test step in a test suite.
In a typical test scenario, the enhanced iValidator tool is first used to determine the test priorities of a collection of test steps in a test description. The tool then executes the test steps in descending order of their test priorities. Upon completion of each test run, the tool generates two reports: the first describes the test execution results for the test steps in the suite, and the second describes the test execution history, the results of the static analysis for error-prone code constructs, and the McCabe complexity values. A prototype of the iValidator tool enhancements has been designed and implemented, and the enhancements have been tested and validated with production-quality code.
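The prioritization scheme described in the summary (a priority computed from test history, error-prone construct counts, and McCabe complexity, followed by execution in descending priority order) can be sketched as follows. The weighted-sum formula, the weights, and all names (`TestStep`, `priority`, `prioritize`) are illustrative assumptions for this sketch; the thesis defines its own algorithm inside iValidator.

```python
from dataclasses import dataclass

@dataclass
class TestStep:
    name: str
    past_defects: int       # defects this step detected in previous runs (test history)
    error_prone_count: int  # error-prone code constructs found by static analysis
    mccabe: int             # McCabe cyclomatic complexity of the code under test

def priority(step, w_hist=0.5, w_constructs=0.3, w_mccabe=0.2):
    # Hypothetical weighted sum; the weights here are arbitrary placeholders.
    return (w_hist * step.past_defects
            + w_constructs * step.error_prone_count
            + w_mccabe * step.mccabe)

def prioritize(steps):
    # Sort test steps in descending priority order, as the abstract describes.
    return sorted(steps, key=priority, reverse=True)

steps = [
    TestStep("login_use_case", past_defects=3, error_prone_count=1, mccabe=4),
    TestStep("report_export", past_defects=0, error_prone_count=5, mccabe=12),
    TestStep("search", past_defects=1, error_prone_count=0, mccabe=2),
]
for s in prioritize(steps):
    print(s.name, round(priority(s), 2))
```

The intent is that a historically defect-prone or structurally complex test step runs early, so defects surface sooner in integration and regression runs.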
Note Description based on online resource; title from PDF title page (viewed on November 29, 2011).
Subject Computer programs -- Testing.
Debugging in computer science -- Computer programs.
Local Subject Dissertations, Academic -- CSUN -- Computer Science.
OCLC number 849958668