Metrics-Based Approach for Test Estimation

[Me] :- Guruji, isn’t there a method to estimate based on the past experience of an organization’s projects?

[Guruji]:- Yes, there is. A useful approach is to track an organization's past projects and the test effort that worked well for each of them. Once there is a set of data covering the characteristics of a reasonable number of projects, this 'past experience' information can be used for future test project planning. (Determining and collecting useful project metrics over time can be an extremely difficult task.) For each new project, the 'expected' required test time can be adjusted based on whatever metrics or other information is available, such as function point count, number of external system interfaces, unit testing done by developers, risk levels of the project, etc. In the end, this is essentially 'judgment based on documented experience', and it is not easy to do successfully.
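To make the idea concrete, here is a minimal sketch in Python. The project data, the effort-per-function-point rate and the risk factor are all illustrative assumptions, not numbers from any real organization:

```python
# Hypothetical past-project data: function points vs. actual test effort (person-days).
past_projects = [
    {"function_points": 120, "test_effort_days": 30},
    {"function_points": 300, "test_effort_days": 80},
    {"function_points": 450, "test_effort_days": 115},
]

# Average effort per function point, derived from past experience.
rate = (sum(p["test_effort_days"] for p in past_projects)
        / sum(p["function_points"] for p in past_projects))

def estimate_test_effort(function_points, risk_factor=1.0):
    """Baseline estimate from past data, adjusted by a judgment-based
    risk factor (e.g. > 1.0 for many external interfaces, little unit
    testing by developers, or a high-risk project)."""
    return function_points * rate * risk_factor

print(round(estimate_test_effort(200, risk_factor=1.2), 1))  # adjusted estimate in days
```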

Implicit Risk Context Approach for Test Estimation

[Me] :- Guruji, what is this implicit method of test estimation? Can anybody do this?

[Guruji]:-  A typical approach to test estimation is for a project manager or QA manager to implicitly use risk context, in combination with past personal experiences in the organization, to choose a level of resources to allocate to testing. In many organizations, the 'risk context' is assumed to be similar from one project to the next, so there is no explicit consideration of risk context. (Risk context might include factors such as the organization's typical software quality levels, the software's intended use, the experience level of developers and testers, etc.) This is essentially an intuitive guess based on experience.

Metrics for Evaluating System Testing

[Me] :- Guruji, what are the metrics for evaluating system testing?

[Guruji]:- Well, to start off:

Metric = Formula
Test Coverage = Number of units (KLOC/FP) tested / total size of the system (KLOC = thousand lines of code; FP = function points)
Number of tests per unit size = Number of test cases per KLOC/FP
Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria
Defects per size = Defects detected / system size
Test cost (in %) = Cost of testing / total cost * 100
Cost to locate defect = Cost of testing / number of defects located
Achieving Budget = Actual cost of testing / budgeted cost of testing
Defects detected in testing = Defects detected in testing / total system defects
Defects detected in production = Defects detected in production / system size
Quality of Testing = Number of defects found during testing / (number of defects found during testing + number of acceptance defects found after delivery) * 100
Effectiveness of testing to business = Loss due to problems / total resources processed by the system
System complaints = Number of third-party complaints / number of transactions processed
Scale of Ten = Assessment of testing as a rating on a scale of 1 to 10
Source Code Analysis = Number of source code statements changed / total number of tests
Effort productivity: Test Planning Productivity = Number of test cases designed / actual effort for design and documentation
Effort productivity: Test Execution Productivity = Number of test cycles executed / actual effort for testing
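As a small sketch, here are a few of these formulas in Python; the inputs are made-up numbers for illustration only:

```python
def test_coverage(units_tested, total_units):
    """Share of units (KLOC/FP) covered by testing."""
    return units_tested / total_units

def cost_to_locate_defect(testing_cost, defects_located):
    """Average cost of finding one defect."""
    return testing_cost / defects_located

def quality_of_testing(defects_in_testing, acceptance_defects_after_delivery):
    """Percentage of all defects that were caught during testing."""
    return defects_in_testing / (defects_in_testing + acceptance_defects_after_delivery) * 100

print(test_coverage(42, 50))              # 0.84 -> 84% of units tested
print(cost_to_locate_defect(12000, 60))   # 200.0 cost units per defect
print(quality_of_testing(95, 5))          # 95.0% of defects found before delivery
```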

Test Estimation - Best Approach?

[Me] :- Guruji, what is the best approach for test estimation?

[Guruji]:- There is no simple answer to this. The 'best approach' is highly dependent on the particular organization and project and on the experience of the personnel involved.
For example, given two software projects of similar complexity and size, the appropriate test effort for one might be very large if it is for life-critical medical equipment software, but much smaller for the other if it is for a low-cost computer game. A test estimation approach that considers only size and complexity might be appropriate for one project but not for the other.

Test Estimate - Quick 'n' Dirty

[Me] :- Guruji, I know this is wrong, but can you give me a quick method to determine a high-level estimate for test projects?

[Guruji]:- Hmm…there is a method, and you would be surprised that it gives almost correct results :-) Just compute the times required for the items in the worksheet below. It should do the trick.

[Image: testestimate]
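Since the worksheet itself is an image, here is a stand-in sketch in Python; the activity names and day counts are my own hypothetical examples, not the items from the original worksheet:

```python
# Hypothetical activity breakdown; replace with the items from the worksheet above.
estimates_in_days = {
    "test planning": 2,
    "test case design": 5,
    "test environment setup": 1,
    "test execution (including retests)": 8,
    "defect logging and reporting": 2,
}

total = sum(estimates_in_days.values())
print(f"High-level test estimate: {total} days")  # 18 days
```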

Test Metrics

[Me] :- Guruji, please tell what metrics I need to follow in any project.

[Guruji]:- There are a lot of metrics you can follow. Test metrics come under various headers – Project, Product & Process. I have listed below a few of them, their purpose and how to calculate them.

PRODUCT

Number of remarks
Definition: The total number of remarks found in a given time period/phase/test type. A remark is a claim made by a test engineer that the application shows undesired behavior. It may or may not result in a software modification or a documentation change.
Purpose: One of the earliest indicators to measure once testing commences; provides initial indications about the stability of the software.
How to calculate: Total number of remarks found.

Number of defects
Definition: The total number of remarks found in a given time period/phase/test type that resulted in software or documentation modifications.
Purpose: A more meaningful way of assessing the stability and reliability of the software than the number of remarks, since duplicate and rejected remarks have been eliminated.
How to calculate: Only remarks that resulted in modifying the software or the documentation are counted.

Remark status
Definition: The status of a defect varies with the defect-tracking tool used. Broadly, the following statuses are available: To be solved (logged by the test engineer, waiting to be taken up by the software engineer), To be retested (solved by the developer, waiting to be retested by the test engineer) and Closed (retested and approved by the test engineer).
Purpose: Tracks the progress of entering, solving and retesting remarks: how many remarks are logged, solved, waiting to be resolved and waiting to be retested.
How to calculate: This information can normally be obtained directly from the defect-tracking system, based on the remark status.

Defect severity
Definition: The severity level of a defect indicates the potential business impact for the end user (business impact = effect on the end user x frequency of occurrence).
Purpose: Provides indications about the quality of the product under test. High-severity defects mean low product quality, and vice versa. At the end of a phase, this information helps make the release decision based on the number of defects and their severity levels.
How to calculate: Every defect has a severity level attached to it. Broadly, these are Critical, Serious, Medium and Low.

Defect severity index
Definition: An index representing the average severity of the defects.
Purpose: Provides a direct measurement of the quality of the product, specifically reliability, fault tolerance and stability.
How to calculate: Assign a number to each severity level: 4 (Critical), 3 (Serious), 2 (Medium), 1 (Low). Multiply the number of defects at each level by that level's number, add the totals and divide by the total number of defects. A worked example follows at the end of this PRODUCT list.

Time to find a defect
Definition: The effort required to find a defect.
Purpose: Shows how fast defects are being found; indicates the correlation between the test effort and the number of defects found.
How to calculate: Divide the cumulative hours spent on test execution and logging defects by the number of defects entered during the same period.

Time to solve a defect
Definition: The effort required to resolve a defect (diagnosis and correction).
Purpose: Provides an indication of the maintainability of the product and can be used to estimate projected maintenance costs.
How to calculate: Divide the number of hours spent on diagnosis and correction by the number of defects resolved during the same period.

Test coverage
Definition: The extent to which testing covers the product's complete functionality.
Purpose: Indicates the completeness of the testing, though not its effectiveness. Can be used as a criterion for stopping testing.
How to calculate: Coverage can be measured with respect to requirements, a functional topic list, business flows, use cases, etc. Calculated as the number of items covered vs. the total number of items.

Test case effectiveness
Definition: The extent to which test cases are able to find defects.
Purpose: Provides an indication of the effectiveness of the test cases and the stability of the software.
How to calculate: Ratio of the number of test cases that resulted in logged remarks to the total number of test cases.

Defects/KLOC
Definition: The number of defects per 1,000 lines of code.
Purpose: Indicates the quality of the product under test; can be used as a basis for estimating the defects to be addressed in the next phase or the next version.
How to calculate: Ratio of the number of defects found to the total number of lines of code, in thousands.
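A minimal sketch in Python of two of the product metrics above, the defect severity index and defects/KLOC; the defect counts and code size are illustrative:

```python
# Severity weights as defined above: Critical=4, Serious=3, Medium=2, Low=1.
WEIGHTS = {"Critical": 4, "Serious": 3, "Medium": 2, "Low": 1}

def defect_severity_index(defects_by_severity):
    """Weighted average severity: sum(count * weight) / total defects."""
    weighted = sum(WEIGHTS[sev] * n for sev, n in defects_by_severity.items())
    total = sum(defects_by_severity.values())
    return weighted / total

def defects_per_kloc(defects_found, lines_of_code):
    """Defects per 1,000 lines of code."""
    return defects_found / (lines_of_code / 1000)

# Illustrative numbers: 2 Critical, 5 Serious, 10 Medium, 3 Low defects.
print(defect_severity_index({"Critical": 2, "Serious": 5, "Medium": 10, "Low": 3}))  # 2.3
print(defects_per_kloc(45, 30000))  # 1.5 defects per KLOC
```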

PROJECT

Workload capacity ratio
Definition: Ratio of the planned workload to the gross capacity for the total test project or phase.
Purpose: Helps detect issues related to estimation and planning; also serves as an input for estimating similar projects.
How to calculate: Usually computed at the beginning of the phase or project. Workload is determined by multiplying the number of tasks by their norm times; gross capacity is the planned working time. Divide the workload by the gross capacity. A worked sketch follows at the end of this PROJECT list.

Test effort percentage
Definition: Test effort is the amount of work spent, in hours, days or weeks. Overall project effort is divided among multiple phases of the project: requirements, design, coding, testing and so on.
Purpose: The effort spent on testing, relative to the effort spent on development activities, indicates the level of investment in testing. This information can also be used to estimate similar projects in the future.
How to calculate: Divide the overall test effort by the total project effort.

Defect category
Definition: An attribute of the defect relating it to a quality attribute of the product. Quality attributes include functionality, usability, documentation, performance, installation and internationalization.
Purpose: Provides insight into the different quality attributes of the product.
How to calculate: Divide the defects that belong to a particular category by the total number of defects.
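As a worked sketch of the workload capacity ratio and the defect category breakdown, with hypothetical tasks, norm times and counts:

```python
# Hypothetical tasks with counts and norm times (hours per task).
tasks = [
    ("design test cases", 100, 0.5),
    ("execute test cases", 100, 0.75),
    ("log and retest remarks", 40, 1.0),
]

workload = sum(count * norm_hours for _, count, norm_hours in tasks)  # 165 hours
gross_capacity = 200  # planned working hours for the phase
print(workload / gross_capacity)  # 0.825 -> plan fits within capacity

# Defect category: share of defects per quality attribute (illustrative counts).
by_category = {"functionality": 40, "usability": 10, "performance": 6, "documentation": 4}
total = sum(by_category.values())
for category, count in by_category.items():
    print(f"{category}: {count / total:.0%}")
```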

PROCESS

Should be found in which phase
Definition: An attribute of the defect indicating in which phase the remark should have been found.
Purpose: Shows whether we find the right defects in the right phase, as described in the test strategy, and what percentage of defects migrate into subsequent test phases.
How to calculate: Count the defects that should have been found in previous test phases.

Residual defect density
Definition: An estimate of the number of defects that may remain unresolved in the product after a given phase.
Purpose: The goal is to achieve a defect level that is acceptable to the clients. We remove defects in each of the test phases so that few remain.
How to calculate: This is a tricky issue. Released products have a basis for estimation; for new versions, industry standards coupled with project specifics form the basis for estimation.

Defect remark ratio
Definition: Ratio of the number of remarks that resulted in software modification vs. the total number of remarks.
Purpose: Indicates the level of understanding between the test engineers and the software engineers about the product, and gives an indirect indication of test effectiveness.
How to calculate: Divide the number of remarks that resulted in software modification by the total number of logged remarks. Valid for each test type, during and at the end of test phases. A sketch follows at the end of this PROCESS list.

Valid remark ratio
Definition: Percentage of valid remarks during a certain period. Valid remarks = number of defects + duplicate remarks + number of remarks that will be resolved in the next phase or release.
Purpose: Indicates the efficiency of the test process.
How to calculate: Ratio of the total number of valid remarks to the total number of remarks found.

Bad fix ratio
Definition: Percentage of resolved remarks whose fixes created new defects while resolving existing ones.
Purpose: Indicates the effectiveness of the defect-resolution process, and gives an indirect indication of the maintainability of the software.
How to calculate: Ratio of the total number of bad fixes to the total number of resolved defects. Can be calculated per test type, test phase or time period.

Defect removal efficiency
Definition: The number of defects that are removed per time unit (hours/days/weeks).
Purpose: Indicates the efficiency of defect-removal methods, and indirectly measures the quality of the product.
How to calculate: Divide the effort spent on defect detection, defect resolution and retesting by the number of remarks. Calculated per test type, during and across test phases.

Phase yield
Definition: The number of defects found during a phase of the development life cycle vs. the number of defects estimated at the start of the phase.
Purpose: Shows the effectiveness of defect removal and provides a direct measurement of product quality; can be used to determine the estimated number of defects for the next phase.
How to calculate: Ratio of the number of defects found to the total number of estimated defects. Can be used during a phase and also at the end of the phase.

Backlog testing
Definition: The number of resolved remarks that are yet to be retested by the test team.
Purpose: Indicates how well the test engineers are coping with the development effort.
How to calculate: Count the resolved remarks that have not yet been retested.

Scope changes
Definition: The number of changes that were made to the test scope.
Purpose: Indicates requirements stability or volatility, as well as process stability.
How to calculate: Ratio of the number of changed items in the test scope to the total number of items.
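Finally, a small sketch of three of the process ratios above; again, the counts are illustrative:

```python
def defect_remark_ratio(remarks_causing_changes, total_remarks):
    """Share of remarks that resulted in software modification."""
    return remarks_causing_changes / total_remarks

def bad_fix_ratio(bad_fixes, resolved_defects):
    """Share of fixes that introduced new defects."""
    return bad_fixes / resolved_defects

def phase_yield(defects_found, defects_estimated):
    """Defects found in a phase vs. the estimate at its start."""
    return defects_found / defects_estimated

print(defect_remark_ratio(80, 100))  # 0.8 -> most remarks were real defects
print(bad_fix_ratio(4, 80))          # 0.05 -> 5% of fixes created new defects
print(phase_yield(80, 90))           # ~0.89 of the estimated defects were found
```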
