Code coverage Administration

To administer code coverage for large projects (or any project, for that matter), we can employ a simple workflow that helps everyone understand the process.

Like any other workflow, this is only indicative and can be adapted to suit any project.

[Figure: Code coverage workflow]

Code Coverage using VSTS

 

Prerequisites

  • A build containing the DLLs/EXEs along with their corresponding PDBs
  • Visual Studio 2008 (VSTS)

Limitations:

  • Code coverage in VSTS is limited to 32-bit binaries.
  • Instrumenting a 64-bit DLL will report an error: the instrumentation itself succeeds, but the 64-bit DLL is forcibly converted to 32-bit. We can proceed only if the binary is compatible with both 32-bit and 64-bit.
  • The tool must be installed under C:\Program Files and not under C:\Program Files (x86).

Steps to perform code coverage

 
Locating Application Binaries
  • Open the bin folder (it should contain the DLLs and their corresponding PDBs)
  • Make sure the bin folder is not read-only
  • Back up the original DLLs in case a rollback is needed
Instrumentation of Binaries
  • Open a command prompt as administrator
  • Change to the appropriate directory:
    1. For 32-bit:

C:\Program Files\Microsoft Visual Studio 9.0\Team Tools\Performance Tools>

    2. For 64-bit (ensure the Vanguard tool is installed):

C:\Program Files\VSCC 10.0>

  • Execute “vsinstr /coverage <assemblyname>” for all your binaries

If your binary name is ‘test.dll’, then type

vsinstr /coverage D:\Test\TestDLLApp\TestDLLApp\bin\Debug\test.dll

  • After instrumentation you will find two additional files: the original test.dll is renamed to test.dll.orig, and a new instrumented test.dll plus a test.instr file are created.
  • Repeat the same for the other DLLs.
  • Alternatively, we can instrument all the DLLs in one pass with the following command (ensure the vsinstr and vsperfcmd utilities are available from the working folder):

for %i in (*.dll) do vsinstr /coverage %i

Start Code coverage monitoring process
  • In the command prompt, type “vsperfcmd /start:coverage /output:yourfilename.coverage”

For example, if you want to save the coverage file in a particular folder, give the full path to that folder:

vsperfcmd /start:coverage /output:c:\test\sample.coverage

  • Verify that the coverage file has been created successfully
Execute Tests
  • Execute the tests (Manual + Automation)
End the code coverage monitoring process
  • In the command prompt, type vsperfcmd /shutdown
  • This shuts down the coverage monitor and saves the coverage data in the coverage file (in the above example, c:\test\sample.coverage).
Generate Code coverage report
  • To generate the code coverage report, open VSTS and select Test -> Windows -> Code Coverage Results from the menu.

Now click the Import button and select the path of the .coverage file.

Note: In order to view the results later, it is necessary to keep the .coverage file along with the instrumented binaries.

Leveraging existing test automation tools

When trying to improve test automation efficiency, we can think of many areas of improvement, but nothing beats identifying the problem space first, as we did earlier. It is commonly observed that:

Test automation tools/frameworks not leveraged across teams

–Why?

  • No common forum to share and discuss test automation challenges and solutions
  • No team wide test automation focus groups
  • Tools not supported for entire product lifecycle

This results in the ongoing cost of maintaining multiple tools, which typically include:

  • Multiple UI automation frameworks
  • Multiple driver test frameworks

To mitigate this, we can consider a few action areas:

Problem: Automation tools poorly leveraged across teams

Actions:

  • Evangelize existing, quality reusable automation frameworks and tools
  • Provide a portal to share reusable automation – tools, frameworks, plug-ins/modules
  • Conduct cross-project sessions
  • Provide test automation guidelines – design patterns, selecting among existing frameworks, coding guidelines, an automation project spec template, etc.

Problem: No common forum to share and discuss test automation challenges and solutions

Actions:

  • Introduce discussion boards
  • Discuss automation challenges in different components

Problem: No team-wide test automation focus groups

Actions: Create groups to

  • Discuss automation challenges in different components
  • Discuss and propose use of existing solutions

Code coverage analysis for API

Code coverage is one (not the only) metric you can use to measure the effectiveness of your test cases. Usually, you’ll use it to find holes in your test model or test coverage. For example, if you run your full suite of tests for an API, and then code coverage shows you that you only touched 50% of the code making up the API, you know you need to go back and add more test cases.

Best Practices:

  • Perform code coverage analysis on your API tests before you mark a deliverable as “done”.
  • After completing your API tests, run coverage analysis at least once per milestone to make sure any changes to the public APIs are still being covered properly

API Testing best practices

When developing your API tests, keep in mind the following types of test deliverables:

1. Test plan

Best Practice: Track all your deliverables as work items in the test management tool of your choice. This provides a great high-level view of what your API test coverage is.

2. BVTs

BVT stands for “Basic Verification Test,” and these tests do exactly what the name implies: we use BVTs to verify that the basic functionality of a feature is solid enough to self-host the feature.

Best Practice: BVTs should be the first tests you develop for your API set.

If you do this, you’ll automatically satisfy the next best practice:

Best Practice: All public APIs should have BVTs
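
To make this concrete, here is a minimal sketch of what a BVT suite might look like, written in Python for brevity; the format_api module and its to_title function are hypothetical stand-ins, not part of any real API discussed here.

# Minimal BVT sketch (hypothetical): verify the most basic behaviour of a
# fictional "format_api" before any deeper testing is attempted.
import unittest

class format_api:
    """Hypothetical system under test, stubbed here so the sketch runs."""
    @staticmethod
    def to_title(text: str) -> str:
        return text.title()

class FormatApiBVT(unittest.TestCase):
    """Basic verification: the API loads and its core calls work at all."""

    def test_to_title_basic(self):
        self.assertEqual(format_api.to_title("hello world"), "Hello World")

    def test_to_title_empty_string(self):
        self.assertEqual(format_api.to_title(""), "")

if __name__ == "__main__":
    unittest.main()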

3. Stress Tests

The purpose of stressing an API is to ensure it doesn’t cause resource leaks, handles low-resource conditions well, and scales well to large input/data usage. The tests contributed to the stress mix should follow these rules (a minimal sketch follows the list):

  • the test must not hog the CPU - give other stress apps a chance to do work
  • the test must have a cleanup mode – usually a cmd line switch you can call the app with to clean up any changes it made to the system
  • the test must be solid – check for out of memory, don’t leak, etc. Don’t cause “false positives” in the nightly stress runs
  • the test must not be focus-based
  • the test must be run in private mode alongside the regular stress mix
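
As a minimal sketch (in Python, assuming a hypothetical scratch-folder workload and a --cleanup switch; real stress harnesses will differ), a stress test following these rules might look like this:

# Stress-test skeleton (sketch): loops through a workload without hogging the
# CPU and supports a cleanup switch to undo any changes it made to the system.
import argparse
import os
import tempfile
import time

WORK_DIR = os.path.join(tempfile.gettempdir(), "api_stress_scratch")

def do_one_iteration() -> None:
    # Placeholder for a single call into the API under stress.
    os.makedirs(WORK_DIR, exist_ok=True)
    with open(os.path.join(WORK_DIR, "scratch.tmp"), "w") as f:
        f.write("stress payload")

def cleanup() -> None:
    # Cleanup mode: remove everything the test created on the system.
    if os.path.isdir(WORK_DIR):
        for name in os.listdir(WORK_DIR):
            os.remove(os.path.join(WORK_DIR, name))
        os.rmdir(WORK_DIR)

def main() -> None:
    parser = argparse.ArgumentParser(description="API stress sketch")
    parser.add_argument("--cleanup", action="store_true",
                        help="undo changes made by previous runs")
    parser.add_argument("--iterations", type=int, default=1000)
    args = parser.parse_args()

    if args.cleanup:
        cleanup()
        return

    for _ in range(args.iterations):
        do_one_iteration()
        time.sleep(0.01)  # yield the CPU so other stress apps get a chance

if __name__ == "__main__":
    main()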

4. Performance

Public APIs need performance tests because they are features that other code depends on. By using an API, developers have certain expectations about how much time and memory it requires. Execution time and working set are probably the most critical measurements for an API; these two are good indicators of when additional investigation is necessary. The problem with performance testing is that even small variations in the execution environment or test code can lead to results that are wildly inconsistent, or not reliable enough to be used as a metric.

Best Practices:

  • Work with your feature team (dev/test/PM) to appropriately prioritize performance testing for your public APIs
  • Create small, focused tests that minimize outside impact for performance testing.
  • Simple unit tests or basic scenarios are better for performance monitoring
  • Ensure your tests produce consistent results

Structure of a Performance Test

  • Set up a neutral environment – ensure what you are testing is not cached in memory, or flush the process’s working set
  • Take measurements before running the test
  • Run the test/perform the action
  • Take measurements afterwards
  • Diff the results (a minimal sketch of this structure follows)
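
A minimal sketch of this structure in Python; tracemalloc is used here only as a stand-in for real working-set measurement, and action_under_test is a hypothetical placeholder for the API call being measured:

# Performance-test skeleton (sketch): measure before, run the action, measure
# after, and diff the results.
import time
import tracemalloc

def action_under_test() -> None:
    # Placeholder for the API call whose cost we want to track.
    _ = sorted(range(100_000), reverse=True)

def run_perf_test() -> None:
    tracemalloc.start()
    mem_before, _ = tracemalloc.get_traced_memory()
    t_before = time.perf_counter()           # measurements before the test

    action_under_test()                      # run the test / perform the action

    t_after = time.perf_counter()            # measurements afterwards
    mem_after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    # Diff the results.
    print(f"elapsed: {(t_after - t_before) * 1000:.2f} ms")
    print(f"memory delta: {(mem_after - mem_before) / 1024:.1f} KiB")

if __name__ == "__main__":
    run_perf_test()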

5. Leak tests

Public APIs can cause memory and other resource leaks just like UI features can. If your APIs are leaking, you’ll generally notice this while running your stress tests. However, just knowing that something leaked doesn’t really help you debug what leaked. If you use the standard test harnesses, you can use the built-in leak testing functionality to generate logs of leaked allocations and resources while you run your tests.

Best Practice: Use the standard test harnesses to inherit basic leak testing functionality

When writing your tests, try not to cache any data as your test is running. Doing this could cause false positives when the harness is run in leak testing mode. For example, if you’re doing your own logging (which you shouldn’t be), you might be storing messages in memory & waiting until the tests are finished before dumping them to disk. If you do leak testing on a per-test basis, the leak testing code will detect your growing internal log array as a leak.
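
Here is a minimal sketch of why cached data causes such false positives; the LOG_BUFFER list and the per-test memory snapshots are illustrative assumptions, not the behaviour of any specific harness:

# Sketch of the false-positive problem: a test-owned log buffer grows across
# tests, so per-test memory snapshots report it as a "leak".
import tracemalloc

LOG_BUFFER = []  # anti-pattern: cached in memory until all tests finish

def run_one_test(i: int) -> None:
    LOG_BUFFER.append(f"test {i}: pass" * 100)  # buffer grows every test

tracemalloc.start()
for i in range(3):
    before, _ = tracemalloc.get_traced_memory()
    run_one_test(i)
    after, _ = tracemalloc.get_traced_memory()
    # The harness sees growth on every test, even though nothing truly leaked.
    print(f"test {i}: apparent leak of {after - before} bytes")
tracemalloc.stop()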

6. Boundary /Error Tests

Boundary testing is usually done after BVTs are complete and checked in. This type of testing includes calling your APIs with invalid input, NULL pointers, empty strings, huge strings, and so on. Making your tests data-driven is recommended, as it makes it easy to add future test cases without recompiling your code (see the sketch below).
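
As a sketch of the data-driven idea (in Python; the parse_port function and its boundary table are hypothetical), the boundary cases live in a table so new cases need no code changes:

# Data-driven boundary-test sketch: cases live in a table (or an external
# file), so new boundary cases can be added without touching the test code.
import unittest

def parse_port(value: str) -> int:
    """Hypothetical API under test, stubbed so the sketch runs."""
    port = int(value)
    if not 0 <= port <= 65535:
        raise ValueError("port out of range")
    return port

# Each row: (input, expected exception or None). In practice this table could
# be loaded from a CSV/XML file instead of being inlined.
BOUNDARY_CASES = [
    ("0", None),
    ("65535", None),
    ("65536", ValueError),
    ("-1", ValueError),
    ("", ValueError),
    ("x" * 10000, ValueError),   # huge string
]

class ParsePortBoundaryTests(unittest.TestCase):
    def test_boundaries(self):
        for raw, expected_error in BOUNDARY_CASES:
            with self.subTest(raw=raw[:20]):
                if expected_error is None:
                    parse_port(raw)
                else:
                    with self.assertRaises(expected_error):
                        parse_port(raw)

if __name__ == "__main__":
    unittest.main()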

Best Practice: Use ITE to model and automate your API system for boundary testing.

This category also includes your security tests – for example, path strings greater than MAX_PATH characters in length, or path strings that include “::$DATA”, etc.

7.  Developer regression tests

DRT means “Developer Regression Test”. Generally, these are designed and written by developers. They should be run by developers before any check-ins. The goal of DRTs is to catch heinous bugs before the build process or BVT team does.

Best Practice: Work with your developers to get DRTs written for your public APIs and to identify what should be covered in those tests. Make sure they run them before check-ins.

8. SDK samples

Sample code should be as clear and concise as possible while still conveying the prominent developer scenarios for the API & respecting solid coding practices (e.g. security checks, error checking, etc.) It is the tester’s responsibility to ensure that the sample code works properly and is included in the SDK as appropriate.

Best Practice: Make sure you test the SDK samples for your APIs on daily builds.

9. API Documentation

Part of testing public APIs is verifying that the documentation is complete, clear, and concise.

Best Practice: When writing your API tests, try copying/pasting the function definitions directly from the SDK docs

Research in testing

Leveraging research in testing – how can we improve the testing we do to be more efficient and effective?  How to make testing more interesting for researchers to work with?  What is our pipeline into research?

Any takers?

Testing Types

This list can get bigger, but we can constrain it if we know the scope. Still, these questions are worth finding answers to:

1. Building automation around integration scenarios

2. Debugging responsibilities – the classical resolution is: because you built the system, you have to debug it. Is there possibly a better solution here?

3. What to include in integration scenarios?

4. Integration testing in general

5. Timing

6. Infinite input space – e.g. API value input, user input, etc.

7. Security Testing

---Deep security testing is difficult and we aren't doing it – specialists may be needed. We don't have a good feel for the level of security testing being done. We don't know how (education), and there is a base level of testing we could be doing, but it doesn't seem to be happening because it isn't baked into the process. What tools are out there to help? How do you use them?

Risk Management

It's the most important area but often the most neglected. If we can answer these, then I think we are good to go in any project:

1. Informal Risk Management

--I care more about its being ineffective than its informality

2. Risk Management

--More people need to know how to do risk assessment; it's not clear we hit the high risks; we are bad at risk assessment. We don't have a good idea of what the risks are – can we get to all parts of our product? How do we design to reach code paths (automated)? We don't know how to measure. We don't have a high-level push to make this happen. We don't know what it means.

Product wide issues

Well, these happen in all products or applications as ship time nears:

1. Product-wide perf vs. individual teams owning their own perf

2. Product-wide stress and MTTF

3. Test Matrix:

Automation Challenges

Sometimes these are conveniently ignored. If we can work on getting answers to these, automation engineers (SDETs) can relax a bit:

1. Maintenance nightmares

2. How to simulate eyes/ears

3. Lack of consistency in automation GUIDs

4. Non-persistent automation GUIDs

5. Simulating ‘Real’ customers

6. Simulating N customers for performance testing

7. Test tools and infrastructure are not stable when needed to test a feature

8. Better End to End Automation

---Better End to End (customer scenario) Automation - test execution and authoring; How to get machines to do more of our work.  How to determine what should or should not be automated.

Testing Influence

Another big area in testing that virtually controls the flow:

1. Power of Go/No-go ship decisions

2. Design time input – testability concerns

3. Fighting for bugs – i.e. How to effectively argue against "No one is ever going to do that!" type of arguments

4. Performance being affected by measurement tools

5. Harder to find customer bugs

--What is the issue? Are the customers' bugs harder than they used to be? Are we just dumber than we used to be? Is it just customer bugs? Why?

6. Test Cycle expanding

--Does this mean testing takes longer?

7. Stabilization phase is too long

8. Stabilization phase is too short

9. Test Environments are more complex, with fewer tools, processes, people or machines to accommodate matrix growth

10. Expanding Test Cycle

---Test Cycle expanding; stabilization phase is too long; estimating time to zero bugs (release ready); a single complete test pass is longer as matrix increases

11. "retaining testers" ???

Problems in Metrics

Quality is hard to assess, and our measurement systems are ineffective, resulting in ship decisions that can negatively impact quality. We use OCA analysis – what else? Are we testing the right things? What classes of issues are we missing that our customers see? What is the effectiveness of our tests? When are we going to ship? What is the quality, and when will we get there? God, this thing always grows on you.

1. Articulating quality of product

2. Coverage – block, functional, user scenario, … What are the ‘right’ goals?

3. Bugs found by: Customer vs. Internal (non-team) vs. Team found.

4. Regressions

5. Lots of possible metrics, so question is: What are the core set all teams need to measure? How to relate other (old and new) metrics to quality?

6. Establishing Ship Criteria – what’s the right set to base ship/no-ship decisions on?

7. Test Effectiveness – including measuring changes

8. How to define "Done"

9. Is it "good"?

--For items 8 & 9, I am contemplating writing a paper and filing for a patent.

10. Customer perception of quality is low

--this soft phrasing still makes it sound like it's the customers' problem

11. Ineffective measurement systems

--or worse, "misleading measurement systems"

12. Measuring product quality

Prioritization of tasks

I think we are majorly influenced by these items when we prioritize our tasks. The great one said, “Prioritize what’s on your schedule and not what is not.” And as one of my friends retorted – if there is nothing on the schedule, how do you prioritize?

1. Time/Schedule influenced?

2. Customer influenced?

3. Bugs Per Test value? (or some other ROI measurement)

4. Code changed vs. tests that cover code?

5. Code Complexity?

6. Hard to prioritize most important activity – i.e.: choosing between increasing coverage and automating a regression

--I think I disagree with this, but I'm not sure what it refers to. Are we referring to the general case of prioritizing what the testers should do?

7. Estimating time / costing for new projects

8. Estimating time to zero bugs (release ready)

9. Testability

Customer scenarios increase testing complexity; testability of our products; what do I test vs not test?  Automate vs. not?  Code coverage vs. not?  Etc.

Estimation & Scheduling

Some difficult items on the estimation and scheduling front

1. Short cycle testing (i.e. Projects working with SCRUM style process).

2. Test having input on scheduling decisions.

3. Accurately estimating test time required.

General Concerns

Some of these concerns are more statements than actual issues. But somehow I have rounded up all the questions that I think need answers.

1. After several release cycles, test suites can grow large and unmanageable

2. Code Bloat – duplicate code, unnecessary code, no common libraries

3. Test case redundancy

4. Maintenance

5. Customer scenarios increase testing complexity

6. Number of Testers per developer continues to rise

7. Lab costs rising

8. Harder to find good testers

--I disagree with this. The problem is that we don't make it attractive enough for good people to become testers.

9. Need Specialists

10. Need better training

11. When tools fall short, we revert to manual testing, which does not attract the “best and brightest”

12. Testing career issues – historically in Testing, management was the only way to advance

--Back to the "testers are considered second class citizens" issue

Let me know your thoughts; we can discuss and come up with some patterns to address them.

SMAF - State Machine based test automation framework

Proposed approach for the harness

Problem statement

Create a model based intelligent automation harness that would make testing cheaper, faster and better.

Approach considerations

Address disadvantage #2 of model-based testing.

Creating a state model

Model-based testing solves these problems by providing a description of the behavior, or model, of the system under test. A separate component then uses the model to generate test cases. Finally, the test cases are passed to a test driver or test harness, which is essentially a module that can apply the test cases to the system under test.

Given the same model and test driver, large numbers of test cases can be generated for various areas of testing focus, such as stress testing, regression testing, and verifying that a version of the software has basic functionality. It is also possible to find the most time-efficient test case that provides maximum coverage of the model. As an added benefit, when the behavior of the system under test changes, the model can easily be updated and it is once again possible to generate entire new sets of valid test cases.

(A) Model: State machines are the heart of the model, but linking them to the test driver is cumbersome. A dedicated editor saves time and effort; the editor also enforces coherency in the model by using a set of rules defining legal actions. I am looking at WWF (state transition workflows) to achieve this.
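
As a minimal sketch, assuming a hypothetical two-state “document” feature, the model can be captured as a simple transition table (Python used here only for illustration):

# Minimal state-model sketch: behaviour of a hypothetical document feature
# expressed as a transition table of (state, input) -> next state.
MODEL = {
    ("Closed", "Open"):  "Opened",
    ("Opened", "Edit"):  "Opened",
    ("Opened", "Save"):  "Opened",
    ("Opened", "Close"): "Closed",
}
INITIAL_STATE = "Closed"

def legal_inputs(state: str) -> list[str]:
    """Inputs the model allows from a given state."""
    return [inp for (src, inp) in MODEL if src == state]

print(legal_inputs("Opened"))  # ['Edit', 'Save', 'Close']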

(B) Test case generator

The following algorithms can be considered for generating test cases

· The Chinese Postman algorithm is the most efficient way to traverse each link in the model. Speaking from a testing point of view, this will be the shortest test sequence that will provide complete coverage of the entire model. An interesting variation is called the State-changing Chinese Postman algorithm, which looks only for those links that lead to different states (i.e. it ignores self-loops).

· The Capacitated Chinese Postman algorithm can be used to distribute lengthy test sequences evenly across machines.

· The Shortest Path First algorithm starts from the initial state and incrementally looks for all paths of length 2, 3, 4, etc. This is essentially a breadth-first search.

· The Most Likely First algorithm treats the graph as a Markov chain. All links are assigned probabilities, and the paths with higher probabilities will be executed first. This enables the automation to be directed toward certain areas of interest (a minimal sketch of this idea follows).
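
Here is a minimal sketch of that last idea in Python; the model, states, and probabilities are hypothetical, and a real generator would of course implement the full algorithms above rather than a simple weighted walk:

# Sketch of the "Most Likely First" idea: treat the model as a Markov chain
# and generate a test sequence by a probability-weighted random walk.
import random

# Transitions: state -> list of (input, next state, probability).
MODEL = {
    "Closed": [("Open", "Opened", 1.0)],
    "Opened": [("Edit", "Opened", 0.6),
               ("Save", "Opened", 0.3),
               ("Close", "Closed", 0.1)],
}

def generate_sequence(start: str, length: int, seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    state, sequence = start, []
    for _ in range(length):
        inputs, next_states, weights = zip(*MODEL[state])
        choice = rng.choices(range(len(inputs)), weights=weights)[0]
        sequence.append(inputs[choice])
        state = next_states[choice]
    return sequence

print(generate_sequence("Closed", 8))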

(C) Test driver: here is an outline of a possible implementation:


The decision module gets a test sequence generated by any one of the graph-traversal algorithms above. It reads the test sequence input by input, determines which action is to be applied next, and calls the function in the implementation module that performs that input. The implementation module logs the action it is about to perform and then executes that input on the system under test. Next, it verifies whether the system under test reacted correctly to the input. Since the model accurately describes what is supposed to happen after an input is applied, oracles can be implemented at any level of sophistication.

A test harness designed in this way can deal with any input sequence because its decision logic is dynamic. In other words, rather than always executing the same actions in the same order each time, it decides at runtime what input to apply to the system under test. Moreover, reproducing a bug is simply a matter of feeding back, as the input sequence, the execution log of the test sequence that caused or revealed the failure (see the sketch below).
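
A minimal sketch of such a driver in Python, with a stubbed system under test; all of the names here (the stub, the two-state model, the actions) are hypothetical illustrations of the structure described above:

# Test-driver sketch: a decision module walks a test sequence, an
# implementation module applies each input to the (stubbed) system under
# test, logs it, and checks the result against the model's expectation.
MODEL = {
    ("Closed", "Open"):  "Opened",
    ("Opened", "Close"): "Closed",
}

class SystemUnderTestStub:
    """Stand-in for the real application; tracks its own state."""
    def __init__(self) -> None:
        self.state = "Closed"
    def apply(self, action: str) -> None:
        self.state = {"Open": "Opened", "Close": "Closed"}[action]

def run_sequence(sequence: list[str]) -> None:
    sut, model_state, log = SystemUnderTestStub(), "Closed", []
    for action in sequence:
        log.append(action)                     # log before executing
        expected = MODEL[(model_state, action)]
        sut.apply(action)                      # execute on the SUT
        assert sut.state == expected, (        # oracle derived from the model
            f"after {log}: expected {expected}, got {sut.state}")
        model_state = expected
    print("passed:", log)                      # the log doubles as a repro

run_sequence(["Open", "Close", "Open"])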

Difficult test items

All things are difficult before they are easy

I think, in an effort to understand the philosophies of testing and how best we can get things going, it would be a good start to list the top issues we face and try to find some patterns for them.

I will begin by thinking aloud about the various topic areas where testers find testing hard. Let’s list the questions first and then have the community or Guruji answer them.

Difficult testing questions/areas that plague every tester

1. General Concerns

2. Estimation & Scheduling

3. Prioritization

4. Metrics

5. Testing Influence

6. Automation challenges

7. Product/Application wide issues

8. Risk Management

9. Testing Types

10. Research

I hope to add a few more in the days to come.

 

Sample Release checklist

[Me]: Guruji, can you please help me with a sample release checklist for any product?

[Guruji]: Well, I can help you with only a sample. You may have to adapt it to your project.

 

Activities (with the person responsible for each)

Engineering activities

1. Create install and configuration scripts as needed. – Dev Lead
2. Remove debugging and testing code from the software (including disabling assertions). – Dev Lead
3. Update version strings with final version information. – Dev Lead
4. Clean up the resource files. – PGM

Quality Assurance activities

1. Verify all source code meets the coding standard; run Checkstyle or another style checker and do a manual inspection. – Dev Lead
2. Check that all defects on the current defect list have been resolved and have a changelist attached. – PGM
3. Smoke test/regression test the final build. – Test Lead
4. Verify that all demo scenarios and BVTs are passing. – Test Lead
5. Verify all manual tests run on the current build and coverage is good. – Test Lead
6. Verify all system test scripts run on the actual released software, and check code coverage. – Test Lead
7. Verify all non-functional system tests (such as performance) are documented. – Test Lead
8. All fixed defects have been verified as fixed and closed. – Test Lead
9. Ensure that all open bugs have been either fixed or moved to another release. – Test Lead
10. Verify installation by installing the system on a clean machine. – Test Lead
11. Have someone not on your team install and run your system without assistance by following your installation directions. – Test Lead
12. Install the program on a machine with an older version of the program (upgrade install). – Test Lead
13. Verify all steps in the deployment plan are completed. – Test Lead
14. Verify all acceptance criteria are met. – Test Manager/Project Manager
15. Automation deliverables – scripts and installer made available to the customer. – Test Lead
16. Consistency check: the SRS, User Manual, System Tests, Staged Delivery Plan, and software must all match. – PGM

User Experience

1. Any new or changed functionality is deemed usable. – Test Lead/PGM
2. All error messages are friendly and appropriate. – Test Lead/PGM

Release Activities

1. The final version of the exit report is ready for sending. – Test Manager
2. Schedule the Acceptance Test date with the customer (and instructor). – Onsite PGM
3. Verify the URL of the web application is working as expected. – Test Lead
4. Synchronize the date/time stamp on all release files. – Dev Lead
5. Tag and branch the source code repository. – Onsite PGM
6. Create a backup of the build environment and place the development environment under change control. – Onsite PGM / Test Manager / Project Manager

Documentation Activities

1. Verify that the User Documentation matches the current release. – PGM/Tech Writer
2. Create a "ReadMe" text file with installation instructions. – PGM
3. Write the "Known Issues" list. – Test Manager

Other Activities

1. Project Visibility calculations are current and accurate. – Project Manager
2. Road map for the product. – Account Manager / Group Project Manager
3. Schedule a project Survival Assessment or Post-Mortem meeting. – Project Manager
