Test Automation Code Review Guidelines

[Me]:- Guruji, we have some automation in place. Are there any review guidelines similar to the code review standards we have in Dev?

[Guruji]:- Yes, automation is typically code or script, and it needs to be reviewed. There are a few guidelines you can follow to make those reviews effective.

[Me]:-  Sounds good. Can you share some code review guidelines?

[Guruji]:- There are several guides; I have listed a few here for your convenience.

Duplicate code - code that is the same, almost the same, or even just similar to code elsewhere.
1. Creating two separate test case methods for similar test conditions such as min/max values, nulls or other boundary conditions for parameters. Instead, extract the common logic of the two test cases into a method that both can call (see the sketch after this list).
2. Testing the same functionality delivered in two different releases by rewriting existing test cases just for the new release. Existing test code should be written so that it works across multiple releases.
3. If an existing test case changes from one release to another, pre-compile directives can be employed; if the test case becomes fundamentally different, a new test case should be written.
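
For the first point, here is a minimal sketch of how the common logic might be extracted into a shared method. MSTest is assumed, and OrderValidator with its ValidateQuantity method is a hypothetical API used purely for illustration.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class QuantityBoundaryTests
{
    [TestMethod]
    public void Quantity_MinimumBoundary_IsAccepted()
    {
        VerifyQuantityIsAccepted(1);      // lower boundary
    }

    [TestMethod]
    public void Quantity_MaximumBoundary_IsAccepted()
    {
        VerifyQuantityIsAccepted(1000);   // upper boundary
    }

    // Common logic extracted once so both boundary tests can call it.
    private void VerifyQuantityIsAccepted(int quantity)
    {
        bool isValid = OrderValidator.ValidateQuantity(quantity);   // hypothetical method under test
        Assert.IsTrue(isValid, "Quantity {0} should be accepted", quantity);
    }
}
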
Duplicating functionality - re-writing existing libraries is expensive, both to write and to maintain.
1. Creating a separate random or project data generation library when a new product revision is released. Instead, data generation libraries should be reused as much as possible across projects (a sketch follows this list).
2. Writing separate releases of library code to handle multiple different namespaces of the same product. Pre-processor directives, reflection or generics should be used to make code independent of product namespaces. This applies specifically to code which will be used to test multiple releases of the product.
3. Re-writing functions whose functionality is already partly implemented by other methods should be avoided.
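
For the first two points, a reusable data generation helper might look roughly like this. The names are assumptions rather than an existing library, and the generic interface at the end is one way of keeping the library independent of any product-specific namespace.

using System;

// One shared data generation library, reused across projects and releases
// instead of being rewritten for each product revision.
public static class TestData
{
    private static readonly Random Rng = new Random();

    public static int RandomInt(int min, int max)
    {
        return Rng.Next(min, max + 1);
    }

    public static string RandomString(int length)
    {
        const string chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
        var buffer = new char[length];
        for (int i = 0; i < length; i++)
        {
            buffer[i] = chars[Rng.Next(chars.Length)];
        }
        return new string(buffer);
    }
}

// A generic hook lets each project plug in its own entity types without the
// library referencing any product namespace directly.
public interface ITestDataGenerator<T>
{
    T GenerateRandom();
}
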
Incorrect Exception handling - "swallowing" exceptions is the worst practice, but even merely logging them is bad. In general, test code should not need exception handling, as the underlying test harness will deal with unhandled exceptions automatically. The exception is when you have significant information to add to the exception message; in that case, the original exception should be passed along as an InnerException.
1. System resource leaks - these are especially common when using SqlConnection, SqlCommand and similar types. The best way to use them is to wrap them in a "using" statement; anything that implements IDisposable should be wrapped this way (see the sketch after this list).
2. Any resource used should be opened pessimistically, i.e. in such a way that it is automatically closed once its usage is complete. This includes opening files in read-only mode unless read/write access is required, and closing database connections immediately after use.
3. In the case of unmanaged pointers, system resources need to be released explicitly. A good design approach is to implement a SafeHandle-derived class to manage this for you; it forces you to implement the ReleaseHandle code, which can include memory clean-up.
4. Unmanaged code should be avoided and C# used wherever possible. If it is unavoidable, its usage should be called out and clearly documented.
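
A rough sketch of both ideas - wrapping IDisposable resources in "using" blocks and preserving the original exception as an InnerException - is shown below. MSTest and System.Data.SqlClient are assumed, and the connection string, query and DeploymentHelper are illustrative placeholders only.

using System;
using System.Data.SqlClient;
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ResourceHandlingExamples
{
    [TestMethod]
    public void CustomerTable_HasRows()
    {
        // "using" guarantees Dispose (and Close) even if an assertion throws,
        // so no SqlConnection or SqlCommand is leaked.
        using (var connection = new SqlConnection(@"Server=.;Database=TestDb;Integrated Security=true"))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Customers", connection))
        {
            connection.Open();
            int count = (int)command.ExecuteScalar();
            Assert.IsTrue(count > 0, "Expected at least one customer row.");
        }
    }

    [TestMethod]
    public void Package_Deploys()
    {
        string packagePath = @"TestData\sample-package.zip";   // illustrative, relative path
        try
        {
            DeploymentHelper.Deploy(packagePath);   // hypothetical helper, not a real API
        }
        catch (IOException ex)
        {
            // Add the extra context, but keep the original exception as the
            // InnerException so the harness still reports the root cause.
            throw new InvalidOperationException("Deployment failed for package: " + packagePath, ex);
        }
    }
}
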
Non-use of verification functionality - test code should not throw exceptions to signal failures, but rather use Assert.* methods, with Assert.Fail() when nothing else fits. Don't hesitate to incorporate Assert.* calls into your libraries - some of the most efficient test cases delegate all verification to library methods. All test cases should rely on some form of verification.
1. Verifications should be performed for all relevant fields of the object being tested. This should include core fields and also audit fields such as created and updated dates.
2. Verifications should be bundled together for an entire object and implemented hierarchically. For example, if a customer object has an emails collection, there should be a separate verification method for the email collection, called from the customer verification method (see the sketch after this list).
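
One way this hierarchy might look in code is sketched below; Customer and Email are hypothetical stand-ins for the real domain objects, and MSTest is assumed.

using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical domain objects, included only to keep the sketch self-contained.
public class Email { public string Address { get; set; } }
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime CreatedDate { get; set; }
    public IList<Email> Emails { get; set; }
}

public static class Verifications
{
    // Parent verification covers core and audit fields, then delegates to child verifiers.
    public static void VerifyCustomer(Customer expected, Customer actual)
    {
        Assert.AreEqual(expected.Id, actual.Id, "Customer Id mismatch");
        Assert.AreEqual(expected.Name, actual.Name, "Customer Name mismatch");
        Assert.AreEqual(expected.CreatedDate, actual.CreatedDate, "Audit field CreatedDate mismatch");
        VerifyEmails(expected.Emails, actual.Emails);
    }

    // The child collection gets its own verification method, called from the parent.
    public static void VerifyEmails(IList<Email> expected, IList<Email> actual)
    {
        Assert.AreEqual(expected.Count, actual.Count, "Email count mismatch");
        for (int i = 0; i < expected.Count; i++)
        {
            Assert.AreEqual(expected[i].Address, actual[i].Address, "Email address mismatch at index " + i);
        }
    }
}
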
Magic numbers and the like - hard-coded values can be a maintenance nightmare down the road. Centralize these settings somewhere in your code instead.
1. All configuration and other data values should be kept in a configuration file. Initially, they can be exposed by a separate "Settings" class with static fields or methods whose values are inline; later, this class can be modified to read the values from a different source, such as a configuration file (see the sketch after this list).
2. Settings such as web service URLs, database connection strings and assembly names/paths are further examples.
3. All paths should be relative rather than absolute. Use reflection to determine the execution directory and resolve configuration paths from it.
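
A sketch of what that initial "Settings" class could look like is shown below; the values are placeholders, and the reflection call at the end resolves paths relative to the test assembly instead of hard-coding absolute paths.

using System.IO;
using System.Reflection;

public static class Settings
{
    // Inline values to start with; these getters can later be switched to read
    // from a configuration file without touching the tests that consume them.
    public static string ServiceUrl
    {
        get { return "http://localhost:8080/api"; }   // placeholder value
    }

    public static string TestDbConnectionString
    {
        get { return "Server=.;Database=TestDb;Integrated Security=true"; }   // placeholder value
    }

    // Reflection gives the execution directory, so all other paths stay relative.
    public static string ExecutionDirectory
    {
        get { return Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location); }
    }

    public static string TestDataDirectory
    {
        get { return Path.Combine(ExecutionDirectory, "TestData"); }
    }
}
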

In addition to these, a summary document in the solution or good code comments can help significantly. Don't lose out on the ReadMe file either: it should list any external dependencies, URLs, special set-up instructions, and configuration values that need to be changed for a local machine. Testers could even generate a .chm file to give it an extra professional look, especially when an adopting tester will use the code.

Author

Vinay Jagtap

A hard-core technocrat with over a decade of extensive experience heading complex test projects, coupled with hands-on experience of project management and thought leadership. Extensive experience in Performance, Security and Automation Testing, development of automation frameworks, and the ability to set up and run global service centers and Centers of Excellence for testing.
