[Me]: - Guruji, for a novice like me, can you tell me the common test terms that do the rounds in the office? I don't want to be left behind :-)
[Guruji] : - I can see that you want to sound cool in front of your colleagues. Well, most organizations have terms specific to them, but they are all derived from the universal list. Sometimes they are used interchangeably, but for your sake, here is the compilation.
Term | AKA | Definition |
Agent | daemon | Automation System component - A process running on any machine which is a part of the automation system. Talks to the distributor to give “ready” and “busy” status. Also usually gathers machine software and hardware configuration data to upload to a configuration database. |
Automation | | Executable test code that requires no human intervention during execution in order to complete successfully. |
Automation System | | An end-to-end solution for automated test case execution. Provides the infrastructure for test case automation to be executed and reported on. |
Automation Target Machine/Machine Under Test | | Automation System Component - Machine where the Harness is launched and test cases are executed. A single machine. |
Code Coverage | | The degree to which a component or product has had automation touch some metric of code. The metric can be function coverage, branch coverage, block coverage, or arc coverage. Block coverage is most often used. Product code needs to be instrumented for code coverage in order to track it. |
Configuration database | | Automation System Component - Agents send machine software and hardware information to this database. Used by the scheduler to find valid automation target machines for automation. |
Code Writer | Code Generators | A tool that can query some structure of a target of test (an assembly, a database) and generate test code that can be executed against that target of test (see the code-generator sketch after this table). |
Configuration Matrix | | The hardware and software configuration dimensions supported for a product release. The dimensions are multiplied into a full matrix. |
Data Driven Automation | | Automation that accepts variable input at run time. Variables are stored in some persisted state (database, file) and read into the code as it executes (see the data-driven sketch after this table). |
Delta | diff | The resultant difference between 2 things. |
Distributor | Scheduler, controller | Automation System Component - The part of the automation system where test automation is queued up to be executed on machines within a machine pool (lab). It often has access to configuration data from the automation target machines in order to send tests to the correct automation targets (e.g. if a test requires IA64). The distributor is usually in communication with agents on the Automation Target Machines. In some cases, distributors are overloaded with coordination instructions for multi-machine automation (better to do this in a harness, for portability). Distributors are basically fancy ShellExec engines. |
Genetic Algorithms | | Data driven automation that "evolves" towards the favored result. The user "selects" the favored result prior to execution, and the genetic algorithm can mutate and become more and more efficient at finding the least number of steps required to get from the start to the favored result (see the genetic-algorithm sketch after this table). |
Harness | | Automation System Component - The endpoint executable run on an Automation Target Machine which knows how to interpret structure in a binary or some other blob which contains test cases and execution instructions. Easy-to-use harnesses have both a GUI and a command line mode (for use with a distributor); see the harness sketch after this table. |
Instrumented Build | | A build of the product which has been modified in some way to trace code behaviors, such as code coverage or memory tracing. |
ITE | Integrated Test Environment | A tool used to create and execute Model Based Tests. |
Log File | | Verbose data about the Run, parameters used, system configuration used, trace data, and the result of test cases. Log files can be parsed and can have result data uploaded to a result database. Log files are often saved to a file share and linked to results if needed for later analysis. |
Logger | Logging API | API used by test automation authors to write comments and pass/fail results to the log file (see the logger sketch after this table). |
Machine Paver | | Automation System Component - A component that can provision machines depending on test requirements. A paver can blow away and restore machines in the lab to allow for the install of custom or standard test baseline images. It may be reactive (test cases submitted to the system request a configuration that is not available, so the paver blows away machines and restores or creates the correct image for the test case) or proactive (the paver sets up machines to base configurations prior to test case execution, and test cases are designed to run only on the configs available in the lab). |
Machine Pool | | Automation System Component - A logical grouping of random machines, usually in a lab. Might be secured. |
Model Based Testing | | A testing method that usually uses a finite state machine from which test cases and paths for testing are derived (see the model-based sketch after this table). |
Offline Run | | Automation System Component - A persisted representation of a Run; can be used for offline automation. |
Parameter Matrix | | A set of dimensions and values for a method signature which takes more than one parameter (see the matrix and pairwise sketch after this table). |
PICT | Pairwise Independent Combinatorial Testing | A tool that can be used to create equivalency classes based on pairwise combination of a set of dimensions. Results in far fewer outputs than a full matrix expansion. Guarantees that all values from any 2 dimensions are combined at least once in the result set (see the matrix and pairwise sketch after this table). |
Report | | Visualization of automation results. |
Result | | The actual Pass/Fail/Other from a test case. Also a logical object in the result database. Linked to the test case and contains metadata describing the Automation Target machine on which the test was executed. Data for the result is retrieved from the log file. |
Run | Job | A logical object that represents the execution instructions, parameters, binaries, command line, and any other information required by the harness to execute a test case or set of test cases. |
Scenario | End user case | An end to end test case that covers any user scenario. |
TCM | Test Case Manager | A database where test cases can be created, modified, stored, and deleted. Stores metadata about automated test cases, stores descriptions and test steps for manual test cases. |
Test Case (automated) | | • A test case *must* be atomic – 100% self-contained; it can be executed independently, or in any order with any other set of test cases. • Test cases *must* be implemented so that they can be run in any order (API and UI). • API (not UI) test cases *must* be implemented so that they can be run in parallel (or multi-threaded). • Test cases *must* be globalized, that is to say that any test can run on any international platform/OS. • Test cases *must not* have dependencies on each other; execution sequence *must* be irrelevant. • Test cases *should* clean up any data they wrote, or restore any data they deleted or modified during execution. Test cases *should not* have side/after-effects that could affect other tests. • Test cases *must* verify that a valid starting state exists on the machine under test. • Test cases *must* be able to configure the machine under test with data required for the test. • Test cases *must* be able to execute successfully on a valid automation target machine that is not connected to a network. • Test cases *must* specify an execution timeout which, if exceeded, results in a failure. • A test case *should* map to a method or function in an assembly DLL (assuming we will use a harness like NUnit or Perseus.net). – For example, a single assembly could contain multiple test cases, but each of those cases should be independently executable. – One programmatic entry point per test case is a good rule of thumb. • A test case *must* specify any special execution constraints that are beyond the baseline, and be able to verify those execution constraints prior to execution. – For example, a test case might be specifically built to run *only* on a specific SKU of Green. The test case needs to verify that it is being executed on that SKU prior to test step execution to avoid a false failure due to an invalid execution environment. • A test case *must* log one and only one Pass/Fail result per execution. – Internal test steps can log pass/fail, but the test case pass/fail is the result that we record for reporting purposes. – Test step results are logically "anded" together to create the test case result. • If you have a "test case" that requires another "test case" to execute before it, you actually have 2 test steps within 1 test case. Log one result. This is critical for results to mean something for reporting later on. • A test case is represented in a Test Case Manager, with the following required data: – A test case *must* have a description of each execution step in the test. – A test case *must* have a description of any required environment/hardware beyond the baseline configuration which is needed for successful execution. – A test case *must* have a reference to the automation scripts/binaries as well as source code. – A test case *must* have a description of and a reference to any data required to set up the test. (See the test case sketch after this table.) |
Test Suite | | A collection of test cases. |
Test Variation | | The same test case, using different variables for each execution. |
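[Guruji] : - A few of these terms are easier to grasp with a little code, so here are some toy sketches. All of them are in Python, and every file name, module name, and data value in them is made up purely for illustration.

For the Code Writer entry, here is a minimal sketch of the idea: inspect some structure of a target of test and emit skeleton test code. It assumes the target is an importable Python module (the standard-library `math` module stands in for a real target), and the `test_` naming convention is just an assumption.

```python
import inspect
import math  # stand-in target of test; any importable module would do


def generate_test_stubs(module):
    """Query the target's structure and emit one skeleton test per public callable."""
    lines = [f"import {module.__name__}", ""]
    for name, obj in inspect.getmembers(module, callable):
        if name.startswith("_"):
            continue  # skip private/internal members
        lines.append(f"def test_{name}():")
        lines.append(f"    # TODO: call {module.__name__}.{name} and assert on the result")
        lines.append("    raise NotImplementedError  # placeholder until the test is written")
        lines.append("")
    return "\n".join(lines)


if __name__ == "__main__":
    print(generate_test_stubs(math))  # prints the generated (unfinished) test code
```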
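For Data Driven Automation, the point is that the variables live outside the test code. A minimal sketch, assuming the persisted state is a CSV file; the file name, its columns, and the `is_leap_year` function under test are all invented for the example.

```python
import csv


def is_leap_year(year):
    """Illustrative code under test."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


def run_data_driven(path="leap_year_cases.csv"):
    """One test body, many executions: each row supplies the input and the
    expected output, so adding a test variation means adding data, not code."""
    results = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # expected columns: year, expected
            actual = is_leap_year(int(row["year"]))
            expected = row["expected"].strip().lower() == "true"
            results.append((row["year"], "PASS" if actual == expected else "FAIL"))
    return results
```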
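For Genetic Algorithms, a toy sketch of "evolving" towards a favored result selected up front. The favored result here is just a target string, fitness counts matching characters, and the population size and mutation rate are arbitrary; a real use would score sequences of test steps instead.

```python
import random
import string

TARGET = "ALL TESTS PASS"                       # the favored result, selected up front
ALPHABET = string.ascii_uppercase + " "


def fitness(candidate):
    """Higher is better: number of positions already matching the favored result."""
    return sum(c == t for c, t in zip(candidate, TARGET))


def mutate(candidate, rate=0.1):
    """Randomly change a few characters; mutation drives the evolution."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in candidate)


def evolve(population_size=50, generations=1000):
    population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(population_size)]
    for generation in range(generations):
        population.sort(key=fitness, reverse=True)       # "select" the fittest candidates
        if population[0] == TARGET:
            return generation, population[0]             # favored result reached
        survivors = population[: population_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return generations, population[0]


print(evolve())
```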
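For the Harness entry, a minimal command-line sketch: it loads a "blob" of test cases (here just a Python module whose `test_*` functions are the test cases, which is an assumption rather than a universal convention), executes them, and reports one pass/fail per case.

```python
import argparse
import importlib
import traceback


def run(module_name):
    """Load the test container, execute each test case, report one result per case."""
    module = importlib.import_module(module_name)
    tests = [getattr(module, name) for name in dir(module) if name.startswith("test_")]
    passed = 0
    for test in tests:
        try:
            test()
            print(f"PASS  {test.__name__}")
            passed += 1
        except Exception:
            print(f"FAIL  {test.__name__}")
            traceback.print_exc()
    print(f"{passed}/{len(tests)} test cases passed")
    return 0 if passed == len(tests) else 1


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="toy command-line test harness")
    parser.add_argument("module", help="importable module containing test_* functions")
    raise SystemExit(run(parser.parse_args().module))
```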
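For the Logger entry, a minimal sketch of a logging API for test authors, built on Python's standard `logging` module. The log file name and the `RESULT PASS/FAIL` line format are assumptions, chosen so a result parser could pick results out of the log later.

```python
import logging


def make_test_logger(path="run.log"):
    """Give test authors one object for writing comments and results to the log file."""
    logger = logging.getLogger("test_run")
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(path)
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    logger.addHandler(handler)
    return logger


def log_result(logger, test_case_name, passed, comment=""):
    """Exactly one pass/fail line per test case, as the test case rules above require."""
    logger.info("RESULT %s %s %s", "PASS" if passed else "FAIL", test_case_name, comment)
```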
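For Model Based Testing, a small sketch: describe the product as a finite state machine, then walk the model to derive test paths. The media-player states and actions below are invented purely to show the technique.

```python
from collections import deque

# Toy model: states and the actions that move between them.
TRANSITIONS = {
    "Stopped": {"play": "Playing"},
    "Playing": {"pause": "Paused", "stop": "Stopped"},
    "Paused":  {"play": "Playing", "stop": "Stopped"},
}


def derive_test_paths(start="Stopped", max_steps=3):
    """Walk the model breadth-first; every action sequence up to max_steps
    becomes a candidate test case (the path is the test script)."""
    paths = []
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if path:
            paths.append(path)
        if len(path) == max_steps:
            continue
        for action, next_state in TRANSITIONS[state].items():
            queue.append((next_state, path + [action]))
    return paths


for p in derive_test_paths():
    print(" -> ".join(p))
```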
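For the Configuration Matrix, Parameter Matrix, and PICT entries, the same trade-off seen from two sides: multiply every dimension out for the full matrix, or settle for covering every pair of values at least once. The dimensions below are invented, and the greedy all-pairs routine only mimics PICT's coverage guarantee; it is not PICT's actual algorithm.

```python
from itertools import combinations, product

DIMENSIONS = {                      # invented configuration dimensions
    "os":      ["WinClient", "WinServer", "Linux"],
    "arch":    ["x86", "x64", "IA64"],
    "browser": ["A", "B"],
    "locale":  ["en-US", "ja-JP"],
}


def full_matrix(dims):
    """Configuration/Parameter Matrix: every dimension multiplied out."""
    names = list(dims)
    return [dict(zip(names, combo)) for combo in product(*dims.values())]


def pairwise(dims):
    """Greedy all-pairs reduction: every value pair from any two dimensions
    appears in at least one row, usually in far fewer rows than the full matrix."""
    names = list(dims)
    uncovered = {(a, va, b, vb)
                 for a, b in combinations(names, 2)
                 for va in dims[a] for vb in dims[b]}
    rows = []
    while uncovered:
        # pick the full-matrix row that covers the most still-uncovered pairs
        best = max(full_matrix(dims),
                   key=lambda row: sum((a, row[a], b, row[b]) in uncovered
                                       for a, b in combinations(names, 2)))
        uncovered -= {(a, best[a], b, best[b]) for a, b in combinations(names, 2)}
        rows.append(best)
    return rows


print(len(full_matrix(DIMENSIONS)), "rows in the full matrix")   # 3 * 3 * 2 * 2 = 36
print(len(pairwise(DIMENSIONS)), "rows with pairwise coverage")
```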
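Finally, for the Test Case (automated) rules, a minimal sketch of one atomic test case using the standard `unittest` module: it verifies its starting state, creates its own data, declares a timeout, records exactly one pass/fail, and cleans up after itself. The scenario (save a value and reload it) and the timeout value are invented.

```python
import os
import tempfile
import time
import unittest


class TestSavedValueSurvivesReload(unittest.TestCase):
    """One atomic, self-contained test case logging a single pass/fail result."""

    TIMEOUT_SECONDS = 30  # execution constraint declared up front

    def setUp(self):
        # set up the data the test needs and verify a valid starting state
        self.workdir = tempfile.mkdtemp(prefix="tc_save_reload_")
        self.assertTrue(os.path.isdir(self.workdir), "invalid starting state")
        self.started = time.monotonic()

    def test_saved_value_survives_reload(self):
        path = os.path.join(self.workdir, "value.txt")
        with open(path, "w") as f:       # "save"
            f.write("42")
        with open(path) as f:            # "reload"
            self.assertEqual(f.read(), "42")
        # fail, rather than hang forever, if the declared timeout was exceeded
        self.assertLess(time.monotonic() - self.started, self.TIMEOUT_SECONDS)

    def tearDown(self):
        # clean up everything the test wrote so other test cases are unaffected
        for name in os.listdir(self.workdir):
            os.remove(os.path.join(self.workdir, name))
        os.rmdir(self.workdir)


if __name__ == "__main__":
    unittest.main()
```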