When developing your API tests, keep in mind the following types of test deliverables:
1. Test plan
Best Practice: Track all your deliverables as work items in the test management tool of your choice. This provides a high-level view of your API test coverage.
2. BVTs
BVT stands for “Basic Verification Test,” and BVTs do exactly what the name implies: we use them to verify that the basic functionality of a feature is solid enough to self-host the feature.
Best Practice: BVTs should be the first tests you develop for your API set.
If you do this, you’ll automatically satisfy the next best practice:
Best Practice: All public APIs should have BVTs
3. Stress Tests
The purpose of stressing an API is to ensure that it doesn’t cause resource leaks, that it handles low-resource conditions gracefully, and that it scales to large inputs and data volumes. Stress tests should follow the rules of the stress mix:
- the test must not hog the CPU - give other stress apps a chance to do work
- the test must have a cleanup mode – usually a cmd line switch you can call the app with to clean up any changes it made to the system
- the test must be solid – check for out of memory, don’t leak, etc. Don’t cause “false positives” in the nightly stress runs
- the test must not be focus-based (it must not depend on having keyboard or window focus, since it shares the machine with other stress apps)
- the test must be run in private mode alongside the regular stress mix
4. Performance Tests
Public APIs need performance tests because they are features that other code depends on. When developers use an API, they have expectations about how much time and memory it requires. Execution time and working-set measurements are probably the most critical for an API; these two are good indicators of when additional investigation is necessary. The challenge with performance testing is that even small variations in the execution environment or test code can lead to results that are wildly inconsistent, or not reliable enough to be used as a metric.
- Work with your feature team (dev/test/PM) to appropriately prioritize performance testing for your public APIs
- Create small, focused tests that minimize outside impact for performance testing.
- Simple unit tests or basic scenarios are better for performance monitoring
- Ensure your tests produce consistent results
Structure of a Performance Test
- Set up a neutral environment – ensure that what you are testing is not cached in memory, or flush the process’s working set
- Take measurements before running the test
- Run the test/perform the action
- Take measurements afterwards
- Diff the results
5. Leak tests
Public APIs can cause memory and other resource leaks just like UI features can. If your APIs are leaking, you’ll generally notice this while running your stress tests. However, just knowing that something leaked doesn’t really help you debug what leaked. If you use the standard test harnesses, you can use the built-in leak testing functionality to generate logs of leaked allocations and resources while you run your tests.
Best Practice: Use the standard test harnesses to inherit basic leak testing functionality
When writing your tests, try not to cache any data while your test is running. Caching could cause false positives when the harness is run in leak-testing mode. For example, if you’re doing your own logging (which you shouldn’t be), you might be storing messages in memory and waiting until the tests are finished before dumping them to disk. If you do leak testing on a per-test basis, the leak-testing code will detect your growing internal log array as a leak.
6. Boundary/Error Tests
Boundary testing is usually done after BVTs are complete and checked in. This type of testing includes calling your APIs with invalid input, NULL pointers, empty strings, huge strings, etc. Making your tests data-driven is recommended as it will be easy to add future test cases without recompiling your code.
Best Practice: Use ITE to model and automate your API system for boundary testing.
This category also includes your security tests. For example, path strings greater than MAX_PATH characters in length, or path strings that include “::$DATA”, and so on.
7. Developer regression tests
DRT means “Developer Regression Test.” Generally, these tests are designed and written by developers, and they should be run before any check-in. The goal of DRTs is to catch heinous bugs before the build process or the BVT team does.
Best Practice: Work with your developers to get DRTs written for your public APIs and to identify what those tests should cover. Make sure they run the DRTs before every check-in.
8. SDK samples
Sample code should be as clear and concise as possible while still conveying the prominent developer scenarios for the API and respecting solid coding practices (e.g., security checks, error checking, etc.). It is the tester’s responsibility to ensure that the sample code works properly and is included in the SDK as appropriate.
Best Practice: Make sure you test the SDK samples for your APIs on daily builds.
9. API Documentation
Part of testing public APIs is verifying that the documentation is complete, clear, and concise.
Best Practice: When writing your API tests, try copying and pasting the function definitions directly from the SDK docs; if they don’t compile, you’ve found a documentation bug.