Test from the Top

Environment & Risk Aware
Test Coverage

All types of software testing have different strengths and risks. Manual, functional, integration, and unit testing, along with static code analysis, each help assure the quality of the system under test.

Test coverage attempts to quantify how completely a system is tested. Metrics like "code coverage" can't prove a test is ever actually run against the code it covers, and offer no assurance that the test is a good one, only that it exists.

A test is itself a system, and that system's quality should be continually evaluated and its usefulness justified.


ERA principles improve test quality.

ERA coverage score measures how many tests were run, when and where, and how many were not.



For unit tests, each individual test represents a scenario. Coverage is determined by passing test executions in a particular environment, for a particular version.

Coverage is different from pass/fail results. ERA coverage shows how many tests have been run, and how many are missing.

An ERA score exposes the relationship between failed, broken, and ignored tests in a way a pass:fail ratio cannot.
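As a rough illustration of the difference, an ERA-style score might be computed from run records like the following. The record shape and the category names ("broken", "ignored") are assumptions for this sketch, not a fixed specification:

```python
from collections import Counter

# Hypothetical run records: one entry per defined test scenario, with its
# most recent status in a given context (environment + version).
runs = {
    "login_happy_path": "passed",
    "login_bad_password": "failed",  # ran, and found a problem
    "checkout_flow": "broken",       # the test itself errored before asserting
    "report_export": "ignored",      # deliberately skipped in this environment
    "profile_update": None,          # defined, but never executed here
}

status_counts = Counter(runs.values())
executed = status_counts["passed"] + status_counts["failed"]

# ERA-style coverage: how many defined scenarios actually exercised the
# system in this context, regardless of whether they passed or failed.
coverage = executed / len(runs)

# A pass:fail ratio hides the broken, ignored, and never-run scenarios.
print(f"pass:fail = {status_counts['passed']}:{status_counts['failed']}")
print(f"ERA coverage = {coverage:.0%}")
```

Here the pass:fail ratio is an even 1:1, while the ERA-style view shows that only 40% of defined scenarios ever ran in this context.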


The software development lifecycle is a stepped process with continuous deployment across nodes, both internal and external. Testing should match the cadence of a software system version on its way through the pipeline.

Some tests need to be skipped in certain environments.

Example: An engineer testing locally during development might avoid functional tests that run slowly.

Example: Automated test code should not be deployed to production or shipped to the customer.
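A skip like the first example can still be recorded rather than silently lost. A minimal sketch using Python's standard `unittest` module, assuming a hypothetical convention where the current context is named in a `TEST_ENV` environment variable:

```python
import os
import unittest

# Hypothetical convention: the current context is named in TEST_ENV,
# defaulting to "local" for an engineer's development machine.
CURRENT_ENV = os.environ.get("TEST_ENV", "local")

class CheckoutTests(unittest.TestCase):
    # The slow functional test is skipped during local development, but the
    # skip still appears in the run report, so an ERA score can count this
    # scenario as "ignored" in the local context instead of losing it.
    @unittest.skipIf(CURRENT_ENV == "local", "slow functional test")
    def test_full_checkout_flow(self):
        self.assertTrue(True)  # placeholder for the real end-to-end check
```

Because the skip is reported, not deleted, the scenario remains visible as missing coverage in the local context while still counting in CI.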

Only by tracking test execution per environment can you see where to improve coverage across the software lifecycle.


The direct risk of writing test code is producing poor, low-assurance tests.

The ERA test score exposes unused test scenarios, and shows the real effect these risks have on coverage.

These factors contribute to bad tests or reduced coverage.

How To

Measuring an ERA coverage score is easy. Every time you run tests, manual or automated, record the result, the version, and the environment (the context) in which they ran.

Compare the passing tests to all defined test scenarios in that context.
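The two steps above can be sketched in a few lines. The data layout here is an assumption: each run is recorded as a (test name, version, environment, result) tuple, and the set of defined scenarios is kept separately:

```python
# Hypothetical run log: (test name, version, environment, result).
runs = [
    ("login",    "1.2.0", "ci",      "passed"),
    ("checkout", "1.2.0", "ci",      "passed"),
    ("login",    "1.2.0", "staging", "passed"),
    ("checkout", "1.2.0", "staging", "failed"),
]

# Every scenario the suite defines, independent of where it has run.
defined = {"login", "checkout", "search"}

def era_coverage(runs, defined, version, environment):
    """Passing scenarios in one context, compared to all defined scenarios."""
    passing = {name for name, v, env, result in runs
               if v == version and env == environment and result == "passed"}
    missing = defined - passing
    return len(passing) / len(defined), missing

score, missing = era_coverage(runs, defined, "1.2.0", "staging")
print(f"staging coverage: {score:.0%}, missing: {sorted(missing)}")
```

Running the same function per context shows where coverage thins out: here "search" has never run anywhere, and "checkout" only passes in CI.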

Become suspicious of tests that never seem to break.

Consider whether a suite of tests could ever report a bug if it contains no assertions.

Think about test reuse. Where in your software lifecycle can you increase coverage? How can code be leveraged and repurposed?

Track your test maintenance. Although it falls outside the scope of measuring coverage, this is an important risk to understand. ERA coverage scores will help clarify how many of your tests are in a broken state at any point in time.