Test from the Top

Assertions

Too many assertions hurt our software testing efforts!

Tests Break

When your test breaks, whether by failure or error, the remaining assertions never execute, and your test coverage shrinks.

To address this fact of testing, I propose:

Hard vs. Soft Assertions

Hard assertions are asserts that halt test execution on failure.

Soft asserts still fail the test, but they record the result and yield so that execution continues. You always get complete test results without halting, and that's an ideal separation of concerns: reporting a failure is decoupled from controlling the flow of the test.
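To make that concrete, here is a minimal sketch of a soft assertion collector (my own illustration in Python, not any particular framework's API):

    class SoftAssert:
        """Collects assertion failures instead of halting on the first one."""

        def __init__(self):
            self.failures = []

        def check(self, condition, message="soft assertion failed"):
            # Record the failure and keep going; never raise here.
            if not condition:
                self.failures.append(message)

        def assert_all(self):
            # Fail the test once, at the end, listing every recorded failure.
            if self.failures:
                raise AssertionError(
                    "{} soft assertion(s) failed:\n{}".format(
                        len(self.failures), "\n".join(self.failures)
                    )
                )

The check calls handle recording results, and a single assert_all at the end handles pass/fail, so the two concerns stay separate.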

Test code reuse is also easier when asserts don't halt execution: you can string tests together into any kind of workflow automation without modifying the code. This makes end-to-end feature testing easier and more robust.

Some test frameworks already implement soft assertions; TestNG's SoftAssert and AssertJ's SoftAssertions are two examples.

I'm sure there are more out there; drop me a line and let me know!

Target State

Every assertion checks a single state of the system under test, and that one state may legitimately warrant multiple assertions. I'll call it the "target state".

For some tests a single assert is enough, but often it isn't. Consider a user interface test that loads some widget and verifies its structure once loaded, or a unit test whose target state is a complex data structure with multiple elements.

The target state is the final, crucial context of the test, and ideally you could make multiple assertions in that context without any of them halting execution when they fail.
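Using the SoftAssert sketch above, a test of that widget example can make several assertions against one target state and still report all of them (load_widget and the widget fields are hypothetical placeholders):

    def test_widget_structure():
        sa = SoftAssert()
        widget = load_widget()  # hypothetical: drive the system to the target state
        # Multiple assertions against the same target state; none of them halts.
        sa.check(widget.title == "Dashboard", "unexpected title")
        sa.check(len(widget.items) == 3, "unexpected item count")
        sa.check(widget.is_visible, "widget not visible")
        sa.assert_all()  # report every failure together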

Reusing Test Code

Automated test code is ripe for reuse!

You are wasting potential if you don't consider new use cases for your automation systems.

During development, developers need automated workflow tests that are visible and flexible enough to confirm functionality or serve as the starting point for new tests.

Visual testing that gathers screenshots of your application for analysis can be built entirely from repurposed test code, if that code already exists.

When you start to reuse test code, the first thing you'll do is strip out assertions so they can't halt execution. If your tests are built with soft assertions, that step is unnecessary; better still, the assertions keep telling you where the automation behaves unexpectedly, without breaking it.
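As a hedged sketch of that kind of reuse, a soft-asserting test step can double as a workflow or visual-testing step; the assertions keep flagging surprises without stopping the run (login_page, screenshot, and the step itself are hypothetical helpers):

    def step_login(sa, user, password):
        page = login_page()                # hypothetical page object
        page.submit(user, password)
        sa.check(page.is_logged_in, "login did not complete for " + user)
        return page

    def gather_screenshots(users):
        # Repurposed for visual testing: soft asserts still report
        # unexpected behavior, but never break the screenshot run.
        sa = SoftAssert()
        for user, password in users:
            page = step_login(sa, user, password)
            screenshot(page, name=user)    # hypothetical capture helper
        return sa.failures                 # surface oddities for later review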

Reach out beyond your development team. Account managers, support representatives, developer operations. All these groups and more are likely to benefit from reusing your automation code! Talk to them, listen and try to help.

Manual testing can and should be augmented. If your Quality Assurance or User Acceptance Testing teams are slogging through repetitive steps, take your automated tests and re-purpose them!

I promise someone at your organization dreams of automating that login form bypass, or some minor change to application state, that would be trivial for you with a little test code.

Unit Tests

Unit testing offers fewer opportunities for code reuse, it's true. A unit test is ideally a small, highly targeted, independent, single-purpose test.

Usually your target state will be simple, and a lone assert does the job. In that case, a soft assert accomplishes the same thing as a halting assert.

If you are using soft asserts for integration and functional tests, it's worth unifying on soft asserts everywhere for consistency and utility. Consistent code standards matter and contribute to developer adoption of testing.
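For example, a lone soft assert in a unit test behaves just like a hard assert once assert_all runs, so standardizing costs nothing (add is a hypothetical function under test):

    def test_add():
        sa = SoftAssert()
        sa.check(add(2, 2) == 4, "add(2, 2) should equal 4")
        sa.assert_all()  # with a single check, this halts exactly like a hard assert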

Exceptions

During test execution, an unhandled exception fails the test and halts it, exposing unexpected behavior.

This is an important part of testing and a real source of found bugs. But it also carries a risk: tests halt on exceptions, which is the same problem as too many hard assertions, and unexpected behavior requires more triage than a handled warning.

The unexpected is notoriously hard to plan for, so accept and acknowledge it. Pay attention to tests that throw exceptions, and redesign them.
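One possible redesign, sketched here as my own illustration rather than a prescription: wrap a risky step so an unexpected exception is recorded as a soft failure instead of halting the remaining checks.

    def soft_step(sa, description, step):
        # Run a step that might blow up; convert an unexpected exception
        # into a recorded failure so the rest of the test still reports.
        try:
            return step()
        except Exception as exc:
            sa.check(False, "{} raised {!r}".format(description, exc))
            return None

    def test_widget_survives_exceptions():
        sa = SoftAssert()
        widget = soft_step(sa, "load widget", load_widget)  # hypothetical step
        if widget is not None:
            sa.check(widget.is_visible, "widget not visible")
        sa.assert_all()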