Outside-in: Using your Acceptance Criteria to drive unit tests
Tl;dr – We can fall into the anti-pattern of writing unit tests that test the implementation, not the ask, and that can mean codifying the wrong behaviour into our tests.
The implementation might not always match the requirements
A big assumption we can make in teams is that what we’ve built is actually what we wanted in the first place. The truth of the matter is that our code and implementation don’t automatically match what the business wanted. There are many reasons this can happen:
- Undefined or incomplete asks leading to assumptions
- Scope creep
- Ignoring the story and building something anyway
- Misinformation or misremembering things
This can lead us to a situation where we have code, it’s WELL WRITTEN AND AWESOME and it does something… just maybe not the something we were looking for or expecting.

So why is this a problem? The code is good, it works (doing something), and it’s even secure and maintainable! Because, dear reader, it’s not doing what was needed, so it doesn’t add value.
Why do we code?
1. To do great engineering?
2. To serve business needs
<— this one
The whole reason we code is to support a product and build features that meet a business need and (hopefully) generate profit. Arguably, more important than good engineering is meeting that business need with our development.
Testing the implementation creates problems
Frequently, when we’re adding unit tests (or other code-based tests) we write the code first, then test. Given that we’re taught to test code by making sure the code logic is good, we base our tests on what the code is already doing.
But what happens if that’s wrong?!
If, as we described above, our implementation doesn’t match the business needs, then we’re codifying defects into our code base and giving ourselves false positives. These are tests that run and say “it’s all good” when, in reality, there’s a big problem. A product problem more than an engineering-specific one.
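As a sketch of what that looks like in practice, imagine a hypothetical AC that says “orders of £50 or more ship free”, but the implementation used a strict greater-than. A test written from the implementation happily locks in the defect (all names and values here are invented for illustration):

```python
# Hypothetical implementation: the (imagined) AC says "orders of £50
# or more ship free", but the code used > instead of >=.
def shipping_cost(order_total):
    # Defect: should be >= 50 per the acceptance criteria.
    return 0 if order_total > 50 else 4.99

# A test written by reading the code codifies the defect: it passes,
# and keeps passing, even though the behaviour is wrong for the business.
def test_shipping_cost_at_fifty():
    assert shipping_cost(50) == 4.99  # "all good" — but the AC says free!

test_shipping_cost_at_fifty()
```

The test suite stays green, so nothing flags that a £50 order is being charged for shipping; only reading the AC would reveal the problem.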
A quick note on PRODUCT
In modern ways of working, engineering and development tend to form part of a product team: a team that holistically owns the product it manages. This means the success of the product falls to everyone in the team, not just the Product Owner, so we have to ensure we’re supporting the meeting of business needs.
When we get busy, or overloaded, it can be really easy to focus just on our silos and not on the whole. But that can cause us problems, because the whole team is supposed to build a product together.
Testing code via the requirements or Acceptance Criteria
When writing tests, we need to go back to the requirements (or Acceptance Criteria) given to us to make sure that the code’s logic and behaviour are in service of those asks. Tests, even at a unit level, should ideally be based on business asks rather than on what we’ve built.
Note: There may be some tricky logic that we also want to test from the implementation, but only once we know our code meets a business need.
This is where our OLD FRIEND TEST DRIVEN DEVELOPMENT (TDD) comes in. If writing tests after we code means we fall into the trap of testing the implementation, why not try writing tests before implementation, based on the only thing we have: the business asks?
Using TDD and driving our code tests from the business behaviour that’s wanted lets us check that we’re building the right thing while also testing our code logic and keeping small tests to run in a pipeline. These take a bit more thought to implement, as you may have to split a behaviour into smaller pieces to work at the code level, or have enough mocks in place to test code at a behaviour level, but the value is worth it.
This is also why having strong, holistic and well-thought-out ACs helps us: they define the business ask in a way that tells us what to test.
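To make this concrete, here is a minimal sketch of driving a unit test from a hypothetical AC before any implementation exists. The AC, the function name, and the 12-character rule are all invented for illustration, not taken from any real story:

```python
# Hypothetical AC, written Given/When/Then style:
#   "Given a password shorter than 12 characters,
#    when the user registers,
#    then registration is rejected."
#
# Step 1: write the tests straight from the AC, before any code exists.
def test_rejects_short_password():
    assert is_valid_password("short1!") is False

def test_accepts_long_enough_password():
    assert is_valid_password("a-much-longer-pass-1") is True

# Step 2: only now write the implementation, just enough to pass.
def is_valid_password(password):
    return len(password) >= 12

test_rejects_short_password()
test_accepts_long_enough_password()
```

Because the tests were derived from the ask rather than the code, a wrong implementation (say, checking for 8 characters) would fail immediately instead of being quietly codified.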
Using a way of working like TDD helps us stay focused on the business asks and not fall into the anti-pattern of testing from the implementation. Although teams could (and probably do) base code tests on requirements after implementation, it’s less top of mind to think about requirements when the implementation is in front of you. That leads to thinking OF COURSE I MET ALL OF THE ACCEPTANCE CRITERIA, I’LL TEST FROM THE CODE, which is an assumption that could be wrong.
When all we have are the requirements, they are our focus and top of mind, so we’re more likely to create good code-based tests that cover what’s needed. Our cognitive load is lower at this point too, because we can think in the small (one AC at a time) rather than getting distracted by the whole picture and implementation details.
Promoting TDD (especially outside-in) is a great way to ensure our regression tests prove the business needs are met, avoiding having to create more costly end-to-end tests.

