Using Triforce to define Acceptance Criteria
Acceptance Criteria (ACs) are the basis for providing the “what” of any business ask. At their core they are a series of functionality statements telling us what behaviours we want from a feature, linking the business ask back to engineering. As testers we use the Acceptance Criteria to help guide our testing, and can even push our testing to the left (earlier) by helping to refine the ACs to think about errors and edge cases.
How do we do that? Using Triforce!
What is Triforce?
Triforce sessions are used to refine and create User Stories to provide a shared and complete understanding of what is to be built. You might have seen these sessions named Three Amigos or Power of Three; I call them Triforce because 1) it’s a non-gendered name and 2) it sounds cool and fun to be a part of. They are a collaboration between project team members who each champion a perspective that’s needed: Business, Engineering & Testing.
- Business – provide context on requirements and user needs (e.g. product owner / product designers).
- Software engineering – provide context on implementation details and what’s possible.
- Testing – seek to ensure tickets are testable and ready for development by ensuring all angles are covered.
The Tri in Triforce does not refer to the number of people in a session; it refers to the number of perspectives (or hats) that should be represented. Given that people will have different amounts of domain knowledge or front-end / back-end expertise, it’s very likely that more than one person will need to represent a perspective.
The session itself does not need to be onerous. In the past I’ve had a running schedule of 15 minutes after stand up to go through any stories that needed refining.
Triforce is not the time for bringing everybody along for the journey of what we’re doing. That happens in sprint kick off ceremonies where tickets that have been refined (as part of Triforce) can be discussed. Nor is it a time to talk about potential solutions; we’re just interested in making sure tickets have ACs and are ready to bring into the sprint.
Why refine the ACs?
The short answer is: to be ready to start development. We need to know what we’re working towards and have a view of the business needs before we can start work. If we don’t drive out good Acceptance Criteria then we could run into all sorts of risks:
- Scope creep from not knowing what we’re building / how far to go.
- Building the wrong thing because we’ve misinterpreted the ask.
- Slowing down our team velocity because we don’t have enough information to get started.
- Slowing down velocity because feedback on what we’ve built is coming in too late.
- Poor quality in what we’ve built because we haven’t considered what could go wrong.
- Testing the wrong thing, so not knowing whether what we’ve built is right or done.
As a tester, helping to refine the ACs allows us to explore and apply critical analysis to features and functionality before any code has been written. This means we can identify problems before they occur and build fixes into the code up front, rather than having to raise bugs.
Writing good ACs
Each User Story should have Acceptance Criteria written before implementation to guide design, development and testing.
We should ensure a story meets the Definition of Ready before it is picked up for development to ensure we know what we’re delivering and that it’s testable to prove this.
Help the team to think about what information we need to drive development.
- What are the requirements the user had, how are we meeting these?
- What non-functional requirements or agreements do we have?
- Do we know whether it can be picked up in this sprint?
To this end every ticket should have Acceptance Criteria to guide development and testing as well as a sizing estimate to guide whether we can achieve it in the sprint.
Each AC should be independently testable.
Each Acceptance Criteria should refer to one piece of behaviour that we want to achieve and we should be able to test that this has been done.
Help the team to break down any verbose ACs into multiple criteria that each document one behaviour at a time: ask whether an AC contains multiple behaviours to confirm understanding, then suggest splitting it out.
Use short statements that cover one thing:
- Show a student’s current assessment score.
- Provide an option to print.
- Provide an option to share.
- Display an error message if the service is not responding.
Over trying to cover off multiple things at once:
- Show a student’s current and past assessment scores with an option to print, share and save the page, or show an error if the service is not responding.
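As a sketch of why the short form matters for testing: each single-behaviour AC above maps one-to-one onto a check with a clear result. The `AssessmentPage` object below is a hypothetical stand-in for the real page, purely for illustration.

```python
class AssessmentPage:
    """Hypothetical stand-in for the page under test."""

    def __init__(self, score=72, service_up=True):
        self.score = score
        self.service_up = service_up

    def current_score(self):
        # Score is only shown when the backing service responds.
        return self.score if self.service_up else None

    def error_message(self):
        return "Service unavailable" if not self.service_up else ""

    def can_print(self):
        return True


# AC: Show a student's current assessment score.
def test_shows_current_score():
    assert AssessmentPage(score=85).current_score() == 85

# AC: Provide an option to print.
def test_print_option_available():
    assert AssessmentPage().can_print()

# AC: Display an error message if the service is not responding.
def test_error_when_service_down():
    assert AssessmentPage(service_up=False).error_message() == "Service unavailable"
```

The bundled “multiple things at once” version would need one sprawling test that can fail for several unrelated reasons; the split ACs each give one reason to fail.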
ACs should have a clear Pass / Fail result.
Each Acceptance Criteria should be written in such a way as to detail what should be present to a user for it to be met.
Use something specific and measurable:
- Display a statement balance upon authentication.
- Display the total balance in pounds sterling.
- Display the payment date due in DD/MM/YYYY.
Over something general and unspecific:
- Interact with the balance service endpoint to return data.
- Page is usable and accessible.
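To illustrate the difference, a specific and measurable AC translates directly into a pass/fail assertion. The helpers below are hypothetical, assuming balances held in pence and standard date formatting:

```python
from datetime import date

def format_balance(pence):
    """AC: Display the total balance in pounds sterling."""
    return f"£{pence / 100:.2f}"

def format_due_date(d):
    """AC: Display the payment date due in DD/MM/YYYY."""
    return d.strftime("%d/%m/%Y")

# Each AC has an unambiguous pass/fail result.
assert format_balance(1050) == "£10.50"
assert format_due_date(date(2024, 3, 1)) == "01/03/2024"
```

“Page is usable and accessible” has no such check until it is pinned down to something measurable, like a named accessibility standard.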
ACs focus on the end result and not the solution approach (i.e. what behaviour will the user be able to do).
We need context as to what a user will be able to do to inform development and testing, rather than how. Adding implementation details will bias development and influence testing, so should be avoided. To this end, ACs are not design details; they should guide design just as they guide coding and testing.
Use the what of behaviours:
- User should be able to capture a title.
- A user can capture a title with A-z, 0-9 and special characters.
- User should be able to capture a description.
Over the how of implementation:
- A free text box titled “title” will be present on the screen under the main page title.
- A free text box titled “description” will be present on the screen under the title field.
- A horizontal rule will be displayed under this field.
ACs should include non-functional criteria (where applicable).
Where appropriate we should include Acceptance Criteria for considerations such as performance, security or accessibility when needed.
Help the team to identify any additional needs of the system by reminding them about performance, security and accessibility. Remember to point out that, as this is a tech demo, these might not be needed right now and, if not, should not be captured.
Use specific and testable non functional requirements:
- The system should respond to all search requests within 10 secs of receiving the request.
- The system should be able to handle 50 concurrent users that are creating, editing and viewing simulation information.
- This page should meet a minimum WCAG 2.1 AA standard for accessibility across all fields and controls.
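Specific NFRs like these can be turned straight into automated checks. A minimal sketch, assuming a hypothetical `search` function standing in for the real endpoint:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def search(query):
    """Hypothetical stand-in for the real search endpoint."""
    time.sleep(0.01)  # simulate a little work
    return {"query": query, "results": []}

# AC: the system responds to all search requests within 10 secs.
def test_search_response_time():
    start = time.monotonic()
    search("maths")
    assert time.monotonic() - start < 10

# AC: the system handles 50 concurrent users.
def test_concurrent_searches():
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(search, [f"q{i}" for i in range(50)]))
    assert len(results) == 50
```

The WCAG criterion is equally testable, though typically with a dedicated accessibility audit tool rather than a unit test.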
ACs should cover the happy, negative and edge cases.
Acceptance criteria should tell us how to design and implement for all user journeys. This includes error handling and appropriate edge cases. When a team focuses their ACs on implementation details (the how), they usually focus only on the happy path. As testers we should use critical analysis skills to capture acceptance criteria for things like:
- What happens if the network fails, can I save?
- What happens in an error condition?
- User concurrency, what happens if another user edits or views this?
- Are there limits or boundaries to inputs?
- Are there any logical limitations to the functionality (can everything map to everything?)
- Refer back to other features and functionality to think holistically.
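Edge-case ACs driven out by these questions give us concrete boundaries to assert against alongside the happy path. A minimal sketch, assuming a hypothetical `validate_title` rule of 1–100 characters:

```python
def validate_title(title):
    """Hypothetical input rule: titles are 1-100 characters long."""
    return isinstance(title, str) and 1 <= len(title) <= 100

# Happy case: a normal title is accepted.
def test_valid_title():
    assert validate_title("End of term report")

# Negative case: empty input is rejected.
def test_empty_title_rejected():
    assert not validate_title("")

# Edge case: input just over the upper boundary is rejected.
def test_long_title_rejected():
    assert not validate_title("x" * 101)
```

Without an AC stating the limit, none of these boundary checks would exist until a bug forced the conversation.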
ACs should be clearly written.
Acceptance criteria should be meaningful to their audience, using plain and concise language. Use something like the Monzo Tone of Voice to ensure things are captured in a way that everybody can understand. Avoid jargon, half-formed thoughts, placeholders and acronyms that could be misinterpreted.
How I get involved in a Triforce session
As the tester in a Triforce session I bring the insights related to quality. My role is to try and flesh out the ticket with ACs that will allow us to fix bugs before they even happen by covering off edge and negative cases. I also help guide understanding by asking probing questions to drive out the ask and the what of the user story.
- Raise risks about how things might not work (I sometimes conduct a risk analysis before joining the session).
- Bring insights as to what the customer might want or how they might use it.
- Ensure that things are testable by asking “how would we know this is done”?
- Help shape the what by asking questions that’ll lead to clarity: “What can the user do here?” or “What does this look like?”
- Question monoculture by being a different voice in the room and not agreeing with everything.
SO LET’S TRY USING TRIFORCE TO GET OUR TEAM THINKING ABOUT “THE WHAT” AND PUSH OUR TESTING EARLIER TO AVOID BUGS.