When you join a new team, one of the first things to do as a tester is set out and communicate your approach to testing. Doing this lets us set out our stall and shows the team how we’re here to help: that we want to pull in the same direction as everyone, that we’re pragmatic, and what they can hold us accountable for doing.
I’ve been working with a number of teams and testers who’ve asked how to write an approach (and I’ll be writing a new one soon for my new team), so here are my tips on writing one. We’re going to keep it simple by answering the five Ws and an H: Who, What, Where, When, Why and How.
Who are the key players in the team and what do we expect them to do? Who is accountable and responsible for testing tasks that need to happen?
I tend to capture all of the roles that exist in the team and then state the tasks they are accountable and responsible for. Accountability means they’re in charge of ensuring a task happens; responsibility means they actually do it.
| Role | Accountable for | Responsible for |
| --- | --- | --- |
| Test Lead | Test approach<br>Reporting quality to the team | Functional testing of tickets<br>Non-functional testing<br>Environment set up<br>Involved in Triforce (story grooming) |
| Product Owner | Writing user stories<br>Setting up / running Triforce meetings<br>Setting the Definition of Done<br>Agreeing tickets are done | User acceptance testing |
| Developer | Ensuring testing happens on tickets<br>Unit testing strategy | User acceptance testing |
The reason we capture this information is to make it clear that testing can and should be (and is) a whole-team sport. We expect everyone to have a part to play, so we need to communicate that to them.
What are we going to test? This is an important part of the approach, as it’ll inform how we want to test. We need to break down what’s going to be developed in order to make a clear assessment of the techniques and tools we’ll need to use.
I use the same techniques as I do when breaking something down to create exploratory testing ideas. Looking at the architecture, documents and designs, identify the pieces that’ll be included in the project (including non-functional requirements, deployments, environments and any documentation).
This requires analysis of the actual project, so resist the temptation to copy/paste something high-level and generic; a generic approach is no help to anyone. By breaking the project down into smaller pieces, it’ll be much easier to know how we want to test things.
Where can people find information about testing and bugs, and where will we be doing our testing? This section tells other people where to go for information about quality, and also starts to shape what tools we’ll need for collaboration and testing.
I think about all the things I’ll want to share (progress, bugs, quality narratives, test notes, coverage) and look at what I want to test (from above) to think about where I’ll share and use them. That could be a meeting, a Jira dashboard, a folder or even Slack.
| Item | Where to find it |
| --- | --- |
| Test strategy | G-Drive folder (link) |
| Acceptance criteria | Jira tickets on board Project (link) |
| Risk analysis mindmap | Attached to Jira ticket |
| Daily test report | Shared in Slack channel #team |
| Defects | Raised in Jira (link to filtered bug list) |
This section can be a simple list of “if you want to find this thing, look here”. It’ll give the team and stakeholders an understanding of what you intend to do and share and where to find it.
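As an illustration, the “link to filtered bug list” entry could point at a saved Jira filter so nobody has to hunt for defects. A minimal JQL sketch (the project key `PROJ` is a placeholder, not from the original post):

```
project = PROJ AND issuetype = Bug AND resolution = Unresolved ORDER BY priority DESC
```

Saving a filter like this and linking it from the approach means the team and stakeholders always see the current open bugs, not a stale snapshot.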
When will this testing be happening, and will there be different testing at different times? Will we be pushing our testing left and right? This section informs others about when to expect testing to get involved and be done (especially useful to combat the “testing at the end” mentality of waterfall).
Pull out the parts of the software development lifecycle and think about the testing that needs to happen during these stages.
- Discovery; how can we help understand and drive out requirements? What will we need to do to help set up our testing?
- Design; will we help write ACs and review architectures and UX designs? Is there a definition of ready for us to help the team to get to?
- Development; what testing needs do we have during development? Is there code-based testing that needs to happen, or will we test in parallel with development?
- Merging; does integration or environmental testing need to happen? Do you need to test your merging processes?
- Acceptance; do you need an acceptance phase? What testing happens here? How can you get the team to meet a definition of done? Is there E2E testing or bug bashes that will happen here?
- Pre deployment; what needs to be in place to support deployment? Are there documents, release details or environmental factors to consider?
- Post deployment; what testing is needed to check that the deployment worked and what happened afterwards? Is monitoring needed?
The when section, along with the what, will help drive you towards the how; it’s an additional breakdown of the overall task of project testing. Remember to ask yourself what is good enough at each of these times; this may change project by project, so be aware of this project’s needs.
In the why, we look at why this approach makes sense. What is it about the way we want to test that makes sense for this project? We need to look at the specifics of the project environment to see what will work for us.
I look to identify the needs of the project to help me drive out the how, and then come back afterwards to justify the how with a why.
- Is this a short-term project, where it wouldn’t make sense to spend a lot of time automating?
- Are there regulatory requirements that mean you have to capture data or test notes in a certain way?
- Is there a tool or technique managers want you to use?
- Do we need to get in quickly and get lots of information, meaning we might want some exploration?
- Do I have to work with others who need things done in a certain way?
Each of these statements justifies a “why I did it like this”, which we can reverse-engineer into a how. Once we’ve set that how, we can reframe the why into a proper justification of the how.
The how is the main part of the strategy. This is where we take everything we’ve learned and actually state the testing we will do.
I take all the parts of the project that we identified in “what” and apply where, when and any why to come up with my test approach for each area:
- What – APIs.
- Where – Test notes in Jira / testing local builds.
- Why – We have Postman licenses / no time to set up a framework / a short project needing information quickly.
My approach for API testing will be exploratory testing using Postman. Test notes will be made and stored in Jira, and we’ll share information after test sessions via a debrief to the developer and to the team via Slack.
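To make the “test notes and debrief” part concrete, here’s a minimal sketch (in Python, purely illustrative) of the kind of structure a session’s notes could take; the class and field names are my own invention, not from any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class TestSession:
    """A session-based test record: charter, notes and bugs raised.

    Illustrative only -- field names are hypothetical, not a real tool's API.
    """
    charter: str                                   # what the session set out to explore
    notes: list[str] = field(default_factory=list)
    bugs: list[str] = field(default_factory=list)  # e.g. Jira issue keys

    def debrief(self) -> str:
        """One-line summary to paste into Slack after the session."""
        return (f"Charter: {self.charter} | "
                f"{len(self.notes)} notes, {len(self.bugs)} bugs raised")

# Example session against a hypothetical /orders endpoint
session = TestSession(charter="Explore /orders API error handling")
session.notes.append("400 returned for missing customer id")
session.bugs.append("PROJ-123")
print(session.debrief())
```

Even if you never write code for it, agreeing this shape up front (charter, notes, bugs, a short debrief) keeps every session comparable and makes the Slack summaries easy to scan.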
Once you have a how for all of the sections, you have an approach in place that covers what you’ll be building, NFRs, deployments, environments… you name it! Then you can treat it as a living document and keep updating it as things change and more information becomes available.