In testing, context is important

In my testing career I’ve certainly found a number of issues or bugs that got ignored or weren’t prioritised. My first thought always used to be “How dare they ignore the awesome bugs I found, don’t they care about making this good?”, which, it turns out, was wrong.

Yes people care about things being good, but maybe their version of good didn’t align with mine.

Being told I wasn’t pragmatic

I’ve been given feedback before that I found baffling: that my bar for quality was too high.

Well, sure… obviously… a high bar for quality is a good thing, right? I’m a tester after all.

That feedback left me really confused, especially when it happened more than once; I assumed that managers just didn’t understand that my role in the project was to find all the issues and make it good.

Fig 1. Pikachu is confused… and he’s not the only one.

It would take me a couple more roles to realise that my good didn’t mean their good.

What is good?

When we look up the word good we get definitions like “that it meets a high standard”, or a moral standpoint: that good “is a state of being better or more righteous than others”.

Fig 2. Mandatory D&D reference about alignments from WorldAnvil.

This can warp our thinking into assuming that when people say good, they mean the best; that anything less than perfection (or bug free) isn’t good. But there’s another definition of good: something “that is desirable or approved of”. With this definition we can understand that what good really means is “good enough for me, now”.

As testers we need to be pragmatic enough to understand what good enough means for our projects and that’s what requires context.

What is good enough?

In the real world, we only have a finite amount of resources to complete a project. Good enough means that our product will meet the needs of the end users of the project, rather than trying to solve all problems and issues that might occur. So that means we need to understand the context of our projects, the use of products and our customers in order to drive out what good enough means.

  • If the product is a demo or technical demonstrator, only exercising the happy path in a limited environment, then do we really need to know about edge cases or non-functional requirement (NFR) details?
  • If the product is an API used by only a handful of known expert users who know the system inside out, then do we need to consider all limits of usability and learnability?
  • For a financial product, perhaps speed, reliability and security are more important than accessibility and usability.
  • For a child’s education game with no online connectivity, used only at home, we may care more about understandability and accessibility than security.

To work out what good looks like, we can talk to our teams, user researchers and Product Owners to see what they think and make sure we’re all aligned. One way we can do this is through setting acceptance criteria in Triforce sessions, where we can talk about “what would make this thing good enough to be done”. Another way is to use debriefs in exploratory testing sessions to align our view of good (based on what we’ve seen from testing) with the developers and the rest of the team.

If we’re raising issues that aren’t getting fixed, then in all likelihood our idea of “good enough” doesn’t match the team’s. We need to remember that we’re not competing to find the most bugs or to screw over the developers; we’re working together to build something.

Agile changes everything

Being in an Agile team means that, as testers, we have to be more on top of our team’s needs and wants regarding quality. If we can’t report quickly and pragmatically on the quality of what’s being built, our team may stop engaging with us. We have to be seen as trusted advisors who work with the team to help them understand what they need to know about quality.

Fig 3. Wormtongue was seen as a trusted advisor… but don’t be like him.

In some instances, we may have to set our own scope of testing rather than being told specifically what to test. This is when having strong context of the project and product matters most, so that we know what to test and how much to test to support the team.

  • Security – is there a risk that data can leak or people can be manipulated? If this is all offline or on a closed system then maybe not. If everything in the system is publicly available then do we really care about leaks? Does the implementation automatically sanitise some attacks so it doesn’t matter if special characters can be input?
  • Performance – is there a risk that this will slow down or fail? If this is a one-off demo on our systems with developers on hand, can we manage a memory leak (if it happens)? Do we really need to be able to support 100 users if the demo is only with one?
  • Accessibility & Usability – who’s using this system? Are they super users who’ve had training already and don’t need help? Is this a demo where branding matters more than usability?
  • Compatibility – what are the customers really using? Can we limit our supported environment to certain hardware and software, and if so, do we care about the others? Does it matter that this doesn’t run on an outdated browser or operating system version?
  • Functionality – do we care about errors? Do users need to be supported? If this is a happy-path-only demo given by us, we probably don’t care so much about edge cases.
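The scope decisions these questions produce can even be written down explicitly, so the whole team can see (and challenge) them. Here’s a minimal sketch of that idea in Python; the context names, risk categories and the `in_scope` helper are all illustrative, not from any real project:

```python
# A sketch of recording context-driven test scope as data, so "what we test
# and why" is explicit and reviewable. All names here are hypothetical.

RISKS = ["security", "performance", "accessibility", "compatibility", "functionality"]

# What "good enough" covers for a few example project contexts.
SCOPE = {
    "offline_demo":   {"functionality"},                                       # happy path only
    "expert_api":     {"functionality", "performance", "security"},            # known power users
    "finance_portal": {"functionality", "security", "performance", "compatibility"},
}

def in_scope(context: str, risk: str) -> bool:
    """Return True if a risk category is worth testing in this context.
    An unknown context falls back to testing everything, the cautious default."""
    return risk in SCOPE.get(context, set(RISKS))
```

A table like `SCOPE` could then drive a test runner (for example, skipping whole categories of tests via pytest markers), but even as a plain document it turns “be pragmatic” into a concrete, agreed scope rather than a tester’s private judgement.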

I’ve found that the more pragmatic I am, by asking these kinds of questions, the more engagement I get from my team. I’m not trying to prove I’m a good tester by finding *all the issues ever*; I’m trying to be a good teammate who cuts through the noise to give useful information.

Quality of information over quantity.

Pushing for a “better” good

Okay, Callum, I hear you say: so you’re telling us to be pragmatic and let the project tell us how good things are, right?

Well… yes, and sometimes we need to push for better.

As testers we’re the experts in quality, and we have a high bar for what good looks like. We need to be the ones to say when good enough isn’t good enough, but that call still needs to be pragmatic and driven by context. When we push for better quality it should come from a place of understanding the customer and the business’ end goals, and we need to use this understanding to drive our narratives.

  • We’ve profiled the end users and learned that a sizeable proportion have accessibility needs, so we should aim for WCAG AA standards.
  • We’re working in the finance domain, where a data leak would be catastrophic, so we need to consider stronger security.
  • In this demo the customer will be hands-on and the system isn’t very learnable; adding more error handling will support the customer and make the product more saleable.

It’s when we push for better (or perfect) without this context that we get told we’re not being pragmatic and can lose our status as the team’s trusted advisor.

