How to Make QA Work in Engineering


TL;DR

  • Go without a separate QA team by default
  • Hire engineers who write tests at the proper level and own quality
  • A QA team sometimes makes sense when applied carefully, as detailed below

Intro

I just read a very nice post by Regina Gerbeaux and Case Sandberg about “Setting up an engineering team structure for success”.

It’s a great article and a recommended read. But there is one thing I totally disagree with:

“I think the best thing to do is to have a dedicated QA Engineer. Senior and staff level engineers are able to go 50% faster at shipping features if they don’t have to worry about writing tests. Bugs coming in could get a test case written by a QA engineer, and there is still confidence that any fix by an engineer solves the problem.”

What? Senior engineers not writing any tests? QA doing all the testing work?

This sounds extremely wrong to me - and I’ve seen this approach fail more than once. Let’s check out some patterns of QA departments. Some of these patterns lead to failure - while others work nicely.

Failure patterns of QA departments

The unmaintainable e2e test suite

The QA department creates a huge automation suite of end-to-end (e2e) tests, using frameworks like Selenium, Playwright, or Cypress.

Such a test suite sounds like a great idea! You should automate everything, after all.

The downside is that this e2e test suite is very hard to maintain. Often, small changes in the user interface cause large parts of the suite to fail.
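To make the brittleness concrete, here is a minimal Playwright sketch of the kind of test such suites accumulate. Everything in it - the URL, the selectors, the page copy - is a hypothetical example, not taken from any real suite:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical checkout flow - URL, selectors, and page copy are
// made up for illustration, not taken from a real application.
test('user can place an order', async ({ page }) => {
  await page.goto('https://shop.example.com/checkout');

  // Brittle: coupled to the exact DOM structure and CSS classes.
  // A redesign that renames 'btn-primary' or reorders the form
  // breaks this test even though checkout still works fine.
  await page.locator('form > div:nth-child(3) > input.qty-field').fill('2');
  await page.locator('button.btn-primary').click();

  // Brittle: coupled to the exact marketing copy on the page.
  await expect(page.locator('h1')).toHaveText('Thank you for your order!');
});
```

Multiply this by hundreds of tests, and every minor UI change fans out into dozens of red tests that someone has to triage.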

Debugging failures in tests takes QA a lot of time. Maintaining the e2e suite becomes a full-time job.

Even worse - most e2e failures are not real problems, but “just” intentional changes from new stories or minor UI tweaks.

These false positives still make your deployment pipeline fail.

Engineers then get angry because the e2e tests are flaky and kill their cycle time. As a “solution” the e2e test suite gets sidelined in the pipeline. The new software gets deployed anyway - even though the e2e tests are red. Ouch.

I have observed that pattern multiple times in different companies. The e2e test suite was too large, flaky, and unmaintainable, and raised many false alerts.

The conclusion is clear: a large e2e test suite is an anti-pattern.

Engineers become lazy

I am an engineer and a human being myself. If someone else does my job, I become lazy and just delegate. And if - as Regina wrote - QA does all the testing for the engineers, it does something to the engineers.

But QA cannot test at the same level as engineers can. QA usually tests at the e2e level (the top of the testing pyramid). Engineers do everything below (unit and integration tests).

When engineers stop writing tests because QA does the testing, three things happen:

  • A large e2e test suite gets built - as written above: a clear anti-pattern.
  • Spaghetti code galore! Missing unit tests also potentially mean bad architecture that leads to spaghetti code and unmaintainable software. Ouch.
  • If the developers rely solely on e2e tests, debugging becomes extremely hard. E2e tests simply tell you that something is wrong. But they don’t tell you which component or class is responsible for the bug. Unit tests would help, because they can point you exactly to the problem. E2e tests? Not so much (see the sketch after this list).
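For contrast, here is a minimal unit-test sketch in TypeScript, using Vitest as an arbitrary choice of test runner; the calculateDiscount function is a made-up example. When one of these tests fails, it names the exact function and the exact case that broke:

```typescript
import { describe, it, expect } from 'vitest';

// Hypothetical domain function - made up for illustration.
function calculateDiscount(orderTotal: number, isReturningCustomer: boolean): number {
  if (orderTotal <= 0) return 0;
  const rate = isReturningCustomer ? 0.1 : 0.05;
  return orderTotal * rate;
}

describe('calculateDiscount', () => {
  it('gives returning customers 10%', () => {
    expect(calculateDiscount(200, true)).toBe(20);
  });

  it('gives new customers 5%', () => {
    expect(calculateDiscount(200, false)).toBe(10);
  });

  it('never discounts an empty order', () => {
    // If this fails, you know exactly which function and which
    // edge case is broken - no cross-system debugging needed.
    expect(calculateDiscount(0, true)).toBe(0);
  });
});
```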

Therefore I expect engineers to write proper unit and integration tests and make “quality” their responsibility. This is also an absolute prerequisite for continuous delivery - without anyone “having to check your code manually or adding tests afterwards”.

QA as product acceptance testers

I’ve also sometimes seen that QA is used by product to approve their tickets.

In my world, it is the Product Manager that writes the tickets and defines the acceptance criteria. The Product Manager also has a clear idea about how a story should look and feel.

Once the engineers have finished a story, the PM checks it out: ticks the checkboxes of the acceptance criteria, and also checks whether it looks and feels right. If not, the feature can be reworked by the team.

Sometimes the approval of a ticket is delegated to QA.

The Product Manager writes the acceptance criteria and defines the UX/UI aspects. Once the ticket is finished, QA comes and checks the acceptance criteria. The Product Manager never approves the ticket or even tries it out.

Ouch.

This is very inefficient, as QA needs a huge amount of time to understand each ticket. And QA is not in the product person’s shoes and does not know how the feature should look and feel.

Sometimes your Product Managers complain that they don’t have enough time to do “everything”. Then it is even more important to make sure they have proper time management in place and concentrate on the most important and urgent tasks. Approving a ticket is at the core of what a product manager should do.

Three patterns that make QA work

Knowing what does not work is important - but what does work when it comes to QA matters even more. Let’s go over three QA patterns that worked for me!

Metrics, quality KPIs and post-mortems

The QA team is responsible for overseeing the quality of the software in the department. That’s what the Q is about.

Some concrete examples:

  • If a software release has many bugs, it is the responsibility of QA to create a post-mortem. This makes sure the same problems don’t happen again. That’s a soft approach to “quality”, but a very important one. Engineers should be held accountable. Mistakes can and should happen. But the same mistake should only happen once.
  • Usually departments have recurring meetings like a Show and Tell or Team Lead meetings. In one of these meetings QA is present and highlights the current quality KPIs. These quality KPIs might be, for instance, the number of bugs, the number of customer support requests, or DORA metrics like cycle time and release frequency (a minimal sketch of such a KPI follows this list). If the metrics go south, it is the responsibility of QA to highlight these problems.
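As a rough sketch of what tracking such a KPI can look like: a few lines of TypeScript that compute an average cycle time from merged pull requests. The PullRequest shape is an assumption for illustration - in reality the data would come from your Git host’s API, and note that DORA defines lead time from commit to production, so this PR-based version is only an approximation:

```typescript
// Hypothetical data shape - in practice this would come from your
// Git host's API (GitHub, GitLab, ...).
interface PullRequest {
  openedAt: Date;
  mergedAt: Date;
}

// One simple cycle-time proxy: average hours from opening a pull
// request to merging it.
function averageCycleTimeHours(prs: PullRequest[]): number {
  if (prs.length === 0) return 0;
  const totalMs = prs.reduce(
    (sum, pr) => sum + (pr.mergedAt.getTime() - pr.openedAt.getTime()),
    0,
  );
  return totalMs / prs.length / (1000 * 60 * 60);
}

// Example: one PR merged after 24h and one after 48h -> average 36h.
const example: PullRequest[] = [
  { openedAt: new Date('2024-05-01T09:00:00Z'), mergedAt: new Date('2024-05-02T09:00:00Z') },
  { openedAt: new Date('2024-05-01T09:00:00Z'), mergedAt: new Date('2024-05-03T09:00:00Z') },
];
console.log(averageCycleTimeHours(example)); // 36
```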

Supporting teams with exploratory testing when large features go live

In my world, the engineering team plus the product person of a team can decide whether a feature can go live. The engineers write unit and integration tests. Product checks the acceptance criteria and overall impression of the new feature. If both engineering and product give a thumbs up, the feature can go live. In 99% of the cases, without any involvement of QA.

But sometimes it makes sense to loop in QA: for instance, when the feature is really large and touches many different areas of the application. Then the product approval can be complemented with a separate QA step, such as exploratory testing.

Automation - for a few critical paths

While a huge e2e test suite is an anti-pattern, as described above, automation is still important.

A best practice is having a small number of e2e tests that get triggered after each deployment and check the critical paths of the application - login/logout and similar flows.

This usually helps to spot most problems in advance. The key is to keep the e2e suite small and maintainable.
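Here is a minimal sketch of such a post-deployment smoke test, again with Playwright; the URL, labels, and test account are hypothetical placeholders. Note the contrast to the brittle example above: it covers one critical path and uses user-facing locators that survive most UI refactorings:

```typescript
import { test, expect } from '@playwright/test';

// Post-deployment smoke test for one critical path: login/logout.
// The URL, labels, and test account are hypothetical placeholders.
test('critical path: login and logout', async ({ page }) => {
  await page.goto('https://app.example.com/login');

  // Prefer user-facing locators (labels, roles) over CSS classes -
  // they keep working through most UI refactorings.
  await page.getByLabel('Email').fill('smoke-test@example.com');
  await page.getByLabel('Password').fill(process.env.SMOKE_TEST_PASSWORD ?? '');
  await page.getByRole('button', { name: 'Log in' }).click();
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();

  await page.getByRole('button', { name: 'Log out' }).click();
  await expect(page.getByRole('heading', { name: 'Log in' })).toBeVisible();
});
```

A handful of tests like this, run as the last step of the deployment pipeline, catches the outages that matter without turning maintenance into a full-time job.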

Summary

Most QA teams are a waste of money and time, and they make your engineers and product people lazy. A separate QA step, with back and forth between QA, engineering, and product, will kill your cycle time.

Go without any dedicated QA team by default.

If you hire for the right engineering skills, you won’t need QA at all. Contrary to Regina’s opinion quoted at the start of this article, make your engineers write tests at the proper level - otherwise bad things will happen.

But sometimes a small and dedicated QA team can help in great ways:

  • To keep your engineers on their toes by watching key quality metrics and doing post-mortems
  • Helping with exploratory testing when large features go live
  • Creating e2e tests that test the critical path of your application after deployment

Your mileage may vary…
