Explainers: How we approach quality assurance testing
Explainer (noun): a statement, article, or video that provides an explanation of a concept, topic, situation, etc.
When I started at Sitewards I had already been a professional developer for approximately 3 years. Most of what Sitewards was doing I’d seen or done before, and the rest I figured I could tackle pretty comfortably. However, I was at a loss on a couple of topics:
Project Management, and
Quality Assurance (QA)
I had access to neither of these resources in previous jobs. Instead, we talked directly to the customer, completed the work and tested it ourselves, and deployed it to production. At Sitewards, there were additional people in the mix! The project management team is a story for another day; for now, I’d like to chat about how I’ve come to love our QA team, and the approach we take to QA. It is perhaps most useful to understand how the QA process works by viewing it through the lens of a normal ticket.
Writing a story
In discussion with a client we will determine that a new feature should be implemented. We will discuss who the users of this feature will be, how they are expected to use this feature and how we can make sure that they are using the feature as expected (or not) going forward.
These requirements are noted down into a “story ticket” in Jira. A developer will then estimate this work, and when time allows, take the ticket for implementation.
Implementing the feature
A developer is then free to implement the change as they see fit. We do not legislate any particular development tooling or environments.
Reviewing the feature
Once a developer has completed the work, it is subject to code review. Other developers will review the work that developer has completed, ensuring that they:
Understand the changes
Cannot see any immediate issues
After the ticket has passed code review, it qualifies for QA.
Before we continue onto what someone in QA does, it’s perhaps best to step back and see exactly what we mean by “Quality”.
Sitewards tries to produce the highest quality of work for the budget we’re allotted. We set standards for how well our software must work, and make the trade-offs explicit between improving it even further and staying within budget. The standards are defined in two ways:
Each story has its own “Definition of Done” (or DoD): the mechanism by which we determine that the work is complete. Additionally,
Each site has an additional “Definition of Done”, which determines if the site still performs to expectations.
These standards cover a range of different things, such as:
Does the implementation of the feature do what the story intended to do?
Does the implementation work correctly in all defined browsers and devices?
Does the implementation introduce any new issues in critically important features?
An example DoD is shown below:
# Definition of Done (or DoD)
### Browser compatibility
The website is expected to be compatible with the following browsers:
* Safari latest, latest–1 (Mac OS)
* Safari Mobile for iPad 2, iPad Mini, iPad with Retina Display (iOS 7 or later), for desktop storefront
* Safari Mobile for iPhone 4 or later; iOS 7 or later, for mobile storefront
* Chrome latest, latest–1 (any operating system)
* Firefox latest, latest–1 (any operating system)
* Internet Explorer 11 or later, latest–1
* Chrome for mobile latest–1 (Android 4 or later) for mobile storefront
where latest–1 means one major version earlier than the latest released version.
### Critical Features
These features should be tested whenever there are architectural changes. If in doubt, test them.
1. The user can complete a purchase with all payment methods
2. The user can create an account
3. The user can log in to that account
4. The user can search for a given product by name and find it
5. The user can visit a product page
6. The user can find a dealer with the “dealer search” (Händlersuche)
7. The administrator can log into the admin panel
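Checklists like this lend themselves to being tracked in code. Below is a minimal sketch of how the critical-feature checklist could gate QA sign-off; the feature names mirror the DoD above, but the tracking function itself is a hypothetical illustration, not Sitewards’ actual tooling:

```python
# The critical features from the DoD, as a simple checklist.
CRITICAL_FEATURES = [
    "complete a purchase with all payment methods",
    "create an account",
    "log in to that account",
    "search for a product by name and find it",
    "visit a product page",
    "find a dealer with the dealer search",
    "log into the admin panel",
]

def qa_signoff(results: dict) -> bool:
    """A ticket passes QA only when every critical feature has been
    explicitly verified; an unchecked or failed feature blocks sign-off."""
    return all(results.get(feature) is True for feature in CRITICAL_FEATURES)

results = {feature: True for feature in CRITICAL_FEATURES}
print(qa_signoff(results))            # every feature verified
results["create an account"] = False
print(qa_signoff(results))            # one failure blocks sign-off
```

The point of the explicit `is True` check is that a feature that was never tested counts the same as one that failed: silence does not pass QA.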
Once work has qualified for QA, a new QA ticket is created. It usually looks something like the following:
Heya! This work needs quality assurance. __GENERAL_THING__ needs testing; specifically, the following functionality needs to be checked:
A full test of all critical features defined by the DoD *__IS_OR_IS_NOT__* required.
The following stores should be included in this testing:
* German
* English
Additionally, please test this work in all browsers defined by the DoD
The particulars of this environment are:
h2. Resources:
* __THE_FULL_URL_TO_SITE__ (Website)
* __ANY_OTHER_URLS_TO_OTHER_SERVICES__ (__OTHER_SERVICE__)
The DoD should have more specific information:
* https://bitbucket.org/__ORGANISATION__/__PROJECT__/src/master/docs/DOD.md
Remember, bugs are good! Anything that feels odd to you is a bug. You are the closest thing we have to a customer!
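Because the ticket is a template with `__PLACEHOLDER__` slots, it can be rendered programmatically. The sketch below fills a shortened version of the template above from a few parameters; the function, field names, and example URL are illustrative assumptions, not our actual tooling:

```python
# A shortened form of the QA ticket template, using str.format placeholders.
QA_TEMPLATE = """Heya! This work needs quality assurance. {general_thing} needs testing.

A full test of all critical features defined by the DoD *{is_or_is_not}* required.

The following stores should be included in this testing: {stores}

h2. Resources:
{resources}
"""

def render_qa_ticket(general_thing, full_test_required, stores, resources):
    """Fill the template; resources is a list of (url, label) pairs."""
    return QA_TEMPLATE.format(
        general_thing=general_thing,
        is_or_is_not="is" if full_test_required else "is not",
        stores=", ".join(stores),
        resources="\n".join(f"* {url} ({label})" for url, label in resources),
    )

ticket = render_qa_ticket(
    "The new checkout step",                       # hypothetical feature
    full_test_required=True,
    stores=["German", "English"],
    resources=[("https://stage.example.com", "Website")],  # hypothetical URL
)
print(ticket)
```

Generating the ticket rather than copy-pasting it means the “is / is not” decision and the store list are made explicitly every time, instead of stale placeholders slipping through.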
The QA team member will then assign themselves to this ticket and begin testing the environment. They are somewhat ruthless, testing not only whether the feature works, but whether it breaks in ways the developer isn’t likely to have thought of.
As the QA team member finds a flaw in the work, they create a new “bug ticket”. This ticket will typically include:
A screenshot of the bug
The version and type of the browser in which the bug was reported
The URL in which the bug occurs, or
A series of steps in which the bug may be reproduced.
These tickets are then assigned back to the developer, who must then go and fix them.
And the process repeats
Once the QA team is happy that there have been no bugs introduced as a result of the work (or that all bugs have since been fixed) and that the feature works as expected, it graduates to deployment.
Deployment is a straightforward task in which a developer queues the new code through our deployment automation. Generally speaking, we deploy each change immediately after it passes QA, and only one change at a time, so no further QA is required before sending the code to production.
Following this pattern has yielded tremendous benefits for Sitewards. Unfortunately, these are by their nature benefits that our clients do not often see: it’s difficult to explain that issues caught and fixed before production are “bugs” at all, and not simply part of the development process. However, it keeps our work at a reasonably high quality, and prevents most critical issues from reaching production environments.
Additionally, our QA team has several times caught critical issues in Magento core releases, including, in one particularly unfortunate instance, a full page cache bug that prevented products from being added to the cart. Awkward!
The QA team is an invaluable part of the service we provide here at Sitewards, and I hope this post makes it somewhat clearer how they fit into the development lifecycle.