After giving a 99-second talk at a QA conference, I decided to write a blog post on the SMART* testing strategy I discussed there. As a QA engineer I have learned and used various traditional testing strategies, but I realized each one has its limitations. Combining them to test a given story or feature can create test redundancy and shift focus away from what matters.
We talk about valuing quality over quantity, but we don't always apply this while testing a feature story. We end up putting in a greater "quantity" of testing hours to gain more "quality". That may work, but it is not efficient.
After weighing the strengths and weaknesses of the different strategies, I explored and experimented with a hybrid testing technique that saves time while still retaining the quality of the feature deliverable. I call it SMART* testing.
SMART* testing gives each feature its own testing strategy. SMART* testing, combined with feature testing, helps deliver healthy features to customers.
What is SMART*?
In product terminology, S.M.A.R.T. stands for Specific, Measurable, Achievable, Relevant, and Time-oriented.
But in testing, I classify it as:
S – Smoke testing
M – Most Important workflows
A – Application of Knowledge
R – Relevant Regression | Running Automation tests
T – Tree-structure Testing
* – for experienced QA testers/engineers.
SMART* testing is best suited to testers who have experience in the company, know the company's goals, and have deep knowledge of the product.
Before going through S.M.A.R.T. in brief, please remember that SMART* testing is not a replacement for feature testing. I've included example use cases from my time at Paperless Post after each definition.
S – Smoke testing – Preliminary testing that reveals simple failures severe enough to reject a story/feature.
Examples of small but crucial functions: login and signup, the search engine.
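A smoke pass like this can be automated with a handful of fast, fail-fast checks. Here is a minimal sketch in plain-assert (pytest-style) Python; the `AppClient` stub and its endpoints are hypothetical stand-ins for a real application client, not an actual Paperless Post API.

```python
# Minimal smoke-suite sketch. AppClient is a hypothetical stub that
# simulates only the crucial functions a smoke pass touches.

class AppClient:
    """Stub client covering login, signup, and search."""

    def __init__(self):
        self._users = {"demo@example.com": "s3cret"}
        self._catalog = ["birthday card", "wedding invite", "thank-you card"]

    def signup(self, email, password):
        if email in self._users:
            return {"status": 409}  # already registered
        self._users[email] = password
        return {"status": 201}

    def login(self, email, password):
        ok = self._users.get(email) == password
        return {"status": 200 if ok else 401}

    def search(self, query):
        hits = [item for item in self._catalog if query in item]
        return {"status": 200, "results": hits}


def run_smoke_suite(client):
    """Fail fast: any broken core function is severe enough to reject the build."""
    assert client.signup("new@example.com", "pw123")["status"] == 201
    assert client.login("new@example.com", "pw123")["status"] == 200
    assert client.login("new@example.com", "wrong")["status"] == 401
    assert client.search("card")["results"]  # search returns at least one hit
    return True
```

The point is breadth and speed, not depth: each assert is a go/no-go gate, and the first failure rejects the story before any deeper testing is spent on it.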
M – Most important workflows – Validate start-to-finish, real-world scenarios and workflows to expose failures. This prevents adverse effects on customer interactions and the checkout experience.
Examples of important workflows on which a company's revenue depends: sending invites, placing orders, receiving order emails, uploading photos.
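A workflow check chains every step end to end, so a failure anywhere in the middle surfaces before customers hit it. The sketch below models a "send invites" flow; the `Mailer` class and the workflow steps are illustrative stand-ins, not the real application.

```python
# Hedged end-to-end workflow sketch: a revenue-critical "send invites"
# flow validated start to finish with stubbed components.

class Mailer:
    """Stub mail service that records deliveries instead of sending them."""

    def __init__(self):
        self.outbox = []

    def deliver(self, to, subject):
        self.outbox.append((to, subject))


def send_invites_workflow(mailer, host, guests):
    """Walk the whole flow; any intermediate failure fails the test."""
    event = {"host": host, "guests": list(guests), "sent": False}
    assert event["guests"], "workflow needs at least one guest"
    for guest in event["guests"]:
        mailer.deliver(guest, f"{host} invited you!")
    event["sent"] = True
    # End-to-end check: one email per guest actually reached the outbox.
    assert len(mailer.outbox) == len(event["guests"])
    return event


mailer = Mailer()
event = send_invites_workflow(mailer, "Alice",
                              ["bob@example.com", "cara@example.com"])
```

In a real suite the stubs would be replaced by a browser driver or API client, but the shape stays the same: one test, the whole journey, asserted at the end.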
A – Application of knowledge – In this step, knowledge about the product plays an important role in validating the feature story. We have all experienced bugs cropping up in areas of the software we hardly anticipated. Acquiring knowledge of the software architecture, along with institutional knowledge, helps build a testing strategy that uncovers these unexpected bugs.
Example: if the pricing model changes, then along with verifying the checkout workflow, testing the other areas of the site affected by the change (search pages, user events, the CMS admin tool, the OMS tool, and the iOS app) counts as application of knowledge.
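One lightweight way to make that institutional knowledge repeatable is to record it as a map from a change type to the areas it is known to touch. The sketch below echoes the pricing example above; the map itself is hand-maintained and illustrative, not a real dependency graph.

```python
# Sketch: institutional knowledge captured as a change -> affected-areas
# map, so the "non-obvious" test targets are not rediscovered each time.

IMPACT_MAP = {
    "pricing_model": ["checkout", "search pages", "user events",
                      "CMS admin tool", "OMS tool", "iOS app"],
    "photo_upload": ["card editor", "cloning", "mobile rendering"],
}


def areas_to_test(change):
    """Return every area this change is known to affect (empty if unmapped)."""
    return IMPACT_MAP.get(change, [])
```

The obvious target (checkout) comes first, but the value is in the tail of the list: the areas a tester without product knowledge would never think to regress.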
R – Relevant regression – A subset of regression testing; both manual and automated testing can be performed in this step. Regression on the specific areas of the application surrounding the feature story is often enough, rather than unnecessary regression across the entire application. This helps move features out the door faster, with quality and integrity intact.
Example: for a feature story adding photos to the back of paper cards, regression testing was performed on the existing application alongside feature testing, which included:
(a) Verifying the feature is not available on online cards.
(b) Verifying that cloning a paper card to an online card does not carry over the photos on the back of the card.
(c) Verifying switching card designs does not affect the position of the photos on the cards.
(d) Looking through the application code in the repo, which confirmed that regression testing on envelopes was not needed.
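Scoping a regression run like this can be automated by tagging each check with the areas it covers and selecting only the checks that overlap the feature story's footprint. The check names below mirror the back-of-card example, but the registry itself is a hypothetical sketch (the same idea is often expressed with pytest markers, e.g. `pytest -m paper_cards`).

```python
# Sketch of relevant regression: each check declares the areas it
# covers; only checks intersecting the feature's footprint are run.

REGRESSION_CHECKS = [
    ("online_card_has_no_back_photos", {"paper_cards", "online_cards"}),
    ("clone_paper_to_online_drops_back_photos", {"paper_cards", "cloning"}),
    ("design_switch_keeps_photo_positions", {"paper_cards", "design_switch"}),
    ("envelope_rendering", {"envelopes"}),  # no overlap: drops out of the run
]


def relevant_regression(feature_areas):
    """Select only the checks whose areas intersect the feature's footprint."""
    return [name for name, areas in REGRESSION_CHECKS
            if areas & feature_areas]


# For the back-of-card photos story, envelopes are excluded automatically.
selected = relevant_regression({"paper_cards"})
```

The code-review step from (d) still matters: the tags are only as good as the team's understanding of which code paths a feature actually touches.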
T – Tree-structure testing – Tree analogies are familiar in software systems. A large website is typically organized into a hierarchy (a "tree") of features, sub-features, and their dependencies. Tree-structure testing differs from bottom-up or top-down testing in that it tests the root feature and its branches. Mind-mapping a feature can help expose relationships among the functionalities of an application.
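A feature mind map translates directly into a small tree data structure: put the feature under test at the root, its sub-features as branches, and walk the tree to enumerate what needs coverage. The feature names below are illustrative, reusing the back-of-card photos example.

```python
# Hedged sketch of tree-structure testing: model the feature under test
# as the root of a tree and walk its branches to build a coverage plan.
from dataclasses import dataclass, field


@dataclass
class Feature:
    name: str
    children: list = field(default_factory=list)


def coverage_plan(root):
    """Depth-first walk: the root feature, then every dependent branch."""
    plan = [root.name]
    for child in root.children:
        plan.extend(coverage_plan(child))
    return plan


card_editor = Feature("card editor", [
    Feature("back-of-card photos", [Feature("photo upload"),
                                    Feature("photo positioning")]),
    Feature("design switching"),
])
```

Unlike bottom-up or top-down integration testing, the walk starts at the feature being shipped, so the plan never drifts into parts of the application the story does not touch.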
To conclude, SMART* testing gives you the liberty to put your knowledge into practice and helps you gain confidence in your testing strategy. There are no fixed rules in SMART* testing; it adapts to the scope of the feature (frontend/backend), which gives the tester room to optimize their testing process.
I have had great experiences exploring this strategy and signing off on releases and feature stories with quality and confidence. Refining and improving SMART* tests will be an ongoing learning process for me.