When I started working at productboard a few months ago, there weren’t any formal QA processes set up. You might say the whole thing was a hack. And while we’ve consistently received excellent feedback on our product, we are entering a hyper-growth period and must invest in the future.
Enter QA. productboard helps teams figure out what to build next and is central to a company’s product strategy. Our goal is not only to make sure that companies build the right products, but to truly delight our users.
Let’s dive into more detail about what we’ve done and what we still plan to do.
Our development team lead, Alex, carries the torch for automated QA testing. He built our test automation infrastructure and integrated it into productboard’s software development life cycle (SDLC), and he introduced Cypress.io, establishing it as our automated end-to-end (E2E) testing framework. He even helped recruit me and Antonio, our senior QA.
Joining productboard felt like jumping onto a space rocket mid-flight. We needed to quickly grasp a complex product and get familiar with the team’s standard engineering workflows and codebase. And of course, we needed to get up to speed with the product teams.
productboard is built for agile product development, and we practice what we preach: we follow continuous product discovery and delivery best practices instead of leaving testing until the end, as in the waterfall development model.
We started attending daily standups and bi-weekly groomings with the team to get “in the loop.” Based on my colleague’s recommendation, I read Inspired by product guru Marty Cagan. It helped me absorb new concepts and understand the rationale behind many of the philosophies and processes we follow at productboard.
Prior to the QA team’s efforts, all testing (both manual and automated) was conducted in the same shared environment. The resulting data pollution made manual tests difficult to execute and left our automated tests with very low resilience.
It was quite common for a developer to run the automated tests locally and see everything green, only for some of them to fail in the Continuous Integration (CI) environment. There was so much “noise” that it was unclear whether a failed test pointed to a real bug or was just a false alarm. The tests weren’t adding much value, and developers did not feel confident during deployment.
We started “cleaning house.” To build a reliable, robust, and scalable test infrastructure, we needed clean, isolated test environments that eliminate noise and make tests easier to run and interpret, especially the failures.
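As a rough sketch of what isolated environments can look like in practice, a per-environment base URL in the Cypress configuration keeps local and CI runs pointed at separate, clean app instances. The hostnames and the TEST_ENV variable below are illustrative assumptions, not productboard’s actual setup (a real cypress.config.js would typically also use Cypress’s defineConfig helper):

```javascript
// Hypothetical cypress.config.js sketch: one isolated app instance per
// environment, selected with a TEST_ENV variable. All names are illustrative.
const environments = {
  local: 'http://localhost:3000',      // developer machine
  ci: 'http://app.ci-test.internal',   // dedicated, clean CI instance
};

const envName = process.env.TEST_ENV || 'local';

const config = {
  e2e: {
    // Every run targets exactly one environment, so test data from one
    // run cannot pollute another.
    baseUrl: environments[envName],
    // Retry in CI runs to separate genuine failures from transient noise.
    retries: { runMode: 2, openMode: 0 },
  },
};

module.exports = config;
```

Pinning each run to a single environment is what makes a red build meaningful: a failure points at the code under test, not at leftover data from someone else’s session.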
On top of the absence of a proper testing environment, our technical debt, in terms of QA, was substantial. Our automated test coverage was low, we did not have proper procedures for manual test planning, and there were no guidelines on how automated test scripts should be written. The codebase for automated tests was full of “spaghetti code” — hard to maintain and almost impossible to read, with unnecessary levels of abstraction.
After planning, we decided to refactor the automated test codebase first. With a clean codebase, we were finally able to start writing new automated E2E tests and expand our coverage.
We also implemented procedures for manual testing. We found an intuitive, simple tool to write and execute manual test cases — TestPad — and started using it to test new pieces of functionality. This has made manual testing of productboard more structured and issue reporting more streamlined, thus increasing the quality of software entering the delivery phase.
Throughout the development lifecycle, we conducted multiple iterations of testing. This is because for product teams, having the QA team test granular components greatly increases confidence when it comes to the final delivery stage. It means that we are less likely to encounter low-level bugs during final testing because they have been “weeded out” during previous iterations.
Now, we conduct final testing in a purely end-to-end manner, uncover higher level bugs, and cover more edge cases.
In order to provide testing for multiple teams, we have to prioritize and split tasks in an efficient, objective-driven way.
Antonio, productboard’s senior QA, handles the most technical aspects of maintaining test infrastructure. He works with one of the product teams to provide the appropriate manual test coverage.
Alex, who is more junior, works with the other two product teams and helps with the implementation of new automated tests and maintenance of existing ones.
We leverage agile methodologies such as Scrum and Kanban to organize our work. We work in weekly sprints, keep track of our tasks in Trello, and conduct daily standups as well as weekly planning sessions and retrospectives. We define our own OKRs as a team and on the individual level.
Thanks to Antonio’s hard work, we have successfully revamped our entire testing infrastructure. He completely refactored the automated E2E test codebase and implemented a new internal testing framework on top of Cypress.io. We can now write cleaner, more resilient, and maintainable automated tests, and have made significant progress in scaling our automated test coverage.
A key piece of our internal testing framework is its set of page object modules, which contain both component locators and lower-level testing commands. Since our component locators are dynamically generated, Antonio had the brilliant idea to develop a module that resolves the locators by hashing them according to the running environment.
By wrapping the lower-level testing commands in helper functions, we keep our test suites easy to read and reusable.
The increased resilience and maintainability of our tests makes for a more stable CI pipeline with faster build times.
Despite the sheer amount of manual test work the product teams require, we are slowly but steadily increasing the automated test coverage of our application.
In addition to this, we produced some technical documentation for our Engineering team with instructions on how to set up and configure a local test environment. We have also collaborated on a reference article for our own internal Flux development tools API.
As a fast-growing organization, we are looking to scale our QA team in order to keep up with the product. Our goal is to have at least four QA Engineers on our team: two senior QAs to take care of tooling and infrastructure-related tasks and two QAs to work with the product teams.
If you happen to be a QA Engineer or Software Tester and are interested in jumping on a space rocket, then join our team on this great adventure! If you don’t work in QA but are keen on working side-by-side with a dedicated QA crew, then see if there is an open role that interests you.