Five ways testing is different at Lucid
Craig Randall
Reading time: about 8 min
After twenty years in the software testing industry, I can say for certain that it is an interesting and diverse field. I’ve tested marine cargo terminal planning software, containership cargo load planning software, video games, and 3D modeling and animation software. I’ve been directing testing efforts at Lucid for over eight years. There are so many tools, ideas, approaches, and even pitfalls. Experience has reinforced the idea that context is key in deciding how to approach testing, from the individual all the way up to the organizational level. Making decisions about how to approach testing without fully understanding the related context can result in unmitigated risks, inefficiencies, and frustration for all involved. When you think through your application's unique context for use, you can narrow the focus of your testing to techniques and practices that stand to have a greater impact on the user’s experience. What follows is a summary of the main points where our context at Lucid set us on a unique course.
1. Company attitude toward quality assurance (QA)
Today, more than ever, a testing team that is respected by the organization is far more effective than one that is not. We often hear of teams working to improve the relationship between developers and testers so that it is not the "throw it over the wall" relationship that has been so commonplace. At Lucid, our developers and QA teams have a positive working relationship. Developers consider testing and automation early in the process, and they plan for writing automation to reduce our testing efforts. Developers are also aware of the time it takes to complete tasks like regression testing, and they actively work on writing testable code. However, we don’t limit the scope of our efforts to only testing the work done by developers.
We emphasize this one idea: QA is here to help. No matter what department someone works in at Lucid, QA is available to provide a second set of eyes on anything. Our QA team offers testing to anyone at Lucid because testers have keen attention to detail and aren’t afraid to ask questions, especially hypothetical questions about edge case situations. Testers have empathy for the user, which they use to evaluate how any changes to functionality, user interface, or messaging may impact a user’s experience with the product. These same skills can be beneficial internally. We'll help the recruiting teams with their on-campus recruiting events. We'll help the office managers plan events and set up for them. We'll help marketing teams test email campaigns and new templates. We'll help PMs groom their backlogs and plan and manage their sprints. We'll help UX by reviewing their mock-ups before committing them to a story. We believe our job is not limited to the software itself; rather, every part of the business needs quality assurance and testing. We can help other departments think through their processes, identify risks, and ask questions that help ensure everyone is on the same page.
People often aren't sure what QA does. By showing other teams and individuals how the QA process can be applied to and improve all areas of the business, not just software testing, we make the value of a testing team much clearer to the entire organization.
2. Performance review process
Our QA team has its own performance review process, created by testers for testers. Having a performance review created by testers who have actually done the job being reviewed is a first for me. All of my prior experience involved non-testers managing the QA team. In my experience, the performance review forms for QA team members have just been modified versions of developer review forms. Those past reviews sometimes included things that really didn’t apply to what a tester actually does and often prioritized things arbitrarily rather than within context.
With our new approach, we have made it as clear and transparent as possible how we plan to evaluate the performance of our testers. We have provided examples for each potential level of performance rating, as well as what information goes into assigning a particular rating. All of the reviews have a consistent format to reduce the risk that some review writers are more effective than others.
We have provided enough granularity in the review process that testers are empowered to take control of their careers. Testers can choose what is important to them, and they will know how to meaningfully impact those areas. They can even decide that some areas are not of interest to them and see how to focus their efforts elsewhere to still perform well. This approach shows that when we say we value diversity of thought on our team, we mean it. Not every tester needs to be good at the same things, and this review process reinforces that fundamental belief. We are better as a team because we are different.
With this approach, we've made it much easier for everyone to have conversations about career direction and growth. Testers can easily figure out roughly how their rating is trending and they can see from the provided examples what kinds of things they can do to improve their rating.
Managers can more easily determine how to talk about career progression with testers and we have the ability to really recognize testers for the work they're doing.
3. Documentation and automation
Lucid understands that the greatest value of a human tester is time spent on exploratory testing. Exploratory testing is a great way to represent how our users will likely interact with our software and thus serves as the best way to help mitigate risk. Developers are mindful of adding functionality that would have to be added to a manual regression suite and will instead figure out ways to automate those tests. QA doesn’t spend a great deal of time writing extensive test documentation. For our context, it is enough to write testing prompts: just enough to remind a tester of what should be tested without explicitly telling them how to test. Here again, our focus is on creativity and diversity. By providing only a testing prompt, we leave it to the individual testers to figure out what needs to be tested. Each time that same test prompt is seen, the resulting tests will be different, and that helps us have better overall test coverage.
Likewise, developers are in charge of writing automated tests for their stories, which encourages writing testable code. Automation engineers write utilities and frameworks to make writing tests easier and improve their stability.
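To make the idea of a stability utility concrete, here is a rough sketch of one common pattern such frameworks provide: a bounded polling wait that tames timing-dependent flakiness. The `wait_until` helper and its defaults are hypothetical illustrations, not Lucid's actual framework.

```python
import time

def wait_until(condition, timeout=5.0, interval=0.25):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Returns the condition's truthy result, or raises TimeoutError.
    Wrapping timing-dependent checks like this keeps individual tests
    simple and reduces flaky failures caused by fixed sleeps.
    (Hypothetical helper for illustration only.)
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.1f}s")
        time.sleep(interval)

# Example: a test waits for simulated app state to become ready.
state = {"ready": False}

def becomes_ready():
    state["ready"] = True  # simulate the app finishing its work
    return state["ready"]

assert wait_until(becomes_ready) is True
```

A utility like this lets test authors express *what* they are waiting for while the framework owns the retry policy, which is one way automation engineers can improve test stability without touching every test.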
4. QA relationship with product, UX, and dev
QA is a first-class member of the team, and we're involved all along the way. Today, the question is more about balancing QA input early and often with making sure we have time for exploratory testing. We continue to work on finding ways to minimize checklist-style acceptance testing. Testers run mob tests, swarm tests, and bug hunts regularly with their scrum teams and across scrum teams.
Product managers (PMs) and devs have asked, "What can we do to help QA have more time for exploratory testing?" PMs ask this because they understand that testers represent the user: testers have learned how the software works, have a feel for it, and can tell when something doesn’t feel intuitive. Both PMs and devs have seen QA demonstrate an ability to uncover a good number of problems that users would have run into, and they know that QA often finds those through exploratory testing.
5. Hiring testers
Our primary focus for hiring human testers is around creativity, curiosity, passion, and communication, not technical skills. Because of the nature of our products, those are the traits that will have the most influence on a tester’s ability to be successful. They are also the skills that are hard or even nigh-impossible to teach someone, unlike the technical skills. We aren’t looking for the highest GPA or candidates with certifications, because neither of those things ensures that the candidate will be a successful tester.
Furthermore, it is far more important that the testers we hire help model our vastly diverse user base. Our users surely have a diverse range of education and technical knowledge, so our testers should too. We need a diversity of thought to help mitigate risk as much as possible. With that diversity comes a much greater variety of testing approaches and that, in turn, leads to improved testing coverage. Some of our testers have an eye for design and UX bugs. Some have a strong understanding of the back-end and how changes can impact the user experience. Some have an ability to go down the rabbit hole of edge cases and find really obscure but potentially terrible bugs. This diverse group of testers can work together to help provide a consistent and positive user experience and that is not something we could achieve if we hired based on grades, schools, or certifications.
About Lucid
Lucid Software is a pioneer and leader in visual collaboration dedicated to helping teams build the future. With its products—Lucidchart, Lucidspark, and Lucidscale—teams are supported from ideation to execution and are empowered to align around a shared vision, clarify complexity, and collaborate visually, no matter where they are. Lucid is proud to serve top businesses around the world, including customers such as Google, GE, and NBC Universal, and 99% of the Fortune 500. Lucid partners with industry leaders, including Google, Atlassian, and Microsoft. Since its founding, Lucid has received numerous awards for its products, business, and workplace culture. For more information, visit lucid.co.