This is a guest posting by Carol Brands.
As a tester, I always wish I had more communication with our customers. I have plenty of access to our support staff, whom I often ask to act as a customer surrogate, but there’s something about talking directly to customers that makes me feel enthused and confident about my ability to test our products to meet their needs.
Each year I have one opportunity to interact directly with customers: our customer user seminar. The customer user seminar is an annual event lasting three and a half days. During the day, we deliver hands-on training sessions, product presentations and customer forums. In the evening, there’s a hospitality suite with food, drinks and small computer labs where we can demonstrate and troubleshoot products for our customers.
Just ahead of this year’s seminar, I got some exciting news from our development team. They planned to perform some user experience testing at the conference! I asked if I could join the planning discussion as they decided what our testing would look like.
I sent them links to Katrina Clokie’s Accessibility and Usability Testing Pathway, highlighting the section on testing with real users. They decided to use a script verbally delivered by one of the developers. He asked each customer to perform specific tasks, without offering any help, to determine whether those tasks were discoverable.
During each session, there was one developer working with the customer via the script and one developer taking notes, and I participated as a second set of eyes and ears. With the customer’s permission, we also recorded the screen and audio for each session. I wasn’t really sure what to expect from these sessions, but I knew we would learn a lot.
One of the biggest takeaways for us was that it was very difficult for customers to navigate our product without using their own data. We chose to use the default computer labs in the hospitality suite, which were loaded with generic data. It’s similar to what a customer would use, but it is not a complete dataset. I didn’t realize how much users depend on seeing their own data in context to make decisions about what to do on the screen. In future user experience tests, we should try to use an environment loaded with the customer’s own, recognizable data.
Beyond how to conduct user experience testing, we also learned more about how the customers interact with their data. One of the main features in the product we were testing was the ability to search for a piece of equipment. We identified a few pieces of data that we expected users to want to search against to find a particular piece of equipment.
However, after talking to the customers, we discovered that they are rarely trying to find a single piece of equipment. They are more likely to look for all the equipment that meets particular criteria. For example, rather than wanting to find the piece of equipment with serial number 123ABC, our customer would want to find all the pieces of equipment that have maintenance scheduled over the next two months. This is an important distinction. We designed our filters to help users narrow a large set of equipment down to one particular piece, but we might need to alter the design to focus on finding groups of equipment that meet specific criteria.
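To make that distinction concrete, here is a minimal sketch of the two query styles. The data model and field names are my own assumptions for illustration, not the product’s actual design: a point lookup by serial number versus a criteria query for everything due for maintenance within a window.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical equipment record -- field names are assumptions, not the real schema.
@dataclass
class Equipment:
    serial_number: str
    next_maintenance: date

inventory = [
    Equipment("123ABC", date(2018, 7, 1)),
    Equipment("456DEF", date(2018, 9, 15)),
    Equipment("789GHI", date(2019, 1, 10)),
]

def find_by_serial(items, serial):
    """What we designed for: locating one specific piece of equipment."""
    return next((e for e in items if e.serial_number == serial), None)

def due_within(items, start, end):
    """What customers actually wanted: every item matching a criterion,
    e.g. maintenance scheduled within the next two months."""
    return [e for e in items if start <= e.next_maintenance <= end]

print(find_by_serial(inventory, "123ABC").serial_number)
print([e.serial_number for e in due_within(inventory, date(2018, 6, 1), date(2018, 8, 1))])
```

The first function answers “where is 123ABC?”; the second answers “what needs attention soon?” — two different interaction designs, even though both are technically filters.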
Paradigm Shift: Our Last Major Finding
Our last major finding was that our customers are hesitant to let go of the equipment query paradigm that exists in our flagship product. The product we were testing is intended to be a lightweight, browser-based replacement for that flagship product. In the flagship, the users need to select a location and an equipment type in a screen called the “Navigator” before they can begin querying for equipment; we replaced the Navigator with a simple Google-like search bar.
We found that users are so accustomed to starting with the Navigator to identify the equipment they want that they had a difficult time working out how to edit their query in the search bar to get the same result. We thought they would simply add the location name and the equipment type to their query input. Instead, they don’t always know the location they want until they see it in the Navigator. The location tree displayed by the Navigator means they don’t need to memorize location names; they can choose a location based on their understanding of the tree. This paradigm shift in how users find equipment is something we’ll need to consider carefully going forward.
Of course, we also got some of the usual visual feedback during our testing: We were told filters needed to be more visible, and returning to the search home page needed to clear the applied filters. But I found the differences between how we thought about the customers’ relationship to their data and the customers’ demonstrated relationship to their data to be a major revelation. That’s the kind of feedback I can use going forward to test our understanding of the customers, rather than simply testing against the acceptance criteria.
I can’t wait to see what additional user experience testing holds for us in the future.
Carol Brands is a Software Tester at DNV GL Software. Originally from New Orleans, she is now based in Oregon and has lived there for about 13 years. Carol is also a volunteer at the Association for Software Testing.