For UX research, we typically want to understand how and why users enjoy, or don’t enjoy, our products. We learn this by understanding our users’ behaviors and motivations. For example, we explore why they want a given feature and the results they expect from using it. This is called needs analysis, and it matters because it lets us understand both usefulness and usability - the two principal categories of user-centered design. How can surveys help us with this type of analysis?

Surveys can help because they collect data on the effectiveness of our product decisions, answering questions such as “does this feature meet our customers’ needs?” (usefulness) and “to what extent does it meet those needs?” (usability). And the data we collect is quantifiable, as opposed to the data we collect in qualitative studies.

Quantifiable data is important because it helps us identify patterns, such as “80% of our users find X-feature to be very useful” or “only 50% of our users find our product easy to use.” Product teams can use such data to make decisions, such as roadmapping usability efforts so that the ease-of-use score increases by Q4, or adding a UX resource to help identify how we might make X-feature even more useful in hopes of attaining a 95% score.
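To make this concrete, here is a minimal sketch (with made-up ratings, not data from any real study) of how Likert-style survey responses become the kind of quantifiable pattern described above:

```python
from collections import Counter

# Hypothetical 5-point usefulness ratings for "X-feature"
# (1 = not at all useful, 5 = very useful). Real data would
# come from the survey tool's export.
responses = [5, 4, 5, 3, 5, 4, 5, 5, 2, 5]

counts = Counter(responses)
# "Top-box" style metric: share of users rating the feature 4 or 5.
useful = sum(n for rating, n in counts.items() if rating >= 4) / len(responses)
print(f"{useful:.0%} of respondents find X-feature useful")  # → 80%
```

The same calculation, run per customer segment, is what lets a team set a measurable target like “raise the ease-of-use score by Q4.”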

In my own experience, surveying users was the ideal method for bringing back actionable data to a product team that was developing its first AI product. The survey resulted in formative recommendations that the team was able to act upon quickly, while also providing confidence that we had designed the right thing, and were ready to move into the engineering phase of the product development life cycle. Now I will provide some context as to how that was accomplished.

The large team I was supporting was creating an AI tool in a B2B/B2C sales environment. They had designed Recommended Actions - RAs for short - by consulting with the product advisory committee, which was made up of subject matter experts (SMEs). This team put their best foot forward in designing Insight/RA pairings. I helped the stakeholders understand that formative UXR methodology would let us validate the usefulness of the designed pairings, so we could move into the build stage knowing our SMEs had successfully executed the design stage.

Once I had the stakeholders’ commitment to the research, I chose a survey as the method because it would allow me to work quickly, and because I would be able to get a good representation of our actual market by targeting three different customer segments - beginner, intermediate, and advanced. I then set about designing the survey by focusing on the goal of evaluating the RAs to determine the degree to which our users found them useful.

The RA feature worked like this: it analyzed a customer’s sales pipeline to develop an “Insight,” and then used AI predictive modeling to categorize that customer. Based on the category label applied to that customer, an RA was presented to the salesperson - or distributor - who “owned” the customer. The distributor would then do exactly what the RA suggested, such as “share a recipe for creating a breakfast shake with this customer” or “invite the customer to a local event.”
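The insight → category → RA flow above can be sketched in a few lines. All names here are hypothetical; in the real product the categorization step was an AI predictive model, which this sketch stands in for with a simple rule:

```python
# Hypothetical category -> Recommended Action lookup table.
RECOMMENDED_ACTIONS = {
    "health_focused": "Share a recipe for creating a breakfast shake with this customer",
    "community_oriented": "Invite the customer to a local event",
}

def categorize(insight: dict) -> str:
    """Stand-in for the AI predictive model: assigns a category
    label to a customer based on their pipeline Insight."""
    if insight.get("recent_purchase") == "protein_powder":
        return "health_focused"
    return "community_oriented"

def recommend_action(insight: dict) -> str:
    """Return the RA shown to the distributor who owns this customer."""
    return RECOMMENDED_ACTIONS[categorize(insight)]

print(recommend_action({"recent_purchase": "protein_powder"}))
```

The survey’s job, in these terms, was to evaluate whether the strings in that lookup table - the SME-designed Insight/RA pairings - were actually useful to distributors.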

When I synthesized the data from the survey, I learned that our distributors were delighted with the feature. The advanced distributors saw it as a coaching tool that could save them time: rather than coaching the newbie distributors in their organization themselves, they could have those newbies use the app to become familiar with recommended actions. And the less seasoned distributors perceived the RAs as useful suggestions - perhaps ones they’d already thought about, but needed reminding of.

This kind of reporting was extremely impactful for the team. It validated the SME design while also triggering ideation. The product team was able to roadmap small, incremental adjustments, such as verbiage changes, while also strategizing larger, more impactful iterations to the RA feature - such as adding copy/paste “Scripts” to a post-Beta version.

I have fond memories of this project and the team I worked with. In the end, we were able to launch this AI-driven feature to great effect. It was the first of its kind at the company, and we all felt bonded by the experience.