How Surveys Helped Launch An AI Model

In UX research, we typically want to understand how and why our users enjoy, or don't enjoy, our products. We learn this by understanding our users' behaviors and motivations. For example, we explore why they want such-and-such feature, and what results they expect from using it. This is called needs analysis, and it's important because it lets us understand both usefulness and usability, the two principal categories of user-centered design. How can surveys help us with this type of analysis?

Surveys can help because they collect data on the effectiveness of our product decisions, answering questions like "does this feature meet our customers' needs?" (usefulness) and "to what extent does it meet those needs?" (usability). And the data we collect is quantifiable, as opposed to the data we collect in qualitative studies.

Quantifiable data is important because it can help us identify patterns, such as "80% of our users find X-feature to be very useful" or "only 50% of our users find our product easy to use." Product teams can use that data to make decisions, such as roadmapping usability efforts so that the ease-of-use score increases by Q4, or adding a UX resource to help identify how we might make X-feature even more useful in hopes of attaining a 95% score.

In my own experience, surveying users was the ideal method for bringing back actionable data to a product team that was developing its first AI product. The survey resulted in formative recommendations that the team was able to act upon quickly, while also providing confidence that we had designed the right thing, and were ready to move into the engineering phase of the product development life cycle. Now I will provide some context as to how that was accomplished.

The large team I was supporting was creating an AI tool in a B2B/B2C sales environment. They had designed Recommended Actions - RAs for short - by consulting with the product advisory committee, which was made up of subject matter experts (SMEs). This team put their best foot forward in designing Insight/RA pairings. I helped the stakeholders understand that a formative UXR methodology would let us validate the usefulness of the designed pairings, so we could move into the build stage knowing our SMEs had successfully executed the design stage.

Once I had the stakeholders' commitment to the research, I picked a survey as the method because it would allow me to work quickly, and because I would be able to get a good representation of our actual market by targeting three different segments of customers: beginners, intermediates, and advanced. I then set about designing the survey, focusing on the goal of evaluating the RAs to determine the degree to which our users found them useful.

The RA feature worked like this: it analyzed a customer's sales pipeline to develop an "Insight," and then used AI predictive modeling to categorize that customer. Based on the category label applied to the customer, an RA was presented to the salesperson - or distributor - who "owned" that customer. The distributor would then literally do what the RA feature was suggesting, such as "share a recipe for creating a breakfast shake with this customer" or "invite the customer to a local event."
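To make the mechanics concrete, here is a minimal sketch of that Insight-to-RA flow. The category labels and the classify_customer stub are illustrative assumptions, not the production logic; the real product used a proprietary predictive model, and only the two example RAs above come from the actual feature.

# Hypothetical sketch: map a pipeline "Insight" to a category, then to a Recommended Action.
# Category names and the classification rule are made up for illustration.
RECOMMENDED_ACTIONS = {
    "new_customer": "Share a recipe for creating a breakfast shake with this customer.",
    "lapsing_customer": "Invite the customer to a local event.",
}

def classify_customer(insight: dict) -> str:
    """Stand-in for the AI predictive model that labeled each customer."""
    return "lapsing_customer" if insight.get("days_since_last_order", 0) > 30 else "new_customer"

def recommended_action(insight: dict) -> str:
    return RECOMMENDED_ACTIONS[classify_customer(insight)]

# The distributor who "owns" this customer would see the lapsing-customer RA.
print(recommended_action({"days_since_last_order": 45}))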

When I synthesized the data from the survey, I learned that our distributors were delighted with the feature. The advanced distributors saw it as a coaching tool that could save them time: rather than personally coaching the newbie distributors in their organization, they could have those newbies use the app to become familiar with recommended actions. And the less seasoned distributors perceived the RAs as useful suggestions - perhaps ones they'd already thought about, but needed reminding of.

This kind of reporting had a strong impact on the team. It validated the SME design while also triggering ideation. The product team was able to roadmap small incremental adjustments, such as verbiage changes, while also strategizing larger, more impactful iterations to the RA feature, such as adding copy/paste "Scripts" to a post-Beta version.

I have fond memories of this project and the team I worked with. In the end, we were able to launch this AI-driven feature to great effect. It was the first of its kind at the company, and we all felt bonded by the experience.


Focus Groups for UX?! Please, Yes!

I never thought I'd embrace the focus group as part of the UX process, until I had to. Basically, I was asked to produce qualitative results in a quantitative manner, asap. (Scheduling a whole bunch of one-on-ones was not an option.) Though I have found that I can get that kind of qual-in-a-quant result with a well-structured survey (thanks to this Nielsen Norman Group article for inspiring me), in my situation, doing a survey was not an option either.

Fortunately for me, I had access to our company's training room for this focus group. That let me seat every test subject at an identical laptop, so I knew exactly what experience each of them was having while I drove the meeting from an identical laptop myself.

Based on my experience, I have created a scientific lab study template that I am offering for others to use. Please modify it for your own use. Enjoy.

=====================

Hypothesis: Users will find the information architecture of our redesigned product to be efficient and sensible. The actions required to complete their task of [Task Description, e.g., creating a listing for the bicycle they are selling] will be easy to intuit. Link scent will be strong.

Supplies
every user has a laptop connected to the internet
every laptop has a pdf that contains 2 links:
- 1 link to the online Figma prototype w/ link sharing on
- 1 link to the online Google Form SUS survey
name placard (link to Google doc template: https://docs.google.com/document/d/1SgBqfB5xLN860C6pj-hRuLhUzukXuJmPTs0PoG7Spss/edit)

Procedure
users enter and check their name off the sign in sheet
users sign the non-disclosure agreement
users pick up their name placard
users sit in pairs or groups of 3
moderators (UX designers and/or product managers) facilitate, 1 per group
Conduct the Test:
Lead Moderator introduces team; outlines the expected outcomes
- provide instructions on how to use the prototype, i.e., "the flow is somewhat linear, but for the most part you can click any link; tap R to rewind the prototype to the first page in the flow; if you ever see a blue flash, that's a hotspot which you can tap on"
- let users interact with the prototype
--- moderators facilitate, with a focus on allowing for users to think aloud
--- if moderators interject with questions, they should allow for uncomfortable pauses
--- soon, the small groups will be bubbling with conversation; moderators are to facilitate the conversation without interfering
- All-group discussion and summary
--- ask a moderator from each group to identify 1 or 2 highlights, such as a "rose, bud, or thorn" moment (delight, opportunity, pain point)
------ be sure subjects - not moderators - are doing most of the talking
- SUS (System Usability Scale) survey
--- ask subjects to return to the pdf document and click the 2nd link, which opens the SUS (a scoring sketch follows this procedure)
- team synthesis session
--- team meets to share notes and observations which will in turn be presented to stakeholders and other interested parties
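
When it comes time to synthesize the SUS responses, the standard scoring rule is: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum of the ten contributions is multiplied by 2.5 to give a 0-100 score. Here is a minimal Python sketch of that calculation; the sample answers are made up, and it assumes the Google Form exports ten 1-5 answers per respondent.

# Standard SUS scoring: odd items add (response - 1), even items add (5 - response);
# the total is multiplied by 2.5 to yield a 0-100 score.
def sus_score(responses):
    """responses: the ten SUS answers on a 1-5 scale, in questionnaire order."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Made-up example respondent (not real study data).
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0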

 

CITE
Laura Bridges-Pereira. "Fold paper for name tag." YouTube.
Susan Farrell. "28 Tips for Creating Great Qualitative Surveys." Nielsen Norman Group.


How to Calculate and Interpret an NPS Score

Imagine you've conducted a survey asking your users the NPS question - how likely they are to recommend your product, on a 0-10 scale. Now you have those results and you'd like to make sense of them. This how-to explains how to calculate your NPS score and how to interpret it.
====
The following chart serves as a reminder of the NPS categories:

0-6 : Detractors

7-8 : Passives

9-10 : Promoters

Here is the formula you will apply to determine the NPS score:

NPS = % of promoters - % of detractors
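
For example (made-up numbers): if 200 people respond, and 120 rate you 9-10 (60% promoters), 50 rate you 7-8 (passives), and 30 rate you 0-6 (15% detractors), then NPS = 60 - 15 = 45. If you have the raw 0-10 responses, a minimal Python sketch of the same calculation might look like this (the function name and sample scores are just for illustration):

# Compute NPS from a list of 0-10 responses.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Made-up sample: 5 promoters, 2 passives, 3 detractors out of 10 -> NPS of 20.0
print(nps([10, 9, 9, 8, 7, 6, 3, 10, 9, 2]))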

And to help you understand how your product is doing, consider the following rubric:

  • 0 or below: your product is failing in terms of customer satisfaction
  • 0-30: your product is doing "quite well" but probably needs improvement
  • 30-70: great in terms of customer satisfaction
  • 70-100: most of your customers are brand advocates

CITE:

  1. "9 Practical Tips for an Effective NPS Data Analysis and Reporting." Retently (blog) https://www.retently.com/blog/nps-data-analysis-reporting/
    Jennifer Rowe.
  2. "Analyzing your Net Promoter Score℠ survey results (Professional Add-on and Enterprise Add-on)." Zendesk (support article) https://support.zendesk.com/hc/en-us/articles/203981113-Analyzing-your-Net-Promoter-Score-survey-results-Professional-Add-on-and-Enterprise-Add-on-


Graduated! Interaction Design Specialization from UCSD!

I'm so excited to announce my graduation from the University of California San Diego Design Lab. I've spent my weekends over the past 2+ years studying for the Interaction Design Specialization and learned so much from it. Please see my Capstone project on Medium.