
How Surveys Helped Launch An AI Model

For UX research, we typically want to understand how and why our users enjoy or don't enjoy our products. We learn this by understanding our users' behaviors and motivations. For example, we explore why they want such-and-such feature, and the results they expect from using it. This is called needs analysis, and it's important because it lets us understand both usefulness and usability - the two principal categories of user-centered design. How can surveys help us with this type of analysis?

Surveys can help because they collect data on the effectiveness of our product decisions, such as "does this feature meet our customers' needs (usefulness), and to what extent does it meet those needs (usability)?" And the data we collect is quantifiable, as opposed to the data we collect in qualitative studies.

Quantifiable data is important because it can help us identify patterns, such as "80% of our users find X-feature to be very useful" or "only 50% of our users find our product easy to use." Product teams can use such data to make decisions, such as roadmapping usability efforts so that the ease-of-use score increases by Q4, or adding a UX resource to help identify how we might make X-feature even more useful in hopes of attaining a 95% score.
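As a quick illustration of how those percentages come out of raw survey data, here's a minimal Python sketch - the question names, rating scale, and responses below are all invented for the example - that tallies the share of respondents who chose the top two points on a 5-point scale:

```
# Hypothetical example: tally "top-box" percentages from raw 1-5 ratings.
responses = {
    "X-feature usefulness": [5, 5, 4, 5, 3, 5, 4, 5, 5, 2],  # 1 = not at all useful ... 5 = very useful
    "ease of use":          [4, 2, 5, 3, 4, 2, 5, 3, 4, 3],  # 1 = very difficult ... 5 = very easy
}

for question, ratings in responses.items():
    top_box = sum(1 for r in ratings if r >= 4)   # count of 4s and 5s
    pct = 100 * top_box / len(ratings)
    print(f"{question}: {pct:.0f}% rated 4 or 5 (n={len(ratings)})")
```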

In my own experience, surveying users was the ideal method for bringing back actionable data to a product team that was developing its first AI product. The survey resulted in formative recommendations that the team was able to act upon quickly, while also providing confidence that we had designed the right thing, and were ready to move into the engineering phase of the product development life cycle. Now I will provide some context as to how that was accomplished.

The large team I was supporting was creating an AI tool in a B2B/B2C sales environment. They had designed Recommended Actions - RAs for short - by consulting with the product advisory committee, which was made up of subject matter experts (SMEs). This team put their best foot forward in designing Insight/RA pairings. I helped the stakeholders understand that formative UXR methodology would help us validate the usefulness of the designed pairings, so we could move into the build stage knowing our SMEs had successfully executed the design stage.

Once I had the stakeholders' commitment to the research, I picked a survey as the method because it would allow me to work quickly, and because I would be able to get a good representation of our actual market by targeting three different segments of customers - beginners, intermediates, and advanced. I then set about designing the survey by focusing on the goal of evaluating the RAs to determine the degree to which our users found them useful.

The RA feature worked like this: it analyzed a customer's sales pipeline to develop an "Insight," and then used AI predictive modeling to categorize that customer. Based on the category label applied to that customer, an RA was presented to the salesperson - or distributor - who "owned" the customer. The distributor would then do exactly what the RA suggested, such as "share a recipe for creating a breakfast shake with this customer" or "invite the customer to a local event."

When I synthesized the data from the survey, I learned that our distributors were delighted with the feature. The advanced distributors saw it as a coaching tool that could save them time: rather than coaching the newbie distributors in their organizations themselves, they could have those newbies use the app to become familiar with recommended actions. And the less seasoned distributors perceived the RAs as useful suggestions - perhaps ones they'd thought about but needed reminding of.

This kind of reporting was extremely impactful for the team. It validated the SME design while also triggering ideation. The product team was able to roadmap small incremental adjustments, such as verbiage changes, while also strategizing larger, more impactful iterations to the RA feature - such as adding copy/paste "Scripts" to a post-Beta version.

I have fond memories of this project and the team I worked with. In the end, we were able to launch this AI-driven feature to great effect. It was the first of its kind at the company, and we all felt bonded by the experience.


Crazy Eights Fun

 

Recently, my team set out to redesign a portion of an SPA (single-page application) that we had already spent time designing and reviewing. The company strategy had changed considerably since our first go-around, so we needed to iterate on the design. To kick-start our ideation efforts, I led the team through Crazy Eights, a collaborative activity that stimulates idea generation.

Our workshop began with each participant folding a piece of paper into 8 squares; then we set a timer and each of us spent 8 minutes conceiving of 8 new ideas, 1 per square of paper.

The result of our Crazy Eights activity was that we gathered a few strong concepts in a very short amount of time. It bonded us as designers and problem solvers as we shared our "eights" with each other. And it helped us reject ideas very quickly, too. Arguably, the greatest value of the activity came from the analysis portion, when we expounded upon our ideas and negotiated their merits. In that portion, we got to re-think our own ideas while also providing feedback on our teammates'.

Have you ever tried Crazy Eights? If not, I suggest you do. The entire workshop can be completed in less than 30 minutes while providing a launchpad for team bonding and idea generation.

SUGGESTED READING
Yael Levey. "How to: Run a Crazy Eights exercise to generate design ideas"


Focus Groups for UX?! Please, Yes!

I never thought I'd embrace the focus group as part of the UX process - until I had to. Basically, I was asked to produce qualitative results in a quantitative manner, ASAP. (Scheduling a whole bunch of one-on-ones was not an option.) Though I have found that I can meet that outcome of qual results in a quant study with a well-structured survey (thanks to this article from Nielsen Norman Group for inspiring me), in my situation a survey was not an option either.

Fortunately for me, I had access to our company's training room for this focus group. That let me seat every one of the test subjects at an identical laptop, so I knew exactly what experience each of them was having while I drove the meeting from an identical laptop myself.

Based on my experience, I have created a science-lab-style study template that I am offering for others to use. Please modify it for your own use. Enjoy.

=====================

Hypothesis: Users will find the information architecture of our redesigned product to be efficient and sensible. The actions required to complete their task of [Task Description, e.g., creating a listing for the bicycle they are selling] will be easy to intuit. Link scent will be strong.

Supplies
every user has a laptop connected to the internet
every laptop has a pdf that contains 2 links:
- 1 link to the online Figma prototype w/ link sharing on
- 1 link to the online Google Form SUS survey
name placard (link to Google doc template: https://docs.google.com/document/d/1SgBqfB5xLN860C6pj-hRuLhUzukXuJmPTs0PoG7Spss/edit)

Procedure
users enter and check their name off the sign-in sheet
users sign the non-compete clause
users pick up their name placard
users sit in pairs or groups of 3
moderators (UX designers and/or product managers) facilitate, 1 per group
Conduct the Test:
Lead Moderator introduces team; outlines the expected outcomes
- provide instructions on how to use the prototype, i.e., "the flow is somewhat linear, but for the most part you can click any link; tap R to rewind the prototype to the first page in the flow; if you ever see a blue flash, that's a hotspot which you can tap on"
- let users interact with the prototype
--- moderators facilitate, with a focus on allowing for users to think aloud
--- if moderators interject with questions, they should allow for uncomfortable pauses
--- soon, the small groups will be bubbling with conversation; moderators are to facilitate the conversation without interfering
- All-group discussion and summary
--- ask a moderator from each group to identify 1 or 2 highlights, such as a "rose, bud, or thorn" moment (delight, opportunity, pain point)
------ be sure subjects - not moderators - are doing most of the talking
- SUS (System Usability Scale) survey
--- ask subjects to return to the pdf document and click on the 2nd link, which opens the SUS survey (a scoring sketch follows this procedure)
- team synthesis session
--- team meets to share notes and observations which will in turn be presented to stakeholders and other interested parties
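In case it helps with that synthesis session, below is a minimal Python sketch of the standard SUS scoring arithmetic (odd-numbered items contribute rating − 1, even-numbered items contribute 5 − rating, and the 0-40 total is multiplied by 2.5). The response rows are invented; in practice you'd export them from the Google Form.

```
# Standard SUS scoring: each row is one respondent's ten 1-5 ratings, in item order.
# The data below is invented for illustration.
respondents = [
    [4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
    [3, 2, 4, 2, 3, 3, 4, 2, 3, 2],
]

def sus_score(ratings):
    # Odd-numbered items (index 0, 2, ...) are positively worded: contribute (rating - 1).
    # Even-numbered items (index 1, 3, ...) are negatively worded: contribute (5 - rating).
    raw = sum((r - 1) if i % 2 == 0 else (5 - r) for i, r in enumerate(ratings))
    return raw * 2.5  # scale the 0-40 raw total to 0-100

scores = [sus_score(r) for r in respondents]
print("individual SUS scores:", scores)           # e.g., [85.0, 65.0]
print("average SUS:", sum(scores) / len(scores))  # e.g., 75.0
```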

 

CITE
Laura Bridges-Pereira. "Fold paper for name tag". YouTube
Susan Farrell. "28 Tips for Creating Great Qualitative Surveys". Nielsen Norman Group


How to Calculate and Interpret an NPS Score

Imagine you've surveyed your users with the NPS question, asking how likely they are to recommend your product on a scale of 0-10. Now you have those results and you'd like to make sense of them. This how-to explains how to calculate your NPS score and how to interpret it.
====
The following chart serves as a reminder of the NPS categories:

0-6 : Detractors

7-8 : Passives

9-10 : Promoters

Here is the formula you will apply to determine the NPS score:

(% of promoters - % of detractors) = NPS

And to help you understand how your product is doing, consider the following rubric:

  • Below 0: your product is failing in terms of customer satisfaction
  • 0-30: your product is doing "quite well" but probably needs improvement
  • 30-70: your product is doing great in terms of customer satisfaction
  • 70-100: most of your customers are brand advocates
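As a concrete (hypothetical) example of the formula and the categories above, here is a minimal Python sketch; the 0-10 responses are invented:

```
# Hypothetical 0-10 responses to "How likely are you to recommend us?"
responses = [10, 9, 9, 8, 7, 10, 6, 5, 9, 3, 8, 10, 7, 9, 2]

promoters  = sum(1 for r in responses if r >= 9)   # ratings of 9-10
detractors = sum(1 for r in responses if r <= 6)   # ratings of 0-6

nps = 100 * (promoters - detractors) / len(responses)
print(f"NPS = {nps:.0f}")   # ~46.7% promoters - ~26.7% detractors = 20 ("quite well")
```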

CITE:

  1. "9 Practical Tips for an Effective NPS Data Analysis and Reporting." Retently (blog) https://www.retently.com/blog/nps-data-analysis-reporting/
    Jennifer Rowe.
  2. "Analyzing your Net Promoter Score℠ survey results (Professional Add-on and Enterprise Add-on)." Zendesk (support article) https://support.zendesk.com/hc/en-us/articles/203981113-Analyzing-your-Net-Promoter-Score-survey-results-Professional-Add-on-and-Enterprise-Add-on-


Graduated! Interaction Design Specialization from UCSD!

I'm so excited to announce my graduation from the University of California San Diego Design Lab. I've spent my weekends over the past 2+ years studying for the Interaction Design Specialization and learned so much from it. Please see my Capstone project on Medium.


Winning the Capital One Hackathon

CapTap is the winning app we designed for unifying communities, leveraging Capital One's new developer APIs. Its customers are the users flocking to the growing real estate market of mixed-use condo living; it provides a secure and seamless transaction experience that benefits the residents, builders, and merchants in these mixed-use developments.

The key technologies driving this application are Vault - Capital One's secure document exchange API - plus Capital One's two OAuth APIs. And the key facets driving the user experience are:

  • simple user profile creation
  • a wallet-free transaction experience.

The CapTap team included 2 Full-Stack Developers, 1 iOS developer, a Scrum Master, a Biz Dev leader and myself (UX Designer/Product Manager).

My participation included facilitating brainstorming and development/design discussions; wireframing flows; UI design and asset preparation; whiteboarding; and presenting the project to the Capital One Hackathon's judges and audience.

Our final deliverables were:
- A web application where a customer enters a code provided by the condo association;
- an iPhone app which:
-- gets paired with the account profile (by detecting a QR code);
-- communicates with a merchant app via a beacon;
-- has a simple interface for accepting a charge from a merchant.

Wondering what the 1st place prize was? An Amazon Echo Show!


Compare and Contrast – A Technique for Better Specs

SUMMARY

By providing compare and contrast opportunities in our specs, we are infusing UX into the UX/UI hand-off experience.

When we are in the process of providing specs for a product enhancement, the developers are already familiar with the product (assuming they built the original product).

Since they are familiar with the current product, it is easier for them to comprehend the desired changes if we provide them with an annotated compare-and-contrast spec. The result is that we start from their current mental model and gently usher them into the new design. The annotations call out what changed, and the side-by-side presentation allows them to

  1. avoid working from memory, and
  2. quickly identify the differences visually.

Compare and contrast is a teaching strategy employed at all levels of curriculum because it is so effective at scaffolding knowledge. By bringing this technique into software development, we can make the process more delightful for everybody involved - especially the engineering staff that needs to interpret the specifications, but similarly the stakeholders that need to understand and sign off on designs before they move to the development stage.

Since employing this technique in my specs, I find that there is much less back-and-forth, much less "mansplaining," and simply a lot more calm around the design handoff process. If you haven't tried it yet, I strongly suggest you do.


What’s the Difference Between a Product Manager and a UX Designer?

In the words of Julie Zhuo, product designer at Facebook, a UX designer is "responsible for the actual design (the flow, the sound bytes/pixels, etc.)" while the product manager is responsible for "coordinating across different teams, getting folks aligned on goals and timelines, inward/outward communication, and analysis of whether a product is actually a 'best product'."

In my own experience, it depends on the size of the company. When Eat Sleep Poop App was just me and some developers, I had to divide my time between doing all the UX work - research, flows, wireframes, prototypes, testing, analytics - and the product management work: finding synthesis between the tech and the design; making sure the product fit into the higher-level goals of the business, such as having a release in time for the holidays; communicating a unified message about the product across different verticals, such as Facebook and the App Store; and adjusting our IAP (in-app purchase) strategy to increase sales.

Now that I have a very talented UX designer working with me, I am freed up to focus more on the product. Of course, the UX is still vital, and I continue to be involved with all the intricacies of the customer experience; but now I’m more able to consider the business aspects of the product - namely growth and sales and their alignment with both tech and design.

A lot of how a small product business such as Eat Sleep Poop App evolves comes down to: "what can we afford to do this month that will produce the most value?" And that value is determined by understanding our customers' needs. A good example is this pending 4th quarter of 2017. While we have spent most of 2016 growing, we did not take the time to do some essential development work on the code base. Now we are looking down the barrel of Apple's release of a brand-new version of Swift, Swift 4, and we are barely catching up to being steadily entrenched in Swift 3. That means that if we continue down the path we've been on for most of 2016, we distance ourselves further from having an updated code base.

So here's some more specificity for you: I received a request from some users to update the Sleep module of Eat Sleep Poop. And believe me, I am dying to give them that enhancement - especially because my analytics show that it's the #2 feature of my app - but if we focused our energies on that enhancement, we would be building it on top of a shaky code base, and with new, untested engineering talent. That could be a big waste of time. Consider the math: each team member spends about 15 hrs/week on the product (it's a side project for all of us); if an engineer who is unfamiliar with the code attempts to build the Sleep enhancement and fails, we could potentially lose a month - roughly 60 hours of that engineer's time. That same engineer could instead be assigned the task of refactoring the code (the migration to Swift 4), which would put her in the position of learning the code base without a ton of risk - while getting us closer to our goal of having a Swift 4 code base.

In the meantime, my UX designer and myself have the time to prepare and test designs at a prototype level - all the while determining which features and enhancements will bring the most value (in addition to the Sleep module enhancement that has already been planned).

Back to the question of how the role of UX designer differentiates itself from the role of product manager: I see the product manager as the quarterback moving the ball down the field, and the UX designer as the running back. They are intrinsically combined; however, the product manager has to strategize at a higher level, one that encompasses the commercial success of the product. In the words of Jeff Lash at SiriusDecisions, "the product manager is responsible for the commercial success of a product, overseeing it from inception/ideation through design, build, launch and growth/enhancement." Whereas before 5% of my time as a UX designer was spent on growth, 30% of my time as a product manager is spent on growth. Similarly, 90% of my energies used to be spent on the mobile app; now about 50% of my energies are spent on alternative applications which can become pillars of Eat Sleep Poop - such as a web application and IoT applications.

Related Articles: A Product Manager's Job. By Josh Elman. Medium. 2013

References:

  1. Julie Zhuo on Quora
  2. Jeff Lash on SiriusDecisions.com


What are Mental Models and How do they Apply to User Experience?

A mental model is a concept in a person’s brain; it’s how they imagine a system works. It’s based on their past experiences and influences how they make decisions in their current experiences. For example, a user that has used a watch will have ideas about how an Apple Watch will work. Additionally, if they’ve used an iPhone to compose a text message using the Messages app, they’ll have a mental model of what the texting experience on an Apple Watch using the Messages app will be.

The challenge we face as user experience designers is achieving an interface that matches a user’s mental model; that way there won’t be a disconnect between how the user imagines the interface will work, and how it actually works.

Referring back to our Apple Watch texting app (Messages), we have to respect the work that Apple did to make the watch app intuitive. They designed an app that hardly works like any app users have interacted with before - and only somewhat like the iPhone Messages app. To use the iPhone app, a user types out a message on a digital keyboard on their phone or iPad. On the watch, however, there is no digital keyboard - so the user has to figure out how to use the app in a different way.

And yet, the design Apple implemented does seem to align with users' mental models. Personally, when I approached the app for the first time, I realized that I had one option to get started: a "long-press" action. At the time, long-press was pretty unheard of and not yet a convention I was accustomed to on any device. But Apple had successfully onboarded me to the idea that that gesture would sometimes be an option. So even though there is no indicator in the Messages list view, I intuited that I should try that gesture - and I was rewarded by the succeeding steps in the flow. Apple has also applied other common patterns that help first-time users adjust to the new interface, such as graying out the send button until a recipient has been chosen and a message has been composed.

In summary, we user experience designers can apply the lessons from Apple's Watch design to our own experiences. Personally, I'm a huge fan of the grayed-out button pattern and apply it regularly; similarly, onboarding is vital when innovating. In my app Eat Sleep Poop, I saw a big jump in user retention after improving my onboarding screens to better communicate the innovative pattern I've implemented on the app's home screen.

Related Articles:

  1. The Secret to Designing an Intuitive UX, by Susan Weinschenk, Ph.D.
  2. Mental Models, by Jakob Nielsen


Usability Testing and the Mistaken Next Button

I recently completed a series of wireframes to communicate the flow of a feature inside a mobile app. When I built these wireframes, my focus was on designing an intuitive experience around the navigation of the product. To make the experience intuitive, I concentrated on 1) the content, and 2) how the user could seamlessly navigate that content to accomplish certain tasks. (For the record, the task was to personalize a greeting card for his client(s).)

When I initiated the usability testing, my aim was to get feedback on whether the navigation was intuitive: had I designed a flow that allowed him to seamlessly navigate the content?

My process for creating the wireframes was to work quickly, copying and pasting artboards within Sketch and combining earlier iterations into new ones. My goal was to be nimble: to not be inhibited by my tools. Thus, my wires were lo-fi, and my Sketch files were simple (e.g., not relying on symbols or components, and using white, black, and a couple of default primary colors).

When I began testing with my user, I was pleasantly surprised by one of the simple responses he had to my wireframes. I had some questions around the placement of the “next” button in the design - but it was not a key element of my research. It was more of a micro detail that would not solve the flow problem - but would enhance the overall user experience if executed properly. For example, the button would not appear until a user had made a selection.

Toward the end of the testing, after having learned a great deal about the usability of the flows, I asked my user about the next button - almost as an afterthought. And the reply I received kind of blew me away. It's almost as if the user couldn't wait for me to ask about it - because it truly stumped him. He pointed out that on the last screen in the flow - one which I'd copied-and-pasted and made a small tweak to - the next button was confusing. Its appearance on that screen confused him because he thought he'd finished the flow, yet the button still read "next." Should he hit next again, he wondered.

I realized immediately that I'd made a mistake. While copying and pasting and rapidly adjusting my wires, I had overlooked a tiny detail - but one that created a massive usability issue for my user.

To recover, I explained that he was absolutely correct - that the next button should not appear in that screen. That screen was the end of his flow. I drew his attention back to what I was testing: did he realize the color of the list items he’d interacted with had updated? Did he feel like he was getting the feedback he needed from the new updated state of the starting screen?

He confirmed that yes, he definitely felt a sense of confirmation and completion. I thanked him and made sure to remove that next button before testing again with another user.