How to Conduct Mobile UX Research (and What to Do with It)

The number of people browsing and shopping on mobile devices continues to grow. During the past 10 years, smartphone and tablet browsing has increased to more than 50% of all web traffic.

[Image: mobile internet usage chart]

As for ecommerce, mobile is projected to account for 54% of online sales by 2021. And, for their part, Google has switched to mobile-first indexing, pulling ranking signals from the mobile version of websites.

These realities focus attention on developing websites and UX for the mobile user first. And yet, when we asked participants in a recent webinar if they considered their company to be “mobile first,” 65% said, “No.”

This post does three things:

  1. Identifies why companies struggle with mobile UX;
  2. Highlights mobile UX research methods;
  3. Details how to prioritize testing based on mobile UX research.

Not all mobile research tactics differ from “traditional” UX research methods. However, those that do are essential for avoiding common missteps and joining the (lucrative) ranks of mobile-first companies.

Why companies struggle with mobile UX

Mobile UX mistakes usually result from one of four issues:

  1. Siloed teams;
  2. Data traps;
  3. Technical blind spots;
  4. Customer blind spots.

Siloed teams. Teams are often built around channels instead of journeys. Structuring teams around the user journey ensures that the customer experience is seamless.

Data traps. These are often Net Promoter Score (NPS) traps. Ultimately, it’s using the wrong measurements or the wrong tools to track what’s happening at different stages of the user journey.

Companies often try to force NPS onto the mobile experience—but it’s just not the right measurement for the purpose. It can put product teams under pressure to boost NPS when they don’t have the information they need to optimize their blind spots.

Technical blind spots. Research on desktop often doesn’t make its way to the mobile experience. Mouse tracking and session recordings—commonly used on desktop—are rarely deployed on mobile sites. That leaves blind spots for mobile optimization.

Customer blind spots. Involving the user in mobile site development has gone from “nice to have” to absolutely essential. Even Google asks for customer feedback:

[Image: Google mobile search result satisfaction survey]

Despite a near-infinite amount of quantitative usage data, Google still feels it’s valuable to survey their users. And yet, when we asked recent webinar attendees, “Are you currently capturing feedback from your mobile users?”, nearly half (46%) answered, “No.”

So where should you start? First, you need to know how to capture customer feedback correctly.

Mobile UX research methods: How do you capture feedback on mobile devices?

We’ve previously written about Voice of Customer (VoC) data in the context of copywriting: word-for-word translations of qualitative feedback into marketing copy.

But it has a broader definition that encompasses many user-centric research methods. Still, there’s a lot of confusion about how to set up VoC campaigns, especially on mobile devices—screens are small and users are in a hurry.

The process can be broken down into three components:

  1. How to ask;
  2. Whom to ask (and what to ask them);
  3. When to ask.

1. How to ask

Organizations often obsess over NPS, but it’s a metric that sits at the top of the research pyramid (pictured below). It’s not easy to move without understanding what’s going on at a deeper level.

Starting from the bottom delivers a granular understanding of the customer and puts you in position to take action that will move NPS in the right direction.

[Image: research methods for websites] An example of how to set up a Voice of Customer research campaign, from Usabilla.

Surveys at different levels of the pyramid require different approaches and different tools, something we’ve learned at CXL Agency over years of user research:

[Image: survey methods for mobile sites] What are you trying to learn? The answer helps determine which research tool you should use.

Here’s a deeper dive into each method, based on what you’re trying to understand about your mobile customers.

Understand users and perceptions

AdoreBeauty is an Australian beauty and cosmetics brand. They’ve been an agency client for over three years, and we’ve run a variety of experiments over that time.

At one point, we got stuck at what we thought was a local maximum. To move past it, we surveyed their customers and benchmarked the results against competitors. We used a format similar to this mobile UX competitive benchmark survey of top brands in the beauty and cosmetics space.

Users were asked to perform one task on the website (find and buy an item), then rate their experience on a number of indicators. We also included a couple of open-ended questions in which they used their own words to describe their motivation and perceived benefits of choosing one site over another.

Their answers offered behavioral cues: hints of anxiety and uncertainty that affected the purchasing decision. Importantly, benchmarking established a baseline so that we could measure subsequent changes in perception. (Benchmarking is also useful before a major site redesign or rebrand.)

[Image: Usabilla mood score]

There are simpler methods of benchmarking. Usabilla has a benchmarking tool they call a “mood score.” They ask users how they “feel” about the website and then compare that feedback across websites in the same industry.

Competitive UX benchmarking gives deeper insights but comes at a greater cost.

Understand friction

When we talk about gathering data from customers, most people think about user testing. User testing, however, is not a Swiss Army knife—it has utility but also limitations. User testing is best for identifying friction in your UI or UX. It doesn’t explore user perceptions, motivations, or fears.

Understand motivation

Motivation is the biggest driver of sales. A sound understanding of users’ motivation is your best bet for increasing your bottom line. To uncover motivation, set up surveys and polls to ask for feedback from users who just bought from you.

Typical questions for this type of micro-survey are:

  1. What mattered to you most when buying [product] online?
  2. What made you choose [website] over other online stores?
  3. Please list the top 3 things that made you purchase from [website].
  4. What did you like most about your shopping experience with [website]?

What can you expect to learn? A car insurance company, a Usabilla client, found out through a micro-survey that most mobile users were not visiting their website with the intent to purchase insurance. Instead, mobile visitors were trying to figure out the cost of the insurance to help budget for a new car.

Understand fears/uncertainties/doubts (FUDs)

Intercept polls (more on those later) are great for understanding FUDs. The aim of an intercept poll is to get a detailed description of the user’s state of mind while going through the online experience.

This feedback is extremely valuable because it’s usually given in the natural language of the customer. In addition to helping improve mobile UX, it can improve the copy on your sales pages.

Each method of user feedback offers a unique perspective on user behavior. You should start with at least one method, but, in the end, you need them all. That leads us to the next question: Whom should you ask?

2. Whom to ask (and what to ask them)

You shouldn’t ask for feedback from just anyone who visits your site on mobile. There are two categories of users you want to ask for feedback:

  1. Those who just bought;
  2. Those who visited a sales page but didn’t buy.

Asking other users can give insights on how to introduce different paths to existing funnels, or how to develop different messaging. However, information from that segment is likely to be noisy and less useful than feedback from the two groups listed above.

Asking users who just purchased, “What is the one thing that almost kept you from buying?” will give you answers about a variety of fears and friction points.

[Image: Tommy Hilfiger micro-survey]

You can also ask “What did you like about the check-out process?” Just because users “liked” an aspect of the process doesn’t mean it was perfect. But you’ll learn which aspects customers pay attention to. That knowledge, in turn, is an opportunity to focus improvements on aspects that customers care about the most.

(Positive feedback can also foster growth. Once a customer leaves positive feedback, your survey software can display a pop-up that prompts happy purchasers to rate your product on a site or app store.)

The non-purchaser segment will contain a lot of people who are just curious. The information you get from them should be taken with a grain of salt. In general, feedback from paying customers is much more valuable than feedback from someone who wasn’t willing to buy.

Some questions you want to ask:

  • What were you looking to accomplish with this visit?
  • Did you have the intention to buy when you started browsing the sales page?
  • Does the product/service meet your needs?
  • What kept you from buying?

Since the user could drop off at any time during the survey, ask the questions in a multi-step form and save each answer as it’s entered. That said, users who complete the entire survey will usually leave the most valuable feedback.
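
As a rough illustration, here’s a minimal sketch (in TypeScript) of a multi-step survey that saves each answer the moment it’s entered. The /api/survey-answers endpoint, question texts, and function names are hypothetical, not tied to any particular survey tool.

```typescript
// Minimal sketch: a multi-step mobile survey that saves each answer as soon as
// it's entered, so partial responses survive even if the visitor drops off.
// The /api/survey-answers endpoint and the question list are hypothetical.

interface SurveyQuestion {
  id: string;
  text: string;
}

const questions: SurveyQuestion[] = [
  { id: "goal", text: "What were you looking to accomplish with this visit?" },
  { id: "intent", text: "Did you have the intention to buy when you started browsing?" },
  { id: "fit", text: "Does the product/service meet your needs?" },
  { id: "blocker", text: "What kept you from buying?" },
];

// Persist a single answer immediately; never wait for the full survey.
async function saveAnswer(sessionId: string, questionId: string, answer: string): Promise<void> {
  await fetch("/api/survey-answers", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sessionId, questionId, answer }),
    keepalive: true, // helps the request complete if the page unloads mid-survey
  });
}

// Walk through the questions one step at a time (UI rendering omitted).
// `ask` resolves to the visitor's answer, or null if they abandon the survey.
async function runSurvey(
  sessionId: string,
  ask: (q: SurveyQuestion) => Promise<string | null>
): Promise<void> {
  for (const question of questions) {
    const answer = await ask(question);
    if (answer === null) return; // drop-off: answers so far are already saved
    await saveAnswer(sessionId, question.id, answer);
  }
}
```

Saving per step means that even a visitor who answers only the first question still contributes usable feedback.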

3. When to ask

Mobile visitor attention spans are short. If you ask customers what they think about your sales page a week after a purchase, they’ll barely remember the cost of your product. The best time to ask for feedback is on the spot, right after the action.

Following a successful purchase, use intercept polls or your thank-you page to ask for feedback. For visitors leaving a sales page without buying, use exit surveys.

[Image: sales page survey on a mobile site]

You can also retarget non-purchasers on social media in the first 24 hours following a visit. Doing so can even bring in extra sales—every time users return to your site, you have another chance to sell to them.

To increase response rates on mobile devices, ask only relevant, time-sensitive questions. No one wants to fill out a questionnaire, but many visitors will answer a specific question if it relates to what’s on their mind at that moment. Usabilla calls this the “micro-survey” approach.

[Image: mobile site survey]

Asking your visitors questions should be part of an ongoing process, not a one-time campaign. Micro-surveys are like taking the pulse of a patient with a heart condition—you want to monitor it continuously.

This is exactly what Google was doing in the earlier example. They asked users for feedback right on the search results page: one question, immediately after they delivered search results.

On mobile devices especially, don’t waste time with an explanation of the survey—jump straight to the question. A Usabilla experiment showed much higher engagement with a no-introduction micro-survey (right) compared to a version with a survey introduction (left).

[Image: mobile survey comparison]

Data alone can make for a nice presentation and pretty visuals. But it won’t make you any money. So once you have your data, how do you put it to work?

How to use mobile research data: A process to prioritize testing

According to our recent report on the State of Conversion Optimization, one of the biggest struggles for CRO practitioners is establishing a good optimization process—putting their research data to work.

There are a number of optimization processes out there. At CXL Agency, we’ve developed a process that we’ve honed through thousands of tests. Here it is in a nutshell:

  1. Collect quantitative and qualitative data.
  2. Process the data.
  3. Interpret the data.
  4. Create test hypotheses.
  5. Define testing conditions and limits.
  6. Prioritize test hypotheses.
  7. Implement selected tests.
  8. Review the results of testing.
  9. Learn from the results.
  10. Rinse and repeat from Step 3.

Diving more deeply into this process is out of scope for this article. But we want to stress the importance of prioritization. Proper prioritization makes the most of your resources and helps you run meaningful tests that can generate the biggest impact.

Prioritization starts with a list of candidate tests. For each test, a few indicators are evaluated and scored; the sum of those scores determines the test’s “potential impact.”

ICE is a simple prioritization framework that scores tests on Impact, Confidence, and Ease (of implementation):

[Image: ICE prioritization framework]

Another popular prioritization framework is PIE, which scores tests on Potential, Importance, and Ease (of implementation).

[Image: PIE prioritization framework]

A limitation for both frameworks is that, for any particular test, the score assigned to each indicator is arbitrary, subject to biases and errors. To overcome this challenge—and base more decisions on objective facts—we developed a proprietary prioritization framework we call PXL.

[Image: PXL prioritization framework]

The PXL framework is a variation of ICE. Its advantage is that the scores for Impact and Confidence are determined by granular, binary indicators. Effort (ease of implementation) is still scored as it is in ICE or PIE.

Those binary indicators are:

For impact:

  • Is it above the fold? (yes = 1; no = 0)
  • Is it noticeable within 5 seconds? (yes = 2; no = 0)
  • Is it adding or removing an element? (yes = 2; no = 0)
  • Is it designed to increase user motivation? (yes = 1; no = 0)
  • Is it running on high-traffic pages? (yes = 1; no = 0)

For confidence:

  • Does it address an issue discovered via user testing? (yes = 1; no = 0)
  • Does it address an issue discovered via qualitative feedback? (yes = 1; no = 0)
  • Does it address an issue discovered via digital analytics? (yes = 1; no = 0)
  • Is it supported by mouse tracking, heatmaps, or eye tracking? (yes = 1; no = 0)

(To answer the four questions above, you need to conduct research with all four methods. In other words, PXL works only if you have a sound data collection framework like the one detailed above.)
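
To make the mechanics concrete, here’s a minimal sketch (in TypeScript) of how a PXL-style score could be computed from the binary indicators above. The point values mirror the lists in this post; the field names, the ease scale, and the way everything is combined into one total are simplified for illustration and may differ from the actual spreadsheet.

```typescript
// Sketch of a PXL-style score built from the binary indicators listed above.
// Point values mirror the lists in this post; combining them with a single
// team-estimated ease score is a simplification for illustration.

interface PxlCandidate {
  name: string;
  // Impact indicators
  aboveTheFold: boolean;          // yes = 1
  noticeableWithin5s: boolean;    // yes = 2
  addsOrRemovesElement: boolean;  // yes = 2
  increasesMotivation: boolean;   // yes = 1
  highTrafficPages: boolean;      // yes = 1
  // Confidence indicators
  foundViaUserTesting: boolean;         // yes = 1
  foundViaQualitativeFeedback: boolean; // yes = 1
  foundViaDigitalAnalytics: boolean;    // yes = 1
  supportedByMouseTracking: boolean;    // yes = 1
  // Effort/ease is still a judgment call (e.g. 1 = very hard ... 5 = trivial)
  ease: number;
}

function pxlScore(c: PxlCandidate): number {
  const impact =
    (c.aboveTheFold ? 1 : 0) +
    (c.noticeableWithin5s ? 2 : 0) +
    (c.addsOrRemovesElement ? 2 : 0) +
    (c.increasesMotivation ? 1 : 0) +
    (c.highTrafficPages ? 1 : 0);

  const confidence =
    (c.foundViaUserTesting ? 1 : 0) +
    (c.foundViaQualitativeFeedback ? 1 : 0) +
    (c.foundViaDigitalAnalytics ? 1 : 0) +
    (c.supportedByMouseTracking ? 1 : 0);

  return impact + confidence + c.ease;
}
```

With every candidate scored the same way, ranking the backlog becomes a sort by score rather than a debate.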

Finally, when evaluating candidate tests, it is essential to “simulate” their impact. We do this with a simple Excel-based tool we call the Test Bandwidth Calculator.

Using quantitative estimates, the tool returns the potential monetary impact of the test and how it compares to other test candidates. If PXL identifies which tests you should run, the Test Bandwidth Calculator tells you where you should run them.
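
The calculator itself is an Excel file, but the core arithmetic can be sketched in a few lines. The example below (in TypeScript) is a simplified illustration, not the actual tool: it estimates the monthly monetary impact of a test from page traffic, baseline conversion rate, average order value, and an expected uplift.

```typescript
// Simplified "test bandwidth" simulation: estimate and compare the monthly
// monetary impact of candidate tests. The formula, inputs, and example numbers
// are illustrative, not a copy of the Excel-based Test Bandwidth Calculator.

interface TestCandidate {
  name: string;
  monthlyVisitors: number;        // traffic to the page(s) under test
  baselineConversionRate: number; // e.g. 0.021 for 2.1%
  averageOrderValue: number;      // revenue per conversion
  expectedUplift: number;         // relative lift, e.g. 0.04 for +4%
}

// Extra monthly revenue if the test wins with the expected uplift.
function estimatedMonthlyImpact(t: TestCandidate): number {
  const baselineRevenue =
    t.monthlyVisitors * t.baselineConversionRate * t.averageOrderValue;
  return baselineRevenue * t.expectedUplift;
}

const candidates: TestCandidate[] = [
  { name: "Mobile checkout reassurance copy", monthlyVisitors: 80_000, baselineConversionRate: 0.021, averageOrderValue: 55, expectedUplift: 0.04 },
  { name: "Simplified product filters", monthlyVisitors: 150_000, baselineConversionRate: 0.018, averageOrderValue: 48, expectedUplift: 0.02 },
];

// Rank candidates by potential monetary impact.
candidates
  .map((c) => ({ name: c.name, impact: estimatedMonthlyImpact(c) }))
  .sort((a, b) => b.impact - a.impact)
  .forEach((c) => console.log(`${c.name}: ~$${c.impact.toFixed(0)} / month`));
```

A fuller simulation would also weigh available testing traffic and how long each test needs to run; even this rough arithmetic, though, shows which candidates deserve a testing slot.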

[Image: Test Bandwidth Calculator]

Conclusion

Mobile browsing and online shopping continue to grow. As such, it’s crucial to focus on improving the mobile user experience. Getting “live” feedback from recent purchasers and near-purchasers provides a foundation of research.

But you can’t rely on a single method or ask random questions. Proper VoC data collection requires a good process that ensures you:

  • Know which methods yield the right kind of data.
  • Know which users to ask, and what to ask them.
  • Know when to ask those questions.

Once you collect that data, you can use it as the basis for an experimentation program. The PXL framework can help prioritize which tests to run, and a Test Bandwidth Calculator can help you understand where to run them for maximum impact.



