Collection 2
Handbook 3
Topic 1
How to avoid bad recruitment practices

Diagnose, then Optimize

No two research environments are alike. However, the issue of unreliable participation is common across industries, products, and businesses. Getting people to show up and participate in meaningful ways can be an inconsistent, frustrating part of the research process.

But reliable recruitment is more than posting your survey on Facebook or paying a recruitment vendor and hoping for great, informative participation. To make recruitment work for your specific context, you’ll have to recognize what issues are slowing you down. What’s working with your current recruitment process and what’s not? What have you tried? What can’t you try?

You need to diagnose what’s wrong with your recruitment before fixing it.

Many UX researchers know their recruitment isn’t working, but they can’t explain exactly what the problems are. Based on the research behind the Fruitful library & toolkit, three categories of issues plague study recruitment: stakeholder issues, sampling issues, and a final category called here optimizable issues. These categories can hamper your recruitment no matter the industry or business you work in.

Let’s start with stakeholder issues as they can be relatively easy to address in a short amount of time.

Stakeholder Recruitment Issues

While you can take a deeper look into stakeholder issues with research in this Topic, the table below lists some specific issues stakeholders have about recruitment.

Let’s move to the second group of issues that negatively affect study recruitment: how you define, sample, and select participants.

Sampling Recruitment Issues

Sampling issues affect who, where, and how you can recruit. Let’s revisit a diagram from an earlier Handbook in this collection: Recruiting the Most Informative Participants. You can take this diagram and add in sampling issues to get a more realistic view of study recruitment.

(A) Poor Most Informative Participant (MIP) Definition

The very first sampling issue that affects recruitment has to do with the target population or segment. If you poorly define this population (and the Most Informative Participants (MIPs) that “live” in it), every subsequent sampling stage is doomed. Both overly narrow and overly broad population/MIP definitions pose real challenges to your recruitment.

The act of defining how you’ll study or specify something in your studies is known as operationalization (covered more in the Handbook on Study Design).

For example, consider defining the population/MIP as something broad and generic like “all users”. When you recruit, everyone who uses the product technically fits the MIP definition. But you can’t contact everyone to participate, and your generic findings won’t be helpful for making smarter product decisions. If you do try to study “all users”, you’ll spend a ton of time screening and filtering uninformative participants, causing yourself headaches and your stakeholders frustration.

Or consider the opposite: you define the MIP narrowly. If you tried to recruit “bananas with brown spots, who speak Mandarin, who’ve been to Egypt and like to skateboard on Wednesdays”, there might be only a dozen people that fit that definition. You’ll spend time trying to even understand where these participants “live”. Also, your findings from these hyper-specific participants won’t generalize to other populations or segments, making it harder for your stakeholders to find value in your research.

(B) Coverage bias

The next sampling issue affects your sampling frames. Remember a sampling frame represents the people you can readily contact for a study, based on their relevant, recorded, and consented characteristics. But when you look at your sampling frames, you might notice an issue: who you need to contact and who you can contact might not overlap.

Who you need to contact and who you can contact might be two different sets of people.

Sampling frames won’t naturally reflect your population on the variables you believe are relevant to your research questions. When that happens, your sampling frames don’t cover the necessary people in your population. This is known as coverage bias: your sampling frame over- or under-represents specific subgroups within your target population or segment (representativeness is covered further in this Topic).

If you have over- or under-coverage in your sampling frame, you’ll experience issues with recruitment. If a particular subgroup has too many people in the frame (over-coverage), those people are more likely to be selected, which means your final sample will be unrepresentative. If it has too few (under-coverage), you might never learn from informative participants, leading to skewed or inadequate study findings. To address coverage bias, you want as complete a sampling frame as possible (aka as close to a full list of everyone in your population or segment-of-interest as you can get).

Having a complete and representative sampling frame is a difficult, almost impossible goal to reach. Instead, strive for a more complete frame while balancing your recruitment constraints and resources.
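To make coverage bias concrete, you can compare each subgroup’s share of your sampling frame against its share of the target population. Here’s a minimal Python sketch of that comparison; it isn’t from this Handbook, and the subgroup names and shares are entirely hypothetical:

```python
# Hypothetical sketch: quantify coverage bias by comparing subgroup
# shares in the sampling frame against the target population.
# Subgroup names and percentages are illustrative assumptions.

def coverage_gaps(population_shares, frame_shares):
    """Return {subgroup: frame share - population share}.
    Positive values suggest over-coverage, negative values under-coverage."""
    gaps = {}
    for subgroup, pop_share in population_shares.items():
        # A subgroup missing from the frame entirely is total under-coverage.
        frame_share = frame_shares.get(subgroup, 0.0)
        gaps[subgroup] = round(frame_share - pop_share, 3)
    return gaps

population = {"new_users": 0.40, "power_users": 0.10, "churned": 0.50}
frame      = {"new_users": 0.25, "power_users": 0.45, "churned": 0.30}

print(coverage_gaps(population, frame))
```

In this made-up example, power users are heavily over-represented in the frame while new and churned users are under-covered, so a sample drawn from it would skew toward power users.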

(C) Frame decay or stagnation issues

With the rise of digital communities and platforms, more UX researchers are recruiting online. Common sources include Facebook Groups, specific subreddits, LinkedIn channels, and Twitter hashtags. While there’s a ton of engagement across these sources, some communities grow more than others. A Facebook Group for new parents will see a steady stream of new members joining and existing members leaving as their children age.

As a UX researcher, this poses a problem. If new people aren’t joining and participating in a community, the number of people you can contact will stagnate. You can only re-contact someone so many times before they ignore you, block you, or leave the source altogether. And if more people leave than join, your sampling frame starts to decay, leaving you even fewer people to contact.

(D) Limited, inappropriate sampling techniques available

Next are issues with the sampling techniques you have available or can use confidently. Recall that non-random sampling techniques are the default for most UX researchers, but in your quantitative research you want to use random sampling whenever possible.

If you don’t meet the requirements for random sampling, you’ll be limited to non-random sampling techniques, and you won’t be able to establish margins of error for any quantitative estimates (you can read more about margins of error here).
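For reference, the textbook margin-of-error formula for a proportion estimated from a simple random sample can be sketched in a few lines of Python. This is a standard statistical formula, not something defined in this Topic; `p = 0.5` and `z = 1.96` are conventional conservative assumptions (worst-case proportion, 95% confidence):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion from a simple random sample.
    z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the
    most conservative assumption about the true proportion."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(100), 3))  # roughly +/- 10 percentage points
print(round(margin_of_error(400), 3))  # roughly +/- 5 percentage points
```

Note that this formula is only meaningful when the sample really was drawn randomly; applying it to a convenience sample gives a number that looks precise but isn’t defensible.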

(E) Participant Nonresponse

No matter your recruitment efforts, budget, or patience, there’ll always be some portion of people who can’t or don’t want to participate in your research. Even if you’re offering one million dollars to every person in your population-of-interest, you’ll see some amount of nonresponse.

Nonresponse is when people can’t or don’t want to participate in your studies. Nonresponse can also be partial where someone starts your study but chooses to leave or stop participating halfway through. What sounds simple is one of the most irreducible parts of every study you run.

Partial nonresponse is a common issue in quantitative research, especially when the studies are long or mundane.

Nonresponse exists between the sampling pool stage (aka the number of people who could participate) and final sample size stage (aka the number of people who did participate). If you contact only a handful of people, you’ll likely be stressed because every participant that doesn’t show up represents a large portion of the pool.

For example, if you contact five people for an interview and three people don’t or can’t respond, you’ve quickly lost 60% of your possible sample size. But if you contact 100 people and 30 people don’t or can’t respond, you’re still in a good place for recruitment.
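The arithmetic in that example is simple enough to capture in a hypothetical helper (not part of any toolkit, just an illustration of the calculation):

```python
def nonresponse_rate(contacted, completed):
    """Share of the sampling pool (people contacted) who didn't participate."""
    return (contacted - completed) / contacted

# The two scenarios from the example above:
print(nonresponse_rate(5, 2))     # 0.6 -> lost 60% of a tiny pool
print(nonresponse_rate(100, 70))  # 0.3 -> still 70 participants to learn from
```

The same 60%-versus-30% contrast shows why rates based on a small sampling pool are so volatile: each individual no-show moves the rate by a huge step.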

A nonresponse rate of around 70-80% (meaning only 20-30% of the sampling pool completed your study) is typical, and anything lower than that is actually a good thing. Expect high nonresponse if you work at a startup or at a company whose brand is unrecognizable.

If you regularly see a 30%+ response rate, you’re doing better than most UX researchers around the world.

Nonresponse rates are notoriously hard to predict. From study to study, you’ll see different levels of participation. But that doesn’t mean you can’t learn how to address it. Track your nonresponse rates and see if you can identify where you can make meaningful changes to your recruitment processes.

Below is a simple chart you can create to measure and address nonresponse issues (note that response – and nonresponse – rates are based on your sampling pool or the number of people you contact to participate).

The column labelled “Recruitment/marketing awareness” is where you need to experiment to address nonresponse issues. Iterate on and test your recruitment screeners. Make your recruitment language more action-oriented or, better yet, use the language your MIPs use to describe the problem.

If you can, keep track of the recruitment language you use and how effective it is. You need to take an active part in understanding why potential participants won’t or can’t take the time to be involved in your research.

Below is a list of strategies you can try to address these sampling issues. You’ll notice that some strategies are listed under several issues: sampling and recruitment are intertwined, so small changes or improvements to one aspect of your sampling can have big effects across several recruitment issues.

From an efficiency perspective, that’s good for you because UX researchers rarely have the time or budget to try several solutions at a time. Pick the most taxing issue and try to implement one strategy to address it in the next 90 days.

Sampling issues can take time to fix and educating stakeholders about recruitment isn’t a linear journey. So, let’s end this Topic by reviewing recruitment issues that you have more control over.

Optimizable Recruitment Issues

Below is a list of recruitment issues you have some amount of control over. You can make decisions before or during recruitment to maximize your resources and time. Note that, depending on your specific context, some of these issues will be more or less optimizable. You have to decide what can and can’t be improved and what actions to take to improve them.

Recruitment Issues You Can Optimize Over Time
  • (A) Uninformative participation interest
  • (B) Short-term recruiting practices
  • (C) Friction in the participant experience

(A) Uninformative Participation Interest

The first optimizable recruitment issue is mostly manageable. When you recruit, you want to reduce or remove as many uninformative participants as possible. In practice, however, this is easier said than done. No matter how intensive your recruitment screeners and efforts are, you’ll likely see some people who have nothing to do with your MIP definition or research questions.

There are three groups of uninformative participants you want to be aware of and look to limit: the trolls, the “professionals”, and the unrelated.

Check out this YouTube video where a past participant offers his review of using DScout to get paid, alongside tips to get enrolled/qualified in more studies to make money.

A quick note on “professional” participants: expect more of them. With more and more “take this survey to get $10” platforms popping up, people are quickly recognizing that UX research can be a somewhat regular source of income. While they’re not all malicious, do recognize that their goal is to maximize their earnings.

Their involvement in a study isn’t necessarily to help you and your stakeholders build a better product. While participants should be compensated fairly (this idea is covered deeper in Topic 3 and Topic 4 of this Handbook), recognize that not all interested participants should be selected or learned from.

Below is a short list of behaviors or indicators you can look for to see if there are any “professional” participants in your sampling frames or within your recruitment screener data.

Behaviors to Help Identify Professional Participants
  • Multiple attempts to enroll in a study
  • Multiple accounts that come from same IP address
  • Noticeably shorter time-to-complete a screener than other respondents
  • Large number of studies completed (overall or recently) relative to the majority of your participants
  • Someone has participated in studies that are unrelated or contradictory (“meat-lovers” and “vegan” surveys completed)
  • Someone only participates in studies with financial compensation (typically higher rates or from very recognizable brands)
  • Participates mostly in unmoderated research (such as surveys, unmoderated testing, etc.)
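Several of the behaviors above can be checked programmatically against your screener data. Below is a hypothetical Python sketch of such a check; the field names and thresholds are illustrative assumptions you would tune against your own data, not a standard:

```python
# Hypothetical sketch: flag likely "professional" participants from
# screener data. Field names and thresholds are illustrative only.
from collections import Counter

def flag_professionals(respondents, median_screener_seconds):
    """Return [(respondent id, [reasons])] for respondents matching
    one or more of the behaviors listed above."""
    flagged = []
    ip_counts = Counter(r["ip"] for r in respondents)
    for r in respondents:
        reasons = []
        if r["enroll_attempts"] > 1:
            reasons.append("multiple enrollment attempts")
        if ip_counts[r["ip"]] > 1:
            reasons.append("shares an IP with another account")
        if r["screener_seconds"] < 0.5 * median_screener_seconds:
            reasons.append("unusually fast screener completion")
        if r["studies_completed_90d"] > 10:
            reasons.append("high recent study volume")
        if reasons:
            flagged.append((r["id"], reasons))
    return flagged

respondents = [
    {"id": "a1", "ip": "1.2.3.4", "enroll_attempts": 3,
     "screener_seconds": 20, "studies_completed_90d": 14},
    {"id": "b2", "ip": "5.6.7.8", "enroll_attempts": 1,
     "screener_seconds": 95, "studies_completed_90d": 2},
]
print(flag_professionals(respondents, median_screener_seconds=90))
```

None of these signals is conclusive on its own; treat a flag as a prompt for manual review rather than automatic exclusion.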

(B) Short-term Recruiting Practices

Next are the issues that make your recruitment unsustainable. Your recruitment efforts and behaviors from one study make it harder — or impossible — to recruit in the next. Without thinking ahead, you can’t shift your recruitment from ineffective to reliable.

You can’t build a recruitment strategy when you’re thinking about one study at a time.

The table below lists common recruiting practices that might work in the short-term but are challenging to use over time.

(C) Friction in the Participant Experience

This final optimizable issue is actually the focus of the entire next Topic (as well as of Guide 16). But let’s introduce the participant experience here and how you can improve it.

Briefly, the participant experience is what it’s like to be a research participant in one of your studies. It’s about understanding your study from their point-of-view. There are four phases in the participant experience as shown below.

Whether you run a qualitative or quantitative study, every participant will go through these phases. However, if there’s a lot of friction between and within the phases, you’ll struggle to get meaningful participation. You can jump to the next Topic to learn about friction and the phases, or use the guide below.

Guide 16: Mapping the Participant Experience

To keep this Topic coherent, actionable strategies to lower friction in the participant experience are listed below.

It can feel overwhelming to learn about so many possible issues with recruitment for one study alone. To make things easier, use the two tables and the guide above to review possible, approachable strategies to address your recruitment issues. There are lots of things you can do but the advice here is simple: pick one clear recruitment issue and try something small to fix it. You can’t build a better recruitment process without investment. Just be smart about what you try and when.

In the next Topic, let’s take a closer look at how someone goes from being unaware about your research all the way through completing a study. Let’s break down the participant experience.
