Guide 16
Mapping the Participant Experience
A process to identify recruitment bottlenecks and discover ways to recruit faster and with more precision
Trigger
Review regularly between studies to map any changes to the participant experience; share and discuss with stakeholders to show how a better participant experience leads to faster, more meaningful UX research
The participant experience funnel

For more about this diagram, jump to this Topic (“What is the Participant Experience?”)

Steps:

  • Pull all relevant recruitment and participation data for the last 3-4 studies
  • Walk through each of the phases below, using the tables (and other Handbooks & Topics) to answer the relevant questions
  • Take screenshots, draw pictures, and write responses to the questions to keep track of what you’re discovering
  • Mark down which phases pose the greatest challenge/threats to reliable recruitment and identify 1-2 action steps to try in the next 90 days to address these challenges
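To spot the phases that pose the greatest threats to reliable recruitment, it helps to compute the drop-off between funnel phases from the data you pulled. A minimal sketch, assuming your recruitment data can be exported as per-study counts (the study names and field names here are hypothetical placeholders):

```python
# Hypothetical per-study funnel counts; replace with your own exported data.
studies = {
    "Study A": {"aware": 400, "qualified": 60, "engaged": 35, "compensated": 33},
    "Study B": {"aware": 250, "qualified": 80, "engaged": 25, "compensated": 20},
}

PHASES = ["aware", "qualified", "engaged", "compensated"]

def funnel_rates(counts):
    """Return the phase-to-phase conversion rate for one study."""
    rates = {}
    for prev, nxt in zip(PHASES, PHASES[1:]):
        rates[f"{prev} -> {nxt}"] = counts[nxt] / counts[prev]
    return rates

for name, counts in studies.items():
    for step, rate in funnel_rates(counts).items():
        print(f"{name}: {step}: {rate:.0%}")
```

The phase with the lowest conversion rate across your last 3-4 studies is a good candidate for the 1-2 action steps mentioned above.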

Sampling Frames

  • Where can you actually recruit from?
  • How would you describe these sampling frame(s)? (use the table below to help you understand the quality of each accessible sampling frame)
  • In which frame(s) do you typically see high participation rates? How often can you recruit from these frame(s)?
  • Jump to this Topic for more on dealing with sampling issues

Recruitment Criteria

  • For your last 3-4 research studies, write down all of your recruitment criteria (use the table below to help describe your criteria clearly).
  • Which criteria did you find yourself struggling to satisfy? Can you identify any reasons why?
  • How do you communicate unsatisfactory recruitment to your stakeholders? How early or quickly do you communicate recruitment issues to your stakeholders?
  • Which criteria were relatively easy to satisfy? Do these criteria stay easy to satisfy from study to study? Why or why not?
  • Which criteria do your stakeholders expect or demand that you satisfy or not change?

Phase 1

Study Awareness

  • What are the explicit or managed ways a potential participant becomes aware of your research study?
  • Do you raise study awareness inside the product, service, or interaction? Why or why not?
  • Do you use targeted or general emails to raise study awareness? How effective are these emails for recruitment?
  • How do you know if someone has become aware of your study request?
  • Do you make it easy for potential participants to recognize if your study is upcoming, active, or closed?
  • Do you make it easy for potential participants to get in contact with you?
  • Do you clearly communicate what your study compensation is? Do you clearly state how and when completed participants will receive their compensation?
  • Do you communicate if there are any rules or restrictions on where and how completed participants can use that study compensation?
  • What are some strategies you use to raise the number of potential participants who become aware of your study?
  • What information do you provide or communicate to potential participants about your study? (use the list below for the minimum details to provide)
Study Recruitment Main Points to Cover
  • The purpose of the study or how any collected data will be used
  • The primary research questions you’ll be studying
  • A brief description of how participants will provide data (e.g., via an interview, survey, etc.)
  • A brief description of the Most Informative Participant (MIP)
  • A study timeline (including when it starts, ends, and when compensation can be expected)
  • Study compensation (and how such funds will be sent)
  • Your contact information (such as name, email, and/or phone number, alongside the name of the business)
  • How to sign up or get enrolled to participate
  • (if relevant) How provided data will be anonymized or kept confidential

Phase 2

Study Qualification

  • What are the explicit steps or actions a potential participant must take to qualify or get selected to participate?
  • What happens to people who don’t qualify or are screened out?
  • Do you use a recruitment screener? How often have you tested and improved that screener? Do you use a generic screener across studies?
  • How restrictive or narrow is your screening logic? Is it really hard for anyone to become qualified?
  • How easy is it to guess or manipulate your screener to become qualified for your study?
  • How often do you use moderated scheduling for qualitative sessions? (see the table for more) Can you use an unmoderated scheduling tool (like Calendly) instead?
  • How quickly do you respond or confirm a participant for a moderated, qualitative session?
  • Of the participants you schedule/confirm for qualitative sessions, how many are no-shows (meaning they don’t come to their scheduled session)?
  • For your last 3-4 studies, how much manual time and effort did you devote to qualifying, screening, and scheduling participants? What steps can be automated or done asynchronously? (use the table below for help)
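The no-show question above is easier to answer with a running tally. A minimal sketch, assuming you keep a simple log of scheduled qualitative sessions (the record structure is a hypothetical example, not a prescribed format):

```python
# Hypothetical session log: one record per scheduled qualitative session.
sessions = [
    {"study": "Study A", "attended": True},
    {"study": "Study A", "attended": False},  # no-show
    {"study": "Study B", "attended": True},
    {"study": "Study B", "attended": True},
]

def no_show_rate(records):
    """Share of scheduled sessions where the participant never showed up."""
    if not records:
        return 0.0
    missed = sum(1 for r in records if not r["attended"])
    return missed / len(records)

print(f"No-show rate: {no_show_rate(sessions):.0%}")
```

A rising no-show rate from study to study is one signal that your qualification or scheduling steps deserve a closer look.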

Phase 3

Study Engagement

  • Do you validate or confirm the screening/qualifying logic for qualitative sessions?
  • Do you obtain informed consent before recording or collecting any data? (You can read more about informed consent here).
  • Do you explain to participants that their participation is voluntary and they can leave at any time?
  • Do you explain if the collected data will be confidential or anonymized? (You can read more about the differences between the two terms here) Do you allow participants to remain anonymous if they want?
  • Do participants need to download or bring something with them to start providing data? If so, how easy or difficult is it to do so?
  • How much does your study design stress someone’s abilities? (You can learn more about this with this paid book or the list below)
  • If someone is late to their session, how often do you email, call, or message them with a polite reminder? If needed, can you conduct your session asynchronously?
  • Do you provide directions if participants have to meet you at a specific location?
  • If you’re going to someone’s home, office, or personal space, do you get approval from your participant and/or other relevant parties before starting the session?
  • Do you explain or inform participants that other, non-speaking observers might be in their session? If observers make a participant uncomfortable, can you ask observers to leave the session?
  • If you’re going to discuss taboo or sensitive topics, do you build rapport? Do you offer participants a chance to write or type their responses instead of speaking them out loud?
  • If you’re remote, is your background clean and tidy? Is it easy to hear and see you? Do you control or minimize any background noises? Do you have adequate and/or professional lighting?
  • For longer sessions, do you offer participants a 5-10 minute mental break?
  • For longer sessions, do you give the participant an update on how many topics are left? (Such as “Only 2 topics and 10 questions left!”)
  • For unmoderated research, do you make it easy for participants to get in contact with you if they have a question or issue?

Tips to Reduce Participant Drop-off
  • Limit or reduce the demands on a participant’s memory (i.e., don’t require people to recall info from months ago)
  • Reduce or remove as many steps to get started (e.g., don’t force participants to download additional software to participate, driving to an on-site location, etc.)
  • For longer studies, offer a clear sense of progress (e.g., “Only 5 more survey questions!”, “3 more usability tasks left!”, etc.)
  • Keep any study prompts or questions understandable and relevant to your MIP
  • Repeat directions or instructions (assuming people won’t read)
  • Provide an easy way to get in contact with you if any issues arise
  • Keep qualitative sessions to 45-75 minutes and avoid/limit sessions longer than 90 minutes
  • Keep diary studies to about a week, with only 2-3 contacts/entries per day
  • Limit surveys to only 10-12 questions
  • Limit usability testing sessions to 20-30 minutes
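The numeric limits in the tips above can be encoded as a simple pre-launch check. A sketch under the assumption that you describe each study plan as a small dictionary (the plan structure and key names are hypothetical):

```python
# Limits taken from the tips above.
LIMITS = {
    "qualitative_session_minutes": 90,  # avoid sessions longer than 90 minutes
    "diary_days": 7,                    # keep diary studies to about a week
    "diary_entries_per_day": 3,
    "survey_questions": 12,
    "usability_session_minutes": 30,
}

def check_study_plan(plan):
    """Return a list of warnings for any limit the plan exceeds."""
    warnings = []
    for key, limit in LIMITS.items():
        value = plan.get(key)
        if value is not None and value > limit:
            warnings.append(f"{key} = {value} exceeds the suggested limit of {limit}")
    return warnings

plan = {"survey_questions": 15, "qualitative_session_minutes": 75}
for w in check_study_plan(plan):
    print(w)
```

Running a check like this before launch makes drop-off risks visible while the study design can still change.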

Phase 4

Study Compensation

  • Do you use a lottery-based compensation structure? (such as “Complete this survey for a chance to win a $25 Amazon gift card”) If so, why?
  • Was compensation delivered accurately and on time?
  • If relevant, was prorated or partial compensation delivered? (Such as 80% of allotted compensation for a survey that’s 80% completed)
  • Could participants change how they wish to get compensated?
  • Does your offered compensation meet or exceed the qualities listed below?
  • How is delivered compensation tracked or managed?
  • How do participants feel about the current compensation options? Do they feel it’s fair? What other options or rates would they prefer to see? Why?
  • How would participants feel if compensation was removed? How would they feel if only non-financial compensation was offered?
  • Do you ask current participants to help ideate or create more exciting or appropriate compensation options?
  • Do you offer study compensation for interviews, surveys, and usability tests? (use the tables below for help)
  • Do you use the median hourly wage (MHW) as a reference point for compensating participants fairly?
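The prorated-compensation and MHW questions above come down to simple arithmetic. A minimal sketch; the $22 wage and the 2x multiplier are illustrative assumptions for the example, not recommended rates:

```python
def mhw_compensation(median_hourly_wage, session_minutes, multiplier=2.0):
    """Compensation anchored to the median hourly wage (MHW).

    A multiplier above 1.0 reflects the extra effort of research
    participation relative to ordinary work.
    """
    return round(median_hourly_wage * (session_minutes / 60) * multiplier, 2)

def prorated(full_amount, completed_fraction):
    """Partial compensation, e.g. 80% of the amount for an 80%-completed survey."""
    return round(full_amount * completed_fraction, 2)

# Illustrative numbers only -- not recommended rates.
full = mhw_compensation(median_hourly_wage=22.00, session_minutes=60)
print(full)                  # 60-minute session at a $22 MHW, 2x multiplier
print(prorated(full, 0.8))   # survey that is 80% complete
```

Anchoring the rate to a published MHW figure keeps compensation defensible when stakeholders ask how the number was chosen.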

Qualities of Appropriate Study Compensation
  • Matches the brand of the business (a bigger/well-known company means a larger incentive)
  • Matches the study’s time demands (longer studies mean a larger incentive)
  • Matches the study’s effort (more intensive studies mean a larger incentive)
  • Matches the comfort needed to participate or how intrusive the study is (less comfort means a larger incentive)
  • Easy for participants to receive (quickly, reliably, and doesn’t create a logistical nightmare for you to disburse or manage)
  • Easy for you/your team to track (to see what works and what doesn’t and record for financial accounting purposes)

For interview research
For survey research
For usability testing research

For more on the median hourly wage (MHW) strategy, jump to this Topic.

After the Study

  • Is there a way for participants to give feedback on their participant experience? If so, how much friction is there to give this feedback?
  • Is there a way for a completed participant to review or amend any provided data?
  • Can a participant have their data switched from anonymous to confidential or vice versa?
  • Can a participant have their data deleted?
  • What happens to completed participants? Are they eligible to participate in future studies? If so, when? If not, why not?
  • Do you ask completed participants for referrals to 1-2 informative people in their personal network? (aka chain sampling)
  • Can completed participants be placed (with consent) into a managed research panel to take part in future research?