First, Learn; Then, Apply
Whenever you create and deliver research reporting, you have two main goals. The first is to facilitate learning in a comprehensible, accurate, and vibrant way.
The second is to move your stakeholders (or the reporting audience) to action by applying and testing that learning. You can't call your stakeholders to action if you don't first make the report consumable and interesting. To accomplish both goals, you must first understand your research report's audience.
In this Handbook, your stakeholders are referred to as your audience, since they're the ones who'll consume and view your report. Your secondary audience is the people who'll consume the report but weren't directly involved in planning the study.
Audience & Impact Are Connected
Your audience affects every single aspect of your reporting. From what they expect, to what a report should contain, to how they share and discuss it with their peers, you have to understand your audience to report as effectively as possible.
A research report is for your stakeholders, not you. They're the ones who use your findings to make or avoid certain decisions. They're the ones who translate findings into code, pixels, or roadmaps. You must get to know them and customize your reporting format, structure, content, and style to fit how they learn best.
The last thing you want to do is create reports with a one-size-fits-all mentality. A report made for a big tech executive looking to optimize their advertising strategy shouldn't be or feel the same as a report for a new product owner who wants to understand how people sign up for a free trial.
Reporting is a great way to drive impact with your research. While reports themselves can’t succeed or fail, you can measure the impact of your reports in a few ways. The most obvious way is when your research-based recommendations (covered more at the end of this lesson) are considered and implemented.
In practice, however, things rarely go this smoothly. Politics, emotions, budgets, timelines, and even the industry can change, making it harder to implement your recommendations.
You also can't easily measure research impact by what gets released, because good research kills bad ideas before they're ever seriously considered. So, what can you do?
Instead, turn to your local sphere of influence: your stakeholders. You can help them reshape or refine how they think about the product and the experience. You can help them make smarter, faster decisions. You can help them avoid bad ideas and recognize the good ones.
In fact, the list of behavioral indicators below shows how your stakeholders are central to measuring your impact as a researcher.
General Impact Indicators After Reporting
- Consistent or increasing attendance when presenting findings
- Sharing findings with other teams
- Engaging in conversation (during a presentation or after) about the research
- Asking for additional or follow-up research
- Inviting you to present findings to another team
- Asking you to be present in strategic or planning meetings
- Asking for raw data to review and understand
- Using research in day-to-day meetings (referring to findings or themes, using language participants use, correcting incorrect assumptions, etc.)
- Using research to justify/support decisions or to avoid certain decisions
It's a general list because you'll have to figure out what impact looks like where you work. Perhaps "success" looks like securing funding to run another round of research. Or maybe you'll get an opportunity to be involved in the next sprint. Be on the lookout for the behaviors you see after sharing a report.
If you don't see much change quickly, that's okay. It's not a simple light switch that can be turned on or off, where your stakeholders automatically get energized and crave more research. It can take months – in some situations, years – for your research reports to be consumed, challenged, discussed, and used.
This is why getting your research-resistant stakeholders directly involved in the research process is an effective way to ensure that your report will be consumed and shared with other teams.
Let’s look at other ways your audience affects your report.
Credibility vs. Digestibility
Making a 30+ slide deck will take you significantly longer than shooting off an email with some well-written bulleted findings. But the longer you take to make and share a report, the slower the entire research process can feel.
There'll always be some discrete amount of time needed to go from data analysis to reportable findings. This is your report creation time, and it must always be budgeted for when planning your study timelines.
There's also a discrete amount of time it'll take for your report to be consumed and digested. The digestion time of your report is roughly how long it takes your stakeholders to read and make sense of your report.
If the report is too long, you'll wind up hearing the dreaded question: "I haven't read the report yet, but what's the key takeaway? What should we know? What should we do?"
So, does this mean that you should make the shortest report possible? Not exactly.
When your reports are too short, it's harder for your stakeholders to find the findings credible. For example, if you ran 15 interviews and sent your team an email report containing two bullet points, your stakeholders would struggle to accept your findings. But if you ran 15 interviews and made a mini-podcast with raw clips and your findings, stakeholders would likely find them more credible, even if the podcast took an extra week to create.
There's a secret relationship between the creation time, digestion time, and credibility of your report.
Longer reports take longer to create and longer to absorb, but they're seen as more credible. Conversely, shorter reports are seen as less credible, even though they can be created and absorbed quickly.
There's a sweet spot for every reporting format you use. Based on personal experience and interviews with researchers all over the world, you can use the table below as a rough benchmark to think about the ideal reporting length:
Optimal Lengths for Common Reporting Formats
- Email / messaging report: 5-7 bullet points
- Highlights document: 1-3 pages
- Full-length presentation: 20-30 pages/slides
- Audio report: 3-8 minutes
Pick a reporting format that's appropriate to the type of study you're running. Use more email, messaging, and one-pager reports for everyday tactical research. Enrich these shorter reports with quotes, specific case studies, and media (such as images or videos) to make them vibrant.
For strategic research, go with a longer full-length presentation built from a reporting template, making sure it's easy to skim, concise, and vibrant. If you have time, have a peer review your longer reports to find any confusing areas or to help make them more concise.
Remember this diagram from this Topic in Collection 1? Let's pretend that you did get your stakeholders directly involved in the research process, filling the blank space in the diagram below. In that case, your report can be shorter or less complex.
In these situations, your stakeholders likely don't need (or want) a big, flashy report because they were actively present throughout the entire research process. You should still create and store a research highlights document as a research artifact, but you can make it shorter.
You might have to take a starting guess about what format and length will work best, but here are some questions you can ask after you deliver or share a report:
Questions To Ask Your Audience to Improve Reporting
- What are your thoughts on this reporting style and format?
- How “short” or “long” did the report feel? What parts felt “long”? Why?
- What parts of the report did you find yourself skipping or speeding through?
- Do you have any suggestions on how to make future research reports easier for you to consume?
Your stakeholders might not automatically know what type of report works for them, or even how they read and learn best. It's an iterative process of trying to report vibrantly, seeing what resonates, and then updating your reporting approach accordingly.
While the length and format of your report matter, the content inside matters much more.
(Unmet) Expectations & Audience Emotions
Based on your study design, your audience might expect to see certain things in your report. For example, if you set out to understand a problem, they'll likely want to see what caused the problem and possible solutions. Or, if you ran an intensive journey mapping exercise, they'll likely expect a report that walks through each phase of the journey.
The worst situation is having your audience be excited to learn or understand something in your report only to find out it’s not in your report. You need to be confident that you know their expectations as early as possible (see guide #3: Using the 3-Part Research Plan for more).
If your stakeholders like seeing raw session data, include meaningful snippets in your report. If they want help choosing between option A and option B, recommend an option and bring evidence as to why.
Research reports can also hurt the feelings of some stakeholders or audiences because they're personally invested in the findings or the product. Seeing evidence that participants didn't care for or even understand a possible feature can be devastating to a product owner who suggested the feature after a "Eureka!" moment. This doesn't mean lying, hiding the truth, or conveniently forgetting to report something.
When you report, you must walk a fine line between being objective and being considerate of the real emotions of your stakeholders.
Build relationships with your stakeholders and maintain them over time whenever you can. This way, you can report the "good" findings alongside the improvement opportunities you've found. If you do this right, then you can still report "hurtful," "devastating," or "awful" findings and still have your stakeholders find value in your research.
One of the impact indicators to look for after reporting is when your audience shares and discusses the report with their peers. But if your research was complex or long, they might share the wrong message.
The Telephone Effect
If your research is impactful and valuable, there's a strong chance your audience will want to share these learnings with other relevant partners within the company. Stakeholders who actively share and discuss reports help make your research culture more robust.
You want to encourage sharing and discussing your reports to expand your impact. This allows more of your company to update their knowledge about the people they’re building for, without forcing you to directly interact with every possible audience. But too much sharing, especially with complex studies and reports, can be a bad thing.
The worst part is when your audience accidentally alters or distills the wrong message to share with their colleagues. This is similar to the schoolyard game of telephone.
In the game, students line up and the first student in line whispers a short sentence to the student in front of them. This next student whispers what they believe they’ve heard to the next student. This repeats until the very last student in line shouts out what they believe is the original message. If you were watching this game unfold, you’d find the original whispered message and the final shouted message are rarely the same.
While you and your stakeholders aren't playing games at work, the idea is similar. Your stakeholders will share what they believe is the correct message from your research with their partners. And as that continues, the message being shared about your research drifts further and further from what was reported. As the creator of the report, your job is to make sure it's as succinct and shareable as possible.
One great way of reducing the telephone effect is repetition: mention key findings a few times across a report. Human memory works better with repetition because there are more chances to encode and absorb information. If you pair repetition with other strategies (helpful diagrams, action language, rich media, recall questions, etc.), your report and its findings will stick in your audience's minds for longer.
You can also make your core message short, succinct, and catchy. Examples of succinct messages are listed below.
Examples of Short, Interesting, and Informative Core Messages
- “Simple ≠ Powerful” (when sharing findings about how a redesign made a powerful tool too “minimal”, “clean”, or “sparse”, making participants feel the tool is no longer effective for their needs)
- “Cut the Plastic” (when sharing findings about how participants felt the packaging for their online order was unnecessary and unsustainable)
- “Security is a Team Game” (when sharing findings about how corporate phishing or hacking attempts succeed because some coworkers don’t practice safe cybersecurity behaviors)
The simpler and more memorable your core message, the lower the chances of the telephone effect spreading erroneous conclusions. The last way to avoid this issue is making your reports shareable (along with your contact information) so that any new audience can get in touch with you.
Make your important or urgent findings short and memorable, consider your audience's emotions, guard against the telephone effect, and choose a reporting format that keeps your research credible and digestible. But how do you choose which report to use?