
Ensuring Only the Highest Quality Voice Responses

How inVibe collects top-quality data for deeper insights

By Emily Fisher

Tue Jun 21 2022

If you’re familiar with market research, chances are you have a good idea of what an excellent response sounds like. The participant is emotionally engaged and clearly demonstrates their feelings through the speed, volume, and tone of their voice. They talk at length about relevant topics and supply information that directly serves your research goals, addressing every aspect of each question in depth. In short, not only do their answers dazzle—so too does their delivery.

Responses like these can easily stand alone as persuasive testimonials for a wide range of stakeholders. But at inVibe, we don’t just capture what people say—we also help clients strategically harness the power of their words. The better the data we get, the more actionable the insights we can produce. This is why we’ve established rigorous data quality control (QC) procedures to ensure that the responses we collect meet our high standards. Years of honing our process based on research and experience have enabled us to develop a robust, multi-tiered screening system that maximizes data quality for analysis.

Designing for Detail

Our data QC process begins long before we’ve even collected anything, during our study design phase.

Over the past decade, inVibe has presented participants with over 9,000 voice prompts, each of which has been engineered to elicit the richest possible responses. In addition to drawing on our linguistic and behavioral expertise to inform design, we also maintain a comprehensive prompt library where we catalogue and score each prompt based on relevant parameters (such as topic, research goal, and quality of responses elicited). This allows us to repurpose previously successful prompts from similar projects and customize them as needed when designing new studies.
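To give a sense of how such a catalogue can work in practice, here is a minimal sketch of a prompt-library entry and a lookup that surfaces previously successful prompts for reuse. The field names, scoring scale, and helper function are illustrative assumptions, not our actual schema:

```python
# Illustrative sketch of a prompt-library entry; the field names and
# scoring scale are assumptions for illustration, not inVibe's schema.
from dataclasses import dataclass

@dataclass
class PromptRecord:
    prompt_text: str          # the voice prompt played to participants
    topic: str                # e.g., "treatment decision-making"
    research_goal: str        # what the prompt is designed to uncover
    elicitation_score: float  # quality of the responses it has elicited

def best_prompts(library: list[PromptRecord], topic: str, k: int = 5) -> list[PromptRecord]:
    """Surface the top-scoring prompts on a topic for reuse in a new study."""
    matches = [p for p in library if p.topic == topic]
    return sorted(matches, key=lambda p: p.elicitation_score, reverse=True)[:k]
```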

Over-Recruiting for Flexibility

The next portion of our QC process takes place during data collection. We typically over-recruit participants by about 10% and only close the field after we’ve evaluated the quality of the responses we’ve received. This leaves us room to screen out responses that don’t satisfy our high standards while still comfortably meeting our quotas. What’s more, our automated technology allows us to collect additional responses quickly and efficiently, whereas other qualitative methods require a significant investment of time and resources for each interview.
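For a back-of-the-envelope feel for this fielding logic, consider the following sketch. The quota figure is hypothetical, and the helper functions are illustrative, not our production system:

```python
# Minimal sketch of the over-recruiting arithmetic described above; the
# quota and the function names are hypothetical, not from a real study.
def recruitment_target(quota: int, buffer: float = 0.10) -> int:
    """Over-recruit by ~10% so low-quality responses can be screened out."""
    return round(quota * (1 + buffer))

def can_close_field(validated_count: int, quota: int) -> bool:
    """Close the field only once enough responses have passed QC."""
    return validated_count >= quota

print(recruitment_target(50))   # a hypothetical 50-complete quota fields 55
print(can_close_field(48, 50))  # False -> keep collecting responses
```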

Listening for Quality

After we have sent out our study, our QC process culminates with our team of trained data validators. Before analysis begins, they listen to each and every recording to assess the quality of the answers and of the audio. Two or more validators may evaluate each response to ensure consistent data quality within and across projects. The end result is a dataset comprising only on-target, detail-rich responses from which our analysts can draw the most useful insights.

Answer Quality

Our validators begin by assessing answer quality. We score responses to each question on a scale from 1 to 5, with 5 being “excellent,” 3 “acceptable,” and 1 “poor.” These comprehensive scores take into account the following criteria:

1. COMPLETENESS: Does the participant answer the main question? Do they answer each of the follow-up questions?

2. RELEVANCE: Does the participant provide relevant information? Do they talk about what we want them to talk about?

3. RICHNESS: What level of detail does the participant delve into? How nuanced is their response?

4. EMOTIONALITY: To what extent does the participant demonstrate emotion? How personal is their response?

High-scoring responses demonstrate strong completeness, relevance, richness, and emotionality, while low-scoring responses fall short on one or more of these criteria. Rather than accepting low-scoring responses as-is, we send participants invitations to re-record, along with guidance on how to improve the quality of their answers (e.g., “Please provide more detail on the decision-making process you discussed in Question 2.”).

When all of a participant’s responses have been scored, the average score across responses becomes their “overall rating.” This rating serves as a benchmark for final screening decisions, when validators use it to sort participants into one of three buckets: “good,” “border,” and “poor.” Good responses enter the final data set, while poor responses are filtered out. Border responses, which meet our baseline of acceptability but do not exceed it, are filed away as a backup; our team includes them in the final data set only if, after every reasonable effort, we cannot fill our quota entirely with “goods.”
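To show how these ratings translate into screening decisions, here is a minimal sketch of the averaging and bucketing logic. The 1-to-5 scale and the three buckets come from the process described above; the exact cutoff values are illustrative assumptions:

```python
# Minimal sketch of the scoring and bucketing logic described above.
# The 1-5 scale and the good/border/poor buckets come from the text;
# the exact cutoffs below are illustrative assumptions.
from statistics import mean

def overall_rating(question_scores: list[int]) -> float:
    """Average the per-question scores (1-5) into an overall rating."""
    return mean(question_scores)

def bucket(rating: float, good_cutoff: float = 4.0, border_cutoff: float = 3.0) -> str:
    """Sort a participant into one of the three screening buckets."""
    if rating >= good_cutoff:
        return "good"    # enters the final data set
    if rating >= border_cutoff:
        return "border"  # filed away as a backup
    return "poor"        # filtered out; participant invited to re-record

scores = [5, 4, 4, 3]                  # hypothetical per-question scores
print(bucket(overall_rating(scores)))  # -> "good" (average 4.0)
```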

Audio Quality

We perform audio quality checks to make sure that our analysts can clearly hear all of the voice data. (After all, this is the cornerstone of what we do!) High audio fidelity also keeps our Speech Emotion Recognition software accurate, since too much background noise can throw off its careful acoustic measurements.

There are several criteria that each response must meet to pass audio screening:

  • Clearly articulated, discernible speech
  • No static
  • No background noise
  • No distortion or other audio interference

Responses that do not meet the above criteria are marked accordingly and withheld from our final data set. However, rather than throwing them away, we give participants the chance to re-record. If they choose to do so, their responses once again undergo the same QC procedures.
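Our validators screen audio by ear, but for readers curious what such checks involve, the sketch below approximates two of the criteria (distortion and background noise) programmatically. It assumes a 16-bit mono WAV recording, and the thresholds are illustrative assumptions, not settings from our pipeline:

```python
# Rough, illustrative audio pre-check; assumes a 16-bit mono WAV file.
# The thresholds below are illustrative assumptions, not inVibe settings.
import wave
import numpy as np

def audio_precheck(path: str, frame_ms: int = 50,
                   min_snr_db: float = 15.0, clip_level: float = 0.99) -> dict:
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        raw = wf.readframes(wf.getnframes())
    x = np.frombuffer(raw, dtype=np.int16).astype(np.float64) / 32768.0

    # Distortion check: flag recordings that slam against full scale.
    clipped = np.mean(np.abs(x) > clip_level) > 0.001

    # Slice the signal into short frames and measure per-frame loudness (RMS).
    frame = int(rate * frame_ms / 1000)
    n = len(x) // frame
    rms = np.sqrt(np.mean(x[: n * frame].reshape(n, frame) ** 2, axis=1) + 1e-12)

    # Rough noise estimate: the loudest frames approximate speech, the
    # quietest approximate the background; their ratio is an SNR proxy.
    snr_db = 20 * np.log10(np.percentile(rms, 95) / np.percentile(rms, 10))

    return {"clipping": bool(clipped),
            "snr_db": round(float(snr_db), 1),
            "pass": (not clipped) and snr_db >= min_snr_db}
```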

Having successfully passed through every check in our QC process, the final set of responses is published to our platform. Using our interactive project dashboards, clients and analysts can engage with the high-quality data that results from our rigorous validation process.

Innovating for Success

At inVibe, we always operate with our client’s end goal in mind. This means that we are not only a team of language experts, but also a team of innovators, modernizing traditional qualitative methodologies to maximize client research outcomes. Baking QC measures into every step of our research process is only one of the many ways that we do this. Innovations like these—client-centered and science-driven—are what enable us to remain at the forefront of research. Contact us today for a demo.

