
A tech-enhanced, human-led analytical approach

How the inVibe Listening Platform makes data collection more efficient, analysis more effective, and insights more actionable

By Janine Karo

Tue Jul 05 2022

“Making data collection more efficient, analysis more effective, and insights more actionable.” These core tenets of inVibe sound pithy and exciting, but perhaps also opaque. How does inVibe actually deliver on these aspirations?

The short answer is: We rely on a harmonious blend of skilled humans and cutting-edge technology throughout the project lifecycle, but especially in the analysis phase. On the human side, we have our expert linguists critically investigating the data, forming initial hypotheses, and gleaning deep nuances across responses. Technologically speaking, our bespoke software solutions facilitate and augment our linguists’ discoveries. Together, we’re able to dig deeper into three main areas: what is said, how it’s said, and how it sounds.

What is said

Understanding, at the most basic level, what information our participants have conveyed is the first step in the analysis journey. After our fielding closes and all responses are in, our linguists set aside dedicated time to simply listen to the data, discover trends, and note anything particularly intriguing or surprising. We also jot down any budding hypotheses that we’re interested in exploring further. This initial pass through the data, with concurrent notetaking, ensures we have a sense of each question’s big picture and a rough idea of which key aspects we may want to investigate in subsequent listens.

With this rough roadmap in hand, we begin to investigate the responses further, breaking them up into meaningful categories in a process known as “tagging.” While we could categorize the data with existing tools like Microsoft Word and Excel, those can be inefficient and inflexible. Luckily for us, the inVibe product team has built a custom online dashboard that lets us tag responses in a clear, quick, and user-friendly way. Once we tag all the data, we can easily sort through the tags to see which themes are most prevalent across participants, allowing us to focus our analysis and reporting accordingly.
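To give a rough flavor of what tag-based sorting buys us, here is a minimal Python sketch. The participants, tags, and counting logic are invented for illustration; this is not inVibe’s actual dashboard code:

```python
from collections import Counter

# Hypothetical tagged responses: each participant's answer to one prompt,
# labeled with the themes a linguist assigned during tagging.
tagged_responses = [
    {"participant": "HCP-01", "tags": ["efficacy", "dosing concerns"]},
    {"participant": "HCP-02", "tags": ["efficacy", "cost"]},
    {"participant": "HCP-03", "tags": ["cost", "dosing concerns"]},
    {"participant": "HCP-04", "tags": ["efficacy"]},
]

# Count how often each theme appears across participants...
theme_counts = Counter(tag for r in tagged_responses for tag in r["tags"])

# ...and list the most prevalent themes first, to focus analysis and reporting.
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} of {len(tagged_responses)} participants")
```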

How it’s said

After listening to and tagging all responses for content, we want to dive into the nuances of the language participants use in their answers. As linguists, we know that so much of what we mean is implicit, existing beneath the surface of the literal message. To understand more deeply how we should interpret responses, we “read between the lines.” On a broad level, our linguists analyze how participants structure their responses. What information do they bring up first? What ideas do they repeat? What are the “noisy silences,” or things left unsaid? Narrowing in, we listen for the kind of language participants employ to express their thoughts. Do they vocalize doubt through hedging – words like “guess” or “unsure”? Are they enthusiastic in their responses, indicating positivity through words like “excited” or “great”? Do they use intensifiers (e.g., “so,” “very,” “extremely”) to emphasize certain points? Taken together, participants’ answer structures and specific language choices add color and validity to what is said, illuminating what is top of mind for participants and how they actually feel.
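As a simplified illustration of this kind of cue spotting, here is a minimal lexicon-based sketch. The word lists and transcript are invented stand-ins; our linguists’ actual analysis is far more context-sensitive than matching words against lists:

```python
# Minimal lexicon-based cue spotting on a transcript. The word lists below
# are illustrative, not exhaustive, and real analysis weighs context heavily.
HEDGES = {"guess", "unsure", "maybe", "probably", "might"}
ENTHUSIASM = {"excited", "great", "love", "fantastic"}
INTENSIFIERS = {"so", "very", "extremely", "really"}

def spot_cues(transcript: str) -> dict:
    """Return counts of hedging, enthusiasm, and intensifier cues."""
    words = [w.strip(".,!?\"'").lower() for w in transcript.split()]
    return {
        "hedges": sum(w in HEDGES for w in words),
        "enthusiasm": sum(w in ENTHUSIASM for w in words),
        "intensifiers": sum(w in INTENSIFIERS for w in words),
    }

print(spot_cues("I guess it's probably fine, but I'm unsure about the dosing."))
# {'hedges': 3, 'enthusiasm': 0, 'intensifiers': 0}
```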

Alongside our linguists’ expertise in the realm of “how it’s said” comes our AI-powered sentiment analysis algorithm, which our product team built in partnership with Edge Analytics. Sentiment analysis is a natural language processing (NLP) technique that determines whether a piece of text is positive, negative, or neutral. Our algorithm is particularly adept at “subject-aware” sentiment analysis, meaning it considers whether the language used in relation to the subject of a phrase is positive or negative, rather than scoring the response as a whole. Such a tool reinforces and supplements our team’s human-powered analysis by scrutinizing language sentiment in context, contributing to a more accurate interpretation of responses.
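Our algorithm itself is proprietary, but the intuition behind subject-aware scoring can be sketched with an off-the-shelf scorer such as NLTK’s VADER: score only the sentences that mention the subject, rather than the response as a whole. The response text and subject keyword below are invented, and VADER is just a stand-in for illustration:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

response = (
    "The onboarding was a nightmare and support never called back. "
    "That said, the drug itself worked wonderfully for my patients."
)

# Whole-response sentiment blends the complaint with the praise...
print("overall:", sia.polarity_scores(response)["compound"])

# ...while a subject-aware pass scores only the sentences about the subject.
subject = "drug"
sentences = [s for s in response.split(". ") if s]  # naive split, for illustration
about_subject = " ".join(s for s in sentences if subject in s.lower())
print("subject:", sia.polarity_scores(about_subject)["compound"])
```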

How it sounds

The final piece of the analysis puzzle is to zoom out beyond what content is conveyed and what words and structures convey it, and to understand how it is conveyed acoustically. In other words, how do “contextualization cues” (features like pitch, tone, and pauses) clue us in to how we should perceive these responses? Our blog post on acoustics covers this topic in much more depth, but simply put, this is once again a partnership between humans and tech. First, our sociolinguists are specially trained to listen deeply for these nuances and note them, which validates and enhances our existing analysis. Our intuitions are then expanded upon by our established, rigorously tested speech emotion recognition (SER) algorithm, which tracks the contextualization cues present in each response and “translates” them into approximate emotions by scoring valence (positivity), activation (interest), and dominance (confidence). Our tools let us plot these scores on a graph for each prompt, with one dot corresponding to one respondent. Such technology allows us to “see what we hear,” painting an even clearer, more definitive picture of language data for our clients.

This graph plots multiple HCP respondents for one prompt in terms of their dominance (“confidence,” x-axis) and activation (“interest,” y-axis). Most responses fall to the right side of the x-axis, indicating respondents are more confident and assured, as opposed to wavering and unsure.
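The scores below are invented for illustration, but a short sketch like this shows how per-respondent SER output can be rendered as one dot per respondent, with dominance on the x-axis and activation on the y-axis:

```python
import matplotlib.pyplot as plt

# Hypothetical SER scores for one prompt: (dominance, activation) per
# respondent, each normalized to a 0-1 range. One dot = one respondent.
scores = {
    "HCP-01": (0.72, 0.61),
    "HCP-02": (0.65, 0.44),
    "HCP-03": (0.81, 0.70),
    "HCP-04": (0.38, 0.52),
}

xs, ys = zip(*scores.values())
plt.scatter(xs, ys)
for name, (x, y) in scores.items():
    plt.annotate(name, (x, y), textcoords="offset points", xytext=(4, 4))

plt.xlabel('Dominance ("confidence")')
plt.ylabel('Activation ("interest")')
plt.title("SER scores for one prompt (hypothetical data)")
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.show()
```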

Why it matters

At inVibe, we embrace a partnership between human and technological expertise to simultaneously simplify and strengthen the analysis process. We see the value in building unique analysis tools for our researchers because that enables us to deliver more pragmatic insights for our life science clients. If you’re interested in learning more about the inVibe approach, reach out to schedule a demo. We’re ready to lend an ear!

Thanks for reading!

Be sure to subscribe to stay up to date on the latest news & research coming from the experts at inVibe Labs.
