Coding open-ended responses


Deep probe is currently an alpha feature. Have you got any suggestions for how we can improve the functionality? We are always looking for feedback, so feel free to get in touch!

Extracting themes and insights from open-ended responses typically involves reading through responses, translating multilingual content, interpreting meanings, and categorising findings manually. With Deep probe you can transform your open-ended responses into structured data within minutes, saving you time and minimising human error.

The example below shows you how Deep probe’s LLM analysis converts open-ended feedback into clear categories.

Working example: Categorising market research trend insights

A retail company collected open-ended feedback from customers around the world about their shopping experience. The survey contained multiple open-ended questions, with responses in multiple languages. The researchers needed to quickly categorise these responses into themes for analysis.

Multilingual responses on market research trends

To save time translating and coding individual respondents’ answers, a deep probe was added to identify and group responses by topic, regardless of language or response length.
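As a rough illustration of the goal (the answers below are invented, not taken from the study), the transformation looks like this: raw, mixed-language answers in, one structured topic label per respondent out.

```python
# Hypothetical before/after view of the coding task. The raw answers are
# invented examples in different languages; Deep probe assigns each respondent
# a single topic label regardless of the language or length of their answer.
raw_answers = {
    "R001": "AI-assisted text analysis is transforming how we run studies.",      # English
    "R002": "La personnalisation basée sur l'IA est la tendance la plus forte.",  # French: "AI-based personalisation is the strongest trend."
    "R003": "調査の自動化とAIによる分析が最も重要だと思います。",                  # Japanese: "Automation and AI-driven analysis matter most."
}

coded_answers = {respondent: "AI applications" for respondent in raw_answers}
```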

Deep probe setup

  1. Navigate to the Deep probe tab in the experiment report and click Add a deep probe.
  2. Select Add LLM analysis.
  3. Insert the analysis request. For example, Code open ends into several topics.
  4. The system suggests categories as the output type, with options such as “emerging technologies”, “AI applications”, “research methodologies”, “data analysis techniques”, “industry trends”, “Conjointly features”, “challenges and concerns”, and others.
  5. After reviewing the suggested categories, researchers can accept the recommendations or modify them based on their analysis objectives. In this example, the researcher accepted all suggested categories.
Step-by-step setup
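Conceptually, the setup above amounts to an analysis request, the open-ended questions to analyse, and a list of category labels used as the output type. The sketch below simply writes that configuration out as plain data; the variable names are illustrative and are not part of Conjointly’s interface or API.

```python
# The Deep probe configuration from the steps above, written out as plain data.
# Purely illustrative; none of these names come from Conjointly's API.
analysis_request = "Code open ends into several topics"

open_ended_questions = [
    "Q25: Market research trends",
    "Q26: Use of Conjointly",
    "Q27: New feature suggestions",
    "Q28: AI in market research: Trends, benefits, challenges",
]

suggested_categories = [
    "Emerging technologies",
    "AI applications",
    "Research methodologies",
    "Data analysis techniques",
    "Industry trends",
    "Conjointly features",
    "Platform suggestions",
    "Challenges and concerns",
    "Future expectations",
    "Other",
]
```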

Generated prompt for analysis

Based on the settings, the system generates an analysis prompt. Researchers can review and edit this prompt before processing begins.

The following is an example prompt generated by the LLM based on the request above.

Request: Code open ends into several topics

Response:

You are analyzing an individual survey respondent for a deep probe analysis. Your objective is to code open-ended responses into several topics based on the participant's answers to specific questions:

Analyze the following responses from a single survey participant:

  1. Market research trends : answer(Q25: Market research trends)
  2. Use of Conjointly : answer(Q26: Use of Conjointly)
  3. New feature suggestions : answer(Q27: New feature suggestions)
  4. AI in market research: Trends, benefits, challenges : answer(Q28: AI in market research: Trends, benefits, challenges)

Your task is to carefully examine this individual's responses to these questions, identifying patterns, themes, and key points in their answers. Based on your analysis, you will assign this participant to one of the following predefined categories:

  • Emerging technologies
  • AI applications
  • Research methodologies
  • Data analysis techniques
  • Industry trends
  • Conjointly features
  • Platform suggestions
  • Challenges and concerns
  • Future expectations
  • Other

Consider the following guidelines when making your category assignment:

  1. Look for recurring themes or topics across the participant's responses.
  2. Identify the primary focus or area of interest expressed by the participant.
  3. Consider the context and depth of knowledge demonstrated in their answers.
  4. Evaluate any specific concerns, suggestions, or expectations mentioned.
  5. If the responses don't clearly fit into one of the specific categories, use the "Other" category.

Based on your analysis of this participant’s responses, assign them to the most appropriate category. Your output should be in the following format:

[category]

Ensure that you choose only one category that best represents the overall theme of this participant’s responses. The category must be exactly as written in the provided list.
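In code terms, applying this prompt to each participant amounts to filling the answer(Q…) placeholders with that person’s text, sending the result to an LLM, and keeping the single category it returns. The sketch below is a generic, hypothetical illustration of that loop using the openai Python package and an assumed model name; it is not Conjointly’s implementation.

```python
# Generic sketch of applying the generated prompt to one respondent.
# The openai client and model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()

GENERATED_PROMPT = "..."  # the full prompt text shown above would go here

def code_respondent(prompt_template: str, answers: dict[str, str]) -> str:
    """Fill the answer(Q...) placeholders for one participant and return the assigned category."""
    prompt = prompt_template
    for question, answer in answers.items():
        prompt = prompt.replace(f"answer({question})", answer)

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, purely illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    # The prompt asks for exactly one category, e.g. "[AI applications]".
    return response.choices[0].message.content.strip().strip("[]")

# Example call with a hypothetical respondent's answers:
category = code_respondent(GENERATED_PROMPT, {
    "Q25: Market research trends": "AI-assisted analysis is the biggest shift I see.",
    "Q26: Use of Conjointly": "Mostly conjoint studies for pricing.",
    "Q27: New feature suggestions": "Better multilingual reporting.",
    "Q28: AI in market research: Trends, benefits, challenges": "Faster coding, but outputs need review.",
})
```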


Outputs overview

Deep probe generates a main topic category for each individual respondent. Like all AI tools, Deep probe may occasionally produce inaccurate results. Please review outputs carefully.

You can edit the values and output types of these individual results if needed.
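If you export the individual results for review, one quick sanity check before any manual editing is to confirm that each assigned value is exactly one of the predefined categories, since the prompt requires an exact match. A minimal sketch, assuming the results are available as a respondent-to-category mapping (the data below is hypothetical):

```python
# Flag any result whose category is not exactly one of the predefined labels,
# so it can be reviewed and edited by hand. The results dict is hypothetical.
allowed_categories = {
    "Emerging technologies", "AI applications", "Research methodologies",
    "Data analysis techniques", "Industry trends", "Conjointly features",
    "Platform suggestions", "Challenges and concerns", "Future expectations",
    "Other",
}

individual_results = {
    "R001": "AI applications",
    "R002": "Challenges and concerns",
    "R003": "ai applications",  # capitalisation mismatch: worth a manual edit
}

needs_review = {rid: cat for rid, cat in individual_results.items()
                if cat not in allowed_categories}
print(needs_review)  # {'R003': 'ai applications'}
```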

Interpreting the results

Based on these individual results, the Summary for category outputs tab presents a distribution chart of the categories. Nearly two-thirds of respondents mentioned AI applications in their feedback. “Challenges and concerns” and “research methodologies” each featured in approximately one in nine responses (11.4%). The remaining categories appeared less frequently.
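The percentages in the chart are simply the share of respondents assigned to each category. Below is a small sketch of the same calculation, using invented counts chosen to roughly match the figures quoted above rather than the actual study data.

```python
from collections import Counter

# Invented per-respondent categories (35 respondents) that roughly reproduce
# the shares quoted above; real values come from the individual Deep probe outputs.
individual_categories = (
    ["AI applications"] * 23
    + ["Challenges and concerns"] * 4
    + ["Research methodologies"] * 4
    + ["Industry trends"] * 2
    + ["Emerging technologies"] * 2
)

counts = Counter(individual_categories)
total = len(individual_categories)
for category, n in counts.most_common():
    print(f"{category}: {n}/{total} = {n / total:.1%}")
# AI applications: 23/35 = 65.7%
# Challenges and concerns: 4/35 = 11.4%
# ...
```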


Best practices for open-end coding

  • Create comprehensive categories that cover most expected responses while keeping an “Other” option for outliers.
  • Use clear category definitions that distinguish between similar concepts to ensure accurate classification.
  • Test iteratively by previewing results and refining categories based on sample response patterns.
  • Maintain coding consistency by using specific criteria rather than subjective judgments in category descriptions.

Other Deep probe use cases

You can also read about: