Coding open-ended responses
Extracting themes and insights from open-ended responses typically involves reading through responses, translating multilingual content, interpreting meanings, and categorising findings manually. With Deep probe you can transform your open-ended responses into structured data within minutes, saving you time and minimising human error.
The example below shows you how Deep probe’s LLM analysis converts open-ended feedback into clear categories.
Working example: Categorising market research trend insights
A retail company collected open-ended feedback from customers around the world about their shopping experience. The survey contained multiple open-ended questions, with responses in multiple languages. The researchers needed to quickly categorise these responses into themes for analysis.

To avoid manually translating and coding individual respondent answers, a Deep probe was added to identify and group responses by topic, regardless of language or response length.
Deep probe setup
- Navigate to the Deep probe tab in the experiment report and click .
- Select .
- Insert the analysis request. For example,
Code open ends into several topics
- The system suggests “categories” as the output type, with options such as “emerging technologies”, “AI applications”, “research methodologies”, “data analysis techniques”, “industry trends”, “Conjointly features”, and “challenges and concerns”.
- After reviewing the suggested categories, researchers can accept the recommendations or modify them based on their analysis objectives. In this example, the researcher accepted all suggested categories.
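Once the categories are accepted, each open-ended response is coded to a single category regardless of its language. As an illustrative sketch of what the resulting structured data can look like (the responses and category assignments below are invented for this illustration, not actual Deep probe output):

```python
# Hypothetical coded responses: each multilingual open end is mapped to
# one of the accepted categories. Invented data, for illustration only.
coded_responses = [
    {"respondent": 1, "response": "The AI chatbot answered my question instantly.",
     "category": "AI applications"},
    {"respondent": 2, "response": "El proceso de pago fue muy lento.",   # Spanish
     "category": "Challenges and concerns"},
    {"respondent": 3, "response": "Die Umfrage war gut strukturiert.",   # German
     "category": "Research methodologies"},
]

# Each row is now structured data, ready for counting and charting.
for row in coded_responses:
    print(f'Respondent {row["respondent"]}: {row["category"]}')
```

The point of coding is exactly this transformation: free-text answers in any language become one consistent categorical variable per respondent.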

Generated prompt for analysis
Based on the settings, the system generates an analysis prompt. Researchers can review and edit this prompt before processing begins.
The following shows an example prompt generated by the LLM for this request.
Request: Code open ends into several topics
Response:
Outputs overview
Deep probe generates a main topic category for each individual respondent. Like all AI tools, Deep probe may occasionally produce inaccurate results, so review outputs carefully.
You can edit values and output types of these individual results if needed.

Based on these individual results, the Summary for category outputs tab presents a distribution chart of the categories. Nearly two-thirds of respondents mentioned AI applications in their feedback. “Challenges and concerns” and “research methodologies” each featured in approximately one in nine responses (11.4%). The remaining categories appeared less frequently.
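The summary chart is a simple frequency count over the individual results. As a minimal sketch of that computation (the counts below are invented to roughly mirror the proportions described above; real frequencies come from your own coded responses):

```python
from collections import Counter

# Hypothetical per-respondent category assignments (invented counts);
# in practice these come from the Deep probe individual results.
assignments = (
    ["AI applications"] * 23
    + ["Challenges and concerns"] * 4
    + ["Research methodologies"] * 4
    + ["Industry trends"] * 2
    + ["Other"] * 2
)

# Tally categories and express each as a share of all respondents.
counts = Counter(assignments)
total = len(assignments)
for category, n in counts.most_common():
    print(f"{category}: {n / total:.1%}")
```

With these invented counts, "AI applications" comes out near two-thirds and the two mid-sized categories at 11.4% each, matching the shape of distribution described above.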

Best practices for open-end coding
- Create comprehensive categories that cover most expected responses while keeping an “Other” option for outliers.
- Use clear category definitions that distinguish between similar concepts to ensure accurate classification.
- Test iteratively by previewing results and refining categories based on sample response patterns.
- Maintain coding consistency by using specific criteria rather than subjective judgments in category descriptions.
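One way to keep category descriptions criteria-based rather than subjective is to phrase each category as explicit inclusion criteria and fall back to “Other” when none apply. A rough sketch of that idea (the keyword criteria below are invented for illustration; Deep probe's LLM-based classification is far more flexible than literal keyword matching):

```python
# Hypothetical inclusion criteria per category, invented for illustration.
CRITERIA = {
    "AI applications": ["ai", "chatbot", "machine learning"],
    "Challenges and concerns": ["slow", "problem", "difficult"],
}

def assign_category(response: str) -> str:
    """Return the first category whose criteria match; otherwise 'Other'."""
    text = response.lower()
    for category, keywords in CRITERIA.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "Other"

print(assign_category("The chatbot was helpful"))  # matches "AI applications"
print(assign_category("Delivery was slow"))        # matches "Challenges and concerns"
print(assign_category("Nice store layout"))        # no criteria match -> "Other"
```

Writing criteria this concretely, even informally in your category descriptions, is what keeps coding consistent across responses and across coders.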
Other Deep probe use cases
You can also read about: