An A/B test is an experimental method for comparing the performance of two stimuli. Respondents are exposed to each stimulus individually, allowing for a focussed comparison between the two. These stimuli may be anything from claims, to packaging, to two different products. A/B testing can be used for:
- Determining which ad is more effective at increasing brand awareness
- Finding which packaging will increase purchase intent
- Deciding between two claims
Main outputs of A/B tests
Table of outputs for each question:
Summary metrics for each question included in the monadic block, shown for each stimulus for easy comparison.
At-a-glance summary allowing for quick comparison between stimuli with the option of drilling down into detailed metrics of any question for any stimulus.
Detailed question output for a particular stimulus:
Displays detailed outputs for an in-depth look into responses to a particular question.
With Conjointly, we provide detailed statistics for each question for each stimulus if you are interested in the nitty-gritty details, such as the distribution of responses, medians, and ranges.
Segmentation of the market
Find out how preferences differ between segments.
With Conjointly, you can split your reports into various segments using the information collected automatically by our system, respondents' answers to additional questions (for example, multiple choice), or GET variables. For each segment, we provide the same detailed analytics as described above.
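GET variables are simply parameters appended to the survey link you distribute, so each distribution channel can carry its own tag. The domain, path, and parameter name below are illustrative only, not Conjointly's actual link format:

```
https://survey.example.com/your-experiment?source=facebook
```

Respondents who arrive through this link would carry the value `source=facebook`, which could then be used to define a segment in your report.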
How to set up an A/B Test on Conjointly
Start by selecting the A/B Test experiment type from the list of experiment types. Then insert the name for the first item (not displayed to respondents) and what will be shown to respondents: an image, description, or short video.
Insert the name and respondent display for the second item.
Import questions from another experiment. If you have a previous Conjointly experiment, such as a Monadic Test or Concept Test, you can import questions from it to quickly set up an A/B Test. You can also skip this step, as you will be able to add new questions later on.
Review both stimuli, including adding and modifying views. Views allow you to insert different representations of the concept in the diagnostics questions. You can add images, long descriptions, use fancy formatting, and lots more. You can also add or replace a stimulus in this step.
While an A/B test is primarily designed to compare the performance of two concepts, including the top-selling products in your category allows you to benchmark your concepts against existing products.
Add or modify the questions that will be asked of respondents. You can adjust the questions found in the Monadic block section. To add questions to this block, simply drag and drop any of the additional question types into the Monadic block.
Customise the rest of the experiment. You are given complete control over the survey flow respondents will see.
Modify the following parameters to change the flow of the survey:
- Randomise questions allows the system to vary the order of the questions inside the monadic block in order to reduce respondent bias.
- Maximum number of questions to display per respondent lets you control the survey length by limiting the number of questions for each respondent.
- Maximum number of stimuli per respondent helps you get more detailed feedback for each concept without introducing survey fatigue.
- Sequence of questions lets you choose between sequential testing and random monadic.
Preview the survey as a participant to test your setup and prepare to launch.