Technical points on DCE with Conjoint.ly


This note is prepared for those familiar with the specifics of discrete choice experimentation to answer key questions in detail. Please contact the team if you have any further questions about the methodology.

DCE or conjoint?

Conjoint.ly uses discrete choice experimentation (DCE), which is sometimes referred to as choice-based conjoint. DCE is a more robust technique, consistent with random utility theory, and has been shown to simulate customers’ actual behaviour in the marketplace (Louviere, Flynn & Carson, 2010 cover this topic in detail). However, the outputs on relative importance of attributes and value by level are aligned with the outputs of traditional conjoint analysis (partworth analysis).

Experimental design

Conjoint.ly uses the attributes and levels you specify to create a (fractional factorial) choice design, optimising balance, overlap, and other characteristics. Our algorithm does not explicitly attempt to maximise D-efficiency, but it tends to produce D-efficient designs, typically of resolution IV or V (such designs support measurement of two-way interactions, even though interactions are not used in our modelling at this stage).

In most cases, the number of choice sets is too large for one respondent, so the experiment is split into multiple blocks (often between five and ten). We do not support individualised designs (i.e., where every respondent has their own block). Each choice set consists of several product concept alternatives and, by default, one “do not buy” alternative.
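
As a rough illustration of what a blocked choice design looks like, here is a naive random construction. This is not Conjoint.ly’s algorithm, and the attributes, levels, and counts are hypothetical:

```python
import itertools
import random

# Hypothetical attributes and levels (not from a real experiment)
attributes = {
    "Brand": ["Alpha", "Beta", "Gamma"],
    "Price": ["$5", "$7", "$9"],
    "Size": ["Small", "Large"],
}

# Full factorial: every possible combination of levels (3 x 3 x 2 = 18 profiles)
full_factorial = list(itertools.product(*attributes.values()))

random.seed(1)
n_blocks, sets_per_block, alts_per_set = 5, 8, 3

design = []
for block in range(1, n_blocks + 1):
    for choice_set in range(1, sets_per_block + 1):
        # Naively sample distinct profiles for each choice set; a real algorithm
        # would also optimise level balance, overlap and (approximate) D-efficiency.
        for profile in random.sample(full_factorial, alts_per_set):
            design.append((block, choice_set, *profile))

# Each row: block, choice set number, then one level per attribute.
# A "do not buy" alternative would be appended to each choice set at survey time.
print(design[:3])
```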

To review the experimental design for your experiment:

  • Open the “Advanced options” tab in your experiment set-up page.
  • Click “Export experimental design”.

Are survey respondents required to complete all available blocks?

No. Each survey respondent only needs to complete one block, which is randomly assigned to them.

Minimum sample size

Conjoint.ly automatically recommends a minimum sample size. In most cases, it is between 50 and 300 responses. In our calculations, we use a proprietary formula that takes into account the number of attributes, levels, and other experimental settings.
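
The exact formula is proprietary. As a rough external sanity check only, a widely cited rule of thumb for choice experiments (popularised by Orme) suggests n ≥ 500c / (t × a), where c is the largest number of levels in any single attribute, t is the number of choice tasks per respondent, and a is the number of alternatives per task:

```python
import math

def rule_of_thumb_sample_size(max_levels: int, tasks_per_respondent: int,
                              alternatives_per_task: int) -> int:
    """Generic rule of thumb (n >= 500 * c / (t * a)), not Conjoint.ly's formula."""
    return math.ceil(500 * max_levels
                     / (tasks_per_respondent * alternatives_per_task))

# Example: largest attribute has 4 levels, 8 tasks per respondent, 3 alternatives per task
print(rule_of_thumb_sample_size(4, 8, 3))  # 84
```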

What is Hierarchical Bayes?

Conjoint.ly uses Markov Chain Monte Carlo Hierarchical Bayes (MCMC HB) estimation to calculate individual-level preference coefficients. Hierarchical Bayesian modelling is a statistical approach that, in conjoint analysis, estimates parameters (partworth utilities) not for the market as a whole, but for individual respondents.

The word “hierarchical” refers to the nested structure (individuals are nested within the market). “Bayesian” refers to the Bayesian statistical paradigm, which is based on Bayes’ theorem.
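
As a generic sketch of this structure (a standard HB multinomial logit formulation, not necessarily Conjoint.ly’s exact specification): each respondent’s choices follow a multinomial logit model with their own coefficient vector, and those coefficient vectors are drawn from a population-level distribution.

```latex
% Lower level: probability that respondent i chooses alternative j in task t
P(y_{it} = j \mid \beta_i) =
  \frac{\exp\!\left(x_{ijt}^{\top} \beta_i\right)}
       {\sum_{k} \exp\!\left(x_{ikt}^{\top} \beta_i\right)}

% Upper level: individual partworths come from a population distribution
\beta_i \sim \mathcal{N}(\mu, \Sigma)

% Priors on \mu and \Sigma complete the specification; MCMC draws samples
% from the joint posterior of all \beta_i, \mu and \Sigma.
```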

This approach has three benefits over “traditional” (market-level) models, where preferences are assumed to be the same across all respondents:

  • Individual-level coefficients help market share estimation account for heterogeneity of preferences in the market.
  • They allow for segmentation of respondents by their preferences.
  • This approach allows more parameters (attributes and levels) to be estimated with smaller amounts of data collected from each respondent.

Simpler regressions are often not suitable because of heterogeneity of preferences. For example, imagine 50% of people like feature A and the other 50% like feature B. If we use an aggregate-level model, we will end up averaging across these two groups (the average preference for each feature will be close to zero) and thus will fail to capture how important these features are for decision-making.
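
A small simulation makes this concrete (entirely hypothetical numbers, for intuition only): half of the simulated respondents value feature A and the other half value feature B, so a naive market-level average suggests neither feature matters, while the individual-level view recovers the two segments.

```python
import numpy as np

n_respondents = 1000

# Segment 1 (first half) values feature A and dislikes B; segment 2 is the reverse.
# These are hypothetical individual-level partworths, not estimated values.
beta_A = np.where(np.arange(n_respondents) < n_respondents // 2, 1.0, -1.0)
beta_B = -beta_A

# A market-level view averages across respondents: both features look unimportant
print(beta_A.mean(), beta_B.mean())   # ~0.0 and ~0.0

# The individual-level view reveals two strongly opposed segments
print(np.unique(beta_A))              # [-1.  1.]
```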

Marginal willingness to pay (MWTP)

For experiments where one of the attributes is price, Conjoint.ly estimates a separate model with price as a numerical variable. We also check whether calculating the measure is appropriate, taking into account both the experimental set-up and the received responses (for example, limiting MWTP calculation where the response to price is non-linear). Marginal willingness to pay is only an indicative number.
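
For orientation only, the usual generic calculation (assuming a linear, negative price coefficient; this is not a description of Conjoint.ly’s internal checks) divides the utility difference between two levels by the absolute value of the price coefficient:

```python
def marginal_wtp(utility_new_level: float, utility_old_level: float,
                 price_coefficient: float) -> float:
    """Generic MWTP calculation assuming a linear (negative) price coefficient.

    Example: if upgrading a feature adds 0.6 utility and each extra currency unit
    of price costs 0.3 utility, respondents would pay roughly 2 units for the upgrade.
    """
    return (utility_new_level - utility_old_level) / abs(price_coefficient)

print(marginal_wtp(0.6, 0.0, -0.3))  # -> 2.0 (currency units)
```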

Share of preference simulation

Share of preference simulation is performed using individual coefficients from the estimated HB multinomial logit model. Two models for calculating market shares are available; a brief sketch of both calculations follows the list:

  • “Share of preference” model, which is appropriate for low-risk or frequently purchased products (FMCG, software, etc.). This model is suitable for the vast majority of use cases.
  • “First choice” model, which is suitable for high-risk or seldom purchased products: education, life insurance, pension plans, etc.
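
A minimal sketch of both rules, using hypothetical individual-level total utilities (one row per respondent, one column per simulated product concept):

```python
import numpy as np

# Hypothetical total utilities of three product concepts for four respondents
utilities = np.array([
    [1.2, 0.4, -0.3],
    [0.1, 0.9,  0.2],
    [0.5, 0.6,  0.4],
    [-0.2, 1.1, 0.8],
])

# "Share of preference": logit (softmax) shares per respondent, then averaged
exp_u = np.exp(utilities)
logit_shares = exp_u / exp_u.sum(axis=1, keepdims=True)
print(logit_shares.mean(axis=0))

# "First choice": each respondent is assigned entirely to their best concept
first_choice = np.zeros_like(utilities)
first_choice[np.arange(len(utilities)), utilities.argmax(axis=1)] = 1
print(first_choice.mean(axis=0))
```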

Partworth utilities (level scores)

Conjoint.ly estimates a hierarchical Bayesian (HB) multinomial logit model of choice using qualified conjoint responses. The final coefficients in this model are the partworth utilities of each level; these values reflect how strongly each level sways the decision to choose one alternative over another. When running preference share simulations, the partworth utilities are used to compute relative preference for the examined alternatives.

Note on how we display level scores in the report: the raw conjoint utility scores (which appear in the “Individual preferences” tab of the Excel export) are post-processed before being displayed in the online report.

Transforming conjoint raw utility scores into displayed level scores

The following steps apply to Generic Conjoint, Brand-Specific Conjoint, Claims Test, Product Variant Selector, Brand-Price Trade-Off, and MaxDiff; a short code sketch of the calculation follows the list below.

By respondent:

  1. Within each attribute, zero-centre the utilities. This is done by subtracting the mean utility for the attribute from each utility value. Because of this step, a resulting score of 0 is, by definition, the average level score.
  2. Multiply by 10. This is done simply because it is easier to talk about whole numbers than decimals.
  3. Average across respondents.
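
A sketch of these three steps for a single attribute, using hypothetical raw utilities (in the real calculation, the first two steps run within every attribute for every respondent before averaging):

```python
import numpy as np

# Hypothetical raw utilities for one attribute (rows: respondents, columns: levels)
raw = np.array([
    [0.8, 0.2, -0.1],
    [0.5, 0.4,  0.3],
])

# Step 1: zero-centre within the attribute, per respondent
centred = raw - raw.mean(axis=1, keepdims=True)

# Step 2: multiply by 10 for readability
scaled = centred * 10

# Step 3: average across respondents to get the displayed level scores
print(scaled.mean(axis=0))
```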

Relative attribute importances

Using the partworth utility scores derived for each level, we can estimate a rough measure of the relative importance of each attribute in the choice decision. Specifically, we calculate attribute importance as the spread (maximum minus minimum) of partworth values within an attribute, normalised across attributes; a sketch of this calculation appears below.

By respondent:

  1. For each attribute, compute the maximum utility value minus the minimum. This can be thought of as the raw importance score.
  2. Divide each raw importance score by the sum of the importance scores for all attributes.
  3. (Brand-Specific Conjoint only): Multiply by a scalar value (the exact value will depend on the specific experiment — specifics of this calculation are proprietary).
  4. Average across respondents.

The resulting scores are the relative attribute importances. Attributes whose levels differ more widely in partworth utility are deemed more important.
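
A sketch of steps 1, 2, and 4 (the proprietary Brand-Specific Conjoint adjustment in step 3 is omitted), using hypothetical utilities for two attributes:

```python
import numpy as np

# Hypothetical raw utilities per respondent, grouped by attribute
# (two respondents; "Price" has 3 levels, "Size" has 2)
utilities = {
    "Price": np.array([[0.9, 0.1, -1.0],
                       [0.6, 0.0, -0.6]]),
    "Size":  np.array([[0.2, -0.2],
                       [0.5, -0.5]]),
}

# Step 1: raw importance per respondent = max - min within each attribute
raw_importance = np.column_stack(
    [u.max(axis=1) - u.min(axis=1) for u in utilities.values()]
)

# Step 2: normalise so each respondent's importances sum to 1
normalised = raw_importance / raw_importance.sum(axis=1, keepdims=True)

# Step 4: average across respondents (step 3, the Brand-Specific scalar, is skipped)
for name, value in zip(utilities, normalised.mean(axis=0)):
    print(name, round(value, 3))
```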

Note on Brand-Specific Conjoint experiments: be careful when looking at raw utility values in the “Individual preferences” tab of the Excel export. Attributes are repeated for each ‘brand’, so there may be more attribute entries than you defined in your experiment.

Relative performance of brands (Brand-Specific Conjoint only)

In Brand-Specific Conjoint experiments, the partworth utility calculation is the same; however, the reported relative performance of brands is calculated slightly differently. These values are based on the best-performing combination of levels within each ‘brand’. Specifically, each value is the sum of the partworth utilities of the levels that make up that brand’s best offering.
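
A sketch of this calculation for a single respondent, using hypothetical brand-specific partworths (the best offering for each brand takes the highest-utility level of each of that brand’s attributes):

```python
# Hypothetical partworths for one respondent, keyed by brand and attribute
partworths = {
    "Brand X": {"Price": [0.4, 0.0, -0.5], "Pack size": [0.3, -0.3]},
    "Brand Y": {"Price": [0.2, 0.1, -0.4], "Pack size": [0.6, -0.6]},
}

# Relative performance: sum of the best (maximum) partworth of each attribute
for brand, attrs in partworths.items():
    best_offering_utility = sum(max(levels) for levels in attrs.values())
    print(brand, round(best_offering_utility, 2))
# Brand X: 0.4 + 0.3 = 0.7; Brand Y: 0.2 + 0.6 = 0.8
```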

Importantly, the relative performance of a brand will be affected by the features shown for that brand, especially if one of the brands was shown with unusual or unrealistic features or price levels.

The scale of these values is arbitrary, but it is consistent with the partworth utilities as well as the “ranked list of product concepts”.