Research without quality checks? More like driving in the rain without windscreen wipers
Published on
30 March 2026
Nik Samoylov
Director


Online consumer panels offer access to a vast and diverse pool of respondents, making them an invaluable resource for market research. However, the quality of the data collected can be compromised by fraud and lack of engagement from respondents. Compromised data means compromised findings, leading to compromised business decisions.

Gathering data from online research panels without Conjointly’s state-of-the-art quality checks is like driving in the rain without windscreen wipers.

Don’t trust, and verify

There is a misalignment of incentives in the panel business: suppliers are paid for sample volume, while buyers pay for outcomes. This means that suppliers are incentivised to provide as much sample as possible, even if it is of low quality.

Even when your panel provider advertises quality controls and strong verification, our experience as sample buyers suggests that they are insufficient.

Fake data

In a typical week, 16% of all traffic coming into the Conjointly platform comes from definite bots, which are blocked outright.

Within traffic from consumer panels:

  • 26% from residential proxies: This trick is typically used by individual fraudsters in locations like Nigeria and Bangladesh to masquerade as local respondents in target countries.
  • 10% from VPNs: Another trick used for the same purpose.
  • 2% from virtual machines: This is a common tactic used by highly automated fraudsters who imitate real respondents by running multiple instances of browsers on virtual machines.
  • 5% from respondents using various automation tools to complete surveys, such as AI agents.
  • 2% from countries that do not match the target country of the survey, which is a clear indication of fraudulent activity or sampling errors from panel suppliers.
  • 2% comes from repeat offenders.

Around 2% of the sample is then flagged as potentially fraudulent during manual reviews.

There is overlap between these categories, but in total, ~40% of the traffic from consumer panels is fake. This is consistent with the findings of other researchers (e.g. Willard-Grace et al. 2026 or Walker et al. 2024).
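As a rough illustration of why these categories do not simply add up (the entry IDs and flags below are invented for the example), the total share of fake traffic is the size of the union of all flagged entries, so each entry is counted once even if it trips several checks:

```python
# Hypothetical example: each check flags a set of entry IDs.
# An entry caught by several checks (e.g. a proxy AND an automation
# tool) should only count once towards the total fake-traffic share.
flags = {
    "residential_proxy": {1, 2, 3, 4, 5},
    "vpn": {4, 5, 6},
    "virtual_machine": {6, 7},
    "automation_tool": {7, 8},
}

total_entries = 20
fake = set().union(*flags.values())   # union counts overlaps once
share = len(fake) / total_entries
print(f"{share:.0%} of entries flagged as fake")  # → 40% of entries flagged as fake
```

Summing the four category shares here would give 60%, but the union is only 40% of entries, mirroring how the per-category percentages above exceed the ~40% combined figure.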

Inattentive responses

Depending on the survey, the share of inattentive respondents ranges from 0% to, in many cases, as high as 30%. These could be the right people, but they are not providing quality data.

This is especially important for conjoint analysis simulations, where respondents are required to make trade-offs between different product profiles. If respondents are not paying attention, their choices will be random.

Around 3% of the sample is then flagged as inattentive during manual reviews.
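Two of the simplest heuristics for spotting inattentive respondents are straight-lining (giving the same answer to every item in a grid) and speeding (finishing far faster than the typical respondent). Here is a minimal sketch of both, with made-up respondent data and an assumed speed threshold of 40% of the median completion time:

```python
# Hypothetical inattention screen: flag straight-liners and speeders.
from statistics import median

responses = {
    # respondent_id: (grid answers, completion time in seconds)
    "r1": ([4, 2, 5, 3, 4], 410),
    "r2": ([3, 3, 3, 3, 3], 95),   # identical answers, very fast
    "r3": ([5, 4, 4, 2, 1], 380),
}

med_time = median(t for _, t in responses.values())

def is_inattentive(answers, seconds, speed_ratio=0.4):
    straight_lined = len(set(answers)) == 1          # all answers identical
    too_fast = seconds < speed_ratio * med_time      # well under median time
    return straight_lined or too_fast

flagged = [rid for rid, (a, t) in responses.items() if is_inattentive(a, t)]
print(flagged)  # → ['r2']
```

Real platforms combine many more signals (trap questions, response consistency across repeated choice tasks, and so on), but even these two catch a meaningful share of low-effort completes.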

Duplicate entries

Approximately 1% of entries from online consumer panels are duplicates, meaning the respondents have already completed the same survey. This can happen for various reasons, such as:

  • technical glitches,
  • same people using different panel apps,
  • intentional fraud, or
  • respondents trying to increase their chances of winning incentives.

ROI on quality checks

The cost of quality checks on Conjointly is included in the price of sample that we supply.

If you bring your own sample from consumer panels, we implore you to switch on the checks. Most of them are free. There are charges for:

  • LLM analysis of open-ends (catching unsophisticated bots, duplicates, etc.): USD 0.10 per complete,
  • Detection of anomalous network activity (catching proxies, VPNs, etc.): USD 0.05 per entry.

If you are bringing your own sample, consider this example calculation of the return on investment for quality checks:

                                                  Checks off    Checks on
  Cost per complete charged by your supplier      $4.00         $4.00
  Incidence rate (IR)                             70%           70%
  N ordered                                       1000          1000
  Quality check cost: LLM analysis of open-ends   -             $100
  Quality check cost: network activity analysis   -             $119
  Total cost                                      $4000         $4219
  % of sample excluded after checks               40%           10%
  N actually useful                               600           900
  Cost per useful complete                        $6.67         $4.69

In this example, you reduce the cost per useful response by 30% when quality checks are on, without increasing your budget excessively.
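The arithmetic behind this example is easy to reproduce. The sketch below uses the figures from the table (the exclusion rates follow from 600 and 900 useful completes out of 1000 ordered):

```python
# Worked example: cost per useful complete with and without quality checks.
def cost_per_useful(n_ordered, cost_per_complete, check_costs, excluded_share):
    total = n_ordered * cost_per_complete + sum(check_costs)
    useful = round(n_ordered * (1 - excluded_share))
    return total / useful

checks_off = cost_per_useful(1000, 4.00, [], 0.40)          # $4000 / 600
checks_on = cost_per_useful(1000, 4.00, [100, 119], 0.10)   # $4219 / 900

print(f"${checks_off:.2f}")  # → $6.67
print(f"${checks_on:.2f}")   # → $4.69
print(f"{1 - checks_on / checks_off:.0%} cheaper per useful complete")  # → 30% cheaper per useful complete
```

You can substitute your own supplier's cost per complete and expected exclusion rates to see whether the checks pay for themselves on your sample.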

This, of course, does not take into account the cost of compromised business decisions based on flawed data, which is much higher.

Why Conjointly’s quality checks work

  • Because our quality checks are inbuilt and not an afterthought bolt-on.
  • Because we have dedicated resources to manually review data and refine the survey platform based on these reviews.
  • Because we buy sample from multiple suppliers and have a large volume of data to learn from.
  • Because we are transparent about the quality of data from different suppliers and provide this information to our clients.

No checks are perfect, just as no windscreen wipers give a completely clear view. There are false positives and false negatives. But having them on is much better than having them off.

Windscreen wipers are essential for driving in the rain, just like quality checks are essential for research with online consumer panels.
