Synthetic market research: The case for a hybrid approach

Michelle Van Zandt
Read Time: 6 Minutes


In the competitive world of go-to-market (GTM) strategy, speed to quality insights and data accessibility are make-or-break advantages. Yet traditional market research (surveys, interviews, focus groups) takes time. Enter a new alternative in the AI era: synthetic research.

Synthetic market research uses AI-generated or simulated data to produce responses, behaviors, or insights that approximate what real populations would say or do. Broadly speaking, synthetic research means creating or synthesizing data that mimics real-world outcomes instead of collecting it from human respondents.

At a technical level, synthetic research takes one of two forms:

  1. Machine learning models create simulated versions of real data that preserve statistical patterns (synthetic data generation) (PMC 2023); or
  2. Existing datasets and studies are aggregated to produce meta-insights across markets and buyer segments (research synthesis) (ResearchGate 2022)

What are the use cases for synthetic research related to marketing?

  • Simulated buyer journeys: Using synthetic personas to map awareness, consideration, and conversion pathways to inform campaign design
  • Concept and message testing at scale: Generating thousands of “AI respondents” to evaluate positioning hypotheses within hours
  • Synthetic competitive benchmarking: Modeling how buyers might respond to competitor moves in pricing, packaging, or product features
  • Synthetic segmentation: Combining modeled behavioral data with real CRM and digital signals to project how markets may shift under new conditions

Should you use synthetic market research?

Yes and no. It has its purposes but shouldn’t be treated as a panacea. Based on our expertise and an experiment (details below), human research remains the gold standard because it provides a level of depth, context, nuance, and confidence synthetic research cannot produce. Synthetic data and research should not be used alone, as a direct replacement for human research. The right approach often is a hybrid to capitalize on the benefits of both human and synthetic research.

“Synthetic data and research should not be used alone, as a direct replacement for human research. The right approach often is a hybrid to capitalize on the benefits of both human and synthetic research.”

To reach that conclusion, we conducted an experiment comparing responses from a survey we fielded using traditional quantitative and qualitative methodologies with responses from ChatGPT.

The topic of our research: healthcare benefits, with Employers as the surveyed audience.

Let’s dive into the results of the experiment first, followed by our rationale for why a hybrid approach is often the best solution.

What our AI vs traditional research experiment found

On the surface, ChatGPT wasn’t wrong – but it wasn’t accurate either.

It was able to mirror the high-level themes from traditional research: employers value strong benefits, react to rate increases at a specific level, and weigh network strength, employee feedback, and reputation – at impressive speeds.

But it lacked the detail that fuels activation. The nuance, context, and reliability that drive real insight and understanding can only come from engaging real people.

The differences

  1. Depth & context: Without human anchoring, synthetic insights risk being disconnected from how and why respondents feel, think, and act
  2. Over-simplification: AI struggles to spot nuance, generalizing data without context and smoothing out the niche findings that reflect real-world behaviors
  3. Closed-loop responses: Over time, and with repetition of the same stale data, AI feeds on its own outputs, declining in consistency and accuracy
  4. Trust & transparency: Much remains unknown or opaque about how insights are generated, casting doubt on recommendations sourced solely from AI

Examples from our experiment

Depth & context

When asked to prioritize factors influencing employer decisions, ChatGPT’s responses diverged from real-world data and failed to explain how humans actually make decisions.

  • ChatGPT not only downplayed the role of carrier brand relative to other factors; it also suggested switching carriers for a better price or coverage was a ‘no-brainer’, a decision made lightly
  • Our human-led study highlighted the significant role brand plays for Employers selecting Employee plans – likely as a proxy to demonstrate quality
  • Our human-led study also found switching carriers is a very big deal. Employers overwhelmingly prefer the status quo, something ChatGPT failed to capture. Beyond cost and coverage, the emotional and operational barrier of switching carriers (inertia) is a major roadblock, almost regardless of savings

Implication: Played out in a real-world recommendation, ChatGPT’s output would steer a carrier client toward focusing messaging and GTM strategy on tactical plan components rather than on building its brand. That messaging would likely resonate less, resulting in wasted time and resources.

Over-simplification

When asked how Employers compare health plans for employees, ChatGPT provided surface-level insights, with no detail on why respondents chose certain factors or how answers may differ across respondent profiles.

  • Our human-led research surfaced nuanced priorities—like mental health services for younger workforces, or Occupational Therapy/Physical Therapy coverage for older ones
  • ChatGPT’s output stayed broad and predictable, citing priorities like deductibles and out-of-pocket costs regardless of respondent profile and nuance

Implication: Respondent motivations, emotions, and context make research actionable – without those insights, recommendations can fall flat. Further, surfacing differences between respondents is key to crafting GTM strategies that resonate.

Closed loop responses (and trust & transparency)

When asked about the renewal rate increase threshold at which Employers would explore alternatives, ChatGPT gave a different answer each of the five times we asked, with a spread of 11 percentage points across iterations.

  • ChatGPT estimated that anywhere from 66% to 77% of respondents would need a renewal rate increase of only 20% or less to explore alternatives
  • Our human-led research put that portion at 61%, validated via extensive qualitative studies and secondary research
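The run-to-run instability above is easy to quantify once the repeated answers are collected. A minimal sketch, using hypothetical stand-in values for the model's five answers (only the 66% low and 77% high come from our experiment):

```python
# Minimal sketch: quantify run-to-run spread when the same question is
# posed to an AI model repeatedly. The middle three values are
# illustrative stand-ins, not actual transcripts from our experiment.
import statistics

human_benchmark = 61  # % of respondents, from the human-led study
ai_runs = [66, 70, 72, 75, 77]  # hypothetical repeated-answer estimates, %

spread_pp = max(ai_runs) - min(ai_runs)          # percentage-point range
mean_gap = statistics.mean(ai_runs) - human_benchmark

print(f"spread: {spread_pp} pp, mean gap vs human study: {mean_gap:+.0f} pp")
# prints: spread: 11 pp, mean gap vs human study: +11 pp
```

Tracking a spread metric like this across repeated runs is a cheap sanity check before trusting any single synthetic estimate.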

Implication: Such variance in results sparks skepticism and more questions. Being open about methods and analytical approaches helps to build confidence in the data and recommendations.

Despite these limitations, there are certainly benefits of synthetic research:

  • Speed and agility: Dramatically compresses time-to-insight. Instead of recruiting panels and waiting weeks for responses, marketers can simulate thousands of buyer reactions in hours
  • Lower cost: Traditional primary research is expensive, with panel recruitment, respondent incentives, data tools, and analysis
  • Access to hard-to-reach audiences: B2B research often struggles to reach niche audiences. ChatGPT can “easily” generate responses for specific employer segments like size, industry, and geography, assuming the data already exists for it to synthesize
  • Scenario testing: AI’s flexibility makes it ideal for “what-if” modeling

Our take: Use synthetic research to complement traditional, human-led methods

Our experience conducting qualitative and quantitative research with real human beings (B2B and B2C) tells us decision making is complex, nuanced, and sometimes contradictory (e.g., you want more choice, but more choice can be overwhelming). While saving time and money are extremely attractive benefits of synthetic research, we’re not yet convinced AI can produce the same quality when results and insights are complicated and interwoven.

That’s why, at this point in time, we recommend synthetic research be considered a complement to — but not a replacement for — real human-led research methods, particularly in early exploratory phases where speed matters more than precision.

AI can scan, summarize, and simulate at scale—surfacing patterns, stress-testing ideas, and informing early hypotheses. But it can’t interpret motivations, segment behaviors, or provide consistency with confidence.

A hybrid research model unites both worlds:

  • AI’s efficiency to explore, iterate, and model possibilities
  • Human expertise to interpret, validate, and ground findings in reality

Together, they create an insight engine that’s both fast and deeply rooted in human truth.

“…synthetic research should be considered as a complement to — but not a replacement for — real human-led research methods… Together, they create an insight engine that’s both fast and deeply rooted in human truth.”

How to apply a hybrid research approach

  1. Use GenAI early to explore
    Before fielding a study, use AI to identify themes and generate hypotheses. This sharpens your questions and focuses human research where it matters most.
  2. Validate with human-led methods
    Surveys and interviews provide the context, emotion, and nuance AI can’t replicate—turning directional findings into defensible insights.
  3. Iterate continuously
    Leverage primary research to enhance or refine the synthetic data set, improving the accuracy of future analyses.

In summary

GenAI expands what’s possible in research by accelerating learning, enabling scenario testing, and broadening early exploration. But the insights that stick, the ones that inspire action and confidence, still come from people. Consider synthetic market research a complement to, not a replacement for, real human-led research methods. Used together, they give GTM leaders a new breed of insight engine, one that’s both fast and deeply rooted in human truth.
