Look, if you’re building or funding an early-stage SaaS company, you know the drill: you need to learn fast, but you’re burning through cash. So here’s the million-dollar question—can you use AI to simulate user interviews and save time and money? The short answer: yes, but not the way you think.
I’ve gone through five solid research papers on this topic, and here’s what you actually need to know. Think of AI-generated research participants (they call them “synthetic panels”) as a really useful tool in your toolkit—but not the whole toolkit. Used right, they’ll speed you up. Used wrong, they’ll send you down the wrong path with false confidence.
The Bottom Line (For People Who Skim)
- AI participants are great for the messy early stuff—brainstorming, drafting interview questions, testing survey wording, exploring “what if” scenarios. But they absolutely cannot replace talking to real humans when you need to understand actual experiences, context, or make big decisions.
- They’re weirdly too perfect sometimes. LLMs can replicate some classic human behaviors from psychology experiments, but they also show this creepy “hyper-accuracy” thing where they’re unrealistically correct. Real people are messier, and that messiness matters.
- They have a Western bias. These models work best for U.S., UK, and other English-speaking wealthy countries. If you’re selling globally—especially in Latin America, Asia, or other regions—don’t trust AI to understand those markets without serious validation.
- The statistics don’t hold up. AI can give you survey results that look decent on the surface, but dig deeper and the math falls apart. Variance is too narrow, relationships between variables get distorted, and you can’t reproduce results reliably.
- Sometimes they’re actually really good. For specific structured tasks—like role-playing scenarios or training situations—top AI models can match or beat humans. Just keep humans in the loop and be transparent about what you’re doing.
Why Should You Care?
- Speed matters. You can go from question to first insights in hours instead of weeks. That’s huge when you’re trying to ship fast and iterate.
- Money matters. When you’re pre-seed or seed stage, using AI to figure out what’s worth investigating with real people can save you thousands in recruitment costs.
- The world is bigger than Silicon Valley. If you’re selling outside the U.S., AI’s Western bias can really mess you up. It might miss regional requirements—like how procurement works in Mexico, privacy concerns in Germany, or payment preferences in Costa Rica.
What the Research Actually Says (Without the Academic Jargon)
1) AI Doesn’t Actually Live a Human Life (And That Shows)
Researchers at Carnegie Mellon talked to 19 qualitative researchers about using AI-generated interview responses. The verdict? Sure, AI can sound human at first glance. But it’s missing something crucial—actual lived experience. It doesn’t know what it feels like to be frustrated with clunky software at 11 PM when you’re trying to close the quarter. It’s never felt the political dynamics of pushing for a new tool purchase in a bureaucratic organization.
What this means for you: Use AI to prep better human interviews. Let it help you brainstorm questions and test your interview guide. But when you need to understand real context, emotions, and the messy human stuff that drives adoption—talk to actual people.
2) AI Can Act Human… Until It Suddenly Doesn’t
One study tested whether AI could replicate classic psychology and language experiments like the Ultimatum Game and garden-path ("gotcha") sentences. It could! But then it did something weird: in "wisdom of crowds" tests, it gave unrealistically perfect answers, way more accurate than real people would be.
What this means for you: Don’t assume AI gives you realistic error margins. Real people are uncertain and noisy in their answers. If your AI panel is too consistent and confident, that’s a red flag, not a feature.
3) Works Great in America, Gets Sketchy Everywhere Else
When researchers compared AI to the World Values Survey (a massive global opinion database), they found AI was pretty accurate for Western, English-speaking, wealthy countries. Everywhere else? Not so much. The errors multiplied.
What this means for you: If you’re selling to customers in the U.S., UK, and Western Europe, you’re probably okay with some caution. But if your ideal customer profile includes Latin America, Asia, Africa, or non-English markets—be very skeptical of “global insights” from AI. Validate with real people in those regions.
4) The Averages Look Fine, But the Math Is Broken
One paper compared AI-generated survey responses to a gold-standard political survey. The headline numbers looked similar, but when they dug into the statistics, things fell apart. The variance was too tight, about half the correlations between variables were wrong, and results changed depending on how you asked or when you asked.
What this means for you: Don’t use AI surveys to make pricing decisions or forecast demand. Use them to test whether your questions make sense before you survey real people.
5) For Some Specific Things, AI Is Actually Excellent
In one study, researchers gave AI and humans a standard situational judgment test (the kind used in HR assessments to measure social skills). Several AI models matched or beat a strong human sample and aligned well with expert ratings.
What this means for you: For sales role-plays, customer support training, or leadership scenarios, AI can create high-quality practice situations and feedback. Just make sure humans are reviewing the output and you’re upfront about using AI.
How to Actually Use This Stuff (The Practical Playbook)
Think of AI participants as your research assistant, not your research team. Here’s how to use them without shooting yourself in the foot:
Start with AI to Make Your Human Research Better
- Build draft personas. Let AI generate some initial customer personas and user journeys. Then mark everything you need to validate with real people—their actual jobs, buying triggers, and objections.
- Test your questions. Have AI take your draft interview guide or survey. It'll surface confusing wording, missing questions, or places where you're leading the witness (see the sketch after this list).
- Explore edge cases. Ask AI to role-play unusual stakeholders—like that paranoid security guy or the penny-pinching CFO. Then bring those scenarios into real stakeholder conversations.
- Practice your pitch. Before your next demo, have AI play a skeptical CTO. See where your messaging falls apart. (But also practice with real advisors or coaches.)
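If you want to see what that question-testing step looks like in practice, here's a minimal sketch. It assumes the OpenAI Python SDK with an API key configured; the model name, persona, and questions are illustrative placeholders, not a recommendation.

```python
# Minimal sketch: have an LLM persona "take" a draft interview guide and flag
# confusing or leading questions. Assumes the OpenAI Python SDK is installed
# and OPENAI_API_KEY is set; the model name, persona, and questions are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are an operations manager at a 200-person logistics company "
    "evaluating new SaaS tools. Answer candidly and briefly."
)

draft_questions = [
    "How satisfied are you with your current workflow tools?",
    "Wouldn't you agree that manual reporting wastes your team's time?",  # leading
    "Describe the last time a tool purchase got blocked internally.",
]

for question in draft_questions:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whichever model you have access to
        temperature=1.0,
        messages=[
            {"role": "system", "content": PERSONA},
            {
                "role": "user",
                "content": (
                    f"Interview question: {question}\n\n"
                    "First answer as the persona. Then, on a new line starting "
                    "with 'CRITIQUE:', say whether the question is confusing, "
                    "leading, or double-barreled."
                ),
            },
        ],
    )
    print(f"Q: {question}\n{response.choices[0].message.content}\n" + "-" * 40)
```

Run it across a few different personas and temperatures; the critiques that show up repeatedly are the questions worth rewriting before you get in front of real people.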
Then Talk to Real Humans—But Smarter
- Recruit strategically. Based on what you learned from AI, recruit 8-12 specific people who sit at key decision points—like security gatekeepers or operations managers.
- Go deep where AI can’t. Spend time on things AI cannot understand: internal politics, switching costs, social risk, compliance pressures.
- Don’t assume U.S. = World. If you’re targeting Latin America, Europe, or Asia, do country-specific interviews. Don’t assume your U.S. findings apply everywhere.
Make Your AI More Realistic
- Add some randomness. Deliberately inject human-like messiness, for example by varying temperature, personas, and prompt phrasing, so your AI responses don't all sound suspiciously similar and perfect.
- Weight toward reality. Adjust AI responses to match your actual customer mix (industry, geography, company size, role).
- Check your work. After real interviews, rerun your AI panel on the same questions. If the distributions don't match (both averages and spread), don't trust the AI for decisions. One way to run that check is sketched below.
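Here's a minimal sketch of that comparison, assuming you've coded both the human and AI answers onto the same 1-5 scale; the scores below are made up for illustration.

```python
# Minimal sketch: compare the center and spread of AI-panel vs. human answers
# to the same question. The 1-5 ratings below are made up; you'd code them
# from your own transcripts. Requires numpy and scipy.
import numpy as np
from scipy import stats

human_scores = np.array([2, 5, 3, 1, 4, 5, 2, 3, 4, 1])  # real interviews
ai_scores = np.array([3, 4, 3, 3, 4, 3, 4, 3, 3, 4])      # synthetic panel

print(f"Human: mean={human_scores.mean():.2f}, std={human_scores.std(ddof=1):.2f}")
print(f"AI:    mean={ai_scores.mean():.2f}, std={ai_scores.std(ddof=1):.2f}")

# Two-sample Kolmogorov-Smirnov test: do the answers follow the same distribution?
result = stats.ks_2samp(human_scores, ai_scores)
print(f"KS statistic={result.statistic:.2f}, p-value={result.pvalue:.3f}")

# Rough red flags: collapsed variance or clearly diverging distributions.
if ai_scores.std(ddof=1) < 0.5 * human_scores.std(ddof=1):
    print("Warning: AI panel is far too uniform; don't use it for decisions.")
if result.pvalue < 0.05:
    print("Warning: AI and human answers look like different distributions.")
```

If the AI panel's spread collapses or the distributions diverge, downgrade the synthetic results to brainstorming input only.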
Be Honest About What You’re Doing
- Label it clearly in your board decks and investor updates. Say “synthetic panel exploration” not “customer interviews.”
- Don’t claim you talked to X customers when you only used AI. That’s misleading at best, dishonest at worst.
Real Use Cases You Can Try This Month
- Interview Guide Testing: Run your 45-minute interview guide through AI, spot the weak questions, clean it up, then do 6-10 real founder-led interviews.
- Objection Handling: Have AI simulate different buyer roles (economic buyer, technical buyer, security, legal). Collect objections and responses. Test these in real sales calls and see what actually works.
- Training Scenarios: Use AI to create realistic role-plays for your sales or support team. Keep a human coach to review, adjust, and localize the content.
- Survey Pretesting: Test your survey questions with AI first to catch confusing wording. Then run a small real survey (100-300 people) in your target market for actual statistics.
- International Expansion Planning: Use AI to brainstorm likely challenges by region (payments, privacy, workflows). Then validate the top 3 with regional experts or customers before committing resources.
What to Track (For Your Board)
- Speed & Cost: How long from question to first draft? How much cheaper per iteration?
- Reality Checks: Do AI and human responses have similar spread and patterns on the same questions? Flag when they don't match.
- Bias Checks: Break results down by geography and language (U.S. vs. UK/EU vs. Latin America). Require human validation outside the U.S. before taking action. A sketch of this regional breakdown follows the list.
- What Actually Works: How many product or messaging changes started with AI insights and survived real-world validation?
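For the bias check, here's a minimal sketch of the regional breakdown, assuming pandas and a hand-maintained log of which AI-predicted findings your human interviews later confirmed; column names, rows, and the 70% threshold are illustrative.

```python
# Minimal sketch: track how often AI-panel findings survive human validation,
# broken down by region. Assumes pandas and a hand-maintained log; column
# names, rows, and the 70% threshold are illustrative.
import pandas as pd

log = pd.DataFrame(
    {
        "region": ["US", "US", "UK/EU", "UK/EU", "LatAm", "LatAm"],
        "finding": [
            "Pricing objection", "Onboarding friction",
            "GDPR concern", "Procurement delay",
            "Payment method gap", "Local-language support",
        ],
        "ai_predicted": [True, True, True, False, True, True],
        "human_confirmed": [True, True, True, True, False, False],
    }
)

# Fraction of findings where the AI panel and human validation agreed, per region.
log["match"] = log["ai_predicted"] == log["human_confirmed"]
match_rate = log.groupby("region")["match"].mean()
print(match_rate)

# Gate decisions in regions where the synthetic panel is unreliable.
for region, rate in match_rate.items():
    if rate < 0.7:
        print(f"{region}: require human validation before acting ({rate:.0%} match)")
```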
For VCs: How to Evaluate “AI Research” Claims
- Ask for proof. Not just whether the averages match, but whether the spread and patterns match real data. Look at how results drift over time and with different prompts.
- Check beyond the U.S. Are there validation points in non-English markets?
- Watch for misleading claims. Make sure they’re not calling AI simulations “customer interviews” in pitch decks.
- Look at fit. Some tasks (structured role-plays) are great for AI. Make sure they have humans in the loop for nuanced stuff.
The Real Story
AI research participants are here, and they’re useful. They speed up learning, help you design better studies, and let you explore more options when you’re resource-strapped. But here’s the thing the research makes crystal clear: you still need real humans for credible insights and real depth.
The winning strategy for 2025 is hybrid: use AI for speed and breadth, use humans for depth and truth.
If you’re running a seed-stage SaaS in the U.S., UK, Europe, or Latin America (yes, including Costa Rica and Central America), treat AI participants as a research accelerator, not a replacement for actual research. Test it, validate it, be honest about it—and you’ll move faster without fooling yourself.
The Research Papers (With Plain-English Summaries)
1) “Simulacrum of Stories” — Carnegie Mellon (2024)
What it says: Researchers interviewed 19 qualitative researchers about using AI as interview participants. The consensus? AI can sound plausible but lacks real-world context, obscures whose voice it represents, reduces participant agency, and risks erasing marginalized perspectives. They call this the “surrogate effect”—AI standing in for real communities can distort or erase their voices.
Link: https://arxiv.org/abs/2409.19430
2) “Using LLMs to Simulate Humans” — ICML 2023
What it says: This paper introduces “Turing Experiments” to see if AI can replicate famous psychology studies. It can replicate some (like the Ultimatum Game) but shows “hyper-accuracy distortion” in wisdom-of-crowds tests—giving unrealistically perfect answers unlike messy humans.
Link: https://proceedings.mlr.press/v202/aher23a.html
3) “Performance and Biases in Public Opinion Simulation” — 2024
What it says: Using World Values Survey data, researchers showed AI simulates public opinion well in Western, English-speaking, developed countries but poorly elsewhere. Demographic and topic biases persist (gender, education, issue type). The recommendation: use carefully alongside conventional methods, especially for cross-cultural work.
Link: https://doi.org/10.1057/s41599-024-03609-x
4) “Synthetic Replacements for Human Survey Data? The Perils” — Political Analysis 2024
What it says: Comparing ChatGPT “personas” to a gold-standard political survey, researchers found similar averages but crushed variance, distorted relationships between variables (about half were wrong), results that changed with different prompts or timing, and poor reproducibility. Bottom line: don’t use AI-generated survey data for statistics or inference.
Link: https://doi.org/10.1017/pan.2024.5
5) “LLMs Can Outperform Humans in Social Judgments” — Scientific Reports 2024
What it says: On a validated social judgment test, several AI models matched or beat a high-performing human sample, aligning with expert ratings. Consistency varies by model, but the potential for training and enablement is real—with proper oversight and transparency.
Link: https://doi.org/10.1038/s41598-024-79048-0
Final Thought
Think of AI participants like a wind tunnel for your research. They help you design better experiments, but they don’t replace real-world flight tests. If you set up a hybrid workflow now—calibrated AI plus targeted human research—you’ll learn faster and build credibility. That’s what your board wants to see, and it’s what your market will reward.
FAQs on Synthetic Panels in Qualitative Research
What are synthetic panels in qualitative research?
Synthetic panels use AI-generated participants based on large language models (LLMs) to simulate user interviews and feedback, enabling faster hypothesis testing and discovery.
Can synthetic panels replace human interviews?
No. Research shows LLMs can mimic human narratives but lack lived experience and emotional context. A hybrid approach combining synthetic and real participants produces more reliable results than synthetic panels alone.
Why are synthetic panels relevant for SaaS startups?
They drastically cut research time and cost, helping SaaS founders test assumptions quickly. However, validation with real customers remains critical for credible go-to-market and pricing decisions.
How should VCs view startups using synthetic research?
VCs should ensure that startups use synthetic panels responsibly—as accelerators for human research, not substitutes—to maintain market accuracy and user trust.