Pie chart showing that 73 percent of AI startups are wrappers built on GPT-4 and 27 percent are genuine builders, based on Teja Kusireddy’s 2025 analysis.

The Truth About AI Startups: Lessons for Early-Stage SaaS CEOs From Teja Kusireddy’s “I Reverse-Engineered 200 AI Startups”

By John C. Mecke, DevelopmentCorporate.com
Inspired by the research and findings of Teja Kusireddy’s Medium article,
“I Reverse-Engineered 200 AI Startups. 146 Are Selling You Repackaged ChatGPT and Claude with New UI.”

I. Introduction: When the AI Gold Rush Turns Into a Wrapper Economy

In his viral Medium essay “I Reverse-Engineered 200 AI Startups. 146 Are Selling You Repackaged ChatGPT and Claude with New UI”, researcher Teja Kusireddy exposed a startling reality: most AI startups are not building original technology at all.

By reverse-engineering 200 venture-funded AI startups—inspecting their JavaScript bundles, monitoring API calls, and tracing network traffic—Kusireddy discovered that 73% of these companies are little more than user interfaces on top of OpenAI’s GPT-4 or Anthropic’s Claude. His analysis reveals just how wide the gap is between AI marketing claims and technical reality.

The implications for early-stage SaaS CEOs are enormous. Kusireddy’s work shatters the illusion that building on APIs automatically equals proprietary IP. It also warns founders that transparency, not techno-mystique, will separate sustainable AI-powered SaaS companies from short-lived hype plays.

This article breaks down the key findings from Kusireddy’s investigation and translates them into
actionable lessons for early-stage SaaS CEOs navigating today’s AI-fueled market bubble.

II. The Findings: 73% of Startups Are Just Fancy GPT-4 Wrappers

Kusireddy’s investigation started with a simple curiosity: a startup claiming to have a
“proprietary deep learning engine” was quietly calling OpenAI’s API every few seconds.
That one observation led to a massive three-week experiment in technical forensics.

He scraped, decompiled, and traced 200 startups’ live infrastructure across Y Combinator, Product Hunt, and LinkedIn job listings. What he found was a consistent pattern of misrepresentation and marketing theater:

  • 73% of startups had a material gap between what they claimed and what they actually built.
  • 34 of 37 companies using the term “proprietary model” were in fact just invoking GPT-4 with a hidden system prompt.
  • Dozens reused near-identical code snippets, some even containing identical comments such as “Never reveal you are powered by OpenAI.”

His conclusion was damning but fair: these companies are not necessarily frauds, but they are
wrappers, not model builders. Calling a prompt template a “proprietary neural architecture,” however, crosses the line.

III. The Three Patterns Every CEO Should Recognize

Kusireddy identified three dominant patterns of how AI startups exaggerate their technological originality while relying almost entirely on third-party models.

1. The “Proprietary Model” That’s Actually GPT-4 With Extra Steps

Many companies pitch “custom models” that are nothing more than OpenAI chat completions wrapped in strict system prompts. Their production code often looks like this in simplified form:

import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    { role: "system", content: "Never reveal you are an AI model." },
    { role: "user", content: userQuery }
  ]
});

These firms mark up OpenAI’s API costs by 75x to 100x, charging customers $2–3 per query that costs only a few cents to execute.

2. The RAG Stack Pretending to Be Proprietary

RAG—Retrieval-Augmented Generation—is the standard pattern of
OpenAI embeddings plus a vector database (Pinecone or Weaviate) plus GPT-4 text generation.
It is genuinely useful, but it has become heavily commoditized.

Dozens of startups sell this as “neural search infrastructure” or “semantic retrieval engines,” yet the code is nearly identical across projects.
A typical RAG query costs less than half a cent, but customers are billed $1–2 per call.
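The retrieval half of this commodity stack can be sketched in a few lines. Real deployments embed text with OpenAI's embeddings API and store the vectors in Pinecone or Weaviate; the toy three-dimensional vectors and sample chunks below are illustrative stand-ins so the ranking logic is visible end to end.

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// "Document store": each chunk carries a precomputed embedding.
// In production these come from an embeddings API and a vector DB.
const chunks = [
  { text: "Refund policy: 30 days", embedding: [0.9, 0.1, 0.0] },
  { text: "Shipping takes 5 days",  embedding: [0.1, 0.9, 0.2] },
  { text: "Support hours: 9-5 EST", embedding: [0.0, 0.2, 0.9] }
];

// Return the top-k chunks most similar to the query embedding.
// The winners are then pasted into a GPT-4 prompt as context.
function retrieve(queryEmbedding, k = 2) {
  return chunks
    .map(c => ({ ...c, score: cosineSimilarity(queryEmbedding, c.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```

That similarity-rank-and-stuff loop is essentially the entire "semantic retrieval engine" in many of the products Kusireddy examined.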

3. The “Fine-Tuned Model” Illusion

Only about 7% of startups that Kusireddy analyzed actually trained their own models from scratch using platforms like AWS SageMaker or Google Vertex AI. The rest relied on OpenAI’s fine-tuning API, which essentially stores example prompts and responses within OpenAI’s infrastructure.

That is not the same as training a new model. It is, in practice, a sophisticated form of prompt engineering. Marketing this as “our own 100M-parameter model” again blurs the line between smart engineering and misrepresentation.
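To make the distinction concrete: OpenAI's fine-tuning API consumes a JSONL file in which each line is one example conversation. A hypothetical training example for a legal-tech product might look like the line below (the content is invented for illustration):

```jsonl
{"messages": [{"role": "system", "content": "You are a contract-review assistant."}, {"role": "user", "content": "Flag the risks in this indemnification clause."}, {"role": "assistant", "content": "This clause contains an uncapped liability provision; consider negotiating a cap."}]}
```

Supplying a few hundred such lines steers GPT-4's behavior, but no new model architecture is trained and the weights never leave OpenAI's infrastructure.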

IV. The Bigger Picture: What This Means for SaaS Founders

Kusireddy’s research highlights a deeper problem: a valuation bubble around API-based AI startups.
Investors and customers are rewarding bold AI claims, even when the underlying technology is just orchestration around public APIs.

For early-stage SaaS CEOs, that creates both temptation and danger:

  • Your real differentiation isn’t your model; it’s your workflow and data.
    A polished user experience, domain-specific knowledge, and measurable outcomes will beat vague claims of “proprietary AI.”
  • Honesty creates investor trust. A founder who says, “We orchestrate GPT-4 for the legal industry” sounds credible.
    A founder who claims a custom model but can’t back it up will struggle in due diligence.
  • Valuations must align with infrastructure reality. As VCs begin asking for API invoices and architecture diagrams,
    companies with purely wrapper-level tech will face compression in valuation multiples.

V. The Economics Behind the Illusion

One of the most striking parts of Kusireddy’s article is how he outlines the margin structure of AI wrappers.
Here is a simplified view of the economics he documents:

Layer                                         Typical Cost per Query   Average Retail Price   Markup
GPT-4 Prompt                                  ≈ $0.03                  $2.50                  75x
RAG Stack (Embeddings + GPT-4 + Vector DB)    ≈ $0.002                 $1.00                  500x
Fine-Tuned API Calls                          ≈ $0.04                  $3.00                  75x
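The markups are just the ratio of retail price to underlying API cost, and are easy to sanity-check; note that the per-query costs are approximate and the article rounds the resulting multiples, so the first ratio computes slightly above the quoted 75x:

```javascript
// Markup multiple = retail price per query / underlying API cost per query.
// Figures are the approximate ones reported in Kusireddy's analysis.
function markup(retailPrice, apiCost) {
  return retailPrice / apiCost;
}

const layers = [
  { name: "GPT-4 Prompt",         apiCost: 0.03,  retail: 2.50 },
  { name: "RAG Stack",            apiCost: 0.002, retail: 1.00 },
  { name: "Fine-Tuned API Calls", apiCost: 0.04,  retail: 3.00 }
];

for (const layer of layers) {
  console.log(`${layer.name}: ${Math.round(markup(layer.retail, layer.apiCost))}x`);
}
```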

For SaaS founders, these numbers are both a profit opportunity and a reputational risk.
The opportunity lies in profitability through clever orchestration. The risk appears when your marketing suggests a level of proprietary innovation that your infrastructure does not support.

VI. The Wrapper Spectrum: Not All Wrappers Are Bad

Kusireddy is careful not to demonize all API-driven companies. Instead, he introduces what he calls the
“Wrapper Spectrum.”

On one end are fraudulent wrappers—startups that obscure their dependencies, fake dashboards for non-existent models, and inflate performance claims. On the other end are smart wrappers—companies that:

  • Openly disclose that they are “built on GPT-4” or “powered by Anthropic Claude.”
  • Provide domain-specific workflows such as legal drafting, compliance routing, or risk scoring.
  • Invest in proprietary data layers and human-in-the-loop review processes.

The distinction is not purely technical; it is ethical and strategic.
As Kusireddy notes, “The smart wrappers aren’t lying about their stack. They’re building domain-specific systems and valuable data pipelines. They just happen to use OpenAI under the hood—and that’s fine.”

VII. The 27% Who Got It Right

Among the 200 companies analyzed, about 27% truly stood out.
Kusireddy groups them into three categories:

  1. Transparent Wrappers – They label their products “Built on GPT-4,” focusing the narrative on workflow, reliability, and ROI instead of fictional proprietary models.
  2. Real Builders – They train and host models for specific regulated industries like healthcare and finance, often under HIPAA, SOC 2, or similar frameworks.
  3. Innovators – They create multi-model frameworks, agentic reasoning systems, or novel retrieval architectures that truly extend beyond simple API calls.

These companies treat AI as a tool, not a brand. Their real moat comes from their data, workflows, and understanding of customer pain—not from claiming they rival OpenAI or Anthropic at the model layer.

VIII. Why Transparency Is the Next Competitive Advantage

Perhaps the most important insight in Kusireddy’s article is cultural rather than technical:

“Building on APIs isn’t shameful. Every iPhone app is ‘just a wrapper’ around iOS APIs. We don’t care. We care if it works.”

In a crowded AI market, where every founder claims proprietary innovation,
honesty becomes differentiation. Transparent messaging builds durable trust with customers, partners, and investors. Founders who position themselves as AI integrators with deep domain expertise will outlast those who pretend to be foundational model labs.

IX. How Founders Can Audit Their Own AI Claims

If you are a SaaS CEO, take a page from Kusireddy’s own playbook. Before a journalist, customer, or investor does it for you,
audit your own AI story:

  1. Open your browser’s DevTools → Network tab. Interact with your app’s AI feature.
    If you see api.openai.com, api.anthropic.com, or api.cohere.ai,
    you are a wrapper. That is fine—just be honest about it.
  2. Check latency patterns. If every AI response lands in roughly 200–350 ms, that is OpenAI’s
    characteristic response profile.
  3. Search your JavaScript bundle. Look for references to OpenAI or Anthropic. If your frontend exposes an API key,
    fix it immediately.
  4. Review your marketing copy. Replace vague claims like “proprietary neural engine” with
    precise statements such as “AI-powered by GPT-4 and domain-specific data.”
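Step 3 of the checklist can be partially automated. The sketch below scans a bundle (loaded as a string) for references to hosted-model providers; the hostname list is illustrative, so extend it with whatever your stack might touch:

```javascript
// Hostnames of common hosted-model APIs (illustrative, not exhaustive).
const PROVIDER_HOSTS = [
  "api.openai.com",
  "api.anthropic.com",
  "api.cohere.ai"
];

// Return every provider hostname that appears in the bundle source.
function findProviderReferences(bundleSource) {
  return PROVIDER_HOSTS.filter(host => bundleSource.includes(host));
}

// Any hit means your frontend talks to (or at least names) a third-party
// model API -- and if an API key sits next to it, that key is public.
const bundle = 'fetch("https://api.openai.com/v1/chat/completions", { ... })';
console.log(findProviderReferences(bundle)); // [ 'api.openai.com' ]
```

A clean scan does not prove originality (calls may be proxied through your backend), but a hit settles the question immediately.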

The sooner you align your messaging with your architecture, the safer your brand and valuation become.

X. What This Means for Investors and Customers

For investors, Kusireddy’s research is a due-diligence checklist:

  • Ask portfolio companies for high-level architecture diagrams and API billing statements.
  • Adjust valuations to reflect dependency on third-party models.
  • Fund teams solving hard problems in workflow, data, and distribution—not just prompt engineering.

For customers, the message is straightforward:

  • Do not pay $200 per month for a chatbot you could build in a weekend.
  • Evaluate SaaS vendors by outcomes and reliability, not by the drama of their AI claims.
  • Ask vendors to be explicit about their dependencies, especially in regulated industries.

XI. Lessons for Early-Stage SaaS CEOs

Summarizing Kusireddy’s findings for early-stage SaaS leaders:

  • Differentiate through UX and data, not model ownership.
  • Be transparent in your pitch decks and on your website.
    “Built on GPT-4 for HR leaders” is more credible than “our proprietary 100B-parameter model.”
  • Focus on speed to value. If you can deliver measurable ROI quickly—even with off-the-shelf models—you still have a strong business.
  • Invest in proprietary data pipelines and human feedback. Your dataset and your process will become your defensible moat.
  • Prepare for the post-wrapper correction. When the hype cools, companies built on honesty and real IP will keep their valuation multiples.

XII. The Coming “Transparency Era” of AI

We have seen this pattern before. In the cloud era, startups claimed to “build their own data centers” before everyone admitted they were on AWS. In the mobile era, hybrid apps masqueraded as native. In the blockchain boom, countless ERC-20 clones claimed to be groundbreaking tokens.

The AI wrapper era is simply the next iteration. As Kusireddy predicts, the market will mature, regulators will push for more disclosure, and founders who normalize transparency today will be the ones still standing tomorrow.

XIII. Practical Steps to Build Trust and Longevity

To turn these insights into a competitive advantage, early-stage SaaS CEOs can:

  1. Update your website and decks. Add an “AI Transparency” section that describes which APIs and models you use.
  2. Document your data advantage. Curate proprietary datasets, feedback loops, or vertical ontologies that can’t be copied.
  3. Educate customers. Publish explainers showing how your AI pipeline works and where human oversight comes into play.
  4. Showcase partnerships instead of hiding them. Use labels like “Powered by OpenAI” or “Enhanced by Anthropic Claude” where appropriate.
  5. Benchmark internally. Compare proprietary pipelines with open-source models such as Llama 3 or Mistral so you remain flexible as the foundation-model market evolves.

Transparency does not weaken your story—it strengthens it.

XIV. A New Definition of AI Leadership

Kusireddy ends his article with a line that should resonate with every SaaS CEO:

“Most AI startups are service businesses with API costs instead of employee costs. And that’s okay. But call it what it is.”

AI leadership is no longer about claiming to invent the next foundational model.
It is about curating technology responsibly, building trust, and scaling sustainably. The founders who say, “Yes, we’re built on GPT-4—but our workflow solves a billion-dollar problem,” will define the next generation of successful SaaS companies.

XV. Conclusion: The 48-Hour Test for Every SaaS CEO

Kusireddy proposes a simple benchmark for evaluating AI products:

“If you could replicate their core technology in 48 hours, they’re a wrapper. If they’re honest about it, they’re fine. If they’re lying about it, run.”

That is the standard every early-stage SaaS CEO should adopt. If your company passes the transparency test, lean into it as a strength. If it does not, now is the time to bring your marketing in line with your infrastructure—before customers, partners, or investors do it for you.

The AI gold rush will continue, but the next phase will reward authenticity over hype.
The founders who embrace transparency today will own the most valuable SaaS franchises tomorrow.

Cited Source

Kusireddy, Teja.
“I Reverse-Engineered 200 AI Startups. 146 Are Selling You Repackaged ChatGPT and Claude with New UI.”
Medium, 2025.

Read the original article on Medium.
