Builder.ai, once a darling of the London tech scene with a $1.5 billion valuation and backing from Microsoft and Qatar’s sovereign wealth fund, has become the poster child for everything wrong with AI hype. The company claimed its digital assistant “Natasha” powered revolutionary AI-driven app development, when in reality, around 700 engineers in India were manually building those apps. The entire AI narrative was fabricated.
This isn’t just another startup failure. This is the largest AI fraud since the ChatGPT investment boom began, and it reveals how easy it is to bamboozle investors with AI claims when the underlying technology barely functions. Builder.ai filed for bankruptcy in May 2025 after an audit revealed it had inflated its 2024 revenue roughly fourfold: the company claimed $220 million in sales when the reality was closer to $50 million.
The scandal goes deeper than fake AI claims. Builder.ai allegedly engaged in “round-tripping” transactions with Indian social media company VerSe Innovation, where both companies billed each other $180 million for non-existent services like “AI licensing” and “market research” between 2021 and 2024. This financial engineering allowed them to artificially boost their reported revenues while providing zero actual value.
The Anatomy of AI Washing at Scale
Builder.ai’s deception wasn’t subtle. They marketed themselves as an AI-powered no-code platform where Natasha, their supposedly sophisticated AI assistant, would automate the entire app development process. The reality was a massive human workforce in India doing the manual coding while the company told clients and investors that 80% of the work was AI-driven.
This is AI washing at its most brazen. While many companies stretch their AI capabilities for marketing purposes, Builder.ai created an entirely fictional AI product. The 700 engineers weren’t supplementing AI tools or handling edge cases – they were doing all the work while the company pretended artificial intelligence was the driving force. This situation underscores a critical point I often make: the value of an AI tool is in its actual output, not in its branding. If a company claims AI does the heavy lifting, but it’s human effort, that’s a misrepresentation of value.
What makes this particularly egregious is how long they maintained the charade. This wasn’t a brief misleading marketing campaign; it was years of systematic deception across multiple funding rounds. Investors handed over hundreds of millions based on AI capabilities that simply didn’t exist. This case highlights the crucial need for deeper technical due diligence, especially in a field as complex and rapidly moving as AI. It’s not enough to take a company’s word for it; investors must verify the claims themselves, or hire experts who can.
Financial Engineering Meets AI Hype
The round-tripping scheme with VerSe Innovation shows how AI hype can be weaponized for financial fraud. Both companies artificially inflated their revenues by billing each other for imaginary AI services. This created the appearance of booming business in the AI sector when actual value creation was minimal to nonexistent.
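To see why round-tripping works, here is a minimal sketch with made-up numbers (not the actual contract amounts): each firm books the other’s invoice as revenue, so both top lines swell even though the offsetting payments net to zero.

```python
# Illustrative sketch of a round-tripping scheme. All figures are
# hypothetical; they are not Builder.ai's or VerSe's actual numbers.

def round_trip(real_revenue_a, real_revenue_b, swap_amount):
    """Each firm invoices the other for `swap_amount` of fictitious
    services and books the incoming payment as sales."""
    reported_a = real_revenue_a + swap_amount
    reported_b = real_revenue_b + swap_amount
    net_cash_exchanged = swap_amount - swap_amount  # the payments cancel out
    return reported_a, reported_b, net_cash_exchanged

# Two firms with $50M of genuine revenue each swap $90M of invoices:
a, b, net = round_trip(50, 50, 90)
print(a, b, net)  # 140 140 0 -- both top lines nearly triple, zero value created
```

The scheme only survives until someone matches reported revenue against actual deliverables and cash flow, which is exactly what an audit is designed to do.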
VerSe’s co-founder has dismissed these allegations as baseless, but the pattern fits perfectly with Builder.ai’s broader strategy of creating the appearance of AI success without the underlying substance. When your entire business model depends on convincing people you have AI capabilities you don’t possess, financial engineering becomes a natural extension of the deception.
The bankruptcy filing revealed even more chaos. Among Builder.ai’s creditors are corporate spies – a detail that highlights just how secretive and dysfunctional the company’s operations had become. When your creditor list includes actual espionage professionals, you know things have gone completely off the rails.
The Human Cost Behind the AI Facade
While Builder.ai executives were crafting elaborate AI narratives for investors, approximately 1,000 employees lost their jobs when the company collapsed. The 700 engineers in India who were actually building the apps – the real talent behind Builder.ai’s output – became unwitting participants in a massive fraud scheme.
This reveals one of the most perverse aspects of AI washing. Companies can exploit skilled human labor while simultaneously claiming that artificial intelligence is doing the work. The engineers weren’t getting credit for their capabilities, and clients weren’t getting the AI innovation they were promised. Everyone except the executives perpetrating the fraud got shortchanged. This is a classic example of how businesses can treat AI as a magic solution without understanding workflow automation, ultimately harming both employees and customers.
The irony is thick. Builder.ai marketed itself as revolutionizing app development through AI, but their actual competitive advantage was access to talented engineers willing to work for reasonable rates. Instead of building a legitimate business around that reality, they chose deception. This is a poor business decision. As I often state, companies that reinvest their time savings from AI into building better services are the winners. Builder.ai chose to build a house of cards instead of a sustainable business.
Investor Due Diligence Failure
How did sophisticated investors like Microsoft and Qatar’s sovereign wealth fund get fooled by such obvious fraud? The Builder.ai scandal exposes serious gaps in due diligence when it comes to AI claims. Investors got caught up in the hype and failed to verify the fundamental technical capabilities they were supposedly investing in.
This isn’t unique to Builder.ai. The entire AI investment space has been plagued by inflated claims and insufficient technical verification. When everyone wants exposure to AI and deals are moving quickly, basic verification of AI capabilities often gets skipped. The result is funding flowing to companies that can talk convincingly about AI rather than those actually building it. This is why I advocate that most businesses use off-the-shelf models rather than build proprietary ones: the dedicated model providers will out-execute in-house efforts anyway. The same logic applies to investors, who should look for proven models and applications rather than untested, unverified claims.
The fact that Builder.ai maintained this deception across multiple funding rounds suggests investors never demanded proof of their AI claims. They were content with demos, marketing materials, and revenue numbers without digging into how those results were actually achieved. That’s a systematic failure that extends far beyond this single company.
Regulatory Response and Legal Consequences
US prosecutors have launched investigations into Builder.ai and are demanding access to company records and customer data. This represents one of the most significant legal responses to AI fraud we’ve seen, and it could set important precedents for how authorities handle similar cases.
The challenge for prosecutors will be proving intent to defraud versus mere marketing exaggeration. Builder.ai could argue they were building toward AI capabilities even if they hadn’t achieved them yet. However, the systematic nature of the deception, combined with the round-tripping financial schemes, makes this look like clear-cut fraud rather than optimistic marketing.
This case could significantly impact how AI startups present their capabilities to investors. If prosecutors secure convictions against Builder.ai’s leadership, it will send a strong signal that AI washing has real legal consequences. That would be a healthy development for the entire industry. I believe very little regulation is needed in the AI space, and what we have is sufficient, but fraud is fraud, and it should be treated as such.
Industry-Wide AI Washing Problem
Builder.ai represents an extreme case, but AI washing is endemic across the startup ecosystem. Companies routinely rebrand traditional automation as “AI-powered” to attract investment and customers. Most cases are less egregious than Builder.ai’s outright fabrication, but the underlying dynamic is the same.
The problem stems from a fundamental information asymmetry. Most investors and customers lack the technical expertise to verify AI claims, creating opportunities for companies to oversell their capabilities. When funding and valuations depend heavily on AI positioning, the incentives for exaggeration become overwhelming. This is why I consistently recommend that businesses focus on practical results rather than AI branding when evaluating tools and services. A solution that works well and delivers value is more important than one that claims to use cutting-edge AI. Builder.ai’s actual output quality might have been fine – the problem was the fraudulent claims about how it was produced.
This situation also raises questions about the effectiveness of AI benchmarks. I often point out that benchmarks do not accurately reflect how useful AI models are in real-world applications. In Builder.ai’s case, there was no real AI to benchmark, but the general principle holds: relying solely on reported metrics or vague claims, whether from a company or a benchmark, can be misleading. Real-world performance and tangible value are what matter.
The Broader Implications for AI Investment
The Builder.ai scandal represents the largest AI startup collapse since the ChatGPT investment boom began, and it could mark a turning point for investor attitudes toward AI claims. The easy money phase of AI investing may be ending as investors become more skeptical of unverified AI capabilities.
This could actually benefit the AI industry in the long run. Companies with genuine AI capabilities will face less competition from fraudsters and exaggerators. Investment capital will flow toward demonstrable results rather than convincing presentations. The marketplace for AI tools and services will become more honest and efficient.
However, the immediate impact will likely be increased scrutiny and higher requirements for proof of AI capabilities. Legitimate AI companies may find fundraising more challenging as investors become more cautious. This represents a natural correction after a period of excessive hype and insufficient verification.
Builder.ai’s collapse also highlights the importance of regulatory oversight in the AI space. While heavy-handed regulation can stifle innovation, basic fraud prevention and truth-in-advertising enforcement are essential for maintaining market integrity. The AI industry needs enough regulatory framework to prevent obvious scams like Builder.ai without hampering legitimate development. I stand by my position that regulatory capture is a bad idea, and we do not want to copy Europe’s heavy-handed approach. However, clear cases of financial fraud and deceptive practices must be addressed.
Lessons for AI Evaluation
The Builder.ai case offers clear lessons for anyone evaluating AI tools or services. First, demand concrete evidence of AI capabilities rather than accepting marketing claims. Ask for technical details, independent verification, or trial periods that let you assess actual performance. As I’ve said, AI-generated content can be better than human-written content, excluding the best writers, but only if the AI is actually doing the work and is guided by a strong framework. Builder.ai had no such AI.
Second, be skeptical of companies that claim dramatic cost savings or capability improvements through AI without detailed explanations of how their technology works. Genuine AI advances usually come with technical trade-offs and limitations that honest companies will discuss openly.
Third, consider whether claimed AI capabilities make technical sense given the current state of the art. Builder.ai’s claims about AI handling 80% of app development should have raised immediate red flags for anyone familiar with current AI limitations in code generation and complex software development. There are not many use cases for AI agents in business processes, and for most tasks, workflows are generally better. What Builder.ai claimed was far beyond current AI capabilities.
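One cheap plausibility test, sketched below with hypothetical numbers, is to compare a vendor’s claimed automation share against observed delivery times: if AI truly handled 80% of a build, turnaround should be dramatically faster than a human-only team’s baseline.

```python
# Hypothetical due-diligence check: does a vendor's claimed AI share
# square with how long deliverables actually take? Numbers are
# illustrative assumptions, not measured Builder.ai data.

def implied_ai_share(observed_hours, human_only_hours):
    """Rough automation share implied by delivery speed, assuming
    AI-handled work is near-instant and the remaining time scales
    with the human share of the work."""
    return 1 - observed_hours / human_only_hours

# A build takes 450 hours against a ~500-hour human-only baseline:
share = implied_ai_share(450, 500)
print(f"{share:.0%}")  # about 10% -- nowhere near a claimed 80%
```

The model is deliberately crude, but that is the point: even a back-of-the-envelope check like this would have flagged an 80% automation claim backed by human-team timescales.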
Most importantly, focus on results rather than methods. If a service delivers good outcomes at reasonable prices, the underlying technology matters less than the value provided. Builder.ai’s clients might have been satisfied with the apps they received – the fraud was in misrepresenting how those apps were created.
The Builder.ai scandal will likely be remembered as a watershed moment that ended the most credulous phase of AI investing. For an industry built on the promise of artificial intelligence, that’s probably a good thing. Real AI innovation will benefit from higher standards and more honest evaluation of capabilities and limitations.
The company now owes $85 million to Amazon and $30 million to Microsoft for cloud services, with lenders seizing $37 million from company accounts. This represents one of the most spectacular AI startup failures on record, with consequences extending far beyond the immediate financial losses to employees, investors, and customers who trusted in Builder.ai’s fabricated AI capabilities.