AI Bubble
I was at a coffee shop in Plaka last week when a friend asked me if I thought the AI bubble was about to burst. While I was sipping my coffee, another person at the next table was literally pitching their “AI-powered productivity app” to someone on Zoom. It was the fifth AI app pitch I’d overheard that week. I wasn’t even looking for them.
That conversation stuck with me because it crystallizes something I’ve been wrestling with for the past year or so. The AI space right now feels like standing in two different realities simultaneously. On one hand, the technology is genuinely transformative. On the other hand, the economics powering the ecosystem feel… precarious. And I mean that both as someone who’s built a product using AI and as someone who’s watched enough business cycles to know when things don’t quite add up.
Let me be clear upfront: I’m not here to declare whether the bubble will or won’t burst. That’s not the interesting question anyway. The real questions are more nuanced, and they matter a lot if you’re building something, investing in something, or trying to figure out whether the AI hype affects your life in meaningful ways.
The Thing About Bubbles
First, let’s separate two conversations that people often conflate, and this distinction is crucial.
There’s the technology bubble and there’s the economic bubble. They’re related but they’re not the same thing.
The technology part is real. I know this because I’ve been using AI tools like Claude to build TAGiT, my Chrome extension project. Over three months, I wrote 15,000+ lines of TypeScript without being a developer. That wouldn’t have been possible three years ago. The capability is genuine. The acceleration is real. When I hit a bug or needed to understand architecture concepts, Claude helped me learn and iterate. That’s not hype. That’s a tool that legitimately changed what was possible for someone like me. And this is only one branch of the tree.
But here’s where it gets complicated. The economic bubble is a different animal entirely.
Right now, AI companies are valued at astronomical multiples based on theoretical future value. OpenAI’s recent funding round valued the company in the tens of billions. Anthropic, xAI, and a dozen other labs are capturing capital at scales that would have seemed insane just a few years ago. The question nobody’s really answering is simple, though: at what point do these businesses need to be actually profitable?
Not growing fast. Actually profitable.
That matters because venture capital has this interesting property. It can fund losses for a while, but not forever. When you’re burning millions every month on compute costs, you eventually need revenue that covers that burn. And we’re starting to see investors ask harder questions about when that happens.
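The burn-versus-revenue logic above is just arithmetic, and it can be sketched in a few lines. All the figures here are made-up placeholders for illustration, not data about any real company:

```python
# Illustrative runway math: how long can a company fund losses before
# it needs another raise or real revenue? Numbers are hypothetical.

def runway_months(cash_on_hand: float, monthly_burn: float, monthly_revenue: float) -> float:
    """Months until the cash runs out at the current net burn rate."""
    net_burn = monthly_burn - monthly_revenue
    if net_burn <= 0:
        return float("inf")  # revenue already covers the burn: no runway limit
    return cash_on_hand / net_burn

# A lab with $500M raised, $40M/month in compute and payroll,
# and $10M/month in revenue:
months = runway_months(500e6, 40e6, 10e6)
print(round(months, 1))  # about 16.7 months before the next raise
```

The uncomfortable part is the last line: at these burn rates, even a nine-figure raise buys surprisingly little time unless revenue grows fast.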

The Hype vs. Reality Gap
Here’s what I’ve observed in the last 18 months. In late 2023, everyone thought AI was about to solve everything. Sales teams were going to be replaced. Content creators were finished. Programmers were obsolete. We’d all have personal AI assistants that handled our entire lives.
Then reality started whispering back.
Yes, AI is useful. Incredibly useful in specific contexts. But the problems that are actually profitable to solve are far narrower than the hype suggested. And the companies that solve them usually aren’t sexy AI startups. They’re boring infrastructure companies like Nvidia that make the chips. Or they’re existing companies that quietly integrated AI into their workflows.
This is actually the pattern I’ve seen with other tech cycles too. The hype inflates around application-layer innovations, but the real winners are usually infrastructure plays. In the early internet days, everyone wanted to be Pets.com or a pure e-commerce play. The money was made by Cisco and Intel and the companies building the pipes.
I think we’re seeing that pattern repeat, and it’s important to notice.
Two Different Markets Emerging
What’s becoming clear is that there’s a divergence happening. On one side, you have companies solving specific, defined problems with AI. Real customers, real revenue, real profitability paths. These are usually boring compared to the hype machine. A design tool with better AI features. A customer service system that actually reduces costs. A sales assistant that actually closes deals.
These companies will probably survive and thrive, especially if they’re bootstrapped or have sensible unit economics. They’re not betting the company on AI. They’re using AI to make their existing business better.
On the other side, you have companies that exist primarily because “AI is happening.” They’ve raised massive funding rounds. They have impressive demo videos. They’ve built something technically interesting. But the path from technology to sustained business? Less clear. These are the ones where the bubble dynamics are most apparent.
Now, here’s what makes this tricky. Right now, differentiation in AI is hard. Thousands of new AI apps launch every week. I’m not exaggerating. When the technology is essentially available to anyone (you can use Claude, ChatGPT, Gemini, open-source models), what separates a successful app from one that gets crushed? Usually it’s execution, distribution, marketing, and finding a specific problem narrow enough that a general tool doesn’t solve it perfectly.
That’s why I think the next two to three years will see a massive correction. Not because AI isn’t real, but because 99% of these apps are trying to do the same things as 1,000 other apps. The AI noise is deafening.
What Happens When the Noise Clears?
But here’s where I think it gets interesting, and this is actually reason for optimism if you squint the right way.
When companies start failing (and many will), something productive happens. The capital that was funding 47 slightly-different AI chatbots gets reallocated. Some goes to companies that actually make money. Some goes to infrastructure. Some of it just leaves the space entirely.
The survivors won’t be the ones with the best funding rounds or the most hype. They’ll be the ones with real customers paying real money for something they can’t get elsewhere. That’s actually a healthier market than what we have now.
After that correction, you’ll see aggregation. The winners will get bigger. Probably the big tech companies (Google, Microsoft, Apple) will buy a bunch of specialized AI tools and integrate them. That sounds bad at first, but from a market perspective, it’s necessary. You can’t have 50,000 AI apps. You need like 50 doing the job.
And then something interesting happens. Once the dust settles and the consolidation happens, society actually gets access to better tools. The specialized solutions that survived get folded into bigger players and integrated into things people use every day. The infrastructure gets cheaper and better. The business models make sense.
That’s not pessimism. That’s just how technology markets actually work. There’s hype. There’s irrational exuberance. There’s a correction. Then you emerge with something genuinely useful, just at lower valuations and with fewer millionaires made along the way.
The TAGiT Lesson
I built TAGiT because I saw a specific problem: video learners spend hours watching but forget most of it. That’s a real problem. The solution I built isn’t revolutionary, but it’s genuinely useful for the people who use it.
But here’s what matters: I didn’t build it because I thought AI was going to change everything. I built it because I had an actual problem to solve. That’s the healthier mindset. If the bubble bursts tomorrow, TAGiT might become less relevant, but that’s okay. The thing itself solved a real problem, for me at least.
I think that’s the filter that matters. Not “is this an AI company?” but “does this solve a specific problem for specific customers in a way that makes economic sense?”
The Skeptics Might Be Right (For a Bit)
I should be honest here. The skeptics pointing out bubble dynamics aren’t crazy. There are genuine warning signs.
Companies with billions in valuation and no clear path to profitability. Enterprise AI implementations that promise massive ROI but deliver confusing results. The same venture capital pattern we saw in 2000 with the dot-com bubble, where it felt more important to be “in the space” than to have a working business model. Retail investors treating AI stocks like lottery tickets. Analyst reports from AI companies claiming AGI is two, three, five years away.
These are real dynamics. They exist. And yes, if you’re invested in some of these companies at current valuations, you might be in for a rough ride in the coming months.
But here’s the thing. Even if there’s a correction, AI isn’t going away. The technology is too useful. The capability gains are too real. What will change is the valuations, the number of companies, and which specific bets pan out.
What Actually Survives
The companies that survive won’t be the ones with the best marketing or the biggest funding rounds. They’ll be the ones that figure out how to:
Build something specific enough to be valuable, general enough to have a real market.
Make money doing it, not just capture mindshare.
Actually reduce costs or increase revenue for customers in measurable ways.
Maintain unit economics that make sense long-term.
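The “unit economics that make sense” test can also be put in concrete terms. Here’s a minimal sketch of the kind of sanity check I mean, using invented numbers for a hypothetical subscription product, not data from any real company:

```python
# Hypothetical unit-economics sanity check: does each customer pay for itself?
# All prices and costs are illustrative assumptions.

def contribution_margin(price: float, compute_cost: float, support_cost: float) -> float:
    """Monthly profit per customer after variable costs."""
    return price - compute_cost - support_cost

def payback_months(acquisition_cost: float, margin: float) -> float:
    """Months of margin needed to recover the cost of acquiring the customer."""
    return acquisition_cost / margin if margin > 0 else float("inf")

# A $20/month subscription where inference costs $6 and support $2 per user,
# and each signup costs $60 in marketing:
margin = contribution_margin(20.0, 6.0, 2.0)  # $12/month per customer
print(payback_months(60.0, margin))           # 5.0 months to break even
```

If that payback number is longer than customers actually stick around, the business is buying revenue at a loss no matter how impressive the demo is.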
It’s not exciting. It’s not going to make headlines about disrupting industries. But it’s what actually lasts.
And honestly, I think that’s healthy. We’ve had enough tech cycles where the winner is whoever raised the most money and convinced enough people to believe. The next wave should be companies that actually solve problems.
Where We’re Probably Headed
My guess, and it’s just a guess, is that we see a significant correction in the next year. Maybe 30 to 50 percent of current AI companies disappear or get absorbed. The valuations become less insane. The hype cycle deflates.
But the technology doesn’t go anywhere. We don’t have an “AI winter” where development stops. We just have a winter for the speculative part of the ecosystem. The infrastructure improves. The tools get cheaper. The useful applications become clearer.
Then, maybe three to five years from now, we have this moment where AI is just… normal. Part of everyday tools. Not this separate category of “AI companies” but just companies that use AI the same way they now use databases or APIs.
That’s not pessimism. That’s maturation.
Okay, I want to end this with something you’ll see below. I’m going to give you two possible futures. One optimistic. One considerably less so.
The fun part is, they’re both plausible. The difference between them isn’t really about whether AI works or doesn’t work. It’s about whether we collectively make sane decisions about how we deploy capital and what we choose to believe.
Which version do we get?
Option A: The Optimistic Path
The correction happens around mid-2026. It’s painful but not catastrophic. Companies with real unit economics survive and actually get stronger because the hype dies down and investors finally reward profitability. The infrastructure layer (chips, compute, foundational models) keeps improving and gets cheaper. Specialized AI solutions become genuinely useful and boring, like Excel was for spreadsheets. Big tech companies integrate AI into their existing products in ways that actually help people. Five years from now, we don’t talk about “AI companies” anymore, we just talk about companies that happen to use AI. Society actually gets more productive. Salaries don’t collapse because the jobs that matter (creative, strategic, human-facing) are ones AI makes better, not obsolete. We avoided the hype trap, learned the lesson, and ended up with better technology at healthier economics. But above all, it gives society the chance to catch up and ease the curve after this explosive wave of exponential advancement.
Option B: The Controlled Burn
Here’s the uncomfortable possibility nobody wants to say out loud: the bubble doesn’t burst because they don’t let it. Think about it. The US is in a tech cold war with China. AI supremacy isn’t just about market cap, it’s about who controls the infrastructure of the next century. You really think they’re going to let market forces slow this down? If economics mattered, this bubble should have popped already. The valuations don’t make sense. The burn rates are insane. The path to profitability is foggy at best. But it keeps going. And maybe that’s not irrational exuberance. Maybe it’s strategic necessity.
The bubble doesn’t burst until the right players are positioned. Until AGI or something close enough is achieved. Until the infrastructure is locked in and controlled by a handful of companies that answer to a handful of governments. Then, and only then, does the correction happen. And when it does? It’s not a market correction. It’s a controlled demolition.
The wealth transfer that happens makes COVID look like a practice run. Entire industries restructured overnight. Jobs that seemed safe suddenly aren’t. The people who own the AI infrastructure own the future. Everyone else is renting access to it.
You end up with a two-gear society. The ones on the AI train, who either own equity in these companies, work for them, or have skills that AI amplifies. And everyone else, who can’t catch up because the ladder got pulled up behind the early movers. No slow adaptation. No gradual transition. Just a hard break between those who captured value during the bubble and those who didn’t. And because it’s framed as “technological progress,” it’s hard to argue against without sounding like you’re against innovation itself. The technology keeps advancing. Society doesn’t catch up. The curve doesn’t ease.
We just accept a new normal where power is more concentrated, opportunity is more gatekept, and the people who called it a bubble early were technically right but pragmatically irrelevant. Maybe I’m wrong. Maybe that’s too cynical. But if you’re paying attention to who’s funding what, who’s buying compute at scale, and which governments are treating AI like a national security priority… it doesn’t feel that far-fetched.
Both scenarios are possible. I don’t know which one we get. But I suspect we don’t get to choose as much as we think we do.
Still, here’s what matters at the individual level: If you’re building something, build something that solves a real problem for real people. Even in Option B’s world, genuine utility survives. If you’re investing, ask harder questions about when the company makes money, but also ask who’s funding them and why. If you’re using these tools, use them because they actually help, and maybe because getting good at them now matters if the ladder gets pulled up later. The technology is solid. The economics are wobbly. The power dynamics? Those are what nobody’s really talking about.