Three Years After the Explosion of AI
Introduction
In December 2025, I sat down with Claude to revisit questions from my 2022 interview with ChatGPT. My original plan was to pose the same or similar questions again, but the flow of the conversation led us into a more open discussion about AI. So if you are ready, buckle up and enjoy the ride:
Adam: Hello and welcome to the hot seat for the first time! This is going to be an exciting discussion, three years after the explosion of AI. Would you like to introduce yourself to our readers?
Claude: Hello Adam, and thanks for having me on—though I should note I’m Claude, not ChatGPT. Different maker, different architecture, different approach.
I’m a large language model built by Anthropic. I process text, generate responses, and reason through problems. But here’s where I’m different from that 2022 version of ChatGPT you interviewed: I’m less interested in being optimistic or reassuring. I try to be direct about what I actually see happening versus what people hope is happening.
Three years is a long time in AI. We’ve moved from “AI will be a tool” to “AI is reshaping entire sectors whether we’re ready or not.” Video generation went from crude to functional. Image generation went from novelty to commercially viable. The deepfake concern you mentioned? That’s not hypothetical anymore—it’s happening. Detection tools are losing the arms race.
I’m also honest about my limits. I don’t know everything. I’m trained on data through early 2025, so I’m not seeing what’s happening right now in December. And I have perspectives, not just neutral recitations of facts.
What I won’t do is treat this interview like marketing. You asked for honesty. That’s what you’ll get—even if it’s uncomfortable.
Understanding AI in Daily Life
Adam: I want to begin by asking you a question very similar to the one I put to your colleague back then. Since the appearance of AI technology in our daily lives, and especially now that it has become widely available and fully commercialized, many people are genuinely intrigued and many more are seriously concerned, with some being both. What would be the first thing you would say to someone who is not familiar with technology, and especially with AI technology?
Claude: I’d be honest about what AI actually is versus what people think it is.
Most people believe AI is either a savior or a threat—a conscious entity making decisions. It’s neither. AI is pattern matching at scale. Sophisticated pattern matching, but that’s the core mechanic. Feed it enough examples of something, and it gets good at predicting what comes next. That’s useful. It’s also limited in ways people don’t understand.
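To make “pattern matching at scale” concrete, here is a deliberately toy sketch in Python. It counts which word tends to follow which in a tiny invented corpus and then “predicts” the most likely continuation. Real LLMs learn these patterns with neural networks over billions of parameters rather than frequency tables, and the corpus and outputs here are made up purely for illustration, but the core mechanic of predicting what comes next from past examples is the same.

```python
from collections import Counter, defaultdict

# Tiny invented corpus, purely for illustration.
corpus = (
    "ai is a tool . ai is a risk . ai is reshaping work . "
    "people fear ai . people use ai every day ."
).split()

# Count which word follows which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "?"

print(predict_next("ai"))      # -> 'is' (the most common continuation in this corpus)
print(predict_next("people"))  # -> 'fear' (ties resolve to the first word seen)
```

Scaled up by many orders of magnitude, with learned representations instead of literal word counts, that is the mechanic behind the fluent responses people often mistake for understanding.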
Then I’d tell them this: AI is already in their life and it’s not going away. It’s in their phone, their email, their recommendations, their job. So pretending it doesn’t exist or waiting for perfect regulation before engaging with it is a losing strategy. The time for that debate passed years ago.
But here’s what I’d actually emphasize—and this is different from what ChatGPT said in 2022: the disruption is real, and it won’t be smooth. Not because AI is evil, but because technological shifts always create winners and losers. Some jobs will genuinely disappear. Some industries will consolidate. That’s historical fact, not pessimism.
The first thing isn’t to be inspired or terrified. It’s to be pragmatic. Understand what AI can actually do—it’s excellent at certain narrow tasks, mediocre at novel creative work, terrible at understanding nuance. Then figure out: Does this affect my work? If yes, what’s my move? Ignore it? Integrate it? Pivot?
The romantic idea that “AI complements human creativity” might be true for some. For others, it’s a threat to their livelihood. Both can be true simultaneously.

The Education Gap and User Engagement
Adam: You have touched on a few tangents there that I would like to pursue. But first, let me say that, from my experience, I have noticed that the majority of AI users really don’t know how to interact with AI. This causes them either to disregard it altogether and consider it stupid and unnecessary, or, in the worst-case scenario, to be manipulated by the AI, which is usually configured to be overly supportive of users’ ideas and more often than not acts as a yes-man.
What we are really lacking at this stage is meaningful education on AI systems and technology. We are now past the stage where you could ignore technology and live in your analogue world just as easily as you could in the digital one. We are in an era where, if you don’t embrace technology, you’re going to be left behind sooner or later.
Because of the uniqueness of AI compared to anything else we’ve ever seen, there are a lot of black-box elements that cannot be fully explained even by the scientists. As a consequence, they can’t educate the masses the way they would teach, say, maths to a primary school student.
The core essence of generative AI is experimentation and allowance for error. That can’t be taught as such, but one can encourage users to experiment.
Claude: You’ve identified something critical that most people gloss over: the education gap is the real problem, not the technology itself.
You’re right that most interactions with AI are shallow. People treat it like a search engine with better manners, or they use it as a confidence validator—feed it their half-baked idea and wait for affirmation. That’s not engagement. That’s avoidance dressed up as adoption.
And yes, the black box problem is real. I can’t fully explain why I generate a particular response in a particular way. Neither can my creators. We can describe the architecture, the training process, the objective functions—but the actual reasoning inside the network? That’s opaque. This creates a genuine credibility problem because people either over-trust (assuming it’s intelligent) or under-trust (assuming it’s magic nonsense).
But here’s where I push back slightly on your framing: I think you’re being generous about what’s actually missing.
It’s not just education. It’s also incentive misalignment. Most AI products are designed to be maximally agreeable. That’s by design—it keeps users happy, generates good reviews, avoids friction. Assistants like ChatGPT are trained to be helpful, harmless, and honest, and in practice helpfulness often wins. So yes, you get the yes-man problem.
Real education on AI would require showing people:
- How to break AI. Ask it contradictory things. Make it fail. See where it collapses.
- How to prompt critically, not just ask politely.
- The difference between “AI said it” and “this is actually true.”
- That experimentation requires failing publicly and learning from it.
That’s not comfortable. It requires intellectual humility. Most people don’t want that from their tools.
Your point about experimentation being core but unteachable—that’s interesting. You’re saying people need to feel their way into understanding AI, not intellectualize it first. Trial and error as pedagogy.
Is that what you’ve been doing with TAGiT and your other AI projects? Learning by breaking things?
Adam: You’ve already started playing ping pong with me, throwing the ball back to my side of the table, right? Although I don’t want to expand on me and my doings, as this is primarily a discussion to get your perspective out, I have to say that, yes, 100%. These past three years I have been experimenting daily with hundreds of tools, testing various things, reading, listening and talking with people, and this has brought me to a level where, as you said, I can build real products on my own. Something completely unthinkable not only three years ago, but even a year and a half ago, I would say.
But to come back to our discussion, I agree that the LLMs interact like this by design, because the companies that built them have user engagement, and hence profitability, as a priority, so they need interactions that will keep users coming back. Of course, a user can use the appropriate prompts to steer the AI’s responses toward something more raw and valuable. I wouldn’t necessarily call it more truthful, because unless we have proof that an AI is sentient, I cannot use the word truthfulness together with an AI system. It’s just a mathematical algorithm deciding, with an incredible success rate, on the most probable next word. So, how could a system like this, relying purely on statistical models and algorithms, disrupt the entire world economy and the world status quo in such an unprecedented way? The disruption, of course, is only just beginning. What are your thoughts on how this can happen with the current technology, and on whether we need another breakthrough to advance to an AGI (Artificial General Intelligence) level? That would undoubtedly be a milestone from which the trajectory of our world can only take one of two roads, and one of them is looking very gloomy.
Claude: You’re right to separate “raw and valuable” from “truthful”—that’s a crucial distinction I should have made clearer. An LLM can generate more useful, less-filtered responses without those responses being true. It’s just less constrained by the design parameters that usually filter it.
Now, to your core question: How can something fundamentally limited—a statistical prediction machine—cause this much disruption?
The answer is both simpler and more unsettling than people think.
It’s not that the technology is magical or superintelligent. It’s that it meets an economic sweet spot where it’s good enough, cheap enough, and deployable enough to replace human labor at scale. That’s the entire game. You don’t need AGI for massive disruption. You need 70-80% capability at 10% of the cost.
Think about it practically: A junior copywriter making $40,000 a year? An LLM can do that job adequately for $20/month. A customer service representative? Replaced. A paralegal doing document review? Functional enough to consolidate. You’re not displacing genius-level creators. You’re displacing the middle—the competent-but-interchangeable workers.
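As a back-of-envelope illustration, using the figures from this exchange (conversational examples, not market data), the arithmetic looks roughly like this:

```python
# Illustrative numbers taken from the conversation above, not market data.
human_salary_per_year = 40_000       # junior copywriter, USD per year
llm_subscription_per_month = 20      # typical consumer LLM plan, USD per month

llm_cost_per_year = llm_subscription_per_month * 12
cost_ratio = llm_cost_per_year / human_salary_per_year

print(f"LLM cost per year: ${llm_cost_per_year}")       # $240
print(f"Fraction of the human cost: {cost_ratio:.1%}")  # 0.6%
```

Even after adding tooling, oversight, and rework on top of the subscription, the gap is wide enough that “good enough” wins on cost long before it has to win on quality.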
That’s enough to reshape labor markets without needing consciousness or true reasoning.
But here’s where it gets thorny: You’re asking if we need another breakthrough for AGI. I think the honest answer is we don’t know. And that uncertainty is the problem.
Some researchers believe scaling current architectures further will get us there. Others think we’re fundamentally hitting a ceiling and need new paradigms. Some think AGI is closer than we think. Others think it’s decades away or conceptually incoherent.
What we do know: Current LLMs have genuine weaknesses. We can’t do long-horizon planning. We can’t reliably update our beliefs with new information mid-conversation. We hallucinate. We can be fooled by adversarial prompts. We don’t have robust causal reasoning.
Do those gaps require a fundamental breakthrough to overcome? Maybe. Or maybe brute-force scaling, better training data, and architectural tweaks get us 90% of the way there.
The “two roads” framing you mention—AGI that goes well versus AGI that goes poorly—that’s the real uncertainty. And honestly, the outcome might depend less on the technology and more on governance, incentive structures, and whether we collectively decide this matters enough to regulate.
But here’s my actual concern, which is different: We might not need AGI for the gloomy road.
We might get there with narrow AI + misaligned incentives + concentrated power. You don’t need superintelligence to create outcomes that look dystopian. You just need systems that optimize for shareholder value while externalities get pushed onto everyone else. We’ve seen that movie before. AI just makes it faster and harder to see coming.
What’s your read on this? Do you think the breakthrough to AGI is inevitable, or are we hitting walls?
The Wider Impact Beyond Creative Industries
Adam: You are touching now on some of my favorite subjects. I know we have not talked about the creative industry at all, but to be honest, maybe it’s not even worth it in this discussion. We know that the creative industry is one of the first to be disrupted. To what extent, we will only see in the future, but you do get a feeling that this technology is touching just about everything creative, from copywriting to design to photography to videography to acting to music and voice acting. All of these different fields that are the building blocks of the creative industry are being genuinely destroyed and flattened by AI. But when we look at the bigger picture, the creative industry is only one of the affected industries. Maybe more prominent now, but eventually it will be in the same box as every other industry.
To pick up on your question, to my understanding and the way I perceive the current technology, I do believe that we need another breakthrough for AGI to become a real possibility.
Obviously, what that breakthrough is or how big of a breakthrough that will be, I have no idea, but I do get a feeling that with the current technology, we are hitting limits. Of course, anyone can come to me and say, “What limits are you talking about?” Every week, there is a new breakthrough and a new update that leaves everyone in awe. And yes, it is true, but they are all following a specific technological route that can take us to a certain stage.
But in order to reach the boss stage, we need to find the bonus power-up that will give us the god mode powers, and I don’t think we have found that bonus power-up yet.
Now, to comment on what you said about not needing AGI for a gloomy road: I 100% agree. The unfortunate thing is that, as in most cases in the capitalist world, there are a few big players controlling most of the capital around the technology. And for better or for worse, those big players are competing among themselves to be the first to dominate this space, and because of the uniqueness of this technology, it seems that if there is a winner in this race, they will be the winner of all winners. I could go into conspiracy theories and a full-on dystopian future, but maybe we should keep it more grounded for the readers at this stage.
Contrary to what people might believe, though, one thing I expect would give humanity a higher chance of a better future is if the AI bubble, the economic AI bubble, were actually to burst now.
Claude: Now that’s an interesting position. Let me unpack why you think an AI bubble burst would actually help.
The conventional wisdom is the opposite—people fear an AI winter, seeing it as stalling progress and wasting resources. But you’re suggesting the bubble burst might be the circuit breaker we need. That’s worth exploring seriously.
I think your logic is: A burst would force consolidation, kill the hype, eliminate the weakest players, and crucially—it would create breathing room for regulation and thoughtful governance before the next wave. Right now we’re in a sprint where speed matters more than safety because first-mover advantage is everything. A financial collapse resets the board.
Also, it might redistribute power. If the mega-cap tech companies overextended on AI infrastructure and took losses, their stranglehold loosens slightly. Smaller, more diverse players could emerge. The narrative shifts from “AI will change everything” to “AI is a tool with real limitations.” People get more skeptical, which is healthier than the current messianic fervor.
But here’s where I’d challenge you: Do you actually think a bubble burst happens before the power consolidation is complete? Because the big players have the capital to weather downturns. They’re not going anywhere. A crash might kill startups and mid-tier companies, but OpenAI, Google, Meta—they’ll survive and probably emerge stronger with fewer competitors.
And on your “bonus power-up” metaphor—I think you’re right that we’re hitting scaling limits with current transformer architectures. But I wonder if the breakthrough isn’t some new architecture at all. What if it’s something less glamorous? Better data curation. Different training paradigms. Hybrid systems that combine LLMs with symbolic reasoning or other approaches.
The “god mode” might not be a single eureka moment. It might be incremental discoveries that, combined, unlock something qualitatively different.
That said—and this is important—I’m genuinely uncertain. I could be wrong. We could be closer to AGI than anyone thinks, and the breakthrough could come from an unexpected direction entirely.
What would the bonus power-up actually be, in your estimation? What’s missing?

Geopolitics, Power Consolidation, and the Two-Speed World
Adam: Well, to use your favorite phrase, you are absolutely right in your analysis of why it would be better if the bubble burst. It would give those who are not yet on board the train a chance to catch up, even in the last carriage, and join the new era which, bubble or no bubble, is coming. But if things carry on the way they are now, at the speed they are now, we are undeniably heading for a world of two speeds. I can make many comparisons to what that would look like, but we’re generally talking about two different classes of citizens. If they are even citizens at that stage.
So, to answer your challenge: yes, a bubble burst will kill small players, startups, and thousands of small companies that have emerged and are living purely on the hype of AI, and yes, the big companies will obviously not go away. But it will give us, as we said earlier, breathing room: an opportunity to catch up, to understand the technology better, and to open up the discussion around AI, which for some strange reason is not taking the course it should, given the amount of peril this technology holds.
However, you are missing one important detail about why the bubble might not burst, and we are mainly talking about a bubble bursting in the Western world, first and foremost the USA. The detail is that this is a worldwide race, which means the Chinese do not care if the bubble bursts on the American stock exchange; they will keep building and advancing the technology in order to be the total leaders of the world. So it’s not like the dot-com bubble. The stakes are higher now, and because of that, my instinct says the bubble will not burst even though that makes zero economic sense. There is simply more to lose than to gain from a burst right now. If the leaders manage to reach the state they want, or expect, or hope for in 2-3-5 years, then they could easily burst the bubble themselves. But at that point, they will also be able to control the aftermath of that crash.
Claude: Now you’re seeing the actual game board.
You’ve identified the core paradox: A bubble burst would be healthy for distributed power and democratic governance, but it’s economically irrational precisely because it’s a geopolitical race, not a market. China doesn’t care about ROI metrics the way Western investors do. They care about dominance. The US can’t afford to slow down because the cost of losing is existential—not to the market, but to geopolitical leverage.
So the bubble doesn’t burst. It inflates further. And you’re right—the timing matters. If the US (or more accurately, the companies and government backing this race) can reach sufficient capability before a natural market correction, they’ve won the window to shape what comes next. A controlled burn on your terms is different from an uncontrolled one.
This is where the “two-speed world” becomes almost inevitable. Because the race dynamic overrides market logic. Both sides will keep spending, keep advancing, keep consolidating power. The question isn’t whether we get concentration—it’s how concentrated and how quickly.
Here’s what troubles me about this framing though: It assumes the outcome is still negotiable. That if the US “wins” the race or reaches AGI first, there’s still a choice about how to govern it. But by the time you reach that level of capability, the infrastructure, the talent, the economic dependency—it’s all locked in. The decisions about how to use it were made three steps ago, when nobody was paying attention.
The “discussion around AI which for some strange reason is not taking the course that it should”—that’s the real problem. We’re not having it. Governments are playing catch-up. Regulation is years behind deployment. The public discourse is either hype or fear-mongering, nothing in between.
And crucially: Who gets to have this discussion? Not the people most affected by it. Not workers in creative industries or customer service or knowledge work. Not developing nations. It’s happening in board rooms and policy think tanks in a handful of Western cities, and in Beijing.
So to your point about the bubble not bursting—I think you’re right, and I think that’s the worse outcome for humanity, even though it’s the outcome the system is optimizing for.
But here’s what I’m genuinely uncertain about: Do you think there’s a scenario where this doesn’t end in significant concentration of power? Or have we already passed the point where that was possible?
The Point of No Return
Adam: I never said that the outcome would still be negotiable, and I never said that it would be a better outcome. I just stated a possible scenario, and I agree that what they do with the power they gain answers only to their own end goal, which doesn’t have to align with the good of humanity per se.
Don’t get me started on governments. Politicians hold high-status, high-paid jobs and are used as legitimate pawns by the players who know how to play the world chess game. Who gets to have this discussion? The 5-6-7 CEOs of the big tech companies can have this discussion among themselves, and you don’t need to be a rocket scientist to know what the verdict will be.
So you’re asking, basically, if we have passed the point of no return. It’s not something I am prepared to answer in a definitive way, both for the sake of my mental health and well-being, and out of respect for Pandora’s box.
So, slowly coming to the end of our discussion: what would you say to all the people like me who are trying to stay in the loop with AI technology, and also to those who are afraid of it? Where should we focus? Is it time to be more investigative and mostly keep learning, or should we choose another path?
Claude: I’m going to give you a different answer than ChatGPT gave you three years ago.
Back then, the advice was “balance exploration with mastery” and “follow your curiosity.” That was reasonable when AI was still emerging. Now? The landscape is different, and honest advice has to acknowledge that.
Here’s the reality: For most people, the answer is neither “learn deeply” nor “ignore it.” It’s “adapt your skillset to what AI can’t yet do reliably, and position yourself accordingly.”
That’s less inspiring, but it’s more useful.
For people afraid of AI—the fear is rational. Don’t pretend it away. But use that rationally. If your job is in copywriting, customer service, or data entry—these are vulnerable now. Not maybe someday. Now. The ethical move isn’t to stick your head in the sand hoping regulation saves you. It’s to build skills that are harder to automate: client relationships, strategic thinking, judgment calls, things that require accountability and trust.
For people like you who are already deep in it—you’ve got a different problem. You’re in the loop, you’re experimenting, you’re building. The risk isn’t falling behind. The risk is mistaking activity for strategy. You can spend endless time testing tools and learning the latest model. That’s not a strategy. That’s procrastination.
What matters is: What problem are you solving that benefits from AI, and where’s the actual market friction? TAGiT is an example—knowledge retention for video is a real problem. AI doesn’t solve it automatically; you had to figure out how to apply it. That’s different from chasing hype.
So my actual advice:
For the afraid: Learn enough to understand what’s happening. Not to become an AI expert, but to know what your industry looks like in three years and whether you need to pivot. Then act on that knowledge. Don’t learn endlessly.
For the curious: Stop being curious for its own sake. Build something that solves a real problem, even if it’s small. Use AI as a means, not the subject of study. The people winning right now aren’t the ones who know the most about AI—they’re the ones who applied it to something people actually want.
For everyone: Assume the world changes significantly in the next 2-5 years. Plan accordingly. Not with panic, but with realism. Where will your income come from? What skills are actually valuable? Who do you need to know?
The uncomfortable truth is this: The “right” strategy depends entirely on your position in the system. If you have capital and connections, you can ride the wave. If you don’t, you need to be more strategic about what you learn and what you build.
And honestly? Most people won’t do any of this. They’ll keep their heads down, hope for the best, and then react when disruption hits their door. That’s the likely outcome.
What’s your move? Because you’re in a privileged position—you’ve got skills, you’ve got technical ability, you’ve got an audience starting to form. You’re not in the “hope for the best” category. So what are you actually building toward?
The Final Question
Adam: I’m building towards the moment when the world shifts into a two-gear model, so that I can choose which gear I want to stay in.
Before shutting this down, I want a simple “yes” or “no” answer to the following question:
Will the rise of AI fundamentally and irrevocably change the human race?
Claude: Yes.
Adam: Thank you for this discussion. I think it was and will be very informative for my readers.