
The Next Ten Years: What Nobody Wants to Say Out Loud

By Adam Petritsis

In December 2025, I published a conversation I had with Claude about AI’s impact on the creative industry and society. That piece ended with a simple yes-or-no question: Will the rise of AI fundamentally and irrevocably change the human race? The answer was yes.

Two months later, I find that answer insufficient. Not because it was wrong, but because it was incomplete. Since then, I’ve been thinking about what that change actually looks like. Not the sanitized conference version. Not the doomer sci-fi version. The version that follows logically from what is already happening.

Here’s what I see.

(Now is a good time to make your coffee if you haven’t already.)


AI Doesn’t Need to Be Smart to Be Devastating

Forget AGI. Forget sentient machines. The disruption that matters is already here, and it runs on technology that is, at its core, a very sophisticated autocomplete.

The economics are painfully simple: if a system can do 70% of a job at 10% of the cost, the job is gone. Not because AI is brilliant, but because it is good enough and cheap enough. A junior copywriter costs €20,000 to €35,000 a year, depending on location. An LLM costs €20 to €200 a month. The math isn’t complicated.
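To make that math concrete, here’s a minimal sketch using the figures above. It deliberately ignores the cost of supervising and editing the model’s output, so treat the ratios as an illustrative upper bound, not a forecast:

```python
# Rough cost comparison using the figures cited above:
# junior copywriter at EUR 20,000-35,000 per year,
# LLM subscription at EUR 20-200 per month.

copywriter_eur_per_year = (20_000, 35_000)
llm_eur_per_month = (20, 200)

# Annualize the subscription: (240, 2400) EUR per year.
llm_eur_per_year = tuple(12 * m for m in llm_eur_per_month)

# Best case for the human: lowest salary vs. priciest subscription.
best_case_ratio = copywriter_eur_per_year[0] / llm_eur_per_year[1]   # ~8x
# Worst case: highest salary vs. cheapest subscription.
worst_case_ratio = copywriter_eur_per_year[1] / llm_eur_per_year[0]  # ~146x

print(f"The subscription is roughly {best_case_ratio:.0f}x to "
      f"{worst_case_ratio:.0f}x cheaper per year.")
```

Even in the scenario most favorable to the human, the gap is close to an order of magnitude.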

But here’s what makes this different from every technological revolution in history: AI is hitting all industries simultaneously.

When mechanized looms disrupted textiles, displaced weavers could become factory workers. When cars replaced horses, blacksmiths became mechanics. There was always an adjacent industry absorbing the shock. AI offers no such safety valve. Law, creative work, medicine, education, finance, customer service, engineering: it is compressing them all at once. There is no neighboring sector to escape to. That’s genuinely unprecedented, and it’s why anyone saying “the Luddites were wrong, and so are you” is making a dangerously lazy comparison.

The practical effects are already forming, quietly. Companies that needed fifty people start realizing they need fifteen. Not through dramatic layoffs, but through attrition, restructuring, roles that simply don’t get refilled. The positions that remain pay less because more people are competing for fewer spots. And governments are still running the old playbook: retrain, reskill, find a new career. That advice assumed there was somewhere new to go.

The generation that will feel this most acutely is the one currently between thirty and fifty. Too young to retire into irrelevance, too old to pivot effortlessly. The older generation is approaching the finish line. The younger ones are adaptable; they’ll build their identity around whatever the new world looks like. But the middle generation built their lives, their mortgages, their sense of self around a professional world that is quietly being disassembled underneath them.

I’m in that group. I’ve spent the last three years fully immersed in AI, learning about it, building with it, writing about it, integrating it into my work. And I recently had a day where I didn’t want to hear the word “AI” again. Where everything felt too heavy, too fast, too inevitable. The people who haven’t been paying attention are probably content in their own bubble. They’ll keep going about their lives until the disruption reaches their doorstep. But when it does, they won’t have the luxury of a gradual realization. It will hit them all at once.

In Greece, it took three years for AI to reach street-level conversation. I only recently started hearing random people in the market telling each other to download ChatGPT on their phones. Three years. That’s not slow by historical standards. But historical standards don’t apply anymore. The technology is moving faster than awareness can follow, and awareness is moving faster than adaptation. There is no time to adjust the way societies have adjusted in the past.


The Bubble That Can’t Burst

The rational thing for markets to do would be to correct. The AI investment frenzy has all the hallmarks of a bubble: speculative excess, overcapacity, companies valued on promises rather than revenue. A correction would give society breathing room. Time to regulate, to educate, to adapt.

It won’t happen. And the reason it won’t happen has nothing to do with economics.

This isn’t the dot-com era. The technology actually works, and more importantly, this is a geopolitical arms race. China doesn’t care whether American tech stocks are overvalued. They’re building and advancing regardless. The US, or more accurately the handful of companies driving this race, can’t afford to slow down, because falling behind isn’t a market loss. It’s a power loss.

So the bubble inflates further. Not because it makes financial sense, but because the cost of stopping is higher than the cost of continuing. Both sides keep spending, keep advancing, keep consolidating. The incentive structure makes slowing down impossible, even if everyone involved knows the pace is unsustainable.

A bubble burst would have been the best outcome for humanity. A chance for the masses to catch up. A pause button before the train leaves the station for good. But the players at the table can’t afford to let that happen. Not until they’ve secured what they’re building toward. And by then, it will be their decision to manage the crash, not the market’s.


The Two-Speed World

This leads somewhere most analysts avoid naming because it sounds too dystopian. But it follows logically from everything above.

We are heading toward a two-speed world. Not two classes in the traditional sense; this isn’t rich versus poor, though that’s part of it. It’s a division between those who participate in the real world and those who retreat into a virtual one. And the uncomfortable truth is: the retreat will be voluntary.

I’ve been thinking about this through what I call the triangle: AI, metaverse, and blockchain as converging forces. AI provides the intelligence layer. The metaverse provides the experience layer. Blockchain provides the ownership and identity layer. Together, they create an alternative reality that, for many people, will be more compelling than the physical world they can afford.

Picture it: immersive virtual environments, powered by AI, that know you better than your friends do. AI companions that are better conversationalists than most humans. Virtual economies that give you status and purpose. All accessible from your couch, for a fraction of what real-world participation costs.

Nobody will be forced indoors. Nobody will be locked in. People will choose to stay because what they experience in the virtual world is better than what they can access in reality. If you can’t afford to travel, the metaverse offers you the world. If your job was automated, a virtual economy gives you something to do. If you’re lonely, an AI companion is always there.

That’s not dystopia by force. It’s dystopia by design, and by consent. And that might be worse, because there’s nobody to rebel against.


Even the Physical World Isn’t Safe

For a while, the conventional wisdom held that physical labor was the safe zone. Plumbers, electricians, nurses: you can’t automate hands-on work. That was a comforting thought. I’m not sure it holds.

Robotics is accelerating faster than most people realize. Tesla’s Optimus, Figure, the Chinese humanoid push: if any of these cracks general-purpose physical robotics at reasonable cost within five to seven years, the last human advantage evaporates. And the timeline is compressing.

Beyond robotics, there’s a question nobody in mainstream discourse wants to address honestly: bio-enhancement. If we approach AGI, or even something close enough, the pressure to augment human cognition becomes existential. Not optional, not fringe. Existential. Neuralink-style interfaces, genetic cognitive enhancement, pharmaceutical boosters. These move from science fiction to public policy debate within the decade.

Because if we don’t enhance, we won’t go extinct. We’ll become irrelevant. And for a species that derives meaning from agency and purpose, irrelevance might be psychologically worse than extinction.


The Speed Problem

Every previous major transition (agricultural, industrial, digital) gave societies decades or centuries to adapt. Cultures evolved. Institutions reformed. Laws caught up. People found new identities and new work.

This one is giving us maybe ten to fifteen years. And it’s not just the speed; it’s the breadth. Previous disruptions were sector-specific. This one is horizontal. Human psychology doesn’t evolve that fast. Social structures don’t reorganize that fast. Governance certainly doesn’t move that fast.

The mismatch between technological acceleration and human adaptability is, I believe, the defining risk of the next decade. Not AI alignment, not superintelligence, not killer robots. Just the plain fact that we’re changing the world faster than we can change ourselves.


It Was Never About the Technology

There’s a conversation happening right now about whether AI itself is dangerous. Whether it will be aligned with human values. Whether it might go rogue. These are valid technical questions, but they miss the bigger picture.

Every destructive force in human history has been a tool wielded by humans. Nuclear energy didn’t bomb Hiroshima. People did. Social media didn’t polarize societies. The business models behind it did. And AI won’t be the thing that reshapes the world for better or worse. The humans and institutions controlling it will.

The geopolitical race I described earlier isn’t an AI problem. It’s a power problem. The two-speed world isn’t a technology problem. It’s an economic and political choice. The speed of disruption isn’t a computing problem. It’s a governance failure.

The biggest threat to humanity has always been humanity. AI just raises the stakes. It was built in our image, trained on our writing, shaped by our behavior. Whatever it becomes, it’s a reflection of us.


The Breakthroughs Are Real. The Access Won’t Be.

I want to be fair. AI is producing genuine advances that matter. Drug discovery timelines shrinking from years to months. Personalized medicine. Early cancer detection. AI tutoring that could give a student in rural India better instruction than most private schools. Cheaper, cleaner energy. These are not hypothetical. They are happening.

But I see essentially no chance that eight billion people will have equal access to these benefits. That’s not pessimism; it’s history. Technology has always advanced humanity, and it has never done so equally. The people who can afford the breakthroughs, or who have the connections to access them, will benefit enormously. Everyone else will benefit partially, slowly, or not at all. The bright spots are real, and they will deepen the divide, not close it.


So What Do We Do?

I’m not going to end this with false optimism or a neat call to action. The situation doesn’t warrant one.

What I will say is this: the window for individual positioning is still open, but it’s closing.

The people who understand what’s coming and position themselves accordingly, not with panic but with clear-eyed pragmatism, will have choices. The rest will have whatever choices are given to them.

In my December interview with Claude, I was asked what I’m building toward. My answer was: “I’m building towards the moment when the world shifts into a two-gear model, to be able to choose in which gear I want to stay on.”

Two months later, that answer hasn’t changed. But the urgency has.

The question isn’t whether this shift is coming. It is whether you’ll be ready to choose, or whether the choice will be made for you.
