
Anthropic Mythos: AI at the Edge of History

By Adam Petritsis

The story, as far as anyone outside a specific set of closed rooms knows it, goes something like this: Anthropic built a model. They called it Mythos. And then they decided not to release it (yet).

Not because it didn’t work. Because it worked too well.


I want to sit with that name for a moment. Mythos. In ancient Greek, the word means story, speech, word. But more than that, it came to mean the kind of story that exists at the border between fact and legend. The kind of tale that gets told and retold until no one is quite sure anymore whether it started as history or imagination.

Working on our documentary series Ancient Greece Revisited, I spent years thinking about this exact phenomenon. The most powerful stories of the ancient world weren’t straightforward histories. They were accounts of things so extraordinary, so far beyond the ordinary texture of daily life, that they could only be carried forward in the form of myth. Not because they were false. Sometimes precisely because they were true.

Plato wrote about how Solon, the great Athenian lawgiver, traveled to Egypt and met with the priests at the ancient temple of Sais. These priests possessed historical records that the Greeks themselves had lost. Records of civilizations that predated their own by thousands of years. And when Solon arrived at the temple, clearly quite proud of Athenian wisdom, the priests offered a rebuke that has echoed through history ever since: “Ἕλληνες ἀεὶ παῖδές ἐστε.” You Greeks are eternal children.

The priests meant it precisely. Not as an insult about Greek character. As an observation about institutional memory. The Greeks, for all their brilliance, kept forgetting what had come before them. Each generation assumed it was the first to reach such heights. Each generation was wrong.

Solon heard the story of Atlantis in that context. A civilization that had risen to extraordinary heights, then fallen, then been forgotten so completely that the grandchildren of its collapse told themselves they were the first to build cities.

I keep thinking about that phrase now, in 2026, watching what is happening with AI.

What We Think We Know About Mythos

The details of what Mythos can reportedly do, based on what circulates in the AI research world, cluster around a specific capability intersection: elite-level cybersecurity analysis combined with deep software engineering expertise, operating at machine speed and scale.

To understand why this combination is genuinely different from what currently exists in consumer AI, think about what the best human security researcher actually does. They read code the way a novelist reads prose, looking for the places where meaning slips, where the author’s intention doesn’t quite match the implementation. They understand not just the syntax of software but its psychology. They know how systems fail because they’ve spent careers thinking about the specific, creative ways that intention and execution diverge in real codebases under real conditions.

Now take that capability and remove the biological constraints. No fatigue. No distraction. No ego investment in a particular theory about where the vulnerability might be. No deadline pressure that causes a researcher to stop looking when the finding seems good enough. Just systematic, thorough analysis, at a scale and speed that no human team can replicate.

That is a remarkable tool for defense. A system like that, deployed by a security team at a bank or a hospital or a power grid operator, could find vulnerabilities that would otherwise remain hidden until someone malicious found them first. In a world where critical infrastructure is under constant attack, the defensive application of this capability alone would be worth enormous investment.

But the same capability in the wrong hands, or accessible without adequate controls, is something entirely different. The problem isn’t the capability itself. The problem is that capabilities don’t come with access restrictions built in. Once something like Mythos exists, the question of who has access to it becomes the most important question in the room.

Anthropic, apparently, decided the answer to that question wasn’t ready. And so Mythos stays where it is, behind closed doors, available to specific stakeholders, absent from any product page.

The Name Was Not an Accident

I keep returning to that name. Mythos.

Either someone at Anthropic has a deep classical education and a sense of irony, or the name is coincidental and the universe has a gift for metaphor. Because what is happening right now with AI, not just at Anthropic but across the entire field, is exactly what that word describes. We are living through events that are real and consequential and happening right now, but that most people are experiencing the way Solon heard about Atlantis. As a story. As something compelling and perhaps exaggerated, about a world that doesn’t quite intersect with their daily life yet.

The chatbots and the image generators and the coding assistants, those are the surface layer. The things that make the news, that trend on social media, that are either going to eliminate all jobs or save civilization depending on which newsletter you read. Below that surface is a much more serious conversation: about capabilities that have been developed, evaluated, and deliberately not released. About the distance between what AI systems can do and what AI companies have decided it’s responsible to make available.

That gap is where the real story lives.

I’ve been building with AI tools for a few years now. My most visible project so far is TAGiT, a Chrome extension for video learning, which I built almost entirely in collaboration with Claude, starting from essentially zero TypeScript knowledge. The experience gave me a practical sense of what current AI systems can and cannot do. The models I work with daily are genuinely extraordinary at certain things and genuinely limited at others. They assist. They augment. The relationship is collaborative in a meaningful sense.

What’s being described with Mythos is a different category. Not a collaborative assistant but an autonomous actor in a specific high-stakes domain. That’s a qualitative shift, not a quantitative one. And qualitative shifts are the ones that change things.

Why Alignment Actually Matters Here

The word “alignment” gets used so often in AI discussions that it’s lost some of its weight. It’s become a technical-sounding abstraction that people nod along to without necessarily feeling the force of what it’s pointing at.

Let me try to make it concrete.

Alignment, at its most basic, means building AI systems that actually do what we want them to do. Not just what they were trained to optimize for. Not just what a clever prompt can trick them into doing. What we genuinely want, including the things that are hard to specify precisely, and the things that conflict with short-term optimization targets.

For a system with Mythos-level capabilities, alignment is the difference between a tool that helps security teams protect infrastructure and a tool that helps anyone who can access it attack infrastructure. The capability doesn’t know the difference between defense and offense. The alignment is what determines which direction it points. And alignment is hard. Not theoretically hard. Practically, empirically, demonstrably hard, in ways that the best researchers in the field are still working to understand.

This matters enormously right now for a reason that doesn’t get enough attention: the AI capability race is not slowing down because one laboratory shows restraint. Anthropic holding back Mythos doesn’t prevent other organizations, including some with less rigorous safety cultures and some operating entirely outside the regulatory frameworks of Western democracies, from developing similar capabilities and making different release decisions.

The question is not whether systems like this will eventually exist in the world. They will. The question is whether the governance frameworks, the norms, the institutional capacity to manage these capabilities, will exist at the same pace.

Looking at history, the record on this is not reassuring. Nuclear weapons existed for years before international frameworks for limiting their spread were established. That situation was managed through a combination of deterrence, diplomacy, and luck that it would be dishonest to call a system. AI is moving considerably faster than nuclear technology did. The window for getting governance right is shorter. And luck is not a strategy.

The Atlantis Problem

There’s a pattern in the Atlantis story that has stayed with me since we explored it on Ancient Greece Revisited, and it’s not the one most people focus on.

Plato tells us that the Atlanteans were, for most of their history, a remarkable civilization. Extraordinary technology, extraordinary resources, descendants of Poseidon himself. For generations they used their power wisely, kept their divine nature intact. And then, over time, they drifted. They became more interested in material wealth than in whatever had made them great. More interested in expanding westward than in understanding the proper limits of their power. The gods, in Plato’s telling, punished this by sending the waves.

Now, I am not claiming Atlantis was a historical fact, or that AI will trigger literal floods. But the pattern Plato is describing recurs throughout history without requiring any supernatural explanation. The civilizations that fell were not, as a rule, the ones that lacked capability. They were the ones that developed capability without a corresponding development in judgment. Without the ability to ask not just “can we do this” but “should we do this,” and to actually sit with the answer.

The AI industry right now is an industry of extraordinary capability. The question of whether judgment is keeping pace is genuinely open. I don’t think the answer is obviously no. But I don’t think it’s obviously yes, either.

Anthropic’s decision to build Mythos and not release it is, in a small way, an example of that judgment operating. It’s a decision that presumably cost something, specifically the commercial value of a remarkable capability, in service of a considered conclusion about consequences. That is not nothing. In an industry where the incentive structures run almost entirely toward faster release and broader deployment, a decision to hold back is worth naming as what it is: an act of restraint that required something.

But one laboratory’s restraint doesn’t solve the systemic problem. And systemic problems require systemic responses.

How This Will Be Remembered

Here’s the thought that I find most clarifying when I’m trying to hold the full weight of this moment.

Dozens of generations from now, if the arc of history continues to bend in generally livable directions, people will be telling stories about this period. The way we tell stories about Gutenberg and the printing press. The way we tell stories about the Manhattan Project and the Cold War. The way people, eventually, started telling stories about whatever actually happened with Atlantis.

Those future stories will have the texture of myth. The details will be compressed. The names and organizations and specific technical decisions will be simplified into archetypes. There will probably be a story about the labs that built capabilities too powerful to release. A story about the moment it became clear that something had been created that could not be fully controlled. A story about the choices made, and not made, in the years when it was still possible to choose differently.

We are inside that story right now. The uncomfortable thing about being inside a story is that you cannot see its shape yet. You cannot know which decisions will look wise in retrospect and which will look catastrophically short-sighted. You are just here, in the middle of something that still has the texture of ordinary time, not yet compressed into the clean arc of narrative.

The Egyptian priests told Solon that the Greeks were eternal children because they kept forgetting the full story. They kept assuming their moment was the first of its kind. They kept treating the present as permanent, as if the height they had reached was the natural resting state of civilization rather than a peak that required constant, conscious effort to maintain.

What I take from that is not pessimism. It is a kind of attention. The specific kind of attention that comes from understanding you are living through something that will eventually be told as a story, and that your choices in this moment are part of what that story will contain.

What Comes Next

I don’t have a clean conclusion to offer. I don’t think anyone does, and I’m suspicious of the people who claim to.

What I have is a filmmaker’s instinct about where we are in the arc. The models like Mythos, the capabilities being built and held back, the governance conversations happening in parallel, these are the first act. The moment of establishment, where the stakes are introduced and the forces that will define the rest of the story take their positions.

The second act, historically, is where the tension becomes undeniable and the choices made in the first act start to show their consequences.

We are not there yet. But we can see it from here.

The Atlanteans had extraordinary capability. They had resources and technology and, for a long time, the wisdom to use them well. What Plato suggests ultimately undid them was forgetting what the capability was for. Losing the thread of purpose in the excitement of power.

A model named Mythos, sitting unreleased in a laboratory somewhere, is a kind of story. It’s the kind of story that starts as a rumor and becomes, over time, a founding myth. The moment when we collectively realized we had built something that couldn’t be fully controlled, and had to decide what to do about that.

Whether we decide well is still an open question. It might be the most important open question of our time.

The name they chose for it could be a coincidence, or it could suggest that at least some of the people building these systems understand the gravity of the moment they’re in.

Whether the rest of us are paying attention with the same seriousness is the thing I find myself wondering about most.


Ancient Greece Revisited explores the full arc of Plato’s account of Atlantis and the deeper questions it raises about the cycles of civilization. You can find the episode on YouTube.
