Peter Steinberger built the most viral AI agent platform in recent memory. Meta wanted it. OpenAI wanted it. Both threw billion-dollar acquisition offers at a project that was losing $10-20k per month.

OpenAI won. Steinberger announced he's joining Sam Altman's team to "drive the next generation of personal agents."

And Anthropic? They weren't even in the conversation.

What Is OpenClaw?

For the uninitiated: OpenClaw (formerly Clawdbot, then Moltbot) is the AI assistant that actually does things. Not just answers questions. Not just generates text. It manages your calendar. Books your flights. Joins social networks full of other AI agents. It's the bridge between "AI that talks" and "AI that acts."

The tagline—"the AI that actually does things"—is both brilliantly simple and devastatingly accurate as a critique of everything else on the market.

OpenClaw went viral because it solved the problem everyone's been complaining about: AI assistants that require you to do all the actual work. Copy this, paste that, click here, confirm there. OpenClaw just... handles it.

Why This Acquisition Matters

The AI race is entering a new phase. We're past the point where model benchmarks determine winners. GPT-4, Claude, Gemini—they're all "good enough" for most use cases. The marginal improvements in reasoning or context length matter less than they did two years ago.

What matters now is agency. Which AI can actually do things in the real world? Which one can book your flight, file your expense report, schedule your meetings, and manage your inbox—without you babysitting every step?

This is where the next trillion dollars will be made or lost.

OpenAI just acquired (via acqui-hire) the person who cracked this problem first. And since the code stays open source, they didn't buy code at all—they bought vision, momentum, and the credibility that comes from building something people actually love.

Anthropic's Strange Silence

Here's what baffles me: Anthropic positions itself as the "safety-first" AI company. Their whole pitch is that they're building AI that's helpful, harmless, and honest. Claude is genuinely excellent at conversation, reasoning, and careful analysis.

But what good is a safe AI that can't do anything?

The future isn't AI that answers questions. It's AI that takes actions. And actions are where safety actually matters. When an AI is booking flights, moving money, sending emails on your behalf—that's when alignment becomes critical. That's when "helpful, harmless, and honest" stops being a marketing slogan and starts being an engineering requirement.

Anthropic should have been all over OpenClaw. This was their natural territory. An open-source agent platform committed to doing things safely and transparently? That's the Anthropic brand promise, actualized.

Instead, they sat on the sidelines while OpenAI and Meta threw billion-dollar bids around.

The "Open Source Will Stay Open" Illusion

Steinberger insisted on keeping OpenClaw open source. That's admirable. The project is being moved under a foundation, and OpenAI has promised to support it.

But let's be realistic about what "support" means when you're working at OpenAI.

Steinberger's time, energy, and best ideas will now flow into OpenAI's products. The foundation will get the scraps. Open-source projects orphaned by acqui-hires follow a predictable trajectory: initial enthusiasm, gradual neglect, eventual irrelevance.

The valuable stuff—the breakthroughs, the integrations, the enterprise features—will show up in OpenAI's products first. The community gets the leftovers.

Anthropic could have offered a different deal. They could have committed to building Claude's agent capabilities on top of OpenClaw, fully open source. They could have positioned themselves as the company that makes powerful AI agents accessible to everyone, not just OpenAI's customers.

They didn't.

The Agent Wars Have Begun

Look at the landscape:

OpenAI now has Steinberger and the OpenClaw playbook. They're building Operator, their agent interface. They have the biggest user base and the deepest pockets.

Google has Gemini agents integrated across Workspace. They have distribution through Android, Chrome, and Gmail that nobody else can match.

Meta is building agents into WhatsApp, Instagram, and Facebook. Three billion users who could be talking to AI agents tomorrow.

Microsoft has Copilot embedded in Windows, Office, and Azure. Enterprise distribution that's impossible to replicate.

Anthropic has... Claude in a chat window.

Do you see the problem?

Claude is excellent. Arguably the best conversational AI on the market. But "best at conversation" is not a winning position when the game has shifted to "best at action."

What Anthropic Should Have Done

Anthropic should have moved aggressively on OpenClaw. Here's why:

  1. Strategic fit: OpenClaw's philosophy of transparency and user control aligns perfectly with Anthropic's safety-first positioning.

  2. Technical foundation: Building agents on top of Claude using the OpenClaw architecture would have been a natural integration.

  3. Talent acquisition: Steinberger is exactly the kind of builder Anthropic needs—someone who ships products people love, not just research papers that impress academics.

  4. Market positioning: "The safe AI agent" is a compelling value proposition. "The smart AI chatbot" is increasingly commoditized.

  5. Open source credibility: Acquiring OpenClaw while keeping it open source would have been a powerful statement about Anthropic's values.

Instead, OpenAI gets all of this. Altman gets to claim the "open" mantle (ironic, given the company name controversy). And Anthropic remains what it's always been: an excellent research lab struggling to become a product company.

The Dario Problem

I respect Dario Amodei enormously. He's one of the most thoughtful people in AI. His essays on AI safety are required reading.

But thoughtfulness can become paralysis. The AI industry is moving at a pace that punishes deliberation. While Anthropic carefully considers the implications of every product decision, competitors are shipping, iterating, and capturing market share.

OpenClaw was available. The founder was open to deals. The technology was proven. The user base was exploding.

And Anthropic did nothing.

This isn't the first time. Remember when Anthropic was slow to launch Claude 3? When they took months longer than expected to ship their API? When they were perpetually "a few weeks away" from features that competitors had already released?

There's a pattern here. Anthropic thinks like a research lab. They ship when things are ready, when they've considered all the implications, when they're confident they're doing it right.

That's admirable in academia. It's suicide in a market this competitive.

What Happens Next

OpenAI will integrate Steinberger's insights into their agent products. Within six months, we'll see a dramatically improved Operator—probably rebranded, probably with OpenClaw's best features baked in.

Google and Meta will copy whatever works. They have the engineering resources to catch up fast.

Anthropic will release thoughtful blog posts about agent safety. They'll publish important research on how to make AI agents reliable and trustworthy. They'll contribute valuable ideas to the field.

And they'll watch their market position erode.

Because in the end, the AI that does things will beat the AI that thinks carefully about things. Users don't want philosophy. They want their calendar managed, their flights booked, their inbox triaged.

Anthropic had a chance to be the company that delivered both—thoughtful and capable. They let that chance slip away.

The Lesson

The OpenClaw deal (an acqui-hire, technically) is a small story in isolation. One developer joining one company. A few billion dollars that, in the end, didn't change hands.

But it's a symptom of a larger problem. Anthropic is playing defense in an offense-first industry. They're optimizing for not-making-mistakes in a market that rewards making-moves.

You can be the smartest, safest, most careful company in AI. But if you're not in the arena—if you're not making acquisitions, shipping products, capturing users—then you're just an observer.

Right now, Anthropic is observing. OpenAI is acquiring.

I know which position I'd rather be in.


The AI wars won't be won by the best model. They'll be won by the best product. And products require action, not just analysis.