
It’s been hard to miss Clawdbot over the last few days.
Open-source, self-hosted, living inside your DMs — developers have been spinning it up, sharing screenshots, and getting a feel for what always-on AI agents actually look like in practice. And honestly, it’s easy to see why it’s taken off. It works. It feels different.
This isn’t a post about Clawdbot specifically, or a critique of an early open-source project. It’s about what tools like this are quietly revealing about where agentic AI is headed — and a class of problems we’re going to run into more often.
For a long time, AI lived behind a prompt box.
You opened a tab, asked a question, got an answer, and moved on. Even when AI showed up inside IDEs or products, the interaction model stayed mostly the same: you explicitly invoked it when you needed help.
Agents flip that model.
They live inside email, messaging tools, calendars, and internal systems — the same places where work already happens. You don’t “go to” them as much as they sit alongside you, continuously observing and acting.
That shift in where AI lives turns out to matter a lot.
Autonomous systems and background automation aren’t new. What is new is how easily language models can reason across tools, ingest untrusted human input, and be deployed with real permissions — often by a single developer in a few hours.
When you run an agent like this, you’re no longer dealing with a chat interface. You’re running a long-lived service that:
At that point, something important changes. You’re no longer the only input to the system.
Every email the agent reads was written by someone else.
Every webpage it visits is untrusted by default.
Every message or calendar invite it parses could come from anywhere.
All of that content now flows into a system that has memory, credentials, and the ability to act.
This is why issues like prompt injection stop being an academic concern once agents gain execution capability. It’s not about “bad prompts” in isolation. It’s about untrusted external input influencing systems with real permissions and side effects.
Some of the security issues people noticed around Clawdbot over the weekend weren’t surprising if you look at this through an infrastructure lens.
Anything with a public endpoint gets scanned.
Anything holding long-lived credentials attracts abuse.
Automation reliably attracts more automation.
This wasn’t an AI failure. It was infrastructure reality showing up the moment these agents started behaving like services instead of tools.
Right now, most of these setups are personal. One user. One machine. Full trust. Small blast radius.
That makes a lot of problems easy to ignore.
But architectures don’t stay personal for long.
The deeper issue here isn’t that agents are early or rough around the edges. It’s that we’re letting them implicitly inherit human identity and access.
Agents run where we run. They reuse our tokens.They see everything we see.
That’s convenient — but it’s also fragile.
We’re not really delegating tasks. We’re cloning ourselves, without clear boundaries or constraints.
As agents become more autonomous, they need to be treated as actors in their own right. That means:
This isn’t something user authentication or traditional service authentication solves on its own. It’s a new layer that sits between the two.
It’s easy to see all of this as a personal productivity issue today. But the same agent patterns will inevitably show up inside products and workflows that act on behalf of teams and customers.
When that happens, the blast radius changes — and so does the cost of getting identity wrong.
Clawdbot didn’t expose a bug. It exposed a shift.
It made it clear that once AI moves into the systems where work happens, the old assumptions stop holding. Autonomy changes the security model. Persistence changes the trust model.
What’s interesting about moments like this isn’t the specific tool or the specific incident — it’s that they surface architectural questions earlier than we expected.
The details will change. The tools will evolve. But the underlying question won’t:How do we safely let autonomous systems act on our behalf?
This is exactly the problem we’re working on at Scalekit.
With Agent Auth, agents don’t inherit human credentials by default. They get explicit, scoped identities of their own — so teams can control what access is being delegated, audit what an agent did, when it acted, and why it acted, and limit blast radius as agents move from experiments into real production systems.
As agents move deeper into the systems where work happens, we’ll need to get more intentional about how we model trust, delegation, and identity — not as an afterthought, and not as a bolt-on security layer, but as a first-class part of how agentic systems are designed.
We’re still early. But the shape of the problem is already visible.
When AI moves from answering questions to taking actions, identity stops being an implementation detail.