Always Run a Changing System

Part 1 of a series on building systems that change themselves.

The data center at Parkeon in Kiel wasn’t built by a hyperscaler. It was built by local craftsmen. Double floor, secure entry, the kind of setup where you could feel the cables running beneath your feet and hear the hum of every machine in the room. This was the early 2000s. Windows Server 2003. Virtual machines processing credit card transactions and analytics data from devices. Our most sophisticated automation was DOS-style batch scripts — files shuffled around on a schedule, regular expressions on a Windows command line that fought you every step of the way.

My boss, Andrea Menge, ran a tight ship. And the unspoken rule — the rule everyone in IT lived by back then — was simple: never touch a running system.

If the VMs were up, you didn’t patch. If the transactions were flowing, you didn’t update. If the batch scripts were running at 3 AM and the files were landing where they should, you left it alone. You treated a working server the way you’d treat a sleeping baby. Any change was a risk. Every deployment was a held breath.

It was beautiful, in its own way. Wild West. Hands-on. You knew every machine by name.

And the mantra made sense.

The Physics of Fear

Why did “never touch a running system” dominate an entire generation of IT professionals?

Because the physics of the environment demanded it. The cost of failure was disproportionate to the cost of stagnation. A stable but outdated system was predictable. A recently changed system was a liability. The math was clear.

The feedback loops were slow. You deployed, then waited. Hours. Sometimes days. If something broke, you might not know until a client called or a transaction failed silently. Visibility was minimal — we were navigating by feel, not by telemetry. The system was a black box. Something went in, something came out, and you had to trust the transformation in between was correct.

Change, in that environment, was entropy. Every modification increased disorder. The only defense was rigidity.

That was rational then. It isn’t anymore.

An Old Laptop and a Different Door

Late January 2026. I’m sitting in front of an old laptop I had lying around. On the screen is OpenClaw — an open-source AI agent platform built by Peter Steinberger that had been tearing through the developer world. I’ve just installed it. The tokens are burning at a rate that makes me wince. The platform is raw.

But I can see it immediately.

This is the exact inversion of everything I grew up with. This isn’t a system you carefully maintain in a frozen state. This is a system that is designed to change — and at its most mature, a system that changes itself.

I’d been working with n8n before this — solid, enterprise-grade, gives you control. But OpenClaw opened a different door. Not just automation. Autonomy.

Within weeks, I’d built a framework around it. A hierarchy: an architect that designs agents through structured interviews, a builder that deploys them to dedicated servers, and the agents themselves — running 24/7, talking to their humans through Telegram, doing their jobs. Each agent on its own server, its own API keys, its own cost tracking. No shared infrastructure. No cascading failures.

And a ladder. Five levels of trust, from “I check everything you do” to “I glance at you once a month.” The human decides when to promote. The agent never promotes itself.
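As a sketch, the ladder might look like this in code. The level descriptions and the `promote` rule are my own illustration of the idea under assumed names, not the actual implementation — the post only fixes the two endpoints and the rule that promotion is always a human decision:

```python
from dataclasses import dataclass

# Five hypothetical trust levels, from full review down to a monthly glance.
# Only levels 1, 2, and 5 are described in the post; 3 and 4 are placeholders.
TRUST_LEVELS = {
    1: "every action reviewed",
    2: "check-in every few days",
    3: "weekly spot checks",
    4: "exception reports only",
    5: "monthly glance",
}

@dataclass
class Agent:
    name: str
    trust_level: int = 1  # every agent starts fully supervised

    def promote(self, approved_by_human: bool) -> None:
        # Promotion is always a human decision; the agent never promotes itself.
        if not approved_by_human:
            raise PermissionError("only a human may promote an agent")
        self.trust_level = min(self.trust_level + 1, max(TRUST_LEVELS))

assistant = Agent("personal-assistant")
assistant.promote(approved_by_human=True)
print(assistant.trust_level)  # 2
```

The important property is that there is no code path by which the agent raises its own level — the approval flag has to come from outside.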

I recently listened to a conversation between Lex Fridman and Peter Steinberger where Steinberger said something that landed hard: we don’t have to be afraid of changing systems or big refactors anymore. If something isn’t working, we can fix it. Not by sweating through night shifts until it’s up again — by prompting our way to the right solution and letting the tools do their work.

That’s a completely different relationship with change. Change isn’t entropy anymore. Change is the operating principle.

What a Changing System Looks Like

I won’t go deep into the architecture here — that’s a story for the next post in this series. But here’s the shape of what “always run a changing system” looks like in practice.

The system has three tiers. At the top sits an orchestrator that designs new agents through structured interviews — not from a one-line prompt, but from a genuine conversation about what the agent should do, who it serves, what it costs, and how it should fail. The output is a complete genetic blueprint.

Below that, a dedicated builder per agent. It takes the blueprint, deploys it to a server, wires up the skills, connects the communication channels, and sticks around as a caretaker. It’s scoped entirely to its own agent’s directory — it literally cannot see or touch anything else.

At the bottom, the agent itself. Always on. Doing its job.
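To make that shape concrete, here is a minimal sketch of what a blueprint and the builder’s directory scoping might look like. Every field name and path below is hypothetical — the post only says the blueprint covers what the agent does, who it serves, what it costs, and how it should fail:

```python
# Hypothetical "genetic blueprint" an orchestrator interview might produce.
blueprint = {
    "name": "market-intel",
    "purpose": "quantitative market intelligence",
    "serves": "owner, via Telegram",
    "monthly_budget_eur": 25,
    "on_failure": "stop and escalate to the human",
    "trust_level": 1,
    "scope": "/agents/market-intel",  # illustrative path, not a real directory
}

def builder_may_touch(blueprint: dict, path: str) -> bool:
    """The builder is scoped to its own agent's directory and nothing else."""
    scope = blueprint["scope"].rstrip("/")
    return path == scope or path.startswith(scope + "/")

print(builder_may_touch(blueprint, "/agents/market-intel/skills/report.py"))  # True
print(builder_may_touch(blueprint, "/agents/assistant/config.json"))          # False
```

The scoping check is what makes "no cascading failures" more than a slogan: a builder that cannot name another agent’s files cannot break them.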

Two agents are live today. My personal assistant was promoted from Level 1 (I review everything) to Level 2 (I check in every few days) in fifteen days. She’s handled 27 escalations. At one point, she decided to build her own Trello integration without asking — the safety architecture caught it, the builder rewrote it properly, and the system learned from the incident. That’s a changing system with guardrails.

A second agent handles quantitative market intelligence. Level 1 — every session reviewed. Three layers of boundary enforcement: operating system, database, and instruction level. It literally cannot modify its own core engine.
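The instruction-level layer, for instance, could be as simple as a path guard that refuses any write into the agent’s own core engine. This is a sketch under assumed paths, not the system’s actual code; the operating-system and database layers would enforce the same boundary independently, so a single bypass isn’t enough:

```python
from pathlib import Path

# Illustrative protected directory; the post does not name real paths.
CORE = Path("/agents/market-intel/core")

def check_write(target: str) -> None:
    """Raise if a write would land inside the agent's own core engine."""
    resolved = Path(target).resolve()
    if resolved == CORE or CORE in resolved.parents:
        raise PermissionError(f"agent may not modify its core engine: {resolved}")

check_write("/agents/market-intel/data/notes.txt")    # allowed, returns quietly
# check_write("/agents/market-intel/core/engine.py")  # would raise PermissionError
```

Defense in depth is the point: even if the agent talks its way past its instructions, the filesystem permissions and database grants still say no.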

Both run on dedicated servers. Total infrastructure cost: roughly fifteen to twenty-five euros a month per agent.

Not chaos. Structure. Not carelessness. Graduated trust.

A Fool With a Tool Is Still a Fool

I want to be clear about something. “Always run a changing system” is a philosophy, not an operations manual. It’s a provocation, not an invitation to be reckless.

A fool with a tool is still a fool.

These systems require a human who understands risk management. Who knows how delivery frameworks work. How system architecture works. How things fail. The AI doesn’t replace that knowledge — it amplifies it. If you don’t know what you’re doing, faster tools just mean faster mistakes.

But if you do know what you’re doing, the relationship with change inverts completely. The old world punished change because the tools were crude and the feedback was slow. The new world rewards change because the tools are precise and the feedback is immediate. Something breaks? You don’t need a war room and a sleepless night. You need a clear prompt and a capable model.

The ocean is never the same wave twice. The currents shift, the sandbars move, the swell direction changes by the hour. A surfer who refuses to adapt to changing conditions doesn’t last long in the water. But a surfer who respects the power beneath them, reads the patterns, and adjusts — that’s the one who finds the best waves.

The future belongs to optimists. Not naive ones. Competent ones.

The Next Set

This is Part 1. The philosophy. The why.

Next, I’ll take you inside the factory — how the agents are designed through interviews, built on dedicated infrastructure, and gradually released into autonomy. How a personal assistant and a market analyst are running on twenty-euro servers, communicating through Telegram, and getting better every week. How the system itself is learning.

If you’re still living by “never touch a running system” — I get it. I lived there for years. The mantra served me well when the environment demanded it.

But the environment has changed. And your systems should too.


