Complexity Is the Enemy of Human Agency

Modern computing is optimized for machines to be precise, not for humans to be effective. This translation layer steals attention, time, and cognitive energy at a civilizational scale.

After working in ambient computing, building AI agents, and designing automated workflows, I’ve started to notice a pattern that’s difficult to unsee. It shows up in small moments — the way we hunt for files we know exist, the way simple tasks fracture into steps, the way attention gets consumed by coordination rather than creation. None of this feels dramatic. It’s just constant. And over time, it adds up to something larger than inconvenience.

The more systems I’ve worked on, the more it’s felt like modern computing is optimized for machines to be precise, not for humans to be effective. We spend our days translating intention into inputs — clicks, taps, commands, workflows — and we accept this as normal because we’ve never experienced an alternative.

I’ve come to believe that this translation layer steals attention, time, and cognitive energy at a civilizational scale, pulling people away from creativity, presence, and meaningful work.

I didn’t have language for it at first. I just knew something was off.

The Pattern

I first noticed it in ambient computing, where the promise was that technology would recede into the background. Experiences would persist across devices. Tasks would complete without constant intervention. The system would carry the burden of coordination.

But again and again, when things got complex, that burden leaked back to the human.

I saw the same thing later when working with AI agents and automated workflows. These tools are powerful — the first I’ve seen that can genuinely absorb complexity rather than just accelerate it. But most of the time, they still operate inside an input-first world. The human has to frame the task precisely, assemble context, choose tools, and verify the result. The agent executes steps, but the person remains the orchestrator.

This is the key distinction: agents today reduce input, but they don’t remove the need to manage it.

When something breaks, the human steps in. When context changes, the human updates it. When the system is unsure, the human decides. The work shifts shape, but the coordination cost remains. The machine moves faster, but the human still carries responsibility for correctness, continuity, and recovery.

Over time, the pattern becomes clear. As systems grow more capable, humans become the glue holding them together — not because the tools are weak, but because the paradigm hasn’t changed. We’ve accelerated execution without transferring ownership of outcomes. That’s the canyon between agentic automation and intent-first computing.

The Hidden Cost

The real cost of this pattern isn’t time. It’s attention. Every time a system asks a human to coordinate, supervise, verify, or recover, it leaves behind a trace. A fragment of attention that doesn’t fully return. A small interruption that breaks momentum. Over the course of a day, those fragments accumulate into something heavier than inconvenience.

We’ve learned to live with it because it’s quiet. A few clicks here. A clarification there. A moment of doubt. A manual fix. None of it feels significant in isolation. But together, they shape how much thinking we can actually do before we’re tired.

This is how agency erodes. Not in dramatic failures, but in small, constant withdrawals. We spend more energy managing work than doing it. More time steering systems than pursuing ideas. Creativity becomes something you schedule, rather than something that emerges naturally. And because this cost is distributed across thousands of micro-interactions, it’s invisible. We blame ourselves for losing focus, for being less productive, for needing breaks — without noticing that the systems we rely on are quietly taxing the very thing they’re supposed to support.

At scale, this isn’t just a productivity issue. It’s a human one.

Naming the Failure

Agency is the ability to choose and execute meaningful action. It’s what allows a person to turn intention into reality without friction consuming the process. It’s also what makes creative work possible — the state where thought can flow forward without being constantly redirected by coordination, correction, or context switching.

Complexity undermines this. Not all at once, but gradually. Each new system that requires oversight, each automation that needs supervision, each workflow that demands setup pushes a little more coordination back onto the human. The system works, but only if the person stays alert. Only if they carry the state. Only if they bridge the gaps.

Over time, execution becomes expensive. Not financially, but cognitively. The effort required to act grows until people begin to avoid action altogether, or default to the smallest possible tasks. This is how ambition shrinks. This is how creativity gets postponed.

When systems externalize coordination costs onto humans, agency collapses. Not because people are incapable, but because the environment makes execution too costly to sustain.

This is the hidden failure mode of input-first computing.

A Scar

I ran into this problem years ago while working in ambient computing. We were trying to build experiences that flowed across devices — phone to desktop, desktop to headset, physical world to digital and back again. The technology mostly worked. We could synchronize frames, pass state, maintain continuity. In demos, it felt like the future.

But outside the lab, the experience kept breaking down. Not because of any single failure, but because of accumulation. Every new surface added another decision. Every transition added another assumption. When something went wrong, the system had no way to recover on its own — the human had to step in, re-establish context, and push things forward again.

At the time, I thought this was an ecosystem problem. Bandwidth wasn’t there yet. Standards were missing. Hardware was immature. All of that was true. But it wasn’t the whole story.

What we were really missing was orchestration at the level of intent. The system could move data, but it couldn’t understand what the person was trying to accomplish, or why. So whenever complexity increased, coordination leaked back to the user. The promise of ambient computing held — but only until the system needed judgment.

That was the lesson I carried forward. If machines can’t own outcomes, humans end up owning everything else.

Why This Matters Now

For a long time, this problem was unsolvable. Machines were good at executing instructions, but not at understanding intention. They could move data, but not meaning. They could automate steps, but not own outcomes. So complexity had nowhere to go except back to the human.

That constraint is finally changing.

Modern AI systems can now interpret intent, reason about goals, and act across tools in ways that weren’t possible even a few years ago. They can maintain context. They can adapt. They can ask when they’re unsure. Combined with persistent state, fast connectivity, and new interfaces, it’s now feasible for systems to absorb complexity instead of externalizing it.

This doesn’t mean the problem is solved. It means the equation has changed.

For the first time, we can imagine computing systems that don’t just accelerate execution, but take responsibility for it — systems that let humans stay in the work instead of constantly managing the work. That’s the shift I’m exploring.

An Open Question

If complexity is the enemy of human agency, then restoring agency becomes one of the most important design problems of our time.

What would people do if execution stopped consuming their attention? What kinds of work would become possible again? What kinds of creativity would re-emerge if momentum wasn’t constantly broken by coordination?

I don’t know the answer yet. But for the first time, it feels like the question is worth exploring seriously.

That’s what I’m working on now.