The Work AI Moves
AI is a genuinely useful tool. I use it, and it means I spend less time typing. It handles boilerplate, syntax translations, and repetitive scaffolding well. It is not a fad.
But it is also not a magic bullet. Using AI effectively requires you to already know how to engineer software. If you do not understand system design, state management, or how data should flow through an application, AI will not save you. It will just help you write unmaintainable code much faster.
To get useful output, you have to be highly specific. I recently built Ghostwire, a Cloudflare bypass library, entirely through directed AI generation. To make that work, I had to define the architecture, dictate the boundaries, and state the constraints explicitly. The machine writes the implementation, but the human must own the engineering.
The review bottleneck
The most significant problem AI introduces to a team is where it shifts the friction. Fred Hebert wrote that complexity has to live somewhere — it cannot be removed, only moved. AI is a clean example of this. Historically, software development was constrained by the speed of writing code. AI removes that constraint. A single developer can now generate massive pull requests in an afternoon.
This moves the work onto the reviewer. Reading code takes more cognitive effort than writing it. Reading AI-generated code requires even more.
When you review a human’s pull request, you are following a logical thread. You can usually infer the developer’s intent, even if the execution is wrong. AI does not have intent. It predicts tokens based on patterns. It will confidently produce a function that looks structurally perfect but contains a subtle race condition, a hallucinated library method, or a complete misunderstanding of the domain model.
AI removes the friction of typing. But typing was never the hard part of software engineering.
The time saved during generation is easily lost if the reviewer has to untangle plausible-sounding but fundamentally broken logic. You cannot skim AI-generated code. You have to read every line with active suspicion.
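To make the "subtle race condition" concrete, here is a hypothetical sketch of the kind of code an assistant produces: a request counter that looks structurally perfect and survives a skim, but whose read-modify-write is not atomic. The class names and harness are mine, invented for illustration.

```python
import threading

# Hypothetical: the kind of counter an AI assistant might generate.
# It reads cleanly, but `self.count += 1` is a load-add-store sequence
# that concurrent threads can interleave, silently losing updates.
class RequestCounter:
    def __init__(self):
        self.count = 0

    def record(self):
        self.count += 1  # racy: not atomic across threads

# The fix a careful reviewer would insist on: guard the update.
class SafeRequestCounter:
    def __init__(self):
        self.count = 0
        self._lock = threading.Lock()

    def record(self):
        with self._lock:
            self.count += 1

def hammer(counter, threads=8, increments=10_000):
    """Hit the counter from several threads and return the final count."""
    def work():
        for _ in range(increments):
            counter.record()
    workers = [threading.Thread(target=work) for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter.count

# The locked counter always lands on exactly threads * increments.
# The racy one may or may not, depending on thread scheduling --
# which is exactly why it passes review until it doesn't.
print(hammer(SafeRequestCounter()))  # 80000
```

Nothing in the racy version flags itself. You only catch it by reading the increment with active suspicion, which is the whole point.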
The knowledge gap
There is a second place the work gets moved, and it is less visible.
When AI handles a domain you are expanding into, the feature ships but the understanding does not necessarily follow. I ran into this building the LDAP integration for Security Platform API. I knew what I needed, I directed the implementation, and it worked. But I came away without a solid mental model of the protocol itself — how the directory tree is structured, what distinguished names are actually doing, why certain bind sequences behave the way they do. I understood the integration at the level of my codebase. I did not understand LDAP.
This is not a beginner problem. If anything, experience makes it harder to notice. When you are directing an AI through unfamiliar territory, the output looks like code you would have written yourself. There is no obvious signal that anything was missed.
When you write something from scratch, even slowly, the gaps surface. You hold the concepts in your head long enough to implement them, and they stick. That process gets skipped when the implementation arrives ready-made. The work of actually learning the domain gets moved — to later, when something breaks and you have to understand it under pressure.
The output is done. The understanding is optional.