Agency Beats Intelligence: How I Now Hire (And Evaluate Myself)

[Illustration: sketch of a compass choosing a direction through competing paths]

Agency: choosing a direction and moving, even through uncertainty

About eighteen months ago I passed on a candidate who solved our take-home faster and more elegantly than anyone we had seen in two years of searching. The code was clean. The architecture was sound. One of my senior engineers texted me after the debrief: "That's the best solution we've ever gotten — we have to hire this person."

We did not hire this person.

The take-home was a scoped, well-defined problem. When we got to the live interview and gave the candidate something open-ended — a vague product requirement with no clear success criterion, deliberately underspecified — the wheels came off. They asked for clarification repeatedly, each question narrowing the problem until it was essentially prescribed. When I said "make a call and run with it," they froze. When I asked them to walk me through how they would know if the solution was working, they could not answer. They had never had to define that themselves. They had always been handed the definition.

We hired someone else. Someone whose code was messier, whose system design had one choice I would have made differently. But when we gave that person ambiguity, they moved. They made a call, stated their assumption, built toward it, and checked in at the right moments. Eighteen months later that person is one of my most effective engineers. The brilliant candidate? I found out they took a role at a large tech company where problems are well-defined and requirements arrive fully specced. Smart people find their right environments. But they were not right for mine.

That decision crystallized something I had been sensing for a while but had not articulated cleanly. Intelligence — raw problem-solving horsepower — is no longer the scarce thing. Agency is.

The Old Mental Model

For most of my career, the hiring signal I was optimizing for was intellectual firepower. Could this person reason through a hard problem? Could they hold complexity in their head? Were they fast? Were they rigorous?

This made sense for a long time because software was a domain where raw cognitive output was directly rate-limiting. The best engineer in the room was usually the one who could hold the most mental state, navigate the most layers of abstraction, and produce the clearest logic under pressure. Intelligence was the bottleneck, and credentials were a credible proxy for it — not perfect, but correlated enough to be useful.

Then AI tools became good. And everything I thought I knew about the bottleneck shifted.

The smartest engineers I know are not measurably faster than they were three years ago. The ones with high agency are two to five times more productive. That gap is not explained by intelligence. It is explained by something else — something I had not been hiring for systematically.

What Changed

The distinction I keep coming back to: intelligence is the ability to understand. Agency is the ability to act, pursue goals, learn from feedback, and self-correct when things go wrong. These are not the same thing, and in an era where AI tools can do a substantial portion of the understanding for you, the delta between them matters enormously.

I have watched this play out on my teams in real time. The engineers who are thriving are not always the smartest ones in the traditional sense. They are the ones who treat AI tools as capable collaborators — they direct clearly, catch errors, set success criteria, and know when to override the output. They can decompose an ambiguous goal into a series of concrete, verifiable sub-tasks. They do not wait to be told what done looks like. They define it themselves and then build toward it.

The engineers who are struggling are often highly intelligent people who have not yet made the mental shift. Their intelligence is not in question. But they are waiting for clarity that is not coming. They are asking for the specification that nobody has written. They are uncomfortable in the gap between intent and instruction. AI tools have made that gap larger, not smaller — because now the leverage is on the person doing the directing, and directing is an agentic skill.

Agency, Defined Concretely

Agency is not hustle. It is not "moves fast." It is a specific cluster of behaviors that are observable if you know what to look for.

Setting success criteria without being asked. The agentic engineer, handed a vague requirement, produces their own definition of done before they write a line of code. They make it legible to others. When I see someone do this unprompted, it is a strong signal.
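Concretely, that often means the success criteria get written down as executable checks before the implementation exists. Here is a minimal sketch in Python; the feature (a retry helper) and its thresholds are hypothetical, picked only to make the criteria concrete:

```python
# done_criteria.py: a sketch of writing the definition of done before the
# implementation. The retry helper and its thresholds are hypothetical.

def retry(fn, attempts=3):
    # The implementation came last. The checks below were written first.
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as e:
            last_error = e
    raise last_error

# Success criteria, stated as executable checks up front. If I cannot
# write these, I do not yet know what done means.
def check_succeeds_on_first_try():
    calls = []
    def ok():
        calls.append(1)
        return "done"
    assert retry(ok) == "done" and len(calls) == 1

def check_retries_transient_failures():
    calls = []
    def flaky():
        calls.append(1)
        if len(calls) < 3:
            raise IOError("transient")
        return "done"
    assert retry(flaky) == "done" and len(calls) == 3

def check_gives_up_after_the_limit():
    def broken():
        raise IOError("permanent")
    try:
        retry(broken, attempts=2)
    except IOError:
        return  # re-raised after exhausting attempts: the contract holds
    raise AssertionError("expected retry to re-raise after 2 attempts")

if __name__ == "__main__":
    for check in (check_succeeds_on_first_try,
                  check_retries_transient_failures,
                  check_gives_up_after_the_limit):
        check()
    print("definition of done: met")
```

The checks existed before the function body did. That ordering, not the code itself, is the behavior I am describing.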

Deciding under uncertainty. Every project has a moment where you lack the information you would ideally have before making a call. The agentic engineer makes the call. They state the assumption. They note the risk. They move. The non-agentic engineer waits, asks, escalates, or worse — quietly spins in place until someone notices.

Catching their own failures. This is the one I weight most heavily. In an AI-augmented workflow, catching failure is the whole game. AI tools are confidently wrong at a rate that will sink you if you are not critically engaged with their output. Engineers who accept AI-generated code without reading it, without running it against an adversarial case, without checking the edge cases — they multiply risk. The agentic engineer has built-in skepticism. They verify. They probe. They test the thing they just built before they ship it.
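To make the probing concrete, here is a minimal sketch. parse_percentage is a stand-in for something an AI assistant might hand you; both the function and the probe inputs are hypothetical, not from any real session:

```python
# probe_ai_output.py: a sketch of adversarially checking code before
# accepting it. parse_percentage is a hypothetical AI suggestion.

def parse_percentage(text: str) -> float:
    """Hypothetical AI-suggested helper: '85%' -> 0.85."""
    return float(text.strip().rstrip("%")) / 100.0

# Adversarial cases first: whitespace, boundaries, negatives, empty
# input, a missing '%'. These are the inputs the happy path never sees.
probes = [
    ("85%", 0.85),     # the case the demo was written for
    ("  100% ", 1.0),  # stray whitespace
    ("0%", 0.0),       # boundary
    ("-5%", None),     # negative: silently accepted today. Intended?
    ("85", 0.85),      # no '%' at all: also silently accepted
    ("", None),        # raises ValueError. Is that the contract I want?
]

for raw, expected in probes:
    try:
        got = parse_percentage(raw)
        status = "ok" if got == expected else f"UNEXPECTED: {got}"
    except ValueError as err:
        status = f"raises {err!r}"
    print(f"{raw!r:>10}  {status}")
```

The point is not this particular function. It is the habit of reaching for the hostile inputs before accepting the code.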

Self-correcting without drama. Things go wrong. The agentic engineer diagnoses, adjusts, and re-runs. They do not disappear into a shame spiral. They do not over-explain or defensively reframe. They fix it and tell you what they changed and why.

How I Evaluate It Now

I changed my interview process in two meaningful ways.

I dropped most of the algorithmic whiteboard questions. Not because they are useless — they do measure something — but because they measure intelligence in a context that is now less rate-limiting. I kept a small set of problems that require judgment under time pressure, because judgment under pressure is still relevant. But I stopped treating performance on clean, well-specified algorithmic problems as a strong signal.

What I added is a deliberately underspecified take-home component followed by a structured debrief. The take-home prompt is written to be genuinely ambiguous. We do not clarify the ambiguity in advance. We want to see what candidates do with the space: do they make a call, or do they spin? Do they define their own success criteria, or do they produce something and wait to be told if it is right?

In the debrief, I ask three questions that are now my most reliable signals:

"How did you know when you were done?" This question separates people who built toward a target they owned from people who built until the time ran out.

"What would break this?" I want to see if they went looking for the failure modes themselves. The engineers who can enumerate adversarial cases — who tested the edges, who prodded the weak spots — are the ones who will not ship the bug I discover in production.

"Tell me about a time you were confident you were right and turned out to be wrong. What did you do next?" This one surfaces self-correction. What I am listening for is speed and lack of defensiveness. How quickly did they recognize the error? Did they blame the external or interrogate the internal? Did they adapt the process or just the output?

I also pay close attention to how candidates use AI during the take-home. We explicitly permit it — encourage it, even. Watching how they direct the tool — what they accept, what they override, which questions they hand to it and which they reason through themselves — tells me more about their agency than anything else in the process.

Three Types I Have Observed

In the last eighteen months of hiring through this lens, I have started to see three rough categories of engineer in the AI era.

Thriving: high agency, decent intelligence. These people are operating at a level that would not have been possible for someone at their intelligence tier five years ago. The tools have multiplied their reach. They direct clearly, catch failure reliably, and compound quickly because they close their own feedback loops without waiting for external correction. They are the ones I try hardest to keep.

Surviving: low agency, high intelligence. These are often very smart people doing work that looks good on paper but is slower than it should be. They are not yet directing AI tools effectively. They may distrust the tools or under-use them, or they may over-rely on them without adequate skepticism. The bottleneck is not their cognitive horsepower — it is their comfort with unspecified goals and their willingness to own the definition of done. Some of them are developing agency. Some are waiting for the environment to become more structured again. The waiting strategy does not work.

Struggling: low on both. This category is honestly small. Most people in engineering have decent baseline intelligence. But there is a subset who entered the field in a period when specialization allowed them to operate competently in a very narrow lane, and that lane has been automated or deprioritized. The path forward for them is more about career reorientation than skill development.

If You Are Evaluating Yourself

This framing is not just a hiring framework. It is the most honest lens I have found for evaluating my own trajectory.

Agency is developable. I know because I have watched people develop it deliberately, and I have noticed the specific conditions that seem to accelerate it. The clearest pattern: agency grows when you take on problems that are more ambiguous than you are comfortable with, and you force yourself to produce a success criterion before you start working. Do it in writing. Share it with someone. The act of externalizing what "done" means before you begin is a practice, and like most practices, repetition compounds.

The other accelerator is closing feedback loops yourself instead of waiting for them to close externally. When you ship something, run your own postmortem. When you use an AI tool, evaluate its output adversarially before you accept it. When something goes wrong, write down what assumption was incorrect and what you would do differently. The feedback is always available. The agentic habit is to seek it rather than wait for it.
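A lightweight version of that postmortem habit, sketched in Python. The fields and the example entry are assumptions of mine, not any standard format; the useful part is that the log makes one question answerable over time: how often did I catch the failure myself?

```python
# postmortem_log.py: a sketch of a lightweight personal postmortem kept
# as structured data. Fields and the entry are hypothetical.

from dataclasses import dataclass
from datetime import date

@dataclass
class Postmortem:
    when: date
    shipped: str
    bad_assumption: str   # the belief that turned out to be wrong
    caught_by_me: bool    # or did the feedback loop close externally?
    change: str           # what I will do differently: process, not output

log = [
    Postmortem(
        when=date(2025, 3, 14),
        shipped="batch import endpoint",
        bad_assumption="upstream IDs are unique",
        caught_by_me=False,  # a customer report found it, not my checks
        change="probe generated validators with duplicate rows first",
    ),
]

# The question the log exists to answer over time:
self_caught = sum(p.caught_by_me for p in log)
print(f"failures caught by my own checks: {self_caught}/{len(log)}")
```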

The engineers I want to hire — and the engineer I want to keep being — are not the ones who are smartest in the room. They are the ones who can walk into a room where nobody is sure what the right thing to do is, form a view, execute against it, notice when they are wrong, and adjust. Intelligence helps. It is not sufficient. It never really was.

It is just more visible now.