Emilio Carrión
Invisible Heuristics: What Seniors Know but Can't Explain
Senior engineers resolve incidents faster, but they can't explain how they do it. Gary Klein discovered the same thing with firefighters in 1984: tacit knowledge is built through experience and can't be easily articulated. This matters now more than ever.
In 1984, a psychologist named Gary Klein sat down with a group of veteran fire commanders with a simple question: how do you make decisions when a building is burning?
They all told him the same thing: "We don't make decisions. We just follow the procedure."
Klein didn't buy it. He asked them to describe their last complicated fire. And what he discovered changed the science of decision-making forever.
It turns out veteran firefighters did make decisions (dozens of them, at full speed, under extreme pressure). But they didn't do it by comparing options the way classical theory says. They did it by recognizing patterns. They saw the smoke, felt the heat, heard the sound of the fire, and their brain instantly returned an action. No deliberation. No analyzing alternatives. Without even being aware they were deciding.
In 80% of cases, the first course of action that came to mind was the one they executed. And it worked.
I've been thinking about this research for months. Because I'm seeing the exact same thing in software engineers.
In 60 seconds: Seniors resolve incidents faster than juniors, even in code they've never seen. Not because they're smarter, but because they have an internal library of failure patterns built over years. It's tacit knowledge (what Polanyi called "we know more than we can tell"). And now, with AI writing more and more production code, these invisible heuristics have become critical. The problem is that nobody is passing them on.
The 3 AM Firefighters
In my previous article I wrote about a problem that's spreading silently: we're filling production with code that nobody fully understands. AI-generated code, code inherited from engineers who have left, systems whose mental model has evaporated.
And I mentioned something I found revealing: engineers with more experience resolve incidents in these systems faster. Not because they know the code (nobody does). But because they have something more.
Since then I've been paying attention to what exactly these engineers do when they face an opaque system at 3 AM. And I've been identifying repeating patterns.
It turns out they do the same thing Klein's firefighters did.
They don't compare options. They don't read the code line by line. They don't follow a mental checklist of possible causes. They look at a few signals, and something in their head tells them "look there." And they're almost always right.
I call them invisible heuristics (because even they don't know they have them).
Knowledge You Can't Articulate
In 1958, the philosopher Michael Polanyi wrote something that sounds simple but is profound: "We know more than we can tell." He called it tacit knowledge. It's what lets you recognize someone's face among a thousand people without being able to explain how. It's what a surgeon has when they sense something "doesn't look right" before any metric confirms it.
Senior software engineers are full of tacit knowledge. Years of watching systems fail in specific ways have left them with an internal library of failure patterns. But if you ask them "how did you know the problem was there?", most will say something like "I don't know, it was obvious" or "it just looked like that."
It's not that they don't want to explain. They literally can't. Tacit knowledge, by definition, can't be easily articulated. It's acquired through experience, not instruction. It's transferred through observation and imitation, not documentation.
And here's the problem.
What I've Observed They Do (and Don't Know They Do)
After paying attention for months, I've identified a handful of heuristics that senior engineers use repeatedly when operating systems they don't understand. None of them described their heuristics to me in these terms; I extracted them by watching what they actually did.
They don't start with the code. They start with the symptoms. A junior opens the file and starts reading. A senior looks at metrics, logs, and the system's behavior from the outside before touching a single line of code. They're building a mental model of the problem before looking for the cause. They're asking "what type of failure produces these symptoms?" before asking "which line is broken?"
They classify before they investigate. Before diving deep, a senior has already placed the problem in a category: "this looks like a connection leak," "this smells like a race condition," "this looks like a timeout on an external service." That classification, which seems instantaneous, is the result of having seen dozens of similar problems. It's exactly what Klein calls recognition-primed decision: the brain recognizes the pattern and returns a category without you consciously asking for it.
They know what to ignore. This is perhaps the most important and most invisible one. During an incident, the amount of information is overwhelming: logs, metrics, alerts, stack traces, code. A junior tries to process everything. A senior discards 80% in seconds. They know that error in the logs always appears and means nothing. They know that alert is noise. They know the stack trace is pointing at the symptom, not the cause. Knowing what to ignore is a heuristic that only builds through seeing many false positives.
They formulate hypotheses that can be falsified quickly. A senior doesn't think "I'm going to read all the code until I understand it." They think "if my hypothesis is correct, I should see X in the logs of service Y." They go straight to verify or rule out. If it fails, they formulate another hypothesis. It's the scientific method compressed into minutes, applied instinctively. Klein found that firefighters did exactly this: they mentally simulated their course of action before executing it, looking for signs it might fail.
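That hypothesis loop can be made concrete. Here's a minimal sketch of hypothesis-driven triage as code; every hypothesis name, metric key, and threshold below is a hypothetical illustration, not a prescription:

```python
# Hypothesis-driven triage: each hypothesis comes with a cheap falsifier.
# Metric names and thresholds are invented for illustration.

def check_connection_pool(metrics):
    """A connection leak shows pool usage climbing toward its cap."""
    return metrics["pool_in_use"] / metrics["pool_max"] > 0.9

def check_recent_deploy(metrics):
    """An abrupt failure right after a deploy implicates the deploy."""
    return metrics["minutes_since_deploy"] < 30

def check_upstream_timeouts(metrics):
    """Timeouts on an external dependency show up as a high timeout rate."""
    return metrics["upstream_timeout_rate"] > 0.05

# Cheapest-to-verify hypotheses first: each can be ruled out in seconds.
HYPOTHESES = [
    ("connection leak", check_connection_pool),
    ("bad deploy", check_recent_deploy),
    ("upstream timeout", check_upstream_timeouts),
]

def triage(metrics):
    for name, falsifier in HYPOTHESES:
        if falsifier(metrics):  # hypothesis survived its quick test
            return name
    return "no match: widen the search"
```

The point isn't the code itself but the shape of the reasoning: every hypothesis is paired with the fastest observation that could kill it, and the loop moves on the moment one is ruled out.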
They navigate by architecture, not by code. When a senior opens code they've never seen, they don't read it like a book from start to finish. They look for entry points, identify architectural patterns (is it event-driven? is there a pipeline? where are the boundaries?), and build a high-level map. Only after having that map do they zoom into detail. A junior goes straight to the detail and gets lost.
They use time as a diagnostic variable. "Did this start suddenly or gradually?" is a question a senior asks almost always and a junior almost never. The answer completely changes the search space. An abrupt change suggests a deploy, a configuration change, or a crossed threshold. A gradual change suggests a leak, a degradation, or a cumulative effect. That simple question eliminates 50% of possible hypotheses.
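The abrupt-versus-gradual question is so mechanical you could sketch it as a one-function heuristic. This is a deliberately crude illustration (the `jump_factor` threshold is an arbitrary assumption):

```python
def classify_onset(latencies, jump_factor=2.0):
    """Crude onset classifier over a latency time series.

    A single sample jumping to jump_factor times the previous one
    suggests an abrupt change (deploy, config, crossed threshold);
    a steady climb suggests a leak or cumulative degradation.
    """
    for prev, cur in zip(latencies, latencies[1:]):
        if prev > 0 and cur / prev >= jump_factor:
            return "abrupt"
    return "gradual"

# Usage: a sudden 4x step reads as "abrupt", a steady climb as "gradual".
classify_onset([100, 102, 105, 480, 490])
classify_onset([100, 120, 145, 170, 200])
```

A senior does this classification by eye on a dashboard in two seconds; the value of writing it down is that it makes the implicit decision rule discussable.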
Why This Matters Now More Than Ever
All of this existed before AI. Seniors have always been better at debugging someone else's code. That's not new.
What's new is the scale of the problem.
When AI writes 30-40% of the code, the amount of "someone else's code" in production multiplies exponentially. It's no longer code that a colleague who left two years ago wrote — it's code generated by a statistical model that never had intent, never understood the domain, and never documented its decisions.
A recent article in InfoWorld described it precisely: seniors have "architectural memory" (they remember the outage caused by a coupled service, they remember the debate that led to isolating a component, they remember why simplicity was chosen over extensibility in a specific module). AI doesn't have access to that memory. And the more AI-driven the workflow, the more valuable that memory becomes.
Meanwhile, the Stack Overflow 2025 survey shows that 45% of developers say debugging AI-generated code takes them longer than writing it themselves. And 66% say their biggest frustration is AI solutions that are "almost right, but not quite."
The invisible heuristics of seniors are exactly what you need to operate in this world. And there's a huge problem: nobody is passing them on.
The Broken Pipeline
Traditionally, heuristics were transmitted without anyone planning it. A junior sat next to a senior during an incident. They watched what they looked at first, what they discarded, what they asked. They did pair programming. They reviewed PRs and learned to recognize patterns. Over the years, they built their own library of failure patterns.
That pipeline is breaking on two fronts simultaneously.
First, AI is absorbing the simple tasks that served as training. Today's juniors have fewer opportunities to build mental models gradually because AI is eating exactly that work.
And second, the seniors themselves don't know they have these heuristics. They don't teach them because they're not aware of them. When Klein asked the firefighters how they decided, they said "we just follow the procedure." When you ask a senior how they knew the problem was a connection leak, they say "I don't know, it was obvious." They're not hiding anything from you. It's just that tacit knowledge, by nature, is invisible to the person who has it.
Making the Invisible Visible
How do you start transmitting something you can't even articulate? Klein worked on that too. He developed techniques like the Critical Decision Method: sitting with an expert, walking through an incident step by step, and asking specific questions about what they saw, what they expected to see, and what would have changed their decision. The goal isn't for the expert to give you a rule — it's for you, as the observer, to extract the rule they don't know they're using.
I think we can do something similar in software engineering. Some ideas I'm exploring:
Postmortems focused on reasoning. Instead of only asking "what failed and how did we fix it," ask "what did the person who diagnosed it look at first? What did they discard? What made them suspect the real cause? What would they have looked at differently if the symptoms had been different?" That turns every incident into an opportunity to make the senior's heuristics explicit.
Debugging out loud. When a senior investigates an incident, have them narrate their thinking. Not to teach, but so others can observe the reasoning process. It's the closest thing to an apprenticeship you can do in software. "I'm looking at the gateway logs because the latency suggests the problem is before the service, not inside..."
Failure pattern catalog. Document the patterns your team sees repeatedly. Not as formal technical documentation, but as operational heuristics: "When latency rises gradually over hours, the first thing we check is X. When it spikes suddenly after a deploy, we check Y." Make explicit what seniors already know implicitly.
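A catalog like that can be as lightweight as a list of records kept next to the runbooks. A sketch, with entries that are illustrative rather than prescriptive:

```python
# A minimal failure-pattern catalog: symptom -> category -> where to look first.
# Every entry here is an invented example of the format, not team doctrine.
FAILURE_PATTERNS = [
    {
        "symptom": "latency rises gradually over hours",
        "category": "leak / cumulative degradation",
        "check_first": ["heap and connection-pool metrics", "object counts over time"],
        "ignore": ["per-request stack traces (they show the victim, not the cause)"],
    },
    {
        "symptom": "latency spikes suddenly after a deploy",
        "category": "bad deploy / config change",
        "check_first": ["deploy timeline", "diff of the released change"],
        "ignore": ["long-standing warning logs"],
    },
]

def first_checks(symptom_keywords):
    """Return the 'check first' list of the first pattern whose symptom
    description contains all the given keywords."""
    for pattern in FAILURE_PATTERNS:
        if all(k in pattern["symptom"] for k in symptom_keywords):
            return pattern["check_first"]
    return []
```

Notice the `ignore` field: writing down what *not* to look at captures the most invisible heuristic of all.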
What Comes Next
I'm convinced that invisible heuristics will define the difference between an engineer who simply uses AI and one who can operate what AI produces. It's the difference between the pilot who trusts the autopilot and the one who can fly when the autopilot fails.
Klein showed that expert firefighters weren't smarter or had better reflexes. They had a richer internal library of patterns, built over thousands of hours of real experience. Senior software engineers have the same advantage. And right now it's an advantage that's becoming critical, but that we're neither recognizing nor transmitting.
There's no such thing as free code. Someone has to operate it. And the heuristics of the person who operates it are the one thing an LLM can't generate.
Question for you: Can you identify a heuristic you use when debugging that you've never articulated out loud? I'd love for you to share it — because that's exactly the one we need to make visible.
