This past week I spent more concentrated time with the newest generation of AI models than I have in months. What struck me was not just that they are better, but where they are better. They now handle routine engineering tasks with a competence that would have seemed impossible a year ago. The more I watched them work, the more obvious it became that the tasks they excel at are the same tasks that used to form the on-ramp for new engineers. This is the visible surface layer of software development, the part above the waterline in MIT’s Iceberg Index.
What these systems still cannot reach is everything beneath that waterline. That submerged world contains the tacit knowledge, constraint navigation, history, intention, and human forces that quietly shape every real system. It holds the scars and the institutional memory that never appear in documentation but govern how things actually work.
Developers have always been described mostly by skills. You could point to languages, frameworks, and tools and build an easy mental model of who someone was. These signals were simple to compare, which is why the industry relied on them. But skills alone do not explain why certain developers become the ones the entire organization depends on. The difference has always been context.
What the models can and cannot do
The models thrive in environments that are routine, self-contained, and free of history. They write small functions. They assemble glue code. They clean up configuration. They do the kind of work that once filled the first two years of an engineering career. In this territory they operate like a competent junior developer with perfect memory.
The challenges begin where real systems live. The deeper you go, the more you find decision spaces shaped by old outages, partial migrations, forgotten constraints, shifting incentives, and compromises that were never recorded. Production systems contain interactions and path dependencies that have evolved for years. These patterns are not present in training data. They exist only in the experiences of the people who worked in the system long enough to understand it.
There is also a human operating layer that quietly directs everything. Customers influence it. Compliance obligations shape it. Old political negotiations echo through it. Even incidents from years ago leave marks in code and behavior that no documentation captures. None of this is visible to a model.
The vanishing on-ramp
As AI absorbs more of the low-context work, the early career pathway narrows. New engineers still need time inside real systems to build judgment, but the tasks that once provided this exposure are being completed before a human ever sees them. The set of small, safe tasks that helped beginners form a mental map of how systems behave is slowly disappearing.
This creates a subtle but significant problem. AI takes on the easy work. Humans are asked to handle the hard work. Yet new engineers have fewer opportunities to learn the hard work, because the simple tasks that once served as scaffolding are no longer available. The distance from beginner to meaningful contributor grows longer just as the ladder is being pulled up.
AI can help with simulated practice. A motivated learner can now ask a model to recreate plausible outages, messy migrations, ambiguous requirements, or conflicting constraints. These simulations resemble real scenarios closely enough to be useful. For people with curiosity and drive, this is a powerful supplement to traditional experience.
But a simulation is not the same as lived exposure. It does not restore the proving ground. It does not give someone the slow accumulation of judgment that comes from touching a system over time. The skill curve can accelerate, yet the opportunities to prove mastery shrink. We will need more developers, not fewer, but the pathway into the profession is becoming more difficult to follow.
What remains human
As skills become easier to acquire and easier to automate, the importance of context grows. Contextual judgment allows someone to understand why an architecture looks the way it does, how decisions ripple through a system, where the hidden dependencies live, and how history explains the odd behaviors that would otherwise be dismissed as bugs. These insights develop slowly. They come from exposure to the real thing.
There is also a form of entrepreneurial capability that stands out among strong engineers. It is the ability to make decisions that span technical concerns, organizational dynamics, customer needs, and long-term consequences, often without complete information. It is the ability to reason across constraints and understand how tradeoffs echo through time. This capability is uniquely high-context and uniquely human.
At a more granular level, some work is inherently easier to automate. Common patterns with clear boundaries are natural territory for models. Rare or historically shaped tasks are not. Anything requiring whole-system awareness remains stubbornly human. This aligns with predictions from economic and AI research: visible tasks are automated first, while invisible tasks persist.
The vanishing on-ramp sits directly at this intersection. AI is consuming the visible work while the invisible work becomes more important and harder for new engineers to access.
What we must build next
If the future is going to function, we need new mechanisms for developing context. That may mean rethinking apprenticeships, creating ways for beginners to interact with real systems earlier, or designing workflows that preserve learning opportunities rather than eliminating them. Senior engineers will not only need to solve difficult problems but will also need to create the conditions for others to eventually do the same.
AI is changing the shape of engineering. It is not eliminating developers, but it is transforming how people become developers. It removes the visible tasks and leaves behind the invisible ones. The work that remains is the work that depends on context, judgment, and the slow accumulation of lived understanding.
Those qualities have always been the real source of engineering wisdom. The difference now is that we can no longer pretend otherwise.
This shift requires us to change how we evaluate talent. We can no longer define engineers by the visible stack they use. We must define them by the invisible context they carry.
I have been working on a framework to map this shift. It attempts to distinguish between the skills AI can replicate today (common domains, low complexity) and the judgment it cannot (entrepreneurial capability, systems awareness).