The Illusion of Constant Acceleration

Spend enough time around AI right now and you start to get the feeling that everything is speeding up, all the time.

Every week there is a new model, a new capability, a new claim that some industry is about to be remade. It starts to feel like rapid change is just the new baseline. Like history has bent into a permanently steeper slope.

I do not think that is right.

What I think is closer to the truth is that we have gotten used to confusing motion with progress, and delay with inevitability. Some things are moving very quickly. Others are barely moving at all. We treat the speed of the former as inevitable and the slowness of the latter as unavoidable.

Neither is true.

My father was born in 1942. That is not ancient history. When he was born, there was still a lot of basic infrastructure left to build.

Within a little more than a decade, nonstop transcontinental passenger air service became viable. Less than eight years after that, a human entered space. Eight years later, people were walking on the Moon.

That is a staggering amount of change in a very short period of time.

In one person’s early life, we went from making coast-to-coast air travel practical to landing human beings on another celestial body. Not as a thought experiment. Not as a roadmap. We just did it.

And it was not only aerospace. The Golden Gate Bridge was built in about four years. The first transcontinental railroad was completed in about six. These were massive physical undertakings that reshaped how people moved and how economies functioned, delivered on timelines that would feel almost implausible now.

The easy way to dismiss this is to say that software is fast and physical infrastructure is slow. That if AI looks fast and transit looks slow, that is just how the world works.

But that does not really hold up.

Ukraine did not build its drone ecosystem on leisurely timelines. Tesla compressed what many assumed would be a slow industrial transition into something the rest of the auto industry had to react to. When something actually matters, physical systems move. Supply chains get reorganized. Tradeoffs get made. Bureaucracies get bent. Talent concentrates. People stop explaining why something is hard and start figuring out how to get it done.

That is part of what makes Artemis interesting.

This is not a criticism of Artemis, NASA's current program to return astronauts to the Moon. It is an ambitious and serious effort. But it is also a reminder that progress is not self-sustaining: more than half a century after Apollo, getting back is proving slower than getting there the first time. Apollo is often remembered as a triumph of technology, but it was just as much a triumph of focus, alignment, and urgency. Artemis reminds us that those things matter just as much as the rockets do.

There is another force that shows up in systems like this.

At Google, there was a name for it: slime mold.

It is what happens when layers of process, approvals, coordination costs, and local incentives build up over time until forward motion gets harder even when nobody involved is being unreasonable. Everything makes sense on its own. The system just moves more slowly.

Technology policy has its own versions of slime mold.

We saw it in the crypto wars, when policymakers convinced themselves that math could be slowed down with policy, as if cryptographic reality were open to negotiation. It was not. What that produced was not real control. It produced friction, workarounds, and the illusion of governance.

You can see the same instinct showing up again in parts of the conversation around AI. When institutions feel outpaced, they respond with process. That instinct is understandable, but it rarely solves the problem. You do not make systems safer by pretending inevitabilities are optional. You make them safer by building the infrastructure, incentives, and accountability needed to deal with what is actually happening.

But that is not how we tend to think about progress.

We talk about technological achievement as if it were mostly about invention, as if once something has been demonstrated it remains latent in society, ready to be called back into service whenever we need it.

That is not how any of this works.

The ability to do ambitious things quickly depends on organizational memory, industrial capacity, political alignment, tolerance for risk, and a culture that still expects big things to happen on human timescales.

Lose enough of that, and even getting back to where you once were becomes hard.

You can see it in infrastructure. Projects that once would have been treated as urgent now take decades, often in fragments so small that earlier generations would have treated them as preliminary milestones. Over time, that changes expectations. Slowness starts to look like responsibility. Ambition starts to sound naive.

That is the trap.

The problem is not just that progress slows. It is that people get used to it. What would once have looked like drift starts to look like process. What would once have sounded like an excuse starts to sound like maturity.

Meanwhile, in domains where urgency and incentives line up, things still move very quickly. ChatGPT was released publicly in late 2022. In a few years, AI went from something most people associated with research labs to something embedded in everyday workflows, products, and policy debates.

AI did not prove that everything is accelerating.

It proved that when enough capability, capital, and attention line up, rapid change is still possible.

That is the point.

The world is not uniformly speeding up. Some parts of it are. Others are not. And the difference has less to do with atoms versus bits than with whether we have decided something actually matters.

That ought to make us a little less complacent.

People like to tell themselves that once a technology is important enough, the rest somehow sorts itself out. The problems get solved. The risks get managed. The surrounding systems catch up.

History does not really support that.

Things were only all right in the past because people worked very hard to make them all right. The systems that made aviation safe, that made infrastructure dependable, that made computing usable in high-trust environments, none of that appeared on its own.

The same will be true here.

If we want AI to be safe, trustworthy, and broadly useful, that will not happen as a side effect of capability gains. Security will not emerge on its own. Governance will not emerge on its own. The infrastructure needed to make these systems worthy of dependence will not emerge on its own.

Those things only happen when people decide they matter.

That is the real problem with the idea that everything is accelerating. It makes it easy to believe that progress takes care of itself.

It does not.

Progress happens when people decide it needs to, and then do the work.
