
Teach to the Median, Punish the Variance

Factories exist to produce consistent, cost-effective products. That is the point. The relentless optimization of cost of goods sold is not a side effect of industrial production. It is the mandate. And it works, until it doesn’t. The reason products don’t last as long as they did twenty years ago is not that we forgot how to make durable things. It is that durability lost the cost argument. Quality is expensive. Variance is expensive. The system optimizes both out. What survives is the median product, built to a price, reliable enough to ship, and no more.

Modern schooling often behaves the same way. It batches children by age, sequences content for throughput, and optimizes for a predictable median. Sir Ken Robinson made this observation twenty years ago, and the metaphor stuck, not because it is clever but because it names incentives, not architecture. When a system must operate at scale under budget, policy, and staffing constraints, variance becomes expensive. The median becomes the target. Outliers become the problem.

That is how you get the loop so many families recognize.

A child with a spiky profile, gifted and struggling at the same time, or simply learning in a different sequence, is hard for a production line to interpret. The system cannot see internal state. It can only see outputs it knows how to count. Pacing, compliance, turn-in rates, standardized measures, and classroom friction. When it cannot measure what is actually happening, it collapses complexity into a label. Lazy. Defiant. Behind. Broken. Sometimes worse. The misclassification is not incidental. It is structural. The factory cannot afford to treat every student as a special case, so it treats special cases as defects.

Twice-exceptional programs were a serious attempt to address exactly this failure mode. 2e was not supposed to be a vibe. It was an operational category, a way to route support without denying capability.

Institutions rarely attack reforms head-on. They metabolize them. The common move is not to announce that everyone is 2e. It is more subtle. Fold 2e into the general program, justify the change as an opportunity for all, and quietly remove the differentiated pathways, expertise, and accountability that made 2e real. The label survives. The function does not. The specialist becomes a roaming consultant, the pull-out becomes a generic intervention block, and the documentation becomes a checkbox.

Spencer Silver at 3M spent years trying to make a strong adhesive and produced one that was too weak to hold permanently. By factory logic it was a failed batch. It sat in the lab for years because the system had no category for a glue that did not stick properly. A colleague with a different problem recognized the variance as the feature. The factory almost never found out what it had.

This pattern is familiar in M&A. Companies are often acquired to address a capability, culture, or talent gap. The acquirer gets what it wanted on paper, and then the organization takes over. Microsoft bought Hotmail to compete in web-based email. Hotmail ran on FreeBSD. Microsoft ported it to Windows, the product degraded, and what had been acquired to solve a problem became an example of the problem. The engineers who built Hotmail watched what they had created get dismantled and left. The institution did not transform around the acquisition. The acquisition transformed into the institution, and the talent that made it valuable walked out with their badges.

The proof a program still exists is not whether the brochure mentions it. It is whether the supports remain distinct, staffed, and enforceable. When a category stops changing what adults do, the system reverts to default settings. Teach to the median, punish variance, treat the casualties as defects.

You can see the same dynamic in curriculum fights. When a system cannot reliably lift the floor, the path of least resistance is to lower the ceiling and call it equity. This is not cynical in intent. It is cynical in effect. Acceleration does not disappear. It moves off the books. Tutoring, test prep, schedule hacking, summer programs, parent advocacy. The families who can afford those channels use them. The families who cannot are left with the official story that the ceiling was lowered for their benefit. The median experience is preserved. The gap widens. Official metrics improve because the ceiling has been redefined.

None of this is morally mysterious. It is operational. What makes it damning is that schooling runs this population optimization model without the measurement and accountability that would make it legitimate.

Medicine is honest about something uncomfortable. Treatments have side effects. They do not affect everyone equally. Approval assumes some negative outcomes are acceptable in exchange for a greater good. But medicine only earns the right to make that utilitarian bargain because it is paired with surveillance and accountability. Trials, defined endpoints, adverse event reporting, label changes, and sometimes recalls. When a drug underperforms or causes unacceptable harm, the system has mechanisms to withdraw it.

Schooling borrows the utilitarian posture and skips the legitimacy conditions. There is no adverse event tracking for predictable harms like anxiety spirals, learned helplessness, disengagement, or the systematic grinding down of nonstandard profiles. When you ask what the rollback criteria are, you get a blank stare, because the system does not think in rollback terms. It thinks in throughput terms.

Here is a small, concrete example. One of my children has an accommodation plan tied to a documented set of specific needs. A teacher recently told us the plan would not be needed anymore because the child does not show ADHD signs. There is no ADHD diagnosis, and the plan is not based on ADHD. The teacher was not acting maliciously. They were acting normally inside a system that treats supports as vibes. In a system with real measurement, you do not withdraw support based on a vibe. You tie withdrawal to documented criteria, with a rollback plan if the criteria are wrong. This is not exotic engineering. It is basic change management. Define the hypothesis, define success, define failure, and pre-commit to the revert.

Schooling routinely does the opposite, and the response when things go sideways is not to revisit the decision. It is to escalate.

More pressure. More compliance. More labeling. The system treats opt-out as a containment breach rather than a performance signal, because enrollment and funding are coupled. The institution has no incentive to register failure. It has strong incentives to frame failure as the students’.

So why does this cycle finally have a credible exit?

Because AI breaks the monopoly on instruction.

For most of modern history, if you wanted a coherent explanation, feedback loops, sequenced practice, and the ability to revisit a concept from a different angle without embarrassment, you needed the institution or you needed money. Those are the same thing for most families. AI makes those pieces abundant. It makes it cheaper to learn in a different order. It makes it cheaper to revisit a concept from five angles without being punished for needing a sixth. It reduces the penalty for variance in a way that nothing else in the past century has.

This is why models like Alpha School are worth watching, whatever you think of their specific implementation. They are proof that you can architect learning around mastery and coaching rather than batching and seat time. They are not just a new school brand. They are evidence that instruction is no longer scarce, and that the existing system’s grip on the delivery layer is loosening.

The tradeoff is real and worth being honest about. The devil you know versus the one you do not.

The existing system’s harms are normalized, which means they are mostly invisible. The new world introduces different risks. Dependency on opaque tools, misinformation at scale, AI-driven learning environments that are even more coercive than human ones because they optimize metrics nobody agreed to, and a widening gap between families who can navigate the options and those who cannot.

The credential layer will be the next fight. Institutions that lose control of instruction will shift to defending legitimacy. Seat time requirements, accreditation barriers, and the bureaucratic right to define what counts for the purposes of the next gate. If instruction becomes abundant, the last monopoly is not learning. It is recognition.

But the direction of travel is hard to reverse. Bureaucracy protects the status quo long past the point where the quo has lost its status. AI accelerates the expiration date. The more schooling responds to exits with escalation rather than adaptation, the more it will be outcompeted by systems that treat variance as signal rather than defect.

I keep coming back to the medicine analogy, but with a sharper edge. In medicine, adverse events are data. In schooling, adverse events become discipline referrals and bad grades. One system updates on failure. The other system records the failure as the student.

AI is not a magic cure. But it is the first credible exit from a century-old loop. A factory that mistakes difference for defect, and calls the casualties the cost of scale.

The Housing Affordability Crisis

Recently, I was talking to one of my kids, now in university, about why housing feels so out of reach here in Washington. He asked the simple question so many young people are asking: Why is it so expensive to just have a place to live?

There’s no single answer, but there is a clear outlier, especially in big cities, that drives up costs far more than most people realize: bureaucracy.

How broken is the math? Policymakers are now seriously debating 50-year mortgages just to make homeownership work. A 50-year loan lowers the monthly payment, but it also means you never build real equity. You spend most of your adult life paying interest and end up owing almost as much as you started with. You cannot use it to move up because you never escape the debt. It is not a bridge to ownership. It is a treadmill.

And the reason we need it is not interest rates or construction costs. It is the cost of permission.

The Price of Permission

According to the National Association of Home Builders, about 24 percent of the price of a new home in America is regulation: permits, zoning, fees, and delays.

In Washington, the burden is closer to 30 percent.

At Seattle’s median home price of about $853,000, roughly $250,000 is regulation: permits, fees, and delays. If Seattle carried Houston’s regulatory burden, that same house would cost much closer to $600,000.

The difference is not labor or lumber. It is paperwork. It is the cost of waiting, of hearings, of permission.

King County then takes about one percent a year in property taxes, around $8,400 annually, for the privilege of keeping what you already paid for. Combined, bureaucracy and taxes explain almost a third of the cost of shelter in one of America’s most expensive cities.

The Hidden Cost of Bureaucracy

The public conversation stops at the sticker price. It should not.

That regulatory cost does not disappear once you close. It gets financed.

If you borrow $250,000 in regulatory overhead at 6.5 percent, here is what that bureaucracy really costs:

Loan Term | Regulatory Principal | Interest Paid | Total Cost
30 years  | $250,000             | $307,000      | $557,000
50 years  | $250,000             | $564,000      | $814,000

A quarter-million dollars of regulation quietly becomes more than $800,000 over the life of a 50-year loan.

Bureaucracy does not just raise prices. It compounds them.
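If you want to check the compounding yourself, the standard fixed-rate amortization formula is all it takes. Here is a minimal sketch in Python, assuming monthly compounding at the 6.5 percent quoted above; run at exactly that rate it lands a few percent above the table, which appears to have been computed at a slightly lower rate, but the shape of the result is the same.

```python
def loan_cost(principal: float, annual_rate: float, years: int) -> tuple[float, float]:
    """Return (total paid, interest paid) on a fixed-rate loan with monthly payments."""
    r = annual_rate / 12                            # monthly interest rate
    n = years * 12                                  # number of monthly payments
    payment = principal * r / (1 - (1 + r) ** -n)   # standard amortization formula
    total = payment * n
    return total, total - principal

# The $250,000 of financed regulatory overhead from the table above.
for years in (30, 50):
    total, interest = loan_cost(250_000, 0.065, years)
    print(f"{years} years: interest ≈ ${interest:,.0f}, total ≈ ${total:,.0f}")
```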

The System That Made Housing Expensive

Every rule had a reason. Fire safety. Drainage. Noise. Aesthetic harmony. Each one made sense on its own. Together they have made it almost impossible to build.

Seattle’s design review process can take years. The Growth Management Act limits where anything can go. In parts of Woodinville, just minutes from Seattle, zoning is RA-5: one home per five acres. The same five acres under typical urban zoning could hold forty homes. Under Seattle’s new fourplex rules, one hundred sixty units. The scarcity is not geographic. It is legal.

Fees pile up. Permits expire mid-project. Every safeguard adds cost and delay until affordability becomes a memory.

If you want to see the endgame of this logic, look at the California coast.

After the fires that swept through the Santa Monica Mountains and Malibu last year, more than four hundred homes were lost.

By early 2025, fewer than fifty rebuilding permits had been issued, and barely a dozen homes had been completed.

Each application moves through overlapping city, county, and coastal reviews that can take years even for an identical replacement on the same lot.

In Texas, the same house could be rebuilt in less than a year.

Here, the process outlived the purpose.

Rules written to preserve the landscape now keep people from returning to it.

The result is a coastline where the danger has passed, but the displacement never ends.

We built a system that rewards control instead of results. The outcome is exactly what the incentives predict: scarcity.

The Multi-Family Trap

Try to build multi-family housing and you will see how the system works in practice. In much of Seattle it is still illegal. Where it is technically allowed, the odds are against you.

You buy land. You design a project. You spend years and millions navigating variances, hearings, and neighborhood appeals. You pay lawyers, consultants, and taxes while you wait. And at the end, the city might still say no.

You are left holding land you cannot use and a balance sheet you cannot fix.

Seattle’s “One Home” four-unit reform was meant to solve this. It helps on paper. In practice, the same bureaucracy decides what counts as acceptable housing, and the same delays make it unaffordable to build. We did not fix the problem. We moved it.

This is where incentives collapse. If a small developer looks at that risk and realizes they might spend years fighting the city and still lose, they walk away. They put the money in the stock market instead. It is liquid, predictable, and far less likely to end with a worthless lot.

When housing policy makes real investment riskier than speculation, capital leaves. When capital leaves, supply dies.

The Death of the Small Home

It used to be possible to build small. Starter homes, bungalows, cottages. The foundation of the middle class. They are gone.

Codes now set minimum lot sizes, minimum square footage, and minimum parking. Each rule pushes builders toward large, expensive projects that can survive the regulatory drag. The system punishes simplicity.

Seattle’s accessory dwelling unit and backyard cottage rules are small steps in the right direction. They make small building legal again, but not easy. Permitting still takes months, and costs that once seemed modest are now out of reach.

Some assume builders would choose large homes anyway. The math says otherwise. A builder who can sell ten $400,000 homes on a parcel grosses $4 million; three $900,000 homes on the same land gross $2.7 million, and the smaller homes turn the capital faster. Builders follow returns, not square footage. They build large because the regulatory drag makes small uneconomical, not because they prefer it.

The result is predictable. Modest housing disappears. “Affordable” becomes a campaign word instead of a floorplan.

The Tax You Can Never Escape

Even if you beat the system and buy a home, the meter never stops.

Property taxes rise every year, often faster than wages. The rate barely changes, but assessed values jump. A $600,000 house becomes an $850,000 house on paper, and the tax bill rises with it.

Those assessments are often based on bad data.

Valuations can rise even when real prices fall. The appeal process is slow and opaque. Few succeed. My own home’s assessed value rose 28 percent last year while Zillow’s and Redfin’s estimates fell 10 percent over the same period. As a result, the county now values it substantially above what the market says it’s worth. I appealed. No response.

For many families, especially retirees on fixed incomes, it means selling just to survive. People move not because they want to, but because the tax bill leaves them no choice.

In places like Seattle, it does not end there. When you finally sell, you face a city-level real-estate excise tax on top of the state’s version. The government takes a cut of the same inflated value it helped create.

The overhead gets buried in the mortgage, compounded by interest, and slowly eats whatever equity a family might have built. By the time you sell, the city takes another cut. The cycle repeats. Ownership becomes a lease under another name.

The Rent Illusion

Renters often think they are immune to this.

They are not.

That same regulatory overhead that a buyer finances into a mortgage gets built into the developer’s cost structure before the first tenant ever moves in.

If a building costs 30 percent more to complete, the rent must be 30 percent higher just to service the debt.

Developers confirm it in their pro formas. Roughly 25 to 35 percent of monthly rent in new Seattle buildings reflects regulatory costs: fees, permitting delays, compliance financing, and required “affordable” offsets that increase the baseline cost for everyone else.

For a $2,800 two-bedroom apartment, that is $700 to $980 every month paid for process. Over a ten-year tenancy, a renter pays between $84,000 and $118,000 in hidden bureaucracy, enough for a down payment on the very home they cannot afford to buy.
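Those renter numbers are easy to reproduce from the pro forma share quoted above. A quick sketch, using the same assumed $2,800 rent:

```python
rent = 2_800                          # example two-bedroom monthly rent
for share in (0.25, 0.35):            # low and high bounds of the regulatory share
    monthly = rent * share            # dollars per month paid for process
    decade = monthly * 12 * 10        # total over a ten-year tenancy
    print(f"{share:.0%}: ${monthly:,.0f} per month, ${decade:,.0f} over ten years")
```

At the high bound that is $117,600 over ten years, the figure the post rounds to $118,000.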

Because rents are based on the cost to build, not the cost to live, the renter never builds equity and never escapes the cycle.

The result is two generations trapped in the same system: owners financing bureaucracy with debt, and renters financing it with rent.

The only real difference is who holds the paperwork. One signs a mortgage, the other a lease, but both are paying interest on the same bureaucracy.

It Does Not Have to Be This Way

Other places made different choices.

Houston has no conventional zoning. It enforces safety codes but lets supply meet demand. Builders build. Prices stay roughly twenty to thirty percent lower than in cities with the same population and heavier regulation, according to the Turner & Townsend International Construction Market Survey 2024.

Japan and New Zealand show that efficiency does not require deregulation. In Japan, national safety codes replace local vetoes and permits clear in weeks, not years, keeping the regulatory share near ten percent of cost. New Zealand’s 2020 zoning reforms shortened reviews and boosted new-home starts without sacrificing safety. Both prove that when policy favors results over process, affordability follows.

These places did not get lucky. They decided housing should exist.

The Collapse of Ownership

Owning a home was once the reward for hard work. It meant security, independence, a stake in the future. Now it feels like a rigged game.

The barriers are not natural. They were built.

Rules, fees, and taxes add a third to the cost of every house, yet do little to make any of it safer or better. They make it slower, harder, and more expensive.

Greed exists. But greed did not write the zoning map or the permitting code. What drives this system is something quieter and more permanent.

Every form, review, and hearing creates a job that depends on keeping the process alive. As John Kenneth Galbraith observed, bureaucracy defends its existence long past the time when the need for it has passed.

Regulation has become a jobs program, one that pays salaries in delay and collects rent from scarcity.

The toll is not abstract. It shows up in the quiet math of people’s lives.

Families sell homes they planned to retire in because the taxes outpaced their pensions.

Young couples postpone children because saving for a down payment now takes a decade.

Teachers, nurses, and service workers move hours away from the cities they serve.

Neighborhoods lose their history one family at a time.

It is not a housing market anymore. It is a sorting machine.

When my son graduates, this is the world he will walk into, a market where hard work no longer guarantees a place to live.

Ten years behind him, his sister will face the same wall, built not from scarcity but from policy.

They are inheriting a system designed to sustain itself, not them.

We could change this. We could make it easier to build, to own, to stay. We could treat shelter as something worth enabling rather than something to control.

That would mean admitting the truth.

This crisis is not the result of greed, or interest rates, or some invisible market force.

It is the outcome of decades of good intentions hardened into bad incentives.

When a system that claims to protect people starts protecting itself, everyone pays, whether they own or rent.

It was not the market that failed. It was the process.



Intuition Comes Last

Early in my career, I was often told some version of the same advice: stop overthinking, trust your intuition, move faster.

The advice was usually well-intentioned. It also described a cognitive sequence I don’t actually experience.

For me, intuition does not arrive first. It arrives last.

When I am confident about a decision, that confidence is not a gut feeling. It is the residue of having already explored the space. I need to understand the constraints, see how the system behaves under stress, identify where the edges are, and reconcile the tradeoffs. Only after that does something that feels like intuition appear.

If I skip that process, I don’t get faster. I’m guessing instead of deciding.

This took me a long time to understand, in part because the people giving me that advice were not wrong about their experience. What differs is not the presence of intuition, but when it becomes available.

For some people, much of the work happens in the background, invisibly. The intuition surfaces first; the structure that produced it is only exposed when something breaks. For others, the work happens up front and in the open. The structure is built explicitly, then compressed.

In both cases, the same work gets done. What differs is when it shows and who sees it.

This is why that advice was most common early in my career, before the outcomes produced by my process were visible to others. At that stage, the reasoning looked like delay. The caution looked like uncertainty.

Over time, that feedback largely disappeared. As experience accumulated, the patterns I had built explicitly began to compress and transfer. I could recognize familiar structures across different domains and apply what I had learned without rebuilding everything from scratch.

That accumulation allowed me to move faster and produce answers that looked like immediate intuition. From the outside, it appeared indistinguishable from how others described their own experience. Internally, nothing had changed—the intuition was still downstream of the work. The work had simply become fast enough to disappear.

This is where people mistake convergence of outcomes for convergence of process.

This is where large language models change something real for people who process the way I do.

Large language models do not remove the need for exploration. They remove the time penalty for doing it explicitly.

The reasoning is still mine. The tool accelerates the exploration, not the judgment.

They make it possible to traverse unfamiliar terrain, test assumptions, surface counterexamples, and build a working model fast enough that the intuition arrives before impatience sets in. The process is unchanged. What changes is the latency.

This is why the tool does not feel like a shortcut. It doesn’t ask me to act without coherence. It allows coherence to form quickly enough to meet the pace others already assume.

For the first time, people who reason this way can move at a pace that looks like decisiveness without abandoning how their judgment actually forms.

For some people, intuition is a starting point.
For others, it is an output.

Confusing the two leads us to give bad advice, misread rigor as hesitation, and filter out capable minds before their judgment has had time to become visible.

AI doesn’t change how intuition works.
It changes how long it takes to earn it.

And for people who process this way, that difference finally matters.

The Vanishing On-Ramp

This past week I spent more concentrated time with the newest generation of AI models than I have in months. What struck me was not just that they are better, but where they are better. They now handle routine engineering tasks with a competence that would have seemed impossible a year ago. The more I watched them work, the more obvious it became that the tasks they excel at are the same tasks that used to form the on-ramp for new engineers. This is the visible surface layer of software development, the part above the waterline in MIT’s Iceberg Index.

What these systems still cannot reach is everything beneath that waterline. That submerged world contains the tacit knowledge, constraint navigation, history, intention, and human forces that quietly shape every real system. It holds the scars and the institutional memory that never appear in documentation but govern how things actually work.

Developers have always been described mostly by skills. You could point to languages, frameworks, and tools and build an easy mental model of who someone was. These signals were simple to compare, which is why the industry relied on them. But skills alone do not explain why certain developers become the ones the entire organization depends on. The difference has always been context.

What the models can and cannot do

The models thrive in environments that are routine, self-contained, and free of history. They write small functions. They assemble glue code. They clean up configuration. They do the kind of work that once filled the first two years of an engineering career. In this territory they operate like a competent junior developer with perfect memory.

The challenges begin where real systems live. The deeper you go, the more you find decision spaces shaped by old outages, partial migrations, forgotten constraints, shifting incentives, and compromises that were never recorded. Production systems contain interactions and path dependencies that have evolved for years. These patterns are not present in training data. They exist only in the experiences of the people who worked in the system long enough to understand it.

There is also a human operating layer that quietly directs everything. Customers influence it. Compliance obligations shape it. Old political negotiations echo through it. Even incidents from years ago leave marks in code and behavior that no documentation captures. None of this is visible to a model.

The vanishing on-ramp

As AI absorbs more of the low-context work, the early career pathway narrows. New engineers still need time inside real systems to build judgment, but the tasks that once provided this exposure are being completed before a human ever sees them. The set of small, safe tasks that helped beginners form a mental map of how systems behave is slowly disappearing.

This creates a subtle but significant problem. AI takes on the easy work. Humans are asked to handle the hard work. Yet new humans have fewer opportunities to learn the hard work, because the simple tasks that once served as scaffolding are no longer available. The distance from beginner to meaningful contributor grows longer just as the ladder is being pulled up.

AI can help with simulated practice. A motivated learner can now ask a model to recreate plausible outages, messy migrations, ambiguous requirements, or conflicting constraints. These simulations resemble real scenarios closely enough to be useful. For people with curiosity and drive, this is a powerful supplement to traditional experience.

But a simulation is not the same as lived exposure. It does not restore the proving ground. It does not give someone the slow accumulation of judgment that comes from touching a system over time. The skill curve can accelerate, yet the opportunities to prove mastery shrink. We will need more developers, not fewer, but the pathway into the profession is becoming more difficult to follow.

What remains human

As skills become easier to acquire and easier to automate, the importance of context grows. Contextual judgment allows someone to understand why an architecture looks the way it does, how decisions ripple through a system, where the hidden dependencies live, and how history explains the odd behaviors that would otherwise be dismissed as bugs. These insights develop slowly. They come from exposure to the real thing.

There is also a form of entrepreneurial capability that stands out among strong engineers. It is the ability to make decisions that span technical concerns, organizational dynamics, customer needs, and long-term consequences, often without complete information. It is the ability to reason across constraints and understand how tradeoffs echo through time. This capability is uniquely high-context and uniquely human.

At the more granular level, some work is inherently easier to automate. Common patterns with clear boundaries are natural territory for models. Rare or historically shaped tasks are not. Anything requiring whole-system awareness remains stubbornly human. This aligns with predictions from economic and AI research: visible tasks are automated first, while invisible tasks persist.

The vanishing on-ramp sits directly at this intersection. AI is consuming the visible work while the invisible work becomes more important and harder for new engineers to access.

What we must build next

If the future is going to function, we need new mechanisms for developing context. That may mean rethinking apprenticeships, creating ways for beginners to interact with real systems earlier, or designing workflows that preserve learning opportunities rather than eliminating them. Senior engineers will not only need to solve difficult problems but will also need to create the conditions for others to eventually do the same.

AI is changing the shape of engineering. It is not eliminating developers, but it is transforming how people become developers. It removes the visible tasks and leaves behind the invisible ones. The work that remains is the work that depends on context, judgment, and the slow accumulation of lived understanding.

Those qualities have always been the real source of engineering wisdom. The difference now is that we can no longer pretend otherwise.

This shift requires us to change how we evaluate talent. We can no longer define engineers by the visible stack they use. We must define them by the invisible context they carry.

I have been working on a framework to map this shift. It attempts to distinguish between the skills AI can replicate today (common domains, low complexity) and the judgment it cannot (entrepreneurial capability, systems awareness).

Beyond Gutenberg: How AI Is Teaching Us to Think About Thinking

At breakfast the other day, I was thinking about those old analogy questions: “Hot is to cold as light is to ___?” My kids would roll their eyes. They feel like relics from standardized tests.

But those questions were really metacognitive exercises. You had to recognize the relationship between the first pair (opposites) and apply that pattern to find the answer (dark). You had to think about how you were thinking.

I was thinking about what changes when reasoning becomes abundant and cheap. It hit me that this skill, thinking about how you think, becomes the scarcest resource.

Learning From Nature

A few years ago, we moved near a lake. Once we moved in, we noticed deer visiting an empty lot next to us that had turned into a field of wildflowers. A doe would bring her fawn and, with patient movements, teach it where to find clover, when to freeze at a scent, and where to drink. It was wordless instruction: demonstration and imitation. Watch, try, fail, try again. The air would still, the morning light just breaking over the field. Over time, that fawn grew up and brought its own young to the same spot. The cycle continued until the lot was finally developed and they stopped coming.

That made me think about how humans externalized learning in ways no other species has. The deer’s knowledge would die with her or pass only to her offspring. Humans figured out how to make knowledge persist and spread beyond direct contact and beyond a single lifetime.

We started with opposable thumbs. That physical adaptation let us manipulate tools precisely enough to mark surfaces, to write. Writing captured thought outside of memory. For the first time, an idea could outlive the person who had it. Knowledge became persistent across time and transferable without physical proximity. But writing had limits. Each copy required a scribe and hours of work, so knowledge stayed localized.

Then came printing. Gutenberg’s press changed the economics. What took months by hand took hours on a press. The cost of reproducing knowledge collapsed, and books became locally abundant. Shipping and trade moved that knowledge farther, and the internet eventually collapsed distance altogether. Local knowledge became globally accessible.

Now we have LLMs. They do not just expose knowledge. They translate it across levels of understanding. The same information can meet a five-year-old asking about photosynthesis, a graduate student studying chlorophyll, and a biochemist examining reaction pathways. Each explanation is tuned to the learner’s mental model. They also make knowledge discoverable in new ways, so you can ask questions you did not know how to ask and build bridges from what you understand to what you want to learn.

Each step in this progression unlocked something new. Each one looked dangerous at first. The fear is familiar. It repeats with every new medium.

The Pattern of Panic

Socrates worried that writing would erode memory and encourage shallow thinking (Plato’s Phaedrus). He was partly right about the trade-offs. We lost some oral tradition but gained ideas that traveled beyond the people who thought them.

Centuries later, monks who spent lifetimes hand-copying texts saw printing as a threat. Mass production, they feared, would cheapen reading and unleash dangerous ideas. They were right about the chaos. The press spread science and superstition alike, fueled religious conflict, and disrupted authority. It took centuries to build institutions of trust: printers’ guilds, editors, publishers, peer review, and universities.

But the press did not make people stupid. It democratized access to knowledge. It expanded who could participate in learning and debate.

We hear the same fears about AI. LLMs will kill reasoning. Students will stop writing. Professionals will outsource thinking. I understand the worry. I have felt it.

History suggests something more nuanced.

AI as Our New Gutenberg

Gutenberg collapsed the cost of copying. AI collapses the cost of reasoning.

The press did not replace reading. It changed who could read and how widely ideas spread. It forced literacy at scale because there were finally enough books to warrant it.

AI does not replace thinking. It changes the economics of cognitive work the same way printing changed knowledge reproduction. Both lower barriers, expand access, and demand new norms of verification. Both spread misinformation before society learns to regulate them. The press forced literacy. AI forces metacognitive literacy: the ability to evaluate reasoning, not just consume conclusions.

We are in the messy adjustment period. We lack stable institutions around AI and settled norms about what counts as trustworthy machine-generated information. We do not yet teach universal AI fluency. The equivalents of editors and peer review for synthetic reasoning are still forming. It will take time, and we will figure it out.

What This Expansion Means

I have three kids: 30, 20, and 10. Each is entering a different world.

My 30-year-old launched before AI accelerated and built a foundation in the old knowledge economy.

My 20-year-old is in university, learning to work with these tools while developing core skills. He stands at the inflection point: old enough to have formed critical thinking without AI, young enough to fully leverage it.

My 10-year-old will not remember a time before you could converse with a machine that reasons. AI will be ambient for her. It is different, and it changes the skills she needs.

This is not just about instant answers. It is about who gets to participate in knowledge work. Traditional systems reward verbal fluency, math reasoning, quick recall, and social confidence. They undervalue spatial intuition, pattern recognition across domains, emotional insight, and systems thinking. Many brilliant minds do not fit the template.

Used well, AI can correct that imbalance. It acts as a cognitive prosthesis that extends abilities that once limited participation. Someone who struggles with structure can collaborate with a system that scaffolds it while preserving original insight. Someone with dyslexia can translate thoughts to text fluidly. Visual thinkers can generate diagrams that communicate what words cannot.

Barriers to entry drop and the diversity of participants increases. This is equity of potential, not equality of outcome.

But access without reflection is noise.

We are not producing too many answers. We are producing too few people who know how to evaluate them. The danger is not that AI makes thinking obsolete. It is that we fail to teach people to think about their thinking while using powerful tools.

When plausible explanations are cheap and fast, the premium shifts to discernment. Can you tell when something sounds right but is not? Can you evaluate the trustworthiness of a source? Can you recognize when to dig deeper versus when a surface answer suffices? Can you catch yourself when you are being intellectually lazy?

This is metacognitive literacy: awareness and regulation of your own thought process. Psychologist John Flavell first defined metacognition in the 1970s as knowing about and managing one’s own thinking, planning, monitoring, and evaluating how we learn. In the AI age, that skill becomes civic rather than academic.

The question is not whether to adopt AI. That is already happening. The question is how to adapt. How to pair acceleration with reflection so that access becomes understanding.

What I Am Doing About This

This brings me back to watching my 10-year-old think out loud and wondering what kind of world she will build with these tools.

I have been looking at how we teach gifted and twice-exceptional learners. These are kids who are intellectually advanced but may also face learning challenges like ADHD or dyslexia. Their teachers could not rely on memorization or single-path instruction. They built multimodal learning, taught metacognition explicitly, and developed evaluation skills because these kids question everything.

Those strategies are not just for gifted kids anymore. They are what all kids need when information is abundant and understanding is scarce. When AI can answer almost any factual question, value shifts to higher-order skills.

I wrote more detail here: Beyond Memorization: Preparing Kids to Thrive in a World of Endless Information

The short version: question sources rather than absorb them. Learn through multiple modes. Build something, draw how it works, explain it in your own words. Reflect on how you solved a problem, not only whether you got it right. See connections across subjects instead of treating knowledge as isolated silos. Build emotional resilience and comfort with uncertainty alongside technical skill.

We practice simple things at home. At dinner, when we discuss a news article, we ask: How do we know this claim is accurate? What makes this source trustworthy? What would we need to verify it? When my 10-year-old draws, writes, or builds things, I ask: What worked? What did not? What will you try differently next time, and why?

It is not about protecting her from AI. That is impossible and counterproductive. It is about preparing her to work with it, question it, and shape it. To be an active participant rather than a passive consumer.

I am optimistic. This is another expansion in how humans share and build knowledge. We have been here before with writing, printing, and the internet. Each time brought anxiety and trade-offs. Each time we adapted and expanded who could participate.

This time is similar, only faster. My 20-year-old gets to help harness it. My 10-year-old grows up native to it.

They will not need to memorize facts like living libraries. They will need to judge trustworthiness, connect disparate ideas, adapt as tools change, and recognize when they are thinking clearly versus fooling themselves. These are metacognitive skills, and they are learnable.

If we teach people to think about their thinking as carefully as we once taught them to read, and if we pair acceleration with reflection, this could become the most inclusive knowledge revolution in history.

That is the work. That is why I am optimistic.


For more on this thinking: AI as the New Gutenberg

Beyond Memorization: Preparing Kids to Thrive in a World of Endless Information

What does it take to prepare our children for a tomorrow where AI shapes how they get information, robots change traditional jobs, and careers transform faster than ever—a time when what they can memorize matters far less than how quickly they can think, adapt, and create? As a parent with children aged 29, 18, and 9, I can’t help wondering how to best prepare each of them. My oldest may have already found his way, but how do I ensure my younger two can succeed in a world so different from the one their brother entered just a few years before?

We’ve faced big changes like this before—moments that completely changed how we work and what opportunities exist. A century ago, Ford’s assembly line wasn’t just about making cars faster; it changed what skills workers needed and how companies treated employees. Decades later, Japan’s quality movement showed us that constant improvement and efficient thinking could transform entire industries. Each era required us to learn not just new facts, but new ways of thinking.

Today’s change, driven by artificial intelligence and robotics, is similar. AI will handle basic knowledge tasks at scale, and robots will take care of repetitive physical work. This means humans need to focus on higher-level skills: making sense of complex situations, evaluating information critically, combining ideas creatively, and breaking down big problems into solvable pieces. Instead of memorizing facts like a living library, our children need to know how to judge if information is trustworthy and connect ideas that might not seem related at first glance. They need to see knowledge not as something you collect and keep, but as something that grows and changes through questioning, discussion, and discovery.

Where can we find a guide for developing these new thinking skills? Interestingly, one already exists in our schools: the teaching strategies developed for gifted and twice-exceptional (2e) learners—students who are intellectually gifted but may also face learning challenges.

Gifted and 2e children think and learn in ways that are often intense, complex, and different from traditional methods. Teachers who work with these learners have refined approaches that develop multimodal thinking (using different ways to learn and understand), metacognition (thinking about how we think), and critical evaluation—exactly the skills all young people need in a future filled with smart machines and endless information.

Shift from Memorization to Meaning

Instead of drilling facts, encourage your child to question sources. If you’re discussing a news article at dinner, ask: “How do we know this claim is accurate? What makes the source trustworthy?” Now they’re not just absorbing information; they’re actively working to understand it.

Foster Multimodal Exploration

Make learning richer by using different approaches. Let them build a simple robot kit, draw a diagram of how it works, and then explain it in their own words. By connecting hands-on activity (tactile learning), visual learning, and verbal explanation, they develop deeper understanding.

Encourage Metacognition

After solving a puzzle or coding a simple project, have them reflect: “What worked best? What would you try differently next time?” By understanding their own thought processes, they become better at adapting their approach to new challenges.

Highlight Interdisciplinary Connections and Global Outlook

Show them that knowledge doesn’t exist in separate boxes. A math concept might connect beautifully with a musical pattern, or a historical event might be understood better through science. Help them see that good ideas and innovation come from everywhere in the world, not just one place or tradition.

Emphasize Emotional and Social Intelligence

In a world where machines handle routine tasks, human qualities like empathy, communication, and teamwork become even more important. Encourage them to be comfortable with uncertainty, to see setbacks as chances to learn, and to develop resilience (the ability to bounce back from difficulties). These people skills will matter just as much as any technical knowledge.

Deep Learning and Entrepreneurial Thinking

Like classical scholars who focused deeply on fewer subjects rather than skimming many, children benefit from spending more time thinking deeply about carefully chosen topics rather than rushing through lots of surface-level information. Consider teaching basic business and problem-solving skills early—like how to budget for a project or spot problems in their community that need solving—so they learn to create opportunities rather than just wait for them.

Finally, we’re raising children in an age where AI is becoming a constant helper and resource. While information is everywhere, the ability to understand it in context and make good judgments is rare and valuable. By using teaching techniques once reserved for gifted or 2e learners—multiple ways of learning, thinking about thinking, careful evaluation, global awareness, and creative combination of ideas—we prepare all children to be confident guides of their own learning. Instead of being overwhelmed by technology, they’ll learn to work with it, shape it, and use it to build meaningful futures.

This won’t happen overnight. But just as we adapted to big changes in the past, we can evolve again. We can model skepticism, curiosity, and flexible thinking at home. In doing so, we make sure that no matter how the world changes—no matter what new tools or systems appear—our children can stand on their own, resilient, resourceful, and ready to thrive in whatever tomorrow brings.