At breakfast the other day, I was thinking about those old analogy questions: “Hot is to cold as light is to ___?” My kids would roll their eyes. They feel like relics from standardized tests.
But those questions were really metacognitive exercises. You had to recognize the relationship between the first pair (opposites) and apply that pattern to find the answer (dark). You had to think about how you were thinking.
Then I started wondering what changes when reasoning becomes abundant and cheap. It hit me that this skill, thinking about how you think, becomes the scarcest resource.
Learning From Nature
A few years ago, we moved near a lake. Once we moved in, we noticed deer visiting the empty lot next to us, which had turned into a field of wildflowers. A doe would bring her fawn and, with patient movements, teach it where to find clover, when to freeze at a scent, and where to drink. It was wordless instruction: demonstration and imitation. Watch, try, fail, try again. The air would still, the morning light just breaking over the field. Over time, that fawn grew up and brought its own young to the same spot. The cycle continued until the lot was finally developed and they stopped coming.
That made me think about how humans externalized learning in ways no other species has. The deer’s knowledge would die with her or pass only to her offspring. Humans figured out how to make knowledge persist and spread beyond direct contact and beyond a single lifetime.
We started with opposable thumbs. That physical adaptation let us manipulate tools precisely enough to mark surfaces, to write. Writing captured thought outside of memory. For the first time, an idea could outlive the person who had it. Knowledge became persistent across time and transferable without physical proximity. But writing had limits. Each copy required a scribe and hours of work, so knowledge stayed localized.
Then came printing. Gutenberg’s press changed the economics. What took months by hand took hours on a press. The cost of reproducing knowledge collapsed, and books became locally abundant. Shipping and trade moved that knowledge farther, and the internet eventually collapsed distance altogether. Local knowledge became globally accessible.
Now we have LLMs. They do not just expose knowledge. They translate it across levels of understanding. The same information can meet a five-year-old asking about photosynthesis, a graduate student studying chlorophyll, and a biochemist examining reaction pathways. Each explanation is tuned to the learner’s mental model. They also make knowledge discoverable in new ways, so you can ask questions you did not know how to ask and build bridges from what you understand to what you want to learn.
Each step in this progression unlocked something new. Each one looked dangerous at first. The fear is familiar. It repeats with every new medium.
The Pattern of Panic
Socrates worried that writing would erode memory and encourage shallow thinking (Plato’s Phaedrus). He was partly right about the trade-offs. We lost some oral tradition, but gained ideas that traveled beyond the people who thought them.
Centuries later, monks who spent lifetimes hand-copying texts saw printing as a threat. Mass production, they feared, would cheapen reading and unleash dangerous ideas. They were right about the chaos. The press spread science and superstition alike, fueled religious conflict, and disrupted authority. It took centuries to build institutions of trust: printers’ guilds, editors, publishers, peer review, and universities.
But the press did not make people stupid. It democratized access to knowledge. It expanded who could participate in learning and debate.
We hear the same fears about AI. LLMs will kill reasoning. Students will stop writing. Professionals will outsource thinking. I understand the worry. I have felt it.
History suggests something more nuanced.
AI as Our New Gutenberg
Gutenberg collapsed the cost of copying. AI collapses the cost of reasoning.
The press did not replace reading. It changed who could read and how widely ideas spread. It forced literacy at scale because there were finally enough books to warrant it.
AI does not replace thinking. It changes the economics of cognitive work the same way printing changed knowledge reproduction. Both lower barriers, expand access, and demand new norms of verification. Both spread misinformation before society learns to regulate them. The press forced literacy. AI forces metacognitive literacy: the ability to evaluate reasoning, not just consume conclusions.
We are in the messy adjustment period. We lack stable institutions around AI and settled norms about what counts as trustworthy machine-generated information. We do not yet teach universal AI fluency. The equivalents of editors and peer review for synthetic reasoning are still forming. It will take time, and we will figure it out.
What This Expansion Means
I have three kids, ages 30, 20, and 10. Each is entering a different world.
My 30-year-old launched before AI accelerated and built a foundation in the old knowledge economy.
My 20-year-old is in university, learning to work with these tools while developing core skills. He stands at the inflection point: old enough to have formed critical thinking without AI, young enough to fully leverage it.
My 10-year-old will not remember a time before you could converse with a machine that reasons. AI will be ambient for her. That world is different, and it changes the skills she will need.
This is not just about instant answers. It is about who gets to participate in knowledge work. Traditional systems reward verbal fluency, math reasoning, quick recall, and social confidence. They undervalue spatial intuition, pattern recognition across domains, emotional insight, and systems thinking. Many brilliant minds do not fit the template.
Used well, AI can correct that imbalance. It acts as a cognitive prosthesis that extends abilities that once limited participation. Someone who struggles with structure can collaborate with a system that scaffolds it while preserving original insight. Someone with dyslexia can translate thoughts to text fluidly. Visual thinkers can generate diagrams that communicate what words cannot.
Barriers to entry drop and the diversity of participants increases. This is equity of potential, not equality of outcome.
But access without reflection is noise.
We are not producing too many answers. We are producing too few people who know how to evaluate them. The danger is not that AI makes thinking obsolete. It is that we fail to teach people to think about their thinking while using powerful tools.
When plausible explanations are cheap and fast, the premium shifts to discernment. Can you tell when something sounds right but is not? Can you evaluate the trustworthiness of a source? Can you recognize when to dig deeper versus when a surface answer suffices? Can you catch yourself when you are being intellectually lazy?
This is metacognitive literacy: awareness and regulation of your own thought process. Psychologist John Flavell first defined metacognition in the 1970s as knowledge about and management of one’s own thinking: planning, monitoring, and evaluating how we learn. In the AI age, that skill becomes civic rather than academic.
The question is not whether to adopt AI. That is already happening. The question is how to adapt. How to pair acceleration with reflection so that access becomes understanding.
What I Am Doing About This
This brings me back to watching my 10-year-old think out loud and wondering what kind of world she will build with these tools.
I have been looking at how we teach gifted and twice-exceptional learners. These are kids who are intellectually advanced but may also face learning challenges like ADHD or dyslexia. Their teachers could not rely on memorization or single-path instruction. They built multimodal learning, taught metacognition explicitly, and developed evaluation skills because these kids question everything.
Those strategies are not just for gifted kids anymore. They are what all kids need when information is abundant and understanding is scarce. When AI can answer almost any factual question, value shifts to higher-order skills.
I go into more detail here: Beyond Memorization: Preparing Kids to Thrive in a World of Endless Information
The short version: question sources rather than absorb them. Learn through multiple modes. Build something, draw how it works, explain it in your own words. Reflect on how you solved a problem, not only whether you got it right. See connections across subjects instead of treating knowledge as isolated silos. Build emotional resilience and comfort with uncertainty alongside technical skill.
We practice simple things at home. At dinner, when we discuss a news article, we ask: How do we know this claim is accurate? What makes this source trustworthy? What would we need to verify it? When my 10-year-old draws, writes, or builds something, I ask: What worked? What did not? What will you try differently next time, and why?
It is not about protecting her from AI. That is impossible and counterproductive. It is about preparing her to work with it, question it, and shape it. To be an active participant rather than a passive consumer.
I am optimistic. This is another expansion in how humans share and build knowledge. We have been here before with writing, printing, and the internet. Each time brought anxiety and trade-offs. Each time we adapted and expanded who could participate.
This time is similar, only faster. My 20-year-old gets to help harness it. My 10-year-old grows up native to it.
They will not need to memorize facts like living libraries. They will need to judge trustworthiness, connect disparate ideas, adapt as tools change, and recognize when they are thinking clearly versus fooling themselves. These are metacognitive skills, and they are learnable.
If we teach people to think about their thinking as carefully as we once taught them to read, and if we pair acceleration with reflection, this could become the most inclusive knowledge revolution in history.
That is the work. That is why I am optimistic.
For more on this thinking: AI as the New Gutenberg