The Expertise Gap: AI Partnerships in Early Mathematics Education
AI can be a powerful thought partner — but only when the human brings real expertise. In early math education, that expertise gap has serious consequences for how teachers and parents use AI tools.

I recently read D. Graham Burnett's thought-provoking article in The New Yorker, "Will the Humanities Survive Artificial Intelligence?" As someone working at the intersection of early childhood math education and technology, I found his perspective refreshingly nuanced — neither dismissing AI as the end of meaningful education nor embracing it as a miraculous solution to all our challenges.
Burnett, a historian at Princeton, describes an assignment where students engaged with AI systems about topics they had studied deeply. Rather than seeing AI as an existential threat, he suggests it might help humanities education return to its core purpose: not knowledge production, but "the work of understanding."
The Surprising Freedom of AI Thought Partnership
One of Burnett's most revealing insights came from a student named Jordan, who made a profound discovery during her AI dialogue. For a course assignment examining the history of attention, she engaged an AI in conversation about the subject. During office hours, Jordan told Burnett she had found in the AI "a kind of pure attention she had perhaps never known." She explained that because she felt no social obligation toward the machine — "no need to accommodate, and no pressure to please" — she experienced an intellectual freedom that let her descend more deeply into her own thinking.
I've experienced similar moments of clarity while exploring complex ideas with AI systems: the sense that my "thought partner" is fully, completely there just for me and my ideas. Freed from that social obligation, I too have been able to explore my thinking in ways I never could in human-to-human interaction.
Expertise: The Hidden Requirement for Effective AI Dialogue
Burnett's insights also prompted me to reflect on my work with generative AI in early math education, work that reveals a crucial distinction. His students engaged with AI from positions of expertise: they had already developed a deep understanding of their subjects through traditional learning before their AI interactions. That expertise allowed them to recognize limitations in AI responses and to ask sophisticated questions that pushed conversations in meaningful directions.
This got me thinking about how early childhood educators might interact with AI as thought partners, especially in the area of early mathematics. Unlike Burnett and his students, most early childhood educators lack the specialized knowledge needed for truly productive AI dialogues about math. It's an unfortunate reality that many teachers enter classrooms with minimal mathematical preparation — often just a single methods course, if that.
When AI generates flawed content about early math concepts (which, in my experience, happens with alarming frequency), these educators typically lack the foundational understanding to detect the errors or to formulate the critical questions that would steer the conversation toward more accurate content.
The Paradox: Those Who Need Help Most Can Benefit Least
This creates a troubling paradox: the very educators who might benefit most from AI support in teaching mathematics are the least equipped to critically evaluate the content it generates. Without robust mathematical knowledge, how can they distinguish between developmentally appropriate activities and those that might reinforce common misconceptions?
While Burnett and his students could engage in sophisticated dialectical exchanges with AI, evaluating its outputs against their existing knowledge, many early childhood educators simply lack the mathematical foundation needed to leverage the technology effectively.
The Mirror Effect: What AI Gets Wrong Reveals Our Own Misconceptions
What's most revealing about my interactions with AI systems is that their limitations aren't random glitches — they're precise reflections of our collective blind spots in early mathematics education. For example, when I ask AI to generate activities for spatial reasoning or pattern recognition, the quality plummets compared to number-focused activities. This mirrors exactly how we've privileged counting and number operations in early childhood while neglecting other crucial mathematical domains.
Similarly, when AI repeatedly defaults to scenarios involving counting toys, sharing crackers, or measuring with blocks — what I call the "crackers, cars, and blocks" problem — it's reflecting the narrow contextual imagination present throughout our field.
The most eye-opening pattern I've observed is how AI consistently reproduces fundamental misconceptions about early math concepts and developmental progression — like conflating verbal counting (reciting the number sequence) with one-to-one correspondence (matching objects to count words) or cardinality (knowing the last number counted represents the whole set). These aren't failures of the AI technology but accurate reflections of widespread misunderstandings that permeate teaching materials, curriculum guides, and even academic literature.
Beyond Numbers: Domain Gaps in Our Mathematical Understanding
This "AI mirror" makes visible what has long been invisible — that our collective understanding of early mathematics development remains fundamentally incomplete and often deeply flawed, with serious consequences for children's long-term math learning. The machine isn't getting it wrong; it's faithfully reproducing the very misconceptions that have been baked into our educational ecosystem for decades.
This presents both a challenge and an opportunity: can we use these reflections to confront the gaps in our understanding and transform how we conceptualize early mathematical learning?
Burnett writes that AI systems have "learned our moves, and now they can make them." In early math education, this means AI has learned our misconceptions too. The key challenge before us is preparing teachers with the mathematical knowledge they need to engage productively with AI as true thought partners.
When teachers possess robust mathematical knowledge, AI partnerships can become transformative — not because the technology is perfect, but because the human partner brings the critical discernment needed to navigate beyond our collective misconceptions toward more powerful mathematical understanding for young children.
Originally published on LinkedIn
