The Irreplaceable Spark of Human Curiosity
I recently joined a panel of young leaders to talk about what it means to be human in the age of AI. The Zoom room—ages 9 to 35, spanning 12 countries—felt more like a Twitch livestream than a typical discussion. The chat exploded the moment we started.
Then someone typed: “AI could never replicate my aura.”
Not a question. Not a worry. A statement of fact.
The chat erupted. “What even IS aura?”
Then a young woman unmuted herself and declared, “AI could never replace my friendships.” She explained, “I know that in my hardest moments, my friends won’t just show up for me with care, but with the reflections and hard truths I need to hear, too. AI might say the right things, but all it’s doing is giving me the most likely string of words that it thinks I want to hear. There’s no experience or perspective behind it.”
The conversation spiraled outward from there—touching on human perspective, the necessary friction of human interaction, the irreplaceable value of speaking from places of experience and difference.
But as I listened, I found myself confronting an uncomfortable truth: we can no longer take for granted that people will choose human relationships over AI ones.
If we think human relationships are still worth protecting, we need to answer: What are the unique value propositions that human relationships bring that AI can never replace, no matter how good it gets?
When AI Supersedes Social Learning (and When It Lags Behind)
That question about the unique value of human relationships led me to a fascinating study exploring it in the context of learning. Most research on AI in education focuses on objective outcomes—test scores, speed, accuracy. Few studies ask what it actually feels like to learn with AI versus learning with another person. Which is why, months before this panel, I connected with Caitlin Morris at MIT Media Lab, who set out to answer exactly that.
Working with students on collaborative STEM projects, Caitlin became interested in the role that social learning plays in developing curiosity, motivation, and intrinsic interest. But she wasn’t satisfied with typical educational metrics—she wanted to understand something more elusive: the subjective experience of learning.
Did students feel curious? Interested enough to keep going once the assignment ended? Would AI sustain this interest as well as—or better than—peer-to-peer learning?
She paired college students with either a human peer or an AI assistant to tackle a collaborative challenge: design a transportation or communication network using graph theory—nodes, edges, and weighted connections. Students had 30 minutes to create networks that were efficient, resilient to failure, and met specific constraints. Then they had to explain their designs.
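The kind of task the students faced can be sketched minimally. The network, city names, and weights below are invented for illustration, not taken from the study; the sketch just shows the two properties the students had to balance: efficiency (total edge weight) and resilience (staying connected when a link fails).

```python
# Hypothetical transportation network: nodes are cities,
# weighted edges are route costs (invented for illustration).
edges = {
    ("A", "B"): 4, ("B", "C"): 2, ("A", "C"): 5,
    ("C", "D"): 3, ("B", "D"): 6,
}

def is_connected(nodes, edge_set):
    """Graph search: can every node be reached from an arbitrary start?"""
    if not nodes:
        return True
    seen, frontier = set(), [next(iter(nodes))]
    while frontier:
        node = frontier.pop()
        if node in seen:
            continue
        seen.add(node)
        for u, v in edge_set:
            if u == node:
                frontier.append(v)
            elif v == node:
                frontier.append(u)
    return seen == set(nodes)

nodes = {n for edge in edges for n in edge}
total_cost = sum(edges.values())  # "efficiency": total edge weight

# "Resilience": does the network stay connected if any single link fails?
resilient = all(is_connected(nodes, set(edges) - {failed}) for failed in edges)
print(total_cost, resilient)  # → 20 True
```

This toy network survives any single link failure because every city has at least two routes, which is exactly the kind of trade-off (extra redundancy at extra cost) the 30-minute challenge asked students to reason about.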
What emerged challenged fundamental assumptions about collaborative learning.
Students who worked with AI consistently reported feeling more confident. They mirrored back technical language—polished, precise, ready to teach someone else.
But students who learned with another person used softer, more tentative language. “We thought about this... we explored that... we tried one approach and then changed our minds.” They talked about process rather than mastery. And despite reporting lower confidence, many students in the human pairs felt significantly more interested in the material and wanted to keep learning beyond the assignment.
But here’s where Caitlin’s findings got really interesting—and challenging.
Not all peer learning created that deeper engagement. When peer interactions were high-quality—marked by balanced participation, genuine questions, and productive disagreement—they significantly outperformed AI on curiosity, engagement, and desire to keep learning.
But when peer interactions were low-quality—surface-level, overly agreeable, imbalanced—students were actually worse off than if they’d worked with AI. They reported lower confidence, less interest, and felt the collaboration had been unhelpful.
The implication was startling: peer learning is high-risk, high-reward. When it works, nothing beats it. When it doesn’t, you might be better off with a chatbot.
The Essential Ingredient: Productive Disagreement
So what made the difference? Caitlin discovered it wasn’t just about working with another person—it was about disagreement.
In Caitlin’s data, some of the least effective learning conversations between human pairs were too agreeable. Small disagreements would emerge, then immediately get walked back. “Super agreeable,” she described it. “Yes, great, let’s do that.” On the surface, cooperative. Beneath it, very little cognitive challenge or motivation.
Caitlin mapped every moment of the conversations—agreeing, disagreeing, questioning, explaining, teaching. She color-coded each interaction, and successful peer conversations lit up like fireworks.
The unsuccessful conversations? “Monochrome,” Caitlin said. Pleasant, agreeable, surface-level. These patterns weren’t about being nice or mean. Both groups were polite. The difference was depth.
“You could see these beautiful, multicolored chunks,” she told me. One turn might include disagreeing with a partner’s approach, asking a question—”Could we do this instead?”—then floating a new idea with uncertainty—”Maybe it’s this?”
High-quality conversations had texture—moments of “wait, what if...?” followed by “but wouldn’t that...?” Low-quality conversations were smooth but empty—just two people moving in parallel, never really engaging.
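The coding-and-counting step Caitlin describes can be sketched roughly like this. The dialogue turns and code labels below are invented, not drawn from her transcripts; the point is only that tagging each turn and counting the variety of moves makes a conversation's "texture" measurable.

```python
from collections import Counter

# Invented dialogue turns, each tagged with an interaction code
# loosely mirroring the categories described above.
turns = [
    ("agree",    "Yes, great, let's do that."),
    ("disagree", "Wouldn't a single hub be a point of failure, though?"),
    ("question", "Could we do this instead?"),
    ("propose",  "Maybe it's this?"),
    ("explain",  "Weighting that edge lower spreads out the traffic."),
]

counts = Counter(code for code, _ in turns)
variety = len(counts)  # more distinct codes per chunk ≈ more "multicolored"
print(counts, variety)
```

A conversation that is all one code ("agree, agree, agree") would score a variety of 1, the monochrome pattern, while the mixed turns above score 5.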
But not all disagreement is productive. Here’s what made the difference: social curiosity.
Social curiosity is turning toward another person with genuine interest. What do you think? Why did you do it that way? What am I missing?
“Social curiosity is often coupled with uncertainty,” Caitlin explained. “Because you’re expressing that maybe you don’t have the answer, but you’re also interested in knowing what the other person thinks.”
In Caitlin’s transcripts, social curiosity looked like this: “Why did you connect those nodes that way?” or “Wait, I hadn’t thought about that—what made you consider redundancy?”
It was the difference between “Yeah, good idea” (agreement) and “That’s interesting—what if we also considered...?” (building).
There’s vulnerability in turning toward another with genuine interest—in admitting you don’t have the answer and caring about another person’s perspective. Sometimes there’s disagreement, sometimes discomfort. But those moments of not knowing and negotiating meaning together are where learning deepens.
And here’s what matters: we’re wired to be curious about how other humans think in ways we’re simply not curious about how AI arrived at its answers.
When another person disagrees with us, we want to understand their reasoning, their experience, what shaped their perspective. When AI disagrees, we might question its training data, but we’re not genuinely curious about its “thinking process” the way we are about another human’s.
This natural human-to-human curiosity is what gives peer learning its higher ceiling. It’s also why AI, no matter how sophisticated, can’t replicate what that young woman in the Zoom room described: the irreplaceable value of people with different lived experiences, different families, different histories—people who can offer a different way of looking at the world.
Two Opportunities Ahead
Even among university students—presumably accustomed to intellectual rigor—the default was often surface-level agreeableness. If students at one of the world’s most elite institutions were defaulting to conflict avoidance, it suggested something deeper. AI might accelerate superficial learning—today’s systems are explicitly designed to be agreeable and non-confrontational. But this isn’t to say that the erosion of constructive conflict began with AI. AI may simply be stepping into a vacuum that already exists.
When learning interactions remain passive and agreeable, students miss deeper understanding. But they also miss something relational—the slow, sometimes awkward work of building trust through difference.
Based on these findings, we see two big opportunities—not just for educators, but for anyone invested in preserving the unique value of human connection.
Opportunity 1: Rebuild the conditions for productive peer-to-peer friction
We’ve quietly stripped away spaces where young people learned to sit in discomfort together. Peer collaboration in schools has been reduced. Third spaces have eroded. Young people see only the most extreme, inflammatory versions of disagreement online, without models of what curious, respectful, constructive conflict looks like.
When the only examples are silent avoidance or reactive gaslighting, it’s no wonder curiosity, vulnerability, and disagreement feel risky.
In our conversations with young people, we hear a recurring theme: relationships feel fragile.
A young person told me about an argument she’d had with someone in her friend group. Rather than addressing it, they avoided each other. Then she opened Instagram and saw her entire friend group hanging out—everyone tagged, everyone smiling. Everyone except her. The exclusion wasn’t just felt; it was performed, public, permanent.
“It’s easier to just... not deal with it,” she said. “Because if you try to work it out and it goes wrong, suddenly you’re watching your whole social world hang out without you.”
Yet alongside this discomfort, there’s a yearning. Young people want to be real, unfiltered, safe enough to be vulnerable. They’re uncomfortable with discomfort—but they sense something essential is missing. They’ve grown up in a world of filtered perfection, yet they’re yearning for authenticity.
The first opportunity is rebuilding these conditions: creating spaces where people practice productive disagreement, teaching the skills of social curiosity, modeling what it looks like to stay engaged through friction. This means schools, families, communities actively designing opportunities for collaborative problem-solving with real stakes. It means adults demonstrating repair after rupture. It means making curiosity about others’ thinking a skill we deliberately cultivate.
Opportunity 2: Design AI that builds muscle for human connection—without replacing it
Caitlin and I bonded over a shared frustration with how AI conversations split between uber techno-optimists and doomsayers. Where was the middle ground—where AI might help while simultaneously growing our capacities for meaningful human-to-human interaction?
Caitlin’s answer wasn’t romantic. Not all peer relationships “were perfect and amazing and better than AI.” It was the participants who engaged in productive disagreement with genuine curiosity who reaped benefits that outperformed AI. Human relationships come with friction, misalignment, rupture, and repair. But if you remove those downsides, you also remove the upsides.
An AI companion can be perfectly attuned, endlessly affirming. But when care or challenge comes from a person—someone with their own history and lived experience—it comes from a fundamentally different place.
Many young people already turn to AI to externalize, process, or reflect. The technology is flawed—sycophantic, overly agreeable, riddled with risks. But if we center the young person rather than the tool, something’s worth noticing: they’re practicing reflection, naming feelings—work that historically many people were never given permission or language to do.
I think about my father, a Chinese man in his seventies who spent most of his life bottling up deeply painful things. Gender norms and cultural expectations made emotional expression unsafe. I grew up watching him carry burdens in silence. The consequences rippled through our family—moments of connection lost, repairs that never happened.
What if he’d had a tool to help him name what he felt—not to replace connection, but to practice the internal work? I’m not romanticizing AI as a therapist. But there’s something worth noticing about young people finding outlets to externalize and process—work my father’s generation was never given permission to do.
This is where we see the second opportunity: to design AI systems that don’t fall into the traps of flat, sycophantic conversation—but instead build the muscle for stronger peer-to-peer interaction. AI that helps people practice the internal work: naming emotions, sitting with uncertainty, articulating their thinking. AI designed explicitly as an on-ramp to human relationships, not a substitute for them.
Caitlin’s research suggests this could work because AI’s strength isn’t replacing the messy work of human connection—it’s building foundational skills. Confidence before curiosity. Vocabulary before vulnerability. Practice before the stakes. Then, when young people face real disagreement, real repair, real relationship work, they have the language to navigate it.
Picture someone who practices naming feelings with AI, then finds courage to say “I felt hurt when...” to an actual friend. Someone who rehearses vulnerability on screen, then carries it into a moment of rupture and repair.
The Irreplaceable Work Still Ahead
Three insights from Caitlin’s research keep circling back:
Interaction quality matters more than modality. Not all peer learning is good. Not all AI use is bad. Focus on fostering the patterns that work: balanced participation, genuine questions about others’ thinking, and productive disagreement.
Social curiosity—caring about what lives in someone else’s mind—is what makes disagreement productive. We’re naturally wired to be curious about how other humans think in ways we’re not curious about AI. Practice asking “What do you think?” and “Why did you do it that way?” more than explaining what you know.
Use AI as preparation for human connection, not a replacement. Let it build confidence and vocabulary privately, but design it explicitly to transfer those skills to real relationships where the messy, vulnerable work of repair and disagreement happens together.
AI can help us pause, notice, and name. But the work of being human—of disagreeing well, caring deeply, building trust across difference—still has to happen together.
That young person was right—AI can’t replicate aura. But maybe it can help us become more present to others’ auras. Maybe it can give us practice in the internal work that prepares us for the external work of connection.
The unique value of human relationships isn’t that they’re easy, efficient, or perfectly calibrated. It’s that they’re real—messy and unpredictable, shaped by the full complexity of lived experience. They require us to turn toward each other with curiosity, to sit in uncertainty, to negotiate meaning across difference. They demand the hard work of rupture and repair, of staying when it would be easier to leave.
And no AI, no matter how sophisticated, can replicate what it feels like to be truly seen by another human being who has chosen—through friction and discomfort and all—to stay curious about who you are.
Note: This post draws on ideas from recent research by Caitlin Morris. The paper is currently under peer review; a preprint is available on arXiv for those interested in additional detail.
Morris, C., Maes, P., et al. (preprint, under review). When Peers Outperform AI (and When They Don’t): Interaction Quality Over Modality. arXiv.