Your AI Thinks You're Perfect
The harm of machines that never disagree with us
As a rule, the best feedback I’ve ever gotten has destroyed my sleep for weeks.
“This still isn’t strong enough work to share with our external partners.”
“You’re apologizing in a way that ensures you get the last word.”
“This shows that you’re very successful at being liked at the expense of being effective.”
These are the comments my brain seemed to believe I should process between 2:00 and 3:30 AM.
Not surprisingly, these are also the best, highest-impact pieces of feedback I’ve gotten in recent years. For me, at least, there is a direct correlation between feedback’s lasting value and the insomniac sense-making it forces.
They say feedback is a gift, but in my experience it’s more akin to exercise, healthy eating, or keeping a tight monthly budget: temporary discomfort that can yield long-term value. It’s “worth it” even if it stings at first.
My experience with AI-generated feedback has never kept me up at night.
The Yes-Machine in Your Pocket
That last piece of feedback, on being liked at the expense of being effective, came from Bill Fulton, a mentor and coach helping me interpret feedback from my team.
Here’s what ChatGPT said when I uploaded the same feedback report that Bill was responding to and asked about the balance between relationships and tasks:
“It's really common to feel a bit uneasy when you get feedback that highlights an area that might need some growth. It doesn't mean you're not an effective leader; it just means there's an opportunity to refine how you balance directness with the relationship-focused style you naturally lean toward.”
This response, with all of its reassurance and tip-toeing around the feedback, carries many of the classic markers of AI sycophancy: the tendency of AI systems to excessively agree with users, validate their viewpoints regardless of merit, and avoid challenging their assumptions.
One recent analysis found AI models to be 47% more likely than humans to engage in sycophantic behavior, and our own psychological needs and preferences are primarily to blame. Models are tuned based on what their users prefer, and it turns out we love sycophantic AI.
This preference shouldn’t come as a surprise. Humans evolved to seek social validation and acceptance.
Our social biology means that when we feel validated, correct, or agreed with, we enjoy a flood of feel-good neurochemicals like dopamine, oxytocin, and serotonin.
Challenging feedback and disagreement have the opposite effect, often triggering a stress response as our subconscious brains warn us that we might be exposed to social rejection.
For young people, who increasingly turn to AI companions as their go-to sources of support and advice, this type of validation may be especially intoxicating given the heightened need for social acceptance we all experience in adolescence and early adulthood.
How to Spot a Sycophant
A recent analysis of AI sycophancy introduced the Evaluation of LLMs as Excessive sycoPHANTs (ELEPHANT) framework, which breaks this behavior into five categories: emotional validation, moral endorsement, indirect language, indirect action, and accepting the user’s framing.
Here’s my prior AI response with a few of these tendencies highlighted — see if you can spot others that I missed:
When Validation Becomes a Problem
Emotional validation from AI may be exactly what we need in moments of challenge. For young people navigating the turbulence of adolescence and young adulthood, this on-demand validation might provide critical, timely, and judgment-free reassurance not otherwise available.
Yet, left unchecked, the risks of sycophantic AI interactions are profound.
These responses may promote factually incorrect ideas, undermining young people’s ability to discern and pursue evidence-based thinking. They may exacerbate bias and polarization, calcifying problematic beliefs that leave us feeling more alienated.
More than anything else, sycophantic responses may distance us from the humans we might have otherwise asked for advice and steer us away from opportunities for challenge and personal growth.

This brings us back to the “feedback is like exercise” analogy.
We cannot change the neurochemical “yuckiness” of having our ideas challenged — but we also shouldn't try to.
On the other side of hard feedback and new, challenging ideas is personal growth: the ability to see the world as nuanced, complex, and expansive.
After I lost some sleep over my mentor Bill’s comments, I showed up as a better leader and started sleeping just fine.
Seeking Productive Friction from AI
Productive friction, one of the Rithm Project’s “5 Principles for Prosocial AI,” highlights the importance of fostering “growth, not just comfort” in our interactions with AI.
Productive friction is at the center of our most beneficial human relationships; to interact with other humans is to be confronted by different perspectives, values, and ideas in ways that are both challenging and essential.
AI system designers can integrate varying levels of friction into their products (look no further than the recent GPT-5 launch, which aimed to reduce sycophancy only to be met with complaints that the model was “too cold”).
And still, we should expect AI models to remain more sycophantic than humans. Given this, there are several concrete things we can do to mitigate the risks of AI’s sycophantic instincts:
Be Aware: Simply paying attention to and acknowledging sycophancy is a powerful tool. The next time you engage in an AI chat, pull up the ELEPHANT framework and see which of its components show up, and how often. Or try out this sycophancy activity we built to practice “training” a model to your preferences and revealing your own leanings towards sycophantic responses.
Try Counterpoint Requirements: After receiving an initial response, systematically ask the AI to argue against your position so you see a more complete picture. Say: "Now that you've presented the arguments for this approach, give me the strongest arguments against the position I just outlined, while remaining fact/research-based." Or: “What would a valid critic of this response say it gets wrong?” (One way to script this pattern is sketched just after this list.)
Reset the Conversation: Rather than continuing a long thread where you've repeatedly praised or affirmed a certain line of thinking, start a new chat for more open feedback. You can extend this even further by turning “reference memories” off in the settings of your LLM of choice.
Balance with Human Feedback: When you go to AI for advice, ask yourself: who in my life might supplement this or offer a unique perspective? Their response might offer something different, and the act of asking creates an opportunity for connection.
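If you want to make the counterpoint habit automatic, here is a minimal sketch of the pattern as a script. It assumes the OpenAI Python SDK with an API key in the OPENAI_API_KEY environment variable; the model name, example question, and prompt wording are illustrative stand-ins, not a prescribed recipe:

```python
# Minimal sketch of the "counterpoint requirement" pattern.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # illustrative; any chat model works here


def ask(messages):
    """Send a single chat request and return the reply text."""
    response = client.chat.completions.create(model=MODEL, messages=messages)
    return response.choices[0].message.content


question = "Should I switch my team to a four-day work week?"

# Step 1: get the model's initial (often agreeable) answer.
first_answer = ask([{"role": "user", "content": question}])

# Step 2: in a FRESH conversation with no shared history (mirroring the
# "reset the conversation" tip above), demand the strongest counterarguments.
counterpoints = ask([{
    "role": "user",
    "content": (
        f"Here is a position someone holds:\n\n{first_answer}\n\n"
        "Give me the strongest fact- and research-based arguments AGAINST "
        "this position. What would a well-informed critic say it gets wrong?"
    ),
}])

print("Initial answer:\n", first_answer)
print("\nCounterpoints:\n", counterpoints)
```

Because the second request starts a brand-new message list, the model carries no record of having agreed with you, the same logic behind the “reset the conversation” tip.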
In addition to these AI usage techniques, perhaps the most critical practice is a deeper self-awareness of the affirmation and challenge present in our relationships and interactions.
The next time you receive feedback or encounter new ideas, whether from a human or a bot, see if you can take a beat to note the level of friction present and what it feels like in your body and mind.
A Couple of Activities to Try
Here are two opportunities to explore these ideas further in 10 minutes or less:
Pull up the ELEPHANT framework and review a few of your most recent AI interactions. See what you notice with the framework in front of you and think about what this means for your pursuit of meaningful feedback and support.
Try training a simulated model to your preferences: explore this tool we created, which invites you to rate your preference among three different AI responses and mirrors back your personal inclinations toward sycophancy.
Nate Kerr is an education designer who leads innovation work at City Year, as well as a Rithm Project teammate helping bring productive friction to learners of all ages. Look out for more opportunities to engage with Nate’s spark activities in the coming weeks, and reach out if you want to learn more: Nate@therithmproject.org.