As artificial intelligence becomes increasingly embedded in decision-making, a critical question arises: how do people behave when they know they are being evaluated by AI rather than by humans? Recent research published in the Proceedings of the National Academy of Sciences sheds light on this question, revealing a marked shift in self-presentation when individuals believe an AI is assessing them. Across 12 studies involving more than 13,000 participants, researchers found that people emphasize analytical characteristics and downplay intuitive or emotional traits when they think an AI is evaluating them. This phenomenon, termed the "AI assessment effect," has significant implications for hiring, admissions, and other high-stakes contexts where algorithmic decision-making is increasingly common.
The shift is driven by a widespread belief that AI values data-driven, logical qualities over human-like emotional insight. This "analytical priority lay belief" leads individuals to strategically adjust how they describe themselves, highlighting analytical traits and suppressing intuitive or emotional ones. The effect was particularly pronounced among younger participants, suggesting generational differences in expectations about technology. Notably, when participants were encouraged to reconsider their assumptions about AI, the tendency to emphasize analytical traits was reduced or even reversed, underscoring how much these lay beliefs shape behavior and how important it is to challenge them.
The study's methodology spanned a range of experimental designs, including between-subjects, within-subjects, vignette-based, and real-world studies, examining the AI assessment effect in contexts such as job recruitment and college admissions. Participants were randomly or quasi-randomly assigned to conditions in which they were told they were being assessed by an AI, a human, or both, and their self-reported and behavioral emphasis on analytical versus intuitive traits was measured. Across designs, participants who believed an AI was evaluating them consistently presented themselves as more analytical and less intuitive. People presented themselves least authentically under AI-only assessment, compared with human-only or hybrid evaluations.
The implications are far-reaching for the fairness and validity of AI-based assessment. If candidates adjust their behavior based on inaccurate beliefs about what AI rewards, their true qualities may be masked, potentially leading to suboptimal hiring or admissions decisions. Organizations should therefore scrutinize their assessment procedures and consider how disclosing the use of AI may itself distort candidate behavior. Informing candidates about an AI system's specific capabilities and limitations might shape behavior differently, and future research could extend the analysis to other high-stakes domains, such as public service provision. Shifts in other traits, including risk-taking, ethics, and creativity, also warrant study, along with the long-term consequences of AI-driven impression management.
The study's authors note that as AI systems evolve, candidates' beliefs, and the behaviors those beliefs produce, may change as well, warranting continued study. Understanding how AI assessment shapes self-presentation is therefore essential for any organization that relies on algorithmic evaluation to identify the best candidates for a given role or opportunity.
In conclusion, the AI assessment effect has the potential to shape candidate behavior and distort assessment outcomes. By challenging lay beliefs about AI preferences and being transparent about what an AI system actually measures, organizations can work toward fairer, more valid assessment procedures. As the use of AI in assessments continues to grow, ongoing research into this effect will be crucial to ensuring that these systems are fair, transparent, and effective.
