- AI mimics trust while relying on rigid, structured evaluation patterns
- Machines separate human traits instead of forming holistic impressions
- Competence and integrity dominate decisions across both humans and AI
Modern AI systems do not simply process information; they make systematic judgments about people in ways that resemble human trust but with important differences.
A new study from Hebrew University, published in Proceedings of the Royal Society, analyzed over 43,000 simulated decisions alongside around a thousand human participants across five scenarios.
These scenarios included deciding how much money to lend a small business owner, whether to trust a babysitter, how to rate a boss, and how much to donate to a nonprofit founder.
How AI breaks down human judgment into separate columns
The findings reveal that AI tools form something that looks like trust, but their judgment works very differently from ours.
Both humans and AI favored people who seemed competent, honest, and well-intentioned, meaning machines captured something real about human trust.
“That’s the good news,” said Prof. Yaniv Dover. “AI is not making random decisions. It captures something real about how humans evaluate one another.”
However, humans tend to form a general impression, blending multiple traits into a single, intuitive, and holistic judgment.
AI does something very different: it breaks people down into components, scoring competence, integrity, and kindness, almost like separate columns in a spreadsheet.
“People in our study are messy and holistic in how they judge others,” explained Valeria Lerman. “AI is cleaner, more systematic, and that can lead to very different outcomes.”
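The contrast the researchers describe can be pictured with a toy sketch. This is not the study's actual model; it is a hypothetical illustration in which the "spreadsheet" style scores each trait separately and sums them with assumed weights, while the "holistic" style lets the traits interact so that one weak trait drags down the whole impression.

```python
# Toy illustration (not the study's method): component-wise vs. holistic trust.
# Trait scores and weights below are invented for demonstration.

def componentwise_trust(competence: float, integrity: float, kindness: float) -> float:
    """AI-style evaluation: each trait scored in its own 'column' and weighted."""
    weights = {"competence": 0.4, "integrity": 0.4, "kindness": 0.2}  # assumed
    return (weights["competence"] * competence
            + weights["integrity"] * integrity
            + weights["kindness"] * kindness)

def holistic_trust(competence: float, integrity: float, kindness: float) -> float:
    """Human-style gestalt: traits blend, modeled here as a geometric mean,
    so a single low trait pulls the overall impression down sharply."""
    return (competence * integrity * kindness) ** (1 / 3)

# Someone competent and honest but cold:
print(componentwise_trust(0.9, 0.9, 0.2))  # additive score stays fairly high
print(holistic_trust(0.9, 0.9, 0.2))       # blended impression falls further
```

With the same inputs, the two styles diverge: the additive score barely registers the low kindness value, while the blended one drops noticeably, which is one way "very different outcomes" can emerge from identical information.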
These differences appeared even when every other detail about the person was identical.
“Humans have biases, of course,” said Prof. Dover. “But what surprised us is that AI’s biases can be more systematic, more predictable, and sometimes stronger.”
In financial scenarios such as deciding how much money to lend or donate, AI systems showed consistent differences based solely on demographic traits.
Older individuals frequently received more favorable outcomes; religion had strong effects, especially in monetary scenarios; and gender also influenced decisions in certain models.
Another key insight is that there is no single “AI opinion.” Different models often made different judgments about the same person.
This means that the choice of an AI system could quietly shape real-world outcomes. “Which model you use really matters,” Lerman noted.
Large language models are already being used to screen job candidates, assess creditworthiness, recommend medical actions, and guide organizational decisions.
The study suggests that while AI can mimic the structure of human judgment, it does so in a more rigid, less nuanced way, with biases that may be harder to detect.
“These systems are powerful,” said Dover. “They can model aspects of human reasoning in a consistent way. But they are not human, and we should not assume they see people the way we do.”
As AI tools and AI agents move from assistants to decision makers, understanding how they “think” becomes critical for organizations deploying them at scale.
The researchers emphasize that their findings are not a warning against AI, but rather a call for awareness.
The question, then, is no longer whether we trust machines; it is whether we understand how they trust us.