Some of you might remember the stir caused by the infamous AI “Gaydar” study, where researchers claimed to predict sexual orientation from facial features using machine learning. Others might recall attempts to use facial structure to determine whether someone is a criminal.
I thought we had moved past these crude, AI-driven inferences. But I was wrong.
A few days ago, a paper was published on SSRN titled “AI Personality Extraction from Faces: Labor Market Implications.” The authors—many of them from business schools—claim that their model can predict “school rank, compensation, job seniority, industry choice, job transitions, and career advancement” simply by analyzing facial images.
I haven’t read the full paper. But even the premise is deeply troubling, from both an empirical and an ethical standpoint.
Let’s call it what it is: junk science at scale. These models don’t uncover hidden truths. They reflect and amplify existing biases—those embedded in hiring practices, social mobility, and law enforcement.
The result? Biased predictions disguised as objectivity.
And it’s not just bad science; it’s a business model. A business model built on pseudoscience: AI used to repackage discredited, discriminatory ideas as innovation.
The ethical red flags are just as serious. AI used to infer personality traits or predict behavior doesn’t just analyze data; it attempts to invade mental privacy.
It doesn’t matter whether the system “works” or not. The very act of rewarding or penalizing people based on AI-generated inferences about their character or potential is a fundamental violation of human rights.
As Susie Alegre puts it: “The problem is not just whether the technology works. It’s that we allow automated systems to make judgments about who we are, what we think, and what we might do—without consent, without accountability, and without a way to challenge it.”
Let’s not fall for the illusion that AI can objectively predict human potential. The risk isn’t just bad science. It’s the automation of inequality—and the erosion of fundamental human rights.
Further reading – and what inspired this post:
- Calling Bullshit by Carl T. Bergstrom & Jevin D. West
- Freedom to Think by Susie Alegre
- AI Snake Oil by Arvind Narayanan & Sayash Kapoor
I first shared a version of these reflections on LinkedIn on February 18, 2025.