Do you remember Klarna’s big AI announcement in 2023? Back then, their CEO boldly declared that AI was so effective the company would stop hiring and let AI replace roles through “natural attrition.”
I saved that announcement at the time. I was preparing a keynote on AI in call centers, and Klarna seemed to be writing my intro for me.
Fast forward to December 2024, and the CEO doubled down:
“AI can already do all of the jobs that we, as humans, do” – including his own.
But fast forward again, to February 2025, and something had clearly changed. The CEO had a new insight – or, in his words, an “epiphany”:
“In a world of AI, nothing will be as valuable as humans!”
Klarna’s new ambition?
“To become the best at offering a human to speak to!!!”
(Yes, that’s three exclamation marks.)
At first, I wasn't sure how to take this. Was it a welcome moment of disarming honesty in an industry where others keep repeating their AGI mantras – without ever pausing to rethink?
Or was it just another case of AI hype whiplash – confident proclamations one day, complete reversals the next?
In any case, it’s telling, and in fact sad, that it apparently takes AI to highlight the value of people.
AI vs. Human Fallibility
The same debate comes up every time I teach AI ethics. When I present examples of flawed AI-based decision-making, students often ask:
“Why do we criticize AI when humans are fallible too?”
It’s a fair question. Humans make mistakes, are biased, and have bad days. But aside from the simple fact that humans were here before AI and have a right to exist, there are important differences worth keeping in mind:
Humans can be questioned and challenged. Yes, we rationalize our decisions – but we also reflect, justify ourselves, and feel consequences. And we live in the world that our decisions shape.
AI doesn’t.
Finally, as humans on this planet, we need to be able to deal with other humans. Maybe avoiding exposure to our fellow human beings at all costs isn’t the smartest long-term strategy.
When we choose AI over humans, we’re not eliminating flaws – we’re replacing flawed but accountable agents with flawed and unaccountable systems. And that’s a trade-off we should be honest about.
The Quiet Part Out Loud
I can’t seem to let go of Klarna. There’s just too much in this case that raises ethical red flags.
So let’s recap:
- 2023–2024: AI can do all jobs!
- Early 2025: Humans are invaluable!
Beyond the cringe factor, the CEO’s rhetoric exposes deeper contradictions.
He presents himself as concerned about AI’s impact on workers. He muses on podcasts about older translators, disruption, and the need for honesty.
And yet he brags about headcount reductions, flirts with being OpenAI’s “favorite guinea pig”, and openly dismisses trade unions as something that makes Klarna “thick and slow moving”.
Bluntness ≠ Virtue
Siemiatkowski’s honesty is unusual. He doesn’t adapt his message to different audiences. Most executives do:
- For the public: “AI will free people to do more meaningful work.”
- For investors: “We’ve stopped hiring. AI is replacing jobs. Our margins are up.”
Klarna’s CEO says the quiet part out loud. As the New York Times put it, he has “helped surface a conversation that has largely been whispered in executive suites.”
Because let’s be honest: the fastest way for AI investments to pay off is by replacing workers.
Most companies simply aren’t ready to admit it.
So where does that leave us?
Let’s appreciate honesty where we see it – but let’s not mistake it for virtue.
Transparency can be uncomfortable, but it also creates an opportunity:
Now that the quiet part is being said out loud – what are we going to do about it?
A version of these thoughts was originally shared on LinkedIn between February 20 and 27, 2025.
Picture from Ethan Hoover on Unsplash