Artificial intelligence (AI) is often described as one of the greatest technological revolutions of our time. But what is its impact on gender equality, among other things? Ahead of my keynote speech at the Business Day for Women in Liechtenstein in October 2024, I was interviewed by Wirtschaft Regional (in German). Here are some of my key points:
Is AI neutral or a mirror of injustice?
AI is often seen as a neutral tool that automates decisions and could overcome human prejudices, for example when it comes to equal opportunities. Sounds magical. But there’s a catch: AI learns from past data – and that data reflects existing inequalities. Instead of creating justice, AI thus becomes a continuation of discrimination by other means. The idea that AI acts morally superior is a tempting but dangerous misconception.
Inclusion through AI: not so easy
AI can break down many barriers – for people with language or reading difficulties, for example. But in order to realise this potential, everyone needs to get involved. Currently, however, far fewer women use AI than men. The same applies to the development of AI. If women want AI to represent them, they need to be actively involved – as users, designers and critics.
It has been shown repeatedly that homogeneous teams develop applications that do not work for people who differ from them. The promise of eliminating bias through ‘de-biasing’ tools is also misleading. Fairness and impartiality cannot be translated into pure mathematical formulas – behind every calculation are human judgements. If we use AI without scrutinising it, biases risk being perceived as facts.
Who bears the responsibility – AI or humans?
No matter how advanced AI becomes: in high-risk contexts, such as decisions about jobs, housing or matters in court, a human must have the final say – someone who carries responsibility and accountability. If no one has to bear the consequences of wrong decisions, we will use AI unscrupulously.
Environmental costs: nuclear power plants for fake cat pictures?
The euphoria surrounding generative AI overlooks one key point: its massive consumption of energy and water. Do we really want to use these scarce resources to cool data centres and generate fake cat images? Is this a valid reason to expand nuclear technologies again? We should scrutinise this critically.
We all need to become more realistic about increasing productivity through AI. But of course, people’s excitement about a new toy that they feel they can do anything with will not disappear. And that’s a dilemma: on the one hand we don’t want to spoil people’s joy, but on the other hand we pay a high price for it.
And how should companies deal with AI?
AI should always be in line with a company’s existing values – and these values must include responsibility towards society and the environment. If we want change, it is not enough to simply let AI run its course. We have to make conscious decisions about how and for what we use it.
The data revolution is eating its children
ChatGPT and other large language models are now reaching their limits: after years of scraping data from across the internet – often in violation of copyright – more and more companies are fighting back. The models are therefore increasingly fed with their own AI-generated output. This creates a dangerous dynamic: incorrect or distorted patterns reproduce themselves uncontrollably. The system collapses when the data revolution eats its own children.
Conclusion: Regulation and empowerment in dealing with AI
AI alone will not bring about a revolution for justice or equal opportunities. It is a tool, not a miracle cure. We therefore need to think carefully about the problems we want to solve and how AI can make a difference. If the risks are too high, it may be more courageous not to use it.