Everyone concerned with ethics is probably used to cringing while reading news on AI. For me, one of the most reliable triggers for such cringing is the talk about ‘democratizing AI’; and I am not the only one.
Take this: Selling video analytics as “surveillance in a box” is said to democratize high-powered surveillance. Why democratize? Since buying the software is much cheaper than hiring video analysts, surveillance becomes accessible to and affordable for a much larger customer base. And this is what is meant by ‘democratizing AI’: making AI accessible to those who want it. If we follow this line of reasoning, drag-and-drop AI tools represent the ‘most democratic AI’. Some people praise them as so-called no-code automated machine learning tools that are easy to use even if “you’re less technical” or have no tech knowledge at all.
The first question is: Do we really want to make AI accessible to anyone? Short answer: no.
Long answer: It’s simply too dangerous, not just in the case of very controversial surveillance tech. Ethical problems like bias, discrimination and lack of explainability are aggravated when people without any formal training in data science haphazardly use AI tools. At the same time, the risk of misinterpreting data or algorithms and of applying them in the wrong context significantly increases. The same holds true for the danger of systemic misuse of models.
The second question is: What does it have to do with democracy? Short answer: nothing.
Long answer: Democracy entails ‘self-legislation’. It is group decision making characterized by equality among the participants. Equality can mean anything from the simple ‘one person one vote’ rule in elections to more substantial requirements such as equality in deliberation processes.
So, where is the link between ‘making AI accessible to as many people as possible’ and democracy? It doesn’t exist.
Tech evangelist talk about ‘democratizing AI’ has got nothing to do with self-legislation or collective decision-making; and it has got nothing to do with equality in a democratic sense. The only equality is equality in using AI.
The way AI is deployed even directly opposes these core elements of democracy: Instead of engaging people in collective decision-making, AI is often applied by some people to other people; and neither the users nor the ‘targets’ are in any way involved in the decision-making. Take a judge who relies on algorithmic predictions to assess my risk of becoming a repeat offender. The judge simply uses an algorithmic decision based on the data of other people, and she impacts my fate by doing so. AI here means decisions by someone about me, but without me — and none of us is engaged in the creation of those decisions. That’s quite the opposite of self-legislation.
As to equality, the ideal of self-rule in democracies implies “an equal distribution of both protections and chances for influence in collective decisions”. Do we have that in AI? Definitely not, as evidenced by the problems of bias in datasets, the lack of diversity among software developers, and the discriminatory effects of AI on marginalized communities (in lending, in predictive policing, in healthcare etc.).
Suggesting that AI is in everyone’s interest comes close to framing AI as a basic need. But let’s not be fooled: AI is not a basic need. It is a tool, that is, a means to an end, and it must be measured by its contribution to human flourishing. Whether a commercial mass rollout of AI as it stands can achieve that is doubtful, to say the very least — whether you call it democratizing or not.
Hosts from all over the world invite me to share my thoughts on ethics, artificial intelligence, data protection, sustainability or my personal career. Podcasts are a great opportunity to present my views and convictions in a structured and understandable manner. Every single one of these conversations has been an eye-opener for me as well.
If we have a choice: do we want to have Big Tech at the table when discussing regulation, or do we want them to lobby behind closed doors? I argue for the former. We have a duty to engage.
Here is how I developed from a primary school feminist protesting against handicraft lessons for girls, to a rebellious anticapitalist teenager in high school, to a bored business administration student in the 1990s, into who I am today.
The Montreal AI Ethics Institute interviewed me, along with my ForHumanity colleagues Merve Hickok and Ryan Carrier, about our thoughts on teaching AI and ethics. I recommend keeping AI ethics as applied as possible and inspiring people to think about what that means for their own work experience.