
No, we don’t want to ‘democratize’ AI

Citizens voting at the ‘Landsgemeinde’ in Glarus (CH); one of the most archaic types of democracy; no traces of AI (source: private, 2017).

Preliminary remark: This article was first published on medium.com on December 4, 2020.

Everyone concerned with ethics is probably used to cringing while reading news on AI. For me, one of the most reliable triggers for such cringing is the talk about ‘democratizing AI’; and I am not the only one.

Take this: Selling video analytics as “surveillance in a box” is said to democratize high-powered surveillance. Why democratize? Since buying the software is much cheaper than hiring video analysts, surveillance becomes accessible to and affordable for a much larger customer base. And this is what is meant by ‘democratizing AI’: making AI accessible to those who want it. If we follow this line of reasoning, drag-and-drop AI tools represent the ‘most democratic AI’. Some people praise them as so-called no-code automated machine learning tools that are easy to use even if “you’re less technical” or have no tech knowledge at all.

The first question is: Is it a good idea to make AI accessible to everyone who wants it? Short answer: No.

Long answer: It’s simply too dangerous, and not just in the case of highly controversial surveillance tech. Ethical problems like bias, discrimination and lack of explainability are aggravated when people without any formal training in data science haphazardly use AI tools. At the same time, the risk of misinterpreting data or algorithms and of applying them in the wrong context increases significantly. The same holds true for the danger of systematic misuse of models.

The second question is: What does it have to do with democracy? Short answer: nothing.

Long answer: Democracy entails ‘self-legislation’. It is group decision-making characterized by equality among the participants. Equality can mean anything from the simple ‘one person, one vote’ rule in elections to more substantial requirements such as equality in deliberation processes.

So, where is the link between ‘making AI accessible to as many people as possible’ and democracy? It doesn’t exist.

Tech evangelists’ talk about ‘democratizing AI’ has got nothing to do with self-legislation or collective decision-making; and it has got nothing to do with equality in a democratic sense. The only equality is equality in using AI.

The way AI is deployed even directly opposes these core elements of democracy: Instead of engaging people in collective decision-making, AI is often applied by some people to other people, and neither the users nor the ‘targets’ are in any way involved in the decision-making. Take a judge who relies on algorithmic predictions to assess my risk of being a repeat offender. The judge simply uses an algorithmic decision based on the data of other people, and she impacts my fate by doing so. AI here means decisions by someone about me, but without me; and none of us is engaged in the creation of those decisions. That’s quite the opposite of self-legislation.

As to equality, the ideal of self-rule in democracies implies “an equal distribution of both protections and chances for influence in collective decisions”. Do we have that in AI? Definitely not, as is evident from the problems of bias in datasets, the lack of diversity among software developers, and the discriminatory effects of AI on marginalized communities (in lending, in predictive policing, in healthcare, etc.).

So, let’s be clear: the talk about democratizing AI is a clever marketing move. It is mass-commercialization in a humanitarian disguise. It is smart because democracy is an inherently positive term. Only elitist or authoritarian people oppose democracy.

By suggesting that AI is in everyone’s interest, this talk comes close to framing AI as a basic need. But let’s not be fooled: AI is not a basic need. It is a tool, that is, a means to an end, and it must be measured by its contribution to human flourishing. And whether a commercial mass rollout of AI as it stands can achieve that is doubtful, to say the very least, whether you call it democratizing or not.

Podcasts – my thoughts on ethics on air

Hosts from all over the world invite me to share my thoughts on ethics, artificial intelligence, data protection, sustainability or my personal career. Podcasts are a great opportunity to present my views and convictions in a structured and understandable manner. Every single one of these conversations has been an eye-opener for me as well.


On teaching AI & ethics

The Montreal AI Ethics Institute interviewed me, along with my ForHumanity colleagues Merve Hickok and Ryan Carrier, about our thoughts on teaching AI and ethics. I recommend keeping AI ethics as applied as possible and inspiring people to think about what that means for their own work experience.

