Let’s talk

No, we don’t want to ‘democratize’ AI

Citizens voting at the ‘Landsgemeinde’ in Glarus (CH); one of the most archaic types of democracy; no traces of AI (source: private, 2017).

Preliminary remark: This article was first published on medium.com on December 4, 2020.

Everyone concerned with ethics is probably used to cringing while reading news on AI. For me, one of the most reliable triggers for such cringing is the talk about ‘democratizing AI’; and I am not the only one.

Take this: Selling video analytics as “surveillance in a box” is said to democratize high-powered surveillance. Why ‘democratize’? Because buying the software is much cheaper than hiring video analysts, surveillance becomes accessible to and affordable for a much larger customer base. And this is what is meant by ‘democratizing AI’: making AI accessible to those who want it. By this line of reasoning, drag-and-drop AI tools represent the ‘most democratic AI’. Some people praise them as so-called no-code automated machine learning tools that are easy to use even if “you’re less technical” or have no tech knowledge at all.

The first question is: Is this a good idea? Short answer: no.

Long answer: It’s simply too dangerous, and not just in the case of highly controversial surveillance tech. Ethical problems like bias, discrimination and lack of explainability are aggravated when people without any formal training in data science haphazardly use AI tools. At the same time, the risk of misinterpreting data or algorithms and of applying them in the wrong context increases significantly. The same holds true for the danger of systematic misuse of models.

The second question is: What does it have to do with democracy? Short answer: nothing.

Long answer: Democracy entails ‘self-legislation’. It is group decision making characterized by equality among the participants. Equality can mean anything from the simple ‘one person one vote’ rule in elections to more substantial requirements such as equality in deliberation processes.

So, where is the link between ‘making AI accessible to as many people as possible’ and democracy? It doesn’t exist.

Tech evangelists’ talk about ‘democratizing AI’ has nothing to do with self-legislation or collective decision-making, and it has nothing to do with equality in a democratic sense. The only equality on offer is equality in using AI.

The way AI is deployed even directly opposes these core elements of democracy: Instead of engaging people in collective decision-making, AI is often applied by some people to other people, and neither the users nor the ‘targets’ are in any way involved in the decision-making. Take a judge who relies on algorithmic predictions to assess my risk of reoffending. The judge simply uses an algorithmic decision based on the data of other people, and she impacts my fate by doing so. AI here means decisions by someone about me, but without me — and none of us is engaged in the creation of those decisions. That is quite the opposite of self-legislation.

As to equality, the ideal of self-rule in democracies implies “an equal distribution of both protections and chances for influence in collective decisions”. Do we have that in AI? Definitely not, as evidenced by the problems of bias in datasets, the lack of diversity among software developers, and the discriminatory effects of AI on marginalized communities (in lending, in predictive policing, in healthcare, etc.).

So, let’s be clear: the talk about democratizing AI is a clever marketing move. It is mass-commercialization in a humanitarian disguise. It is smart because democracy is an inherently positive term. Only elitist or authoritarian people oppose democracy.

By suggesting that AI is in everyone’s interest, this talk comes close to framing AI as a basic need. But let’s not be fooled: AI is not a basic need. It is a tool — a means to an end — that must be measured by its contribution to human flourishing. And whether a commercial mass rollout of AI as it stands can achieve that is doubtful, to say the very least, whether you call it democratizing or not.

On teaching AI & ethics

The Montreal AI Ethics Institute interviewed me, along with my ForHumanity colleagues Merve Hickok and Ryan Carrier, about our thoughts on teaching AI and ethics. I recommend keeping AI ethics as applied as possible and inspiring people to think about what that means for their own work experience.

Read More

Is there business ethics in Clubhouse?

What can AI ethics learn from business ethics? What’s the ethics of Clubhouse, if any? Is the Robinhood app undermining free will? And how can tech companies create an ethical business culture? Listen to my thoughts in this interview.

Read More

TEDx Zürich: «AI – freedom within, freedom without»

AI frees us from having to solve complex problems ourselves, but does it also deprive us of the ability to think for ourselves? In my TEDx talk I reflect on the ambiguous role AI plays for our freedom. Due to the pandemic, my talk was recorded in a huge, empty, pitch-black TV studio in Zurich. I missed the audience, but I am glad that I had the opportunity anyway.

Read More
Robin Hood (the Sherwood Forest version) would be disappointed by his Silicon Valley–Wall Street namesake. (Photo by Steve Harvey on Unsplash)

Robinhood: democratized finance on shaky ground

More than 900 years after the heroic figure Robin Hood set out to steal from the rich and give to the poor, two American entrepreneurs borrowed his name to establish a fintech company that claims to “democratize finance for all”. But the new Robinhood’s claim to ‘democracy’ stands on shaky ground. Just as the company can make financial markets accessible to everyone, it can also deny access within a split second. This is what happened when it restricted trading in GameStop shares on January 28, 2021, presenting thousands of investors with a fait accompli.

Read More