Let's talk

Big Tech and AI ethicists — a reciprocal duty to engage?

A couple of months ago I was invited to join Mark van Rijmenam, aka The Digital Speaker, and Dan Turchin, Co-founder and Chief Executive Officer of PeopleReign, on their “Between Two Bots” podcast. The podcast features “data revolutionaries who are leading the new field of AI ethics, establishing new regulatory frameworks for AI, and reimagining the role of government as tech companies increasingly act as autocrats”.

Mark and Dan always start the talk by discussing three articles. The first article we discussed was titled “How Big Tech dominates AI ethics group”. Its main claim is that Big Tech has a major influence on EU AI regulation. In 2018, when the European Commission set up a high-level expert group, almost half of the experts (26 out of 56) came from ‘the industry’, whereas only six represented civil society. The article is skeptical about this and also implicitly criticizes Luciano Floridi, a well-known Professor of Philosophy and Ethics of Information at the University of Oxford with “long-standing ties to Big Tech”. Mark and Dan asked:

“What do you make of the fact that Big Tech is part of the committees which advise regulations? Do you consider this to be a problem?”

Big Tech and Ethics: containment or engagement?

Screenshot from euobserver.com, with a seemingly thinking robot.

Here is an edited version of my answer.

The influence of Big Tech on policy-making is certainly a problem in an industry that is dominated by an oligopoly, especially when all the players in that oligopoly are involved. That’s an unfortunate circumstance. But beyond the merely numerical question of ‘how many Big Tech representatives were there, and how many civil society representatives?’, it raises a basic question for everyone else present: do you want to engage with Big Tech, or do you want to contain it?

I studied International Relations a few years after the Cold War had ended, and there we learned that during the Cold War the US oscillated between ‘containment’ and ‘engagement’ with the Soviet Union. The same distinction comes to mind in the discussion about how to deal with Big Tech: some people treat Big Tech like an enemy whose influence should be contained, while others are very happy to engage with Google, Facebook & Co.

Should Big Tech be included in policy-making?

Beyond questions of accepting funding, the article specifically raises the question of whether, or to what extent, Big Tech should be included in policy-making. Now, we certainly don’t want our policy dictated by Big Tech, but I would argue as follows:

If there were a choice between having Big Tech sitting at the table or having them lobby for their business interests behind closed doors, I’d rather have them at the table.

The article quotes someone who says that “money does not necessarily change someone’s viewpoint, but there is ‘self-selection’”. By this he means that some researchers will still attend an expert roundtable even when funding from Big Tech is involved, while others will stay away because they’re too critical.

I personally think it is important to go there even if [or rather: precisely because!] Big Tech is there, and even if you are critical of Big Tech. One could even go as far as to claim: it is your responsibility to go there, and you must not shirk this responsibility. [NB: Because, after all, you are an expert, and we, i.e. the non-experts who care about AI ethics policy-making but don’t have the competence or authority to participate, need people like you to represent our interests.]

Who plays WWF, who plays Greenpeace and who plays Extinction Rebellion in AI ethics?

However, critics sometimes go as far as to denounce everyone who works within Big Tech on AI ethics at the corporate level. Those who engage with AI ethics inside Big Tech are sometimes discredited by voices saying things like “you’re being co-opted; you can’t be taken seriously as an AI ethicist”. But in my opinion, we need to be able to integrate ethics into Big Tech, and for that we need people inside these companies who take care of it. Of course, AI ethics also needs to be assessed from the outside, but that alone will never be sufficient.

And outside the corporations, we could think of a division of labor between different AI ethics advocates. I imagine the AI ethics landscape to be comparable to what we have with NGOs in sustainability. There you have, for example, WWF, who cooperate with almost any company they can find; then you have Greenpeace, who are much more critical; and finally you have Extinction Rebellion, who you will never find at the table because they are outside on the street chaining themselves to traffic lights. We need different degrees of criticism, but we also need some people who are willing to talk to the big influential players. Because what Big Tech says in such consultations at least gets publicly heard, and they can be held accountable for what they say in those contexts.

Listen to the entire episode to hear our discussion on the other two articles: “With Robotics4EU, the EU wants us to believe in responsible AI robots” and “Machine Learning, Deep Fakes, and the Threat of an Artificially Intelligent Hate-Bot”.

Find the full podcast episode on the following platforms:

https://betweentwobots.com/episodes/ep06-dorothea-baur

https://www.thedigitalspeaker.com/ep06-between-two-bots-dorothea-baur/

Subscribe to my newsletter