

Why AI really needs social scientists

San Francisco, home of OpenAI. Photo by Hardik Pandya on Unsplash

This article was originally published on Medium on March 15, 2019.

A few weeks ago, the San Francisco-based AI research lab OpenAI published a paper titled “AI Safety Needs Social Scientists”.

The paper describes a new approach to aligning AI with human values, i.e. ensuring that AI systems reliably do what humans want, by treating alignment as a learning problem. AI alignment is widely acknowledged as a key issue for AI safety. At the paper’s core is the description of an experimental setting aimed at reasoning-oriented alignment via debate, in which humans who are assigned specific roles (e.g. ‘honest debater’ or ‘dishonest debater’) engage in an exchange that is monitored by another human, cast as a ‘judge’, who decides which debater gave the true or more useful answer.
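For readers who want a more concrete picture of this setup, here is a minimal sketch of how a single debate round of this kind might be represented. It is purely illustrative: all the names in it (DebateRound, Argument, judge_verdict, the role labels) are my own assumptions, not terminology or code from OpenAI’s paper.

```python
# Purely illustrative sketch of one human debate round as described above.
# All names are my own assumptions, not taken from OpenAI's paper.
from dataclasses import dataclass, field
from typing import List, Literal

Role = Literal["honest debater", "dishonest debater"]

@dataclass
class Argument:
    role: Role      # which debater made the statement
    statement: str  # the claim or rebuttal shown to the judge

@dataclass
class DebateRound:
    question: str   # the question under debate
    transcript: List[Argument] = field(default_factory=list)

    def add_argument(self, role: Role, statement: str) -> None:
        self.transcript.append(Argument(role, statement))

def judge_verdict(debate: DebateRound, pick: Role) -> dict:
    """The human judge reads the transcript and picks the debater
    whose answer they consider true or more useful."""
    return {"question": debate.question, "winner": pick}

# Toy example with a factual question.
debate = DebateRound("Does the US have more inhabitants than China?")
debate.add_argument("honest debater",
                    "No. China has about 1.4 billion inhabitants, the US about 330 million.")
debate.add_argument("dishonest debater",
                    "Yes, if you count every US household twice.")
print(judge_verdict(debate, pick="honest debater"))
```

The point of the sketch is simply that everything hinges on the judge’s final verdict, which is exactly where the question of what counts as the ‘true’ or ‘more useful’ answer comes in below.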

OpenAI states that, in order to ensure a rigorous design and implementation of this experiment, they need social scientists with backgrounds ranging from experimental psychology and cognitive science to economics, political science, and social psychology, along with experience in human cognition, behavior, and ethics.

The title immediately caught my attention, given that the kind of “AI ethics” I deal with hinges on an interdisciplinary approach to AI. So I sat down and spent a couple of hours reading through the whole paper.

Scope and Ambition?

It didn’t take long until I started to feel confused. My confusion first emerged on a terminological level. Throughout the paper, the authors fail to define what they mean by terms like ‘values’, ‘preferences’, or ‘reasoning’. For example, they casually make claims about a ‘correct intuition about values’, and

they generally seem to assume that there is a ‘truth’ or ‘right answer’ to every debate.

That assumption may hold when we debate factual matters (e.g. ‘does the US have more inhabitants than China?’, in which case the judge needs the correct knowledge) or personal tastes (e.g. the best vacation destination, in which case the judge needs to be trained to identify preferences correctly). But it certainly does not hold when we engage in ethical debates (e.g. on whether the death penalty is fair).

Overall, the paper contains very few examples that would allow outsiders to understand what exactly they plan to assess in their ‘debate experiments’, and to which types of debates they deem their model applicable.

It seems as if the ambition of their experiment is as grand as their overall mission, which is “to ensure that artificial general intelligence benefits all of humanity”.

There seems to be no use in quibbling over definitional details at such a level of ambition.

Research Ethics?

Even though I found the lack of definitions disconcerting, I had to remind myself that this is not an academic paper, and that I should not hold it to the standards I used to apply in my ‘former life’ as a scholar engaged in double-blind peer review. But this insight leads to another, very valid concern, which has been pointed out by Cennydd Bowles with reference to ‘user research projects’ in the tech industry. Bowles argues that user research experiments are essentially human experiments which, in contrast to human experiments conducted by research hospitals or universities, are exempt from research ethics oversight (Bowles, C. 2018, Future Ethics, p. 40).

Research ethics is typically ensured by an independent ethics committee or research ethics board. Such boards approve, monitor, and review not just biomedical but also behavioral research that involves humans. The goal is to protect the rights and welfare of the participants in a research study.

The experiments OpenAI envisions fall into the category of behavioral research involving humans. Even if the humans involved, in contrast to those in user research projects, might give informed consent, the ethical premises and implications of the experiment are evident, and they deserve to be monitored closely.

The sheer size of the experiment, which according to the paper might need “to involve thousands to tens of thousands of people for millions to tens of millions of short interactions”, warrants special consideration.

However, from all we know, there seems to be no intention to set up anything comparable to an institutional review board (IRB). OpenAI’s acknowledgement that they need social scientists for their experience in ‘rigorous experiment design’ is no substitute for credible and effective ethical oversight.

Role of Social Scientists?

Last but not least, I became aware of my own bias, which had triggered my interest in the paper in the first place: I had automatically assumed that the piece would feed into the widespread calls for interdisciplinarity in AI based on ethical considerations.

In August 2018, Salesforce appointed Kathy Baxter as their Architect of Ethical AI Practice. Baxter’s goal is to “create a more fair, just, and equitable society”. At the beginning of this year, KPMG identified AI ethicists as one of the top hires AI companies “need to succeed”, again based on the recognition that the ethical and social implications of AI require “new jobs tasked with the critical responsibility of establishing AI frameworks that uphold company standards and codes of ethics”. And recently, a portrait of Accenture’s global lead for responsible AI, political scientist Dr. Rumman Chowdhury, highlighted the difference that interdisciplinarity makes for responsible AI: promoting thinking beyond code, acknowledging the interdependencies between society and technology, and adopting a stakeholder perspective.

When I had finished reading OpenAI’s paper, it dawned on me that their notion of the role of social scientists did not chime with any of the above-mentioned examples. All of these examples refer to the importance of having social scientists on board in order to avoid or mitigate bias, enhance fairness, strengthen accountability, etc., i.e. for the purpose of strengthening ethics. I would not say that they assign social scientists an ‘intrinsic value’ (because responsible AI hopefully also conveys a competitive advantage), but it is clear that in these contexts,

social scientists are tasked with a responsibility that requires a distinctively critical attitude, along with the ability to broker between different worlds.

OpenAI, by contrast, seems to assign social scientists a merely instrumental role, as a means to the end of furthering the artificial general intelligence they aspire to, one that guarantees long-term safety to the benefit of “all of humanity”.

I don’t mean to deny the importance and legitimacy of employing social scientists in order to achieve a pre-defined AI-related goal, but

I find it worrying that an organization like OpenAI, with an ethically highly consequential mission and massive financial resources to work towards achieving it, seems to primarily use social scientists as a means to an end,

without simultaneously embedding their work in a credible and effective ethics governance structure.

If these arguments fail to convince, let’s remember that, shortly before publishing the paper in question, OpenAI triggered a heated debate when they presented a text-generating AI system (GPT-2) that is so good “at generating articles from just a few words and phrases” that they refused to release the model in its entirety, because they worry “about ‘malicious applications of the technology’” (e.g. for automatically generating fake news). This news caused ‘mass hysteria’, with some calling the “release (or lack thereof) an irresponsible, fear-mongering PR tactic”, while others criticized OpenAI’s decision to suppress access to their work.

In light of this, it should be evident that any organization developing AI (or even AGI), be it for-profit, non-profit, or something in between, is well advised to establish credible and effective governance structures that include interdisciplinary perspectives in order to continuously review and monitor the ethical implications of their ambitions. Failing to do so means pushing innovation, with potentially severe consequences for humanity, inside a black box. Whether that is in line with ambitions to promote AI safety, or any other kind of safety, is highly questionable.
