Not surprisingly, that’s good news for an ‘old business ethicist’ like me. Such lists definitely strengthen my ‘raison d’être’. Yet, when I tweeted about this, the reactions showed that not everyone shares this view. There is evidently no common understanding of whether we need AI ethicists in the first place, or whether creating such a profile inevitably leads to “machinewashing”. Let me address these concerns below and argue what it really takes to make AI ethicists a “top hire”.
1. Do we need distinctive AI ethicists?
Not everyone seems to think so. As one follower stated:
Is this true? Should ethics rank as just one among many equally material considerations (e.g. financial, marketing, HR, technological or legal) that shape business strategy? I disagree. The role and work of AI ethicists need to be visible. Ethics is often perceived as ‘abstract’, ‘theoretical’ and ‘impractical’. If we want to make AI ethics credible, we need to make it tangible. Companies need to be able to articulate the specifically ethical considerations in their business.
One of the key ethical concerns with AI is the lack of transparency and accountability of algorithms. Algorithms are typically perceived as black boxes: opaque to those affected by their decisions, and sometimes even to the data scientists who program them. It would, therefore, be a particular disservice to #AIethics to hide ethical considerations in a huge overarching black box where financial, technological, legal and marketing considerations are mingled together. This would certainly undermine the credibility of the emerging field of #AIethics, which at its core should bring transparency to hitherto opaque matters.
Of course, there is always a choice whether issues are framed as moral or not. You can frame diversity as an economic issue or a moral issue, accountability can be a legal concern or a moral concern, accessibility can be a technological term or a moral term — and each of these examples can be both at the same time. The important thing is to be aware that the framing has an impact on our perception:
Using moral language (positive words like justice or integrity, or negative words like lying or cheating) tends to trigger moral thinking, because these terms are linked to existing cognitive categories that carry moral content. Reasons for avoiding moral vocabulary include fear of confrontation (moral talk might provoke conflict), efficiency considerations (moral talk might cloud issues), and the wish to maintain an image of power and effectiveness: there is a fear of coming across as idealistic and utopian when using moral language.
Of course, in many cases (or, ideally), ethical and business considerations coincide, i.e. they lead to the same conclusion. Most popular among companies and PR experts are so-called ‘win-win’ situations, in which it pays off to be ethical. When this is the case, it should be made evident. It doesn’t ‘devalue’ the ‘seriousness’ of ethical considerations if they happen to foster business. (During my time in academia, I found that some people tended to acknowledge ethical considerations as such only when they constrained business.)
That is: it needs to be acknowledged that a win-win situation between ethics and profit does not always exist and, more importantly, must not be the precondition for ethics to be taken seriously.
The litmus test for ethics is when a company refuses to do something even though it is legal and would ‘make business sense’. Undisputed cases of such behavior are admittedly hard to find in reality. Whenever a company claims to base its business on ethical responsibility, critics will find ways to argue that the underlying rationale is a ‘business’ one. Digging into this question is beyond the scope of this piece — it points to the age-old question of whether true altruism exists — but it leads us to the next point, namely: will AI ethicists necessarily always be instrumentalized by companies as mere fig leaves to promote their business?
2. Are AI ethicists just part of a “machinewashing” scheme?
This question was raised by a second reaction to my tweet which pointed to the ‘worrisome trend’ of ‘machinewashing’:
Such a reaction is not surprising. Trying to establish ethics in any kind of corporate context is inevitably met with ‘x-washing’ allegations from all kinds of stakeholders. A search for the term ‘greenwashing’ on my personal hard disk full of business ethics articles and documents yields 467 matches.
According to the Oxford Dictionaries blog:
Washing comes in many shades and cycles. It’s thus only logical that the ethics of AI has also been fed into its own washing cycle, aptly termed “machinewashing”. As the Boston Globe puts it, “machinewashing” denotes tech giants’ attempts to “assure us of their good intentions surrounding AI. But some of their public relations campaigns are creating the surface illusion of positive change without the verifiable reality”.
What turns the hiring or appointment of AI ethicists into a machinewashing exercise, and how can this be avoided? Any ethicist hired by a company will be met with skepticism.
As I see it, AI ethics needs to start internally: a company uses moral language (see above), reflects on its values, and anticipates and monitors the ethical impact of its products and algorithms. An internal AI ethicist should be given a maximum amount of trust and freedom to look into all processes and to challenge everyone — comparable to the (admittedly short-lived) role of Paul Birch as corporate jester at British Airways in the 1990s, “who would question authority, promote honesty, and approach problems in creative ways”, but with a slightly more serious touch and a distinctive focus on ethics. An AI ethicist makes sure that the key ethical aspects are identified and considered in the company’s overall strategy and daily routines, and does not restrict her arguments to those that increase profit.
This is undoubtedly a rather broad job description, which certainly needs to be refined to become practicable. An AI ethicist can also only truly unfold her potential when the company engages with various stakeholders on #AIethics (e.g. employees, NGOs, regulators, academia, the media, shareholders, etc.). My intention here is to show that AI companies should not let a priori allegations of ‘machinewashing’ deter them from appointing AI ethicists, nor use such allegations as an excuse for not doing so. Instead, they should make the role, and its impact, visible.
So, in a nutshell, KPMG has identified a very relevant but also very challenging job profile in its list of “top 5 hires AI companies need to succeed”. To live up to this expectation, it is crucial that AI ethicists become visible representatives of their companies with a tangible impact. Anything else, such as keeping their heads down among the crowd of standard corporate professionals or clouding their distinctive focus by avoiding moral language, only plays into the hands of those who sense ‘machinewashing’ whenever companies use the term ethics.
There is a divide between those working on Responsible Tech inside companies and those criticizing from the outside. We need to bridge the two worlds, which requires more open-mindedness and the willingness to overcome potential prejudices. The back and forth between ‘ethics washing’ and ‘ethics bashing’ is taking up too much space.
OpenAI states that, in order to ensure a rigorous design and implementation of this experiment, they need social scientists from a variety of disciplines. The title immediately caught my attention, given that the kind of “AI ethics” I am dealing with hinges on an interdisciplinary approach to AI. So I sat down and spent a couple of hours reading through the whole paper.