Let's talk
AI frees us from having to solve complex problems ourselves, but does it also deprive us of the ability to think for ourselves?
More than 900 years after the heroic figure Robin Hood set out to steal from the rich and give to the poor, two American entrepreneurs borrowed his name to found a fintech company that claims to “democratize finance for all”. But the new Robinhood’s claim of ‘democracy’ stands on shaky ground. Just as the company can make financial markets accessible to everyone, it can also deny access within a split second. That is what happened when it halted GameStop trading on January 28, 2021, presenting thousands of investors with a fait accompli.
“People often feel uncomfortable talking about ethics. My mission is to enter a company, a classroom, a stage, and take away that unease”, I say in my interview with influencer marketing platform Onalytica.
“We might trust machines more than people when we communicate with them, but this is dangerous because behind every machine are the people who create it.” Just one of my statements from a lively talk with Kimberly Misquitta of the Indian chatbot company Engati.
The Algo 2020 conference invited me to a panel discussion titled “Fake it till you make it – AI and Hype”. My 4 key points:
1. AI hype does not question the very purpose of AI.
2. AI hype is linked to misleading promises.
3. AI hype directs energy at something that is barely tangible.
4. AI hype exaggerates the capabilities of AI when, in effect, humans are still doing most of the work.
“Ideally, technological innovation makes life easier for people with disabilities. At the same time, it helps people with disabilities meet society’s expectations. But it also raises those very expectations – because it changes what counts as ‘normal’ in society.” That is one of my statements from my keynote at Cybathlon 2020, on the Pro Infirmis panel.
On ConnectaTV I talk with Aileen Zumstein about the “measurement mania” in the sustainability debate, the “retreat into intuition”, low-hanging fruit and cosmetic fixes, as well as the importance of principles and the inevitability of trade-offs.
In the fight against Covid-19, Switzerland and other countries are relying on digital contact tracing. Sharing one’s own data is, in light of the pandemic, a form of practiced solidarity, I say in a conversation commissioned by the Stiftung Sanitas Krankenversicherung.
There is a divide between those working on Responsible Tech inside companies and those criticizing from the outside. We need to bridge the two worlds, which requires more open-mindedness and the willingness to overcome potential prejudices. The back and forth between ‘ethics washing’ and ‘ethics bashing’ is taking up too much space.
Kate O’Neill is a global thought leader, author, keynote speaker, strategic advisor, and “tech humanist”. We talked about connecting the dots between AI ethics, privacy, climate change, CSR, ESG, contact tracing, carbon offsetting and much more, with quite a few laughs along the way.
As part of his series “Interviews with global leaders in the field of Artificial Intelligence”, I spoke with Johan Steyn about AI ethics, privacy, contact tracing, business ethics, CSR, and more – live from my kitchen table.
UNESCO Forum invited me as a speaker to share my thoughts on the Covid-19 crisis. The pandemic has sparked fundamental ethical debates. Think of the terrifying reports from hospitals in Italy in Spring 2020. Intensive care units were overrun with patients. There were not enough ventilators. And suddenly we asked ourselves: What is the value of a human life?
How important is our privacy? What does voluntariness mean? And how can we practice solidarity? The introduction of contact-tracing apps raises big questions. Florian Wüstholz of the WOZ invited me, together with innovation ethicist Johan Rochel, to a controversial conversation.
Environmental sustainability is one of the most promising domains to deploy ‘AI for Good’. The environment is an excellent use case for collecting and analyzing data that help us to better understand and address key environmental challenges. In contrast to the use of AI in ‘human settings’, you typically don’t run into problems of privacy and discrimination when using it for environmental purposes.
OpenAI states that in order to ensure a rigorous design and implementation of this experiment, they need social scientists from a variety of disciplines. The title immediately caught my attention, given that the kind of “AI ethics” I am dealing with hinges on an interdisciplinary approach to AI. So I sat down and spent a couple of hours reading through the whole paper.
Reading a report on “Discrimination, Artificial Intelligence and Algorithmic Decision-Making”, I wondered to what degree algorithmic decision-making could serve to further exacerbate discrimination in already deeply divided societies. If we want AI in general and algorithmic decision-making in particular to flourish and to contribute to the common good rather than promote or exacerbate division, we need to work towards creating societies where all members have genuine freedom and equal opportunities in their choice of lifestyles and identities regardless of their protected characteristics.
KPMG ranked “AI ethicist” as one of the “top 5 AI hires companies need to succeed in 2019”. That’s good news for an ‘old business ethicist’ like me. However, there is no common understanding of whether we need AI ethicists in the first place, or whether creating such a profile inevitably leads to “machinewashing”. I address these concerns and explain what it takes to truly make AI ethicists a top hire.