
AI: Lessons from Business Ethics

In April 2024 I gave a keynote on “AI – lessons learned from business ethics” at one of the most exciting AI conferences of my career so far: “The Futurist of the Year”. It was hosted by the newly established Szkoła Główna Mikołaja Kopernika in Warsaw.

I took the audience on a trip down memory lane to show them the key issues, concepts, and events that have played a significant role in business ethics over the past two decades. And I showed them the lessons AI companies can learn from them.


My starting point: AI companies act as if they had never heard of business ethics. They ignore some of its most basic concepts when it comes to accountability, supply chain responsibility, and product safety.

Accountability in the AI supply chain

Statements like “AI is just a tool. It’s up to you how you use it” serve to diffuse responsibility. But AI companies must recognize their responsibilities. The hidden exploitation in AI supply chains, reminiscent of past sweatshop scandals, demands scrutiny. The term “ghost work” describes the invisible gig workers behind AI who clean toxic content out of AI systems under terrible working conditions.

“Have we moved from sweatshops to tear shops? The tech industry has been spared proper scrutiny of the labor conditions in its supply chains.”

Recognizing human data sources and objects of AI

AI companies must recognize their responsibilities towards human data sources. These are the people whose data has been used to train AI. And they must be accountable to the human objects of AI, namely those affected by AI applications, such as job applicants or patients.

“AI companies must see beyond their algorithms and recognize the people behind the data and those impacted by their applications. They must embrace a holistic stakeholder responsibility.”

Lessons from the financial crisis and nuclear technology

The opaque nature of AI decision-making, coupled with the emergence of General Purpose AI and its potentially existential risks (I am not too convinced about the exact extent of those), poses significant challenges for product safety. The financial crisis highlighted the damage a lack of transparency can cause. Nuclear technology has given us important insights into existential risks. And products like the Ford Pinto made clear the devastating consequences of a purely profit-oriented focus. But what have AI companies learned from all this?

“How can you guarantee the safety of a product when you don’t understand how it works? The stakes with AI are higher than ever, requiring unprecedented transparency and diligence.”

No room for exceptionalism when it comes to ethics

Yes, AI companies create groundbreaking innovation, and some aspects of it are truly exceptional. But when it comes to ethics, there is no room for exceptionalism. The core of ethical business remains the same: act with integrity and ensure that what you do serves humanity, and not the other way around.

AI companies must stop feigning ignorance and overcome their exceptionalism. Groundbreaking innovation comes with the fundamental ethical responsibility to ensure that what they do serves humanity, not the other way around.

These are some of my key points. Watch the full speech, delivered in front of impressive 180-degree LED screens, below to get the full picture!