Algorithms Decide. But Who Determines What’s Fair?

Algorithmic systems are increasingly supporting decisions about people – in lending, medical diagnostics, education, or the justice system.

These systems appear objective. But when they determine access to opportunities and resources, questions of fairness arise – and these rarely have a clear answer.

This workshop reveals the trade-offs behind these decisions – and shows how you can navigate them.

Why This Matters

When algorithmic systems make decisions, the underlying questions are rarely purely technical.

Different definitions of fairness lead to different consequences. Choosing a metric means deciding which errors are acceptable – and who bears them.
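This conflict can be made concrete in a few lines. The sketch below uses hypothetical toy data (not workshop material) to show two common fairness definitions evaluated on the same predictions: demographic parity (equal approval rates across groups) is satisfied, while equal opportunity (equal true positive rates among qualified applicants) is violated.

```python
# Two fairness metrics, one set of predictions, contradictory verdicts.
# Toy example with hypothetical data: 1 = approved / qualified, 0 = not.

def positive_rate(pred):
    """Share of applicants approved (basis of demographic parity)."""
    return sum(pred) / len(pred)

def true_positive_rate(true, pred):
    """Share of *qualified* applicants approved (basis of equal opportunity)."""
    hits = sum(1 for t, p in zip(true, pred) if t == 1 and p == 1)
    return hits / sum(true)

# Group A: three qualified applicants, one not; two approvals
true_a, pred_a = [1, 1, 1, 0], [1, 1, 0, 0]
# Group B: one qualified applicant, three not; two approvals
true_b, pred_b = [1, 0, 0, 0], [1, 1, 0, 0]

# Demographic parity: approval rates are equal (0.5 vs 0.5)
print(positive_rate(pred_a), positive_rate(pred_b))

# Equal opportunity: true positive rates differ (0.67 vs 1.0)
print(true_positive_rate(true_a, pred_a), true_positive_rate(true_b, pred_b))
```

The same system passes one fairness test and fails the other; which test matters is exactly the value judgment the workshop examines.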

Organizations cannot delegate these decisions to technology. They have to make them themselves.

When you deploy algorithmic systems, you need to decide:

  • Which errors are acceptable?
  • Who bears the consequences?
  • Who is responsible for these decisions?

This workshop makes these questions concrete.

How It Works

This is not a lecture about AI.

You work with synthetic datasets and a physical game board. Each piece represents a real person. Change a decision, move a piece – and you immediately see what that means for others.

The workshop covers four contexts: credit scoring, medical diagnostics, criminal justice, and education. In each context, the same fundamental problem emerges: fairness cannot simply be calculated – it requires a decision.

Workshop Elements

  • Analyze the dataset: classify decisions
  • Use the game board: make consequences visible
  • Define fairness: choose a perspective
  • Discuss the implications: ethical, social, and economic

In the end, it’s not about finding the one right answer – it’s about recognizing trade-offs and making responsible decisions.

What Changes

The workshop changes how you assess algorithmic decisions. You recognize that fairness is not a purely technical problem – it is a decision about competing values, and one that organizations are responsible for making.

In concrete terms:

  • You identify typical fairness conflicts in algorithmic decision-making
  • You understand why different fairness definitions lead to contradictory outcomes
  • You assess the consequences of algorithmic decisions for different groups
  • You can justify which errors are acceptable in your context – and which are not
  • You can defend your fairness decisions to management, regulators, and the public

Who This Workshop Is For

The workshop is for everyone who bears responsibility for the use of algorithmic systems:

  • Executives and senior leaders
  • Compliance and risk teams
  • Data scientists
  • Legal professionals
  • Those responsible for AI or data projects

No technical background required.

Where This Workshop Is Most Relevant

Industries

The workshop is particularly relevant wherever algorithmic decisions have real consequences – in financial services, healthcare, education, or public administration.

Format

A compact decision-making workshop:

  • Half-day workshop (3–4 hours)
  • In person
  • Group size: up to 16 participants

Problematic effects of algorithmic systems are rarely visible at first glance. They hide behind aggregate rates and seemingly neutral numbers. Recognizing them requires a closer look.

Because fairness is not a technical property of a system – it is a human decision.

About

I have worked at the intersection of business, academia, and society for over a decade.

As a consultant, lecturer, and speaker, I focus on the ethical and societal dimensions of artificial intelligence.

This workshop grew out of that work – with the goal of translating abstract concepts into concrete decisions.

More on my work in AI and ethics here.

Bring the Workshop to Your Organization

I’d be happy to come to you – for a half-day that changes how your team understands and evaluates algorithmic decisions.

Feel free to reach out – I’m glad to walk you through the format.

Send me an email. Or call me: +41 79 292 77 55.