
AI in publishing: Bridging gaps, upholding values

During the Frankfurt Book Fair, Wiley, a global leader in publishing, education and research, invited me to episode 3 of their new format The Conversations, entitled "Into the AI Age | The Role of AI in Research & Learning", filmed with a live audience.

The debate with my brilliant co-panelists Ivana Bartoletti and Olivia Gambelin ranged from ethical dilemmas to practical applications. Here are some of my key contributions:

AI as a Bridge, Not a Shortcut

AI's true potential lies in its ability to create connections. It can help you master the jargon of journals where you never get accepted because you are not a native speaker. And it can build bridges between disciplines that are often siloed, making research more accessible.

These horizontal bridges are less risky and far more valuable than the arms race in which people use AI merely for shortcuts.

Ethics and Human Judgment in Publishing

Generative AI brings some distinctive ethical challenges:

We already face a crisis of trust in science, and now we have a technology characterized by its ability to generate misinformation at scale. Misinformation and misattribution are real risks in a field grappling with trust issues. Publishers must define which aspects of their processes require human judgment and cannot be handed over to machines.

Adoption is not a virtue in itself. AI tools must align with existing core values, and they must enhance, not distort, publishing culture.

Transparency Before Partnership

Big Tech often proposes partnerships to use publishers' highly valuable structured data for training models. But here's the reality: more data won't fix inherent flaws like hallucinations in current AI models. So the question is: do publishers want to sell their data (NB: whose data is it anyway?) just to be represented in a model that will probabilistically produce hallucinations?

In any case, publishers should demand transparency and accountability from tech companies before engaging. Stakeholder input, especially from authors and researchers, is crucial.

Responsibility Isn’t Optional

AI doesn't exempt businesses from accountability. We have come a long way from thinking "the business of business is business" to finally holding businesses accountable for human rights violations, environmental impacts, and more.

For years, companies have had to respect frameworks addressing human rights, non-discrimination, and environmental impacts. AI must be integrated into these existing discussions, not used as an excuse to sidestep responsibility.

In publishing, the choice isn’t whether to adopt AI, but for which steps and how to do so responsibly. Thoughtful, values-driven integration can help the industry build bridges without compromising its foundations.

Watch the full discussion here »