Preliminary remark: This article was originally published on Medium on April 4, 2019.
It shouldn’t take a scandal of the dimensions of Facebook/Cambridge Analytica or a creepy cackle from Alexa to make it clear that we must not use technology blindly, without asking ourselves some ethical questions; but incidents like these certainly help to raise awareness on an ever broader scale.
Yet, despite an increasing number of articles calling for integrating ethics into algorithms (e.g. in order to overcome algorithmic bias), or for subjecting the implementation of new technologies “to applied ethics, or due diligence around asking tough questions”, it often remains unclear what exactly is meant by ethics.
In fact, it seems that what we mean by ethics is no less a black box than the algorithms into which it is supposed to be integrated.
Here is an attempt to outline in clear and simple terms how ethics, as in ‘ethical foundations’, can give us some guidance when deciding whether we should adopt digital technologies. While the explicit focus is on digitalization (within a corporate context), most of what is said here applies equally to any type of AI and similar technologies.
First of all, we need to keep in mind that digitalization is a technological process initiated by humans. We decide what we want to digitalize and which consequences we are willing to accept.
But which guidelines should we follow when taking these decisions? This is where ethics comes into play.
At the center of this piece is the claim that we need to preserve the conditions for accepting responsibility for our actions, and that we must not delegate this ability entirely to technology. This means that digitalization measures must threaten neither the autonomy and agency of individuals nor the very existence of humankind. Digitalization should relieve us without depriving us of the right to decide.
Technological nirvana or apocalypse?
The debate on digitalization often resembles a war of opinions. On the one hand, there are those who are convinced that technology, given the right code, the right algorithms, and the right robots, has the potential to solve all of humankind’s problems and to guarantee all of us a life free from trouble and worry (keyword: ‘technological solutionism’). On the other hand, there are those who above all caution against unintended negative consequences of technology and who worry that we are being overrun by digitalization.
It is clear that technological change nowadays occurs at an exponential rate and that complexity increases continuously as devices and processes become ever more interconnected. This evokes the impression that we are losing control. We depend on devices whose workings we understand only rudimentarily at best. These devices communicate with each other ever more autonomously. Our data is stored in clouds, and we know neither where these clouds are located nor who has access to them.
But it is also a fact that digitalization is made by humans. It is not a force of nature like an earthquake, over which we have no influence. Digitalization has reached dimensions that we cannot and should not reverse entirely. But within these dimensions we have some scope for action, and this means we also bear responsibility for shaping the course of digitalization. It is irresponsible to surrender or resign in the face of digitalization. We must set the boundaries for what we are willing to accept.
Ethics as a subdiscipline of philosophy focuses on those areas of life that are subject to decisions made by people; these areas lie within the scope of human responsibility. Everything that has been shaped by people’s decisions could in principle also look different. This is what sets digitalization apart from earthquakes, or rather what explains why it wouldn’t make sense to develop an ‘ethics of earthquakes’. While we can try to predict earthquakes and to mitigate the damage they cause, we are simply not responsible for their occurrence.
So far it has been argued that digitalization should be subject to ethical scrutiny. But we still lack criteria to guide this scrutiny.
As stated above, the central claim of this article is that we need to preserve the conditions for accepting responsibility for our actions, and that we must not delegate this ability entirely to technology. Thus, the relevant ethical questions for which we need guidance are:
How far can we responsibly delegate tasks to digital technology? Where do digital technologies serve to relieve us, and where do they deprive us of our right to decide?
Starting points for answering these questions can be found in the works of two influential philosophers of the modern era: Immanuel Kant and Hans Jonas. Don’t be put off by these names; in a few simple words, I will show how their ideas can give us some guidance.
Using our reason
In his brief essay “What is Enlightenment?” (1784), Kant called upon people to ‘sapere aude’, in English: “dare to know”, or “dare to be wise”. Kant wanted people to release themselves from their “self-incurred immaturity” and instead use their reason without guidance from others (such as the church or other authorities). Immaturity is self-incurred if it results from the fear of thinking for ourselves, from idleness, or from cowardice. All three of these ‘vices’ are standard components of human behavior.
The core concern of the Enlightenment was to promote human autonomy. We should finally be liberated from the shackles of religion and authoritarian institutions, able to apprehend the world through our own reason and to decide, based on our own insights, which laws we want to follow (the latter being the essence of democracy).
The Enlightenment has brought us very far. In particular, it set the stage for technological progress.
But now, almost 250 years later, we have developed technologies, by virtue of our own reason, that are so complex that we are forced to trust them blindly. Does this not mean that we are depriving ourselves (yet again) of our hard-won right to decide, and that we are therefore violating the fundamentals of the Enlightenment?
Preserving responsibility
Another important input comes from Hans Jonas, who in the second half of the 20th century addressed the need for an ethics of technology. During the Cold War, the nuclear arms race between the US and the Soviet Union made it very clear that not everything that is technically possible is morally desirable.
For a long time we implicitly assumed that humankind as such would always exist. Nuclear technology, in particular the development of nuclear weapons of mass destruction, suddenly provided us with the means to extinguish humankind within a relatively short period of time and to make our planet uninhabitable. What is more, due to progress in human genetics, we have also acquired the power to intervene directly in the ‘core of human beings’ to an extent hitherto unimagined. We repeatedly hear horror scenarios of human clones designed exactly according to the twisted ideas of their power-hungry creators.
Jonas therefore called upon us to always ask ourselves the following two questions when deciding whether a specific technological development is desirable:
- Does a technology threaten the existence of humankind as such?
- How does technological progress affect the quality of human existence?
That quality needs to be in line with human dignity. We can only preserve dignity if we are capable of making our own decisions, that is, if we maintain human agency. According to Jonas, it would be irresponsible to trade our right to decide for technological progress.
Implications for digitalization
In its most radical consequences, digitalization has the potential to severely undermine human agency and thereby to violate the conditions for a ‘life in dignity’, i.e. our ability to make our own decisions.
Many digital technologies serve to gain more knowledge. They empower us to apprehend the world through our own reason, as postulated by the Enlightenment. However, as stated above, we now delegate an increasing number of tasks to devices and programs whose workings we do not effectively understand. The more these devices are connected to each other and communicate autonomously with each other (keyword: ‘internet of things’), the more they emancipate themselves from us. As a result, we lose our right to decide, and with it our ability to be held responsible for the commands exchanged between the devices.
In the best, or maybe the worst, case, human input is no longer needed to operate these devices at all. There is a real fear that one day we might become ‘slaves of machines’ and that robots might take control of the world.
Relevant ethical questions in practice
So, what does all of this mean for actors who have to make choices about introducing or expanding digital technologies? Let’s break these general reflections down to the corporate level. In general, a company facing decisions on how to address digitalization must take into account the following principles.
A digitalized company is only a responsible company if it:
- commits itself to upholding the conditions that enable the use of human reason.
And a digitalized company only provides the basis for a life in human dignity if it:
- ensures that people (at least those within the company) can assert their right to decide at all times.
None of this implies that a company that addresses digitalization responsibly should refrain from technological progress. It only means that companies should always keep an eye on whether the technological measures they take remain within their control, i.e. whether they are still subject to human reason (and open to debate).
For example, a company that addresses digitalization responsibly must not irreversibly delegate autonomy to algorithms that act in place of people, pass on data to anonymous third parties who mine it like a commodity, and use the information collected against people whenever it suits them (think Cambridge Analytica).
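To make this concrete, here is a minimal sketch of a ‘human-in-the-loop’ gate, written in Python with entirely hypothetical names and threshold values. It illustrates the idea, not any particular company’s implementation: an automated system may propose actions, but high-stakes or low-confidence cases are escalated to a named human reviewer, so the right to decide, and with it accountability, stays with a person.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    """An action suggested by an automated system (hypothetical structure)."""
    action: str
    confidence: float  # the system's confidence, in [0, 1]
    stakes: str        # "low" or "high", e.g. whether a person's rights are affected

CONFIDENCE_THRESHOLD = 0.9  # hypothetical policy value, set by humans and open to debate

def decide(proposal: Proposal, human_review: Callable[[Proposal], str]) -> str:
    """Return the action to take, escalating to a human where needed.

    The algorithm may propose, but it never gets the final word on
    high-stakes or uncertain cases: a named person decides, so
    responsibility is preserved rather than delegated away.
    """
    if proposal.stakes == "high" or proposal.confidence < CONFIDENCE_THRESHOLD:
        return human_review(proposal)  # right to decide stays with a person
    return proposal.action             # routine case: relief, not replacement

# Usage: a reviewer callback that can accept, override, or defer.
def reviewer(p: Proposal) -> str:
    print(f"Human review needed: {p.action} (confidence {p.confidence:.2f})")
    return "defer_to_committee"  # in practice: a considered human decision

routine = Proposal("approve_invoice", confidence=0.97, stakes="low")
sensitive = Proposal("reject_loan_application", confidence=0.97, stakes="high")
print(decide(routine, reviewer))    # -> approve_invoice
print(decide(sensitive, reviewer))  # -> defer_to_committee (escalated)
```

The design choice worth noting is that the escalation rule itself is plain, inspectable code: the conditions under which the machine defers to humans are visible and debatable, rather than buried inside the system.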
So, this was a start. The goal was to argue why digitalization should be subject to ethics, and how we can create a direct link between ethics (as in ethical foundations) and digitalization. As a next step, these abstract thoughts need to be translated into more specific contexts, e.g. the corporate context. Based on the central norms identified in this article, i.e. the preservation of responsibility, upholding the use of human reason, and the right to decide, we need to identify the relevant ethical questions for specific companies and answer them for their individual context.
Here is an illustrative list of questions that might be relevant to many companies (see the sketch after the list for one way to operationalize them):
- Employees: How do digitalization measures affect the individual freedom and self-realization, that is, the right to decide, of employees?
- Clients: How does digitalization affect the clients of a company? If a company digitalizes its client-facing services, does this empower clients or deprive them of their right to decide?
- Data privacy: What does digitalization mean for data privacy? Are clients in control of the collection and use of their data?
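One way to keep such questions from remaining abstract is to turn them into a mandatory review artifact. The following sketch, again in Python and again with entirely hypothetical names, is one possible shape rather than an established practice: each digitalization measure gets an ethics review with a named accountable owner, and roll-out is blocked until every question has a documented answer.

```python
from dataclasses import dataclass, field

# The questions from the list above, as a reusable review template.
REVIEW_QUESTIONS = [
    "How does the measure affect employees' freedom, self-realization, and right to decide?",
    "Does the measure empower clients or deprive them of their right to decide?",
    "Are clients in control of the collection and use of their data?",
]

@dataclass
class EthicsReview:
    measure: str                                   # the digitalization measure under review
    owner: str                                     # named person who remains accountable
    answers: dict[str, str] = field(default_factory=dict)

    def record(self, question: str, answer: str) -> None:
        self.answers[question] = answer

    def approved(self) -> bool:
        """Roll-out is blocked until every question has a documented answer."""
        return all(self.answers.get(q, "").strip() for q in REVIEW_QUESTIONS)

review = EthicsReview(measure="automated shift scheduling", owner="Jane Doe")
review.record(REVIEW_QUESTIONS[0], "Employees can override the generated schedule at any time.")
print(review.approved())  # False: the client-facing questions are still unanswered
```

The point of the sketch is the gate, not the data structure: a measure cannot ship while an ethical question stands unanswered, and a named person, not an algorithm, signs off.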
Keeping questions like these in mind will contribute to ethically informed decisions when advancing digitalization. We need to make sure that we continue to see technological progress as a conscious choice, subject to common standards of responsibility. Only by doing so can we make sure that companies can be held accountable for the impact of their technological choices. If we surrender in the face of digitalization, we willingly let others guide us back into the dark cave of unenlightenment that we escaped so forcefully many years ago.