
Public values and human rights should guide the development and use of artificial intelligence

Published on 16.6.2023
Tampere University
Illustration: Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0.
The ethical and legal concerns, risks and problems related to artificial intelligence (AI) have sparked much discussion in recent years. A research report prepared at Tampere University presents standards for the ethical design, development, and use of AI applications. The report draws attention to shared public values and clear and effective regulation that would safeguard both human and fundamental rights.

The research report, written in the field of philosophy, examines recent trends in AI ethics. It describes a multilevel approach to AI ethics that covers the entire lifecycle of AI applications.

Doctoral Researcher Otto Sahlgren’s research report was published as part of Tampere University’s multidisciplinary project Human-Centered AI Solutions for the Smart City (KITE). The project researched human-centred AI design and developed AI solutions together with various stakeholders, including small and medium-sized enterprises in the Pirkanmaa Region. Ethics is one of the central aspects emphasised in human-centred design.

“Many ethical guidelines and principles of AI have been presented in the last decade. In the project, we looked at how their central values and principles could be distilled into an understandable framework without overlooking the complexity of AI-related ethical questions,” Sahlgren says.

The report presents four general principles that the planning, development, and use of AI should comply with:

  • avoiding unnecessary and disproportionate harm to humans, non-human animals, and the environment
  • promoting the good of humans, the well-being of non-human animals, and the flourishing of the environment
  • protecting freedom and autonomy, and respecting human dignity
  • complying with applicable laws and regulations, and promoting procedural, distributive, and relational justice.

These principles should guide the choices and actions of responsible persons throughout the lifecycle of an AI application, from the initial brainstorming of a system to its decommissioning. However, ethics requires much more than that.

“Ethical principles are an important first step, but ethically sustainable AI also requires legislation and concrete actions in organisations,” Sahlgren emphasises.

The report considers different actors and levels of activity in order to offer a wide range of theoretical insights and practical tools.

“The ethical perspectives of AI should be examined on multiple levels, such as legislation and the organisational level. The same applies to the individuals and teams developing the applications,” Sahlgren points out.

AI legislation should safeguard human and fundamental rights

Legislation on the development and use of AI is currently being drafted, most notably in the form of an EU regulation, the proposed AI Act. Sahlgren’s report recommends that legislation should establish and protect the rights of persons affected by AI. Developers and decision-makers should have special responsibilities to ensure compliance with human and fundamental rights. The report also proposes bans on AI applications or use cases that violate these rights.

“In the report, as in the draft AI Act, I propose a total ban on, among other things, ‘social scoring’. To ensure the legality and ethical sustainability of AI applications, special rights and responsibilities should also be set,” Sahlgren points out.

Individual rights would ensure that people can challenge decisions made about them with AI applications. People should have access to evidence about decisions that concern them. According to Sahlgren, organisations should have a duty to assess, for example, the human rights, data protection, non-discrimination, and environmental impacts of applications both before they are introduced and regularly throughout their use.

Professor of Social Philosophy Arto Laitinen, who supervised the research and the report, says that drafting ethical guidelines requires knowledge both of ethical principles and rights and of the detailed practices and techniques of the target domain. Even though philosophical ethics emphasises the same general principles, AI and other specialist fields, from genetic manipulation to research ethics, still have their own unique features.

“For example, AI’s so-called ‘black box problem’ is an ethically relevant characteristic of artificial intelligence. In what kinds of tasks can we rely on a system that does not reveal the grounds for the recommendations it makes? On the other hand, ethical principles that are not specific to AI must also be fully realised. We cannot think that human rights or the loss of biodiversity do not matter in the context of advanced technology,” Laitinen stresses.

Ethical practices are also needed at the organisational and individual levels

The report emphasises that ethics and social responsibility should also be visible in local, practical contexts where technology is being developed and used.

It is important to ensure that responsible agents have the necessary skills, resources and means to identify and promote right conduct and practices.

“Research has found that designers and developers often find it hard to apply abstract ethical principles in practice,” Sahlgren says.

“The principles should translate into standards, practices and tools that support the ethical actions of individuals. This requires that the responsible agents recognise and articulate the ethical goals and objectives that the AI applications should reflect,” Sahlgren adds.

Not all ethical problems that emerge in the real-world contexts where technologies are used can be identified in advance. Effectively identifying ethical risks and problems requires understanding an AI application as part of a larger socio-technical system: the interaction of people, technologies, and wider societal institutions and structures.

The research report offers many analytical tools for, among other things, assessing risks and managing complexity.

“However, organisations developing and using AI should also invest in multidisciplinary and wide-ranging expertise and genuinely collaborate with different stakeholder groups. These are central ways to promote ethical sustainability,” Sahlgren points out.

Sahlgren and Laitinen will present the report at a discussion event organised by the Finnish Internet Forum at Tampere University on 19 June 2023.

Read the report
