AI can help leaders develop bolder visions and make more informed decisions

Published on 18.9.2020
Tampere Universities
Tekoälypomo (AI boss)
Photo: Teemu Launis
Managers can use weak signal detection algorithms to manage risks, identify problems and aid decision-making. Still, we humans must bear the ultimate responsibility for our decisions and actions. Emotions play an important role, too.

Should we invest in a new product that could make or break our business? Or should we play it safe by focusing on an existing product that enjoys a solid position in the domestic market but stands little chance of international success?

The executive team of a company meets to discuss the plan. The attendees sharing their views include the CEO, CFO, sales director – and an AI. 

How does that sound?

AI analyses data but humans call the shots

Artificial intelligence (AI) is steadily interweaving itself into our daily lives, so much so that it often blends into the background. AI has tremendous potential for applications in heavy industry, healthcare, social services and consumer markets.

Smartwatches are already tracking the quality of our sleep and providing personalised workout advice. Advertisements for running shoes appear in marathon runners’ news feeds on Facebook. GPS automatically alerts drivers to upcoming construction zones. Intelligent systems are managing accounting and book-keeping processes, and doctors are using AI as a diagnostic tool. 

Ideally, AI will not take away our jobs but allow us to focus on what we do best. A salesperson climbing into her car in the morning might consult an AI system to find out which customer to meet first. While driving, she could devise an effective sales strategy based on data provided by the AI about the customer’s order history, recent business developments and decision-making style.  

Teemu Laine, professor of industrial engineering and management at Tampere University, dismisses the romanticised image of an AI making executive decisions.

“Instead of calling AI the new boss, I see it as a resource that we can ask questions. AI systems do offer great promise despite being inherently restricted by the data they receive,” he says.

AI improves human decision-making

Laine leads the multidisciplinary NewBI5 project underway at Tampere University to explore the potential of advanced analytics for management and business.

He believes AI can bring new perspectives and add depth to the debate leading up to a decision. Communication is an important element of any decision-making process, and AI can enrich that discussion and so help us make better choices.

“AI can teach us to ask better questions and help us identify the values and assumptions that underlie our decision-making. Instead of only pushing for the faster development of AI, we could also challenge ourselves to engage in bolder and more wide-ranging debate. It is important to openly discuss how fully we are able to understand what goes into the analysis that precedes decision-making. AI can challenge us to do this, too,” Laine says.

Hasty and superficial discussion leads to mechanical decision-making and increases the risk of AI being perceived as a black box that spits out numbers.

At worst, the key mechanisms and reasoning behind AI-driven decisions may remain opaque, though admittedly human decision-making that is based on complicated calculations from spreadsheets may well be fraught with similar transparency problems.

Humans must take responsibility

One of the areas where AI comes in handy is the analysis of consumer data. Learning algorithms can quickly organise data and identify connections within it. Besides detecting who buys what and where, these algorithms are capable of spotting weak signals that may easily go unnoticed by humans, especially amidst the hectic pace of the business world.
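
What such weak-signal detection can look like in practice is illustrated by the following deliberately simple sketch in Python. It is a hypothetical example, not part of the NewBI5 project: it flags weeks whose sales drift away from a rolling baseline by more than a chosen threshold, while the decision on whether and how to react remains with a human.

```python
# A minimal, illustrative sketch of "weak signal" detection: flag weeks where
# sales drift away from their recent baseline before the change becomes obvious.
# The figures and thresholds below are invented for illustration.
from statistics import mean, stdev

weekly_sales = [102, 98, 101, 99, 103, 97, 100, 94, 92, 90, 88, 85]  # made-up data

WINDOW = 6       # how many past weeks form the baseline
THRESHOLD = 2.0  # how many standard deviations count as a signal

for week in range(WINDOW, len(weekly_sales)):
    baseline = weekly_sales[week - WINDOW:week]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        continue
    z = (weekly_sales[week] - mu) / sigma
    if abs(z) >= THRESHOLD:
        # A human still decides whether, and how, to act on the flagged week.
        print(f"Week {week}: sales of {weekly_sales[week]} deviate {z:+.1f} sd from the recent baseline")
```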

At its best, AI will not merely churn out massive amounts of data, but will also, for example, tell the production manager of a factory whether an order with an imminent due date is a cause for concern. For instance, is the health and wellbeing of staff at a level that allows the factory to meet a large order on time?

If the production manager can focus on resolving the problems uncovered by AI instead of poring over reports, working time is saved and employees’ productivity and sense of meaning in their work may improve. The early detection of emerging risks and opportunities enables a proactive response.

Still, people must decide how to respond to signals identified by AI – or whether to respond at all. 

“Humans are responsible for upholding ethical standards. We will continue to be held accountable for the consequences of our decisions and for our choice of whether or not to look into risks flagged by AI,” Laine says.

Photo: Teemu Launis

The importance of emotion in the workplace

By virtue of being human, we may not always act ethically. Emotions and attitudes are always present in human communication and affect the decisions we make. Given that AI cannot experience emotions such as envy and the desire for power, could AI sometimes make a better and fairer boss compared to a human?

“That sounds quite dangerous,” says Johanna Ruusuvuori, professor of social psychology.

Ruusuvuori and Teija Ahopelto, who is writing her doctoral dissertation in social psychology, bring their expertise in social interaction to the NewBI5 project. Ruusuvuori points out that our values and feelings are integral to everything we do. Brain research has demonstrated that our ability to feel emotions is fundamental to our ability to function.

Ruusuvuori tells a story about a person who sustained an injury in the part of the brain that controls emotions. He was unable to work after the injury: he appeared to be hard at work pushing papers back and forth, but he was getting nothing done. He simply did not know how to go about it.

“Emotions help us navigate our personal relationships and life. If we were to take emotions out of the equation, I am not convinced that we would be making better decisions, maybe even quite the opposite,” Ahopelto argues. 

An objective AI recruitment tool may sound like a good idea in principle. With personal chemistry and emotions out of the picture, would it not be only fair to hire the candidate whose objectively measured abilities are the closest match to the requirements of the role?

“But how could AI compare, let’s say, candidates’ CVs? Would a long list of publications outrank one in-depth study that revolutionised our knowledge and understanding of a topic?” Ruusuvuori asks.  

The idea of a completely value-free robot boss also conflicts with the fact that values are among the basic building blocks that govern corporate activity and decisions. That company decisions reflect values is by no means a bad thing.

Algorithms reflect human values and emotions

When looking to make a convincing argument, we may routinely attempt to subordinate our feelings to reason. In strict either/or terms, feelings may be seen as vague and unreliable, whereas logic is something that keeps us from being swayed by the fickle tides of emotion.

In fact, emotion and intuition have been shown to play a fundamental role in the workplace. Many executives, sales professionals and recruiters admit that at times they base their decisions on intuition. Whether we are talking about decisions made based on AI information or human reasoning alone, we should openly discuss the reasons and values that underlie our actions.

Ruusuvuori and Ahopelto emphasise that no matter how intelligent machines are, they can never be completely devoid of human emotions, attitudes and values.

For example, humans decide what data is fed into an AI algorithm. This decision is a value judgement that excludes other possible options. As AI has the potential to perpetuate social inequality, we should openly discuss the human biases and prejudices that may slip into our algorithms. 

“AI was used to carry out a study on criminal recidivism in the USA. The study claimed that black men commit more crimes and are more likely to reoffend than white men,” Ahopelto says.

The predictions turned out to be severely skewed. Algorithms may reflect racist and sexist biases and the inequalities of our society.
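
To make that mechanism concrete, here is a deliberately tiny, hypothetical sketch in Python; it is not the study Ahopelto refers to, and every number in it is invented. If the historical labels over-record reoffending for one group, even a trivial model that only learns group-level rates will echo that skew back as a difference in predicted "risk".

```python
from collections import defaultdict

# Hypothetical "historical" records of (group, reoffended-label).
# Group B's labels are skewed by biased recording practices; all numbers invented.
training_data = [("A", 0)] * 70 + [("A", 1)] * 30 + [("B", 0)] * 50 + [("B", 1)] * 50

labels_by_group = defaultdict(list)
for group, label in training_data:
    labels_by_group[group].append(label)

# A trivial "model" that learns only the group-level reoffending rate
# reproduces the skew in the data as a difference in predicted risk.
for group, labels in sorted(labels_by_group.items()):
    predicted_risk = sum(labels) / len(labels)
    print(f"Group {group}: predicted reoffending risk {predicted_risk:.0%}")
```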

Can a robot understand sarcasm?

Besides emotions, contextual understanding is an area where humans outperform AI. Depending on context, the same word may be considered wholly inappropriate or a harmless joke. Humans can pick up on subtle nuances of speech and understand why we may adapt our speech style to match the social context.

“Membership categorisation analysis allows us to uncover the values that are implicit in our everyday conversations, such as undercurrents of sexism. Do we, for example, automatically associate motherhood with childcare in our speech? While AI could surely carry out such an analysis, the problem is that all human activity is contextual and there is an infinite number of possible contexts. When developing AI systems, it is important to strictly narrow down the scope of their applicability,” Ruusuvuori says.

For example, behaviour that people can immediately identify as witty banter between friends could be mistakenly labelled workplace bullying by AI. Both Ahopelto and Ruusuvuori doubt whether AI can ever reach such a level of sophistication as to be able to detect all the subtleties of human communication and read between the lines.

Help for clarifying values

So, the experts agree that AI cannot replace human bosses, nor is it intended to do so. Nevertheless, AI will be able to help both managers and employees think more broadly and from different angles.

“Even if an AI-generated report is so narrow as to be erroneous, it can still be useful if we bring the assumptions and perspectives embedded in the algorithms out into the open. This type of discussion can shed light on the reasoning that lies behind our decisions, clarify our values and help us understand what we want to do and why,” Laine says.

Ruusuvuori and Ahopelto stress that we should also discuss whose perspective dominates our understanding of the expected benefits of machine learning. Does it make sense to save time or money in the short term if it will have negative effects on employee performance, job satisfaction or customer experience in the long term? 

“If AI can help us, so much the better, but we should be asking how exactly it is supposed to be helping us,” Ruusuvuori says.

NewBI5

A multidisciplinary project that explores the potential of advanced analytics for management and leadership.

The project seeks to improve our understanding of management and leadership, and to identify new AI-based activities, services and the potential for added value. 

The project explores AI from economic, technological and social science perspectives.

Partner companies are closely involved in the project, from conceptualisation to commercialisation.

Author: Janica Brander