AI will profoundly change the world, including the way we work. It is expected that AI will ultimately take over mundane tasks, leaving us free to engage in creative and visionary pursuits. However, Lauri Lahikainen, a social scientist at Tampere University, wonders whether it will be possible for AI to truly liberate us from repetitive tasks so that creative work is all that remains.
“Even creative jobs involve a great deal of routine. If we take away the drudgery, we might be left with something other than what we expected,” he says.
Creativity requires freedom of thought and occasionally a high tolerance for chaos, but routines provide essential structure and predictability and give us a sense of control.
Joni Kämäräinen, associate professor of signal processing at Tampere University, says that we must be prepared for AI’s growth in the workplace to radically transform our society. He believes that it is entirely plausible that only 50% of the working-age population will have jobs in the next 20–30 years.
What will this mean for our society?
Pekka Pöyry, senior lecturer at Tampere University of Applied Sciences, says that the integration of AI into the workplace is certain to have important implications for the field of education. He argues it is difficult to estimate how many jobs will eventually be lost.
In fact, when talking about AI, we have tended to overestimate its short-term effects but underestimate its long-term effects.
AI is now mainly deployed to perform routine tasks more quickly, efficiently and accurately than humans can. For example, banks are using AI to make informed credit decisions.
The experts see this as a natural stage in the continuum of technological progress stretching far back into history. Excavators displaced legions of men with shovels, and keypunch operators have likewise become a thing of the past. In industry, AI is continuing what automation started.
In health care, the power of AI has been harnessed to improve cancer screening, among other things. Kämäräinen and other researchers at Tampere University have examined how AI could detect the early signs of breast cancer in mammograms. AI already outperforms experienced oncologists in the interpretation of mammograms, allowing high-risk patients to be monitored more closely and any cancer to be found and treated early.
AI touches virtually every aspect of our lives. It tracks our online activity to help advertisers better target their ads. It matches us with potential partners on dating apps and offers personalised styling advice when we are shopping for clothes online. The latest smartphones are equipped with a camera that is capable of recognising what is in front of the lens and adjusting the settings accordingly. Phones can also understand spoken commands.
When leaving a parking garage, we may no longer have to go through the awkward process of inserting a ticket into a slot through the car window. Instead, an image recognition algorithm reads our license plate and determines whether we have paid the parking fee.
License plate recognition is especially useful in ports and work sites where access requires a permit. Heikki Huttunen, professor at Tampere University’s Department of Signal Processing, believes that ideally AI will assume a supportive role rather than work independently.
“AI should free human workers to only handle the trickiest cases, which in the license plate scenario would be the muddiest number plates or photos with the sun shining directly into the camera,” he states.
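The division of labour Huttunen describes can be sketched as a simple confidence threshold: the system handles readings it is sure about and routes the rest to a person. The sketch below is purely illustrative; the recognizer is a stub standing in for a real image-recognition model, and the file names, plates and threshold are invented.

```python
# Hypothetical sketch: route low-confidence plate readings to a human operator.
# recognize_plate() is a stub standing in for a real OCR/vision model.

def recognize_plate(image_name: str) -> tuple[str, float]:
    """Stand-in for an image-recognition model: returns (plate, confidence)."""
    fake_results = {
        "clean_photo.jpg": ("ABC-123", 0.98),
        "muddy_plate.jpg": ("A8C-1?3", 0.41),   # mud obscures characters
        "sun_glare.jpg":   ("???-???", 0.12),   # sun shining into the camera
    }
    return fake_results.get(image_name, ("UNKNOWN", 0.0))

def handle_exit(image_name: str, threshold: float = 0.90) -> str:
    plate, confidence = recognize_plate(image_name)
    if confidence >= threshold:
        return f"auto: open barrier for {plate}"
    # The tricky cases – muddy plates, sun glare – go to a human.
    return f"manual: send {image_name} to human operator"

print(handle_exit("clean_photo.jpg"))  # auto: open barrier for ABC-123
print(handle_exit("muddy_plate.jpg"))  # manual: send muddy_plate.jpg to human operator
```

Raising the threshold sends more cases to the operator; lowering it automates more but risks opening the barrier on a misread plate.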
At its best – or at its worst – AI is a highly effective technology. There are countries where, for example, the government monitors its population. The arrival of AI means that these governments now possess unprecedented surveillance capabilities. AI can also be used for profiling purposes.
According to Forbes, at least the USA, Russia, Israel and China are racing to develop military killer robots that would be more effective than human soldiers but are unable to reliably distinguish between combatants and civilians. The United Nations has called for a ban on fully autonomous weapons, but so far the attempts have been unsuccessful.
Joni Kämäräinen agrees that this particular direction of AI development is disconcerting.
“I am worried about the world going increasingly mad, because in advanced AI we have a technology that could be misused with disastrous consequences,” he says.
Researchers know that the technologies they are studying and developing are being used for malicious purposes. Few would agree to design offensive military robots, but there are always some who will. Indeed, some researchers believe that AI has already gone too far.
“We can ask whether it is ethically or morally acceptable to use facial recognition technologies to track people. We are continuously leaving behind digital traces of our activities, whereabouts and interactions. The downfall of anonymity is an alarming development,” Pekka Pöyry says.
Pöyry adds that the use of AI to change the nature of war or reinforce social inequalities ultimately comes down to people and not to AI being somehow inherently evil.
“I would not blame technology, but the people who misuse it,” he adds.
Joni Kämäräinen says that as a scientist, he cannot leave the topic unexplored, because new technology can also be a force for good.
Heikki Huttunen believes that the more people really understand AI, the better are our chances of preventing misuse.
“What makes AI more democratic is the fact that computing power is cheap and algorithms are openly accessible. On top of that, the use of AI requires open data – which is already widely available – and AI skills,” Huttunen notes.
In the 1984 science fiction film The Terminator, the all-powerful AI network Skynet becomes self-aware and attempts to eradicate humankind. Could this happen in real life?
Current AI is still far away from such a sci-fi dystopia, although the field has made some tremendous strides in recent years. AI is often divided into weak and strong AI based on the level of autonomy. However, Heikki Huttunen finds this division somewhat artificial.
“It is not a simple question of one or the other. Some researchers claim that the ability to understand natural language is already a hallmark of strong AI. In this sense, we are close to achieving strong AI,” Huttunen says.
If we are communicating with an AI that understands natural language, we will not be able to tell if we are speaking with a human or a machine.
Huttunen believes that in the next 10 to 20 years, Apple’s Siri – or whatever equivalent is around at the time – will have reached such a high level of sophistication as to be indistinguishable from a human conversational partner. Still, truly human-like artificial superintelligence is a long way off.
“For it to become a reality, we would have to develop a general-purpose AI that is sent out with no other task than to explore and learn about the world. We do not have the tools for that yet,” Joni Kämäräinen says.
A more pressing issue than dystopian sci-fi visions may be the lack of transparency in AI decision-making. If banks and insurance companies are making AI-based decisions, they must be able to open up the “black box” and explain how the AI comes to its conclusions.
“The logic and rationale behind AI decisions may remain opaque to the person concerned and even the person responsible for the decisions. The decisions must be traceable and explainable to be contestable,” Lauri Lahikainen says.
Those using AI must understand how it works and how it reaches its decisions, so that they do not unwittingly hand over their decision-making power to a machine.
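One way to open the “black box” Lahikainen describes is to report each factor's contribution alongside the outcome, so the decision can be traced and contested. Here is a minimal, hypothetical sketch of such a traceable credit decision; the features, weights and threshold are all invented for illustration, not taken from any real bank's model.

```python
# Illustrative only: a linear credit score whose every term can be shown
# to the applicant, making the decision traceable and contestable.

WEIGHTS = {"income_keur": 0.8, "years_employed": 1.5, "missed_payments": -4.0}
THRESHOLD = 30.0

def credit_decision(applicant: dict) -> tuple[bool, dict]:
    """Return (approved, per-factor contributions to the score)."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

approved, reasons = credit_decision(
    {"income_keur": 40, "years_employed": 3, "missed_payments": 2}
)
# score = 32.0 + 4.5 - 8.0 = 28.5, which is below the threshold of 30.0
print("approved:", approved)
for factor, value in sorted(reasons.items(), key=lambda kv: kv[1]):
    print(f"  {factor}: {value:+.1f}")
```

A declined applicant can see exactly which factor pulled the score down – something a deep neural network does not offer out of the box, which is why explainability is an active research area.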
If trained on biased data, AI can learn to discriminate. This could happen, for example, in recruitment: if a company has mainly hired white men in the past, AI may teach itself to favour white male candidates.
“This preference may be due to a biased algorithm that reflects existing patterns of social inequality rather than actual competence,” Lahikainen says.
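The recruitment example can be reproduced with a toy model. Below, a naive “model” trained on fabricated historical hiring data simply learns the past hire rate per group, and so inherits whatever bias the data contains; the groups and numbers are invented purely to illustrate the mechanism.

```python
from collections import defaultdict

# Fabricated "historical hiring" records: (group, was_hired).
# In this invented past, group A was hired far more often than group B.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

def train(records):
    """A naive model: score each group by its historical hire rate."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for group, was_hired in records:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

model = train(history)
print(model)  # {'A': 0.8, 'B': 0.2}
# Equally competent candidates now receive unequal scores: the model has
# learned the pattern of past inequality, not actual competence.
```

Real recruitment systems are far more complex, but the failure mode is the same: a model fitted to biased outcomes reproduces them.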
How can we make sure that AI is used ethically and transparently, and continues to be in the future?
According to Huttunen, the responsibility for developing transparent AI is spread far and wide. It lies not only with corporate executives, politicians, journalists and researchers, but also with consumers who willingly share their data online in exchange for benefits. For example, users of the popular FaceApp or Facebook may be giving up much more information than they think.
Lahikainen is calling for more rigorous constraints and public debate about the use of consumer data. Online services such as Google and Facebook have infiltrated our lives to such an extent that it is difficult to imagine life without them.
In spring 2019, the EU released guidelines for the ethical development of AI. While not legally binding, these guidelines could serve as a framework for future legislation. In a new government programme, the Finnish government is pledging to support the ethical, social and financial regulation of AI.
The new technologies that incorporate AI are raising questions about liability, especially in the case of autonomous cars. Who is liable for an accident caused by a self-driving car? Lawmakers have yet to pass legislation regulating self-driving cars, as the vehicles are still in the testing stage, but the Swedish manufacturer Volvo has already assumed liability for accidents involving its self-driving cars.
“This shows that Volvo is confident that not only are its cars safe, but also that autonomous cars are safer than cars driven by humans,” Huttunen says.
Safety aside, expectations for driverless technology are high. Because self-driving cars will not accelerate or brake unnecessarily, they will consume less fuel than conventional cars. They will also be able to analyse traffic data and choose less congested routes. In public transport, autonomous vehicles could significantly reduce personnel costs.
For researchers, AI is at times fascinating and at times tedious. When Professor Joni Kämäräinen attended the world's leading AI summit in Boston in 2015, he was thrilled and could hardly believe the applications on show. When he returned to the same summit this past summer, that giddy excitement had given way to boredom.
Kämäräinen dismisses many of the solutions deployed in the past few years of AI fever as too gimmicky. He believes that AI should above all be harnessed to address large-scale social problems. For example, the quality of health care should be developed holistically rather than focusing on designing individual cancer screening algorithms.
“Great breakthroughs will not change anything unless they are adopted across the cancer care continuum,” Kämäräinen points out.
The key is multidisciplinarity. Kämäräinen believes that, for example, the development of elderly care should be a collaborative effort involving AI scientists, sociologists, geriatricians, psychologists and psychiatrists.
“Researchers should wake up and study real problems. I am afraid that many of the current AI-powered clever tricks will eventually be lost and forgotten,” he says.
Kämäräinen is also championing the development of unsexy AI technology. He is convinced that robots designed to help elderly people use the toilet could bring massive benefits, as people could live in their homes for longer. However, amid all the hype surrounding sexy, futuristic AI, the idea of a toilet assistant robot would need some clever branding.