The first practically useful industrial robots were introduced in car factories at the beginning of the 1960s. Talk of service robots became more common sometime in the 1970s, when machines, devices or tools programmable by a computer, otherwise called manipulators, were also put to use outside factory production lines.

A central distinction between factory robotics and field robotics concerned the nature of the operational environment. In factories, the placement of the robot was deliberate, and the robot followed predetermined trajectories. Under field conditions, however, robots were forced to move and operate in “natural” or “unstructured” environments.

Service robots, or field robots, of this type consisted mostly of remote-controlled devices used to carry out difficult tasks in environments unsuited or even impossible for humans to operate in. For this reason, the trinity of “dirty, dull, dangerous” became a famous rule of thumb for depicting the scope of these robots. They were still bound by the same general rule as factory robots, though: they were to be isolated or distanced from workers to guarantee the workers’ safety.

Paradigm shift in robotics: working with humans

Today, the situation is nearly the opposite. Humans no longer need to keep their distance from robots, not even in factories, where a robot can install components next to the worker during assembly. Technology and related standards have together created the conditions for safe side-by-side use, along with a basis for bringing humans and robots closer together. At the same time, the previously acknowledged distinctions have lost some of their relevance. Remote control by humans is still relevant in operating a surgical robot, but this is not so much due to the dangers involved as to the precision required by the work.

On the other hand, autonomous robotics has already become a part of everyday consumption. Roombas vacuum floors, and robot lawnmowers operate amidst everyday life. Consumer robotics has, in a sense, already begun to take on social aspects.

A more significant change is the extension of autonomous robotics, i.e. artificial intelligence and machine learning, to human social interaction. This research direction has gathered strength especially since the 2000s, when humanoid robots learned to stand on their own two feet and their facial expressions became more meaningful. Machines were no longer just instruments of communication; instead, the control of communication and the processing of interaction became integral aspects of the machine itself.

The development of the social robot is only in its infancy. The learning algorithms materialized in that process may, however, at some point revolutionize our relationship to machines, in which humans have traditionally been the masters of the machines they have designed. The question will certainly not remain hanging in the air, as intelligent interactive robots bring it down to the level of individual relationships.

Machine as a subject of humanities

The limitations of technical capacities have largely guided both the design and the execution of robots working in production and services. Robots have traditionally belonged to the purview of engineers, programmers and technocrats. However, the intellectualization, humanization and socialization of robots have created a need for ever more input from non-technical disciplines.

Various fantasies, utopias and dystopias have entertained other options, but so far working machines have been controlled by humans. Humans invented tools as extensions of their hands, which in turn were controlled by the brain. All machines did was increase the efficiency of the work further. The remote-controlled field robot was an extension of the human hand, too, but it made levers and actuators reachable by radio waves. Even with programmable machines, the master-slave relationship was clear at the beginning. Computers were merely seen as speeding up calculations that would previously have been done with pen and paper, a slide rule and a calculator.

The speed of the calculating machines, also known as computers and robots, offered much to marvel at in popular treatments (e.g. Strehl 1955). However, the increase in speed, incalculable by the standards of the time, soon aroused the misgivings of experts, who worried that machines might one day develop beyond human control. Perhaps the most serious and authoritative of these voices was Norbert Wiener, who contributed a wide-ranging discussion of the post-war societal use of cybernetics, a field he himself had developed (Wiener 1989, originally 1950; compare Pickering 2010).

The problem was not that researchers were unable, given the chance, to comprehend what the computer was doing. Rather, it was the speed of the decisions made by the machines that constituted the threat. Wiener laid out a vision of its most extreme potential consequences, toying with the idea of the computer as a tool of warfare amidst the Cold War. If the computer had been assigned the task of simply winning the war, it could well have ordered a bomb to be dropped before the programmer had the chance to cancel its decision.

Despite the benefits of the technology, Wiener did think that the use of automated machinery necessitated a heightened awareness that progress may also take another course. Who precisely, then, should be aware? C. P. Snow, who delivered the Rede Lecture at the University of Cambridge in 1959, offered his own scientific perspective. His seemingly innocent question was whether humanists and literary intellectuals should be better informed about advancements in technology. Industrialization and the use of technology had progressed so far that ancient literature could no longer be relied on for guidance. Snow did not yet have the chance to speak about working robots or artificial intelligence: the first industrial robot was patented only shortly after the lecture, and the possibility of a thinking machine had only just been raised at a meeting held in Dartmouth in 1956. However, his stance on the nature of scientific revolutions, namely that automation defined progress alongside electronics and nuclear power, was certainly applicable to robots.

Snow’s views sparked outrage in humanist circles. Especially in England, the discussion of “the two cultures” was intense and prolonged. The idea of a gap between these “two cultures” was to remain Snow’s stamp on the history of science, with multiple editions published of the book bearing that exact title. To be sure, Thomas S. Kuhn’s take on the nature of scientific revolutions and the normativity of scientific research, published in 1962, exerted a more significant influence on the philosophy of science, but both works can be regarded as symptoms of the same trend: academia began to see the natural sciences and their technological consequences as deserving and requiring interpretations based on the social sciences and philosophy.

Snow and Kuhn started out as natural scientists, but there were also figures like Robert Solow, an economist who brought technology into his growth-model equation, or Arnold Gehlen, who approached the objectifying, self-defining nature of technology purely from the perspective of philosophy. There was also Langdon Winner, who drew attention to the political choices inherent in technological artefacts, pointing out that the gap between technological reality and individuals’ views of it was set to increase. Winner can be counted among the representatives of the wave of technology criticism that began at the start of the 1970s, concerned with the preconditions of information use and with building a more social-science-based understanding of the relationship between society and technology.

Criticism of the planning and use of technology from the perspective of the sociology of science and the sociology of knowledge increased in the 1980s. Alongside purely technological knowledge, research also came from the fields in which the benefits of technology were to be realized. This research showed that in practice there were many unintended consequences in addition to the planned goals. For example, Lucy Suchman’s observations on the efficacy of digital user interfaces led to a realization of the importance of situated social practices. In the robot design of today, and especially in robot criticism from a social-science perspective, it is considered nearly self-evident that contributions from the social sciences are needed to support co-design.

Can we learn from robots?

The socio-technical turn took place amidst a breakthrough of artificial intelligence, a field that was reaching a new level both in research and in the application of results. The “good old-fashioned artificial intelligence” familiar from the Dartmouth days was joined in the 1990s by the paradigm of learning artificial intelligence. The main aim was no longer a pitch-perfect, computer-modeled representation of the surrounding symbolic reality. Instead, researchers settled for teaching the machine in small steps. This new approach was sparked by the fact that even the most recent microcomputers could not process the masses of data collected from the environment according to the standards of the old paradigm; simply put, computers collapsed under these impossible operations. Learning by smaller steps produced results extraordinarily swiftly. Not long after the beginning of the 2000s, robot vacuum cleaners were available in shops.

Scientists then applied the same principle to social robotics. Now, if ever, humanists and social scientists were challenged to take a stance on the direction of technological research. Leaving machines designed for social activity solely to the responsibility of technical expertise has come to seem almost anachronistic, presumably in the opinion of many roboticists, too.

In theory, robots, especially social robots, offer a near-perfect springboard for the further development and application of research-based knowledge about the relation between science, technology and society. In fact, Steve Woolgar and Harry Collins, having analyzed the turn in technology research, raised the question of whether sociologists could gain new perspectives from artificial intelligence already at the end of the 1980s. Back then, people talked of expert systems, in which expert knowledge could be compressed and packaged into a compact form to direct a machine’s behavior. Controversially for their time, the researchers put forward the view that this offered a chance for sociology to renew itself as well.

The question is even more pertinent now that social robots may also modify their own algorithms on the basis of social interaction, whereas before they were limited to expert knowledge. The machine does run off on its own, though not necessarily amok as Wiener predicted, but only in the sense that it is able to choose logical pathways independently. Still, the speed of artificial intelligence is part and parcel of this, as is the fact that researchers do not always recognize all the thought processes of an artificial intelligence they themselves have designed. In a way, we are making a conscious decision in choosing uncertainty, because it leads to useful results.

Of course, it is obvious that new technology will have unintended consequences in the future, too. By necessity, technological reality also forms itself through a process of trial and error. Even in the case of self-driving and intercommunicating cars, we will only get used to them once we use them in traffic. With artificial neural networks, we are going to have to, and perhaps are finally even able to, better control the uncertainties of the future.

The design of ever more complex systems also creates demand for a type of design knowledge not traditionally established in the process of technical design. We are a long way from hammers by now. No longer can technical efficiency be the only basis for letting loose self-driving cars, liners or planes. Rather, wide societal acceptance and multinational approval and regulatory systems are required. The control of a single piece of technology is also subject to the requirements of multidisciplinarity and competent criticism. After all, we probably would not dare to let a nursing robot care for a patient suffering from dementia or Alzheimer’s without hearing a nursing professional’s opinion on the matter.

It is certainly possible to regard social robots simply as tools and amusements of the smart age. However, they should be consistent with ethically solid and maybe even politically balanced modes of interaction and behavior. Echoing Winnerian technology critique, one could hope that robots would be able to communicate views about the state of the world that transcend hedonism, while reducing the understanding gap.

References

Brooks, R.A. and Stein, L.A. (1994). Building Brains for Bodies, MIT Artificial Intelligence Laboratory. http://groups.csail.mit.edu/lbr/hrg/1993/AIM-1439.pdf [accessed 14 Dec 2016].

Collins, H. M. (2012). Expert Systems and the Science of Knowledge. In: Bijker et al. (ed.), The Social Construction of Technological Systems, anniversary ed., Cambridge, US: The MIT Press, 311-328.

Gehlen, A. (1980). Man in the Age of Technology. New York: Columbia University Press. (German original 1957.)

Kuhn, T. S. (1970). The Structure of Scientific Revolutions. Chicago: The University of Chicago Press.

Pickering, A. (2010). The Cybernetic Brain. Sketches of Another Future. Chicago & London: The University of Chicago Press.

Šabanović, S. (2010). Robots in society, society in robots: Mutual shaping of society and technology as a framework for social robot design. International Journal of Social Robotics, Vol. 2, No. 4: 439-450.

Šabanović, S., Reeder, S. M. & Kechavarzi, B. (2014). Designing Robots in the Wild: In situ Prototype Evaluation for a Break Management Robot. Journal of Human-Robot Interaction, Vol. 3, No. 1: 70-88.

Snow, C. P. (1998). The Two Cultures. Cambridge: Cambridge University Press. (First published 1959.)

Solow, R. (1957). Technical Change and the Aggregate Production Function. The Review of Economics and Statistics, Vol. 39, No. 3: 312-320.

Strehl, R. (1955). The Robots Are Among Us. London, New York: Arco Publishers.

Suchman, L. A. (1985). Plans and Situated Actions. The Problem of Human-Machine Communication. ISL-6, Xerox Corporation.

Wiener, N. (1960). Some Moral and Technical Consequences of Automation. Science, Vol. 131, No. 3410: 1355-1358.

Wiener, N. (1989). The Human Use of Human Beings. Cybernetics and Society. London: Free Association Books. (English original 1950.)

Winner, L. (1977). Autonomous Technology. Technics-out-of-control as a Theme in Political Thought. Cambridge, MA: The MIT Press.

Woolgar, S. (2012). Reconstructing Man and Machine: A Note on Sociological Critiques of Cognitivism. In: Bijker, W. et al., (ed.). The Social Construction of Technological Systems, anniversary ed., Cambridge, US: The MIT Press, 311-328.