Humanoid social robots as a medium of communication
New Media & Society
Copyright 2006 SAGE Publications. London, Thousand Oaks, CA and New Delhi. Vol 8(3): 401–419 [DOI: 10.1177/1461444806061951]

This article examines the emerging phenomenon of humanoid social robots and human–humanoid interactions.
A central argument of this article is that humanoid social robots belong to a special type of robotic technology used for communicating and interacting with humans. These robotic entities, which can be in either mechanical or digital form, are autonomous, interactive and humanlike.
Some of them are used to interact with humans for utilitarian purposes and others are designed to trigger human emotions. Incorporation of such robotic entities into the realm of social life invariably alters the condition as well as the dynamics of human interaction, giving rise to a synthetic society in which humans co-mingle with humanoids. More research is needed to investigate the social and cultural impact of this unfolding robotic revolution.

Key words: android • artificial intelligence • communication • human–computer interaction • human–humanoid interaction

I am certain that our future, our near future, will include a burgeoning relationship between humans and intelligent machines. I do not see this as a ‘cyborg’ future, nor do I see it in the overly dramatic science-fiction terms of a potential future competition between humankind and machine . . . It is clear to me that the arrival of new nonhuman virtual intelligence is a wholly natural step for humankind. (Hammond, 2000: 340)

A revolution of humanoid social robots is quietly taking place in our society: autonomous, interactive and humanlike entities of various sizes and shapes are leaving research laboratories in large numbers, making their way into the world of our everyday lives (Menzel and D’Aluisio, 2000; Nightline, 2002). Automated teller machines (ATMs), vending machines and automated telephone response systems are standing in for human attendants to serve real people; online search agents, game bots and chat programs are working for and playing with human users; and robotic dolls and pets are cuddling up with children and talking to the elderly. A large army of ‘relational artefacts’, ‘friendly machines’ and ‘socially intelligent robots’ is invading the realm of human social life, sharing the living environment with people, communicating emotionally with humans and learning ‘right’ and ‘wrong’. Some of these sociable robots are even capable of interacting with humans with facial expressions, gaze directions and voices, mimicking the affective dynamics of human relationships. This emerging movement of social roboticization is causing a fundamental change in the meaning of social interaction and the nature of human communication in society.
Human–humanoid interaction, which is the focus of the present study, needs to be distinguished from computer-mediated communication (CMC), human–computer interaction and ‘post-human’ cyborgization. CMC is human-to-human contact through electronic mediation (Hiltz and Turoff, 1978). There, ‘computer’ is used as a generic term referring to those electronic devices which enable physically separated individuals to communicate with one another instantaneously across distances. Two-way radios, telephones, networked computers and, more recently, the internet are examples of such devices. Humanoid social robots differ from CMC technologies in that they are not a medium through which humans interact, but rather a medium with which humans interact. Acting as human surrogates, humanoid social robots extend the domain of human expression, discourse and communication into the computerized world.
Human–computer interaction overlaps with human–humanoid interaction to a certain extent. Both involve computers that serve as an interactive medium. However, in human–computer interaction, the computer is often part of a larger technical operation that requires human participation. Research in human–computer interaction as pioneered by people such as Douglas Engelbart (1963) aims to augment human intellect through the creation of ‘user-friendly’ interfaces that are optimized to the physical and psychological characteristics of human operators. The optimization of computer–user interaction at the interface level allows humans to work more effectively within a technical system. Unlike human–computer interaction, human–humanoid interaction goes beyond the level of interface exchange in becoming part of the discursive communication that characterizes human–human interaction. Humanoid social robots are not user-friendly computers that operate as machines; rather, they are user-friendly computers that operate as humans.
‘Post-human’ cyborgization refers to the phenomenon of technological restoration, augmentation and alteration of human natural capacities and functionality (O’Mahony, 2002). ‘Cyborgs’ thus include not only those who embed silicon chips in their brains to be able to manipulate objects with mere thoughts, but also people wearing restorative prostheses such as pacemakers, artificial limbs, dentures and contact lenses, and those who ‘pump iron’ or take steroids to enhance their muscles or take Viagra to increase their sex drives and prowess (Bell and Kennedy, 2000). Some scholars go even further to include within cyborgization the use of nearly all forms of technology, ranging from screwdrivers, bicycles, cars and airplanes to the internet. In this broader sense of the term, human–humanoid interaction can be regarded as part of the general process of cyborgization, namely, ‘extending humanity’ by means of technology (Zylinska, 2002).
Humanoid social robots serve as ‘prosthetic extensions’ of human individuals by acting as their surrogates in social interaction. Standing in proxy for bank tellers, shop assistants, telephone operators, tour guides, housemaids and playmates, to name but a few, humanoid social robots interact with humans, as humans and on behalf of humans. As a special interactive medium, humanoid social robots enable human individuals to engage in communicative exchanges with others in solitude. Characterized by programmed interactivity, artificial intelligence (AI) and synthetic emotion, this emergent form of human communication is playing an increasingly prominent role in today’s computerized society.
Despite the growing prevalence of humanoid social robots in everyday life, there has been a lack of sociological interest in human–humanoid relationships. In recent years, a great deal of attention has been devoted to computer- or internet-mediated human–human interactions (DiMaggio et al., 2001) and only a small number of sociologists have concerned themselves with issues related to social robotics. Of those sociologists, some have explored the prospects of using AI technologies to improve sociological analysis (Bainbridge et al., 1994; Carley, 1996), others have looked into the contributions that sociology might make to the social robot project (Collins, 1992; Restivo, 2001), and only very few have examined the social impact of human–humanoid interaction (Turkle, 1984, 1995; Wessells, 1990; Wolfe, 1991). Overall, sociologists know relatively little about humanoid social robots and their effects on individuals and society. A major reason for this is that, for the most part, robotic artefacts have been regarded mistakenly as mere technological gadgets or matters that are of concern only to roboticists and, as such, the sociological implications of this important technological development have been overlooked.
This study aims to achieve three objectives. The first objective is to define humanoid social robots. In the existing literature, humanoid robots have been referred to variously as ‘interactive machines’, ‘autonomous agents’, ‘mobots’ and ‘bots’. Obviously, not all of these concepts carry the same connotation. ‘Autonomous agents’, for example, includes all forms of self-directed entities, whether natural or human-made. While ‘mobots’ denotes mobile mechanical robots, ‘bots’ refers to online software agents. In this study, however, ‘humanoid social robots’ has been chosen as a generic concept referring to all the robotic entities, either physical or digital, that are designed to interact with humans in a humanlike way.1

The second objective is to examine the interactions between humans and humanoid social robots. This article argues that, while human–humanoid interaction differs from human–human interaction in important ways, the former resembles the latter in language use, relationality and normativeness. As human surrogates for communication, humanoid social robots are designed to interact with humans not as machines but as humans. Thus human–humanoid interaction is inherently more akin to human–human interaction than to human–machine interaction.
The final objective is to reflect upon the sociological implications of human–humanoid interaction. It will be argued that the recent incorporation of humanoid social robots into the realm of human communication is giving rise to an important social transformation that will eventually redefine society as well as individuals. The social and psychological impact of this unfolding robotic revolution deserves our attention.
HUMANOID SOCIAL ROBOTS

The American Heritage Dictionary for Windows (1994) defines the robot as ‘a mechanical device that sometimes resembles a human being and is capable of performing a variety of often complex human tasks on command or by being programmed in advance’. The second college edition of the American Heritage Dictionary (1991), however, defines the robot as ‘a mechanical device that resembles a human being and is capable of performing human tasks or behaving in a human manner’. The differences between these two definitions are subtle but non-trivial. While both describe robots as ‘mechanical devices’ which perform complicated ‘human tasks’, the second college edition makes human resemblance (‘resembles a human being’ and ‘behaving in a human manner’) a necessary rather than optional condition. Furthermore, the electronic edition allows robots to be either externally controlled (‘on command’) or internally controlled (‘being programmed in advance’), whereas the second college edition leaves unspecified the level of autonomy required of a robot. These definitional variations reflect actual differences in existing subtypes of robots.
Humanoid social robots are a special category of robots formally defined here as human-made autonomous entities that interact with humans in a humanlike way. This definition contains three essential components: ‘human-made autonomous entities’ (robotic), ‘interact with humans’ (social) and ‘in a humanlike way’ (humanoid). These three components are elaborated in turn as follows.
The robotic component: ‘human-made autonomous entities’

Humanoid social robots are technological artefacts made by humans. These artefacts can take either physical (e.g. machine) or digital (e.g. animation) form and are endowed with at least a minimum level of autonomy, or the ability to act on their own (thus remote-controlled robots deployed in teleoperation are excluded from this category). An autonomous entity is self-directed, and self-direction, in this case, is created through automation and simulation. An autonomous robot may or may not be environmentally situated (Suchman, 1987). On the one hand, a non-situated autonomous robot, such as an automated paint sprayer, repeats a given routine of operation by strictly following a set of pre-programmed instructions. On the other hand, a situated autonomous robot is able to adjust itself to variations in the environment. An anti-lock brake system, for example, responds differently to varying road conditions for the purposes of maintaining the balance of an automobile. However, a truly situated robot, according to some roboticists, must be able to improve its responses to a changing environment through learning (Brooks, 1999).
Depending on where they are deployed by the users, autonomous robots can be grouped into two major categories: mechanical robots that reside in physical space and software agents or bots that reside in cyberspace. For the sake of simplicity, unless otherwise specified, the generic label ‘robots’ will be used in this article to refer to both mechanical robots and software agents.
The social component: ‘interact with humans’

Not all robots are designed to interact with humans. Robots that are designed to interact with non-human objects are often called ‘industrial robots’ (Frude, 1984). Examples of autonomous industrial robots include automated packaging devices, paint sprayers and waste cleaners. These robots are deployed to carry out repetitive and/or hazardous tasks in place of humans. Most robotic home appliances also belong to this category. Air conditioners, washing machines, automatic cookers, self-directed vacuum cleaners and lawnmowers are all robotic entities that work autonomously for humans to make home a more comfortable place in which to live.
Social robots are autonomous entities designed specifically to interact with humans (Breazeal, 2001). However, not all robot–human relationships are social. For example, interactions between a robotic wheelchair and its user, a hearing aid and its wearer, or an automobile and its driver are prosthetic rather than social. A large part of what is known as assistive technologies deals with the ergonomic aspect of such human–machine interactions (Gill, 1996). To be social is to be communicative. A social robot orients itself to the mind of an individual and acts upon the individual for purposes of eliciting certain behavior and emotion; similarly, its human partner tends to believe that the robot has a mind and seeks to interpret the meaning of its action (Nass and Steuer, 1993). Social robots interact with human individuals both verbally and non-verbally. Automated telephone response systems, for example, answer callers’ inquiries in voice; online ‘chatter bots’ chitchat with their human counterparts in text; and robotic dolls and pets win the hearts of children with smiles and hugs.
The humanoid component: ‘in a humanlike way’

Just as not all robots are social robots, not all social robots are humanoid robots. Mechanical dogs or digital cats that play with humans are no doubt social robots, but they are not necessarily humanoids. To be humanlike, a robot must show ‘an uncanny ability to simulate human behavior’ (Wiener, 1950: 1), particularly the ability to use natural human language. Linguistic communication through symbol manipulation has long been recognized as a distinctive human faculty. Although it is arguable whether robots can actually understand human language (Searle, 1990) or whether they should be considered intelligent (Dreyfus, 1992), it is undeniable that robots can now be programmed to converse with humans. Recent advances in the technologies of speech recognition and synthesis, continuous dictation and text-to-speech conversion have made it possible for robotic entities to communicate with humans in natural human language (Allen et al., 1987; Lee, 1989).
Another important humanoid attribute is resemblance to human morphology, which can take either physical or digital form depending on where a robot resides. A static human appearance proves to be less effective than a dynamic one that responds to the varying situations of social interaction. For this reason, efforts have been made to equip embodied humanoid robots with the capabilities of communicating with humans using gaze direction, gesture and facial expression (Cassell et al., 2000).
It must be pointed out that certain humanoid features can also be found in industrial robots. For example, there are ‘washy talky’ machines that speak with a ‘warm female voice’ (Ananova, 2002) or light switches on the wall that respond to human voice commands (Takahashi, 1998). The purpose of adding a humanoid touch to a non-social robot in this situation is to create a ‘human-centered interface’ (Norman, 1993) that is user-friendly.2 But, as such interfaces may be ‘part of the microwave, stove and sink’ (Dertouzos, 1998: 118), they are not considered humanoid social robots that are specially designed to interact with humans.
Humanoid social robots are not a modern concept. The idea of mechanical maidservants, automated music players and other humanlike machines, for example, can be found in the designs of Medieval and Renaissance clockmakers (Wosk, 1992). However, it is generally agreed that the contemporary project of humanoid social robots began with the famous Dartmouth Summer Conference on AI in 1956, where the agenda for building a machine with the general intelligence of an average human being was set. The first autonomous humanoid robot, ‘Shakey’, was built at the Stanford Research Institute in the late 1960s. For a while many researchers believed that humanlike artificial intelligence was within reach (Minsky, 1967). However, the AI project became stalled in the 1970s and 1980s after repeated failures to overcome the so-called ‘world knowledge’ problem (Lenat and Guha, 1990). In the early 1990s, the AI project took a new turn, adopting the strategy of developing ‘situated and embodied’ robots that are adaptive to the environment (Brooks, 1999). Cog (Brooks, 2002) and Kismet (Breazeal, 2001) became the first two research prototypes for such humanoid robots. In addition, in the 1990s a barrage of humanoid robotic pets, toys and dolls, such as Tiger’s Furby, Sony’s Aibo and Hasbro’s My Real Baby, hit the consumer market, all designed to trigger human emotions with believable social interaction. Parallel to the development of various mechanical social robots was the emergence of socially intelligent software agents that communicate with human users in natural human language. The first well-known chatter bot (Mauldin, 1994) was ‘Eliza’, a computer program capable of conversing with people in text by playing the role of a psychiatrist (Weizenbaum, 1966). The spread of the internet in the 1990s contributed to the rise of numerous online conversational agents, such as ‘Julia’ and ‘Alice’, which chat with people round the clock on topics ranging from politics to sex (Foner, 2000).
We are now living in a world cohabited by an ever-increasing number of humanoid social robots.
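The conversational mechanism behind Eliza-style chatter bots can be sketched in a few lines of Python. This is an illustration only, not Weizenbaum’s original script: the patterns and canned responses below are invented for the example, but the technique, matching the user’s utterance against keyword patterns and echoing fragments of it back inside a template, is the one the program made famous.

```python
import re

# Each rule pairs a regular-expression pattern with a response template
# that echoes part of the user's input, giving the appearance of an
# attentive psychiatrist without any understanding of the words.
RULES = [
    (re.compile(r"\bI am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.I), "Why is your {0} important to you?"),
]
DEFAULT = "Please go on."

def reply(utterance: str) -> str:
    """Return the first matching canned response, or a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(reply("I am worried about my exams"))
# Why do you say you are worried about my exams?
```

The ‘brittleness’ discussed later in this article is already visible here: any utterance that matches no pattern falls through to the same stock phrase.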
According to the roles that they perform, humanoid social robots can be divided into two major types: utilitarian humanoid social robots and affective humanoid social robots (Breazeal, 2000). Utilitarian humanoid social robots are designed to interact with humans for instrumental purposes. Currently, such robots are being used widely in commercial sectors, replacing human attendants in serving human customers. Besides ATMs, vending machines and automated telephone answering systems, which are already commonplace, many other types of utilitarian humanoid social robots are deployed to stand in for humans as telephone operators, helpdesk receptionists, salespersons, private tutors, travel agents, hospital food servers and museum tour guides (Dertouzos, 2001).3 These autonomous robots interact with humans either as disembodied entities or as embodied anthropomorphic figures capable of verbal (text- or voice-based) as well as non-verbal expression (Cassell et al., 2000).
Affective humanoid social robots are designed to interact with humans on an emotional level. These robots are used mostly in two kinds of environment: online chat and private homes.
The anonymous settings of online chat have become the natural habitat for chatter bots that are disguised as humans by design. Whereas early programs such as Eliza were ‘brittle’ enough to be seen through easily, new generations of conversational software agents can now pass the Turing Test in a number of restricted domains (Goodwins, 2001). Embodied conversational agents capable of non-verbal expression are also being developed to enhance their ‘friendship relations’ with humans (Stronks, 2002). In private households, affective humanoid social robots act as pets (e.g. Furby, Aibo) and dolls (e.g. Robota, My Real Baby) that cohabit with people on a daily basis, making humans bond with them (Woodall, 2001). A full-sized adult female, ‘Valerie’, is currently under development in a research lab. When fully built, it is said that she will be able to ‘speak several languages’, ‘remember previous conversations with you’, ‘have a sense of touch all over like people do’ and ‘dress or undress herself’, in addition to being able to perform simple household chores (Androidworld, 2002).
Unlike utilitarian humanoid social robots that are employed by human users for work, affective humanoid social robots are kept by their human owners as ‘pleasant characters’ (Frude, 1984) for play and companionship.
In addition, an integrated type of humanoid social robot (e.g. R100), which combines both utilitarian and affective functions, has been created.
These robots are able to play different roles in human social life. They can serve, for example, as assistants and companions to the elderly (Johnstone, 1999; Kerr, 2002) and as therapists and playmates for children with autism (Dautenhahn and Billard, 2002).
HUMAN–HUMANOID INTERACTION

Traditionally it is held that there is an inviolable boundary between human–human interaction and human–machine interaction. Human interactions are symbolic, communicative and meaningful; interactions between humans and machines, on the other hand, are devoid of meaning (Searle, 1969). For this reason, some argue that the concept ‘social interaction’ should be applied only to engagements between human beings and is inappropriate when used to describe human–machine contact (Liestol, 2003). The emergence of human–humanoid interaction, however, seems to pose a challenge to this simplistic dichotomy.
Humanoid social robots are machines designed to emulate humans. As Zylinska points out: ‘What we are witnessing here is a process in which the human is extrapolated onto an external agent which is supposed to be both a replacement for “man” and “his” perfect simulation’ (2002: 2–3). Interactions with these ‘extrapolated humans’ resemble human–human interactions in that they are linguistically based, relational and normative. These three distinctive features of human–humanoid interaction are illustrated vividly in the following excerpt from a dialogue between a digital humanoid real estate agent, ‘Rea’, and a human customer, ‘Lee’ (Cassell et al., 2000: 47).
[Lee approaches the projection screen. Rea is currently turned side on and is gazing idly about. As Lee moves within range of the two cameras mounted above the screen, Rea turns to face him.]

Rea > Hello. How can I help you?
Lee > I’m looking to buy a place near MIT.
Rea > [Rea nods, indicating that she is following.] I have a house to show. [A picture of a house appears on-screen behind Rea, who blinks and looks at the house and then at Lee.]
Rea > It’s in Somerville.
Lee > Tell me about it.
Rea > [Rea looks up and away while she plans what to say.] It’s big. [Rea makes an expansive gesture with her hands. Lee brings his hands up as if he is about to speak and so Rea does not continue; instead, she is waiting for him to speak.]
Lee > Tell me more about it.
Rea > Sure thing. It has a nice garden. [Rea sketches a curve gesture with her hands indicating that the garden extends along two sides of the house.]
Lee > How far is it?
Rea > It is five minutes to the Porter Square T station. [Rea makes it clear that it is five minutes on foot from the T station by making a walking gesture with her fingers.]
Lee > How big is the house?
Rea > It has four bedrooms, three bathrooms . . . [Lee interrupts Rea.]
Lee > Wait. Tell me, where is the master bedroom?
Rea > I’m sorry, I didn’t catch that. What did you ask me?
Lee > Where is the master bedroom?
Rea > It’s upstairs. [Rea points up.]
This episode of human–humanoid encounter bears a striking resemblance to human–human communication in the following three aspects. First, the interaction between Rea and Lee is linguistically based. From orality to literacy, humans have developed the skills of communicating in both spoken and written language (Ong, 1982). Like humans, humanoid social robots are capable of communicating in natural human language. In the above example, Rea and Lee are engaged in an oral dialogue. When humans interact online with chatter bots, however, verbal exchanges are conducted in text. Similar to face-to-face human–human interaction, ‘face-to-face’ human–humanoid interaction may involve non-verbal expression. Being digitally embodied, Rea acts as a person, nodding and gesturing while talking on the screen. Physically embodied humanoid social robots are also able to use body language, as demonstrated by Kismet (Breazeal, 2001).
Relationality is another feature of human–humanoid interaction which is also characteristic of human–human interaction. Human–human interactions are relational in the sense that interlocutors identify themselves and locate others within a complex web of interpersonal relations, such as the relationships between mother and son, father and daughter, husband and wife, neighbors and friends. A generic form of relational identification is the use of personal pronouns such as ‘I’, ‘you’, ‘we’ and ‘they’. Rea, for example, uses the pronoun ‘I’ to refer to itself and the pronoun ‘you’ to refer to Lee. Baby ‘Hal’, an affective humanoid social robot, calls its human caretaker ‘mummy’ (Laxon, 2001). Instead of giving telegraphic answers such as ‘unauthorized request’, many automated service robots are now programmed to reply to human requests in a ‘you-and-me-against-the-world’ tone of voice, such as ‘I’m sorry, my supervisor doesn’t allow me to make that transaction’ (Hapgood, 2001). Research has shown that the use of anthropomorphic pronouns makes human partners more likely to treat humanoid social robots as real people (Brennan and Ohaeri, 1994). ‘I tried to call Hal “it” at the beginning’, Hal’s caretaker reports, ‘but as our communication deepened, I found it harder. Yes, I’m attached to him. You just can’t help it’ (Laxon, 2001).
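The design contrast between telegraphic and relational phrasing can be sketched as follows. The error codes and wordings below are invented for the illustration; only the ‘I’m sorry, my supervisor doesn’t allow me to make that transaction’ phrasing is taken from the example cited above.

```python
# The same refusal rendered in two registers: a telegraphic machine
# response versus an anthropomorphic, first-person response.
TELEGRAPHIC = {
    "TXN_DENIED": "unauthorized request",
    "LIMIT_EXCEEDED": "limit exceeded",
}
ANTHROPOMORPHIC = {
    "TXN_DENIED": "I'm sorry, my supervisor doesn't allow me to make that transaction.",
    "LIMIT_EXCEEDED": "I wish I could help, but you've reached your daily limit.",
}

def respond(code: str, humanlike: bool = True) -> str:
    """Render an internal error code in one of the two registers."""
    table = ANTHROPOMORPHIC if humanlike else TELEGRAPHIC
    if humanlike:
        return table.get(code, "I'm not sure what happened.")
    return table.get(code, "unknown error")
```

The underlying state is identical in both cases; only the first-person surface form invites the human partner to treat the machine as a social counterpart.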
Finally, human–humanoid interaction is normative. ‘Normality’ in this context refers to social conventions that regulate human interaction, such as politeness norms, turn-taking rules and other ethical, moral and legal constraints. Many of these regulatory principles have been shown to be applicable to human–humanoid interaction. In the dialogue cited above, Rea was not only knowledgeable but also courteous and polite. She apologizes for having to ask Lee to repeat and pauses when Lee cuts in to ask a question. Studies have shown that the personalities, such as dominant or submissive, exhibited by interactive machines have a significant effect on human users (Nass et al., 1995). As in human–human interaction, people prefer to interact with machines that exhibit a personality similar to theirs.
Furthermore, people subconsciously apply normative criteria, such as gender stereotypes, to the autonomous humanoid entities with which they interact (Green, 1993).
Language use, relationality and normality generate a communicative context in which humanoid social robots engage in meaningful exchanges with humans. Never before has it been possible for machines to interact with humans in the way that humans do. However, human–humanoid interaction is not exactly the same as human–human interaction. Perhaps the most important difference between the two lies in what Collins (1990) calls the ‘interpretative asymmetry’ believed to be inherent in human–machine interaction. As machines act according to programmed instructions rather than true understanding (Dreyfus, 1992), human–humanoid interaction is an unavoidably asymmetrical process in which the interaction is only meaningful to the human. For example, an affective humanoid social robot can be programmed to utter ‘I love you’ to humans, but these affectionate words, which may move a human to tears, have no meaning to the robot.
The question, then, is why humans would ever take humanoids seriously. There are at least two plausible explanations. The first is that people may regard the responses of humanoids as representing the intentions of human programmers (Biocca, 1992), so from the users’ standpoint, whenever they interact with a humanoid they are interacting with those who programmed the robot. The second is that certain behavioral cues tend to elicit human responses. Examples of such cues include humanlike voices, written signs and human faces. It has been shown that even simple representations of these cues are ‘sufficient to evoke social responses’ from humans (Nass and Steuer, 1994: 556). For these and perhaps other reasons, human users end up treating humanoid social robots as humans even though they are aware of the asymmetrical nature of such interactions.
Another important difference between human–humanoid interaction and human–human interaction has to do with the restrictiveness of the domain of communication. ‘Domain’ is defined here as the knowledge base in relation to a given subject matter. Human communication is domain-related, but not domain-restricted. It is domain-related because human communication is always about something or some subject matter; but it is not domain-restricted because communication is also a vehicle by which humans acquire new knowledge. Learning is a constant process of consolidating the current domains of knowledge and developing new ones.
In contrast, human–humanoid interaction is both domain-related and domain-restricted. Unlike humans, who are able to expand their knowledge base through exploring unknown territories, humanoids are entirely dependent on the existing stock of knowledge with which they are provided (Forsyth, 1984). Within the programmed domain of knowledge (e.g. chess games), humanoid social robots can be highly ‘intelligent’, but they become completely ‘dumb’ the moment that the interaction moves beyond the given domain. Such ‘brittleness’ of the artificial intelligence of robots makes human–humanoid interaction helplessly domain-confined.
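Domain restriction can be made concrete with a toy sketch. The ‘knowledge base’ below is invented, loosely modelled on the Rea dialogue quoted earlier: the agent answers fluently so long as a query touches one of its programmed topics, and falls back to a stock apology the moment the conversation steps outside that stock of knowledge.

```python
# A hard-coded knowledge base: the agent's entire world.
KNOWLEDGE = {
    "bedrooms": "It has four bedrooms.",
    "garden": "It has a nice garden.",
    "station": "It is five minutes to the T station.",
}

def answer(query: str) -> str:
    """Reply from the programmed domain, or admit incomprehension."""
    for topic, response in KNOWLEDGE.items():
        if topic in query.lower():
            return response
    # Outside the domain the agent has nothing to fall back on.
    return "I'm sorry, I didn't catch that."

print(answer("How many bedrooms are there?"))   # It has four bedrooms.
print(answer("What do you think of the news?"))  # I'm sorry, I didn't catch that.
```

Unlike a human interlocutor, the agent cannot enlarge `KNOWLEDGE` through the conversation itself; every competent answer was placed there in advance by a programmer.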
A further factor is the indexical nature of human communication.
‘Indexical’ refers to the reliance of an expression on the situation of its usefor significance (Garfinkel, 1967). A human expression can have both literaland indexical meanings. The literal meaning can be looked up in adictionary, but the indexical meaning needs to be ascertained from thesituation in which the expression is made. An expression is synchronicallyembedded in a situation when it takes its meaning from the circumstance inwhich the expression is made (e.g. ‘That’s a nice one’; Suchman, 1987: 59).
An expression is diachronically embedded in a situation when it takes its meaning from a prior circumstance to which the current expression makes reference (e.g. 'Dana succeeded in putting a penny in a parking meter today without being picked up'; Garfinkel, 1967: 25). Many human expressions are both synchronically and diachronically embedded, which means that understanding them requires knowledge of both current and past circumstances. The situatedness of human interlocutors in a common lifeworld makes indexical human communication possible. However, as humanoid social robots are not socially situated with human users, human–humanoid interaction is essentially non-indexical. It remains to be seen whether the development of situated humanoid social robots (Brooks, 2002) will eventually change the non-indexical nature of human–humanoid interaction.
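The contrast between literal and indexical meaning can be sketched in miniature: a literal meaning is a context-free lookup, while an indexical meaning must be resolved against the situation of utterance. The lexicon and the 'situation' records below are invented purely for illustration.

```python
# A toy contrast between literal and indexical meaning (after Garfinkel, 1967,
# and Suchman, 1987). The lexicon and situation records are invented here.
LEXICON = {"penny": "a one-cent coin", "meter": "a coin-operated parking device"}

def literal_meaning(word):
    # Literal meaning: a context-free dictionary lookup.
    return LEXICON.get(word)

def indexical_meaning(expression, situation):
    # Indexical meaning: resolve the demonstrative 'one' against whatever
    # object is salient in the situation where the expression is uttered.
    referent = situation.get("salient_object")
    if referent and " one" in expression:
        return expression.replace(" one", " " + referent)
    return expression

# The same utterance resolves differently in different situations:
print(indexical_meaning("That's a nice one", {"salient_object": "photograph"}))
print(indexical_meaning("That's a nice one", {"salient_object": "sweater"}))
```

A robot that lacks access to the shared situation has only the first function available, which is why its interaction remains non-indexical.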
Human–humanoid interaction, therefore, lies on the borderline between human–human interaction and human–machine interaction.
Humanoid social robots, which may be best described as 'humachines' (Poster, 2002) that combine the characteristics of both machines and humans, extend the realm of communication to the machine world by playing the role of humans. As a special medium of communication, humanoid social robots are designed to interact with, for and as humans. To interact with humans, humanoid social robots are equipped with 'artificial intelligence' so that they can understand and respond to humans. To interact for humans, they take the role of human surrogates in communicating with humans.5 Finally, to interact as humans, they emulate human appearance as well as actions. In the sense that humanoid social robots are technological 'extrapolations' of human individuals, human–humanoid interaction is the 'prosthetic extension' of human–human interaction. From this perspective, problems such as interpretative asymmetry, domain restriction and non-indexicality that are currently associated with human–humanoid interaction can be regarded as the inconvenience of prosthesis that needs to be tolerated. It can be argued that, as robotic technologies improve in the future, the gap between human–humanoid interaction and human–human interaction will be narrowed further.
SOCIOLOGICAL IMPLICATIONS

A central tenet of this article has been that while humanoid social robots are not industrial machines, neither are they fancy gadgets that we play with for fun; rather, they are a special medium of communication that affects the way we see ourselves and relate to others. Autonomous, interactive and humanlike, humanoid social robots are media technologies that 'extend new possibilities for expression, communication and interaction in everyday life' (Mayer, 1999: 328).
For a long time in human history, corporeal co-presence of the interlocutors in the same physical locale has been the only condition under which real-time human contact can take place. Face-to-face and body-to-body, human individuals communicate with one another in close physical proximity using both verbal and non-verbal expressions (Goffman, 1963).
The advent of the internet has made prevalent a second condition of instantaneous human interaction, namely, remote co-presence or 'teleco-presence' (Zhao, 2003). Under this condition, physically separated human individuals are able to stay in 'electronic proximity' (Dertouzos, 1998), interacting with one another synchronously through the mediation of telecommunications devices such as telephones, two-way radios or networked computers. The emergence of humanoid social robots gives rise to a third condition of instantaneous human interaction: virtual co-presence.6 In corporeal co-presence and teleco-presence, both human interlocutors must be present simultaneously at the site of interaction, either locally or remotely. In virtual co-presence, however, such co-location is not necessary, as one side of the interaction is represented by a humanoid social robot that interacts with its human counterpart on behalf of someone who is absent from the site. An automated telephone response system, for example, fields enquiries from human callers in place of a human receptionist. Technically, it is also possible for humanlike communication to take place in 'hypervirtual co-presence', where the human interlocutors are all represented by humanoid social robots (Zhao, 2003).
Virtual co-presence must not be confused with what Rheingold (1993) calls 'virtual community'. Virtual communities, such as online networks or web-based groups, are human associations formed through mediated contacts in teleco-presence. In the sense that all the participating parties are real human individuals, albeit not physically located in the same place, the resulting associational ties are real rather than virtual. Virtual co-presence is a situation of human versus human surrogate, where human individuals meet humanoid social robots. This virtual social condition allows a human individual to remain in 'interactive solitude' (Carlsson, 1995) and be 'a loner, but never alone' (Turkle, 1995: 307).
Virtual co-presence is part of the so-called 'post-human condition' (Pepperell, 1995), which is characterized by the union of 'meat and metal' and the co-mingling of humans with humanoids. The possibility of engaging in humanlike interaction with humanoid social robots fundamentally alters the field of human communication, the process of meaning production and the form of social action. In a world where humans are the only communicative agents, 'social' is synonymous with human to human and 'social relationship' refers to nothing other than dealings with another human being. However, in a world where a child plays house with an interactive doll, a teenager chats online with a chatter bot and the elderly find companionship with a talking 'robopet', 'social' is no longer confined to the interhuman and 'social relationship' includes the bonding between humans and humanoids.
The rise of a synthetic social world where human individuals and humanoid social robots co-mingle calls for a new conceptualization of society. The traditional view of society as consisting only of human individuals needs to be revised. For one thing, the boundary between humans and human artefacts is no longer inviolable, due to the increasing technological prostheticization of human bodies (Stelarc, 2000). Technologies are becoming an integral part of the human condition. Furthermore, robotic replacement of human individuals in the processes of social interaction and communication creates a human–machine nexus that is indispensable to the operation of everyday life. Today's society comprises not only human individuals as delimited by their biological bodies, but also technological extensions of individuals, including their robotic surrogates.
Another way to conceptualize society is to redefine it in terms of communication. For example, Luhmann (1995) regards society as a system of communicative interaction. Human individuals and the technologies that they produce and use are seen not as something located within society but as belonging to the environment of society. It is the communication, discourse and interaction taking place between and among them that constitutes human society. According to this conception, 'individuals think, technologies function and society communicates' (Rasmussen, 2003: 459).
Inevitably, the rise of a synthetic social world will affect the formation of self in human individuals. As individuals come to know themselves by taking the attitudes of others toward them, others become the 'looking glass' (Cooley, 1902) in which individuals see themselves. Can human surrogates such as humanoid social robots also serve as a looking glass for human individuals? Wolfe has argued that the 'other must itself be a self before a self can communicate with it' (1993: 60) and, as humanoid social robots have no self, they cannot affect the self of a human individual. However, recent studies of the interactions between children and robotic dolls have shown that humanoid social robots can be equipped with a programmed self that looks so believable that the self-view of those who interact with them will be affected (Bumby and Dautenhahn, 2000). When humanoid social robots are made virtually indistinguishable from humans at the interface level, they produce what is known as the 'Eliza Effect' (Bates, 1992), the phenomenon where individuals think that they are communicating with a real person when in fact they are interacting with a human surrogate. In a synthetic social world, therefore, the way that people view and feel about themselves is influenced not only by other human individuals but also by the humanoid social robots with which they communicate and interact.
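The 'Eliza Effect' takes its name from Weizenbaum's (1966) ELIZA program, whose conversational illusion rested on simple pattern matching. The sketch below is a minimal ELIZA-style responder; the rules are invented for illustration and are far simpler than Weizenbaum's original script.

```python
import re

# A minimal ELIZA-style responder illustrating the mechanism behind the
# 'Eliza Effect': surface pattern matching, with no understanding at all.
# These rules are illustrative, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Your {0} seems important to you."),
]
DEFAULT = "Please go on."  # content-free fallback when nothing matches

def respond(utterance):
    # Return the first rule whose pattern matches, echoing captured text.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am lonely"))       # -> Why do you say you are lonely?
print(respond("It rained today."))  # -> Please go on.
```

That such a shallow mechanism can nonetheless feel like conversation is precisely why the believable 'programmed self' described above can influence a user's self-view.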
In sum, this represents a fundamental shift in our culture, one that is not nearly as prominent and visible as that represented by the internet and the world wide web, but with far greater potential for altering the way we learn and think. (Aarseth, 2003: 432)

Although humanoid robotic technologies may look primitive at this stage, there is no doubt that they will only become more technically advanced and socially sophisticated as time goes by. Now is the time to pay attention to this ongoing robotic revolution, to understand the humanoid social robot as a new medium of communication and to study its impact on individuals, society and culture.
Acknowledgements

I would like to thank the anonymous reviewers for their helpful comments. I am also grateful to Randall Collins for his valuable suggestions on an earlier draft of this article.
Notes

1 Similar concepts suggested by other scholars include 'socially intelligent autonomous robots' (Breazeal, 2001) and 'socially intelligent agents' (Dautenhahn, 1998).
2 Not all designers of interactive machines agree that humanoid interfaces are necessarily desirable. It has been argued, for example, that humanoid features can get in the way of designing efficient machines. Thus designers have been asked to focus their energies on the real job rather than on an interface (Norman, 1990).
3 To catch a glimpse of the automated telephone response systems that are currently in testing, it is possible to try some of MIT Media Lab's human–computer dialogue systems by dialing Jupiter at 1–888–753-TALK for weather information and Pegasus at 1–877–527–8255 for airline information.
4 With thanks to MIT Press for kind permission to use this material.
5 There are two basic types of proxy relationships: real proxy relationships and non-real proxy relationships. In a real proxy relationship, the humanoid social robot acts on behalf of a real person or institution. A household answering machine, for example, interacts with callers on behalf of its owner and an ATM interacts with customers for a company. In both instances the interaction is legally binding because it implicates a real human–human engagement. In a non-real proxy relationship, the humanoid social robot does not act on behalf of any real person or institution. Household robotic dolls and pets, for example, interact with people as personal playmates rather than as someone else's surrogates. Therefore, this type of interaction is not legally binding.
6 Virtual co-presence needs to be differentiated from virtual presence, also known as 'parasocial' presence (Horton and Wohl, 1979), in which one encounters the virtual presence of other people through mass media such as books, radio and television. Unlike virtual co-presence, there is no two-way interaction in a parasocial situation.
References

Aarseth, E. (2003) 'We All Want to Change the World: The Ideology of Innovation in Digital Media', in G. Liestol, A. Morrison and T. Rasmussen (eds) Digital Media Revisited: Theoretical and Conceptual Innovation in Digital Domains, pp. 415–39. Cambridge, MA: MIT Press.
Allen, J., M.S. Hunnicutt and D.H. Klatt (1987) From Text to Speech: The MITalk System. New York: Cambridge University Press.
American Heritage Dictionary (1991) Second College Edition. Boston, MA: Houghton Mifflin.
American Heritage Dictionary for Windows (1994) Cambridge, MA: SoftKey International.
Ananova (2002) 'World's "First" Talking Washing Machine Unveiled', URL (consulted December 2004):
Androidworld (2002) 'Valerie, a Domestic Android', URL (consulted December 2004):
Bainbridge, W.S., E.E. Brent, K.M. Carley, D.R. Heise, M.W. Macy, B. Markovsky and J. Skvoretz (1994) 'Artificial Social Intelligence', Annual Review of Sociology 20: 407–36.
Bates, J. (1992) 'Virtual Reality, Art and Entertainment', Presence: Teleoperators and Virtual Environments.
Bell, D. and B.M. Kennedy (eds) (2000) The Cybercultures Reader. New York: Routledge.
Biocca, F. (1992) ‘Communication Within Virtual Reality: Creating a Space for Research’, Journal of Communication 42(4): 5–22.
Breazeal, C.L. (2000) 'Robot in Society: Friend or Appliance', URL (consulted December 2004):
Breazeal, C.L. (2001) Designing Sociable Robots. Cambridge, MA: MIT Press.
Brennan, S.E. and J.O. Ohaeri (1994) 'Effects of Message Style on Users' Attributions Toward Agents', in Celebrating Interdependence: Conference Companion for the 1994 ACM/SIGCHI Conference on Human Factors in Computing Systems, pp. 281–2. New York: ACM Press.
Brooks, R.A. (1999) Cambrian Intelligence: The Early History of the New AI. Cambridge, MA: MIT Press.
Brooks, R.A. (2002) Flesh and Machines: How Robots Will Change Us. New York: Pantheon.
Bumby, K.E. and K. Dautenhahn (2000) 'Investigating Children's Attitudes Towards Robots: A Case Study', URL (consulted December 2004):
Carley, K. (1996) 'Artificial Intelligence Within Sociology', Sociological Methods & Research.
Carlsson, C. (1995) 'The Shape of Truth to Come: New Media and Knowledge', in J. Brook and I.A. Boal (eds) Resisting the Virtual Life: The Culture and Politics of Information, pp. 235–44. San Francisco, CA: City Lights.
Cassell, J., T. Bickmore, L. Campbell, H. Vilhjalmsson and H. Yan (2000) 'Human Conversation as a System Framework: Designing Embodied Conversational Agents', in J. Cassell, J. Sullivan, S. Prevost and E. Churchill (eds) Embodied Conversational Agents, pp. 29–63. Cambridge, MA: MIT Press.
Collins, H.M. (1990) Artificial Experts: Social Knowledge and Intelligent Machines. Cambridge, MA: MIT Press.
Collins, R. (1992) Sociological Insight. Oxford: Oxford University Press.
Cooley, C.H. (1902) Human Nature and the Social Order. New York: Scribner’s.
Dautenhahn, K. (1998) 'The Art of Designing Socially Intelligent Agents: Science, Fiction and the Human in the Loop', Applied Artificial Intelligence Journal, Special Issue on Socially Intelligent Agents 12(7–8): 573–617.
Dautenhahn, K. and A. Billard (2002) 'Games Children with Autism Can Play with Robota, a Humanoid Robotic Doll', URL (consulted December 2004): ~comqkd/DautenhahnBillardweb.pdf
Dertouzos, M.L. (1998) What Will Be: How the New World of Information Will Change Our Lives. New York: HarperCollins.
Dertouzos, M.L. (2001) The Unfinished Revolution: Human-Centered Computers and What They Can Do for Us. New York: HarperCollins.
DiMaggio, P., E. Hargittai, W.R. Neuman and J.P. Robinson (2001) ‘Social Implications of the Internet’, Annual Review of Sociology 27: 307–36.
Dreyfus, H.L. (1992) What Computers Still Can't Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press.
Engelbart, D.C. (1963) 'A Conceptual Framework for the Augmentation of Man's Intellect', in P. Howerton (ed.) The Augmentation of Man's Intellect by Machine, Vistas in Information Handling, Vol. 1, pp. 1–27. Washington, DC: Spartan Books.
Foner, L.N. (2000) ‘Are We Having Fun Yet? Using Social Agents in Social Domains’, in K. Dautenhahn (ed.) Human Cognition and Social Agent Technology, pp. 323–48.
Amsterdam: John Benjamins.
Forsyth, R. (1984) ‘The Architecture of Expert Systems’, in R. Forsyth (ed.) Expert Systems: Principles and Case Studies, pp. 9–17. New York: Chapman and Hall.
Frude, N. (1984) The Robot Heritage. London: Century.
Garfinkel, H. (1967) Studies in Ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall.
Gill, K.S. (ed.) (1996) Human Machine Symbiosis: the Foundations of Human-Centered Systems Design. London: Springer.
Goffman, E. (1963) Behavior in Public Places. New York: Free Press.
Goodwins, R. (2001) 'ALICE Victorious in AI Challenge', ZDNet, 15 October, URL (consulted December 2004): –3513_22–53090.html
Green, N. (1993) 'Can Computers Have Genders?', paper presented at the annual conference of the International Communication Association, Washington, DC, May.
Hammond, R. (2000) 'Robots for Kids: The Landscape', in A. Druin and J. Hendler (eds) Robots for Kids: Exploring New Technologies for Learning, pp. 338–52. San Francisco, CA: Morgan Kaufmann.
Hapgood, F. (2001) 'Look Who's Talking', URL (consulted December 2004): http://
Hiltz, S.R. and M. Turoff (1978) The Network Nation: Human Communication via Computer. Reading, MA: Addison-Wesley.
Horton, D. and R.R. Wohl (1979) 'Mass Communication and Parasocial Interaction: Observation on Intimacy at a Distance', in G. Gumpert and R. Cathcart (eds) Inter/Media: Interpersonal Communication in a Media World, pp. 32–55. New York: Oxford University Press.
Johnstone, B. (1999) ‘Japan’s Friendly Robots’, Technology Review 102(3): 64–9.
Kerr, G. (2002) 'Robots Bring Dubious Cheer to the Lonely Elderly', Asahi Shimbun.
Laxon, A. (2001) 'Making a Machine More Like a Man', New Zealand Herald, 22 October, URL (accessed 10 April 2006):
Lee, K.F. (1989) Automatic Speech Recognition: The Development of the SPHINX System. Boston, MA: Kluwer.
Lenat, D.B. and R.V. Guha (1990) Building Large Knowledge-Based Systems. Reading, MA: Addison-Wesley.
Liestol, G. (2003) '"Gameplay": From Synthesis to Analysis (and Vice Versa)', in G. Liestol, A. Morrison and T. Rasmussen (eds) Digital Media Revisited: Theoretical and Conceptual Innovation in Digital Domains, pp. 389–413. Cambridge, MA: MIT Press.
Luhmann, N. (1995) Social Systems. Stanford, CA: Stanford University Press.
Mauldin, M. (1994) 'ChatterBots, TinyMUDs and the Turing Test: Entering the Loebner Prize Competition', in Proceedings of the Twelfth National Conference on Artificial Intelligence, Vol. 1, pp. 16–21. Menlo Park, CA: AAAI Press.
Mayer, P.A. (1999) 'Computer Media Studies: An Emergent Field', in P.A. Mayer (ed.) Computer Media and Communication: A Reader, pp. 320–36. Oxford: Oxford University Press.
Menzel, P. and F. D'Aluisio (2000) Robo Sapiens: Evolution of a New Species. Cambridge, MA: MIT Press.
Minsky, M. (1967) Computation: Finite and Infinite Machines. Englewood Cliffs, NJ: Prentice-Hall.
Nass, C. and J. Steuer (1993) 'Anthropomorphism, Agency and Ethopoeia: Computers as Social Actors', Human Communication Research 19(4): 504–27.
Nass, C., J. Steuer, L. Henriksen and D.C. Dryer (1994) 'Machines, Social Attributions, and Ethopoeia: Performance Assessments of Computers Subsequent to "Self" or "Other" Evaluations', International Journal of Human–Computer Studies 40(3): 543–59.
Nass, C., Y. Moon, B.J. Fogg, B. Reeves and D.C. Dryer (1995) 'Can Computer Personalities Be Human Personalities?', International Journal of Human–Computer Studies 40(3): 223–39.
Nightline (2002) 'Those Crazy Robots', ABC Nightline, 19 August, URL (accessed 10 April 2006):
Norman, D.A. (1990) 'Why Interfaces Don't Work', in B. Laurel (ed.) The Art of Human–Computer Interface Design, pp. 209–19. Reading, MA: Addison-Wesley.
Norman, D.A. (1993) Things That Make Us Smart: Defending Human Attributes in the Age of the Machine. Reading, MA: Addison-Wesley.
O’Mahony, M. (2002) Cyborg: The Man-Machine. New York: Thames & Hudson.
Ong, W.J. (1982) Orality and Literacy: The Technologizing of the Word. London: Methuen.
Pepperell, R. (1995) The Post-Human Condition. Oxford: Intellect.
Poster, M. (2002) 'High-Tech Frankenstein, or Heidegger Meets Stelarc', in J. Zylinska (ed.) The Cyborg Experiments: The Extensions of the Body in the Media Age, pp. 15–32.
New York: Continuum.
Rasmussen, T. (2003) 'On Distributed Society: The Internet as a Guide to a Sociological Understanding of Communication', in G. Liestol, A. Morrison and T. Rasmussen (eds) Digital Media Revisited: Theoretical and Conceptual Innovation in Digital Domains, pp. 443–67. Cambridge, MA: MIT Press.
Restivo, S. (2001) 'Bringing Up and Booting Up: Social Theory and the Emergence of Sociologically Intelligent Robots', URL (consulted December 2004):
Rheingold, H. (1993) The Virtual Community: Homesteading on the Electronic Frontier. Reading, MA: Addison-Wesley.
Searle, J.R. (1969) Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press.
Searle, J.R. (1990) 'Is the Brain's Mind a Computer Program?', Scientific American 262(1).
Stelarc (2000) 'From Psycho-Body to Cyber-Systems: Images as Post-Human Entities', in D. Bell and B.M. Kennedy (eds) The Cybercultures Reader, pp. 560–76. New York: Routledge.
Stronks, J.J.S. (2002) 'Friendship Relations with Embodied Conversational Agents: Integrating Social Psychology in ECA Design', URL (consulted December 2004): ~heylen/Publicaties/chi-sat-2002.pdf
Suchman, L.A. (1987) Plans and Situated Actions: The Problem of Human–Machine Communication. Cambridge: Cambridge University Press.
Takahashi, D. (1998) 'Sensory Circuits' Cheap Chip Has People Talking', Wall Street Journal, 10 March, URL (accessed 10 April 2006):
Turkle, S. (1984) The Second Self: Computers and the Human Spirit. New York: Simon and Schuster.
Turkle, S. (1995) Life on the Screen: Identity in the Age of the Internet. New York: Simon and Schuster.
Weizenbaum, J. (1966) 'A Computer Program for the Study of Natural Language Communication Between Man and Machine', Communications of the Association for Computing Machinery 9(1): 36–45.
Wessells, M.G. (1990) Computer, Self and Society. Englewood Cliffs, NJ: Prentice Hall.
Wiener, N. (1950) The Human Use of Human Beings: Cybernetics and Society. Boston, MA: Houghton Mifflin.
Wolfe, A. (1991) 'Mind, Self, Society and Computer: Artificial Intelligence and the Sociology of Mind', American Journal of Sociology 96(5): 1073–96.
Wolfe, A. (1993) The Human Difference: Animals, Computers, and the Necessity of Social Science. Berkeley, CA: University of California Press.
Woodall, M. (2001) 'Taking a Critical Look at the Toys that Bond', Philadelphia Inquirer, 20 September, URL (accessed 10 April 2006):
Wosk, J. (1992) Breaking Frame: Technology and the Visual Arts in the Nineteenth Century. New Brunswick, NJ: Rutgers University Press.
Zhao, S. (2003) 'Toward a Taxonomy of Co-presence', Presence: Teleoperators and Virtual Environments.
Zylinska, J. (ed.) (2002) The Cyborg Experiments: The Extensions of the Body in the Media Age. New York: Continuum.

SHANYANG ZHAO is Associate Professor of Sociology at Temple University. He received his PhD in sociology from the University of Maryland at College Park. Prior to joining the Temple faculty in 1997, he worked as a senior research associate at the Institute for Social Research at the University of Michigan. His research interests include human co-presence and interaction, metatheory and mental health.
Address: Sociology Department, Temple University, Philadelphia, PA 19122, USA. [email: