Book Review: Sherry Turkle – Alone Together: Why We Expect More from Technology and Less from Each Other

Joel Kuennen

In Sherry Turkle’s most recent book, Alone Together: Why We Expect More from Technology and Less from Each Other, she relates the story of an artist with “cyborg dreams.”[1] In 2005, artist Pia Lindman visited the MIT Media Lab to learn from the robots that could learn. The project resulted in a striking series of drawings and a video that explore the slowly dissolving line between humans and the technology they have created.

She wanted to go even further, to, as Turkle puts it, “commune with the robot.” Initially, she wanted to find a way to hook herself up to the robot so that it would control her motions, her body. She wanted to understand the thought process of Artificial Intelligence, to plug in. Of course, this isn’t possible. We don’t live in the Matrix. Lindman was acting out common cultural expectations of AI, anticipating a likeness through intelligence. Next, Lindman tried to find a way to have the robot control her facial muscles through electrodes, much like the experiments of Daito Manabe, but ultimately decided against it; she was afraid of the knowledge that might be produced in herself. She envisioned an exchange of impetus, a loss of control over the self that would ultimately mean a loss of self, one that would radically reshape her own understanding of body and mind. Lindman’s desire to become one with technology ultimately recalls for Turkle her original reasons for writing this book: “the boundaries between people and things are shifting. What of these boundaries are worth maintaining?”

Turkle, founder and director of the MIT Initiative on Technology and Self, has been studying the interactions between people, artificial intelligence, and artificially social programs since their development in the late 1960s. Alone Together is written as the culmination of her years of research and is the third in her trio of books on human-computer relationships (preceded by The Second Self, 1984, and Life on the Screen, 1995). Written in the style of an anthropological summation of a lifetime of research, the book meanders through years of experiments and observation. The first section, entitled “The Robotic Moment: In Solitude, New Intimacies,” reflects on the effect of robotic companions, mostly related through studies concerning children and senior citizens.

Part I: Towards Embodied Intelligence

Some stories are entertaining, some heart-warming, but most are bizarre in an uncanny fashion. They start mundanely enough—adults who can no longer take care of their aging, depressed parents, children from homes whose working parents are perennially absent—and describe a life common to those who have grown up in the millennial generation. What remains bizarre is the remarkable capacity of people to allow the inanimate to animate the absent parts of the self. Turkle found that people are willing to give that extra little bit, to project emotions, even life and love, into the robot. This happened with young and old alike, even when they were fully conscious that what they were interacting with, what they were feeling emotions for and with, was a robot run by intricate coding. She observed that the ELIZA effect[2] is indeed at work with “sociable robots.”

JavaScript code segment from a web version of the ELIZA program, an early conversational program written in 1966 whose scripted responses users often treated as human.
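ELIZA’s technique was simple keyword spotting and pattern substitution. A minimal Python sketch of that technique (the rules below are invented for illustration; they are not from Weizenbaum’s original DOCTOR script or the web version pictured) might look like:

```python
import re

# A few illustrative rules in the spirit of ELIZA's therapist script.
# Each pair maps a keyword pattern to a canned reflective question.
RULES = [
    (r"\bI need (.+)", "Why do you need {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]

def respond(statement):
    """Return a canned query triggered by the first matching pattern."""
    for pattern, template in RULES:
        match = re.search(pattern, statement, re.IGNORECASE)
        if match:
            # Echo the user's own words back inside the template.
            return template.format(*match.groups())
    return "Please go on."  # default prompt when nothing matches
```

The program has no model of meaning at all; the empathy is supplied entirely by the user, which is precisely the ELIZA effect.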


Social robots are defined as a group of robots that are designed and coded to interact with and learn from people. Well-known sociable robots range in complexity from commercial robots like Tamagotchis, Furbies, and PARO to research robots such as Kismet and DOMO. Often, it is easy to see why interaction with these mechanical mirrors can produce authentic feelings in people. Emotional projection has led to a field called “affective computing.” Psychologists researching in this field have concluded that “when we are confronted with an entity that [behaves in humanlike ways, such as using language and responding based on prior inputs,] our brains’ default response is to unconsciously treat the entity as human.”[3]

Turkle voices concern about the loss of authentic relationships between people as a result of the technological screens that are transforming how and with whom we interact. Yet she seems at once to realize and to ignore the transformation of authenticity as a paradigmatic possibility. So far the trend in sociable robotics is to fill in the gaps. As one respondent to Turkle’s surveys put it: “robots [are] designed to complement humans.”[4] Turkle believes this shift first occurred with the invention of computer interfaces meant to “democratize” computer usage. The easier it is to interact with an evocative object or machine, the more units will be sold. As Turkle puts it, “we are pleased to lose track of the mechanism behind [sociable robots] and take them ‘at interface value.’” When an object can evoke complex, human empathy and commiseration, and when the object’s interface becomes more humanlike in appearance, the already thin line between human and machine blurs almost completely. Turkle believes this to be an effect of the classic psychoanalytic bane of modern humanity: narcissism. Humans are willing to project their worries, desires, fears, and confidences onto an object that is willing to reflect and express emotional intelligence in return. When asked to characterize the personality of the robot they had interacted with, most people described some version of themselves, either who they were (with corresponding needs) or who they wished they were.

Philosophically, two camps have developed around the Artificial Intelligence debate. Turkle states that MIT sociable robots follow a tradition of philosophy (Kant, Heidegger, Merleau-Ponty, Dreyfus, and Antonio Damasio) that considers “the mind and body as inseparable.”[5] The other camp retains the mind–body dualism developed by Descartes. The more popular theory amongst the AI community, however, is the phenomenological approach championed at MIT. If this is the case, then the only way a robot could attain artificial or emotional intelligence (Turkle equates these two forms of intelligence, as does affective computing researcher Rosalind Picard) is by becoming aware of its body. The robot would gather experience through its embodiment and perform emotion in response to corporeal stimuli. Thus, a unity of existence would be attained that, through mimicry, actually embodies human emotional intelligence.

Rodney Brooks, creator of the sociable robot Cog and founder of iRobot (makers of the pet/vacuum cleaner Roomba), penned a paper in 1990 that revolutionized the field of Artificial Intelligence. In it he proposed a system of artificial intelligence “based on the physical grounding hypothesis. It provides a different methodology for building intelligent systems than that pursued for the last thirty years. The traditional methodology bases its decomposition of intelligence on functional information processing modules whose combinations provide overall system behavior. The new methodology bases its decomposition of intelligence into individual behavior generating modules, whose coexistence and co-operation let more complex behaviors emerge.”[6] This was the first description of a way to physically embody a robot in a humanistic manner. Since then, Brooks has maintained his optimism. Turkle states, “[Brooks] talks about the merging of flesh and machine. There will be no robotic ‘them’ and human ‘us.’ We will either merge with robotic creatures, or in a long first step, we will become so close to them that we will integrate their powers into our sense of self.”[7] Brooks (via Turkle) is describing a moment of assimilation, of communion, where the robot becomes not the same as the human but indispensable to what it is to be human through embodiment.
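The “behavior generating modules” Brooks describes can be sketched as layered behaviors that run against the same sensor data, with higher-priority layers subsuming (suppressing) the layers below them. The behaviors and sensor format here are invented for illustration; this is a sketch of the idea, not code from Brooks’s robots:

```python
# Illustrative sketch of a subsumption-style control loop.

def avoid_obstacle(sensors):
    """Higher-priority layer: back away from anything too close."""
    if sensors.get("distance", 100) < 10:
        return "reverse"
    return None  # no opinion; defer to lower layers

def wander(sensors):
    """Lowest-priority layer: always proposes a default action."""
    return "move_forward"

# Layers ordered from highest to lowest priority; a higher layer's
# output subsumes anything a lower layer would have proposed.
LAYERS = [avoid_obstacle, wander]

def control_step(sensors):
    """One tick of the control loop: first decisive layer wins."""
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action
```

Intelligence here is not a central reasoning module but the emergent result of simple behaviors coexisting, which is what made the paper a break with the “functional information processing” tradition.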

Rodney Brooks' humanoid robot torso named Cog. Built as part of the Humanoid Robotics Group project at MIT. All work with Cog ceased in 2003 in favor of more advanced sociable robots.

Despite, or perhaps because of, the years of research Turkle has undertaken, she remains skeptical of this trend towards constructing robots with “emotions” and “intelligence”:

In practice, researchers in affective computing try to avoid the word “emotion.” Talking about emotional computers is always on track to raise strong objections. How would computers get these emotions? Affects sound more cognitive. Giving machines a bit of “affect” to make them easier to use sounds like common sense, more a user interface strategy than a philosophical position. But synonyms for “affective” include “emotional,” “feeling,” “intuitive,” and “noncognitive,” just to name a few. “Affect” loses these meanings when it becomes something computers have. The word “intelligence” underwent a similar reduction in meaning when we began to apply it to machines. Intelligence once denoted a dense, layered, complex attribute. It implied intuition and common sense. But when computers were declared to have it, intelligence started to denote something more one-dimensional, strictly cognitive.[8]

What Turkle seems to fear is the trend towards assimilation that sociable robots are undergoing. After all, it seems that for many, the first response to a sociable robot is to accept it as a companion capable of understanding what it means to be human. If this is the case, then what does that make humans? Perhaps, as Brooks suggests, it will expand what is to be considered human.[9]

A hidden hope lies at the intersection of robotics and everyday life. As Turkle concludes her first section, she tells the tale of a young couple considering a robot as a positive alternative to taking care of an ailing parent personally or by providing a human caretaker. After a lengthy discussion of the merits of human touch and robotic efficiency, they conclude that as long as the robot is taking over tasks that would otherwise be rote tasks for a human, it doesn’t matter. Let a machine be a machine and a human be human. But as Turkle points out, “gradually, more of life, even parts of life that involve our children and parents, seem machine ready.”[10]

Much of Turkle’s argument revolves around a loss of authenticity caused by alienating shifts in socialization practices due to technological advancements. Yet what exactly is lamented in the loss of face-to-face human relations? Is there anything to lament that is not already subsumed by a general nostalgia for what has been, in light of what is and will be?

In the beginning of part II of her book, “Networked: In Intimacy, New Solitudes,” Turkle announces, “we are all cyborgs now.”[11] Smart phones have revolutionized the way we think, remember, and communicate. We, as humans, have been enhanced. “Online, like MIT’s cyborgs, we feel enhanced. There is a parallel with the robotic moment of more. But in both cases, moments of more may leave us with lives of less.”[12] Turkle goes on to state that with sociable robots, we are alone but “receive the signals” that we are together, and that online, we are connected but “so lessened are our expectations of each other that we can feel utterly alone.”[13] Again, the specter of a lament for what has been shades her argument. Expectations of interpersonal relationships are shifting, and though it is easy to qualify the new as inferior to the old, does this mean anything? The millennial generation is one of transition, and transitional it will remain: so long as society and technology change, the lament of memory will be a symptom.

Turkle herself is the first to admit that she is a hesitant participant in web 2.0 platforms and technologies, and as such my confidence in her ability to understand the current shift in interpersonal expectations is diminished. Since the publication of Turkle’s first book in 1984, much has changed (understatement of the year?). We have witnessed the rise of personal computing and popular social technology that has transformed the definition of “friends.” Who knew making a noun into a verb would extend a network of friends around the world? Our visits may be brief, often under the Twitter paradigm of practical speech capped at 140 characters, but the interactions and information shared are distilled. There is an economy of language brought to bear that parallels a shift in the way we determine knowledge and what it means to know; let us call it knowledge on demand. Friendships and knowledge lie outside the self, waiting to be called upon, to be hailed into existence à la Althusser. For Turkle, this means that the self is more alone than ever before; yet how is the condition of the self described here different from what Althusser described concerning the ideological subject?

Turkle believes we are trading in lives once dominated by face-to-face social encounters, which developed a sense of community and belonging, for lives in a “simulation culture.” Yet just as the definition of “friend” has changed, so has “community.” Simulation culture expands what could once be considered the self through avatars and near-instant access to a giant catalogue of information and identities, but “it can be hard to return to anything less.”[14] It may be that, for many, anything less was never known in the first place. The needs of the self writ large as the social order will be accommodated in the new paradigm just as they were in the previous. The definition of humanity has already expanded to accommodate the vicissitudes of technology.

Perhaps the most telling section of Part II is the chapter entitled “Reduction and Betrayal.” Turkle, as she was in her book Life on the Screen, seems much more receptive to the ability of online avatars to present the opportunity to “workshop”[15] identities. Example after example is presented in which individuals create online personalities that are fantasy versions of themselves, but not idealized ones. When Facebook first began to make an impact, many reacted to the constructed identities on social platforms as idealized performances of the self. However, this has been shown not to be the case. People, it seems, genuinely want to expand their identities, to play, and this means making an avatar that is realistically human in that it is flawed. As a psychoanalyst, Turkle sees the opportunity for personal growth and understanding in online social networking. The world online allows a space free of the anxieties that come from embodiment (yet one that introduces other anxieties, as we shall see), but we do not become dis-embodied or re-embodied. Again, we become more, not better, faster, stronger, but more.

Google image search result: a “realist” avatar.

“Anxiety is part of the new connectivity.”[16] Just as technology brings the possibility of new affects, such as the ability to feel love for a robot, it affords new opportunities for anxiety. In conversations with adolescents, Turkle has noticed a high degree of anxiety and mistrust of social spaces online. Many students have been tricked and maligned by anonymous trolls, or even by friends from the real world who made fake accounts to test other friends’ loyalties. The online social environment is still developing its own set of mores, and perhaps this is the source of the infinite possibility contained in the internet. Turkle calls this the “triumphalist narrative in which every new technological affordance meets an opportunity, never a vulnerability, never an anxiety.” Turkle does an outstanding job of outlining many of the dangers posed by unquestioning acceptance of new technological practices and spaces. She could temper even futurist Ray Kurzweil’s enthusiasm. However, if one were to designate a stance for Turkle in the debate surrounding emergent technologies and humanity, it would be as a diplomat, with one foot in each camp. An advocate of self-conscious human and technology co-evolution, Turkle takes a calm and controlled stance even when beckoned by technology built to seduce and charm.

In 2000, the Nobel Prize-winning atmospheric chemist Paul Crutzen coined the term Anthropocene to describe the geological period we now inhabit. As the name suggests, it describes a world that is pervasively shaped by humanity. What Turkle warns against is the development of a Technologocene remote from the humanity she has known. When technology dictates where we are, it dictates who we are. Already, slippage between the liminalities of transformational realities happens with ease, and so the question remains, looming large: when are we no longer more-than-human? No longer human?


1 p. 133

2 The ELIZA effect describes the human tendency to respond empathetically to simulated human interaction. ELIZA was a program written to act as a psychoanalytic therapist; it would respond with a predetermined set of queries once it was prompted with a statement. The coding of a JavaScript version of the original ELIZA can be found by viewing the page source of this site:

3 Turkle, Sherry. “Seeing Through Computers: Education in a Culture of Simulation,” The American Prospect, 1997.

4 p.122

5 p. 134

6 Brooks, Rodney. “Elephants Don’t Play Chess,” Robotics and Autonomous Systems 6, 1990, pp. 3–15.

7 p. 141

8 p. 140-141

9 Transhumanism, a term that is strangely absent from this book, perhaps due to the innate optimism in the discourse surrounding it.

10 p. 146

11 p. 153

12 p. 154

13 Ibid.

14 p.209

15 p. 214

16 p. 242
