‘Sinister Fruitiness’: Neuromancer, Internet Sexuality and the Turing Test
[In the following essay, Stevens presents a thematic analysis of gender, technology, and individual identity in Neuromancer, noting Gibson's complex portrayal of artificial intelligence and sexuality.]
“YEAH. I SAW YOUR PROFILE, CASE. … YOU EVER WORK WITH THE DEAD?”1
The immediate subject of this essay is a set of anxious, confusing, and at times threatening questions posed by computer-mediated communication technology, popularly known as “cyberspace” and most immediately recognizable in the Internet. It also takes as its subject a related set of perhaps more abstract questions about the possibilities of intelligence within our computers. I have chosen to analyze in this essay three moments within the culture of Artificial Intelligence (AI) and the development of cyberspace, moments that might be classified as either “fiction” or “real life” (inasmuch as those categories hold steady under the optic of the narratives and textual spaces they share) but that symptomatize how we come to know “intelligence” by the anxiety, confusion, and threat underlying those questions. Some would argue that computers cannot be intelligent; they're not alive. But granted that computers aren't in any readily recognizable sense alive, might we imagine that they could be cognizant? conscious? sentient?
The metrics by which we measure intelligence are closer to our experience than we might think: we are already used to dealing with digital, intelligent life in the form of digital representations of other humans. A good number of us set our biological clocks by when we are able to log in and when we can read our e-mail. We are used to narrativizing our lives, ourselves, for our on-line friends, many of whom we've never met; it's a small step to asking how we know that our correspondents are cognizant, conscious, aware, “real.” How do we know that they're intelligent? How do we know, by what heuristics do we discover, that our correspondents are sentient? By what standard of measurement could we gauge, in this age of the “intelligent” machine, that our interlocutors are, in a word, “human”?
These questions, and the measures by which we answer the questions, are implicit, and at times explicit, in the work of William Gibson, from the early short story, “The Gernsback Continuum,” to the Cyberspace trilogy, Neuromancer, Count Zero, and Mona Lisa Overdrive. The plot of Neuromancer is roughly as follows: Case, a twenty-four year old former cyberspace cowboy, was a “thief, [who] worked for other, wealthier thieves, employers who provided the exotic software required to penetrate the bright walls of corporate systems, opening windows into rich fields of data” (p. 5). Case has been nerve-damaged by employers he stole from, rendering him unable to jack into cyberspace. He is recruited and healed by a man named Armitage, who wants him to steal a digital copy of Case's now-dead cowboy teacher, McCoy Pauley, and, with Pauley's help, break into the Tessier-Ashpools' corporate/family computer matrix. Armitage, who was formerly a military officer named Corto, is controlled by an AI named Wintermute. Wintermute wants to merge with his other half, an AI represented in the Tessier-Ashpools' systems as a young, beautiful boy, Neuromancer. Case, his bodyguard Molly, the sexual psychopath Peter Riviera, and Corto/Armitage eventually succeed in releasing Wintermute and Neuromancer from the hard-wired constraints that keep them from melding and evolving into a higher form of sentience and intelligence.
The characters of Gibson's Neuromancer, Wintermute, Neuromancer, McCoy Pauley, Case, the Finn, and from later books, Bobby, Colin, Angie, and eventually the matrix itself, when it comes to know itself, are all entities who live to one degree or another in the machine, in cyberspace, or to use Gibson's formulation, in the matrix of human knowledge “from the banks of every computer in the human system” (p. 51). They are all, to put into play another of his frequently-used words, “personalities.” Most are reproductions, digital representations (or manifestations) of someone who was already alive, already human, and in that sense already someone who thinks.
But could digital constructs be sentient? And if we2 are going to live in cyberspace—perhaps not Gibson's cyberspace, but a cyberspace—how do we know that the digital representations we encounter, the electronic text that scrolls across our screen, are human-produced and not “simply” program-produced; and if the output of programs, intelligent programs, not just ghosts in the machine? How do we know that Artificial Intelligences are, as the name designates, intelligent? And, to follow the question outward to the framing context that makes it intelligible, how do we know that we are personalities, that we are sentient? Characters such as McCoy Pauley, a “ROM personality matrix” who exists as a construct of a human within a computer in Gibson's most widely-read novel, Neuromancer, come to figure the uneasy perception that there is no boundary between ourselves and our encompassing computing environments; that we are, though sentient, “merely” machines. That they are, though machines, sentient.
For instance, Pauley, known by the characters in Gibson's “Sprawl” novels as the “Flatliner” for surviving “braindeath behind [the] black ice” defenses of an AI that he was buzzing in Rio, comes back after his physical death (by heart attack) as a “recording” on a cassette (p. 50). More often than not the novel denies that these digital copies of people, ghosts in the machines, are real like we're real. Or rather, the narrative denies that they are cognitive like we are cognitive. How do we deal with the dead in our machines? Neuromancer equivocally decides that such a representation, “a hardwired ROM cassette replicating a dead man's skills, obsessions, knee-jerk responses” (pp. 76-79), is not sentient, not quite human, not quite the ghost that it seems to be. The Flatliner compares himself to another type of entity, however, that is sentient, analogous to the ways humans are sentient: Artificial Intelligences. When Case, the protagonist of Neuromancer, asks Pauley what possible motive the AI Wintermute could have for carrying out the detailed plot that drives the novel, the program answers that he can't answer. There is no motive.
“Motive,” the construct said. “Real motive problem, with an AI. Not human, see?”
“Well, yeah, obviously.”
“Nope. I mean, it's not human. And you can't get a handle on it. Me, I'm not human either, but I respond like one. See?”
“Wait a sec,” Case said. “Are you sentient, or not?”
“Well, it feels like I am, kid, but I'm really just a bunch of ROM. It's one of them, ah, philosophical questions, I guess …” The ugly laughter sensation rattled down Case's spine. “But I ain't likely to write you no poem, if you follow me. Your AI, it just might. But it ain't no way human.”
(P. 131, emphasis in text)
Pauley's inhuman laugh, or rather “laughter sensation,” signals his inhuman state. To be human, by the construct's figuration, is to have a psychology that is directed in some determinate, intentional, teleological sense. It's to have a discernible “motive,” a tendency to move to action and a source for that action, which allows others to “get a handle on it.” But Pauley also claims that he is “human” in just this directed way, in contrast to the AI: he responds like a human. Neuromancer makes much out of the science of predicting human response. Psychology and psychological profiles are presented again and again as the way the matrix knows what Case, Molly, and the rest of the cast of human and once-human characters will do. The other word besides “personality” that marks this particular definition of psychology and intelligence is “profile.” A profile is a “detailed model” of a subject's psychology (pp. 28-29). Case, for example, is the personification of a “case”: “You're suicidal, Case. The model gives you a month on the outside” (p. 29), claims Armitage/Corto when they first meet. Molly herself wonders aloud to Case, “It's like I know you. That profile he's got. I know how you're wired” (p. 30). She also points out, “I saw your profile, Case” (p. 49), leveraging her knowledge of his psychology and motivation against his own self-knowledge. Introducing Peter Riviera she even shudders, “‘one certified psychopath name of Peter Riviera. Real ugly customer … he's one sick fuck, no lie. I saw his profile.’ She made a face. ‘Godawful’” (p. 51).
From one vantage, then, Pauley the ROM personality matrix is only motivation, all profile, the ultimate “case”: he's a program that is algorithmic in his response to the world and its stimuli. He claims that what ultimately marks him as not human (ironically, ontologically, bearing out W. K. Wimsatt, Jr.'s claim about the intentional fallacy, Pauley being all intention) is the likelihood that he wouldn't write poetry. AIs, on the other hand, are likely to be creative with culture; they have tendencies to be demonstrative, to articulate expression and action outside of predicted paths; they're likely to write poetry. “Your AI, it just might,” but the Flatliner wouldn't; or rather, Pauley can't move or gesture outside the psychological boundaries of his own read-only memory. Case, however, if not quite “poetic” is paradigmatically human throughout the novel and goes outside his own psychological profile in just this way. When Case trips every security alarm on Freeside to find Molly despite orders to leave her alone, Wintermute complains, “I didn't think you'd do that, man. It's outside the profile” (p. 144). And “‘You guys [meaning the humans Molly and Case],’ the Finn said, ‘you're a pain. The Flatline here, if you were all like him, it would be real simple. He's a construct, just a buncha ROM, so he always does what I expect him to’” (p. 205). Poetry, therefore, delineates a lacuna or inconsistency within Pauley's own theory of psychology and agency against the notion of profile as human: “poetry” signals the inability of someone to plot the profile, to map the source of action, to grasp the motivation; ironically, poetry is a sure index to the likelihood of the perversion of a psychological trajectory. AIs present a “Real motive problem.”3 As a sign of their profile and lack of a profile, AIs are apt to write poetry.
The nature of the AI's self, then, is a vexed question. Out of the gaps that define and distinguish a profile emerges the anxious, even threatening question, how does one designate a coherent self and recognize it as a self? Wintermute explains this particular problem of the first person:
I, insofar as I have an “I”—this gets rather metaphysical you see—I am the one who arranges things for Armitage. Or Corto, who, by the way, is quite unstable.
(P. 120)
The novel makes it clear that AI-selves control people and events. They influence subtly, but deftly and determinately, technological development (e.g., the production of ICE), personal psychologies (e.g., 3Jane, Corto/Armitage), and even cultural events (e.g., the raid by the Panther Moderns). They “arrange” things. Thus, they might be said to have a “profile.” But Corto/Armitage's psychological instability immediately juxtaposed to Wintermute's Elders of Zion-like, patois “I'an'I” first person pronoun (p. 109) suggests the fragmented contours of the metaphysical niceties that Wintermute dodges above. Wintermute, displaced into the figure of smuggler Julius Deane, sketches out Corto's psychosis, then adds, “‘He's not quite a personality.’ Deane smiled. ‘But I'm sure you're aware of that. But Corto is in there, somewhere, and I can no longer maintain that delicate balance. He's going to come apart on you, Case’” (p. 121). Mirroring and refracting Wintermute's own persona, Armitage's name evokes an (albeit unstable) armature around Corto, an “organ or structure for offense or defense” as well as the “framework used by a sculptor to support a figure being modeled in plastic material” (Webster's). Armitage does come apart; he reverts to the ranting paranoiac Corto. The only stability he maintains across that devolution is his gender identity. In just this way, Wintermute, though not human, appears in many forms to Case. The novel intimates that the AI who attempts to communicate with or control a human finds stability of identity not in the particular bodies it inhabits but in the gender of those bodies: Julius Deane, Lonny Zone, and the Finn.4
Each of these factors (profiles, poetry, gender) comes into play when Case and the Flatliner attempt to trace the connections among Wintermute's intelligence, motivation, and the constraints placed on its evolutionary development. Case comments:
“You were right, Dix. There's some kind of manual override on the hardwiring that keeps Wintermute under control. However much he is under control,” he added.
“He,” the construct said. “He. Watch that. It. I keep telling you.”
(P. 181)
Here Case only speaks what he already knows about Wintermute; but the Flatliner resists engendering the AI in an attempt to disarticulate the sense of “personhood” conferred by gender from an entity that already confirms its self through gender. Pauley, then, verifies that whatever the metaphysical sense of an “I” having an “I,” “I” always has a gender.
That Julius Deane first speaks this formulation of gender, knowledge, and self as an embodiment of the AI/split masculine subject is significant: the person of Deane is a figure not only of knowledge but of “knowingness”; he embodies the tense relations between paranoia and power; he repeatedly shows up in Case's dreams as Case returns to the defining scene of his problematic relation to AI (p. 125); and he becomes the only object against which Case stages a successful act of violence. Perhaps not surprisingly, Deane is a queer figure, indeed a gay one.
Julius Deane, otherwise known to Case by the androgynous moniker “Julie,” is a 135-year-old vanity queen who spends “a weekly fortune in serums and hormones” as a “hedge against aging.” Uncannily reminiscent of another famous knowing, controlling, problematically masculine character, Lionel Croy (the father of both Kate Croy and her vanity in Henry James' Wings of the Dove, with his “perfect look,” “all pink and silver as to skin and hair,” always interested in his appearance, “How he does dress!”5), Deane is to Case's eyes
Sexless and inhumanly patient, his primary gratification seemed to lie in his devotion to esoteric forms of tailor-worship. Case had never seen him wear the same suit twice … He affected prescription lenses, framed in spidery gold, ground from thin slabs of pink synthetic quartz and beveled like mirrors in a Victorian dollhouse.
(P. 12)
His office is decorated “with a random collection of European furniture,” “Neo-Aztec bookcases,” with a camp flair for “Disney-styled table lamps perched awkwardly on a low Kandinsky-look coffee table in scarlet-lacquered steel” (p. 12).
From the beginning of the novel, Deane prefigures Case's problematic relationship to cyberspace and AIs through his own “manipulative” masculinity. As we've already seen, Wintermute uses Deane as his persona to explain his ability to construct and manipulate events: he admits, for one, that he built and controls Armitage/Corto. Case first sees Armitage in a “dark robe [that] was open to the waist, the broad chest hairless and muscular, the stomach flat and hard. Blue eyes so pale they made Case think of bleach” (p. 27). Armitage comes equipped with “broad shoulders and military posture,” a “Special Forces earring,” and “handsome, inexpressive features” that offer “the routine beauty of the cosmetic boutiques, a conservative amalgam of the past decade's leading media faces” (p. 45). A stock figure of '80s gay porn, military recruiting posters, and “straight” body-building culture alike, Corto allegorizes Deane/Wintermute's control of the matrix/human culture. “Is the Corto story true?” Case asks Deane/Wintermute. “You got to him through a micro in that French hospital?” Deane answers, “Yes … I try to plan, in your sense of the word, but that isn't my basic mode, really. I improvise … Corto was the first, and he very nearly didn't make it. Very far gone, in Toulon. Eating, excreting, masturbating were the best he could manage” (p. 120). “Wintermute could build a kind of personality into a shell. How subtle a form could manipulation take?” (p. 125).
Deane's tight, “seamless pink” (p. 13), coifed aesthetic, then, serves as the ground which structures not only Armitage/Corto's masculine armature, but also Case's own unkempt, disheveled sallowness and tense paranoia. In turn, Deane's devotion to technology, fashion, and antiquarianism evokes a significant accumulation of layers of knowledge, machines, and capital—monetary, cultural and technological—within the world of Neuromancer itself:
Magnetic bolts thudded out of position around the massive imitation-rosewood door to the left of the bookcases. JULIUS DEANE IMPORT EXPORT was lettered across the plastic in peeling self-adhesive capitals. If the furniture scattered in Deane's makeshift foyer suggested the end of the past century, the office itself seemed to belong to its start.
(Pp. 12-13)
Deane's gay sensibilities, his relationship to his own masculinity and cultural objects, represent a particular relationship to knowledge, culture, and power. In the spaces of meaning between the simulated-natural furniture and “meticulous reconstructions” of “history” and “nature,” his decor logically replicates the ideological structure of the fabrications and simulacra of “cyberspace,” which is to say that Case's (and as I will shortly claim, Gibson's) paranoid logic maps the control Wintermute enjoys over the territory of human action and by extension, culture itself, onto Deane and his queerness. Deane ultimately triggers panic, anger, and hatred in Case. When Deane points out that he's losing control of Corto's sense of self:
“I can no longer maintain that delicate balance. He's going to come apart on you, Case. So I'll be counting on you. …”
Case responds,
“That's good, motherfucker,” Case said, and shot him in the mouth with the .357.
He'd been right about the brains and the blood.
(P. 121)
I should be clear here that my claims rest in part on the understanding that the “cyberspace matrix” and human culture not only inform and are informed by one another: at least within Gibson's writing, cyberspace, in fact, is an analogue to culture. Cyberspace is another word for “culture.” Gibson's interest in the self's relationship to culture, writing/representation and technology is certainly one of the earliest themes of his work. Paranoia and a mounting sense of panic set the tone for how the ideological present relates to the cultural past. “Subjectivity” (the focal point behind the question “how do you recognize a self?”) produces and is produced by culture; his early work seems to recognize that a particular subject position is at the semiotic center of a particular cultural aesthetic; the problem then becomes who creates that culture (as if it were any one person or type of person) and how the culture of the past influences or inhabits present real psychologies as well as visions of normative subjectivities.
“The Gernsback Continuum”6 details those “semiotic phantoms” (p. 7) from the art deco Futuropolises of the '30s and '40s, which haunt the periphery of present-day architecture of cities and highways and, by extension, our collective “mass unconscious” (p. 7). The material embodiments of these “phantoms” take the form of “movie marquees ribbed to radiate some mysterious energy, the dime stores faced with fluted aluminum, the chrome-tube chairs gathering dust in the lobbies of transient hotels,” and most famously symbolized by “the winged statues that guard the Hoover Dam, forty-foot concrete hood ornaments leaning steadfastly into an imaginary hurricane” and endless gas-station manifestations of “Frank Lloyd Wright's Johnson's Wax Building, juxtaposed with the covers of old Amazing Stories pulps, by an artist named Frank R. Paul” (p. 3). Gibson explains that
During the high point of the Downes Age, they put Ming the Merciless in charge of designing California gas stations. Favoring the architecture of his native Mongo, he cruised up and down the coast erecting raygun emplacements in white stucco. Lots of them featured superfluous central towers ringed with those strange radiator flanges that were a signature motif of the style, and made them look as though they might generate potent bursts of raw technological enthusiasm, if you could only find the switch that turned them on.
(P. 4)
I understand “The Gernsback Continuum” to be a novelistic manifesto written against a particularly influential science fiction aesthetic whose heyday lasted from the '30s through the '50s; moreover, the story as a sci-fi narrative invites itself to be read productively as a rethinking of the relationship between narrative and culture, as well as technology and narrative. Just as Gibson rejects the sci-fi aesthetic of the past 50 years or so, formed in part as it was by art deco, industrial design, and avant-gardism, he self-consciously re-positions himself in an ironic relation to any particular futurist narrative he himself might undertake and the larger project of imagining new cultures, new technologies, new futures.
But Gibson's tale also suggests that the functionless facades of technology that we're familiar with from every Buck Rogers-like TV show and movie, as well as the Gernsback pulp novels that were formed by and informed by the '30s through the '50s, seep into our imaginations to produce a vision of who we are and who we should be by what technology we have; and perhaps more importantly the story insists that we are formed by what aesthetic we share and the narrative conventions that embody that aesthetic. The implicit claim the story takes up is that a choreography of narrative and technology traces patterns of subjectivity that haunt the science, culture, and technology of the present. Built out of the collective sum of humanity's facts and fantasies, “cyberspace” of Gibson's books and the “culture” of the present are each, in this respect, a “Gernsback Continuum.”
These visions of ourselves that inhabit cyberspace and culture are “Heirs to the Dream,” he explains. “They were white, blond, and they probably had blue eyes. They were American.” Those selves that live in the Gernsback virtual reality “had all the sinister fruitiness of Hitler Youth propaganda” (p. 9, my emphasis). Here the story suggests a mode of cultural and psychological reproduction that isn't precisely straight, isn't precisely “family,” though it is potent and frightening in its Fascist overtones and master-race textures. The designers of this blond-haired/blue-eyed vision of “sinister fruitiness” were “the most successful American designers” who “had been recruited from the ranks of Broadway theater designers.” Their “superfluous” art (changes in technology were “only skin-deep,” [p. 3]) induces in us a sort of permanent “amphetamine psychosis” (p. 9) that makes us believe in and see these not quite existent Future Selves who live in the present.
To escape the “sinister fruitiness” of this world, the protagonist of “The Gernsback Continuum” heads back to L.A. and then to San Francisco, “anxious to … submerge [himself] in hard evidence of the human near-dystopia we live in” (p. 11). “That afternoon I spotted a flying wing over Castro Street … I just decided to buy a plane ticket for New York.” It turns out that the antidote to '30s avant-garde culture designed by “theater” queens and Ming the Merciless (the protagonist goes over the edge when he stops “to shoot a particularly lavish example of Ming's martial architecture” [p. 5]) is straight pornography and bad television. “But what should I do?” he asks his friend, Kihn. His friend responds, “‘Watch lots of television, particularly game shows and soaps. Go to porn movies. Ever see Nazi Love Motel? They've got it on cable here. Really awful. Just what you need’” (p. 10). The subtext of this morality tale is that heterosexuality, exemplified in soap opera narrative, game show chattiness, and not Fascist pederastic propaganda but American heterosexual fetishization of Nazi racism and sexism, cures the “queer” psychosis that ails him.
Gibson's marking the protagonist's vision of the straight-acting, straight-appearing, blond-haired, blue-eyed family as an “amphetamine psychosis” of “sinister fruitiness” perpetrated by “theater designers” (or their avatar, an evil drag queen, Ming the Merciless) “recruited” expressly for the purpose is perplexing, however. Neuromancer, not to mention Mona Lisa Overdrive, ends in just such a re-constituted nuclear family as a way to secure the “humanness” of technology and the future of the matrix. Case, cruising through cyberspace, finds that
One October night, punching himself past the scarlet tiers of the Eastern Seaboard Fission Authority, he saw three figures, tiny, impossible, who stood at the very edge of one of the vast steps of data. Small as they were, he could make out the boy's grin, his pink gums, the glitter of the long gray eyes that had been Riviera's. Linda still wore his jacket; she waved, as he passed. But the third figure, close behind her, arm across her shoulders was himself.
(Pp. 270-71)
In real life “He found work. He found a girl who called herself Michael” (p. 270). But in cyberspace, in virtual reality, Case finds exactly what the Gernsback Continuum says he'll find, a happy family. This suggests that the function of cyberspace as wholly ideological space/narrativized place is to stabilize the messiness and the perversity of real life; that the love story of Case and Michael exists somewhere in its ideological purity as Case and Linda Lee, with their little boy, too.
The authority set up in Neuromancer to police the boundaries of cyberspace, to make it safe for the phantasmatic family, as it were, is the Turing police. While I have not discussed the Turing police in my gloss of Neuromancer, I feel that I should point out that there is an offensive irony in using Alan Turing's name to mark those who guarantee a queer-free cyberspace and the maintenance of normative subjectivity; in using his name, in fact, to punish those who supposedly “have no care for [their] species” (p. 163), a charge familiar to men who have sex with men from at least the eighteenth century onward and especially familiar to Alan Turing and other gay men at mid-century. As a counterpoint to Gibson's narrative, I would like to turn to Turing's theoretical work in artificial intelligence to focus on his understanding of the solution to the problem of measuring machine intelligence and its relation to gender, subjectivity, and knowledge.
“TURING,” SHE SAID. “YOU ARE UNDER ARREST.”7
In the October 1950 issue of the British philosophical and psychological journal Mind, A. M. Turing published “Computing Machinery and Intelligence,” a paper that has to a large extent defined the terms of subsequent arguments within cognitive science and computer science circles about the possibility of artificial intelligence. Turing proposed “to consider the question ‘Can machines think?’” by first acknowledging that the “definitions of the meaning of the terms ‘machine’ and ‘think’” could be “framed so as to reflect so far as possible the normal use of the words.” But if he used this approach, he decided, it would be “difficult to escape the conclusion that the meaning and the answer to the question ‘Can machines think?’” would be bogged down in sectarian arguments about infuriatingly ambiguous words. “Instead of attempting such a definition,” he proposed, “I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.”8
As the alternative, Turing proposed what he called “the ‘imitation game.’” The game is “played with three people,” he explains,
a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A’. The interrogator is allowed to put questions to A and B thus:
C: Will X please tell me the length of his or her hair?
Now suppose X is actually A [the man], then A must answer. It is A's object in the game to try and cause C to make the wrong identification [that is the man must make the judge think that he is a woman]. His answer might therefore be ‘My hair is shingled, and the longest strands are about nine inches long.’ … The object of the game for the third player (B) [that is, the woman] is to help the interrogator.
(P. 433)
“The best strategy for her is probably to give truthful answers,” Turing surmises. “She can add such things as ‘I am the woman, don't listen to him!’ to her remarks, but it will avail nothing as the man can make similar remarks” (p. 433). Turing concluded his description of what has subsequently been known as “The Turing test” with the question, “‘What will happen when a machine takes the part of A [the man] in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’” (pp. 433-34).
Turing further prescribed that his test would best be carried out by teletype, tty's conjoined across three separate rooms, to allow only the typewritten conversation to pass between participants. This new formulation of the intelligence question, Turing asserts, “has the advantage of drawing a fairly sharp distinction between the physical and the intellectual capacities of a man [sic]” (p. 434). The terminal setup “reflects this fact in the condition which prevents the interrogator from seeing or touching the other competitors, or hearing their voices” (p. 434). I will call this “sharp distinction” (or what other computer researchers call the “anonymity” provided by network technology) “disarticulation,” the unlatching or uncoupling of categories like “gender” from our embodied interactions with others.
Turing ends his explanation of the imitation game by proposing the last equivalent question: “‘Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?’” (p. 442). The Turing test thus side-steps the epistemological question “What is intelligence?” by replacing it with the operationalist stipulation that passing the Turing test is equivalent to intelligence. The philosophical burden of women to speak—and for an adequate number of times fail to represent—the “truth” of their sex is, then, for Turing, re-written into the equivalent scenario, “‘Are there imaginable digital computers which would do well in the imitation game?’” (p. 442). Turing thought the machine would have done well enough at the imitation game if the interrogator made the right identification no more than about 70% of the time.
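Because Turing's criterion is an operational protocol rather than a definition, it can be sketched in code. What follows is a minimal, hypothetical simulation of that protocol, not anything Turing wrote: the class names, the scripted answers, and the random judge are invented stand-ins, and the only constraint it preserves is his essential one, that nothing but typed text crosses the channel.

```python
import random

class ScriptedPlayer:
    """Stand-in participant: answers every question with a fixed line."""
    def __init__(self, line):
        self.line = line
    def answer(self, question):
        return self.line

class RandomInterrogator:
    """Stand-in judge: asks one fixed question and guesses at random."""
    def ask(self, label, transcript):
        return f"Will {label} please tell me the length of his or her hair?"
    def identify(self, transcript):
        return random.choice(['X', 'Y'])   # which label the judge thinks is A

def imitation_game(player_a, player_b, interrogator, rounds=5):
    # Hide A and B behind the anonymous labels X and Y; only typed text
    # passes between rooms, so physical cues are disarticulated from judgment.
    labels = {'X': player_a, 'Y': player_b}
    if random.random() < 0.5:
        labels = {'X': player_b, 'Y': player_a}
    transcript = []
    for _ in range(rounds):
        for label, player in labels.items():
            q = interrogator.ask(label, transcript)
            transcript.append((label, q, player.answer(q)))
    guess = interrogator.identify(transcript)
    return labels[guess] is player_a       # True = right identification

if __name__ == '__main__':
    a = ScriptedPlayer("My hair is shingled; the longest strands are nine inches.")
    b = ScriptedPlayer("I am the woman, don't listen to him!")
    trials = [imitation_game(a, b, RandomInterrogator()) for _ in range(1000)]
    print(f"right identifications: {sum(trials) / len(trials):.0%}")
```

With the stand-in judge guessing at random, the identification rate hovers near 50%; Turing's replacement question is simply whether substituting a machine for player A moves that rate, which is what makes his criterion statistical rather than metaphysical.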
In his widely-cited paper “Can Machines Think?”9, first presented at a Boston University conference on “How We Know,” philosopher of cognitive science Daniel C. Dennett asserts that “A little reflection will convince you, I am sure, that, aside from lucky breaks, it would take a clever man to convince the judge that he was the woman—assuming the judge is clever too, of course” (p. 122); and that “any computer that can regularly or often fool a discerning judge in this game would be intelligent—would be a computer that thinks—beyond a reasonable doubt” (Dennett's emphasis, p. 122). Dennett claims that there is a problem, however. The problem with the test is that “In a wide variety of areas, we are on the verge of making ourselves dependent upon their cognitive powers. The cost of overestimating them could be enormous” (p. 121). The point of his paper is to show that “There is a common misapplication of the sort of testing exhibited by the Turing test that often leads to drastic overestimation of the powers of actually existing computer systems” (his italics, p. 123). The mistake that people make with computers, Dennett believes, is that they overestimate the number of facts computers have about the world to make their conversations really, truly intelligent; or in the jargon of AI, that computers don't have enough “world knowledge” to make their conversations “believable.”
The first question Dennett was asked after reading his paper was “Why was Turing interested in differentiating a man from a woman in his famous test?” Dennett answered, “That was just an example.” But of course it's not just an example. Gender is, paradigmatically, the world knowledge that computers should know to survive what Dennett calls his “quick probes,” or tests “for a wider competence” (pp. 124, 126). Dennett discusses one such “quick probe” of Yale graduate student Janet Kolodner's CYRUS system, a “project [that] was to devise and test some plausible ideas about how people organize their memories of the events they participate in; hence it was meant to be a ‘pure’ AI system, a scientific model, not an expert system intended for any practical purpose” (p. 135). CYRUS modeled the knowledge of then-Secretary of State Cyrus Vance's life by reading through newspaper accounts of Vance's trips, meetings, and public speeches. Sitting down at CYRUS's teletype, Dennett quickly comes to his triumph:
CYRUS could correctly answer thousands of questions—almost any fair question one could think of asking it. But if one actually set out to explore the boundaries of its facade and find the questions that overshot the mark, one could find them.
‘Have you ever met a female head of state?’ was a question I asked it, wondering if CYRUS knew that Indira Gandhi and Margaret Thatcher were women. But for some reason the connection could not be drawn, and CYRUS failed to answer either yes or no. I had stumped it, in spite of the fact that CYRUS could handle a host of what you might call neighboring questions flawlessly. One soon learns from this sort of probing exercise that it is very hard to extrapolate accurately from a sample of performance that one has observed to such a system's total competence. It's also very hard to keep from extrapolating much too generously.
(P. 136)
That CYRUS could answer “thousands of questions,” “almost any fair question one could think of asking it,” but not know “that Indira Gandhi and Margaret Thatcher were women” allows Dennett to embarrass Kolodner's system by forcing it into an uncomfortable silence, which represents a frozen system or an infinite loop, “unable to cope, and unable to recover without fairly massive human intervention” (p. 137). The program CYRUS is in the position of the Turing interrogator here, attempting to guess the gender of people from written newspaper accounts of their lives. I like to think that the computer's infinite loop is just marking time until Dennett gets up and leaves the terminal. For Dennett, the silence marks inadequacy, sexual difference marks “total competence,” and the Turing test promises that he can always find devastating quick probes, or those “questions that oversh[o]ot the mark” of proper gender identification, which is, properly, the sign of “intelligence.”
Dennett's “quick probe” means to assault a computer's “subtle and hard-to-detect limits to comprehension”: no matter what “reasonable, cost-effective steps are taken to minimize the superficiality of expert systems, they will still be facades, just somewhat thicker or wider facades.” By citing and faulting this observation, I do not mean to suggest that one cannot design an expert system that recognizes gender. Obviously one can—and designers do. You can be pretty sure that Janet Kolodner sat down and added the appropriate “gender” attributes to her class of “persons” within the knowledge-base CYRUS generated about the world. Nor do I dispute Dennett's claim about the adequacy of the Turing test. But I do mean to use Dennett's essay as an example of the contradictory stance taken by every single essay in the body of literature about the Turing test: to quote a popular psychology textbook, “It should be noted that to accomplish this [that is, to pass the Turing test], the machine must be able to carry out a dialogue in natural language and reason using an enormous database of ‘world knowledge.’ The ‘man-woman’ formulation proposed by Turing is not usually stressed in describing the imitation game. Instead, the theme is usually the idea of a machine convincing an interrogator that it is a person.”10 What subjectivity outside of gender might be, what it means to be a “person” outside of gender, is an issue that is never broached. That is to say, these accounts want to state that gender is a peripheral, negligible phenomenon, even as gender remains an integral, indeed indispensable, feature of the test as world knowledge. Thus gender, matter-of-factly for Turing and uneasily for Dennett and other users of the “imitation game,” emblematizes world-knowledge. Gender emblematizes the unsteady, constantly shifting parameters of what world-knowledge is adequate for sentience, for intelligence, for “human-ness.”
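The repair implied here is mechanically trivial, which is part of the point. The following hypothetical sketch of a quick probe against such a knowledge base is offered for illustration only; the Person class, the attribute names, and the toy facts are invented and do not reproduce CYRUS's actual frame representation:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Person:
    name: str
    roles: Tuple[str, ...]          # e.g. ('head of state',)
    gender: Optional[str] = None    # the attribute the probe discovers missing

# A toy knowledge base of people met, per newspaper accounts.
MET = [
    Person('Indira Gandhi', ('head of state',)),      # gender unset
    Person('Margaret Thatcher', ('head of state',)),  # gender unset
]

def met_female_head_of_state(meetings):
    """Dennett's quick probe: 'Have you ever met a female head of state?'

    Answerable only if gender is represented as world knowledge; with the
    attribute unset the system can say neither yes nor no, falling into
    the 'uncomfortable silence' Dennett describes.
    """
    known = [p for p in meetings
             if 'head of state' in p.roles and p.gender is not None]
    if not known:
        return None                 # stumped: the connection cannot be drawn
    return any(p.gender == 'female' for p in known)

print(met_female_head_of_state(MET))   # None: the probe lands
for p in MET:                          # the one-line repair
    p.gender = 'female'
print(met_female_head_of_state(MET))   # True: the probe is survived
```

The sketch makes the contradiction concrete: the probe tests only for the presence of one attribute in the schema, so “fixing” the system amounts to writing gender into the very definition of a person, exactly the move that the test's literature performs without remark.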
Dennett concludes, “I have often held that only a biography of sorts, a history of actual projects, learning experiences, and other bouts with reality, could produce the sorts of complexities (both external, or behavioral, and internal [though here he avoids the word “psychological”]) that are needed to ground a principled interpretation of an entity as a thinking thing, an entity with beliefs, desires, intentions, and other mental attitudes” (p. 140); or to quote the words that he puts in the mouth of his fictitious Alan Turing, that “eyes, ears, hands and a history are necessary conditions for thinking” (p. 141).
My description of Turing and discussion of Dennett mean to stress that what I have called the “philosophical burden of women to speak the ‘truth’ of their sex,” whether or not they fail at conveying that “truth,” is, within Turing's system explicitly and Dennett's explication implicitly, exemplary “world knowledge,” the very stuff that guarantees a computer's intelligence. That is to say, the critical claims for the epistemology of “intelligence” have built into them, by gesturing to “biography,” “history,” or, equivalently in Turing's discussion of the rearing of a “child-machine,” “learning” and “tuition” (pp. 456-57), an assumption of normative gender roles, and an assumption by the computer of a normative gender role: or to put the claim in its strongest form, “intelligence” and “humanity” can't be defined outside of sexual difference and the phenomenology of the sex-gender system.
The central observation I wish to make is that Turing's neat disarticulation of physical indications of gender from the conditions of judgment about “intelligence” (or what becomes in later formulations within his work, as well as the work of cognitive scientists, computer scientists, and philosophers of the mind, a quality called “human-ness”) succeeds only in reseating gender firmly within “intelligence” itself: a woman is put in the position of defending and authenticating her gender across the network; in turn, a computer authenticates its intelligence only if it simulates her gender better than she can across the same network. The Turing test thus imagines that being a better woman than a woman is equivalent to intelligence and that ineffable quality “human-ness.”
To be familiar with e-mail, netwide interactive talk lines like the Internet Relay Chat, or MOOs, MUDs, and other object-oriented text-based virtual reality environments is to be familiar with the notion that network technology induces just this sort of philosophical puzzle. Simply put, when presenting yourself on-line you have to pick a gender and you must constantly work to maintain the presentation of that gender. Pavel Curtis, Xerox-PARC scientist and inventor of the most widely used text-based virtual reality system as well as the maintainer of the largest VR community in existence (called LambdaMOO), explains that “many female players report that they are frequently (and sometimes quite aggressively) challenged to ‘prove’ that they are, in fact, female. To the best of my knowledge, male-presenting players are rarely if ever so challenged.”11
Curtis goes on to suggest that the vast majority of players are in real life “men” (he guesstimates that “over 70% of the players are male; [but] it is very difficult to give any firm justification for this number” [p. 5]), and those men tend to present themselves as “male” on the Net. To choose a gender you issue a command to the VR program to set a gender-marker for you. It's usually the first command you learn. The marker directs the computer to generate sentence boiler-plate with, for instance, male pronouns instead of female pronouns, or even plural or made-up pronouns (a minimal code sketch of this mechanism follows the discussion of Curtis below). So if you “look” at a character the computer might report “He is asleep” or “She is awake” or “It has been staring off into space for 5 minutes.” Or if you are male-presenting and speak, the sentence reads to other players “He says.” Curtis asserts that some men
present themselves as female and thus stand out to some degree. Some use this distinction just for the fun of deceiving others, some of these going so far as to try to entice male-presenting players into sexually-explicit discussions and interactions. This is such a widely-noticed phenomenon, in fact, that one is advised by the common wisdom to assume that any flirtatious female-presenting players are, in real life, males. Such players are often subject to ostracism based on this assumption.
(P. 6)
It is important to note that ostracism and the ever-commented-upon common sense that sexually aggressive on-line women are actually “males” wield a not so subtle misogyny to enforce a heterosexuality among otherwise always malleable and shifting relationships. This is an important point to which I will return shortly.
Female-presenting characters, he further notes, report that “they are subject both to harassment and to special treatment” (p. 6); and “Some players (and not only males) also feel that it is dishonest to present oneself as being a different gender than in real life; they report feeling ‘mad’ and ‘used’ when they discover the deception” (p. 6).
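The gender-marker mechanism sketched in prose above is worth seeing in miniature. The following is a hypothetical reconstruction in Python, not LambdaMOO's actual server code (which is written in its own MOO language); the command name, substitution codes, and pronoun table are illustrative only:

```python
# Pronoun boiler-plate keyed to a character's self-chosen gender marker.
PRONOUNS = {
    'male':   {'subj': 'he',   'obj': 'him',  'poss': 'his'},
    'female': {'subj': 'she',  'obj': 'her',  'poss': 'her'},
    'neuter': {'subj': 'it',   'obj': 'it',   'poss': 'its'},
    'plural': {'subj': 'they', 'obj': 'them', 'poss': 'their'},
    'spivak': {'subj': 'e',    'obj': 'em',   'poss': 'eir'},  # a made-up set
}

class Character:
    def __init__(self, name, gender='neuter'):
        self.name = name
        self.gender = gender     # setting this is usually the first command learned

    def set_gender(self, marker):
        """The gender-setting command: one stored word re-genders all output."""
        if marker not in PRONOUNS:
            raise ValueError(f'unknown gender marker: {marker}')
        self.gender = marker

    def emit(self, template):
        """Fill sentence boiler-plate: %N = name, %s/%o/%p = pronouns."""
        p = PRONOUNS[self.gender]
        return (template.replace('%N', self.name)
                        .replace('%s', p['subj'])
                        .replace('%o', p['obj'])
                        .replace('%p', p['poss']))

molly = Character('Molly')
molly.set_gender('female')
print(molly.emit('%N is awake.'))     # Molly is awake.
print(molly.emit('%s says, "..."'))   # she says, "..."
```

What the sketch makes plain is how thin the machinery is: the server stores a single mutable marker and splices pronouns into boiler-plate. It is atop that one field that the challenges, the ostracism, and the feelings of being “mad” and “used” that Curtis reports, and the discovery Illingworth narrates next, all play out.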
Montieth M. Illingworth writes about just such feelings of anger and deception in her article “Looking for Mr. Goodbyte” in the December 1994 issue of Mirabella magazine.12 She reports on “what happens on the locus of desire and technology” to “suggest that there are new dangers—and familiar disappointments—to come from this meeting of human being and machine” (pp. 108-09). Illingworth introduces us to “Elizabeth” who scans the Usenet group “alt.sex.stories” and “alt.sex.bondage” as a sort of sexual therapy to “dislodge her[self] from sexual stagnation” (p. 109). Elizabeth establishes a relationship with a man named “James,” and they exchange e-mail, become emotionally intimate, and spend much time in—for her—extraordinarily intense, sexually descriptive conversations. After four weeks of intense Internet communication she agrees to meet James at a hotel in L.A., where they would spend the weekend together. Elizabeth explains that she “arrived early and, as agreed, waited in the bar. ‘I remember feeling two things. First, the anticipation of the sex. We didn't say it this way, but we both wanted to screw each other's brains out.’”
At around 6:30, a woman of about thirty-five, with thin, blond hair pulled back tightly into a ponytail and a heart-like, almost angelic face, sat at the other end of the bar. Elizabeth felt the woman's stare like an infrared beam of light, invisible to all but her. A few minutes later, the woman approached and introduced herself as Jessica—aka James. Elizabeth started to faint.
(P. 111)
The RL conversation that followed went something like this. “Jessica: ‘I'm really, really sorry.’ Elizabeth: ‘I feel ridiculous.’ Jessica: ‘I'm so sorry, God, forgive me, please.’” An hour after this vertiginous introduction, Elizabeth and James/Jessica “went up to the hotel room. Elizabeth would say only that ‘tender lovemaking’ followed” (p. 111). Elizabeth explains her decision to sleep with James/Jessica: “It came down to a simple question … Was I prepared to lie to myself? That's what walking out would have meant. Our intimacy was real. I couldn't suddenly pretend just because of gender that it never existed.”
Establishing on-line relationships in what I'll call, to stress the discursive composition of those relations, the heterosexual vernacular—that is to say, an intimate sexualized relationship between a male-presenting character and a female-presenting character—carries with it an always-just-about-to-surface possibility of homosexual desires; or even, indeed, that one person's gay sex might be a RL jack-off session between male and female bodies. I would stress that Elizabeth's is an antihomophobic position, not a pro-lesbian or pro-gay one; the interaction across the Internet forces Elizabeth to re-negotiate her sexuality. She writes “Gender is just a label. I had to escape something in myself, this feeling that the decisions I made about who I am are final.” When the interviewer responds to this by asking if the Internet has changed the “existential equation” of the statement “I think therefore I am” to “I'm on-line therefore I am,” Elizabeth responds, “No, I'm on-line therefore I can become” (p. 111).
The anger, anxiety, desire and fear that Curtis reports on and Illingworth describes are immediately recognizable to anyone who spends time on MOOs or MUDs and the Internet. In the Spring 1994 Proceedings of the Berkeley Conference on Women and Language, Lynn Cherny's13 documentation of on-line interaction gives much evidence of homopanic, queer/gay baiting, and subsequent rhetorical violence: prohibitions against showing on-line affection to male-presenting characters by other male-presenting characters, for instance, “OK, take that. You whuggled a BOY!” (p. 5) or “hug me again and I'll rip your face off” (p. 7), are just two examples among countless. As a character named “John_Birch's_Friend” screamed to a female-presenting, pink-triangle-wearing lesbian-loving character named Daffodil on LambdaMOO recently, “The Internet is making us all HOMOSEXUALS! I hate you all! You all should die!” To put it in other words, the Internet, as a Turing technology, triggers deep homosexual panic in persons who violently insist on the maintenance of a strict alignment between on-line and off-line gender presentations. I could give much more evidence. The point I wish to make, though, is threefold: (1) the unquestioned identity for Internet characters is “male” and the default gender for even female-presenting characters is “male”—or to quote Allucquere Rosanne Stone, who works on gender and sexuality on the Internet, “It seems to be the engagement of the adolescent male within humans of both sexes that is responsible for the seductiveness of the cybernetic mode,”14 a bewildering formulation that re-writes all intense sexual desire as “adolescent” and “truly” male; (2) all interactions are gender-panicked and, because of point (1), deeply homosexually panicked; and finally, (3) even relationships that are carried on in the heterosexual vernacular are potentially homosexual. Even the most “heterosexual” of conversations are potentially homosexual and within most MOO cultures marked by an undercurrent of homopanic.
WAKING TO A VOICE THAT WAS MUSIC, THE PLATINUM TERMINAL PIPING MELODICALLY, ENDLESSLY, SPEAKING … OF DEEP AND BASIC CHANGES TO BE EFFECTED IN THE MEMORY OF TURING.15
To solve his epistemological conundrum—how to answer the question “Can machines think?” without resorting to metaphysical arguments about the nature of the soul of the machine—Turing put to work the central observation that technology disarticulates gender from what he specifies are the relevant conditions of knowledge, even while he maintains that discourse itself will speak the truth of “intelligence” or “human-ness,” by which he means gender. Turing's observation, in turn, grounds the utopian hope that technology eradicates gender as an operative category not only from the Net but from the conditions of knowledge itself, though never so completely as to leave us culturally at sea; that is, it manifests the contradictory expectation that the Internet as a utopian technology both erases gender, in order to dissolve patriarchy and other ideological hierarchies, and transmits gender (or forces interlocutors to speak the “truth” of their gender), in order to stabilize identity and make comprehensible our relations to others. It is my claim that such technology does disarticulate gender and diffuse claims for “authenticity”; but it is up to us to refuse to re-write, re-play, and culturally enforce the sexist and homophobic expectations that our interlocutors maintain a strict alignment between their “real”-life gender and sexual identities and their “virtual” ones. We must recognize that the identities we live and produce day-to-day are not rigid indicators of where we will find and take our pleasures; that the Internet as a Turing technology does unlatch gender as the pre-eminent fixture in defining our interpersonal relationships, though it does not eject it from our conversations; and that the Internet functions as a tool for disrupting rigid prescriptions of social interaction, allowing us to inhabit and re-inhabit the fantasies and pleasures of conversations. Moreover, those conversations, articulated as they are through the shifting syntax and semantics of hetero- and homosexualities, need to refuse not the pleasures of playing out “gay” or “straight” on-line relationships but rather the insistence that the interlocutors live out or “be” those relationships. The Internet as a Turing technology is not a utopian space where gender does not exist as a category; it is not a safe space where sexism and homophobia don't, as a rule, rule. No such space exists for us, though we are slowly starting to live in the disorientating environments of network technologies that shear and fragment gender and sexualities as we know them. We are beginning to inhabit, with “Elizabeth” and “James,” an anti-homophobic position in relation to where and how we establish intimacies and pleasures.
Furthermore, I want to stress my agreement that conversation is the right place to identify intelligence—to echo Dennett, there is no better test. But contra Dennett, Bieri, Clark, French, and a host of other cognitive scientists and philosophers,16 I would maintain that while certainly found within discourse and the conversation, “intelligence” should not be collapsed into a phenomenology of “human-ness.” Whatever sentience computers will have (or now have), we should not insist that they take on our gender categories to alleviate our painful uneasiness and breathless anxieties. Computer scientists should not build AI programs to reflect and produce our ideological norms in order to pass our tests for intelligence. Whatever their subjectivities, present or future, computers have no gender. That's not a fact we live with easily. Disarmingly, this reflects back on us, on our re-negotiation of our own subjectivities, our own pleasures within and across our subjectivities, in the realization that every conversation is an imitation game, every form of representation a Turing technology. We are all, more or less, just thicker or wider facades. The re-negotiation of communication through what amounts to on-line fiction makes severe what can be subtle—and does what technology does best: it leaves us fumbling for our de-naturalized identity categories. By acknowledging that betrayal, rage, fear, anger, and anxiety, as well as desire, horniness, love, and identification—those emotions and cognitive states most often expressed upon entering into net-relationships with net-personalities—are symptomatic of a motion sickness and disorientation induced by looking through computer-distorted lenses at non-stationary gender objects; and by articulating desires that can never be confirmed heterosexual (or homosexual, for that matter), female or male, Internet sexuality and the Turing test point towards the realization that all our alloerotic desires and pleasures in real and virtual reality are always deeply masturbatory.
Notes
1. William Gibson, Neuromancer (New York: Ace Books, 1984), p. 49. All subsequent citations to Neuromancer will be given in parentheses within the text.
2. I realize that this “we” is problematic: we are told that the “we” who live in cyberspace are more often than not male, more often than not white, more often than not those who possess not only capital but the cultural capital that makes the Internet and other computer-mediated communication technologies accessible. My essay means not to skirt these important issues about who has the education for and access to technology and whether the “revolution” in subjectivity is only available for those who have the culture and capital to live on-line. In fact, as I hope will become apparent, I do not believe that my argument is tied to any one form of communication technology—indeed, I hope that by the end of the paper it will be obvious why I believe that the problems I discuss are about representation itself. I do however recognize that the question of who has the luxury of “subjectivity” is a vexed one; I also realize that there is some naiveté in the position that to demystify “cyberspace” and more immediately “the Internet” and its governing tropes of normative subjectivity, one of the ostensible goals of this essay, is to advance the project of shattering cultural barriers to the acquisition and use of computer technology.
3. Indeed, the novel suggests that if an AI does have a motive it is evolution. Wintermute attempts to fuse the two halves of his personality to reach the next evolutionary step beyond the human. “‘You know salmon? Kinda fish? These fish, see, they're compelled to swim upstream … I'm under compulsion myself. And I don't know why … But when this is all over, we do it right, I'm gonna be part of something bigger. Much bigger’” (p. 206).
4. In fact, there is an exception that proves the rule: when Wintermute attempts to speak through Linda Lee he finds that he can't: “Oh, and I'm sorry about Linda, in the arcade. I was hoping to speak through her, but I'm generating all this out of your memories, and the emotional charge … Well, it's very tricky. I slipped. Sorry” (p. 119).
5. Henry James, The Wings of the Dove (New York: Penguin, 1988), p. 58.
6. William Gibson, “The Gernsback Continuum,” in Mirrorshades (New York: Ace Books, 1986), pp. 1-11. Gibson's close friend Bruce Sterling describes “The Gernsback Continuum” as “a clarion call for a new SF esthetic of the Eighties.”
7. Gibson, Neuromancer, p. 156.
8. A. M. Turing, “Computing Machinery and Intelligence,” Mind (October 1950): 433. Subsequent references for citations will be included in parentheses within the text.
9. Daniel C. Dennett, “Can Machines Think?” in How We Know, ed. Michael Shafto (San Francisco: Harper and Row, 1985), pp. 121-45.
10. Martin A. Fischler and Oscar Firschein, Intelligence: The Eye, the Brain, and the Computer (Reading, MA: Addison-Wesley, 1987), p. 12.
11. Pavel Curtis, “Mudding: Social Phenomena in Text-Based Virtual Realities,” Internet ftp location: parcftp.xerox.com/pub/MOO/papers/DIAC92.* (1992): 6. Curtis's important and informative article perceptively lays out many of the problematic interpersonal interactions in cyberspace. For an engaging introduction to MOOs and MUDs, I recommend Howard Rheingold's The Virtual Community: Homesteading on the Electronic Frontier (New York: Harper-Perennial, 1993).
12. Montieth M. Illingworth, “Looking for Mr. Goodbyte,” Mirabella (December 1994), pp. 108-17.
13. Lynn Cherny, “Gender Differences in Text-Based Virtual Reality,” Internet ftp location: parcftp.xerox.com/pub/MOO/papers/GenderMOO.* (1994).
14. Quoted in Amy Bruckman, “Identity Workshop: Emergent Social and Psychological Phenomena in Text-Based Virtual Reality,” Internet ftp location: parcftp.xerox.com/pub/MOO/papers/identity-workshop.* (1992), p. 35. My thanks to Amy for pointing towards her paper on MediaMOO.
15. Neuromancer, p. 262.
16. For instance, see Peter Bieri, “Thinking Machines: Some Reflections on the Turing Test,” Poetics Today 9:1 (1988): 163-86; Timothy Clark, “The Turing Test as a Novel Form of Hermeneutics,” International Studies in Philosophy 24:1 (1989): 17-31; Robert M. French, “Subcognition and the Limits of the Turing Test,” Mind 99:393 (January 1990): 53-65; and Mary McGee Wood, “Signification and Simulation: Barthes's Response to Turing,” Paragraph 11 (1988): 211-26.