Google and the A.I. Gaze
The popular Google Arts & Culture app performs the feminized job of making people smile.
Google Arts & Culture, which sounds like a ministry of some dystopian government, recently became a frequently downloaded app after a feature involving selfies was released in December. The feature requires you to photograph yourself on the spot rather than, for example, upload your favorite selfie. As your face finds the rectangular frame, there is despair, because you haven’t put on any makeup; the jolt of an unexpected intrusion of the public; and a miniature thrill at this disburdening. You won’t even have to pick a filter. If so moved, you can compose a caption, but otherwise the app takes over your job of populating a social-media post.
The pairs of photos—to the left, you; to the right, a face, a clipping from the collection of one of more than 1,200 “cultural institutions,” which Google’s AI algorithm proposes as your doppelgänger—resemble “Before and After” features in lifestyle magazines. In its distillation to a headshot, each work of art has been cropped much as Facebook requires users to crop photos for their profiles, to obscurely comic effect. Among my matches, a picture labeled “Portrait of a young woman with th…” showed only a baby. That there are, each time, five options, which, by swiping left, you discard in favor of the next, reinforces the app’s point, which is to find one you like. The app’s loyalty is not to art history or to accuracy, but to you. There is, in the contrast between the precision of the captions (“58% match,” “62% match”) and the image set’s indifference to description, something of the preposterousness of the American genre of self-help.
It was at first rumored that Google might use the selfies against you. It was then reported, for example by Quartz, that, per Google’s in-app message, the company does not store the images or use them to train its system. It was at last argued, by a representative of the Electronic Privacy Information Center quoted in The Washington Post, that the app presented at least the danger of habituating its users to facial recognition technology, which many governments increasingly turn to purposes of social control.
This AI system has taken on a traditionally human job of helping to make people smile, which feminist writers theorize as affective labor: “labor that,” as Antonio Negri and Michael Hardt write in Multitude, “produces or manipulates affects,” states of mind and body. In Her, Theodore Twombly, who is entranced by an endlessly articulate and charming AI system named Samantha, works as a ghostwriter for a website called BeautifulHandwrittenLetters.com, which somehow has not been automated. Charm is the last province of humans, an intuitively derived secret recipe. But, as Twombly’s infatuation with Samantha indicates, it is, in fact, easily faked.
The story that America tells itself about job loss due to technological acceleration is one of men: of auto plants, of Bruce Springsteen’s protagonists. Because the peopling of factories by robots has been so long in happening, and because the advent of driverless cars, which threaten truckers, has reasonably sparked worry, it is common to think of automation as a threat to masculinized labor. But a study last summer suggested that twice as many women as men will lose their jobs in the near term; many will be women of color and women who have received less formal education. The jobs that will go are affective in that they cannot be performed in a state of fidelity to one’s own ill humor. Cashiers, for example, will be widely supplanted. Chatbots increasingly replace humans as customer-service representatives. Home robots, which resemble rounded, wheeled iPods, take photos and cheer you up. They take on, for now only fractionally, the stage management of family life. They recognize faces, too, and let you monitor your home remotely.
Google’s app does faces only. It found no match for either of my hands. Within faces, it seems to favor some features over others. In my matches I recognized something of the aspect ratio of my cheeks. A high-school acquaintance posted a match that was convincing, I think because she’d taken the selfie with a blanket covering everything but her eyes. The painting had her eyes. More typically, the app flatters by its failure: you are you, and only you. You are compared favorably not only with the painting, which resembles you only slightly, and is making a face far too serious for the occasion, but also with the app, whose mechanism for perception is sufficiently crude to administer a dose of superior feeling.
We are not in the presence of a mimetic talent comparable with that of the painter, whose traditionally masculinized labor has been to transpose a human muse into oil. What this AI system is performing is more on the order of a party trick. Like a hostess, it delights. In it, you are presented with an apparent sentience whom everything reminds of you. It is like your grandmother, who, years after your study abroad, clips and mails to you articles about that country. It sees you, if not precisely then insistently.
While it is hard to unsee that a piece of technology has seen you, “sight” refers, here, to selection, picking a painting out of reams of data. By this definition, it is an ability given to AI systems more than to humans. A skill like this would be necessary, for a human, to meaningfully study the tens of thousands of artworks (and counting) in the Google Arts & Culture database. In 2016, Google began lending out specially developed high-resolution Art Cameras. The company had been soliciting digitized images from participant museums, but the Art Cameras would photograph each artwork exhaustively, using lasers and sonar to robotically focus thousands of shots. Another camera was sent by trolley to take 360-degree videos of museums—so far not all of them, but “more” are “on the way.” The task seems both self-evidently worthy and doomed to a Borgesian absurdity. It has been undertaken, however, in utter corporate earnestness. It is possible to imagine an exhausted Google, having caught up on everything, decreeing that no further art will be necessary. Meanwhile, in the Cloud, every painting’s doppelgänger circulates.
Before this glut of data, which it is beyond our power to sort, we are like the homesick college freshman typing their parents’ address into Google’s Street View. For lack of any bearings, we navigate by our own faces. We can only hope, flicking through the images, that Google has served up what we ordered. It is an unsettling dependence.
After Google Arts & Culture assigns your matches, you are prompted to, as a final step in the user journey, do what is referred to as sharing, not only over social media but also, if you wish, via email, in which case the app auto-generates a subject line, which is always the same. It takes the form of a question. The question second-guesses the app, undoing its work. The words are put into your mouth. “Does this artwork look like me?” you ask.