CLOUD #902 Scale Invariant Feature Transform; Watershed, 2020, wallpaper. © Trevor Paglen. Courtesy of the artist and Metro Pictures, New York 

You Should Probably Be Afraid

Trevor Paglen's new show at the Carnegie Museum of Art in Pittsburgh is a retrospective that examines data ownership, surveillance, and the possibility that we're all fucked.

by Annie Armstrong
Oct 2 2020, 9:30am

We can’t say he didn’t warn us. Trevor Paglen is an artist who has been exploring mass surveillance, human rights, and the relationship between government and technology for decades; his work encompasses a satellite launch, scuba diving expeditions, and up-close-and-personal photography of government agencies such as the NSA. His latest show at Pittsburgh’s Carnegie Museum of Art, “Opposing Geometries,” looks back at many of these works, which have all taken on new life after recent political events: the policing of peaceful protests, debates over social media’s role in elections, and tension over data ownership. GARAGE caught up with the artist just as he got out of lockdown in New York to hear about what our future might hold (spoiler: it’s not too bright).

"It Began as a Military Experiment" (detail), 2017, 10 inkjet prints. © Trevor Paglen. Courtesy of the artist and Metro Pictures, New York

It’s been a huge summer for the discussion of surveillance and facial recognition. What do you make of people’s fears about protesters being surveilled?
It’s extremely well-founded. It’s a straightforward answer: yeah, you should be afraid.

But masks can defeat a lot of facial recognition technology, right? Or am I being hopeful?
You’re being hopeful. There are a couple of things that are tricky about that. First of all, the more cutting-edge [facial recognition] systems can identify you just based on someone’s eye. The newer systems use tens of thousands of key-points; the classic early-to-mid-2000s face detectors will sometimes be beaten by masks, but that certainly isn’t true of the newer systems. To develop countermeasures for facial recognition, you have to think about countering not only the systems that exist today but also the ones that will exist in the future. Because recordings are made. And those recordings can be analyzed retroactively at any time in the future. So it’s a lot more complicated than “Oh, if you put this kind of makeup on…” or whatever.

And contact-tracing is a huge part of the current science too, right? 
The way they do it in China is a pretty good indication of where things are going in general. What they do there is a combination of facial recognition with things like geolocation from cellphone cameras. Then you can pair that with images of license plates, or cars, or certain types of clothes. And so you can put together a more composite fingerprint, as it were, from multiple points of data. They can identify some aspect of you, then put them all together to get more accurate identifications. In many cases, this happens where you can’t see someone’s face at all.

"The Black Canyon Deep Semantic Image Segments," 2020, dye sublimation print. © Trevor Paglen. Courtesy of the artist and Altman Siegel, San Francisco

That’s horrifying. It sounds like you’re pretty in touch with how you think the future is going to look. I’m dying to hear—can you clue me in?
Well, I think that the question we have to ask when looking at technology systems is pretty simple. It’s just: “What forms of power is the system designed to amplify? Whose economic interest is it designed to promote? Whose political interests is it designed to promote? Whose cultural or civic interests?” Because it’s always going to promote someone’s at the expense of somebody else’s. Analyzing technical systems, in terms of what sort of power they wield, and who they empower, is a productive way of thinking about it at this point.

So tell me about Autonomy Cube.
That is a piece that tries to imagine what the internet could look like if it were not the greatest tool of mass surveillance in the history of mankind. Basically, there are protocols built into the way that technology works. Those protocols have politics to them, and they control how communication functions on the internet, and are constructed in such a way that it’s very easy and even encouraged to conduct mass data collection. That comes from decisions people made to make that happen. They could have just as easily built protocols that preserve internet privacy and preserve anonymity. But that’s not the direction that was chosen.

So what Autonomy Cube does is try to make that point by creating an object, I call it an “Impossible Object,” a piece of technology whose logic is the exact opposite of what that technology normally does. It’s a piece that allows you to connect to the internet, but tries to anonymize you by routing your traffic over something called the Tor network, which is a network of computers that sits on top of the internet, and they try to anonymize traffic. It’s a Wi-Fi hot spot that tries to anonymize your data and your location. Simultaneously, it allows people to use the computer installed in the museum to try to anonymize their data and location.

"It Began as a Military Experiment" (detail), 2017, 10 inkjet prints. © Trevor Paglen. Courtesy of the artist and Metro Pictures, New York

How have you seen the internet change in the five years since Autonomy Cube was first shown? 
The piece was made in response to a lot of the Snowden controversy, which was what I was spending a lot of time working on at that point. The realization there was that the whole internet had been commandeered by state intelligence agencies to be a vehicle of mass surveillance. And completely illegally, most of the time. As you work with the internet more, certainly in my case, you realize that is part of it, but also, if you look at the business models of a Google or a Facebook or an Amazon, they are very much aligned with the same kinds of practices you would see from an NSA or GCHQ, for example. And they’re actually way bigger.

So, that piece remains relevant to me, but I think what has changed for me is that it was originally a response to state surveillance, and now I think of it in relation to the internet in general.

I’d love to hear about your process with these photos in the show, like the De Beauvoir. Can you walk me through how that piece got made? 
What is happening in those images is that when you’re using a facial recognition system, the way you train it is to give it pictures of people. Facebook has the best facial recognition on the planet, and the reason is that they have everyone’s faces. One philosophy of facial recognition is that you take a bunch of pictures of someone, then figure out what the salient features of that person’s face are versus others’. Like a facial fingerprint. I started training facial recognition software on the faces of dead philosophers and activists, anti-colonial figures, civil rights leaders. In the studio, we made images not of that person’s face, but of the faceprint that the facial recognition system would generate. It’s like trying to recover the ghost images that are inside the algorithms.

"Karnak, Montezuma Range Haar; Hough Transform; Hough Circles; Watershed," 2018, 3 gelatin silver laser-exposed prints. © Trevor Paglen. Courtesy of the artist and Metro Pictures, New York

So, do you have an opinion then on what should happen with TikTok in the U.S.?
[Laughs] I really don’t. There’s a geopolitical aspect to the internet and to artificial intelligence as well, which has to do with which countries and which entities have what power to collect what kind of data, and then do what with it. There is a Chinese way of doing that, and an American way of doing that. And those are quite different. But what they have in common is forms of mass surveillance.

Trevor Paglen
Carnegie Museum of Art