The Perpetual Line-Up: Interview With Clare Garvie

In October, the Center on Privacy and Technology at Georgetown Law released The Perpetual Line-Up, a comprehensive report on the use of facial recognition technology by law enforcement across America. The report is based on a year-long investigation that included over 100 records requests to police departments; it reveals a tremendous level of unregulated surveillance affecting half of all American adults. Art Keller spoke with lead author Clare Garvie via Skype. The interview has been condensed.

Art Keller: How long did it take to compile the report? And tell us a little bit about the research process.

Clare Garvie: I’ve been working on this report for about a year now. I started back last October, taking a look at whether state and local law enforcement agencies are using face recognition. We know a fair amount about the FBI’s use of face recognition, and about the Next Generation Identification System that state and local law enforcement agencies can actually use. But very little is known about how state and local agencies themselves have set up and started using face recognition technology.

So, back in January of 2016, we sent about a hundred records requests to various state and local law enforcement agencies. We targeted the agencies that we could identify as having trialed or piloted face recognition programs, and also the 50 largest law enforcement agencies across the country. We sent about a hundred of these out, received close to 17,000 pages of records in response, and then the process became digging through those records to identify what we were actually looking at: what types of systems were out there, how many agencies could use them, how frequently they were used, how they were set up, and what policies, what controls, were in place.

AK: What practices by law enforcement did you identify as the issues of greatest concern?

CG: In the report, we actually separate face recognition out into different types of uses based on risk. We do not think it’s a monolith; we think there are uses that carry greater and lesser risks. The greater-risk deployments can be classified in a couple of ways. One is what database the systems are running against, meaning what identified photos they are using to try to identify the unknown individual; the other is who they are actually being used on.

So in the first category, we see a higher-risk system that runs on driver’s license records as opposed to just mug shots. One in two American adults is in a face recognition database that is used or accessed by law enforcement, by virtue of the fact that they have received a driver’s license in one of 28 states now across the country. What this means is, we have civilian databases, major biometric databases that people are enrolled in for purely civilian purposes, now being used for law enforcement purposes. This is completely unprecedented. If you’ve never committed a crime, you’re, generally speaking, not in a fingerprint database that is used to identify criminal suspects. You’re certainly not in a DNA database. And yet, all of a sudden, one in two American adults is in a biometric face database that can be accessed by law enforcement. So we consider that to be higher risk.

The other types of deployments that are high risk are what we consider real-time face recognition systems. These are systems where face recognition is used in conjunction with live-feed video to scan the faces of a crowd, or to search any individual walking by a video camera against a database, perhaps of individuals with outstanding warrants or suspected gang members. But basically what’s happening is that this type of face recognition is running searches at a distance and in secret, so someone may not know that the search is actually taking place. And we do know that these types of systems are actually being deployed: we see this in Los Angeles; Chicago has requested it; the Dallas Area Rapid Transit is looking at setting up a system; it sounds like New York is setting up a system; and West Virginia has a real-time system. So this is very much a reality today.

AK: What do you see as the chief peril of this? A lot of people are kind of inured to technology being used in new ways and don’t necessarily get why this would be problematic.

CG: Well, one very overarching issue here is that there are very few controls on the use of the technology. So even if someone is by and large OK with how face recognition is being used today, there’s nothing that says it won’t be used in a very different or even more pervasive manner in the future. There are no laws that comprehensively address face recognition. There are a few that touch on certain issues or very specific applications, such as its use in conjunction with body cameras or drone footage, or requirements for certain records to be destroyed. But by and large, there’s actually nothing telling police departments what they absolutely cannot do with face recognition, either now or in the future. So we have a system where it’s left up to the departments employing the technology to decide how to use it and whether to place any controls on it. And what we found is that a lot of agencies are not placing many controls.

AK: Talk a bit about concerns with the accuracy or lack thereof, as this face recognition technology is deployed.

CG: Sure. This is a biometric identification system, but relative to other biometric identification systems, such as fingerprints or DNA, face recognition is not very accurate. It’s a new technology and it’s certainly getting better, but it’s not very good yet. On top of that, accuracy rates are not distributed equally across race, gender, and age: face recognition systems perform worse on African-Americans, on women, and on people under 30. The FBI ran tests of its system a couple years back and found that it performed at an accuracy rate of about 86%. The FBI’s system is designed to return a list of between two and 50 possible candidates. It’s not designed to say, “This is your guy,” or “This is your woman,” or “We did not find anyone.” It’s going to give you a possible list. So what the 86% means is that in six out of seven searches, the suspect the FBI is actually looking for is in the list of, let’s say, 50 possible candidates. One out of seven times, the system returns a list of completely innocent candidates, even though the suspect is actually in the database.
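
[Editor’s note: To make the candidate-list mechanics concrete, here is a minimal sketch of a top-k face search, assuming faces have already been reduced to fixed-length embedding vectors compared by cosine similarity. The gallery size, dimensions, and noise model are hypothetical stand-ins, not the FBI’s actual system; what the sketch shares with the systems described above is that a candidate list comes back whether or not the right person is enrolled.]

```python
import numpy as np

def top_k_candidates(probe, gallery, k=50):
    """Return indices of the k gallery faces most similar to the probe.

    A list of k candidates comes back no matter what; the system never
    answers "no match found."
    """
    # Normalize so dot products equal cosine similarities.
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe
    return np.argsort(scores)[::-1][:k]

# Hypothetical 128-dimensional embeddings for a 10,000-person gallery.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(10_000, 128))

# A probe standing in for a new photo of person 42: their enrolled
# embedding plus noise representing pose, lighting, and camera differences.
probe = gallery[42] + rng.normal(scale=0.8, size=128)

candidates = top_k_candidates(probe, gallery, k=50)
# A rank-50 accuracy measure, like the FBI's 86% figure, counts the search
# a "hit" if the true identity appears anywhere in the 50-candidate list.
print(42 in candidates)
```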

To add to that, face recognition gets less accurate the larger the database is. And accuracy rates also drop drastically when face recognition is used on what are called faces in the wild: think surveillance camera footage, cell phone videos, or stills taken from them. These photos are often poor quality, and the face might be turned a little bit away from the camera, so the accuracy of these systems falls off sharply.
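
[Editor’s note: A back-of-the-envelope way to see the database-size effect: at a fixed similarity threshold, every enrolled photo is one more chance for a false match, so the expected number of innocent people crossing the threshold grows roughly linearly with gallery size. The per-comparison false match rate below is an assumed figure for illustration only, not a measured property of any deployed system.]

```python
# Assumed per-comparison false match rate; illustrative only.
FALSE_MATCH_RATE = 0.0001  # 1 in 10,000 comparisons

# Roughly: a local mug shot system, a state license database,
# and a multi-state license database.
for gallery_size in (10_000, 1_000_000, 30_000_000):
    expected_false_hits = gallery_size * FALSE_MATCH_RATE
    print(f"{gallery_size:>11,} enrolled photos -> "
          f"~{expected_false_hits:,.0f} false matches per search")
```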

AK: The fact that it gives you matches regardless of whether any of them is close is definitely troubling. The report says that the backup to these face recognition systems is human review, but that has its own issues, and let me quote a line from the report: “A recent study showed that without specialized training, human users make the wrong decision about a match half of the time.”

I can say that I work with one of the large three-letter federal agencies, and I’ve had classroom training and hours of hands-on practice trying to manually match photo IDs to live faces, and it’s not easy. There can be a huge amount of variation between the picture and the person, and some states, like Arizona, have driver’s licenses that are valid for decades, so you can find yourself trying to match someone to a photo taken 25 years ago. And that’s just trying to match a person standing in front of you with their photo ID. If it’s that hard for me to match one person to one photo, how well are the human backups going to do? They’re supposed to evaluate this line-up once it’s given to them, without even being aware that perhaps they were handed a line-up in which no real match was found.

CG: We found that eight systems have trained examiners, but of those eight, we could only verify the training requirements of two agencies: the FBI and the Michigan State Police. For the rest, we were not provided with any details on what “trained reviewers” or “trained analysts” actually meant, or what that training entailed. And for the vast majority of agencies, the individual taking a look at the candidate list and making the ultimate decision as to which of those individuals, if any, is the correct person is by and large untrained. They’re regular officers who are definitely making a good-faith effort, but humans are actually pretty bad at this task.

There have been a number of studies done on this. Because we perform this task on a day-to-day basis (we look at somebody and say, “Oh, that’s our friend so-and-so”), we like to think we’re pretty good at it. We actually aren’t. If you’re untrained, you get this right about 50 percent of the time. If you’re highly trained, that jumps up to about 70 percent. So even though we recommend specific training, and that a specific group of people perform this end analysis, this final human review of what the machine has done, it is not a perfect science.

We actually have an example of a catastrophic failure, if you will, of human review, and that’s the case of Steven Talley out of Colorado. This was a gentleman who was accused of two bank robberies. For the second bank robbery, police sent the photo captured during the robbery to the analysis unit that does facial comparison for the FBI, and the FBI returned a result saying it was a match, that Steven Talley was the man caught on the surveillance camera robbing the bank. They got it flat wrong. So these were experts taking a look at two photos, and they did make a mistake.

And his life was pretty much destroyed. These were very real consequences. He lost his job, he lost his home, his wife left him, he lost visitation rights to his family. These are very, very real consequences of being accused of armed bank robbery.

AK: Wow. I don’t know what, if any, avenue someone like that would have for redress, but that’s the problem: even if you are innocent, at the very least, you’re going to have huge legal fees trying to prove your own innocence, trying to prove a negative, “No, I wasn’t at that bank.”

So the report gives model legislation suggesting how lawmakers might regulate police application of face recognition technology, and it also has a model face-recognition-use policy for law enforcement agencies. Can you tell us some of the key provisions in each of those?

CG: Sure. So the model bill very much represents conclusions that we drew, based on the different applications of face recognition and the different risks that come with each application. We believe that, at a very minimum, there should be individualized suspicion of criminal conduct, and this should be reasonable suspicion, when face recognition is used in lower-risk applications. What we mean by lower risk are applications where the individual knows that the search is taking place, or the individual has already been stopped or suspected of a crime by police, so this isn’t a stop-and-identify type of situation. This is also when the face recognition system is running against mug shot databases, against individuals who have already been apprehended by law enforcement or have already had some type of interaction with law enforcement.

For searches against driver’s license photos, because this is a civilian database, we believe that legislatures have to approve this type of use before law enforcement begins it. So, for the states that are already permitting this type of use, we believe there should be a moratorium until the legislature can actually say, “Yes, we as a community want our license photos to be used in such a way.”

If they will be used, we also believe that there should be probable cause and a court order or a warrant, except in true emergencies and in identity theft and fraud-type cases, where the crime you’re looking for is specifically tied to applying for and getting a driver’s license.


We believe that after-the-fact searches, such as when surveillance camera footage is run through face recognition to identify the individuals in the footage (a search that is invisible to the person being searched), should be limited to serious offenses. And the reason for this limitation is to avoid searches done on pretext: searches done on people for jaywalking or for obstructing a sidewalk, where the real purpose is to identify people who are instrumental in forming a demonstration or who continually show up at certain protests that law enforcement wants to shut down.

For real-time applications of face recognition, which we believe to be the most aggressive form of face recognition, use should very much be limited to public emergencies. This should be under a court order backed by probable cause. And we think that the court order should specify the time and place for these searches to take place. It should specify the list of individuals against which the searches are going to be run, and identify how individuals get on that list. And law enforcement has to show that it has exhausted all other means.

We think there should be complete prohibitions on certain places and certain groups, so it can’t be used specifically or solely on the basis of race or religion. And it shouldn’t be used where it starts chilling people from partaking in constitutionally protected activities, or activities that are essential to public health, such as going to a hospital or going to school.

The model policy is very much based on the policies that we were provided with by law enforcement. Very few of its provisions were made out of whole cloth; most already existed in the policies we were provided. And it lays out what we believe to be very common-sense approaches to what law enforcement agencies can do in the absence of legislation to make sure that they’re using face recognition in a way that does not chill free speech and does not unduly burden certain groups of people. It also has a provision that discusses what should be required in terms of trained human review. So these are very common-sense steps that police departments themselves can take to ensure that face recognition is used in a way that’s transparent, productive, and limited to genuine law enforcement need.

AK: What feedback or interest have you been getting from lawmakers at any level and law enforcement on the model legislation and the model use policy?

CG: We have received a fair amount of response from various law enforcement agencies. These tend to be law enforcement agencies that were forthcoming with us from the beginning, that did express an interest in working with us to make sure that their systems are deployed in a productive, transparent manner. And those responses have by and large been positive.

When I first reached out to them, a lot of law enforcement agencies expressed interest in learning what other agencies are actually doing in terms of deploying face recognition. So we have had agencies express appreciation for this insight into the broader picture of how face recognition is being deployed.

From a legislative perspective, we are a non-profit, so we don’t lobby, though we are very much trying to provide the tools necessary for legislatures, but also for lobbyists and other advocacy organizations, to press forward with the model policies and the model legislation. So I guess it’s still pretty early, but our hope is that we really do start seeing legislatures build on the few laws that are already in existence and start developing more comprehensive coverage for this type of technology.

AK: A lot of the information for this report came from Freedom of Information Act requests. You were the lead FOIA researcher. Can you tell us a little bit about how the FOIA process works?

CG: That’s right. So the Freedom of Information Act itself is just a federal law that allows individuals to reach out to federal agencies and request specific documents from them. Every state, however, has its own version; these are often called records request laws or sunshine laws. They allow citizens, either from across the country or from that particular state, to request specific documents from state agencies if someone’s interested in learning how their tax dollars are spent, or how their police departments are going about policing, or what types of companies get contracts in their hometown. So these are very, very powerful tools for citizens to ensure accountability and transparency, at the federal level but also at the state and local level.

AK: What kind of roadblocks did you run into?

CG: We did have a number of agencies, particularly the larger law enforcement agencies in the larger cities, which we suspect have more aggressive deployments of face recognition (the LAPD, the New York Police Department, the Chicago Police, the usual suspects if you will), that were very reluctant to provide us with information. The next step with those is to file an appeal, and if the appeal is denied, the step after that is to actually sue the agency; we haven’t gotten quite there yet. For the most part, though, the agencies I spoke to on the phone were making a very good-faith effort to comply with my requests.

AK: That’s pretty encouraging; the response rate is much higher than I would have suspected. Do you think you will update The Perpetual Line-Up once those appeals are finally exhausted?

CG: Our hope is that we do continue to update The Perpetual Line-Up, both the information on the website and the interactive map there.

AK: I have to say, The Perpetual Line-Up does a great job of highlighting the concerns about unregulated use of government photo databases to put people into virtual line-ups, but unfortunately that is only half the story now. Although it’s probably not how the public thinks of them, social media sites like Facebook, Instagram, and Twitter are de facto commercial repositories of pictures. The ACLU recently revealed that a social media monitoring company, Geofeedia, was selling law enforcement agencies access to efficiently search Twitter, Facebook, Instagram, et cetera. After that was revealed, the social media companies cut Geofeedia’s access to that data. What I suspect is that very few average users posting personal photos with family and friends do so with the expectation that the police will be able to use those personal pics to do the same thing discussed in The Perpetual Line-Up, which is to say, use them to monitor potential criminal activity. Do you have any thoughts on that use, and on it being addressed in a separate or future report?

CG: That’s right. So you mentioned the Geofeedia use by police. That was in Baltimore, following the death of Freddie Gray in custody. What happened was, people who were protesting Freddie Gray’s death were posting photos and videos to social media, and Geofeedia was collecting that information by geotag, by geographic location, and submitting it to the Baltimore police. The police were then using face recognition to search these photos and videos to determine who was in them, and to identify, locate, and in some cases arrest the individuals that appeared in these photos, based on who they were or the activities they were engaged in.

While Facebook and a handful of other social media companies have their own face recognition systems (when you upload a photo to Facebook, it might suggest you tag your friend, so Facebook already knows who’s in the photo), the database that Facebook has established is not accessible to law enforcement at this point in time. That would be the database side of face recognition. What Geofeedia and the Baltimore police were doing was using the photos and videos uploaded to social media as the probe photos, the photos of unidentified individuals, which they then ran against law enforcement databases, in the case of Maryland against the state’s driver’s license photos, to conduct the identification.
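
[Editor’s note: A rough structural sketch of the two sides Garvie distinguishes: the probe side (unidentified faces pulled from geotagged posts) and the database side (identified photos enrolled for a civilian purpose). Every function and type here is hypothetical scaffolding, not Geofeedia’s or any police system’s actual API.]

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Photo:
    pixels: bytes
    geotag: Optional[tuple] = None  # (latitude, longitude) if present

def collect_posts_near(location, radius_km):
    """Stand-in for a Geofeedia-style geographic sweep of public posts."""
    return []  # would return Photo objects scraped near the location

def search_gallery(probe, gallery, k=50):
    """Stand-in for a face search against an enrolled, identified gallery,
    e.g. a state's driver's license photos in the Baltimore case."""
    return gallery[:k]  # would return the k most similar enrolled records

# Probe side: unidentified faces posted from a protest location
# (coordinates here are downtown Baltimore).
probes = collect_posts_near(location=(39.29, -76.61), radius_km=1.0)

# Database side: photos enrolled for a purely civilian purpose.
license_gallery = []

for probe in probes:
    candidates = search_gallery(probe, license_gallery)
    # Each candidate list would then go to a human reviewer, with all the
    # accuracy caveats discussed earlier in the interview.
```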

There are so many new technological capabilities in terms of tracking and surveillance, and there’s very much a synergy between them. Geofeedia is a social media monitoring tool that can be used in conjunction with face recognition to create an incredibly powerful tool that can do almost real-time monitoring and identification of people engaged in a demonstration, engaged in First-Amendment-protected activity. And I think we’re going to see this increase in the future, this synergy between different technological capabilities that, when combined, create a much more powerful surveillance technique. Think face recognition and body cams, or face recognition and drone footage. The possibilities at this point are endless. And going back to one of the huge concerns the report points out, there are very few restrictions on this use of face recognition in conjunction with other surveillance techniques at this point.

AK: NPR just ran a story about some researchers at Carnegie Mellon who developed a tortoise-shell pattern that you can stick over glasses to spoof facial recognition software. Have you heard of any other groups or individuals who are thinking about pushing back in that way?

CG: As the technology becomes better known to the public, I think we are going to continue to see efforts to ensure that individuals have options available to them to try to evade this type of surveillance. When we were writing the report, we very much came up against the fact that, as it currently stands, a lot of states have what are called anti-masking laws on the books, which essentially say you cannot walk around in public with a mask on. And I think there is going to be a lot of resistance. Hopefully when the public becomes more aware of this technology, or in a worst-case scenario, if the technology becomes more and more pervasive, which we think it might, there is going to be a pushback, a desire by the public to be able to obscure their faces in some way.

The Supreme Court has recognized that the right to anonymous speech, the right to anonymity in public, is covered under the First Amendment in some capacity. Given this new technology, it’s just a question of what is possible from an individual standpoint to make sure that that anonymity remains a reality.

AK: Clare, thank you so much for your time.
