- A clip of Egyptian protesters' faces being detected was used at Friday's hackathon as an example of a program that SPD might use in the future. (Simon Winder)
You’ve probably heard by now that the Seattle Police Department launched a body cam pilot program this weekend—a big deal in light of President Obama’s body cam initiative after the Ferguson grand jury decision, and an even more contentious development after a grand jury declined to indict the police officer who put Eric Garner in a deadly chokehold, despite video footage of the incident.
Body cams promise to make cops more accountable to the communities they police. But when an anonymous person (who’s since been outed as 24-year-old Timothy Clemans) made a public records request for all body cam footage from SPD, it posed a problem: How will SPD redact sensitive information while making the body cam footage public in a timely manner? That was the focus of a hackathon in the SPD basement last Friday, where a group of roughly 40 technologists, police officers, and city officials came together over coffee and pastries to discuss how the same tools used to protect people on camera could also be used against them.
In order to make body cam footage instantly uploadable, the city has to figure out how to quickly redact sensitive information. If body or dashboard videos capture a juvenile or someone whose safety might be compromised by the release of the footage (consider domestic violence or sexual assault cases), those identities have to be hidden. There’s also the issue of how you might redact medical information if an officer stops someone in the middle of a psychotic break. Identities are made up of more than just blurred faces. Someone’s gait or speech could also reveal their identity.
Outside of how the law currently protects identities on body cam footage, Assistant City Attorney Mary Perry noted that there could be volumes of other personal data a person simply might not want uploaded online.
“There’s a lot of information that you think is private that is really not under the Public Records Act,” Perry told the crowd.
Still, many of the solutions presented at the SPD hackathon focused on facial detection—the ability to track (and blur) a face as it moves across a screen.
“Right now, we follow someone’s face frame-for-frame, 30 frames a second,” Karim Miller, head of the SPD’s video unit, told me, explaining the painstaking process his team follows in order to blur a person’s face. Currently, his team exports roughly 8,000 videos for public disclosure requests a month. Next year, he anticipates that number may even double because of the new body cam program. “It’s unbelievable,” he said. “We can’t hire enough people.”
Right now, body cam footage is available through a public records request. The idea is that once redaction can be automated, SPD will make body cam footage available online for anyone to view. However, when that will happen is unclear.
SPD has already put in an order for a type of software that can track a person’s face. Simon Winder, a former Microsoft Research machine learning specialist and independent consultant, showed how a program he coded could do something similar.
At the hackathon, Winder demonstrated the code on a clip of Egyptian protesters. His program tracked many of the faces on the screen, but it also left some out, flickered, and picked up other noise, too, like windows that maybe looked like faces. Winder used images from family photo albums to teach the code what a face was supposed to look like, but blurry footage and certain positions make the algorithm’s job more difficult. That's why Winder suggested a program that could also teach itself to recognize faces from other data sources, like manually redacted footage.
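To give a sense of what the redaction step actually involves once faces have been found: for each frame of video, the software takes a set of detected face bounding boxes and blurs the pixels inside them. The toy sketch below (not Winder's or SPD's actual code—the article doesn't describe their implementations) illustrates that per-frame step in plain Python, treating a frame as a grayscale grid of pixel values and using a simple box blur; a real system would use a computer-vision library and a learned face detector to produce the boxes.

```python
# Illustrative sketch of per-frame face redaction: given bounding boxes
# (which a face detector would supply), blur those regions of the frame.
# Frames here are 2D lists of grayscale pixel values; "blur" is a box average.

def box_blur_region(frame, x, y, w, h, radius=1):
    """Return a copy of `frame` with the (x, y, w, h) region box-blurred."""
    height, width = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for j in range(y, min(y + h, height)):
        for i in range(x, min(x + w, width)):
            total, count = 0, 0
            # Average each pixel with its neighbors within `radius`.
            for dj in range(-radius, radius + 1):
                for di in range(-radius, radius + 1):
                    nj, ni = j + dj, i + di
                    if 0 <= nj < height and 0 <= ni < width:
                        total += frame[nj][ni]
                        count += 1
            out[j][i] = total // count
    return out

def redact_frame(frame, face_boxes):
    """Blur every detected face box in one video frame."""
    for (x, y, w, h) in face_boxes:
        frame = box_blur_region(frame, x, y, w, h)
    return frame
```

Run over every frame—30 per second, as Miller describes—this is exactly the kind of work that becomes overwhelming to do by hand and attractive to automate, flicker and missed detections included.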
But Winder’s presentation—along with several others—brought up an uncomfortable point. What if police, or even other governments, used this kind of software not to redact, but to track and identify protesters? What’s to stop a government from using this technology against its own citizens?
“If we’re not careful, we’re going to end up with a massive database of our faces and our gaits,” local software developer and open source activist Phil Mocek warned the crowd. “That scares the hell out of me. It’s setting us up for a system that could be abused beyond belief.”
“I suspect that database of gaits and video and faces you’re talking about already exists,” Bill Schrier, Seattle’s former chief technology officer, told Mocek. “And it’s probably already in the hands of Google and Facebook.”
“But they can’t put me in jail,” Mocek replied.
When I caught up with Winder to ask him how he felt about this kind of technology being used to surveil or target citizens, he told me it was something he definitely thought about. “It’s true that whatever you invent, somebody out there can use it for nefarious purposes,” he said.
Later that afternoon, Winder added another thought on body cams: “It’s almost like walking on a tightrope between a dystopian and utopian future.”