We Need Algorithmic Transparency

—Kelly Gates

At a certain point in the twenty-teens we decided to give everything over to the cloud. The very note I’m writing is being saved there periodically as I write, along with more details about me than I could possibly remember.

The cloud can remember, but it can’t remember everything. It contains billions of images of faces, for example, but it can’t always retrieve the correct images when asked. This is partly because the cloud isn’t really a singular brain-like entity, but even if it was, it wouldn’t necessarily have a “photographic memory.” Computational systems rely on photographs to remember faces, and photographs are only fleeting and imperfect materializations of memories. It’s also the case that faces change over time, thanks to aging, experiences, injuries and surgery. They look different in different lighting, at different angles, and when they display different expressions.

Another reason the cloud can’t remember faces perfectly has less to do with differences in the appearance of faces and more to do with the algorithms designed to analyze and retrieve them. Machine learning algorithms have to learn on particular datasets, and those datasets, no matter how large, are never perfect reproductions of the domains they represent. The datasets being used to teach computers how to see faces inevitably contain complex permutations of intentional choices and unconscious blind spots.

Still, we hear a lot about how “accurate” facial recognition technology has been getting lately. In fact, judging by how often we hear these claims, the technology should be at least 300% accurate by now. But the reality is that computational systems are perfectly capable of having unreliable memories. And a lot is at stake in their ability to remember or forget. Institutions are using algorithms to make decisions about the future that they find difficult to make, decisions that can have profound effects on people’s lives. Private companies are supplying risk-assessment products to the criminal courts, designed to predict people’s likelihood of committing future crimes as a way of assisting trial judges with sentencing. A company called Lapetus Solutions Inc., named after the Greek god of mortality, sells a product to life insurance companies that uses “facial analytics” to predict people’s life expectancy. The US Federal Bureau of Investigation has been building its own large-scale facial recognition system, partnering with local law enforcement agencies, without the recommended accuracy testing or privacy-impact assessments.

The lack of transparency surrounding the development of computational systems like these poses a serious challenge to the democratic experiment, especially the essential tenet of legal due process. The lack of transparency is often attributed to how technical data science is: data analytics and machine learning are so complex that even the data scientists themselves are often at a loss to explain the outcomes. But there are other reasons why the development of these technologies is so opaque, reasons that have nothing to do with technical complexity. Today, tech development is almost always treated as either proprietary information or something like a state secret. This combination of technical complexity, proprietary privilege, and secretive state practice is keeping us completely in the dark about the algorithms being developed and applied to our lives.

Our Biometric Future aims to work against this tendency to make the development and functioning of technologies opaque, locked in a black box where only the inputs and outputs matter. Understanding what’s inside the black box of facial recognition matters, I argue, because our faces have a special pride of place in the very weird experience of being human. It’s hard to say who or what we are becoming as we become subjects whose lives are perpetually tracked, analyzed and modulated, but it seems rather important to remember how it all played out.

Kelly A. Gates is Assistant Professor in the Department of Communication and the Science Studies Program at the University of California, San Diego. She is the author of Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance (NYU Press, 2011).


Featured image: Face recognition by Mirko Tobias Schäfer. CC 2.0 via Flickr.
