Trusting Machines Requires Transparency


Pamela Pavliscak is a committee member of the IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems, a visiting assistant professor at Pratt Institute’s School of Information, and founder of the insights and innovation firm Change Sciences. Her work focuses on emotion and identity in the context of people’s personal experiences with algorithms. In this Q&A, she explores some of the key considerations and challenges for algorithmic decision making, which will be discussed further in a complimentary webinar hosted by IEEE on 7 December.

Question: What is algorithmic decision making?

Pavliscak: An algorithm is a set of instructions used by machines to solve problems or make predictions. Algorithmic decision making is behind technology we use every day. Some of it we are keenly aware of using, like social media feeds, while other systems run behind the scenes in everything from financial systems to modern courtrooms to healthcare networks. One simple example we all know is algorithms that decide what to show us based on our past behaviors, like your Spotify Discover Weekly recommendations. Others stem from what other people do, like Amazon purchase recommendations. Still others are a mix of both of these approaches or something else entirely. One thing is clear: algorithmic decision making touches every aspect of our lives—whether we are consciously using the latest, greatest technology or it’s embedded in the underpinnings of daily existence.
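The two recommendation styles mentioned here can be sketched in a few lines. This is a minimal, purely illustrative example with made-up data (the users, genres, and function names are all hypothetical): a content-based recommender scores candidates against one user’s own history, while a collaborative one scores them by what other users did.

```python
from collections import Counter

# Hypothetical play history: user -> genres they have listened to.
HISTORY = {
    "ana":  ["jazz", "jazz", "folk"],
    "ben":  ["jazz", "rock"],
    "cara": ["rock", "rock", "pop"],
}

def content_based(user, candidates):
    """Recommend the candidate this user played most (their own past behavior)."""
    counts = Counter(HISTORY[user])
    return max(candidates, key=lambda genre: counts[genre])

def collaborative(user, candidates):
    """Recommend the candidate other users played most (what other people do)."""
    counts = Counter(
        genre for u, plays in HISTORY.items() if u != user for genre in plays
    )
    return max(candidates, key=lambda genre: counts[genre])
```

For the same user and candidates, the two approaches can disagree: `content_based("ana", ["jazz", "rock"])` follows Ana’s own jazz-heavy history, while `collaborative("ana", ["jazz", "rock"])` follows the rock-heavy behavior of everyone else.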

Question: What are some of the inherent challenges with algorithmic decision making?

Pavliscak: When a decision is made by a machine, it carries a sense of objectivity, but we know machines are anything but objective. Inaccuracies, inconsistencies, and incomplete training data can set algorithmic decision making off course. Often, it’s more than that. Bias embedded in the data itself, in the selection of the dataset, or in the way the algorithm operates on the data is a growing challenge. Large datasets can reproduce existing prejudice. We’ve seen that in contexts ranging from Google’s image search results to predictive policing to various human resources applications.
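How a dataset reproduces existing prejudice can be shown with a toy model. This sketch uses entirely hypothetical hiring records in which equally qualified candidates from group "b" were historically rejected; a naive model that simply predicts the majority past outcome for similar cases learns that bias rather than correcting it.

```python
from collections import Counter

# Hypothetical historical records: (group, qualified, was_hired).
# Past human decisions were biased: qualified candidates from group "b"
# were rejected anyway.
PAST = [
    ("a", True, True), ("a", True, True), ("a", False, False),
    ("b", True, False), ("b", True, False), ("b", False, False),
]

def predict(group, qualified):
    """Naive model: predict the most common past outcome for similar cases."""
    outcomes = [hired for g, q, hired in PAST if g == group and q == qualified]
    return Counter(outcomes).most_common(1)[0][0]
```

Here `predict("a", True)` returns `True` while `predict("b", True)` returns `False`: the model faithfully learns the prejudice baked into its training data, even though qualification is identical.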

Complicating matters further, we don’t all agree on the values, ethics, or social factors that should shape algorithmic decision making. For now, economic goals play a much larger role than other types of societal goals around education, environment, health, or community. Different goals mean different ways of defining successful outcomes, and a successful outcome for business is not always in sync with successful outcomes for individuals, communities, or society as a whole.

So, the way forward has to involve all the people creating technology powered by algorithms, including engineers, inventors, developers, and designers. But it also needs to tap the expertise of lawmakers, corporate decision makers, and governing agencies. Most crucial of all is to facilitate the meaningful participation of end users—the humans encountering algorithmic decision making in more and more aspects of their everyday lives.

Question: What’s needed to address those challenges?

Pavliscak: If you talk about algorithmic decision making, you are likely to hear a lot about transparency. That’s because right now, algorithmic decision making is a black box. It’s opaque to the end user and sometimes to its creators. Whether because it’s a trade secret like Facebook’s feed or because it’s learning based on training data, how the algorithm works isn’t well known or easily discernible. We may not want to see all the messy code beneath, but giving people ways to understand how an algorithm works, interpret its effects, and control both its inputs and outputs is closer to what we mean by transparency.
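One concrete form that kind of transparency can take is returning an explanation alongside a decision. This is a hypothetical sketch, not any real system: a simple linear loan-scoring model (the weights, threshold, and feature names are invented for illustration) that reports each input’s contribution to the final score, so a person can see why the output came out the way it did.

```python
# Hypothetical linear scoring model for a loan decision.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def decide_and_explain(applicant):
    """Return the decision plus each feature's contribution to the score,
    making both the inputs and the reasoning visible to the end user."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions
```

For an applicant like `{"income": 3, "debt": 1, "years_employed": 2}`, the function returns an approval along with a breakdown showing that debt pulled the score down while income pushed it up—exactly the kind of interpretable output a black-box score alone does not provide.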

Ethical guidelines are just as crucial. In just the past year, we’ve seen some of the big tech companies trying to band together and create their own guidelines. Governing bodies, international initiatives, and professional associations are all entering the conversation. The IEEE Internet Initiative provides a collaborative platform to connect technologists and policymakers to help advance solutions and inform global technology policymaking in the areas of internet governance, cybersecurity, privacy, and inclusion. Through the Collabratec Internet Technology Policy (ITP) Community, we’re developing a set of best practices that anyone can use to understand the core issues and improve algorithmic decision making.

Question: When will those best practices be available?

Pavliscak: The first step is a related white paper, currently in progress, that discusses challenges and proposes options for addressing them. The paper will include terminology and definitions that are understandable to the lay public yet rigorously accurate for technologists. We’re working to publish the paper late in 2017, and it will be available at http://internetinitiative.ieee.org/resources.
