‘Ethics Is the New Green’

Q&A with Konstantinos Karachalios, IEEE Standards Association managing director

What is your vision for how technology will benefit humanity?

“Advancing Technology for Humanity” is the IEEE tagline, and the word “for” is extremely important for us. It implies that technology must be created with purpose, rather than by default. Many times, technology is created simply because it can be, without asking whether it should be. Nothing is inherently wrong with creating technology because it could provide a certain group of people value or turn a sound profit; however, to truly benefit all humanity rather than one small sector, you must design your technology with a wider lens, asking, “Is this product, service or system something that will holistically increase the wellbeing of the individuals and communities where it is placed?”

To what extent did ethical considerations in design come to bear in other mainstream technologies (such as automobiles or internet communications)?

To be clear, ethical considerations have always been a part of engineering design. Our work is a “yes, and” effort, building on codes of ethics like the ones IEEE, ACM and most other professional organizations have had for years. By design, however, professional codes of ethics mirror the core values of an organization and give employees instructional guidance about their behavior. On top of such codes, organizations certainly have rigid safety and compliance standards in place to try to guarantee the wellbeing of employees as well as of the users of the products they make.

One example of a difficulty that sometimes arises is when a product, service or system is being created and an engineer or programmer discovers something she thinks may be an issue that hasn’t been accounted for. By raising these concerns she risks being labeled a “whistleblower.” At that point, she can either pursue the matter, perhaps through legal action, at the cost of her job and career, or she can stay quiet at the risk of the product hurting people. This is an area we are trying to address in our work.

More fundamentally, though, there are technology fields where speed to market, in the form of “quick and dirty,” seems to be more important than any other consideration. This leads to “beta versions” being offered to consumers without a rigorous process to account for negative consequences that are foreseeable in principle, for instance regarding privacy or children’s safety. By contrast, and although failures may always happen, a civil engineer would never build a bridge as fast as possible without thorough anticipatory studies, hoping it may last and expecting passengers to find and report construction errors.

If ethically driven methodologies, such as values-driven design, are rigorously applied during the design process, there is a much better chance of considering the ethical issues of end users, and of preventing situations that may negatively affect safety, before a product or service is put into production. This can greatly decrease risk and cost for a company while increasing innovation, by uncovering end-user values that competitors may not have taken the time to study. In this light, we say, “Ethics is the new green.” People and their values are what we are working to improve, in the sense of sustainability, comparable to the way the planet became a key focus 10 years ago.
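As a minimal sketch of what this might look like in practice (the class names and checks below are our own hypothetical illustration, not part of any IEEE standard or methodology), end-user values can be recorded as explicit requirements that gate release exactly the way safety and compliance checks do:

```python
from dataclasses import dataclass, field

@dataclass
class ValueRequirement:
    """One end-user value captured as a first-class design requirement."""
    value: str           # e.g. "privacy" or "child safety"
    criterion: str       # how the team will judge the value has been honored
    satisfied: bool = False

@dataclass
class DesignReview:
    product: str
    requirements: list = field(default_factory=list)

    def unmet(self) -> list:
        return [r for r in self.requirements if not r.satisfied]

    def ready_for_production(self) -> bool:
        # Value requirements gate release just like safety or compliance checks.
        return not self.unmet()

review = DesignReview("smart speaker", [
    ValueRequirement("privacy", "audio is processed on-device by default"),
    ValueRequirement("transparency", "user can view and delete stored interactions"),
])
assert not review.ready_for_production()  # blocks release until every value is addressed
```

The point of the sketch is simply that values become reviewable artifacts of the design process rather than afterthoughts.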

Why do ethical considerations in design demand a different approach with regard to Artificial Intelligence and Autonomous Systems (AI/AS) than they did in these other areas of technology development?

The process by which algorithms track our behavior is often hidden by design. The opaque and self-learning nature of AI/AS does not necessarily denote any negative behavior. But because personalization is supposed to appear somewhat magical to users, they can’t be fully cognizant of why certain choices they have made affect the new messaging directed toward them. This makes transparency and accountability of paramount importance for any product, service or system, in order to ensure users can consent to the technology they use in a way they feel genuinely reflects their personal choices, values and identity. In this regard, ethical considerations at the front end of the design process become essential for avoiding unintended consequences for users of AI/AS, many of which have not yet manifested themselves to manufacturers.
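One concrete way to picture such transparency (a hypothetical sketch; the `recommend` function and its toy scoring rule are our own illustration, not any vendor’s API) is to return every personalized decision together with a human-readable record of the signals behind it:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    item_id: str
    # The signals that drove this choice, kept alongside the result so a user
    # (or an auditor) can see why the system acted as it did.
    because_of: list
    timestamp: str

def recommend(user_history: list) -> Recommendation:
    # Toy scoring rule standing in for an opaque model: most recent interest wins.
    signal = user_history[-1]
    return Recommendation(
        item_id=f"more-like-{signal}",
        because_of=[f"you recently viewed '{signal}'"],
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

rec = recommend(["hiking boots", "trail maps"])
print(rec.item_id, "->", "; ".join(rec.because_of))
```

However the underlying model works, carrying the rationale with the result gives users something they can actually consent to.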

Why are standards part of this discussion?

Globally open and transparent, bottom-up, consensus-based standardization processes allow multiple stakeholders to come together and create a roadmap of sorts that helps people and organizations navigate complex situations. Such standards typically also provide deep specificity regarding technological deployment and use, which gives engineers and other technologists the details they need to actually begin conceiving and building complex systems. They are also a communications tool, inasmuch as they often introduce a new concept or paradigm to users. And in countries where common law operates, standards often give society clear and direct instructions on a subject before laws even exist in these new areas. These are some of the many reasons that standards are so important, especially with regard to AI/AS development, in which many organizations are only beginning to grasp the vast ethical ramifications of fully or partially autonomous technology.

Why is IEEE involved in this area?

IEEE is the world’s largest professional organization for the advancement of technology, with almost half a million members in 190 countries. We fully understand the need to provide cutting-edge expertise and thinking on the ethical issues driving the creation of modern technology. That is why, beyond our specific work in AI and ethics, our board recently appointed a new committee to aggregate and amplify the many diverse and comprehensive global projects being done in ethics throughout the organization. Ethics, writ large, is a new pillar for IEEE, as we realize that a rigorous, applied understanding of end-user values must sit at the heart of any technology creation, in order to avoid unintended consequences and foster sustainable innovation for the algorithmic age.

Who is driving the conversation? Are AI/AS companies interested in formalizing ethical considerations in design? Are they under pressure from government/consumer/regulatory groups?

Many organizations, such as the Partnership on AI, are addressing these types of issues. The recent announcement of a $27M research fund backed by the Omidyar Network is also a sign that AI/AS companies realize the need to prioritize ethical considerations in their work. In addition, like any new technology, AI/AS is under scrutiny from governmental agencies and regulators, for instance regarding safety and the effects of automation on employment.

However, because of their global nature and inherent moral complexity, neither an industry consortium nor any single government can solve the issues at stake here. To gain the trust of citizens and consumers, there is clearly a need for globally open and transparent consensus-building platforms. In such an environment it is our hope that, by prioritizing holistic human wellbeing in AI/AS, we can widen the lens of what progress means as a society, intentionally trying to benefit all citizens equally rather than focusing on exponential growth or profits. This is a benefit IEEE enjoys because of the inclusive nature of our work: we bring together academics, scientists, engineers, civil society and policy actors. We can work to build consensus about what will create a flourishing society in a holistic manner, versus promoting the interests of one particular company, sector or nation.

What is the danger of not formalizing ethical considerations in design for AI/AS now?

The emergence of “unintended consequences” may be vastly accelerated by AI/AS technologies. When an algorithm is created to be self-learning in some way, programmers or manufacturers may not always fully understand the implicit biases or actions a technology imbued with these tools may take. This lack of foreknowledge is not historically unique, by any means; the damage to the ozone layer and the global warming caused by fossil-fuel combustion are well-known examples. However, alongside the opaque nature of how algorithms work in and of themselves is the question of how they will behave when they come into contact with myriad other algorithms, with the personal data of the individuals they are designed to affect, or with other sensitive systems, such as those for detecting and responding to alleged nuclear attacks. The manufacturer or programmer may have no negative intent in what they have created; typically, in fact, it is quite the opposite. Nonetheless, it is impossible to account for how an individual user’s experiences or values will affect how they interpret the actions of a device driven by these multiple algorithms. And, again, all of these interactions are hidden from the user, so they also cannot ask, for example, “Should I not get in an autonomous car based on how it may react in an accident?”
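To make the self-learning concern concrete, here is a deliberately toy simulation (our own hypothetical illustration, not drawn from any real system) of a recommender that can only learn from what it already shows. A negligible initial skew hardens into a permanent bias, with no negative intent anywhere in the code:

```python
import random

random.seed(0)

# Two topics that users actually like equally; the model starts with a tiny skew.
belief = {"A": 0.51, "B": 0.49}

for _ in range(1000):
    shown = max(belief, key=belief.get)  # always show what the model already favors
    if random.random() < 0.5:            # users click either topic about half the time...
        belief[shown] += 0.01            # ...but only the shown topic can ever be reinforced

print(belief)  # topic A's 1% head start has snowballed; topic B is never shown again
```

Nothing here is malicious; the unintended consequence falls out of the feedback loop itself, which is exactly why it is hard for a manufacturer to foresee.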

While it can be easy for people to say these types of ethical considerations should not be formalized now because they may “hinder innovation,” our position is that values-driven design for AI/AS provides a way to evolve and support innovation in the algorithmic era. If there are literally dozens of algorithms tracking individuals every hour, producing proxy analyses of their values to help drive sales, why would we not also harness these extraordinary technologies to provably align people’s values with the AI- and AS-oriented products and services they will use? We have the opportunity to maximize trust via trusted data exchanges between manufacturers and users. The danger of not prioritizing these ethical considerations is that we risk fully shutting out end users if we assume we can get all the information we need by only tracking their actions, rather than giving them ways to provide their consent and subjective opinions before they interact with devices they may not understand.
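As a sketch of what such a consent-first exchange might look like (the `ConsentProfile` and `Device` classes are hypothetical illustrations, not a published specification), a device could check the user’s explicitly stated purposes before collecting anything, instead of inferring everything from tracked behavior afterward:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentProfile:
    """Purposes the user has explicitly opted into, stated up front."""
    allowed: set = field(default_factory=set)

class Device:
    def __init__(self, consent: ConsentProfile):
        self.consent = consent

    def collect(self, data: str, purpose: str) -> dict:
        # The user's stated consent gates collection before it happens.
        if purpose not in self.consent.allowed:
            raise PermissionError(f"no consent given for purpose: {purpose}")
        return {"data": data, "purpose": purpose}

car = Device(ConsentProfile(allowed={"navigation"}))
car.collect("current location", "navigation")      # permitted by the user
# car.collect("current location", "ad targeting")  # would raise PermissionError
```

The design choice is simply to make consent an enforced precondition of data flow rather than a checkbox reviewed after the fact.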

Why will this conversation be particularly meaningful for the SXSW audience?

SXSW audiences are savvy, cutting-edge experts from their organizations, looking to understand not just the flash-in-the-pan tech that may be getting headlines but also the long-term trends that will affect their stakeholders and audiences. It is the must-attend conference for anyone serious about technology and the socially oriented norms driving its adoption.

What are you hoping to learn during your time at SXSW?

We want to learn how we can best spread the message of ethically aligned technology in the age of AI/AS. We understand these ideas are new to many people, which means we also have to begin working with organizations to implement education and training programs that suffuse these ideals into the heart of every organization on the planet. We are at the beginning of creating what we feel will be a renaissance for both technology and humanity via the prioritization of individual and societal values. In this way, we move forward into the future not by mandating morals in any way but by carefully thinking ahead about which elements of our humanity we wish to imbue into the machines we will be living ever closer to every day.

Ethically Aligned Design: Setting Standards for AI is included in the IEEE Tech for Humanity Series at the annual SXSW Conference and Festival, March 10-19, 2017. In this session, Konstantinos Karachalios, managing director of the IEEE Standards Association, will provide insight on The Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Along with Kay Firth-Butterfield, John C. Havens and Derek Jinks, he will discuss the initiative’s standards-setting process and current recommendations.
