Every day we interact with tens, if not hundreds, of machines. Machines wake us, make our coffee, entertain us, get us to work and back, and lull us to sleep. We count on machines to bring value to our lives, but how confident should we be in the values instilled in the machines we value the most?
Without evoking tales of nefariously spying microwaves, it is certain that most consumers are unaware of how machines are imbued with programs and processes that either complement or confound our human values. The black box of creating ethical machines, or of stipulating a machine ethic, has long been a musty corner of philosophical speculation. But recent work by the IEEE is bringing the processes of values-based judgments to the area of machine design in a public, accessible way.
In the autumn of 2016, the IEEE's Global Initiative launched version one of "Ethically Aligned Design" (EAD). This path-breaking document, under revision during the summer of 2017, outlines an approach to the design of machines with artificial intelligence and autonomous systems components that is focused on human flourishing and well-being. The goal of EAD is neatly summarized in its executive summary: "by aligning the creation of AI/AS [artificial intelligence/autonomous systems] with the values of its users and society we can prioritize the increase of human wellbeing as our metric for progress in the algorithmic age" (IEEE 2016, p. 2).
Although the Global Initiative's many international drafting-committee members include a number of philosophers, the purpose of EAD is not chiefly philosophical. The purpose of the document is to "ensure every technologist is educated, trained, and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems" (p. 4). Technical, manufacturing, and technician-oriented processes for baking ethics into machine design are outlined in the P7000 series of standards documents.
In P7000, the "Model Process for Addressing Ethical Concerns During System Design" (an ongoing standards development project sponsored by IEEE's Software & Systems Engineering Standards Committee), the value-sensitive design methodology is described in detail. Value-sensitive design incorporates intuitive and empirical, qualitative and quantitative, approaches to reasoning through the design of software, algorithms, and hardware systems. There are four stages of value-sensitive design: value discovery, value conceptualization, empirical value investigation, and technical value investigation (Spiekermann 2015, p. 168).
Machine design practitioners are familiar with some forms of the value discovery process. Cost-benefit analysis and harm-benefit identification are key components of almost all design practice. According to the value-sensitive approach taken in the IEEE documents, the chief value against which costs and benefits should be judged is human well-being. Well-being is a multidimensional concept that incorporates both standard measures, like GDP, and newer, qualitative measures, like the Better Life Index or the United Nations Sustainable Development Goals.
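To make the idea of a multidimensional well-being measure concrete, here is a minimal sketch of a composite score that normalizes heterogeneous indicators onto a common scale and combines them with weights. The indicator names, ranges, and weights below are invented for illustration; neither EAD nor the indices it cites prescribe these values.

```python
# Hypothetical composite well-being score: normalize each raw
# indicator onto [0, 1], then take a weighted sum. All names,
# ranges, and weights here are illustrative assumptions only.

def normalize(value, low, high):
    """Rescale a raw indicator value onto the [0, 1] interval."""
    return (value - low) / (high - low)

def wellbeing_score(indicators, weights, ranges):
    """Weighted sum of normalized indicators (result in [0, 1])."""
    total = 0.0
    for name, raw in indicators.items():
        low, high = ranges[name]
        total += weights[name] * normalize(raw, low, high)
    return total

# Illustrative inputs: one economic and one qualitative indicator.
indicators = {"gdp_per_capita": 45_000, "life_satisfaction": 7.2}
ranges = {"gdp_per_capita": (0, 100_000), "life_satisfaction": (0, 10)}
weights = {"gdp_per_capita": 0.4, "life_satisfaction": 0.6}

score = wellbeing_score(indicators, weights, ranges)
```

The point of the sketch is only that "well-being" can be operationalized as a single comparable metric while still drawing on several kinds of measures, which is what lets it serve as the benchmark value in a design process.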
Identifying the values that should be incorporated into value-sensitive machines is the second stage of the design process. When grappling with value concepts, technical experts and stakeholders work consultatively to address the tensions between technological specifications and goals, and between the constituent components being integrated.
The value-sensitive design methodology is more than a set of rules for hosting a hypothetical philosophical debate. The third stage of the method involves empirical investigation of stakeholders' positions on design propositions or uses. Asking stakeholders what they want from future designs can slow product development unless it is paired with the fourth stage, technical value investigation. In technical value investigation, design teams weigh competing design specifications, working to identify the design that best satisfies the well-being requirements identified in the previous three stages.
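One simple way to picture technical value investigation is as a weighted comparison of candidate designs against stakeholder-derived value criteria. The sketch below is an assumption-laden illustration, not a procedure mandated by P7000: the criteria, weights, and scores are invented, and in practice the scores would come from the empirical and conceptual stages rather than being asserted.

```python
# Hypothetical sketch of technical value investigation: rank
# candidate designs by a weighted score over value criteria.
# Criteria, weights, and per-design scores are invented here.

CRITERIA_WEIGHTS = {"privacy": 0.5, "accessibility": 0.3, "transparency": 0.2}

# Scores in [0, 1] for each candidate, one per criterion
# (in practice, derived from stakeholder consultation).
candidate_designs = {
    "design_a": {"privacy": 0.9, "accessibility": 0.6, "transparency": 0.7},
    "design_b": {"privacy": 0.6, "accessibility": 0.9, "transparency": 0.8},
}

def design_score(scores):
    """Weighted sum of a design's criterion scores."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

# Select the design that best satisfies the weighted criteria.
best = max(candidate_designs, key=lambda d: design_score(candidate_designs[d]))
```

Even in this toy form, the exercise shows why the fourth stage matters: it forces trade-offs among values to be made explicit and comparable rather than left implicit in engineering judgment.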
Creating machines with values baked into them is a laudable goal, but is it one that designers and the companies they work for should aim for? What is the design or product output from so much ethical input? New books and articles appear daily that offer forecasts of an intelligent-systems-driven future. Each holds out a vision of artificial agents working with, serving under, or lording over humans. Each of these visions invites consideration of ethical problems, many of which are left to the imagination of the readers.
What the value-sensitive design process, the Global Initiative for Ethically Aligned Design, and the various programs in the IEEE TechEthics section hold out is a way for design professionals, technical experts, and manufacturers to answer the open questions of ethics in machine design for themselves first. By following the recommendations contained in the EAD and those in the forthcoming IEEE P7000 series, these stakeholders at the front end of the design process can make clear to consumers, regulators, and future collaborators how human flourishing and well-being are baked into ethical machine design.
ABOUT THE AUTHOR:
Sara R Jordan is an Assistant Professor in the Center for Public Administration and Policy at Virginia Tech. Her research touches on issues of ethics in public policy, particularly areas of high technology policy and research policy. Her previous work appears in Accountability in Research, Public Performance and Management Review, and Administration & Society. She is currently working on issues surrounding ethical participation in e-government and public policy responses to high-technology innovations, such as autonomous vehicles and integrated biospecimen repositories.