Introducing the Measurementality Series on Artificial Intelligence Systems

Defining What Counts in the Algorithmic Age


Professor Alex ‘Sandy’ Pentland speaks with John C. Havens of IEEE SA in the first official video interview for Measurementality, describing how our metrics of success need to include a more holistic set of indicators than they do today. Sandy directs MIT Connection Science and previously helped create and direct the MIT Media Lab and the Media Lab Asia in India. He is also on the executive leadership team of The Council on Extended Intelligence, a joint program between IEEE SA and MIT.

IEEE SA is launching a new content series of podcasts, webinars, and reports focusing on “defining what counts in the Algorithmic Age.” While it’s critical that Artificial Intelligence Systems (AIS) are transparent, responsible, and trustworthy, Measurementality will bring together great minds from the IEEE SA and AIS communities and beyond to explore the deeper issues around what measurements of success we’re optimizing for in the first place.

Currently, development in the world of Artificial Intelligence Systems (AIS) is largely framed around areas of risk: in essence, what we value is how little harm an algorithm or system will do once released. But this valuation does not necessarily align with the market’s focus on how much money an AIS will make. Financial indicators like GDP, combined with pressure to bring AIS to market quickly, are prioritized as the primary metrics of success. Humans and the environment are deprioritized, and harms are largely addressed only after economic and design decisions have already been made. Responsible AIS must value all three areas of people, planet, and profit in unison at the outset of design, as equally represented societal metrics of success, to avoid the greatest harms and increase sustainable flourishing.

Harm is also not just about physical safety. Where issues of bias aren’t dealt with, AIS may offend, disenfranchise, or endanger marginalized populations. For instance, technologists may not have read the Indigenous Protocol and Artificial Intelligence Position Paper, which provides a “starting place for those who want to design and create AI from an ethical position that centers Indigenous concerns.” A designer may be unaware of non-Western ethical traditions like the ones featured in the Classical Ethics in AIS chapter of IEEE’s Ethically Aligned Design. Or an organization may be building a product that will be available in Africa without having read Sabelo Mhlambi’s paper, From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance, which offers key insights into the South African cultural traditions that deeply influence people’s perceptions of family, connection, and technology.

It can be easy to say, “I’m not focused on those things. I’m building my AIS based on what counts to me (or my boss), and I’ve got a deadline to hit.” This is understandable, but unless we know what counts in other people’s lives, we cannot assume that what we build will honor them. And as technologists who value empiricism and specificity, it is time to move beyond using financial or productivity metrics in isolation as the global proxy for the societal success of AIS.

And that’s where the Measurementality series comes in: defining what counts in the Algorithmic Age.

The Measurementality Series

Measurementality is a new content series from the IEEE Standards Association (IEEE SA), created in collaboration with The Radical AI Podcast, focusing on defining what counts in the Algorithmic Age. While it’s critical that Artificial Intelligence Systems (AIS) are transparent, responsible, and trustworthy, Measurementality will explore the deeper issues around what measurements of success we’re optimizing AIS for in the first place.

The Measurementality series has three components:

  • Podcasts: we’re delighted to be collaborating with the hosts and creators of The Radical AI Podcast on this series. Episodes will begin in January 2021 and will be posted on the Radical AI site at the beginning of each month.
  • Webinars: at the end of each month we’ll host a webinar featuring the same guests who appeared on the Radical AI Measurementality podcast. We’ll also invite additional guests working in IEEE SA committees, Standards Working Groups, or other programs, so that we can further explore the themes and issues raised in the podcasts and grow our work with relevance and specificity.
  • Reports: we will share reports featuring answers from podcasts, webinars, and other participants on three key questions:
    • How is success measured today in AIS? What issues (positive and negative) do you see with these measures with an eye towards human welfare, environmental sustainability, responsibility, technology, economics and ethics?
    • What is the positive future you’re working to build with AIS? Answers here can be submitted in short story form as well as prose.
    • What are the measures of success for that positive future? Please be as specific as possible, utilizing sample Indicators or criteria.

But we’d also like to hear from you. Submit your answers today and get your voice heard!

Unconventional Thinking (and Doing)

The co-hosts of The Radical AI Podcast, Dylan and Jess, are two PhD students rooted in values of curiosity, humility, and hope for a better world. They began interviewing dozens of scholars in the responsible tech community about a concept they coined, “radical AI”: what it means to them, and what unconventional work might look like in AI. Their goal with the project became to resist harmful technologies, and the ways they are created today, through community building and collective storytelling, and to continue to co-define the concept. The Radical AI Podcast and its community are still actively exploring what this work really means in the AI field.

For IEEE SA, we serve as a convener around critical ideas related to technology. When The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems gathered more than seven hundred global thinkers to create Ethically Aligned Design, many of the ideas that stemmed from that work may have seemed unconventional or “radical,” such as prioritizing values-based design in systems engineering. But the multidisciplinary process of bringing together volunteers from the US, EU, UK, China, Japan, Korea, Australia, India, and many other countries meant that one person’s definition of a word or concept could be completely different from someone else’s.

So while it may seem simple, actually listening to other people, with curiosity and a commitment to consensus, can be one of the most unconventional things anyone can do.

So with the help of Jess and Dylan, our global experts and interviewees, and all of you, that’s exactly what we’re going to do. Start by listening to your own answer to the question: “What counts most in your life?”

Sign Up for the Series and Let Us Know Your Answer
