About the Activity
In an era characterized by the rapid evolution of autonomous and intelligent systems (AIS) and their integration into critical infrastructure and societal functions, including the proliferation of large language models and the advent of generative AIS applications, the global discourse is increasingly focused on balancing the vast potential benefits of AIS against their inherent risks. This underscores an urgent need to establish stringent standards that uphold scientific integrity and prioritize public safety. Central to this conversation is the imperative that AIS technologies be developed with a foundational commitment to sound ethical considerations and a demonstrated safety-first principle.
While the concept of safety is well established in the domain of reliability engineering, its application and interpretation within the complex landscape of sociotechnical systems are frequently subject to misappropriation and misunderstanding. This is a challenge not only for the engineers who design these systems but also for the policymakers who regulate them, and for the articulation of normative objectives, confidence-building measures, and verification tools.
The IEEE has long been a pioneer in promoting ethics and safety by design in the development of AIS. Although the initiative has run for over eight years, the ever-expanding technological landscape, the diversification of actors involved, and the broadening spectrum of use cases amplify IEEE’s role rather than diminish it: the relevance and impact of IEEE’s work have never been more significant. The body of work emanating from the initiative is increasingly integrated into crucial policy processes worldwide, signifying the initiative’s growing impact. The approach chosen for the next iteration of this global initiative is therefore dedicated to leveraging this exceptional global platform and the substantive work undertaken to date, while recognizing the evolving characteristics of the technology, the stakeholder dynamics, and their societal and political impacts. Addressing the critical issue of the role of AIS and generative AI technologies and models in the dissemination of disinformation and misinformation, and their impact on our social and political fabric and on trust at global, regional, and local levels, demands our immediate and thoughtful attention. As reliance on AIS grows and regulatory tools evolve, sometimes in contradictory ways, IEEE’s role in promoting scientific integrity, technical excellence, openness, international collaboration, and enriched discourse in the realm of AIS development and application becomes increasingly indispensable.
This renewed iteration of the Global Initiative, agreed upon by the Executive Committee during their 2023 deliberations, will produce a “Beyond Risk Framing in AI Governance” report to inspire a new paradigm for AI governance, one that shifts from merely mitigating risks to proactively embedding a “Safety First Principle” and “Safety by Design” into AIS design and lifecycle assessments, as well as into the development of generative AI models from the outset. This paradigm shift challenges the risk-centric framing that currently dominates AI governance.
Goals of the Activity
Building upon foundational work, the “Global Initiative 2.0” aims to leverage the existing body of work while advancing new workstreams. These efforts are designed to actively guide and inform public discourse on current autonomous and intelligent systems and related governance initiatives, including generative AI models and associated standards. The initiative will explore key areas with the goal of both informing and, in some respects, reshaping the discourse on AIS. It adopts a proactive and constructive conformity approach, rooted in technical insights and scientific integrity. The initiative’s focus areas, which are subject to updates as the IC progresses, include but are not limited to:
- Standards: Initiating an exploration into an additional family of standards, specifically targeting generative AI technologies and large language models. This exploratory effort will be led by a dedicated IC Global Initiative subgroup.
- Toolkits: Amplifying efforts to develop “Toolkits” for various industrial and governmental sectors. These toolkits aim to provide leadership and practical guidance, aligning with the IEEE-SA’s conformity and impact assessments and certification efforts. They will promote a “do no harm” philosophy that underpins engineering excellence and practical ethics.
- AI Safety Champions: Establishing a dedicated community of “AI Safety Champions”, honoring and building on the extensive ecosystem involved in the Global Initiative to date. This community will serve as a comprehensive resource, offering best practices, sharing knowledge, and contributing subject matter expertise and thought leadership within and beyond the IEEE ecosystem.
- IEEE 7000 Series Awareness Campaign: Revitalize and demonstrate the cross-uses of the IEEE 7000 Series (and related certification tools), both as guidelines and as sources providing deep insights into issues related to ontological processes and other domains.
Getting Involved
Who Should Get Involved
Global Initiative stakeholder communities include anyone involved in the research, design, manufacture, decision-making, regulatory efforts, or messaging around intelligent and autonomous systems, including universities, organizations, governments, and corporations making these technologies a reality for society or opining on their impact on societies, political systems, public safety, and international security.
How to Get Involved
To learn more about the program and how to join The Global Initiative on Ethics of Autonomous and Intelligent Systems activity, please express your interest by completing The Global Initiative on Ethics of Autonomous and Intelligent Systems interest form.