AIS Standards

IEEE portfolio of AIS technology and impact standards and standards projects

IEEE P1872.2™ - Standard for Autonomous Robotics (AuR) Ontology

This standard is a logical extension to IEEE 1872-2015™ Standard for Ontologies for Robotics and Automation. The standard extends the CORA ontology by defining additional ontologies appropriate for Autonomous Robotics (AuR) relating to:

  1. The core design patterns specific to AuR in common R&A sub-domains;
  2. General ontological concepts and domain-specific axioms for AuR; and
  3. General use cases and/or case studies for AuR.

IEEE 1589-2020™ - Standard for Augmented Reality Learning Experience Model

Augmented Reality (AR) promises significant boosts in operational efficiency by making information available, in context and in real time, to employees who need task support. To support such implementations of AR training systems, this document proposes an overarching integrated conceptual model that describes the interactions between the physical world, the user, and digital information, the context for AR-assisted learning, and other parameters of the environment.

IEEE 2089™-2021 - Standard for Age Appropriate Digital Services Framework - Based on the 5Rights Principles for Children

This standard provides a methodology for establishing a framework for digital services whose end users are children, and in doing so tailors those services so that they are age appropriate. This is essential to creating a digital environment that offers children safety by design and delivery, privacy by design, autonomy by design, and health by design. The standard provides a set of guidelines and best practices and thereby offers a level of validation for service design decisions.

P2040/P2040.1™ - Taxonomy and Definitions for Connected and Automated Vehicles

This standard specifies the taxonomy and definitions for connected and automated vehicles.

IEEE P2247.1™ - Standard for the Classification of Adaptive Instructional Systems

This standard defines and classifies the components and functionality of adaptive instructional systems (AIS). It defines parameters used to describe AIS and establishes requirements and guidance for the use and measurement of these parameters.

IEEE P2247.2™ - Interoperability Standards for Adaptive Instructional Systems (AISs)

This standard defines interactions and exchanges among the components of adaptive instructional systems (AISs). It defines the data and data structures used in these interactions and exchanges, as well as the parameters used to describe and measure them, and establishes requirements and guidance for the use and measurement of the data, data structures, and parameters.

IEEE P2247.3™ - Recommended Practices for Evaluation of Adaptive Instructional Systems

This recommended practice defines and classifies methods of evaluating adaptive instructional systems (AIS) and establishes guidance for the use of these methods. It incorporates and promotes the principles of ethically aligned design for the use of artificial intelligence (AI) in AIS.

P2418.4™ - Standard for the Framework of Distributed Ledger Technology (DLT) Use in Connected and Autonomous Vehicles (CAVs)

This standard provides a common framework for distributed ledger technology (DLT) usage, implementation, and interaction in connected and autonomous vehicles (CAVs). The framework addresses scalability, security and privacy challenges with regard to DLT in CAVs. DLT tokens, smart contracts, transactions, assets, networks, permissioned CAVs DLT, and permission-less CAVs DLT are included in the framework.
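
The property that makes DLT attractive here is tamper-evident linkage of records across untrusting parties. The following Python sketch is only a toy append-only hash chain for a single node, not an implementation of the P2418.4 framework; the vehicle identifier and event fields are illustrative assumptions.

```python
# Minimal sketch of an append-only, hash-chained ledger. Real CAV DLT networks
# add consensus, smart contracts, tokens, and permissioning, which are out of
# scope for this toy example.
import hashlib
import json
import time


def _hash_block(block: dict) -> str:
    # Deterministic hash over the block's canonical JSON encoding.
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


class SimpleLedger:
    def __init__(self) -> None:
        genesis = {"index": 0, "timestamp": 0.0, "data": "genesis", "prev_hash": ""}
        self.chain = [genesis]

    def append(self, data: dict) -> dict:
        prev = self.chain[-1]
        block = {
            "index": prev["index"] + 1,
            "timestamp": time.time(),
            "data": data,
            "prev_hash": _hash_block(prev),
        }
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        # Each block must reference the hash of its predecessor.
        return all(
            self.chain[i]["prev_hash"] == _hash_block(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )


ledger = SimpleLedger()
ledger.append({"vehicle_id": "CAV-42", "event": "firmware_update", "version": "1.2.0"})
assert ledger.verify()
```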

IEEE P2660.1™ - Recommended Practices on Industrial Agents: Integration of Software Agents and Low Level Automation Functions

This recommended practice describes how to integrate and deploy Multi-agent Systems (MAS) technology in industrial environments to build an intelligent decision-making layer on top of legacy industrial control platforms. The integration of software agents with low-level real-time control systems, mainly Programmable Logic Controllers (PLCs) running IEC 61131-3™ control programs (forming in this manner a new component known as an industrial agent), is also identified. In addition, the integration of software agents with control applications based on the IEC 61499™ standard or executed on embedded controllers is described.

This recommended practice supports engineers and helps them leverage best practices for developing industrial agents for specific automation control problems and given application fields. To that end, corresponding rules, guidelines, and design patterns are provided.
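
To make the "agent on top of a legacy controller" pattern concrete, here is a minimal, hypothetical Python sketch. The `PlcInterface`, the tag names, and the `CoolingAgent` logic are illustrative assumptions, not anything defined by IEEE P2660.1.

```python
# Minimal sketch of an "industrial agent": a software agent layered on top of
# a low-level controller. PlcInterface is a hypothetical stand-in for a real
# PLC connection (e.g., to an IEC 61131-3 runtime via its communication API).
from dataclasses import dataclass


@dataclass
class PlcInterface:
    """Hypothetical low-level interface exposing named tags of a controller."""
    tags: dict

    def read_tag(self, name: str) -> float:
        return self.tags[name]

    def write_tag(self, name: str, value: float) -> None:
        self.tags[name] = value


class CoolingAgent:
    """Decision-making layer that supervises a temperature control loop."""

    def __init__(self, plc: PlcInterface, limit: float) -> None:
        self.plc = plc
        self.limit = limit

    def step(self) -> str:
        temperature = self.plc.read_tag("temperature")
        if temperature > self.limit:
            # Delegate actuation to the PLC's real-time loop; the agent only decides.
            self.plc.write_tag("fan_setpoint", 100.0)
            return "cooling"
        return "idle"


plc = PlcInterface(tags={"temperature": 82.5, "fan_setpoint": 0.0})
agent = CoolingAgent(plc, limit=75.0)
print(agent.step())  # -> "cooling"
```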

IEEE P2671™ - Standard for General Requirements of Online Detection Based on Machine Vision in Intelligent Manufacturing

This standard specifies the general requirements of online detection based on machine vision, including requirements for data format, data transmission processes, definition of application scenarios, and performance metrics for evaluating the effect of online detection deployment.

IEEE P2672™ - Guide for General Requirements of Mass Customization

This guide provides the definitions, terminologies, operation procedures, system architectures, key technological requirements, data requirements and applications of and related to user-oriented mass customization. This guide provides reference information to be used by manufacturing enterprises for designing and implementing business models of mass customization.

IEEE P2751™ - 3D Map Data Representation for Robotics and Automation

This standard extends the IEEE 1873-2015™ Standard for Robot Map Data Representation from two-dimensional (2D) maps to three-dimensional (3D) maps. The standard develops a common representation and encoding for 3D map data to be used in applications requiring robot operation, such as navigation and manipulation, in all domains (space, air, ground/surface, underwater, and underground). The standard encoding is intended for exchanging map data between robot systems, while allowing robot systems to use their own private internal representations for efficient map data processing. The standard places no constraints on where map data comes from or on how maps are constructed.
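
The separation between an internal representation and an exchange encoding can be illustrated with a minimal sketch. The field names and the JSON encoding below are illustrative assumptions, not the encoding defined by IEEE P2751.

```python
# Minimal sketch of a 3D map exchanged between robot systems as a list of
# occupied voxel indices plus a resolution.
import json


class VoxelMap3D:
    def __init__(self, resolution: float) -> None:
        self.resolution = resolution          # edge length of one voxel, in meters
        self.occupied = set()                 # internal set of (i, j, k) voxel indices

    def mark_occupied(self, x: float, y: float, z: float) -> None:
        # Internal representation: quantize a metric point to a voxel index.
        key = tuple(int(c // self.resolution) for c in (x, y, z))
        self.occupied.add(key)

    def to_exchange_format(self) -> str:
        # Exchange representation: plain JSON, independent of the internal set.
        return json.dumps({
            "resolution": self.resolution,
            "occupied_voxels": sorted(self.occupied),
        })

    @classmethod
    def from_exchange_format(cls, payload: str) -> "VoxelMap3D":
        data = json.loads(payload)
        m = cls(data["resolution"])
        m.occupied = {tuple(v) for v in data["occupied_voxels"]}
        return m


m = VoxelMap3D(resolution=0.1)
m.mark_occupied(1.23, -0.05, 0.42)
roundtrip = VoxelMap3D.from_exchange_format(m.to_exchange_format())
assert roundtrip.occupied == m.occupied
```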

IEEE P2801™ - Recommended Practice for the Quality Management of Datasets for Medical Artificial Intelligence

This recommended practice identifies best practices for establishing a quality management system for datasets used for artificial intelligence medical devices. It covers a full cycle of dataset management, including items such as but not limited to data collection, transfer, utilization, storage, maintenance and update.
This recommended practice recommends a list of critical factors that impact the quality of datasets, such as but not limited to data sources, data quality, annotation, privacy protection, personnel qualification/training/evaluation, tools, equipment, environment, process control and documentation.

IEEE P2802™ - Standard for the Performance and Safety Evaluation of Artificial Intelligence Based Medical Device: Terminology

This standard establishes terminology used in artificial intelligence medical devices, including definitions of fundamental concepts and methodology that describe the safety, effectiveness, risks and quality management of artificial intelligence medical devices.

It provides definitions in forms such as, but not limited to, literal descriptions, equations, tables, figures, and legends.

The standard also establishes a vocabulary for the development of future standards for artificial intelligence medical devices.

IEEE P2807™ - Framework of Knowledge Graphs

This standard defines the framework of knowledge graphs (KGs). The framework describes the input requirements of KGs; the KG construction process, i.e., extraction, storage, fusion, and understanding; performance metrics; applications of KGs and verticals; KG-related artificial intelligence (AI) technologies; and other required digital infrastructure.
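
As a rough illustration of the extraction, storage, and retrieval steps named above, here is a minimal Python sketch over subject-predicate-object triples. The toy extraction rule and the example sentences are illustrative assumptions, not part of the P2807 framework.

```python
# Minimal sketch of a knowledge graph pipeline: extract triples, store them,
# query them by subject.
from collections import defaultdict


def extract_triples(sentence: str) -> list:
    # Toy extraction rule: "<subject> is a <object>" becomes an "is_a" triple.
    parts = sentence.rstrip(".").split(" is a ")
    if len(parts) == 2:
        return [(parts[0].strip(), "is_a", parts[1].strip())]
    return []


class TripleStore:
    def __init__(self) -> None:
        self.by_subject = defaultdict(list)

    def add(self, triple: tuple) -> None:
        self.by_subject[triple[0]].append(triple)

    def query(self, subject: str) -> list:
        return self.by_subject.get(subject, [])


store = TripleStore()
for sentence in ["A knowledge graph is a graph of entities.",
                 "IEEE P2807 is a standards project."]:
    for triple in extract_triples(sentence):
        store.add(triple)

print(store.query("IEEE P2807"))  # [('IEEE P2807', 'is_a', 'standards project')]
```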

IEEE P2807.1™ - Standard for Technical Requirements and Evaluation of Knowledge Graphs

This standard defines technical requirements, performance metrics, evaluation criteria and test cases for knowledge graphs. The mandatory test cases include data input, metadata, data extraction, data fusion, data storage and retrieval, inference and analysis, and knowledge graph display.

IEEE P2807.2™ - Guide for Application of Knowledge Graphs for Financial Services

This guide provides guidelines for the application of knowledge graphs for financial services. It specifies the technical framework, workflows, implementation guidelines, and application scenarios of financial knowledge graphs.

IEEE P2807.4™ - Guide for Scientific Knowledge Graphs

This guideline for Scientific Knowledge Graphs (SKG) specifies: 1) Data scope, including the actors such as authors or organizations, the documents such as journal or conference publications, and the research knowledge such as research topics or technologies; 2) SKG construction process, including knowledge acquisition, knowledge fusion, knowledge representation, or knowledge inference of scientific knowledge; 3) Applications, including academic service, intelligence mining, or scholar analysis.

IEEE P2817™ - Guide for Verification of Autonomous Systems

The purpose of this Guide is to identify existing best practices and provide instruction sets that define valid verification processes for a range of autonomous system configurations. These best practices apply from the lowest level components and software to the highest level learning or decision making elements (specifically including verification of the inputs to any learning algorithms, such as training data). The guidelines are intended to include both robots and immobots, singly and in groups, focusing primarily on systems that can operate autonomously rather than on automated or supervised robots. They may also be applicable to systems that do not directly interact with the external world (e.g. intelligence networks).

IEEE P2830™ - Standard for Technical Framework and Requirements of Shared Machine Learning

This standard defines a framework and architectures for machine learning in which a model is trained using encrypted data that has been aggregated from multiple sources and is processed by a trusted third party. It specifies functional components, workflows, security requirements, technical requirements, and protocols.

IEEE P2840™ - Standard for Responsible AI Licensing

The standard describes specifications for the factors that shall be considered in the development of a Responsible Artificial Intelligence (AI) license. Possible elements of the specification include (but are not limited to): (1) what a ‘Responsible AI License’ means and what its aims are; (2) standardized definitions for referring to components, features, and other such elements of AI software, source code, and services; (3) standardized references to geography-specific AI/technology legislation and laws (such as the EU General Data Protection Regulation, GDPR), as well as identification of violation detection, penalties, and legal remedies; and (4) domain-specific considerations that may be applied in developing a responsible AI license. The proposed standard shall not require the use of any specific legal text or clauses, nor shall it offer legal advice.

IEEE P2841™ - Framework and Process for Deep Learning Evaluation

This document defines best practices for developing and implementing deep learning algorithms and defines a framework and criteria for evaluating algorithm reliability and quality of the resulting software systems.

IEEE P2842™ - Recommended Practice for Secure Multi-party Computation

This recommended practice provides a technical framework for Secure Multi-Party Computation, including an overview of Secure Multi-Party Computation, a technical framework, security levels, and use cases based on Secure Multi-Party Computation.
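
One common building block of Secure Multi-Party Computation is additive secret sharing, sketched below in Python: private inputs are split into random shares so that a sum can be computed without any single party seeing the inputs. The modulus and number of parties are illustrative assumptions, not parameters from IEEE P2842.

```python
# Minimal sketch of additive secret sharing over a finite field, used to sum
# private inputs without revealing them to any single party.
import secrets

MODULUS = 2**61 - 1  # a large prime; all arithmetic is done modulo this value


def share(value: int, n_parties: int) -> list:
    # Split `value` into n random shares that sum to it modulo MODULUS.
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares


def reconstruct(shares: list) -> int:
    return sum(shares) % MODULUS


# Each party holds one share of every input; shares are summed locally.
inputs = [12, 30, 7]                       # private values of three data owners
per_party = [share(v, 3) for v in inputs]  # one row of shares per input
local_sums = [sum(col) % MODULUS for col in zip(*per_party)]
assert reconstruct(local_sums) == sum(inputs)
```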

IEEE P2863™ - Recommended Practice for Organizational Governance of Artificial Intelligence

This recommended practice specifies governance criteria such as safety, transparency, accountability, responsibility and minimizing bias, and process steps for effective implementation, performance auditing, training and compliance in the development or use of artificial intelligence within organizations.

IEEE P2894™ - Guide for an Architectural Framework for Explainable Artificial Intelligence

This guide specifies an architectural framework that facilitates the adoption of explainable artificial intelligence (XAI). This guide defines an architectural framework and application guidelines for XAI, including: 1) description and definition of explainable AI, 2) the categories of explainable AI techniques, 3) the application scenarios for which explainable AI techniques are needed, and 4) performance evaluations of XAI in real application systems.
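
One widely used category of post-hoc XAI techniques is permutation feature importance, sketched below: the drop in accuracy when a feature is shuffled indicates how much the model relies on it. The data and the trivial model are illustrative assumptions; this is not a technique mandated by IEEE P2894.

```python
# Minimal sketch of permutation feature importance with NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)  # only feature 0 matters

# A trivial "model": predict from the sign of feature 0.
predict = lambda data: (data[:, 0] > 0).astype(int)
baseline = np.mean(predict(X) == y)

for j in range(X.shape[1]):
    X_perm = X.copy()
    rng.shuffle(X_perm[:, j])                 # destroy the information in feature j
    drop = baseline - np.mean(predict(X_perm) == y)
    print(f"feature {j}: importance ~ {drop:.3f}")
```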

P2959™ - Standard for Technical Requirements of Standard-Oriented Knowledge Graphs

This document specifies data and schema requirements for knowledge graphs constructed from published standards so that they are machine readable and can be processed automatically. A knowledge graph construction process and performance metrics are specified. Application scenarios are also described.

P2976™ - Standard for XAI – eXplainable Artificial Intelligence - for Achieving Clarity and Interoperability of AI Systems Design

This standard defines mandatory and optional requirements and constraints that need to be satisfied for an AI method, algorithm, application, or system to be recognized as explainable. Both partially explainable and fully or strongly explainable methods, algorithms, and systems are defined. XML Schemas are also defined.

P2986™ - Recommended Practice for Privacy and Security for Federated Machine Learning

This document provides recommended practices related to privacy and security for Federated Machine Learning, including security and privacy principles, defense mechanisms against non-malicious failures and examples of adversarial attacks on a Federated Machine Learning system. This document also defines an assessment framework to determine the effectiveness of a given defense mechanism under various settings.
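
One defense mechanism often discussed for federated learning is clipping a client's model update and adding Gaussian noise before it leaves the device, a differential-privacy-style mitigation. The sketch below is illustrative only; the clip norm and noise scale are assumptions, not values recommended by IEEE P2986.

```python
# Minimal sketch of privatizing a client update before aggregation.
import numpy as np


def privatize_update(update: np.ndarray, clip_norm: float, noise_std: float,
                     rng: np.random.Generator) -> np.ndarray:
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # bound the contribution
    return clipped + rng.normal(0.0, noise_std, size=update.shape)


rng = np.random.default_rng(42)
raw_update = rng.normal(size=10)            # stand-in for a local gradient/weight delta
safe_update = privatize_update(raw_update, clip_norm=1.0, noise_std=0.1, rng=rng)
print(np.linalg.norm(raw_update), np.linalg.norm(safe_update))
```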

P3110™ - Standard for Computer Vision (CV) - Algorithms, Application Programming Interfaces (API), and Technical Requirements for Deep Learning Framework

This standard establishes the application programming interface (API) model for computer vision systems and specifies the functional and technical requirements of the API between the computer vision algorithm, the deep-learning framework, and the data set during the algorithm training phase. This standard is suitable for the adaptation and invocation of computer vision algorithms using deep learning frameworks.
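
To illustrate the idea of an API boundary between algorithm, dataset, and training driver, here is a minimal Python sketch. The interface and method names are illustrative assumptions, not the API defined by IEEE P3110.

```python
# Minimal sketch of an API boundary for computer vision training.
from abc import ABC, abstractmethod
from typing import Any, Iterable, Tuple


class VisionDataset(ABC):
    @abstractmethod
    def batches(self, batch_size: int) -> Iterable[Tuple[Any, Any]]:
        """Yield (images, labels) batches in a framework-agnostic format."""


class VisionAlgorithm(ABC):
    @abstractmethod
    def train_step(self, images: Any, labels: Any) -> float:
        """Run one optimization step and return the training loss."""


def train(algorithm: VisionAlgorithm, dataset: VisionDataset,
          epochs: int, batch_size: int) -> None:
    # The driver depends only on the two interfaces above, so the algorithm
    # and the dataset can be swapped independently.
    for epoch in range(epochs):
        losses = [algorithm.train_step(x, y) for x, y in dataset.batches(batch_size)]
        print(f"epoch {epoch}: mean loss {sum(losses) / max(len(losses), 1):.4f}")
```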

P3119™ - Standard for the Procurement of Artificial Intelligence and Automated Decision Systems

This standard establishes a uniform set of definitions and a process model for the procurement of Artificial Intelligence (AI) and Automated Decision Systems (ADS) by which government entities can address socio-technical and responsible innovation considerations to serve the public interest. The process requirements include a framing of procurement from an IEEE Ethically Aligned Design (EAD) foundation and a participatory approach that redefines traditional stages of procurement as: problem definition, planning, solicitation, critical evaluation of technology solutions (e.g. Impact assessments), and contract execution. The scope of the standard not only addresses the procurement of AI in general, but also government in-house development and hybrid public-private development of AI and ADS as an extension of internal government procurement practices.

P3123™ - Standard for Artificial Intelligence and Machine Learning (AI/ML) Terminology and Data Formats

The standard defines specific terminology used in artificial intelligence and machine learning (AI/ML) and provides clear definitions for relevant AI/ML terms. Furthermore, the standard defines requirements for data formats.

P3127™ - Guide for an Architectural Framework for Blockchain-based Federated Machine Learning

This guide specifies an architectural framework and application guidelines for Blockchain based Federated Machine Learning, including: 1) a description and a definition of Blockchain-based Federated Machine Learning, 2) the types of Federated Machine Learning for Blockchain-based Federated Machine Learning, 3) application scenarios for each type, 4) a definition of the levels of competency for blockchain based federated learning and guidelines for certifying these systems, 5) Security and privacy requirements of blockchain based federated learning, and 6) performance evaluations of Blockchain-based Federated Machine Learning in real application systems.

P3142™ - Recommended Practice on Distributed Training and Inference for Large-scale Deep Learning Models

This recommended practice specifies principles, approaches, and key performance indicators for distributed training and inference of large-scale deep learning models.
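
The most common distributed training pattern is data parallelism: each worker computes a gradient on its own shard and the gradients are averaged before the shared parameters are updated. The sketch below simulates this in-process with NumPy; the linear model and the number of workers are illustrative assumptions.

```python
# Minimal sketch of data-parallel training (simulated workers, shared weights).
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(1024, 2))
y = X @ true_w + 0.01 * rng.normal(size=1024)

n_workers, lr, w = 4, 0.1, np.zeros(2)
shards = np.array_split(np.arange(len(X)), n_workers)   # one data shard per worker

for step in range(200):
    grads = []
    for shard in shards:                                  # in reality: one process/device each
        Xi, yi = X[shard], y[shard]
        grads.append(2.0 * Xi.T @ (Xi @ w - yi) / len(shard))
    w -= lr * np.mean(grads, axis=0)                      # all-reduce (average), then update

print(w)  # approaches [2.0, -1.0]
```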

P3152™ - Standard for the Description of the Natural or Artificial Character of Intelligent Communicators

This standard describes recognizable audio and visual marks to assist with the identification of communicating entities as machine intelligence or human being to facilitate transparency, understanding, and trust during online, telephone, or other electronic interactions. Interventions to discern whether an interaction is with a machine or not (such as a ‘Turing Test’) are not within the scope of this standard. This standard is concerned only about the declaration of the nature of an interaction.

P3154™ - Recommended Practice for the Application of Knowledge Graphs for Talent Services

This recommended practice defines the architectural framework and application practices for knowledge graphs in the field of talent services, including a) description of the construction process and workflow for knowledge graphs for talent services, b) definition of the technical procedure for each part of the construction process, c) introduction of the application scenarios of knowledge graphs for talent services.

P3156™ - Standard for Requirements of Privacy-preserving Computation Integrated Platforms

This standard provides the architecture and requirements of privacy-preserving computation integrated platforms, including:

  • An overview of privacy-preserving computation integrated platforms
  • The reference architecture of privacy-preserving computation integrated platforms
  • Functional requirements of privacy-preserving computation integrated platforms
  • Performance requirements of privacy-preserving computation integrated platforms
  • Security requirements of privacy-preserving computation integrated platforms

P3157™ - Recommended Practice for Vulnerability Test for Machine Learning Models for Computer Vision Applications

This recommended practice provides a framework for vulnerability tests for machine learning models in the computer vision domain. The document covers the following areas:

  • Definitions of vulnerabilities for machine learning models and their training processes
  • Approaches for the selection and application of vulnerability test means
  • Approaches for determining test completeness and termination criteria
  • Metrics of vulnerabilities and test completeness
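
One simple vulnerability test of this kind measures how much accuracy drops under a small gradient-sign (FGSM-style) perturbation of the inputs. The sketch below uses a hand-written logistic regression; the epsilon value and the synthetic data are illustrative assumptions, not criteria from IEEE P3157.

```python
# Minimal sketch of an adversarial-perturbation vulnerability test with NumPy.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(float)

# Train a tiny logistic regression with gradient descent.
w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(X)


def accuracy(inputs: np.ndarray) -> float:
    return np.mean(((inputs @ w) > 0).astype(float) == y)


# FGSM-style perturbation: move each input against its correct class.
epsilon = 0.2
p = 1.0 / (1.0 + np.exp(-(X @ w)))
input_grad = np.outer(p - y, w)          # d(loss)/d(x) for the logistic loss
X_adv = X + epsilon * np.sign(input_grad)

print(f"clean accuracy: {accuracy(X):.2f}, perturbed accuracy: {accuracy(X_adv):.2f}")
```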

IEEE P3333.1.3™ - Standard for the Deep Learning Based Assessment of Visual Experience Based on Human Factors

This standard defines deep learning-based metrics of content analysis and quality of experience (QoE) assessment for visual contents, which is an extension of Standard for the Quality of Experience (QoE) and Visual-Comfort Assessments of Three-Dimensional (3D) Contents Based on Psychophysical Studies (IEEE 3333.1.1™) and Standard for the Perceptual Quality Assessment of Three Dimensional (3D) and Ultra High Definition (UHD) Contents (IEEE 3333.1.2™).
The scope covers the following:

  • Deep learning models for QoE assessment (multilayer perceptrons, convolutional neural networks, deep generative models)
  • Deep metrics of visual experience from High Definition (HD), UHD, 3D, High Dynamic Range (HDR), Virtual Reality (VR) and Mixed Reality (MR) contents
  • Deep analysis of clinical (electroencephalogram (EEG), electrocardiogram (ECG), electrooculography (EOG), and so on) and psychophysical (subjective test and simulator sickness questionnaire (SSQ)) data for QoE assessment
  • Deep personalized preference assessment of visual contents
  • Building image and video databases for performance benchmarking purpose if necessary
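
As a rough illustration of the kind of model such deep QoE metrics build on, here is a tiny PyTorch sketch of a CNN quality regressor: an image goes in, a scalar quality score comes out. The architecture and the random input are illustrative assumptions and are far smaller than anything the standard would cover.

```python
# Minimal sketch of a CNN-based quality-score regressor (requires PyTorch).
import torch
import torch.nn as nn


class TinyQoENet(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # predicted quality / mean opinion score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1)).squeeze(-1)


model = TinyQoENet()
frames = torch.rand(4, 3, 64, 64)        # a batch of 4 RGB frames
scores = model(frames)                    # one quality score per frame
print(scores.shape)                       # torch.Size([4])
```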

IEEE P3652.1™ - Guide for Architectural Framework and Application of Federated Machine Learning

Federated learning defines a machine learning framework that allows a collective model to be constructed from data that is distributed across data owners.

This guide provides a blueprint for data usage and model building across organizations while meeting applicable privacy, security and regulatory requirements. It defines the architectural framework and application guidelines for federated machine learning, including: 1) description and definition of federated learning, 2) the types of federated learning and the application scenarios to which each type applies, 3) performance evaluation of federated learning, and 4) associated regulatory requirements.
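
The core mechanism behind such frameworks can be illustrated with federated averaging (FedAvg): each data owner trains locally on its own data and only model weights are exchanged and averaged. The linear model and the synthetic client datasets below are illustrative assumptions, not part of the IEEE P3652.1 framework itself.

```python
# Minimal sketch of federated averaging with NumPy.
import numpy as np

rng = np.random.default_rng(7)
true_w = np.array([1.5, -2.0])

# Three clients, each with private data that never leaves the client.
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 2))
    y = X @ true_w + 0.05 * rng.normal(size=200)
    clients.append((X, y))


def local_update(w_global: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, steps: int = 20) -> np.ndarray:
    w = w_global.copy()
    for _ in range(steps):
        w -= lr * 2.0 * X.T @ (X @ w - y) / len(X)
    return w


w_global = np.zeros(2)
for round_ in range(10):
    local_weights = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)   # server aggregates weights only

print(w_global)  # approaches [1.5, -2.0]
```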

PC37.249™ - IEEE Draft Guide for Categorizing Security Needs for Protection and Automation Related Data Files

Security categorization is the first step in a security risk management framework because of its impact on all other steps, from the selection of security controls to apply based upon the assessment, to the level of effort required to assess the effectiveness of the security controls put in place. Security categorization covers information (data) at rest and information systems. The approach used in this guide applies only to data at rest. The approach aligns with National Institute of Standards and Technology (NIST) Special Publication (SP) 800-60 Volume 1, Revision 1 [B2] and with Federal Information Processing Standards (FIPS) 199 [B1], the latter of which establishes security categories based on the magnitude of harm expected to result from compromises rather than on the results of an assessment that includes an attempt to determine the probability of compromise.

IEEE 7000™-2021 - Model Process for Addressing Ethical Concerns During System Design

This standard outlines an approach for identifying and analyzing potential ethical issues in a system or software program from the onset of the effort. The values-based system design methods address ethical considerations at each stage of development to help avoid negative unintended consequences while increasing innovation.

IEEE 7001™-2021 - Standard for Transparency of Autonomous Systems

This standard describes measurable, testable levels of transparency, so that autonomous systems can be objectively assessed and levels of compliance determined.

A key concern over autonomous systems (AS) is that their operation must be transparent to a wide range of stakeholders, for different reasons.

For designers, the standard will provide a guide for self-assessing transparency during development and suggest mechanisms for improving transparency (for instance the need for secure storage of sensor and internal state data, comparable to a flight data recorder or black box).
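
The "flight data recorder" idea mentioned above can be sketched as an append-only log of timestamped sensor readings and internal decisions that can later be inspected. The field names and the export format below are illustrative assumptions, not requirements of IEEE 7001.

```python
# Minimal sketch of an append-only state recorder for an autonomous system.
import json
import time


class StateRecorder:
    """Append-only record of what the system sensed and what it decided."""

    def __init__(self) -> None:
        self.records = []

    def log(self, sensors: dict, decision: str) -> None:
        self.records.append({"t": time.time(), "sensors": sensors, "decision": decision})

    def export(self) -> str:
        return json.dumps(self.records, indent=2)   # for post-incident inspection


recorder = StateRecorder()
recorder.log({"lidar_min_range_m": 3.2, "speed_mps": 1.1}, decision="slow_down")
recorder.log({"lidar_min_range_m": 0.8, "speed_mps": 0.0}, decision="emergency_stop")
print(recorder.export())
```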

IEEE P7002™ - Standard for Data Privacy Process

This standard specifies how to manage privacy issues for systems or software that collect personal data. It will do so by defining requirements that cover corporate data collection policies and quality assurance. It also includes a use case and data model for organizations developing applications involving personal information. The standard will help designers by providing ways to identify and measure privacy controls in their systems utilizing privacy impact assessments.

IEEE P7003™ - Standard for Algorithmic Bias Considerations

This standard describes specific methodologies to help users certify how they worked to address and eliminate issues of negative bias in the creation of their algorithms. Here, “negative bias” refers to the use of overly subjective or uninformed data sets or information known to be inconsistent with legislation concerning certain protected characteristics, or to instances of bias against groups that are not necessarily protected explicitly by legislation but whose stakeholder or user well-being is otherwise diminished and for which there are good reasons to consider such bias inappropriate.

IEEE P7004™ - Standard for Child and Student Data Governance

The standard defines specific methodologies to help users certify how they approach accessing, collecting, storing, utilizing, sharing, and destroying child and student data. The standard provides specific metrics and conformance criteria regarding these types of uses from trusted global partners and how vendors and educational institutions can meet them.

IEEE 7005™-2021 - Standard for Transparent Employer Data Governance

The standard defines specific methodologies to help employers to certify how they approach accessing, collecting, storing, utilizing, sharing, and destroying employee data. The standard provides specific metrics and conformance criteria regarding these types of uses from trusted global partners and how vendors and employers can meet them.

IEEE 7007™-2021 - Ontological Standard for Ethically Driven Robotics and Automation Systems

The standard establishes a set of ontologies with different abstraction levels that contain concepts, definitions and axioms which are necessary to establish ethically driven methodologies for the design of Robots and Automation Systems.

IEEE P7008™ - Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems

“Nudges” as exhibited by robotic, intelligent or autonomous systems are defined as overt or hidden suggestions or manipulations designed to influence the behavior or emotions of a user.

This standard establishes a delineation of typical nudges (currently in use or that could be created). It contains concepts, functions and benefits necessary to establish and ensure ethically driven methodologies for the design of the robotic, intelligent and autonomous systems that incorporate them.

IEEE P7009™ - Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems

This standard establishes a practical, technical baseline of specific methodologies and tools for the development, implementation, and use of effective fail-safe mechanisms in autonomous and semi-autonomous systems.

The standard includes (but is not limited to): clear procedures for measuring, testing, and certifying a system’s ability to fail safely on a scale from weak to strong, and instructions for improvement in the case of unsatisfactory performance.

IEEE 7010™-2020 - IEEE Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-being

Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems (A/IS) on Human Well-being is a recommended practice for measuring the impact of A/IS on humans. The overall intent of IEEE 7010™ is to support the outcome of A/IS having positive impacts on human well-being.

The recommended practice is grounded in scientifically valid well-being indices currently in use and based on a stakeholder engagement process. The intent of the recommended practice is to guide product development, identify areas for improvement, manage risks, assess performance and identify intended and unintended users, uses and impacts on human well-being of A/IS.

IEEE P7010.1™ - Recommended Practice for Environmental Social Governance (ESG) and Social Development Goal (SDG) Action Implementation and Advancing Corporate Social Responsibility

This IEEE Standards Project provides recommendations for next steps in the application of IEEE Std 7010, applied to meeting Environmental Social Governance (ESG) and Social Development Goal (SDG) initiatives and targets. It provides action steps and map elements to review and address when applying IEEE Std 7010. This recommended practice serves to enhance the quality of the published standard by validating its design outcomes through expanded use. It provides recommendations for multiple users to align processes, collect data, develop policies and practices, and measure activities against the impact on corporate goals and resulting stakeholders.

IEEE P7011™ - Standard for the Process of Identifying and Rating the Trustworthiness of News Sources

This standard provides semi-autonomous processes using standards to create and maintain news purveyor ratings for purposes of public awareness. It standardizes processes to identify and rate the factual accuracy of news stories in order to produce a rating of online news purveyors and the online portion of multimedia news purveyors. This process will be used to produce truthfulness scorecards through multi-faceted and multi-sourced approaches.

The standard defines an algorithm using open source software and a scorecard rating system as the methodology for rating trustworthiness as a core tenet in an effort to establish trust and acceptance.

IEEE P7012™ - Standard for Machine Readable Personal Privacy Terms

The standard identifies/addresses the manner in which personal privacy terms are proffered and how they can be read and agreed to by machines.
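
The underlying idea is that an individual's privacy terms are expressed as data, so a service's request can be checked against them automatically. The term vocabulary below is hypothetical and is not the format defined by IEEE P7012.

```python
# Minimal sketch of machine-readable privacy terms and an automatic check.
my_terms = {
    "purposes_allowed": {"service_delivery"},
    "retention_days_max": 30,
    "third_party_sharing": False,
}

service_request = {
    "purposes": {"service_delivery", "advertising"},
    "retention_days": 365,
    "third_party_sharing": True,
}


def agreeable(terms: dict, request: dict) -> list:
    """Return the list of conflicts; an empty list means the request is acceptable."""
    conflicts = []
    if not request["purposes"] <= terms["purposes_allowed"]:
        conflicts.append("purpose not allowed")
    if request["retention_days"] > terms["retention_days_max"]:
        conflicts.append("retention too long")
    if request["third_party_sharing"] and not terms["third_party_sharing"]:
        conflicts.append("third-party sharing not permitted")
    return conflicts


print(agreeable(my_terms, service_request))
```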

IEEE P7014™ - Standard for Ethical considerations in Emulated Empathy in Autonomous and Intelligent Systems

This standard defines a model for ethical considerations and practices in the design, creation and use of empathic technology, incorporating systems that have the capacity to identify, quantify, respond to, or simulate affective states, such as emotions and cognitive states. This includes coverage of ‘affective computing’, ‘emotion Artificial Intelligence’ and related fields.

IEEE P7015™ - Standard for Data and Artificial Intelligence (AI) Literacy, Skills, and Readiness

To coordinate global data and AI literacy building efforts, this standard establishes an operational framework and associated capabilities for designing policy interventions, tracking their progress, and empirically evaluating their outcomes. The standard includes a common set of definitions, language, and understanding of data and AI literacy, skills, and readiness.

IEEE P7016™ - Standard for Ethically Aligned Design and Operation of Metaverse Systems

This standard defines a methodology for creating possible Metaverse systems. A description of the sociotechnical aspects of Metaverse systems is provided, together with a high level ethical assessment methodology for the design and operation of Metaverse systems.

IEEE P7016.1™ - Standard for Ethically Aligned Educational Metadata in Extended Reality (XR) & Metaverse

This standard defines a high-level overview of a conceptual data schema for a metadata instance based on ethics concepts for a learning object utilized within XR systems and Metaverse applications.

IEEE P7017™ - Recommended Practice for Design-Centered Human-Robot Interaction (HRI) and Governance

This recommended practice describes the methodology and application of ‘compliance by design’ in the area of human-robot interaction (HRI) with regard to socially assistive robots.

IEEE P7018™ - Standard for Security and Trustworthiness Requirements in Generative Pretrained Artificial Intelligence (AI) Models

This standard establishes a comprehensive framework for mitigating security risks and privacy leakage in the development, deployment, and use of generative pretrained AI models.

IEEE 1232.3™-2014 - IEEE Guide for the Use of Artificial Intelligence Exchange and Service Tie to All Test Environments (AI-ESTATE)

Guidance to developers of IEEE Std 1232-conformant applications is provided in this guide.

IEEE 1855™-2016 - IEEE Standard for Fuzzy Markup Language

A new specification language, named Fuzzy Markup Language (FML), is presented in this standard, exploiting the benefits offered by eXtensible Markup Language (XML) specifications and related tools in order to model a fuzzy logic system in a human-readable and hardware independent way.
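
FML itself is an XML vocabulary; the sketch below only illustrates, in plain Python, the kind of fuzzy logic system such a markup would describe: fuzzy sets, rules, and a defuzzified output. The membership functions and rules are illustrative assumptions, not part of IEEE 1855.

```python
# Minimal sketch of a two-rule fuzzy controller with weighted-average defuzzification.
def triangular(a: float, b: float, c: float):
    """Return a triangular membership function peaking at b."""
    def mu(x: float) -> float:
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu


# Fuzzy sets for an input "temperature" and crisp consequents for "fan speed".
cold, hot = triangular(0, 10, 20), triangular(15, 30, 45)
low_speed, high_speed = 20.0, 90.0


def fan_speed(temperature: float) -> float:
    # Rules: IF cold THEN low speed; IF hot THEN high speed.
    w_cold, w_hot = cold(temperature), hot(temperature)
    if w_cold + w_hot == 0.0:
        return low_speed
    # Weighted-average defuzzification of the two rule consequents.
    return (w_cold * low_speed + w_hot * high_speed) / (w_cold + w_hot)


print(fan_speed(18.0))  # both rules fire equally -> 55.0
```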

IEEE 1873™-2015 - IEEE Standard for Robot Map Data Representation for Navigation

A map data representation of environments of a mobile robot performing a navigation task is specified in this standard. It provides data models and data formats for two-dimensional (2D) metric and topological maps.

IEEE IC20-010 - Labeling Cybersecurity Data for AI Automation (Single- and-Multi-Modal)

Cyber analysts are becoming a bottleneck in analyzing ever-increasing amounts of data. Automating cyber analysts' actions using AI can help reduce the amount of work for analysts and thereby dramatically reduce time to outcome, record actions in knowledge bases for the training of new cyber analysts, and, in general, open up the field for new opportunities. As a result, the state of cybersecurity will improve. It is envisioned that this group will bring together industry stakeholders to engage in building consensus on priority issues for standardization activities on these topics and provide a platform for IEEE thought leadership to the industry.

IEEE IC20-012 - Roadmap for the Development and Implementation of Standard Oriented Knowledge Graphs

This activity assists organizations or users who develop and apply standard-oriented knowledge graphs in gaining a basic picture of the framework and general construction method. In addition, it may assist integrators of knowledge graphs in designing a generic interface and following clarified evaluation metrics. Furthermore, standard-oriented knowledge graphs can be integrated, implemented, and applied more simply and efficiently.

IEEE IC20-016 - The IEEE Global Initiative on Ethics of Extended Reality

The goal of this Industry Connections group is to continue and proliferate the existing efforts of the IEEE Standards Association focused on the ethical issues related to Extended Reality, as outlined in the Extended Reality chapter of Ethically Aligned Design. The group invites Working Group members from the multiple standards working groups focused on augmented and virtual reality and the spatial web, along with additional subject matter experts from industry and policy, to create white papers, workshops, and PARs related to this work to ensure these technologies move from “perilous” to “purposeful.”

The goal of this Industry Connections Program is to strengthen IEEE Standards Association work on biosecurity and safety, aligning with and supporting IEEE's mission of “Advancing Technology for Humanity”.

IEEE IC20-027 - Responsible Innovation of AI and the Life Sciences

Nowhere is the potential of Artificial Intelligence (AI) and autonomous intelligent systems (AIS) more apparent than in human health and human biology, where increasingly sophisticated computational data modelling methods have led to dramatic improvements in our ability to precisely diagnose and treat disease, to estimate risks, and to deliver care. Genetic information is increasingly being used in AI algorithms to guide treatment selection and even whether treatment is provided at all. The transformative impact of these technologies and the commodification of our biological and genomic data will have a significant impact on the future biological continuum and geopolitical order.
