Privacy and security issues pose significant challenges to the federated machine learning (FML) community. This recommended practice provides a general view of privacy and security risks in FML and of how to meet applicable privacy and security requirements. It is organized into four parts: malicious and non-malicious failures in FML; privacy and security requirements from the perspectives of the system and of FML participants; defensive methods and fault recovery methods; and evaluation of privacy and security risks. It also provides guidance for typical FML scenarios in different industry areas to help practitioners apply FML more effectively.
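To make the "defensive methods" part concrete, the following is a minimal, illustrative sketch and is not taken from the recommended practice itself. It shows one widely discussed defense against malicious client failures in FML: replacing plain federated averaging with a coordinate-wise median aggregator so that a minority of poisoned updates cannot drag the global model arbitrarily far. The function names and toy data are assumptions for illustration only.

```python
# Illustrative sketch only (not defined by the recommended practice):
# robust aggregation of FML client updates against a malicious minority.
import numpy as np


def fedavg(client_updates: list[np.ndarray]) -> np.ndarray:
    """Plain federated averaging: a single malicious update can skew the result."""
    return np.mean(np.stack(client_updates), axis=0)


def coordinate_median(client_updates: list[np.ndarray]) -> np.ndarray:
    """Coordinate-wise median: tolerates a minority of arbitrarily corrupted updates."""
    return np.median(np.stack(client_updates), axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Nine honest clients send updates near 1.0; one malicious client sends 100.0.
    honest = [rng.normal(loc=1.0, scale=0.1, size=4) for _ in range(9)]
    malicious = [np.full(4, 100.0)]
    updates = honest + malicious

    print("mean aggregate:  ", fedavg(updates))             # pulled far off by the attacker
    print("median aggregate:", coordinate_median(updates))  # stays near the honest value ~1.0
```

The median aggregator trades some statistical efficiency for robustness, which is the typical design trade-off in defensive aggregation methods of this kind.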
- Standard Committee: C/AISC - Artificial Intelligence Standards Committee
- Joint Sponsors: C/LT
- Status: Active Standard
- PAR Approval: 2021-03-25
- Board Approval: 2023-12-06
- History: Published 2024-04-26
Working Group Details
- Society: IEEE Computer Society
- Standard Committee: C/AISC - Artificial Intelligence Standards Committee
- Working Group: SPFML - Security and Privacy for Federated Machine Learning
- IEEE Program Manager: Christy Bahn
- Working Group Chair: Zuping Wu
Other Activities From This Working Group
- Active Projects: None
- Active Standards: None
- Superseded Standards: None
- Inactive-Withdrawn Standards: None
- Inactive-Reserved Standards: None