Privacy and security issues pose great challenges to the federated machine learning community. This document provides a general view of privacy and security risks in federated machine learning, together with the applicable privacy and security requirements. The recommended practice covers four parts: malicious and non-malicious failures in federated machine learning; privacy and security requirements from the perspectives of the system and of federated machine learning participants; defensive methods and fault-recovery methods; and the evaluation of privacy and security risks. It also provides guidance for typical federated learning scenarios in different industry areas, helping practitioners apply federated learning more effectively.
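The recommended practice itself is not reproduced here, but one of the defensive methods it alludes to can be illustrated. The sketch below (an assumption for illustration, not taken from the standard) shows coordinate-wise median aggregation, a common Byzantine-robust defense against malicious client updates in federated learning; the client updates are hypothetical values.

```python
# Illustrative sketch: coordinate-wise median aggregation as a defense
# against a malicious participant in federated learning. All values and
# function names below are hypothetical, not part of the SPFML practice.

from statistics import median
from typing import List


def aggregate_mean(updates: List[List[float]]) -> List[float]:
    """Plain federated averaging: a single malicious client can skew it."""
    n = len(updates)
    return [sum(coords) / n for coords in zip(*updates)]


def aggregate_median(updates: List[List[float]]) -> List[float]:
    """Coordinate-wise median: bounds the influence of outlier updates."""
    return [median(coords) for coords in zip(*updates)]


# Three honest clients report similar model updates; one malicious
# client submits an extreme update to try to poison the global model.
honest = [[0.9, -0.1], [1.0, 0.0], [1.1, 0.1]]
malicious = [[100.0, -100.0]]
updates = honest + malicious

mean_agg = aggregate_mean(updates)      # dragged far away by the attacker
median_agg = aggregate_median(updates)  # stays close to the honest updates
```

With the values above, the mean aggregate is pulled to roughly 25.75 on the first coordinate, while the median aggregate stays near the honest clients' updates, which is why robust aggregation is a standard mitigation for this class of malicious failure.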
Working Group Details
- Society: IEEE Computer Society
- Sponsor Committee: C/AISC - Artificial Intelligence Standards Committee
- Working Group: SPFML - Security and Privacy for Federated Machine Learning
- IEEE Program Manager: Christy Bahn
- Working Group Chair: Zuping Wu
Other Activities From This Working Group
This working group currently has no other active projects and no active, superseded, inactive-withdrawn, or inactive-reserved standards.