CLOSED Call for Papers: Special Issue on Trustable, Verifiable, and Auditable Federated Learning

IEEE Transactions on Big Data (TBD) seeks submissions for this special issue.

Data sharing and collaborative model training are promising ways to improve the quality of deep-learning models. In practice, however, such settings are often difficult to implement because of data-privacy concerns and related regulations such as the GDPR and HIPAA. Federated learning trains models collaboratively on distributed data sources without disclosing the private data held by each source, thereby enabling privacy-preserving data sharing and collaboration (a minimal sketch of this workflow follows the topic list below).

Federated learning nevertheless faces multiple challenges that may limit its adoption in real-world scenarios. It remains exposed to various attacks that can leak the private data of individual sources or degrade the accuracy of the jointly trained model. Moreover, common federated settings protect the data privacy of each participant but cannot identify incorrect inputs or computations from malicious participants, so techniques such as verifiable computing are also needed. In critical domains such as finance and healthcare, data privacy is essential, but so is model interpretability: the behavior and results of a model produced by federated training must be explainable and auditable before it can be used with confidence. Meeting these requirements is ultimately what allows federated-learning providers to earn the trust of federated-learning consumers and users.

In addition, federated learning on non-IID distributed data sources usually yields lower model performance, and well-designed incentive mechanisms are needed in practice to encourage the active participation of data owners. Social responsibility in federated-learning systems is also an important topic that must be addressed. We believe this special issue will offer a timely collection of research updates for researchers and practitioners working on federated learning. Topics of interest include, but are not limited to:
  • Adversarial Attacks on Federated Learning
  • Federated Learning for Non-IID Data
  • Incentive Mechanisms in Federated Learning Systems
  • Interpretability in Federated Learning
  • Social Responsibility in Federated Learning Systems
  • Fully Decentralized Federated Learning
  • Verifiable Computing in Federated Learning
  • Federated Learning with Blockchain
  • Privacy-Preserving Techniques in Federated Learning
  • Communication Efficiency in Federated Learning
  • Federated Learning with Heterogeneous Devices
  • Federated Learning with Unreliable Participants
  • Systems and Infrastructures for Federated Learning
  • Applications with Federated Learning
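
To make the workflow described above concrete, the following minimal sketch (in Python, using only NumPy) simulates federated averaging over three clients. The synthetic client datasets, the linear model, and the plain-averaging aggregation are illustrative assumptions for this sketch only and are not part of the call itself.

```python
# Minimal federated-averaging (FedAvg) sketch for a linear model.
# Illustrative only: client data, model, and update rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent update; raw data never leaves the client."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Simulated private datasets held by three clients.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(10):
    # Each client trains locally and sends back only its model weights.
    local_weights = [local_update(w_global, X, y) for X, y in clients]
    # The server aggregates the updates by averaging.
    w_global = np.mean(local_weights, axis=0)

print("aggregated model weights:", w_global)
```

In a real deployment only the model updates cross the network, and the aggregation step is precisely where many of the attack, verification, incentive, and auditing questions listed above arise.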

Important Dates

  • Submissions deadline: 5 June 2022
  • Revised papers due: 1 July 2022
  • Final notification: 15 August 2022
  • Publication of special issue: October 2022

Submission Guidelines

Manuscripts must be within the scope of the IEEE Transactions on Big Data and of the special issue on “Trustable, Verifiable, and Auditable Federated Learning.” Manuscript preparation guidelines are available on the TBD Author Information webpage. All papers will be handled via ScholarOne Manuscripts; please select "SI: Trustable, Verifiable, and Auditable Federated Learning" as the article type during the submission process. Submissions that are out of the scope of the journal may be rejected.

Guest Editors

  • Qiang Yang, Hong Kong University of Science and Technology, Hong Kong (http://www.cs.ust.hk/~qyang/)
  • Sin G. Teo, Agency for Science, Technology, and Research, Singapore
  • Chao Jin, Agency for Science, Technology, and Research, Singapore
  • Lixin Fan, WeBank, China
  • Yang Liu, Tsinghua University, China (https://sites.google.com/site/yangliuveronica/)
  • Han Yu, Nanyang Technological University, Singapore (http://hanyu.sg/)
  • Le Zhang, University of Electronic Science and Technology of China, China (https://zhangleuestc.github.io/)