IEEE International Conference on Sensing, Communication, and Networking
20–23 September 2022 // Virtual Conference

Distributed, Private, and Robust Machine Learning over Networks

Distributed machine learning is an interdisciplinary research area standing at the intersection of artificial intelligence, edge computing, and large-scale networked systems. As vast volumes of data are generated on edge devices such as mobile phones, tablets, laptops, and autonomous vehicles, a prominent trend in distributed ML, exemplified by federated learning, moves the ML model directly onto the data-generating sources, i.e., the end-users, for on-device data processing. This approach not only substantially reduces communication overhead but, more importantly, enables end-users to obtain a global model without centralizing their private data, thereby contributing to the development of trustworthy intelligent systems.

Despite its great potential, several new challenges need to be addressed to make this paradigm possible. Specifically, the processing power, communication capability, and data quality across different end-user devices are highly heterogeneous, giving rise to significant fluctuation, or even divergence, in the learning process. Moreover, even though the end-users exchange only intermediate model parameters rather than raw data (a pattern illustrated in the sketch after the topic list below), this does not rule out the threat of privacy leakage, as a malicious agent can adopt advanced inference techniques to recover a large portion of the original information from the intermediate parameters.

To that end, this workshop aims to foster discussion, discovery, and dissemination of novel ideas and approaches for private and robust distributed machine learning. We solicit high-quality original papers on topics including, but not limited to:

  • Robust distributed learning algorithms against data and system heterogeneity
  • Privacy-enhancing schemes (e.g., adopting differential privacy or creating synthetic data) for distributed learning systems
  • Novel methods for distributed machine learning with limited communication resources
  • Over-the-air computation for private and robust distributed machine learning systems
  • Impact of network topology on distributed machine learning algorithms
  • Robust and private distributed reinforcement/meta/deep learning and other novel learning paradigms
  • Networking protocols to improve robustness and privacy in distributed learning
  • Experimental implementations and testbeds on large-scale distributed learning systems
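For concreteness, below is a minimal sketch of the training pattern discussed above: each device runs a few gradient steps on its own private data, clips and perturbs its model update with Gaussian noise in the style of differential privacy, and a server averages the perturbed updates (federated averaging). This is an illustrative NumPy example under simplifying assumptions, namely a linear model, synthetic data, and a heuristic noise scale with no formal privacy accounting; all names (local_update, privatize, NUM_CLIENTS, and so on) are hypothetical rather than drawn from any particular paper or system.

import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS = 10   # hypothetical number of edge devices
DIM = 5            # model dimension
LOCAL_STEPS = 20   # gradient steps per device per round
LR = 0.1           # local learning rate
CLIP = 1.0         # clipping norm for each model update
NOISE_STD = 0.05   # Gaussian noise scale (heuristic, not calibrated)

# Each client holds private data that never leaves the device.
true_w = rng.normal(size=DIM)
client_data = []
for _ in range(NUM_CLIENTS):
    X = rng.normal(size=(50, DIM))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    client_data.append((X, y))

def local_update(w, X, y):
    # A few steps of gradient descent on one client's private data.
    w = w.copy()
    for _ in range(LOCAL_STEPS):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= LR * grad
    return w

def privatize(delta):
    # Clip the update and add Gaussian noise, in the style of
    # differentially private learning (no formal accounting here).
    delta = delta * min(1.0, CLIP / max(np.linalg.norm(delta), 1e-12))
    return delta + rng.normal(scale=NOISE_STD, size=delta.shape)

# Federated averaging: only perturbed model updates are exchanged,
# never the raw data.
w_global = np.zeros(DIM)
for _ in range(30):
    updates = [privatize(local_update(w_global, X, y) - w_global)
               for X, y in client_data]
    w_global += np.mean(updates, axis=0)

print("distance to true model:", np.linalg.norm(w_global - true_w))

Submissions on the topics above can be read as replacing pieces of this loop, e.g., robust aggregation rules in place of the plain mean, communication-efficient encodings of the updates, or noise calibrated to a formal (epsilon, delta) guarantee.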

Workshop Organizers

  • Howard H. Yang, ZJU-UIUC Institute, Zhejiang University, China
  • Tony Q. S. Quek, ISTD Pillar, Singapore University of Technology and Design, Singapore

Contribution Format and Workshop Deadlines

Contributions to this workshop should be short papers with a 6-page limit, covering recent advances in distributed machine learning with a particular focus on techniques that enhance the robustness and privacy of such systems.

The updated submission deadline for this workshop is July 20, 2022.


Paper Submission Link

Workshop papers can be submitted via EDAS.
