Conference Paper, Year: 2025

Trustworthiness of Stochastic Gradient Descent in Distributed Learning

Abstract

Distributed learning (DL) uses multiple nodes to accelerate training, enabling efficient optimization of large-scale models. Stochastic Gradient Descent (SGD), a key optimization algorithm, plays a central role in this process. However, communication bottlenecks often limit scalability and efficiency, leading to the increasing adoption of compressed SGD techniques to alleviate these challenges. While compressed SGD addresses communication overhead, it introduces trustworthiness concerns: the gradient exchanges among nodes remain vulnerable to attacks such as gradient inversion (GradInv) and membership inference attacks (MIA). The trustworthiness of compressed SGD remains unexplored, leaving important questions about its reliability unanswered. In this paper, we provide a trustworthiness evaluation of compressed versus uncompressed SGD. Specifically, we conduct empirical studies using GradInv attacks, revealing that compressed SGD demonstrates significantly higher resistance to privacy leakage than uncompressed SGD. In addition, our findings suggest that MIA may not be a reliable metric for assessing privacy risks in distributed learning.
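
The abstract does not specify which gradient compressor was evaluated. As an illustration only, the minimal NumPy sketch below shows top-k sparsification, one widely used compressed-SGD operator; the function names and the ratio parameter are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def topk_compress(grad, ratio=0.01):
        # Keep only the largest-magnitude `ratio` fraction of gradient entries.
        # Only (index, value) pairs are transmitted; the rest are dropped
        # (or accumulated locally as error feedback in many variants).
        flat = grad.ravel()
        k = max(1, int(flat.size * ratio))
        idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the k largest magnitudes
        return idx, flat[idx]

    def topk_decompress(idx, values, shape):
        # Rebuild a dense (mostly zero) gradient from the transmitted sparse pair.
        flat = np.zeros(int(np.prod(shape)))
        flat[idx] = values
        return flat.reshape(shape)

    # Toy communication round: a worker compresses its local gradient before sending.
    rng = np.random.default_rng(0)
    g = rng.normal(size=(4, 5))
    idx, vals = topk_compress(g, ratio=0.1)
    g_hat = topk_decompress(idx, vals, g.shape)
    print("entries sent:", vals.size, "of", g.size)

Intuitively, an inversion adversary observing only a small fraction of the gradient coordinates has less information from which to reconstruct training samples, which is consistent with the higher resistance to GradInv reported in the abstract.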

No file deposited

Dates and versions

hal-05544527, version 1 (09-03-2026)

Identifiers

HAL Id: hal-05544527
DOI: 10.23919/ECC65951.2025.11187286

Cite

Hongyang Li, Caesar Wu, Mohammed Chadli, Saïd Mammar, Pascal Bouvry. Trustworthiness of Stochastic Gradient Descent in Distributed Learning. 23rd European Control Conference (ECC 2025), Jun 2025, Thessaloniki, Greece. pp. 2199-2204, ⟨10.23919/ECC65951.2025.11187286⟩. ⟨hal-05544527⟩