This session was presented on March 25, 2022.

Description

  • Speaker

    Dario Pasquini (EPFL)

This talk is about inaccurate assumptions, unrealistic trust models, and flawed methodologies affecting current collaborative machine learning techniques. In the presentation, we cover different security issues concerning both emerging approaches and well-established solutions in privacy-preserving collaborative machine learning. We start by discussing the inherent insecurity of Split Learning and peer-to-peer collaborative learning. Then, we examine the soundness of current Secure Aggregation protocols in Federated Learning, showing that they provide no additional privacy to users. Ultimately, the objective of this talk is to highlight the general errors and flawed approaches we should all avoid when devising and implementing "privacy-preserving collaborative machine learning".

Next sessions

  • Privacy-preserving collaboration for intrusion detection in distributed systems

    • March 27, 2026 (11:00 - 12:00)

    • Inria Center of the University of Rennes - Room Markov

Speaker: Léo Lavaur (Université du Luxembourg)

The emergence of Federated Learning (FL) has rekindled interest in collaborative intrusion detection systems, which were previously limited by the risks of information disclosure associated with data sharing. But is it a good collaboration tool? Originally designed to train prediction models on distributed consumer data without compromising data confidentiality, its use as a collaborative[…]

    • SoSysec

    • Privacy

    • Intrusion detection

    • Distributed systems
