Description
This talk is about inaccurate assumptions, unrealistic trust models, and flawed methodologies affecting current collaborative machine learning techniques. In the presentation, we cover several security issues concerning both emerging approaches and well-established solutions in privacy-preserving collaborative machine learning. We start by discussing the inherent insecurity of Split Learning and peer-to-peer collaborative learning. We then examine the soundness of current Secure Aggregation protocols in Federated Learning, showing that they do not provide users with any additional privacy. Ultimately, the objective of this talk is to highlight the general errors and flawed approaches we should all avoid when devising and implementing "privacy-preserving collaborative machine learning".
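To make the Secure Aggregation discussion concrete, the following is a minimal Python sketch of a pairwise-masking aggregation scheme in the spirit of Bonawitz et al.: the server only ever receives masked updates, yet their sum equals the sum of the true updates because the masks cancel. All identifiers, parameters, and simplifications here are illustrative assumptions for exposition; they are not the protocol analysed in the talk.

import random

MOD = 2**32          # updates are aggregated modulo a large power of two
DIM = 4              # toy model dimension
CLIENTS = [0, 1, 2]  # client identifiers

def masked_update(update, client, seeds):
    """Return the client's update plus pairwise masks that cancel in the sum."""
    masked = list(update)
    for other in CLIENTS:
        if other == client:
            continue
        # Both endpoints of a pair derive the same mask from a shared seed;
        # the lower-id client adds it, the higher-id client subtracts it.
        rng = random.Random(seeds[frozenset((client, other))])
        mask = [rng.randrange(MOD) for _ in range(DIM)]
        sign = 1 if client < other else -1
        masked = [(m + sign * x) % MOD for m, x in zip(masked, mask)]
    return masked

# Pairwise shared seeds (in a real protocol these would come from key agreement).
seeds = {frozenset(p): random.randrange(2**62)
         for p in [(0, 1), (0, 2), (1, 2)]}

# Each client's true (toy, non-negative integer) model update.
updates = {c: [random.randrange(10) for _ in range(DIM)] for c in CLIENTS}

# The server only ever sees masked updates ...
received = [masked_update(updates[c], c, seeds) for c in CLIENTS]

# ... yet their sum equals the sum of the true updates, since the masks cancel.
aggregate = [sum(col) % MOD for col in zip(*received)]
expected = [sum(col) % MOD for col in zip(*updates.values())]
assert aggregate == expected
print("aggregate:", aggregate)

The sketch omits dropouts, key agreement, and malicious participants, which is precisely where the talk argues such protocols fall short of their privacy claims.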
Practical information
Next sessions
Video games from the screen to the real world: legal and (geo)political issues through the lens of cybersecurity
Speakers: Léandre Lebon, Sandrine Turgis - Univ Rennes, IODE
Copyright protection, the fight against cheating techniques, interactions with war and hybrid conflicts, democratic stakes... From a cybersecurity perspective, the legal and (geo)political issues surrounding video games are numerous. This presentation by the video games working group (GTJV) will feed the reflection on the articulation between video games and[…]
Law
The Quest for my Perfect MATE. Investigate MATE: Man-at-the-End attacker (followed by a hands-on application).
Speakers: Mohamed Sabt, Etienne Nedjaï - Univ Rennes, IRISA
Shannon sought security against an attacker with unlimited computational power: if an information source conveys some information, then Shannon's attacker will surely extract that information. Diffie and Hellman refined Shannon's attacker model by taking into account the fact that real attackers are computationally limited. This idea became one of the greatest new paradigms in computer[…]