Description
Recent research efforts on adversarial ML have investigated problem-space attacks, focusing on the generation of real evasive objects in domains where, unlike images, there is no clear inverse mapping to the feature space (e.g., software). However, the design, comparison, and real-world implications of problem-space attacks remain underexplored. In this talk, I will present two major contributions from our recent IEEE Symp. S&P 2020 paper [1]. First, I will present our novel reformulation of adversarial ML evasion attacks for the problem space, which involves more constraints than the feature space and sheds more light on the relationship between feature-space and problem-space attacks. Second, building on our reformulation, I will present our novel problem-space attack for generating end-to-end evasive Android malware, showing that it is feasible to generate evasive malware at scale that also evades feature-space defenses.

[1] Fabio Pierazzi*, Feargus Pendlebury*, Jacopo Cortellazzi, Lorenzo Cavallaro. "Intriguing Properties of Adversarial ML Attacks in the Problem Space". IEEE Symp. Security & Privacy (Oakland), 2020.

Trailer of the talk: https://www.youtube.com/watch?v=lLrnHwrvYiQ
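To make the feature-space/problem-space distinction concrete, here is a minimal toy sketch (not the paper's method; the linear classifier, weights, and sample are all hypothetical): a feature-space evasion simply perturbs the feature vector across the decision boundary, whereas a problem-space attack would additionally have to realize that perturbed vector as a real, working program.

```python
import numpy as np

# Hypothetical toy setup: a linear "malware detector" over 10 features.
rng = np.random.default_rng(0)
w = rng.normal(size=10)           # classifier weights
b = 0.0
x = rng.normal(size=10)           # feature vector of a sample
x = x if w @ x + b > 0 else -x    # make sure it starts out "detected"

def classify(v):
    """True means the sample is flagged as malicious."""
    return w @ v + b > 0

# Feature-space evasion: move against the weight-sign direction, with a
# step just large enough to cross the decision boundary.
eps = (w @ x + b + 1.0) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print(classify(x))       # True: original sample is detected
print(classify(x_adv))   # False: the perturbed feature vector evades
```

The missing step, and the focus of the talk, is that `x_adv` is only a vector: in domains like software there is no clear inverse mapping from it back to a valid, functional object, so the problem-space attack must construct one under real-world constraints.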
Practical information
Next sessions

- Privacy-preserving collaboration for intrusion detection in distributed systems
  Speaker: Léo Lavaur - Université du Luxembourg
  The emergence of Federated Learning (FL) has rekindled the interest in collaborative intrusion detection systems, which were previously limited by the risks of information disclosure associated with data sharing. But is it a good collaboration tool? Originally designed to train prediction models on distributed consumer data without compromising data confidentiality, its use as a collaborative[…]
  Tags: SoSysec, Privacy, Intrusion detection, Distributed systems