Description
Recent research efforts on adversarial ML have investigated problem-space attacks, focusing on the generation of real evasive objects in domains where, unlike images, there is no clear inverse mapping to the feature space (e.g., software). However, the design, comparison, and real-world implications of problem-space attacks remain underexplored. In this talk, I will present two major contributions from our recent IEEE Symp. S&P 2020 paper [1]. First, I will present our novel reformulation of adversarial ML evasion attacks for the problem space, which involves more constraints than the feature space and sheds more light on the relationship between feature-space and problem-space attacks. Second, building on this reformulation, I will present our novel problem-space attack for generating end-to-end evasive Android malware, showing that it is feasible to generate evasive malware at scale that also evades feature-space defenses.

[1] Fabio Pierazzi*, Feargus Pendlebury*, Jacopo Cortellazzi, Lorenzo Cavallaro. “Intriguing Properties of Adversarial ML Attacks in the Problem Space”. IEEE Symp. Security & Privacy (Oakland), 2020.

Trailer of the talk: https://www.youtube.com/watch?v=lLrnHwrvYiQ
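To make the feature-space versus problem-space distinction concrete, here is a minimal sketch (not taken from the paper) contrasting a gradient-based feature-space perturbation with the extra constraints a problem-space attack must respect. All names used here (gradient_fn, extract_features, is_functional, candidate_transformations) are hypothetical placeholders, and the greedy search loop is only an illustration of the idea, not the attack presented in the talk.

```python
import numpy as np


def feature_space_evasion(x, gradient_fn, epsilon=0.1):
    # Classic feature-space attack: nudge the feature vector against the
    # classifier's gradient. Nothing guarantees that a real program exists
    # with exactly these features (the missing "inverse mapping").
    return x - epsilon * np.sign(gradient_fn(x))


def problem_space_evasion(app, classifier, extract_features,
                          candidate_transformations, is_functional):
    # Problem-space attack: modify the real object (e.g., an Android app)
    # through concrete transformations, keeping only mutants that
    # (i) still work and (ii) lower the classifier's malicious score.
    best = app
    best_score = classifier(extract_features(app))
    for transform in candidate_transformations:
        mutant = transform(best)
        if not is_functional(mutant):        # problem-space constraint
            continue
        score = classifier(extract_features(mutant))
        if score < best_score:               # closer to evading detection
            best, best_score = mutant, score
    return best, best_score
```

The point this sketch tries to capture is that the problem-space loop can only move through realizable, functionality-preserving objects, whereas the feature-space step may land on feature vectors to which no real program corresponds.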
Upcoming talks
Should I trust or should I go? A deep dive into the (not so reliable) web PKI trust model
Speaker: Romain Laborde - University of Toulouse
The padlock shown in the URL bar of our favorite web browser indicates that we are connected over a secure HTTPS connection, providing some sense of security. Unfortunately, the reality is slightly more complex. The trust model of the underlying Web PKI is invalid, making TLS a colossus with feet of clay. In this talk, we will dive into the trust model of the web PKI ecosystem to understand […] (a short certificate-inspection sketch follows this announcement).
SoSysec - Protocols - Network
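As a concrete companion to the announcement above, here is a minimal sketch (not part of the talk material) that shows what the padlock's trust decision boils down to: the certificate chain presented by a server is accepted because it leads back to one of the many root CAs pre-installed in the local trust store. The host name is a hypothetical placeholder.

```python
import socket
import ssl

HOST = "example.org"  # hypothetical host, replace with any HTTPS site

# The default context trusts whichever root CAs ship with the local system:
# any one of those roots can vouch for any domain name, which is at the heart
# of the "colossus with feet of clay" criticism of the Web PKI trust model.
context = ssl.create_default_context()
print("Trusted root CAs loaded:", len(context.get_ca_certs()))

with socket.create_connection((HOST, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # getpeercert() only returns data because validation against the
        # local trust store already succeeded during the handshake.
        cert = tls.getpeercert()
        print("Subject:", dict(x[0] for x in cert["subject"]))
        print("Issuer: ", dict(x[0] for x in cert["issuer"]))
        print("Expires:", cert["notAfter"])
```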