Description
To understand models' sensitivity to attacks and to develop defense mechanisms, machine-learning model designers craft worst-case adversarial perturbations with gradient-descent optimization algorithms against the model under evaluation. However, as more rigorous evaluations have highlighted, many proposed defenses provide a false sense of robustness due to failures of the attacks rather than actual improvements in the models' robustness. Although guidelines and best practices have been suggested to improve current adversarial robustness evaluations, the lack of automatic testing and debugging tools makes it difficult to apply these recommendations in a systematic and automated manner. Analyzing failures in the optimization of adversarial attacks is therefore the only viable strategy to avoid repeating the mistakes of the past.
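To make the kind of gradient-based attack discussed above concrete, here is a minimal, illustrative sketch (not the speaker's code) of an L-infinity projected gradient descent (PGD) attack on a toy logistic-regression model; all names and parameter values (`w`, `epsilon`, `alpha`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model: p(y=1 | x) = sigmoid(w @ x + b).
w = rng.normal(size=5)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(x, y):
    """Cross-entropy loss at (x, y) and its gradient with respect to x."""
    p = sigmoid(w @ x + b)
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad_x = (p - y) * w  # closed-form input gradient for logistic regression
    return loss, grad_x

def pgd_attack(x, y, epsilon=0.3, alpha=0.05, steps=40):
    """Maximize the loss inside an L-inf ball of radius epsilon around x."""
    x_adv = x.copy()
    for _ in range(steps):
        _, g = loss_and_grad(x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)                 # gradient-ascent step
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)   # project back into the ball
    return x_adv

x = rng.normal(size=5)
y = 1
clean_loss, _ = loss_and_grad(x, y)
x_adv = pgd_attack(x, y)
adv_loss, _ = loss_and_grad(x_adv, y)
print(adv_loss > clean_loss)  # the attack should increase the loss
```

The failures the abstract alludes to arise precisely in this loop: a poorly tuned step size, too few iterations, or a gradient masked by the defense can make `adv_loss` barely move, yielding a weak attack and an overestimate of robustness.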
Upcoming talks
Should I trust or should I go? A deep dive into the (not so reliable) web PKI trust model
Speaker: Romain Laborde - University of Toulouse
The padlock shown in the URL bar of our favorite web browser indicates that we are connected over a secure HTTPS connection, providing some sense of security. Unfortunately, the reality is slightly more complex. The trust model of the underlying Web PKI is flawed, making TLS a colossus with feet of clay. In this talk, we will dive into the trust model of the web PKI ecosystem to understand[…]
Tags: SoSysec, Protocols, Network