Summary

  • This talk was given on 3 February 2023.

Description

  • Speaker

    Maura Pintor (PRA Lab, University of Cagliari)

To understand the sensitivity of machine-learning models under attack and to develop defense mechanisms, model designers craft worst-case adversarial perturbations with gradient-descent optimization algorithms against the model under evaluation. However, as more rigorous evaluations have highlighted, many of the proposed defenses provide a false sense of robustness due to failures of the attacks rather than actual improvements in the models' robustness. Although guidelines and best practices have been suggested to improve current adversarial robustness evaluations, the lack of automatic testing and debugging tools makes it difficult to apply these recommendations in a systematic and automated manner. To this end, analyzing failures in the optimization of adversarial attacks is the only valid strategy to avoid repeating the mistakes of the past.
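As an illustration of the kind of gradient-descent attack the abstract refers to, the following is a minimal PGD-style sketch under an L-infinity budget. It assumes a PyTorch classifier and illustrative values for eps, alpha, and steps; it is not the speaker's implementation.

import torch
import torch.nn as nn


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft worst-case perturbations by iterated gradient ascent on the loss."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = x.clone().detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Ascend the loss along the sign of the gradient (L-infinity geometry),
        # then project back into the eps-ball around the clean input.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)

    return x_adv

Evaluating a defense with such an attack is exactly where the failures discussed in the talk can arise: if the optimization silently fails, the model appears more robust than it is.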

Upcoming talks

  • A non-comparison oblivious sort and its application to private k-NN

    • 20 June 2025 (11:00 - 12:00)

    • Inria Center of the University of Rennes - Petri/Turing room

    Speaker: Sofiane Azogagh - UQÀM

    Sorting is a fundamental subroutine of many algorithms and as such has been studied for decades. A well-known result is the Lower Bound Theorem, which states that no comparison-based sorting algorithm can do better than O(n log n) in the worst case. However, in the fifties, new sorting algorithms that do not rely on comparisons were introduced, such as counting sort, which can run in linear time[…] (a minimal counting-sort sketch follows the tag list below)
    • Cryptography

    • SoSysec

    • Privacy

    • Databases

    • Secure storage
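For reference, here is a minimal counting-sort sketch for the non-comparison sorting mentioned in the abstract: it runs in O(n + k) time for n integers bounded by k, sidestepping the comparison lower bound. This is purely illustrative and not the speaker's construction.

def counting_sort(values, max_value):
    # Tally each key, then emit keys in order, repeated by their counts.
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1
    result = []
    for v, c in enumerate(counts):
        result.extend([v] * c)
    return result


print(counting_sort([4, 1, 3, 1, 0], max_value=4))  # [0, 1, 1, 3, 4]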

See past talks