This session was presented on February 3, 2023.

Description

  • Speaker

    Maura Pintor (PRA Lab, University of Cagliari)

To understand the sensitivity of machine-learning models under attack and to develop defense mechanisms, model designers craft worst-case adversarial perturbations, using gradient-descent optimization algorithms against the model under evaluation. However, as more rigorous evaluations have highlighted, many of the proposed defenses provide a false sense of robustness that stems from failures of the attacks rather than from actual improvements in the models' robustness. Although guidelines and best practices have been suggested to improve current adversarial robustness evaluations, the lack of automatic testing and debugging tools makes it difficult to apply these recommendations systematically. Analyzing failures in the optimization of adversarial attacks is therefore the only valid strategy to avoid repeating the mistakes of the past.
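
For readers unfamiliar with the attacks the abstract refers to, the following is a minimal sketch of projected gradient descent (PGD), a standard gradient-based attack of the kind described above. It is an illustration under assumed PyTorch conventions, not code from the talk; the function name pgd_attack and the hyperparameters eps, alpha, and steps are hypothetical.

    # Minimal PGD sketch (illustrative only; hyperparameters are assumptions).
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
        """Maximize the classification loss while staying within eps of x."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Ascend the loss, then project back into the L-infinity
            # eps-ball around x and the valid input range [0, 1].
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            x_adv = x_adv.clamp(0.0, 1.0)
        return x_adv.detach()

A silent failure at any of these steps, for instance gradients zeroed out by a defense, yields an attack that fails for reasons unrelated to the model's true robustness; detecting such optimization failures is the debugging problem the talk addresses.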

Next sessions

  • CHERIoT RTOS: An OS for Fine-Grained Memory-Safe Compartments on Low-Cost Embedded Devices

    • November 21, 2025 (11:00 - 12:00)

    • Inria Center of the University of Rennes - Room Markov

    Speaker: Hugo Lefeuvre (The University of British Columbia)

    Embedded systems do not benefit from strong memory protection because they are designed to minimize cost. At the same time, there is increasing pressure to connect embedded devices to the internet, where their vulnerable nature makes them routinely subject to compromise. This fundamental tension leads to the current status quo, where exploitable devices put individuals and critical infrastructure[…]

    • SoSysec

    • Compartmentalization

    • Operating system and virtualization

    • Hardware/software co-design

    • Hardware architecture
