Summary

  • This talk was presented on January 6, 2023.

Description

  • Speaker

    Romain Thomas (Quarkslab)

SafetyNet is the Android component developed by Google to verify device integrity. Developers use these checks to prevent their applications from running on devices that do not meet security requirements, and Google also uses them to prevent bots, fraud, and abuse.

In 2017, Collin Mulliner & John Kozyrakis gave one of the first public presentations about SafetyNet, offering a glimpse into its internal mechanisms. Since then, the Google anti-abuse team has strengthened the solution, moving most of SafetyNet's original Java layer into a native module called DroidGuard. This module implements a custom virtual machine that runs proprietary bytecode provided by Google to perform the device integrity checks.

The purpose of this talk is to present the state of the art of the current implementation of SafetyNet. In particular, we aim to present the internal mechanisms behind SafetyNet and the DroidGuard module. This includes an overview of the VM design and its internal mechanisms, and we will introduce the security checks performed by SafetyNet to detect Magisk, emulators, rooted devices, and even Pegasus.
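The abstract describes DroidGuard as a custom virtual machine executing server-supplied bytecode. As a rough intuition for that architecture (not DroidGuard's actual design, whose opcodes and checks are proprietary), here is a minimal sketch of a stack-based bytecode interpreter whose opcode set and root-detection check are entirely hypothetical:

```python
# Illustrative sketch only: a tiny stack-based bytecode VM of the general
# kind the abstract describes. Opcode names, values, and the integrity
# check are invented for illustration; they are not DroidGuard's.

import os

PUSH_CONST, CHECK_PATH_EXISTS, RET = 0x01, 0x02, 0x03  # hypothetical opcodes

def run(bytecode, constants):
    """Execute a small stack-based program and return its result."""
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]; pc += 1
        if op == PUSH_CONST:              # push constants[idx] onto the stack
            stack.append(constants[bytecode[pc]]); pc += 1
        elif op == CHECK_PATH_EXISTS:     # e.g. probe for a root artifact
            stack.append(os.path.exists(stack.pop()))
        elif op == RET:                   # return top of stack
            return stack.pop()
        else:
            raise ValueError(f"unknown opcode {op:#x}")

# A "program" the server could ship: check whether /sbin/su is present.
program = bytes([PUSH_CONST, 0, CHECK_PATH_EXISTS, RET])
print(run(program, constants=["/sbin/su"]))  # False unless /sbin/su exists
```

Because the program arrives as opaque bytecode and the VM can change between releases, analyzing such a design requires reverse engineering the interpreter itself, which is part of what makes DroidGuard harder to study than the original Java layer.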

Upcoming talks

  • Towards privacy-preserving and fairness-aware federated learning framework

    • September 19, 2025 (11:00 - 12:00)

    • Inria Center of the University of Rennes - Petri/Turing room

    Speaker: Nesrine Kaaniche - Télécom SudParis

    Federated Learning (FL) enables the distributed training of a model across multiple data owners under the orchestration of a central server responsible for aggregating the models generated by the different clients. However, the original approach of FL has significant shortcomings related to privacy and fairness requirements. Specifically, the observation of the model updates may lead to privacy[…]
    • Cryptography

    • SoSysec

    • Privacy

    • Machine learning
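The server-side aggregation this abstract refers to can be sketched as a FedAvg-style weighted mean of client model updates. The sketch below is a generic illustration of that step, not the talk's framework; the toy one-dimensional "models" and weighting by local dataset size are standard FedAvg assumptions:

```python
# Minimal FedAvg-style aggregation sketch: the server averages client
# model vectors, weighting each client by its local dataset size.

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model vectors, weights ∝ dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients with 1-D "models" [2.0] and [4.0], holding 10 and 30 samples:
print(fed_avg([[2.0], [4.0]], [10, 30]))  # [3.5]
```

Note that in this plain form the server sees each client's update in the clear, which is exactly the observation-based privacy leakage the abstract mentions and that privacy-preserving FL designs aim to remove.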

  • NEAT: A Nile-English Aligned Translation Corpus based on a Robust Methodology for Intent Based Networking and Security

    • September 26, 2025 (11:00 - 12:00)

    • Inria Center of the University of Rennes - Room Métivier

    Speaker: Pierre Alain - IUT de Lannion

    The rise of Intent Based Networking (IBN) has paved the way for more efficient network and security management, reduced errors, and accelerated deployment times by leveraging AI processes capable of translating natural language intents into policies or configurations. Specialized neural networks could offer a promising solution at the core of translation operations. Still, they require dedicated,[…]
    • SoSysec

    • Network

    • Security policies
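To make the intent-to-policy translation concrete: IBN systems take a natural-language intent and emit a network configuration or policy. The talk concerns neural translation models for this; the rule-based stand-in below only illustrates the input/output shape, and every intent and policy string in it is invented:

```python
# Toy illustration of intent-to-policy translation in IBN. Real systems
# use trained translation models; this lookup table is a stand-in that
# only shows the shape of the task. All intents/rules are invented.

INTENT_RULES = {
    "block ssh from guests": "deny tcp from vlan-guest to any port 22",
    "allow web traffic": "permit tcp from any to any port 80,443",
}

def translate_intent(intent: str) -> str:
    """Map a natural-language intent to a firewall-style policy line."""
    policy = INTENT_RULES.get(intent.lower().strip())
    if policy is None:
        raise ValueError(f"no translation for intent: {intent!r}")
    return policy

print(translate_intent("Block SSH from guests"))
# deny tcp from vlan-guest to any port 22
```

The gap between this lookup table and a robust neural translator is precisely why dedicated, aligned corpora such as the one presented in this talk are needed.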

  • Black-Box Collision Attacks on Widely Deployed Perceptual Hash Functions and Their Consequences

    • October 3, 2025 (11:00 - 12:00)

    • Inria Center of the University of Rennes - Métivier room

    Speaker: Diane Leblanc-Albarel - KU Leuven

    Perceptual hash functions identify multimedia content by mapping similar inputs to similar outputs. They are widely used for detecting copyright violations and illegal content but lack transparency, as their design details are typically kept secret. Governments are considering extending the application of these functions to Client-Side Scanning (CSS) for end-to-end encrypted services: multimedia[…]
    • Cryptography

    • SoSysec
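The core property of the perceptual hash functions discussed above (similar inputs map to similar outputs) can be illustrated with a simple average hash over a grayscale pixel grid, compared by Hamming distance. Deployed functions such as PhotoDNA or NeuralHash are far more elaborate and partly secret; this sketch only shows the principle:

```python
# Illustrative average-hash (aHash) sketch: similar images yield hashes
# at small Hamming distance. The tiny 8-pixel "images" are invented.

def average_hash(pixels):
    """Map a flat list of grayscale values to bits: 1 if >= mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p >= mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits; small distance = perceptually similar."""
    return sum(a != b for a, b in zip(h1, h2))

img   = [10, 200, 30, 220, 15, 210, 25, 215]   # a tiny 8-pixel "image"
noisy = [12, 198, 33, 219, 14, 212, 22, 216]   # same image, slight noise
other = [200, 10, 220, 30, 210, 15, 215, 25]   # inverted pattern

print(hamming(average_hash(img), average_hash(noisy)))  # 0: near-duplicate
print(hamming(average_hash(img), average_hash(other)))  # 8: unrelated
```

A collision attack of the kind the talk examines would craft a visually unrelated input whose hash nonetheless lands within the match threshold of a target, which is what makes such attacks consequential for CSS proposals.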

  • Malware Detection with AI Systems: bridging the gap between industry and academia

    • October 9, 2025 (11:00)

    • Inria Center of the University of Rennes - Room Aurigny

    Speaker: Luca Demetrio - University of Genova

    With the abundance of programs developed every day, it is possible to develop next-generation antivirus programs that leverage this vast accumulated knowledge. In practice, these technologies are developed with a mixture of established techniques like pattern matching, and machine learning algorithms, both tailored to achieve high detection rates and few false alarms. While companies state the[…]
    • SoSysec

    • Intrusion detection

    • Machine learning
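The "mixture" of pattern matching and machine learning this abstract mentions can be sketched as a detector that flags a sample if a signature matches or a learned score crosses a threshold. Everything below (signatures, features, weights) is invented for illustration and is not any vendor's actual pipeline:

```python
# Toy hybrid detector sketch: signature (pattern) matching combined with
# a stand-in for a trained model. Signatures, features, and weights are
# invented for illustration only.

SIGNATURES = [b"\xde\xad\xbe\xef", b"eval(base64"]  # hypothetical patterns

def pattern_match(sample: bytes) -> bool:
    return any(sig in sample for sig in SIGNATURES)

def ml_score(sample: bytes) -> float:
    """Stand-in for a trained model: a linear score on two crude features."""
    diversity = len(set(sample)) / 256               # byte diversity in [0, 1]
    ff_ratio = sample.count(0xFF) / max(len(sample), 1)
    return 0.7 * diversity + 0.3 * ff_ratio

def detect(sample: bytes, threshold: float = 0.5) -> bool:
    """Flag if a signature hits OR the learned score crosses the threshold."""
    return pattern_match(sample) or ml_score(sample) > threshold

print(detect(b"hello eval(base64 payload"))  # True: signature hit
print(detect(b"plain benign text"))          # False: no hit, low score
```

The tension the talk addresses lives in exactly this structure: industry tunes such pipelines for low false-alarm rates at scale, while academic evaluations often measure the ML component in isolation.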

See past talks