Talks and presentations

Thesis Defense

July 15, 2025

Talk, University of Toulouse, CerCo - CNRS, Toulouse, France

This talk examines neural synchrony (i.e., the coordinated timing of neuronal activity) as a potential mechanism for visual binding. I introduce three artificial neural network models that induce synchrony through different dynamics and show that these models improve object representation, robustness, and human-like generalization. I then compare these computational results with shared temporal variance (STV, a proxy for synchrony) measured in primate IT cortex, revealing information carried by temporal signals that firing rates alone do not capture. Together, the work provides new evidence that temporal coordination supports visual perception and offers a framework for aligning ANN dynamics more closely with the brain.

VSS Talk on Reverse Predictivity

June 18, 2025

Talk, Vision Science Society (VSS) Conference, TradeWinds Island Resorts, St. Pete Beach, Florida, USA

How should we evaluate whether an artificial neural network truly represents objects the way the primate visual system does? In this talk, we introduce Reverse Predictivity, a complementary metric that tests how well population activity in macaque IT can predict the internal (IT-like) units of models. Using large-scale time-resolved IT recordings and features from diverse ANN architectures, we show that reverse predictivity reveals alignment patterns that traditional encoding analyses (i.e., forward predictivity) miss. Combined, forward and reverse predictivity provide a more complete picture of model–brain similarity, exposing representational differences that are invisible when using only a one-directional evaluation.

ICLR Presentation on FeatureTracker

April 24, 2025

Talk, International Conference on Learning Representations (ICLR), Singapore EXPO

We introduce FeatureTracker, a benchmark designed to test whether models can track objects even as their appearance changes. We also design a complex-valued recurrent neural network that relies on phase synchrony to dynamically bind features belonging to the same object. As the object transforms, neurons representing it align their phases, creating a stable temporal signature that supports robust tracking. Across diverse transformations, synchrony-based tracking outperforms appearance-based baselines, showing that temporal coordination provides a powerful inductive bias for feature binding.
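The binding principle can be shown in miniature. This toy example (not the model from the talk) represents each feature as a complex number whose phase tags object membership: features of the same object share a phase, so a readout can group them by phase proximity regardless of their magnitudes (appearance).

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy objects, each binding 5 features. Magnitudes encode feature strength;
# phases are (noisily) shared within an object -- the binding tag.
phase_a, phase_b = 0.3, 2.1
feats_a = rng.uniform(0.5, 1.0, 5) * np.exp(1j * (phase_a + 0.05 * rng.normal(size=5)))
feats_b = rng.uniform(0.5, 1.0, 5) * np.exp(1j * (phase_b + 0.05 * rng.normal(size=5)))
population = np.concatenate([feats_a, feats_b])

# Readout: features whose phase lies within pi/4 of object A's reference phase
# are grouped as belonging to object A (angle difference wrapped to [-pi, pi]).
angles = np.angle(population)
group_a = np.abs(np.angle(np.exp(1j * (angles - phase_a)))) < np.pi / 4
print(group_a)  # first five True, last five False
```

Because the tag lives in phase rather than magnitude, it survives changes in feature strength, which is the intuition behind synchrony as an inductive bias for tracking under appearance change.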

AI Tasting Seminar

September 07, 2023

Talk, Torus AI, Toulouse, France

Attribution methods are essential for understanding how deep networks make decisions, yet prediction-based approaches routinely outperform classic gradient-based ones. We show that the key difference lies in their frequency content: gradient-based attribution maps contain excessive high-frequency noise, while prediction-based maps do not. By analyzing gradients across multiple CNN classifiers, we trace this noise to aliasing introduced during downsampling operations. Applying an optimal low-pass filter removes this high-frequency contamination and dramatically improves the performance of gradient-based methods, reshaping the ranking of state-of-the-art attribution techniques. Our results highlight a simple principle: filtering out high-frequency noise restores the faithfulness of gradients. This points toward a renewed appreciation of efficient, interpretable gradient-based explanations.
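The filtering step itself is simple to sketch. The example below is illustrative only (an ideal FFT low-pass on a synthetic map, not the optimal filter from the talk): a smooth "true evidence" map is corrupted with broadband high-frequency noise, and zeroing Fourier components above a cutoff recovers a map much closer to the underlying signal.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a gradient attribution map: a smooth blob of true evidence
# plus broadband high-frequency noise (e.g., from aliasing in downsampling).
n = 64
yy, xx = np.mgrid[0:n, 0:n]
signal = np.exp(-((xx - 40) ** 2 + (yy - 24) ** 2) / (2 * 8.0 ** 2))
noisy_map = signal + 0.5 * rng.normal(size=(n, n))

def lowpass(img, cutoff):
    """Ideal low-pass: zero all Fourier components above `cutoff` cycles/image."""
    F = np.fft.fftshift(np.fft.fft2(img))
    fy, fx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    F[np.sqrt(fx ** 2 + fy ** 2) > cutoff] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

filtered = lowpass(noisy_map, cutoff=6)

# Filtering should bring the map closer to the underlying evidence.
err_before = np.abs(noisy_map - signal).mean()
err_after = np.abs(filtered - signal).mean()
print(err_before, err_after)
```

The cutoff matters: too low and genuine attribution structure is removed, too high and the aliasing noise survives, which is why choosing the filter carefully is the crux of the method.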

CCN 2022 Contributed Talk on Neural ODEs

August 28, 2022

Talk, Conference on Cognitive Computational Neuroscience (CCN), San Francisco, CA, USA

This talk explores how standard (discretized) deep-learning implementations of neuroscience models often distort their underlying differential equations, limiting both performance and dynamical accuracy. We present neural ODEs as a more faithful alternative, allowing large-scale models to be trained end-to-end using precise and adaptive ODE solvers. Using predictive coding and hGRU as case studies, we show that neural-ODE implementations yield more stable dynamics and better task performance than conventional Euler-based updates, highlighting neural ODEs as a powerful framework for scaling biologically inspired models without sacrificing their continuous-time structure.
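The gap between a coarse discretization and a proper solver is easy to demonstrate on a system far simpler than predictive coding or hGRU. In this sketch (illustrative, not the models from the talk), forward Euler with the step size typical of a discretized recurrent update visibly distorts a harmonic oscillator's energy-conserving dynamics, while an adaptive RK45 solver tracks them accurately.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Harmonic oscillator dx/dt = v, dv/dt = -x; the true dynamics conserve
# energy E = (x^2 + v^2) / 2 = 0.5 for this initial condition.
def f(t, y):
    x, v = y
    return [v, -x]

y0, T, steps = [1.0, 0.0], 20.0, 200

# Coarse forward-Euler update, as in a conventional discretized recurrent model.
y = np.array(y0)
dt = T / steps
for _ in range(steps):
    y = y + dt * np.array(f(0.0, y))
energy_euler = 0.5 * float((y ** 2).sum())

# Adaptive solver (RK45 with error control), as used in neural-ODE training.
sol = solve_ivp(f, (0.0, T), y0, rtol=1e-8, atol=1e-8)
energy_ode = 0.5 * float((sol.y[:, -1] ** 2).sum())

print(abs(energy_euler - 0.5), abs(energy_ode - 0.5))  # Euler drifts far more
```

Forward Euler inflates the oscillator's energy by roughly (1 + dt²) per step, so the error compounds over the trajectory; an error-controlled solver keeps it near machine precision, which is the fidelity argument the talk makes for neural ODEs.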