Talks and presentations

Talk on Distinct computational roles of excitatory and inhibitory neurons at the Simons Foundation

April 13, 2026

Talk, Simons Foundation (SURFiN Symposium), New York City, NY, USA

The balance between excitation and inhibition (E/I) is central to cortical computation and has been implicated in autism, yet the distinct functional roles of excitatory and inhibitory neurons remain unclear. In this talk, I present large-scale recordings from macaque inferior temporal cortex during object recognition and detail how each population contributes to neural representations and behavior. I show that excitatory and inhibitory neurons differ not only in low-level properties (firing rate, latency, …) but also in their decoding performance, manifold geometry, and alignment with artificial neural networks. Moreover, the two populations explain partially dissociable components of image-level behavioral performance, suggesting complementary computational roles. These findings provide a circuit-level framework for understanding how E/I balance shapes visual representations and offer a principled foundation for testing E/I-imbalance hypotheses in computational models of autism.

Reverse Predictivity at CIAN Postdoctoral Seminar

February 06, 2026

Talk, CIAN Monthly Seminar, York University, Toronto, ON, Canada

Systems neuroscience increasingly relies on large-scale computational models to understand how neurons give rise to behavior. How good are these models? An intuitive benchmark comes from asking: how well can one primate brain predict the activity of another? A model that is fully brain-aligned should show a similar symmetry with the brain. In practice, however, model–brain comparisons have only followed a one-way approach. Artificial neural networks (ANNs) are typically evaluated by how well their features predict neural responses (forward predictivity), not by whether the ANN’s internal responses are equally predictable from primate brain activity. Addressing this gap, we introduced reverse predictivity (Muzellec et al., 2025), quantifying how well neural population activity predicts individual ANN units. Applying this bidirectional framework to macaque inferior temporal cortex, we revealed representational mismatches that forward metrics alone fail to detect. Interestingly, factors improving forward predictivity (e.g., increasing model capacity, optimizing single-task performance) reduced reverse predictivity. To avoid such a bottleneck in current model design, we further demonstrated (Ziaee et al., 2025) that ANNs trained with multiple, behaviorally meaningful objectives can achieve improvements in both directions of predictability. Together, these results motivate bidirectional evaluation as a general principle for assessing and developing brain-like ANN models. In this seminar, I will argue that ANNs remain our strongest mechanistic hypotheses of brain function and discuss how reverse predictivity can guide the design of more biologically grounded models.
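The two directions of predictivity can be sketched on synthetic data. This is a minimal illustration, not the analysis from the talk: it assumes a closed-form ridge map and held-out R² as the predictivity score, and it fabricates "brain" and "model" responses that share a latent structure; the real analyses use recorded IT population activity and actual ANN unit activations.

```python
# Sketch of forward vs. reverse predictivity on synthetic data.
# Assumptions (not from the talk): a ridge map and held-out R^2 as the
# score; real analyses use recorded IT responses and ANN activations.
import numpy as np

rng = np.random.default_rng(0)
n_images, n_latent = 600, 20

# Shared latent structure so "brain" and "model" are partially aligned.
latents = rng.normal(size=(n_images, n_latent))
neural = latents @ rng.normal(size=(n_latent, 80)) + 0.5 * rng.normal(size=(n_images, 80))
ann = latents @ rng.normal(size=(n_latent, 120)) + 0.5 * rng.normal(size=(n_images, 120))

def predictivity(X, Y, alpha=1.0, n_train=450):
    """Held-out R^2 of a closed-form ridge regression from X to Y."""
    Xtr, Ytr, Xte, Yte = X[:n_train], Y[:n_train], X[n_train:], Y[n_train:]
    W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ Ytr)
    resid = np.sum((Yte - Xte @ W) ** 2)
    total = np.sum((Yte - Ytr.mean(axis=0)) ** 2)
    return 1.0 - resid / total

forward = predictivity(ann, neural)   # ANN features -> neural responses
reverse = predictivity(neural, ann)   # neural responses -> ANN units
print(f"forward predictivity {forward:.2f}, reverse predictivity {reverse:.2f}")
```

Here the two scores are symmetric by construction; the talk's point is that for real ANN–brain pairs the two directions can dissociate, which a forward-only evaluation would miss.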

Perception en Provence Winter School

January 26, 2026

Talk, RISD & Brown University, Aix-en-Provence, France

Why do some images stay with us long after we’ve seen them, while others are quickly forgotten? This property—called memorability—turns out to be surprisingly consistent across people: if an image is memorable for one person, it is likely to be memorable for others as well. This suggests that memorability is not just subjective, but reflects how our visual system processes and encodes information. To understand and influence memorability, we turn to artificial neural networks (ANNs), computer models inspired by the brain. Modern vision models, such as those used in object recognition, have been shown to closely resemble how the human visual system—particularly the ventral visual stream—represents images. Because of this similarity, they provide a useful tool for probing and predicting human perception. In our work, we use these models to estimate how memorable an image is, and then go a step further: we use them to modify images in ways that are predicted to increase or decrease their memorability. Importantly, these changes are subtle and preserve the overall content of the image. We then test these modified images in human participants to see whether their memory performance changes as expected.
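The estimate-then-edit loop above can be sketched in a few lines. Everything here is a stand-in: the "feature map" is a random projection and the memorability readout is a hypothetical linear scorer, standing in for a real predictor built on ANN (ventral-stream-like) features; only the logic of a small, content-preserving gradient step is the point.

```python
# Toy sketch of model-guided memorability editing. The scorer is a
# hypothetical linear readout on random features, NOT a real
# memorability model; it only illustrates the estimate-then-edit loop.
import numpy as np

rng = np.random.default_rng(1)
H = W = 32
image = rng.uniform(0.0, 1.0, size=(H * W,))          # flattened toy image

proj = rng.normal(size=(H * W, 64)) / np.sqrt(H * W)  # stand-in feature map
readout = rng.normal(size=(64,))                      # stand-in memorability readout

def mem_score(img):
    """Predicted memorability under the toy linear scorer."""
    return float((img @ proj) @ readout)

# For a linear scorer the pixel gradient is constant: proj @ readout.
grad = proj @ readout
step = 0.01                                           # keep the edit subtle
edited = np.clip(image + step * grad / np.linalg.norm(grad), 0.0, 1.0)

print(mem_score(image), mem_score(edited))
change = np.abs(edited - image).max()                 # max per-pixel change
```

The clip keeps pixels valid and the small normalized step keeps the edit barely visible, mirroring the constraint that modifications preserve image content while nudging the predicted score.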

Thesis Defense

July 15, 2025

Talk, University of Toulouse, CerCo - CNRS, Toulouse, France

This talk examines neural synchrony (i.e., the coordinated timing of neuronal activity) as a potential mechanism for visual binding. I introduce three artificial neural network models that induce synchrony through different dynamics and show that these models improve object representation, robustness, and human-like generalization. I then compare these computational results with shared temporal variance (STV, a proxy for synchrony) measured in primate IT cortex, revealing information carried by temporal signals that firing rates alone do not capture. Together, the work provides new evidence that temporal coordination supports visual perception and offers a framework for aligning ANN dynamics more closely with the brain.

VSS Talk on Reverse Predictivity

June 18, 2025

Talk, Vision Science Society (VSS) Conference, TradeWinds Island Resorts, St. Pete Beach, Florida, USA

How should we evaluate whether an artificial neural network truly represents objects the way the primate visual system does? In this talk, we introduce Reverse Predictivity, a complementary metric that tests how well population activity in macaque IT can predict the internal (IT-like) units of models. Using large-scale time-resolved IT recordings and features from diverse ANN architectures, we show that reverse predictivity reveals alignment patterns that traditional encoding analyses (i.e., forward predictivity) miss. Combined, forward and reverse predictivity provide a more complete picture of model–brain similarity, exposing representational differences that are invisible when using only a one-directional evaluation.

ICLR Presentation on FeatureTracker

April 24, 2025

Talk, International Conference on Learning Representations (ICLR), Singapore EXPO

We introduce FeatureTracker, a benchmark designed to test whether models can track objects even as their appearance changes. We also design a complex-valued recurrent neural network that relies on phase synchrony to dynamically bind features belonging to the same object. As the object transforms, neurons representing it align their phases, creating a stable temporal signature that supports robust tracking. Across diverse transformations, synchrony-based tracking outperforms appearance-based baselines, showing that temporal coordination provides a powerful inductive bias for feature binding.
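The binding principle can be illustrated without the full recurrent architecture. This toy sketch (not the paper's model) represents units as complex numbers whose phase "tags" an object: units coding the same object cluster tightly in phase, so within-object phase coherence stays high while pooling across objects does not.

```python
# Toy illustration of phase-based binding (not the paper's network):
# units coding the same object share a phase tag, so within-object
# coherence is high while the pooled population is incoherent.
import numpy as np

rng = np.random.default_rng(2)
phase_a, phase_b = 0.3, 2.4          # phases tagging objects A and B
jitter = 0.1                         # small phase noise per unit

units_a = np.exp(1j * (phase_a + jitter * rng.normal(size=50)))
units_b = np.exp(1j * (phase_b + jitter * rng.normal(size=50)))

def coherence(z):
    """Magnitude of the mean unit-phase vector (1 = perfect synchrony)."""
    return float(np.abs(np.mean(z / np.abs(z))))

within = min(coherence(units_a), coherence(units_b))
pooled = coherence(np.concatenate([units_a, units_b]))
print(f"within-object coherence {within:.2f} vs. pooled {pooled:.2f}")
```

A readout that groups units by phase can therefore recover which features belong together even when their appearance-based features change, which is the inductive bias the benchmark probes.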

AI Tasting Seminar

September 07, 2023

Talk, Torus AI, Toulouse, France

Attribution methods are essential for understanding how deep networks make decisions, yet prediction-based approaches routinely outperform classic gradient-based ones. We show that the key difference lies in their frequency content: gradient-based attribution maps contain excessive high-frequency noise, while prediction-based maps do not. By analyzing gradients across multiple CNN classifiers, we trace this noise to aliasing introduced during downsampling operations. Applying an optimal low-pass filter removes this high-frequency contamination and dramatically improves the performance of gradient-based methods, reshaping the ranking of state-of-the-art attribution techniques. Our results highlight a simple principle: filtering out high-frequency noise restores the faithfulness of gradients. They also point toward a renewed appreciation of efficient, interpretable gradient-based explanations.
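The core operation, low-pass filtering an attribution map, can be sketched with a 2-D Fourier mask. The "attribution" below is synthetic (a smooth signal plus high-frequency noise); in practice the map would come from backpropagation through a CNN, and the cutoff would be chosen from the network's downsampling factor rather than fixed by hand.

```python
# Sketch of the core idea: low-pass filtering a noisy gradient map.
# Synthetic stand-in: smooth "true" attribution + high-frequency noise
# playing the role of aliasing artifacts from downsampling.
import numpy as np

rng = np.random.default_rng(3)
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
smooth = np.outer(np.sin(x), np.cos(x))               # low-frequency "truth"
noisy_grad = smooth + 0.8 * rng.normal(size=(n, n))   # HF contamination

def lowpass(img, cutoff):
    """Zero out 2-D Fourier components above `cutoff` cycles per image."""
    f = np.fft.fft2(img)
    freqs = np.fft.fftfreq(n) * n                     # integer frequencies
    mask = (np.abs(freqs)[:, None] <= cutoff) & (np.abs(freqs)[None, :] <= cutoff)
    return np.real(np.fft.ifft2(f * mask))

filtered = lowpass(noisy_grad, cutoff=4)
err_raw = np.mean((noisy_grad - smooth) ** 2)
err_filt = np.mean((filtered - smooth) ** 2)
print(f"MSE vs. true map: raw {err_raw:.3f}, filtered {err_filt:.3f}")
```

Because the true attribution lives at low spatial frequencies while the contamination is broadband, the mask removes almost all of the noise energy while leaving the signal untouched.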

CCN 2022 Contributed Talk on Neural ODEs

August 28, 2022

Talk, Conference on Cognitive Computational Neuroscience (CCN), San Francisco, CA, USA

This talk explores how standard (discretized) deep-learning implementations of neuroscience models often distort their underlying differential equations, limiting both performance and dynamical accuracy. We present neural ODEs as a more faithful alternative, allowing large-scale models to be trained end-to-end using precise and adaptive ODE solvers. Using predictive coding and hGRU as case studies, we show that neural-ODE implementations yield more stable dynamics and better task performance than conventional Euler-based updates, highlighting neural ODEs as a powerful framework for scaling biologically inspired models without sacrificing their continuous-time structure.
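The discretization point can be made with a minimal example. The system below is a harmonic oscillator, not one of the talk's models (predictive coding, hGRU), but it shows the same failure mode: a coarse fixed-step Euler update, which is what a stacked discrete network implements, drifts away from the true trajectory, while an adaptive solver tracks it.

```python
# Minimal illustration of why solver choice matters: coarse forward
# Euler on a harmonic oscillator drifts (its amplitude grows), while
# an adaptive solver (SciPy's default RK45) stays on the true solution.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, state):
    y, v = state
    return [v, -y]                    # harmonic oscillator: y'' = -y

t_end, dt = 10.0, 0.1
exact = np.cos(t_end)                 # true y(t) for y(0)=1, y'(0)=0

# Fixed-step forward Euler (the "stacked layers" discretization).
y, v = 1.0, 0.0
for _ in range(round(t_end / dt)):
    y, v = y + dt * v, v - dt * y     # simultaneous update from old state
euler_err = abs(y - exact)

# Adaptive RK45 with tight tolerances.
sol = solve_ivp(f, (0.0, t_end), [1.0, 0.0], rtol=1e-8, atol=1e-10)
rk_err = abs(sol.y[0, -1] - exact)
print(f"Euler error {euler_err:.3f} vs. adaptive RK45 error {rk_err:.2e}")
```

Forward Euler inflates the oscillator's energy at every step, so the error compounds; the adaptive solver adjusts its step size to meet the requested tolerance, which is the property neural-ODE training exploits.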