Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

Posts

publications

Accurate implementation of computational neuroscience models through neural ODEs

Presented as a talk at Conference on Cognitive Computational Neuroscience (CCN), 2022

Master’s internship on the use of neural ODEs to accurately implement computational neuroscience models, leading to better performance and computational efficiency. See the associated talk here.

Recommended citation: Muzellec, S., Chalvidal, M., Serre, T., VanRullen, R. (2022). Accurate implementation of computational neuroscience models through neural ODEs. Conference on Cognitive Computational Neuroscience (CCN).

PDF

Saliency strikes back: How filtering out high frequencies improves white-box explanations

Presented at International Conference on Machine Learning (ICML), 2023

We identify a major limitation of gradient-based white-box attribution methods (their susceptibility to high-frequency artifacts) and introduce FORGrad, a simple filtering approach that removes these distortions using architecture-specific optimal cut-off frequencies. Across models, FORGrad reliably boosts the performance of existing white-box methods, allowing them to rival more accurate but computationally expensive black-box approaches and enabling more faithful and efficient explainability.

Recommended citation: Muzellec, S., Fel, T., Boutin, V., Andeol, L., VanRullen, R., Serre, T. (2023). Saliency strikes back: How filtering out high frequencies improves white-box explanations. International Conference on Machine Learning (ICML).

PDF GitHub
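The core idea, low-pass filtering a gradient attribution map in the Fourier domain, can be sketched in a few lines. This is an illustrative reimplementation, not the released FORGrad code; the cutoff value and the toy saliency map below are made up for demonstration.

```python
import numpy as np

def low_pass_filter(saliency, cutoff):
    """Zero out spatial frequencies above `cutoff` (cycles per image)
    in a 2-D saliency map and return the filtered map."""
    h, w = saliency.shape
    fft = np.fft.fftshift(np.fft.fft2(saliency))
    # Radial frequency of each coefficient, measured from the center.
    fy = np.fft.fftshift(np.fft.fftfreq(h)) * h
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * w
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    fft[radius > cutoff] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(fft)))

# Hypothetical noisy gradient map: a smooth blob plus broadband noise.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
blob = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200.0)
noisy = blob + 0.3 * rng.standard_normal((64, 64))
smoothed = low_pass_filter(noisy, cutoff=8)
```

Because the signal lives at low spatial frequencies while the contamination is broadband, the filtered map ends up closer to the underlying blob than the raw gradient map.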

Benefits of synchrony: Improving deep neural networks using complex values and Kuramoto synchronization

Presented at Conference on Cognitive Computational Neuroscience (CCN), 2023

Deep neural networks continue to improve at visual tasks but still fall short of human generalization. They also struggle to learn abstract concepts and to represent hierarchical relations between objects. The binding by synchrony theory describes how the brain processes visual scenes by synchronizing the activity of neurons that encode features from the same object. Here, we instantiate this theory in an artificial deep neural network through “complex-valued” neurons. These neurons’ magnitude can be interpreted as the firing rate and their phase as a degree of synchrony with respect to other neurons in the population. We first initialize the phases by synchronizing them using an adaptation of the “Kuramoto model” where a coupling kernel is learned to maximize the degree of synchrony/desynchrony between neurons encoding for the same/different objects. We then train a complex-valued neural network on a multi-object classification task, using the Kuramoto-synchronized state as phase initialization. We find that this model outperforms both its real-valued counterpart and a complex model initialized with random phases, exhibiting greater robustness to noise and to occlusions.

Recommended citation: Muzellec, S., Alamia, A., Serre, T., VanRullen, R. (2023). Benefits of synchrony: Improving deep neural networks using complex values and Kuramoto synchronization. Conference on Cognitive Computational Neuroscience (CCN).

PDF
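The phase-initialization step rests on Kuramoto dynamics. A minimal sketch with a hand-set coupling kernel (in the paper the kernel is learned; the group labels, coupling values, and step size below are illustrative):

```python
import numpy as np

def kuramoto_step(phases, coupling, dt=0.1):
    """One Euler update of the Kuramoto model:
    dtheta_i/dt = sum_j K_ij * sin(theta_j - theta_i)."""
    diff = phases[None, :] - phases[:, None]  # theta_j - theta_i
    return phases + dt * np.sum(coupling * np.sin(diff), axis=1)

# Two hypothetical "objects": positive coupling within a group pulls
# phases together; negative coupling across groups pushes them apart.
rng = np.random.default_rng(1)
group = np.array([0, 0, 0, 1, 1, 1])
coupling = np.where(group[:, None] == group[None, :], 1.0, -1.0)
phases = rng.uniform(0, 2 * np.pi, size=6)
for _ in range(200):
    phases = kuramoto_step(phases, coupling)

# Order parameter: 1 = fully synchronized, 0 = fully dispersed.
sync = lambda idx: np.abs(np.mean(np.exp(1j * phases[idx])))
```

After a few hundred steps, phases within each group converge while the two groups settle out of phase, which is exactly the synchrony/desynchrony structure the learned kernel is optimized to produce.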

Tracking objects that change in appearance with phase synchrony

Presented at Conference on Cognitive Computational Neuroscience (CCN), 2024

See the associated talk here.

Recommended citation: Muzellec, S.*, Linsley, D.*, Ashok, A. K., VanRullen, R., Serre, T. (2024). Tracking objects that change in appearance with phase synchrony. Conference on Cognitive Computational Neuroscience (CCN).

PDF

Tracking objects that change in appearance with phase synchrony

Presented at International Conference on Learning Representations (ICLR), 2024

We introduce a complex-valued recurrent neural network (CV-RNN) that leverages neural synchrony to bind the different features of target objects while they change locations, offering a biologically inspired mechanism for tracking appearance-morphing objects. Using FeatureTracker, a large-scale challenge involving controlled changes in object shape, color, and location, we show that while standard deep networks fail, the CV-RNN closely matches human performance, providing a computational proof-of-concept for synchrony-based object tracking. See the associated talk here.

Recommended citation: Muzellec, S.*, Linsley, D.*, Ashok, A. K., Mingolla, E., Malik, G., VanRullen, R., Serre, T. (2024). Tracking objects that change in appearance with phase synchrony. International Conference on Learning Representations (ICLR).

PDF GitHub

Latent Representation Matters: Human-like Sketches in One-shot Drawing Tasks

Presented at Advances in Neural Information Processing Systems (NeurIPS), 2024

We systematically investigate how different inductive biases shape the latent space of Latent Diffusion Models (LDMs) in one-shot drawing tasks, comparing standard LDM regularizers with supervised and contrastive objectives. We find that redundancy-reduction and prototype-based regularizations enable LDMs to generate near-human-like drawings (highly recognizable and original), bringing machine one-shot generation remarkably close to human performance.

Recommended citation: Boutin, V., Mukherji, R., Agrawal, A., Muzellec, S., Fel, T., Serre, T., VanRullen, R. (2024). Latent Representation Matters: Human-like Sketches in One-shot Drawing Tasks. Advances in Neural Information Processing Systems (NeurIPS), 37, 96282-96324.

PDF

Enhancing deep neural networks through complex-valued representations and Kuramoto synchronization dynamics

Published in Transactions on Machine Learning Research (TMLR), 2025

We investigate whether neural-inspired synchrony can improve object binding in deep learning models, testing whether complex-valued representations and Kuramoto dynamics can align phases and group features belonging to the same object. Across multi-object and noisy visual categorization tasks, including overlapping digits and out-of-distribution inputs, both feedforward and recurrent synchrony-based models outperform real-valued and unsynchronized complex networks, demonstrating that phase-based mechanisms can substantially enhance performance, robustness, and generalization.

Recommended citation: Muzellec, S., Alamia, A., Serre, T., VanRullen, R. (2025). Enhancing deep neural networks through complex-valued representations and Kuramoto synchronization dynamics. Transactions on Machine Learning Research (TMLR).

PDF GitHub

In Sync with the Brain: Modeling Visual Binding through Neural Synchrony

PhD thesis, University of Toulouse, 2025

This thesis tests the long-standing idea that neural synchrony helps the brain bind visual features into objects. By developing neural network models that incorporate synchrony in different ways, it shows that temporal coordination improves object representations, robustness, and human-like behavior. Analyses of primate IT recordings further suggest that synchrony carries meaningful information beyond firing rates. These findings support synchrony as a useful mechanism for visual perception and for designing more brain-like AI models. See the associated talk here.

Recommended citation: Muzellec, S. (2025). In Sync with the Brain: Modeling Visual Binding through Neural Synchrony. PhD thesis, Université de Toulouse.

PDF

GASPnet: Global Agreement to Synchronize Phases

Under revision at Neurocomputing, 2025

We introduce a new, neuroscience-inspired binding mechanism that integrates Transformer-style attention with phase-based synchrony, enabling neural networks to selectively enhance interactions between neurons with aligned temporal phases while suppressing mismatched signals. By incorporating Kuramoto-driven phase alignment into all layers of a convolutional network, our model achieves higher accuracy, stronger noise robustness, and better generalization than standard CNNs on multi-object datasets, offering a principled solution to the visual binding problem in artificial networks.

Recommended citation: Alamia, A.*, Muzellec, S.*, Serre, T., VanRullen, R. (2025). GASPnet: Global Agreement to Synchronize Phases. arXiv preprint arXiv:2507.16674

PDF
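One way to picture the mechanism is attention scores gated by pairwise phase agreement. The sketch below is our own toy illustration, not the paper's exact formulation: the cosine gating, the clipping, and the example values are all assumptions.

```python
import numpy as np

def phase_gated_attention(values, scores, phases):
    """Gate raw attention scores by phase agreement, cos(phi_i - phi_j):
    in-phase pairs keep their score, out-of-phase pairs are suppressed."""
    agreement = np.cos(phases[:, None] - phases[None, :])  # in [-1, 1]
    gated = scores * np.clip(agreement, 0.0, None)         # suppress mismatches
    weights = np.exp(gated) / np.exp(gated).sum(axis=1, keepdims=True)
    return weights, weights @ values

# Three units: 0 and 1 are nearly in phase, 2 is anti-phase to both.
phases = np.array([0.0, 0.1, np.pi])
scores = np.ones((3, 3))   # identical raw scores, so phase alone decides
values = np.eye(3)
weights, out = phase_gated_attention(values, scores, phases)
```

With identical raw scores, unit 0 attends more strongly to its in-phase partner (unit 1) than to the anti-phase unit 2, which is the qualitative behavior the paper's binding mechanism relies on.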

Reverse Predictivity: Going Beyond One-Way Mapping to Compare Artificial Neural Network Models and Brains

Under Revision at Nature Machine Intelligence, 2025

We introduce reverse predictivity, a new metric that complements traditional forward predictivity by testing bidirectional alignment (how well ANN representations can be predicted by macaque IT activity). While monkey-to-monkey and same-architecture ANN initializations show symmetry, ANNs exhibit a striking asymmetry with IT, and we identify factors (such as adversarial training) that reduce these non-IT-aligned representational dimensions and improve ANN–brain alignment. See the associated talk here.

Recommended citation: Muzellec, S. & Kar, K. (2025). Reverse Predictivity: Going Beyond One-Way Mapping to Compare Artificial Neural Network Models and Brains. bioRxiv, 2025.08.08.669382

PDF GitHub Data

Tracking objects that change in appearance with phase synchrony

Presented at The 3rd Cold Spring Harbor conference on From Neuroscience to Artificially Intelligent Systems (NAISys), 2025

See the associated talk here.

Recommended citation: Muzellec, S.*, Linsley, D.*, Ashok, A. K., Mingolla, E., Malik, G., VanRullen, R., Serre, T. (2024). Tracking objects that change in appearance with phase synchrony. The 3rd Cold Spring Harbor conference on From Neuroscience to Artificially Intelligent Systems (NAISys).

PDF

Distinct contributions of memorability and object recognition to the representational goals of the macaque inferior temporal cortex

Submitted to PNAS, 2025

We show that incorporating image memorability as an additional training objective—in addition to object recognition—substantially improves ANN alignment with primate inferior temporal (IT) cortex. Models jointly optimized for recognition and memorability capture non-overlapping neural variance, reduce non–brain-like units, and better match human memorability patterns, revealing that IT supports multiple representational goals beyond recognition alone.

Recommended citation: Ziaee, S.*, Ahuja, R.*, Muzellec, S., Fide, E., Rosenbaum, S. R., & Kar, K. (2025). Distinct contributions of memorability and object recognition to the representational goals of the macaque inferior temporal cortex. bioRxiv, 2025.10.06.680822

PDF GitHub

MAPS: Masked Attribution-based Probing of Strategies — A computational framework to align human and model explanations

Under revision at Communications Psychology, 2025

MAPS is a validated tool that identifies which ANN explanations best match human vision. By converting attribution maps into explanation-masked images (EMIs), it links model explanations with human and macaque behavior and enables principled comparison of interpretability methods using far fewer trials.

Recommended citation: Muzellec, S., Alghetaa, Y. K., Kornblith, S., & Kar, K. (2025). MAPS: Masked Attribution-based Probing of Strategies — A computational framework to align human and model explanations. arXiv preprint arXiv:2510.12141

PDF GitHub Data

talks

CCN 2022 Contributed Talk on Neural ODEs

This talk explores how standard (discretized) deep-learning implementations of neuroscience models often distort their underlying differential equations, limiting both performance and dynamical accuracy. We present neural ODEs as a more faithful alternative, allowing large-scale models to be trained end-to-end using precise and adaptive ODE solvers. Using predictive coding and hGRU as case studies, we show that neural-ODE implementations yield more stable dynamics and better task performance than conventional Euler-based updates, highlighting neural ODEs as a powerful framework for scaling biologically inspired models without sacrificing their continuous-time structure.
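The gist can be illustrated on a toy leaky unit, dh/dt = -h + tanh(w*h + x): a few coarse Euler steps drift from the true trajectory, while an adaptive solver (here SciPy's solve_ivp, standing in for a neural-ODE solver) tracks it closely. This is our own toy example, not the hGRU or predictive-coding models from the talk; all constants are made up.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy recurrent dynamics: dh/dt = -h + tanh(w * h + x)
w, x = 0.5, 1.0
f = lambda t, h: -h + np.tanh(w * h + x)
T = 5.0

# Coarse fixed-step Euler: the discretization many implementations use.
h_euler, dt = 0.0, T / 5
for _ in range(5):
    h_euler = h_euler + dt * f(0.0, h_euler)

# Adaptive-step solver, as used in neural-ODE training.
h_ode = solve_ivp(f, (0.0, T), [0.0], rtol=1e-8, atol=1e-10).y[0, -1]

# Reference trajectory: very fine Euler integration as ground truth.
h_ref, dt_ref = 0.0, T / 20000
for _ in range(20000):
    h_ref = h_ref + dt_ref * f(0.0, h_ref)
```

The adaptive solution lands much closer to the reference than the five coarse Euler steps, which is the discretization error the talk argues distorts larger neuroscience models.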

AI Tasting Seminar

Attribution methods are essential for understanding how deep networks make decisions, yet prediction-based approaches routinely outperform classic gradient-based ones. We show that the key difference lies in their frequency content: gradient-based attribution maps contain excessive high-frequency noise, while prediction-based maps do not. By analyzing gradients across multiple CNN classifiers, we trace this noise to aliasing introduced during downsampling operations. Applying an optimal low-pass filter removes this high-frequency contamination and dramatically improves the performance of gradient-based methods, reshaping the ranking of state-of-the-art attribution techniques. Our results highlight a simple principle, that filtering out high-frequency noise restores the faithfulness of gradients, and point toward a renewed appreciation of efficient, interpretable gradient-based explanations.

ICLR Presentation on FeatureTracker

We introduce FeatureTracker, a benchmark designed to test whether models can track objects even as their appearance changes. We also design a complex-valued recurrent neural network that relies on phase synchrony to dynamically bind features belonging to the same object. As the object transforms, neurons representing it align their phases, creating a stable temporal signature that supports robust tracking. Across diverse transformations, synchrony-based tracking outperforms appearance-based baselines, showing that temporal coordination provides a powerful inductive bias for feature binding.

VSS Talk on Reverse Predictivity

How should we evaluate whether an artificial neural network truly represents objects the way the primate visual system does? In this talk, we introduce Reverse Predictivity, a complementary metric that tests how well population activity in macaque IT can predict the internal (IT-like) units of models. Using large-scale time-resolved IT recordings and features from diverse ANN architectures, we show that reverse predictivity reveals alignment patterns that traditional encoding analyses (i.e., forward predictivity) miss. Combined, forward and reverse predictivity provide a more complete picture of model–brain similarity, exposing representational differences that are invisible when using only a one-directional evaluation.

Thesis Defense

This talk examines neural synchrony (i.e., the coordinated timing of neuronal activity) as a potential mechanism for visual binding. I introduce three artificial neural network models that induce synchrony through different dynamics and show that these models improve object representation, robustness, and human-like generalization. I then compare these computational results with shared temporal variance (STV, proxy for synchrony) measured in primate IT cortex, revealing information carried by temporal signals that firing rates alone do not capture. Together, the work provides new evidence that temporal coordination supports visual perception and offers a framework for aligning ANN dynamics more closely with the brain.

teaching