Publications

You can also find my articles on my Google Scholar profile.

Preprints


MAPS: Masked Attribution-based Probing of Strategies — A computational framework to align human and model explanations

Under revision at Communications Psychology, 2025

MAPS is a validated tool that identifies which ANN explanations best match human visual strategies. By converting attribution maps into explanation-masked images (EMIs), it links model explanations with human and macaque behavior and enables principled comparison of interpretability methods using far fewer trials.

Recommended citation: Muzellec, S., Alghetaa, Y. K., Kornblith, S., & Kar, K. (2025). MAPS: Masked Attribution-based Probing of Strategies — A computational framework to align human and model explanations. arXiv preprint arXiv:2510.12141

PDF GitHub Data
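As a rough illustration of the masking idea described above, the sketch below keeps only the most-attributed pixels of an image and replaces the rest with uniform gray. The `explanation_masked_image` helper, the `keep_fraction` parameter, and the toy data are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def explanation_masked_image(image, attribution, keep_fraction=0.1, fill_value=0.5):
    """Reveal only the most-attributed pixels; fill the rest with gray.

    image:       (H, W, C) float array in [0, 1]
    attribution: (H, W) float array; higher values = more important to the model
    """
    threshold = np.quantile(attribution, 1.0 - keep_fraction)
    mask = attribution >= threshold          # top `keep_fraction` of pixels
    emi = np.full_like(image, fill_value)    # uniform gray background
    emi[mask] = image[mask]                  # copy through the attributed pixels
    return emi

# Toy example on a random 4x4 single-channel image
rng = np.random.default_rng(0)
img = rng.random((4, 4, 1))
attr = rng.random((4, 4))
emi = explanation_masked_image(img, attr, keep_fraction=0.25)
```

Observers are then shown such EMIs; the attribution method whose EMIs best preserve recognition behavior is judged the best-aligned explanation.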

Distinct contributions of memorability and object recognition to the representational goals of the macaque inferior temporal cortex

Submitted to PNAS, 2025

We show that incorporating image memorability as an additional training objective—in addition to object recognition—substantially improves ANN alignment with primate inferior temporal (IT) cortex. Models jointly optimized for recognition and memorability capture non-overlapping neural variance, reduce non–brain-like units, and better match human memorability patterns, revealing that IT supports multiple representational goals beyond recognition alone.

Recommended citation: Ziaee, S.*, Ahuja, R.*, Muzellec, S., Fide, E., Rosenbaum, S. R., & Kar, K. (2025). Distinct contributions of memorability and object recognition to the representational goals of the macaque inferior temporal cortex. bioRxiv, 2025.10.06.680822

PDF GitHub

Reverse Predictivity: Going Beyond One-Way Mapping to Compare Artificial Neural Network Models and Brains

Under revision at Nature Machine Intelligence, 2025

We introduce reverse predictivity, a metric that complements traditional forward predictivity by testing alignment in the opposite direction: how well ANN representations can be predicted from macaque IT activity. While monkey-to-monkey comparisons and different initializations of the same ANN architecture show symmetric predictivity, ANNs exhibit a striking asymmetry with IT, and we identify factors (such as adversarial training) that reduce these non-IT-aligned representational dimensions and improve ANN–brain alignment. See the associated talk here.

Recommended citation: Muzellec, S. & Kar, K. (2025). Reverse Predictivity: Going Beyond One-Way Mapping to Compare Artificial Neural Network Models and Brains. bioRxiv, 2025.08.08.669382

PDF GitHub Data
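The asymmetry test can be sketched in a toy setting: fit a linear map in each direction and compare the resulting R². The `predictivity` helper (closed-form ridge) and the synthetic data below are illustrative assumptions, not the paper's cross-validated pipeline on neural recordings:

```python
import numpy as np

def predictivity(X, Y, alpha=1.0):
    """Mean R^2 when ridge-regressing each column of Y on X (closed form)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    W = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(X.shape[1]), Xc.T @ Yc)
    ss_res = ((Yc - Xc @ W) ** 2).sum(axis=0)
    ss_tot = (Yc ** 2).sum(axis=0)
    return float(np.mean(1.0 - ss_res / ss_tot))

rng = np.random.default_rng(1)
ann = rng.standard_normal((200, 50))       # ANN features: images x units
shared = ann[:, :10]                       # suppose IT "sees" only a 10-d subspace
it = shared @ rng.standard_normal((10, 20)) + 0.1 * rng.standard_normal((200, 20))

forward = predictivity(ann, it)   # ANN -> IT: high, IT lies in the ANN's span
reverse = predictivity(it, ann)   # IT -> ANN: lower, the ANN has extra dimensions
```

The gap between `forward` and `reverse` mirrors the asymmetry the paper reports; symmetric benchmarks such as monkey-to-monkey comparisons anchor its interpretation.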

GASPnet: Global Agreement to Synchronize Phases

Under revision at Neurocomputing, 2025

We introduce a new, neuroscience-inspired binding mechanism that integrates Transformer-style attention with phase-based synchrony, enabling neural networks to selectively enhance interactions between neurons with aligned temporal phases while suppressing mismatched signals. By incorporating Kuramoto-driven phase alignment into all layers of a convolutional network, our model achieves higher accuracy, stronger noise robustness, and better generalization than standard CNNs on multi-object datasets, offering a principled solution to the visual binding problem in artificial networks.

Recommended citation: Alamia, A.*, Muzellec, S.*, Serre, T., VanRullen, R. (2025). GASPnet: Global Agreement to Synchronize Phases. arXiv preprint arXiv:2507.16674

PDF

Journal Articles


Enhancing deep neural networks through complex-valued representations and Kuramoto synchronization dynamics

Published in Transactions on Machine Learning Research (TMLR), 2025

We investigate whether neural-inspired synchrony can improve object binding in deep learning models, testing whether complex-valued representations and Kuramoto dynamics can align phases and group features belonging to the same object. Across multi-object and noisy visual categorization tasks, including overlapping digits and out-of-distribution inputs, both feedforward and recurrent synchrony-based models outperform real-valued and unsynchronized complex networks, demonstrating that phase-based mechanisms can substantially enhance performance, robustness, and generalization.

Recommended citation: Muzellec, S., Alamia, A., Serre, T., VanRullen, R. (2025). Enhancing deep neural networks through complex-valued representations and Kuramoto synchronization dynamics. Transactions on Machine Learning Research (TMLR).

PDF GitHub

AI Conference Papers


Latent Representation Matters: Human-like Sketches in One-shot Drawing Tasks

Presented at Advances in Neural Information Processing Systems (NeurIPS), 2024

We systematically investigate how different inductive biases shape the latent space of Latent Diffusion Models (LDMs) in one-shot drawing tasks, comparing standard LDM regularizers with supervised and contrastive objectives. We find that redundancy-reduction and prototype-based regularizations enable LDMs to generate near-human-like drawings (highly recognizable and original), bringing machine one-shot generation remarkably close to human performance.

Recommended citation: Boutin, V., Mukherji, R., Agrawal, A., Muzellec, S., Fel, T., Serre, T., VanRullen, R. (2024). Latent Representation Matters: Human-like Sketches in One-shot Drawing Tasks. Advances in Neural Information Processing Systems (NeurIPS) 37, 96282-96324

PDF

Tracking objects that change in appearance with phase synchrony

Presented at International Conference on Learning Representations (ICLR), 2024

We introduce a complex-valued recurrent neural network (CV-RNN) that leverages neural synchrony to bind the different features of target objects while they change locations, offering a biologically inspired mechanism for tracking appearance-morphing objects. Using FeatureTracker, a large-scale challenge involving controlled changes in object shape, color, and location, we show that while standard deep networks fail, the CV-RNN closely matches human performance, providing a computational proof-of-concept for synchrony-based object tracking. See the associated talk here.

Recommended citation: Muzellec, S.*, Linsley, D.*, Ashok, A. K., Mingolla, E., Malik, G., VanRullen, R., Serre, T. (2024). Tracking objects that change in appearance with phase synchrony. International Conference on Learning Representations (ICLR).

PDF GitHub

Saliency strikes back: How filtering out high frequencies improves white-box explanations

Presented at International Conference on Machine Learning (ICML), 2023

We identify a major limitation of gradient-based white-box attribution methods (their susceptibility to high-frequency artifacts) and introduce FORGrad, a simple filtering approach that removes these distortions using architecture-specific optimal cut-off frequencies. Across models, FORGrad reliably boosts the performance of existing white-box methods, allowing them to rival more accurate but computationally expensive black-box approaches and enabling more faithful and efficient explainability.

Recommended citation: Muzellec, S., Fel, T., Boutin, V., Andeol, L., VanRullen, R., Serre, T. (2023). Saliency strikes back: How filtering out high frequencies improves white-box explanations. International Conference on Machine Learning (ICML).

PDF GitHub
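The core filtering step can be sketched as a radial low-pass filter in the Fourier domain. The `low_pass` function, the fixed `cutoff` value, and the toy data below are illustrative assumptions; FORGrad itself selects architecture-specific optimal cut-off frequencies:

```python
import numpy as np

def low_pass(saliency, cutoff=0.1):
    """Zero out all spatial frequencies above `cutoff` (in cycles per pixel)."""
    fy = np.fft.fftfreq(saliency.shape[0])[:, None]
    fx = np.fft.fftfreq(saliency.shape[1])[None, :]
    keep = np.sqrt(fy ** 2 + fx ** 2) <= cutoff      # radial low-pass mask
    return np.real(np.fft.ifft2(np.fft.fft2(saliency) * keep))

# Toy "saliency map": a smooth component corrupted by pixel-level alternation
n = 32
y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
smooth = np.cos(2 * np.pi * 2 * y / n)               # 2 cycles per image (low frequency)
noise = 0.5 * (-1.0) ** (x + y)                      # Nyquist-frequency checkerboard
filtered = low_pass(smooth + noise, cutoff=0.1)      # recovers the smooth component
```

Applied to a gradient-based attribution map, this kind of filtering suppresses the high-frequency artifacts identified in the paper while preserving the spatially coherent explanation.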

Neuroscience Conference Papers


Tracking objects that change in appearance with phase synchrony

Presented at The 3rd Cold Spring Harbor conference on From Neuroscience to Artificially Intelligent Systems (NAISys), 2025

See the associated talk here.

Recommended citation: Muzellec, S.*, Linsley, D.*, Ashok, A. K., Mingolla, E., Malik, G., VanRullen, R., Serre, T. (2024). Tracking objects that change in appearance with phase synchrony. The 3rd Cold Spring Harbor conference on From Neuroscience to Artificially Intelligent Systems (NAISys).

PDF

Tracking objects that change in appearance with phase synchrony

Presented at Conference on Cognitive Computational Neuroscience (CCN), 2024

See the associated talk here.

Recommended citation: Muzellec, S.*, Linsley, D.*, Ashok, A. K., VanRullen, R., Serre, T. (2024). Tracking objects that change in appearance with phase synchrony. Conference on Cognitive Computational Neuroscience (CCN).

PDF

Benefits of synchrony: Improving deep neural networks using complex values and Kuramoto synchronization

Presented at Conference on Cognitive Computational Neuroscience (CCN), 2023

Deep neural networks continue to improve at visual tasks but still fall short of human generalization. They also struggle to learn abstract concepts and to represent hierarchical relations between objects. The binding by synchrony theory describes how the brain processes visual scenes by synchronizing the activity of neurons that encode features from the same object. Here, we instantiate this theory in an artificial deep neural network through "complex-valued" neurons. These neurons' magnitude can be interpreted as the firing rate and their phase as a degree of synchrony with respect to other neurons in the population. We first initialize the phases by synchronizing them using an adaptation of the "Kuramoto model" where a coupling kernel is learned to maximize the degree of synchrony/desynchrony between neurons encoding for the same/different objects. We then train a complex-valued neural network on a multi-object classification task, using the Kuramoto-synchronized state as phase initialization. We find that this model outperforms both its real-valued counterpart and a complex model initialized with random phases – exhibiting greater robustness to noise and to occlusions.

Recommended citation: Muzellec, S., Alamia, A., Serre, T., VanRullen, R. (2023). Benefits of synchrony: Improving deep neural networks using complex values and Kuramoto synchronization. Conference on Cognitive Computational Neuroscience (CCN).

PDF
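The phase-alignment step described in this abstract can be sketched with a minimal Kuramoto simulation. The uniform positive coupling matrix below is an illustrative stand-in for the learned coupling kernel, which in the paper drives same-object neurons toward synchrony and different-object neurons apart:

```python
import numpy as np

def kuramoto_step(phases, coupling, dt=0.1):
    """One Euler step of d(theta_i)/dt = sum_j K_ij * sin(theta_j - theta_i)."""
    diff = phases[None, :] - phases[:, None]      # pairwise theta_j - theta_i
    return phases + dt * (coupling * np.sin(diff)).sum(axis=1)

rng = np.random.default_rng(2)
n = 8
phases = rng.uniform(0.0, np.pi, n)               # start spread over a half-circle
coupling = np.ones((n, n)) / n                    # uniform positive coupling -> synchrony

for _ in range(200):
    phases = kuramoto_step(phases, coupling)

# Kuramoto order parameter: 1.0 means all phases are fully synchronized
order = float(abs(np.exp(1j * phases).mean()))
```

With a learned, signed kernel in place of the uniform one, the same dynamics can pull features of one object into phase while pushing features of different objects out of phase.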

Accurate implementation of computational neuroscience models through neural ODEs

Presented as a talk at Conference on Cognitive Computational Neuroscience (CCN), 2022

Master’s internship project on using neural ODEs to accurately implement computational neuroscience models, yielding better performance and computational efficiency. See the associated talk here.

Recommended citation: Muzellec, S., Chalvidal, M., Serre, T., VanRullen, R. (2022). Accurate implementation of computational neuroscience models through neural ODEs. Conference on Cognitive Computational Neuroscience (CCN).

PDF

Thesis


In Sync with the Brain: Modeling Visual Binding through Neural Synchrony

Delivered by the University of Toulouse, 2025

This thesis tests the long-standing idea that neural synchrony helps the brain bind visual features into objects. By developing neural network models that incorporate synchrony in different ways, it shows that temporal coordination improves object representations, robustness, and human-like behavior. Analyses of primate IT recordings further suggest that synchrony carries meaningful information beyond firing rates. These findings support synchrony as a useful mechanism for visual perception and for designing more brain-like AI models. See the associated talk here.

Recommended citation: Muzellec, S. (2025). In Sync with the Brain: Modeling Visual Binding through Neural Synchrony. Doctoral thesis in Neuroscience, Université de Toulouse.

PDF