Software & Code

Here is a selection of software, code, and datasets that I maintain or contribute to. Most of it sits at the intersection of computational neuroscience, deep learning, and vision.


Computational Neuroscience Toolbox

Reverse Predictivity

🔹 1. Reverse Predictivity (analysis repo)

Purpose: fully reproduces the analyses from the Reverse Predictivity paper (model comparisons, time-resolved predictivity, neuron-level regression, reliability estimation, etc.).

Includes:

  • Regression pipelines
  • Reliability & noise-ceiling estimation
  • ANN → Monkey, Monkey → ANN, Monkey → Monkey, and Model → Model comparisons
  • Full reproducibility scripts for figures and benchmarks

This repo is designed to replicate the publication and provide a transparent scientific workflow.
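The reliability and noise-ceiling estimates mentioned above are typically built on split-half correlations across stimulus repeats. A minimal sketch of that idea in numpy (the repo's exact procedure may differ; `split_half_reliability` is an illustrative name, not the repo's API):

```python
import numpy as np

def split_half_reliability(rates, n_splits=100, seed=0):
    """Estimate per-neuron response reliability from repeated presentations.

    rates: array of shape (n_images, n_neurons, n_repeats).
    Returns the Spearman-Brown-corrected split-half correlation per neuron,
    averaged over random splits of the repeats.
    """
    rng = np.random.default_rng(seed)
    n_images, n_neurons, n_repeats = rates.shape
    corrs = np.zeros((n_splits, n_neurons))
    for s in range(n_splits):
        perm = rng.permutation(n_repeats)
        half_a = rates[:, :, perm[: n_repeats // 2]].mean(axis=2)
        half_b = rates[:, :, perm[n_repeats // 2 :]].mean(axis=2)
        for n in range(n_neurons):
            r = np.corrcoef(half_a[:, n], half_b[:, n])[0, 1]
            # Spearman-Brown correction: each half uses only half the trials
            corrs[s, n] = 2 * r / (1 + r)
    return corrs.mean(axis=0)
```

The corrected split-half correlation is a common choice of noise ceiling: a model cannot be expected to predict a neuron better than the neuron predicts itself across repeats.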

🔹 2. reverse-pred (PyPI Python library)

A lightweight, general-purpose Python package that exposes the core mapping utilities independently of the paper's full analysis pipeline.

Features:

  • Unified API for the four mapping modes
  • Fit/evaluate linear models (ridge by default; PLS & others supported)
  • Clean, modular functions for ANN feature extraction → mapping → evaluation
  • Integrates easily into any model–brain alignment project

Example usage

import numpy as np
from reverse_predictivity.monkey_to_model import compute_monkey_to_model

# Model features: one row per image, one column per ANN unit
model_features = np.load("features/resnet50_itlayer.npy")   # shape: (n_images, n_units)

# Neural data: images × neurons × repeats
rates = np.load("data/it_rates.npy")                        # shape: (n_images, n_neurons, n_repeats)

compute_monkey_to_model(
    model_features=model_features,
    rates=rates,
    out_dir="results/monkey_to_model/resnet50",
    reps=20,
)
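Conceptually, each mapping mode reduces to fitting a regularized linear map on one set of images and scoring per-neuron correlations on held-out images. A minimal numpy sketch of the ANN → neuron direction with closed-form ridge (the library's internals may differ; `fit_ridge` and `mapping_score` are illustrative names, not part of the package API):

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def mapping_score(model_features, rates, alpha=1.0, train_frac=0.8, seed=0):
    """Fit ANN features -> trial-averaged rates on training images,
    then return per-neuron Pearson r on held-out images."""
    rng = np.random.default_rng(seed)
    Y = rates.mean(axis=2)                      # (n_images, n_neurons)
    idx = rng.permutation(len(Y))
    n_train = int(train_frac * len(Y))
    tr, te = idx[:n_train], idx[n_train:]
    W = fit_ridge(model_features[tr], Y[tr], alpha)
    pred = model_features[te] @ W
    # Pearson correlation per neuron between prediction and held-out data
    pc = pred - pred.mean(0)
    yc = Y[te] - Y[te].mean(0)
    return (pc * yc).sum(0) / (np.linalg.norm(pc, axis=0) * np.linalg.norm(yc, axis=0))
```

Swapping the roles of `model_features` and `Y` gives the reverse (Monkey → ANN) direction; the other two modes follow the same fit/evaluate pattern.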

Brain-Inspired Neural Networks

MAPS — Masked Attribution-based Probing of Strategies

Purpose: a computational framework for testing whether visual explanations from deep neural networks capture human-like and monkey-like visual strategies.

It combines attribution methods, explanation-driven perturbations, and behavioral evaluation to compare models ↔ humans ↔ monkeys.

Includes:

  • Fine-tuning pipelines for object-recognition models
  • Attribution generation using Captum (Saliency, NoiseTunnel, IG, etc.)
  • Explanation-Masked Inputs (EMIs) based on attribution maps
  • Behavioral metrics (B.I1) comparing models, humans, and monkeys
  • Similarity analyses (L1, L2, LPIPS)
  • Full reproducibility scripts for MAPS paper figures
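The Explanation-Masked Inputs above can be sketched as a simple thresholding operation: keep only the most strongly attributed pixels and blank out the rest. A minimal numpy illustration (the paper's exact masking procedure may differ; `explanation_masked_input` is a hypothetical helper):

```python
import numpy as np

def explanation_masked_input(image, attribution, keep_frac=0.1, fill=0.0):
    """Keep only the top `keep_frac` most-attributed pixels; replace the rest with `fill`.

    image, attribution: arrays of shape (H, W); attribution magnitudes are used.
    Returns the masked image and the boolean keep-mask.
    """
    mag = np.abs(attribution)
    thresh = np.quantile(mag, 1.0 - keep_frac)
    mask = mag >= thresh
    return np.where(mask, image, fill), mask
```

Presenting such masked inputs to models, humans, and monkeys then lets the behavioral metrics quantify whether the pixels an attribution method highlights are the ones each observer actually relies on.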

KomplexNet – Complex-valued networks with Kuramoto synchrony

Purpose: a PyTorch / PyTorch Lightning framework for complex-valued neural networks with Kuramoto-style phase synchronization, designed to test the benefits of binding-by-synchrony in deep networks.

Includes:

  • Complex-valued convolutions with phase-based recurrence
  • Differentiable Kuramoto synchrony module
  • Training pipelines for MultiMNIST
  • Tools to analyze phase dynamics, synchrony strength, and recognition accuracy
  • Comparisons with real-valued CNNs and ablated models
  • Full experiment + figure reproduction scripts
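The synchrony module builds on the classic Kuramoto model, dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i). A minimal numpy sketch of the dynamics and the standard order-parameter measure of synchrony strength (KomplexNet's differentiable PyTorch module is more involved; these function names are illustrative):

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt=0.1):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    diffs = theta[None, :] - theta[:, None]   # pairwise theta_j - theta_i
    coupling = np.sin(diffs).mean(axis=1)     # (1/N) * sum_j sin(...)
    return theta + dt * (omega + K * coupling)

def order_parameter(theta):
    """Synchrony strength r in [0, 1]: r = |mean(exp(i*theta))|."""
    return np.abs(np.exp(1j * theta).mean())
```

With sufficiently strong coupling K, an ensemble of oscillators phase-locks and r approaches 1; in a binding-by-synchrony network, units belonging to the same object are driven toward a shared phase in just this way.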

Datasets & Benchmarks

FeatureTracker – Tracking objects that change in appearance

Purpose: Synthetic video benchmark where objects change shape, color, and position over time, designed to probe object tracking under appearance changes and phase-based grouping mechanisms.

Includes:

  • Stimulus generation scripts (shapes, colors, dynamics)
  • Training scripts for CV-RNN and related models
  • Testing scripts for all FeatureTracker variations
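As a toy illustration of the benchmark's premise, the sketch below generates a clip in which a single object moves along a random walk while its intensity drifts over time (the real stimuli also vary shape and color; `make_clip` is a hypothetical helper, not the repo's generator):

```python
import numpy as np

def make_clip(n_frames=16, size=32, seed=0):
    """Generate a toy FeatureTracker-style clip: a 5x5 square that moves
    along a random walk while its intensity drifts from frame to frame.

    Returns video (n_frames, size, size) and the trajectory (n_frames, 2).
    """
    rng = np.random.default_rng(seed)
    pos = np.array([size // 2, size // 2])
    video = np.zeros((n_frames, size, size), dtype=np.float32)
    traj = np.zeros((n_frames, 2), dtype=int)
    intensity = 1.0
    for t in range(n_frames):
        # random-walk position, kept inside the frame
        pos = np.clip(pos + rng.integers(-2, 3, size=2), 2, size - 3)
        # slowly drifting appearance
        intensity = float(np.clip(intensity + 0.1 * rng.standard_normal(), 0.2, 1.0))
        video[t, pos[0] - 2 : pos[0] + 3, pos[1] - 2 : pos[1] + 3] = intensity
        traj[t] = pos
    return video, traj
```

A tracker evaluated on such clips must follow the object's position even though its appearance at frame t no longer matches its appearance at frame 0.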

Explainability Tools

Saliency strikes back: How filtering out high frequencies improves white-box explanations

Purpose: improve the faithfulness and robustness of gradient-based saliency maps by filtering out high-frequency artifacts, yielding clearer and more reliable explanations without modifying the underlying model or attribution method.

Includes:

  • FORGrad (Fourier-Regularized Gradients): frequency-domain filtering applied to gradient-based explanations
  • Drop-in compatibility with standard white-box methods (Gradients, Integrated Gradients, SmoothGrad, etc.)
  • Lightweight, model-agnostic post-processing with no additional forward or backward passes
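The core idea can be sketched as a low-pass filter applied to a saliency map in the Fourier domain (FORGrad's actual filter design and cutoff selection may differ; this only illustrates the basic frequency-filtering step):

```python
import numpy as np

def lowpass_saliency(saliency, cutoff=0.25):
    """Suppress high spatial frequencies in a 2-D saliency map.

    `cutoff` is the kept radius as a fraction of the Nyquist frequency.
    """
    h, w = saliency.shape
    fy = np.fft.fftfreq(h)[:, None]           # vertical frequencies
    fx = np.fft.fftfreq(w)[None, :]           # horizontal frequencies
    radius = np.sqrt(fy**2 + fx**2)
    mask = radius <= cutoff * 0.5             # Nyquist is 0.5 in fftfreq units
    spectrum = np.fft.fft2(saliency)
    return np.real(np.fft.ifft2(spectrum * mask))
```

Because this runs on an already-computed attribution map, it requires no extra forward or backward passes through the model, matching the lightweight post-processing described above.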

How to cite

If you use any of this code or these datasets in your work, please cite the corresponding paper(s).
If something looks useful but isn’t yet public, feel free to email me for early access or collaboration.