Dr. Sam Gijsen

Postdoctoral Researcher
Foundation Models & Representation Learning
for Neural Time Series

I research multimodal representation learning and build foundation models for neural and physiological time series. My neuroscience background shapes how I think about learning in neural nets; I'm fascinated by what representations emerge, how they scale, and what they encode about the world.


GitHub · LinkedIn · Google Scholar

Recent Work


[ICLR26] Brain-Semantoks: Learning Semantic Tokens of Brain Dynamics with a Self-Distilled Foundation Model
Sam Gijsen, Marc-André Schulz, Kerstin Ritter

We develop a self-distilled foundation model for brain dynamics that pretrains in 2 hours and eliminates the need for finetuning. We stabilize self-distillation for noisy neural time series through learned tokenization, and find log-linear scaling laws for pretraining data on cross-dataset downstream tasks.

ICLR Paper · Code · Pretrained Models
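
For readers curious what "self-distillation with learned tokenization" looks like in practice, here is a minimal, illustrative sketch of the general recipe: an EMA teacher supervising a student across augmented views, with a strided 1D convolution standing in for the learned tokenizer. All module names and hyperparameters are placeholders, and collapse-prevention tricks are omitted; this is not the Brain-Semantoks implementation, for which see the released code above.

```python
# A minimal sketch of EMA-based self-distillation over learned time-series tokens.
# Module names and hyperparameters are hypothetical; standard collapse-prevention
# tricks (e.g. centering of teacher targets) are omitted for brevity.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfDistillNet(nn.Module):
    def __init__(self, in_ch=64, dim=256, patch=32, n_proto=4096):
        super().__init__()
        # learned tokenizer: a strided 1D conv turns (channels x time) into token embeddings
        self.tokenizer = nn.Conv1d(in_ch, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, n_proto)         # projection onto "prototype" logits

    def forward(self, x):                           # x: (batch, channels, time)
        tokens = self.tokenizer(x).transpose(1, 2)  # (batch, n_tokens, dim)
        return self.head(self.encoder(tokens).mean(dim=1))

student = SelfDistillNet()
teacher = copy.deepcopy(student)                    # teacher = EMA copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)

def distillation_loss(view_a, view_b, tau_s=0.1, tau_t=0.05):
    """Student predictions on one augmented view match teacher targets on the other."""
    s_logits = student(view_a)
    with torch.no_grad():
        t_logits = teacher(view_b)
    targets = F.softmax(t_logits / tau_t, dim=-1)
    return -(targets * F.log_softmax(s_logits / tau_s, dim=-1)).sum(dim=-1).mean()

@torch.no_grad()
def ema_update(momentum=0.996):
    """After each optimizer step, move the teacher weights toward the student."""
    for p_s, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)
```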


[ICML25] EEG-Language Pretraining for Highly Label-Efficient Clinical Phenotyping
Sam Gijsen, Kerstin Ritter

A first-of-its-kind EEG-language model for downstream clinical tasks. We show that multimodal models integrating natural language learn more useful representations of neural data, enabling highly label-efficient clinical phenotyping.

ICML Paper · Code · Pretrained Models
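
The blurb above doesn't spell out the training objective, so as a point of reference: one standard way to integrate natural language with a second modality is a CLIP-style contrastive loss that pulls paired embeddings together and pushes mismatched pairs apart. The sketch below illustrates that generic recipe; it is an assumption for illustration, not the objective or code from the paper.

```python
# Generic CLIP-style contrastive alignment between EEG and clinical-text embeddings.
# Illustrative sketch only; the encoders, names, and temperature are placeholders.
import torch
import torch.nn.functional as F

def contrastive_loss(eeg_emb, text_emb, temperature=0.07):
    """eeg_emb, text_emb: (batch, dim) embeddings of paired EEG segments and report text."""
    eeg_emb = F.normalize(eeg_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = eeg_emb @ text_emb.t() / temperature             # (batch, batch) similarity matrix
    labels = torch.arange(logits.size(0), device=logits.device)
    # symmetric cross-entropy: each EEG segment should match its own report, and vice versa
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```

After pretraining along these lines, the EEG encoder can be frozen and probed with a small labeled set, which is one way such multimodal pretraining can translate into label-efficient phenotyping.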


2025 NeurIPS EEG Competition: 7th / 1,183 teams

Led a small team to a top-ten finish using a multimodal fusion architecture designed from scratch, without any pretraining. Code and report to come!

Challenge Link


Latest Blog Post


Hillsbrad Diffusion: A World Diffusion Model Criminally Undertrained
A qualitative look at a world diffusion model undertrained on two hours of sparse exploration of a large map.

Blog post

Some Previous Work


Neural surprise in somatosensory Bayesian learning
Sam Gijsen, Miro Grundei, Robert T. Lange, Dirk Ostwald, Felix Blankenburg

Computational modeling of neural signals with information-theoretic surprise measures shows that perceptual learning can be described as a process of probabilistic inference.

PLOS Computational Biology · Code
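
For a concrete sense of the information-theoretic readouts involved, here is a small, self-contained sketch of a Dirichlet-categorical observer emitting two trial-wise quantities: predictive surprise (the negative log probability of the current stimulus) and Bayesian surprise (the KL divergence between posterior and prior beliefs). It is a simplified illustration, not the exact model family or parameterization compared in the paper.

```python
# Trial-wise surprise readouts from a simple Dirichlet-categorical observer (illustrative only).
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_kl(alpha_q, alpha_p):
    """KL( Dir(alpha_q) || Dir(alpha_p) )."""
    a_q, a_p = np.sum(alpha_q), np.sum(alpha_p)
    return (gammaln(a_q) - gammaln(a_p)
            - np.sum(gammaln(alpha_q)) + np.sum(gammaln(alpha_p))
            + np.sum((alpha_q - alpha_p) * (digamma(alpha_q) - digamma(a_q))))

def run_observer(stimuli, n_states=2, alpha0=1.0):
    """Sequentially update Dirichlet counts; read out two surprise measures per trial."""
    alpha = np.full(n_states, alpha0)
    predictive, bayesian = [], []
    for s in stimuli:
        p_s = alpha[s] / alpha.sum()                 # posterior predictive probability of stimulus s
        predictive.append(-np.log(p_s))              # predictive surprise: -log p(observation)
        new_alpha = alpha.copy()
        new_alpha[s] += 1.0                          # conjugate update after observing s
        bayesian.append(dirichlet_kl(new_alpha, alpha))  # Bayesian surprise: KL(posterior || prior)
        alpha = new_alpha
    return np.array(predictive), np.array(bayesian)

# Example: a biased binary stimulus sequence
rng = np.random.default_rng(0)
predictive, bayesian = run_observer(rng.choice(2, size=200, p=[0.7, 0.3]))
```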


Active inference and the two-step task
Sam Gijsen, Miro Grundei, Felix Blankenburg

Compared to reinforcement learning models, active inference models better describe human sequential decision-making as probabilistic surprise minimization.

Scientific Reports · Code
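
To unpack "probabilistic surprise minimization": active-inference agents score actions by their expected free energy, trading off risk (divergence of predicted outcomes from preferred ones) against ambiguity (expected uncertainty of the resulting observations). The sketch below computes that quantity for a single action in a small discrete model; it is a textbook-style simplification, not the two-step task model fitted in the paper.

```python
# Expected free energy of one action in a discrete generative model (textbook-style sketch).
import numpy as np

def expected_free_energy(q_s, A, B_a, log_prior_o, eps=1e-16):
    """
    q_s         : current beliefs over hidden states, shape (n_states,)
    A           : observation likelihood P(o|s), shape (n_obs, n_states)
    B_a         : state transitions under this action P(s'|s), shape (n_states, n_states)
    log_prior_o : log preferences over outcomes, shape (n_obs,)
    """
    q_s_next = B_a @ q_s                 # predicted states after taking the action
    q_o = A @ q_s_next                   # predicted outcomes
    risk = np.sum(q_o * (np.log(q_o + eps) - log_prior_o))               # KL to preferred outcomes
    ambiguity = -np.sum(q_s_next * np.sum(A * np.log(A + eps), axis=0))  # expected outcome entropy
    return risk + ambiguity

# Policies are scored by summing this quantity over their steps and chosen via a softmax over
# negative expected free energy, so actions expected to yield preferred, unambiguous outcomes win out.
```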