Research Hub

Advancing AI Safety research in the Netherlands

The SAIN Research Hub connects talented researchers with experienced supervisors, providing mentorship, compute, and community to produce impactful AI Safety research.

6+ Active Projects
20+ Researchers
12+ Publications

How It Works

A structured path to AI Safety research

Whether you're a student looking for your first research experience or a PhD researcher ready to mentor the next generation, the Research Hub has a place for you.

Supervised Matching

We connect talented researchers with PhD+ supervisors for structured, mentored AI Safety research projects.

Compute & Support

We provide compute resources and logistical support for open collaborations and research projects.

National Network

Access the full SAIN network: researchers, advisors, and practitioners across all Dutch chapters.

Publication Track

Our community has published at NeurIPS, ICLR, and other top venues. We help you build a strong AI Safety research track record.

Supervisors

Research guidance from experienced mentors

Research Hub participants can work with supervisors across technical AI safety, governance, complex systems, and related fields.

Steven Abreu

Research Scientist, Maker

Research Agenda
Fatih Turkmen

Associate Professor of Computer Science, University of Groningen

Research Agenda
Jobst Heitzig

Working Group Leader, Senior Scientist, Potsdam Institute for Climate Impact Research

Research Agenda
Guillaume Pourcel

PhD Candidate in AI, University of Groningen

Research Agenda

Publications

Research from our community

Our researchers publish at top venues such as NeurIPS and ICLR, and compete in international AI Safety hackathons.

NeurIPS 2025 Spotlight

The Anatomy of Alignment: Decomposing Preference Optimization by Steering Sparse Features

Jeremias Ferrao, Matthijs van der Lende, Ilija Lichkovski

ICLR 2025

Self-Ablating Transformers: More Interpretability, Less Sparsity

Jeremias Ferrao

NeurIPS 2025

EU-Agent-Bench: Measuring Illegal Behavior of LLM Agents Under EU Law

Ilija Lichkovski, Alexander Müller, Mariam Ibrahim, Tiwai Mhundwa

ICLR 2025

Contextual Sparsity as a Tool for Mechanistic Understanding of Retrieval in Hybrid Foundation Models

Davide Zani, Felix Michalak, Steven Abreu

NeurIPS 2024

Steering Large Language Models using Conceptors

Joris Postmus, Steven Abreu

1st Place, Apart Research Hackathon

AutoSteer: Weight-Preserving Reinforcement Learning for Interpretable Model Control

Jeremias Ferrao

3rd Place, Apart Research Hackathon

Local Learning Coefficients Predict Developmental Milestones During GRPO

Jeremias Ferrao, Ilija Lichkovski

4th Place, Apart Research Hackathon

Collective Deliberation for Safer CBRN Decisions: A Multi-Agent LLM Debate Pipeline

Alexander Müller, Arsenijs Golicins, Galina Lesnic

Apart Research

Sandbagging LLMs using Activation Steering

Jeremias Ferrao, Davide Zani

Apart Research

Cybersecurity Persistence Benchmark

Davide Zani, Felix Michalak, Jeremias Ferrao

Research Project

Playing with Perception: Fooling Traffic Sign Classifiers via Copy-Paste Manipulation

Davide Zani, Alexandru Dimofte

Apart Research

AI Misinformation and Threats to Democratic Rights

Davide Zani, Mariam Ibrahim, Tiwai Mhundwa, Felix Michalak, Andrei Avram

Contribute to AI Safety research

Whether you want to join a supervised project, contribute to an open collaboration, or supervise AI Safety research, here are the clearest next steps.

Join as a researcher

If you want to be supervised or join an open collaboration project, fill in the expression of interest form. If you are unsure where you fit, email us and we'll help route you.

Become a supervisor

If you are interested in supervising AI Safety research projects through SAIN, email the Research Hub and we'll follow up with next steps.
