SAIN Groningen
Groningen's hub for AI Safety education, research, and community.
Events
Past & upcoming events
AI Safety, Ethics and Society Graduation
AI Safety Talk by Fatih Turkmen
AI Control Hackathon
AI Safety, Ethics and Society Graduation Ceremony
TEDxBroerstraat
AI Safety Talk with Tekla Emborg
Programs
What we run in Groningen
Education
AI Safety, Ethics, and Society
We facilitate the Center for AI Safety course "AI Safety, Ethics, and Society" in two distinct cohorts, each tailored to a different approach to AI Safety: Technical and Governance. Both cohorts are hosted on-site in Groningen.
The course has a broader scope than the previously facilitated AGISF, addressing not only control issues and misalignment but also risks such as malicious use, accidents, and societal dependence. We aim for 3-4 cohorts per year, reaching around 60 individuals annually.
Technical Track
Focuses on the technical aspects of AI Safety, with extra sessions on mechanistic interpretability, adversarial attacks, complex systems, and more.
Governance Track
Prioritizes governance and policy aspects, dedicating time to case studies and real-world examples of regulatory, legal, and societal challenges.
Course Details
- Duration
- 6 weeks per block
- Workload
- 2h readings + 2h discussion / week
- Cohorts per year
- 3-4 (Block-based)
- Format
- On-site in Groningen
- Certification
- Certificate upon completion
- Selection
- Application-based
These courses are independently led by SAIN Groningen and are not affiliated with the University of Groningen.
Questions about the course? edugro@safeainetherlands.org
The discussion groups
Weekly research & discussion
Focused groups meeting weekly (~2 hours) to discuss, learn, and collaborate on specific AI Safety topics. Each group has at least one experienced mentor guiding the conversation.
Technical AI Alignment
Explores how to align capable ML systems with human intent: scalable oversight, evaluation and red-teaming, preference learning, robustness, and deployment risks, with weekly readings and discussion grounded in current research.
AI Governance & Privacy
Explores policies for the ethical and responsible use of AI: guidelines, regulations, and accountability mechanisms that help make AI systems transparent, secure, fair, and free of bias.
Questions about discussion groups? edugro@safeainetherlands.org
Research
Selected research from our members
About
From AISIG to SAIN Groningen
SAIN Groningen is the local chapter of Safe AI Netherlands in Groningen, and the direct successor of the AI Safety Initiative Groningen (AISIG). Since 2023 it has grown into one of the most active AI Safety communities in Europe.
Originally student-led, the chapter now includes many professionals and has produced research published at venues including NeurIPS and ICLR. We run education programs, discussion groups, events, and connect to SAIN's national Research Hub.
SAIN Groningen is co-directed by Alexander Müller and Thomas Brcic. We organise work across four teams: Education, Research, Events, and PR. This is a structure other SAIN chapters are adopting as they spin up.
SAIN Groningen Team
Alexander Müller
Co-Director
Thomas Brcic
Co-Director
Ilija Lichkovski
Research Lead
Imaan Kanji Lalji
Public Relations Lead
Tiwai Mhundwa
Education Lead
Hanadi Al-Samarrai
Events Lead & AI Governance Facilitator
Tarteel Mohamed
Community Manager & Public Relations
Steven Abreu
Research
Alice Dauphin
Research & Public Outreach
Guillaume Pourcel
Research
Iulia Bugan
Governance & Privacy Lead
Jeremias Ferrao
Technical Alignment Lead
Cansu Kutay
AI Technical Facilitator
Sophia Lopotaru
AI Technical Facilitator
Nabiha Duaa
Events
Hristo Karagyozov
Events
Jesse Kerkhof
Events
Joris Postmus
Advisory Board
Davide Zani
Advisory Board
Mariam Ibrahim
Advisory Board
Join & contact
Start with the onboarding form (we'll follow up by email). Please have an extremely low bar for filling this in! Subscribe to the national Substack for articles and updates across SAIN.
To contact SAIN Groningen, email the relevant team.