Vision of Safe AI Netherlands

SAIN's mission, national role, strategic pillars, and long-term ambition for AI safety in the Netherlands.


Mission

Safe AI Netherlands (SAIN) exists to raise awareness of the full spectrum of existing and potential harms from AI, contribute to shaping mitigation priorities through ongoing discourse, and support the realization of effective solutions across the entire Netherlands.

We engage with the full range of AI safety concerns: from near-term harms like deepfakes, misinformation, and algorithmic manipulation of mental health, to long-term risks such as the loss of meaningful human control over increasingly capable AI systems, and everything in between.


The Challenge

The Netherlands is home to world-class universities, a thriving tech sector, and major institutions like ASML, yet it lacks a unified civil-society voice on AI safety. As AI systems reshape everything around us, the risks scale accordingly. This is a monumental transition, and we have the power to shape whether it is positive or negative, just or unjust.

But the Dutch AI safety landscape is fragmented. Local initiatives are scarce, and when they exist, they operate independently, duplicating effort, struggling with administrative overhead, and lacking the authority to influence institutions. Talented individuals who care about these issues often have no clear path from curiosity to meaningful contribution.

SAIN was created to close that gap.


Who We Are

Safe AI Netherlands is the leading civil-society organization for AI safety in the Netherlands. We are a national foundation (stichting) that operates through a network of local chapters (SAIN Groningen, SAIN Amsterdam, SAIN Utrecht, with more to come), united under a single brand, a shared mission, and centralized infrastructure.

SAIN grew out of the AI Safety Initiative Groningen (AISIG), which in under a year transformed from an informal student group into a professionalized organization with a Research Hub, peer-reviewed publications at top-tier conferences (NeurIPS, ICLR), over 100 course graduates across 9+ cohorts, and operational playbooks that other Dutch AI safety groups began adopting. When it became clear that this model could, and should, serve the entire country, we rebranded and expanded.

SAIN is governed by a small board of directors responsible for legal and strategic decisions, consisting of the national director and chapter directors of SAIN Groningen, Amsterdam, and Utrecht. An Advisory Board of experts in the Dutch AI safety landscape provides strategic guidance. This structure is designed to scale cleanly from a handful of chapters to many more.


Our Vision

We envision a Netherlands where AI safety is not a niche concern but a recognized priority in universities, in boardrooms, in government ministries, and in public conversation.

In concrete terms, we see:

  • A national Research Hub with dozens of active projects and a growing body of peer-reviewed publications at leading global conferences.
  • Hundreds of individuals per year completing AI safety courses across multiple cities.
  • Large national events (conferences, hackathons, expert seminars) alongside accessible local gatherings that welcome newcomers.
  • A professional media operation producing high-quality content on AI safety for the Dutch public.
  • Thriving local chapters across the country, each with dozens to hundreds of active individuals.
  • A talent pipeline that reliably moves people from initial curiosity to career-level influence in EU policy bodies, at frontier AI labs, in government advisory roles, in academia, and beyond.

All in pursuit of one goal: ensuring the development and integration of AI genuinely benefits all of humanity.


Strategic Framework: Four Pillars

SAIN achieves its mission through four integrated pillars that together move individuals from first exposure to deep engagement. (For how these pillars produce measurable change, see SAIN’s Theory of Change.)

Research. SAIN hosts a national AI Safety Research Hub, connecting experienced supervisors (PhD+) with emerging talent to produce peer-reviewed research at top-tier global conferences. At scale, this operates as a structured fellowship program spanning the full spectrum of AI safety: from technical AI safety to AI governance and policy analysis.

Education. SAIN delivers standardized AI Safety, Ethics, and Society courses in both Technical and Governance tracks, alongside ongoing discussion groups. Education is the primary entry point into the SAIN funnel.

Events. SAIN organizes both high-level national events and accessible local gatherings, from expert speaker series and technical hackathons to informal AI Safety Chats. Events attract newcomers and deepen the engagement of those already involved.

Public Relations. SAIN helps shape the national narrative on AI safety through strategic outreach across LinkedIn, Instagram, Substack, and other channels. In the short term, this channels interested individuals into our entry points. Over time, we aim to develop this into a fuller media presence.


The SAIN Funnel

SAIN provides a structured pathway from first contact to career-level contribution, with clear steps and support at every stage.

| Level | Description | Examples |
| --- | --- | --- |
| 0 | No engagement | Has not encountered AI safety |
| 1 | Initial engagement | Attends a SAIN event, reads a Substack post, sees content on social media |
| 2 | Foundational learning | Completes a SAIN course, joins a discussion group |
| 3 | Active contribution | Joins a Research Hub project, participates in a campaign, joins a chapter team |
| 4 | Structured development | Undertakes a full-time, paid fellowship or internship in AI safety |
| 5 | Professional impact | Works full-time in AI safety (policy, research, industry, or civil society); leads a SAIN research project |

SAIN mainly focuses on levels 0 to 3, while providing lighter support at levels 4 and 5 (e.g., writing referral letters or making introductions). Our funnel has two critical transitions where people are most likely to disengage:

  1. Level 0 to 1: Getting someone unfamiliar with AI safety to engage for the first time. This is where events, social media, and word-of-mouth matter most.
  2. Level 2 to 3: Moving someone who has learned about AI safety into making a meaningful contribution. This is where the Research Hub, structured projects, and mentorship are essential.

SAIN's four pillars directly address both: PR and Events lower the barrier to entry; Education builds the foundation; Research and structured programs scaffold deep contributions.

A Journey Through the Funnel

Consider Maayke, a 22-year-old law student in Groningen. She scrolls past a LinkedIn post about SAIN's upcoming AI Safety, Ethics, and Society course. Having recently read Yuval Noah Harari's Nexus, she is curious and signs up.

She completes the course, attends a few social events, and earns her certificate. For a while, she is busy with an internship and not very active. But a semester later, a class assignment gives her the option to focus on AI. She chooses it, feeling more confident after the SAIN course. She notices a new project on SAIN's Research Hub, joins a team of three, and they publish their work a few months later.

That experience sparks a deeper ambition. She explores opportunities in EU AI policy and sets her sights on DG CNECT, the European Commission department responsible for digital policy. Thanks to her proven experience at SAIN and a referral letter, she soon joins the team working on the next iteration of the EU AI Act.

Every pillar, every team member, and every chapter plays a role in broadening, strengthening, and advancing people along this path. The story above shows an individual moving through SAIN's funnel all the way from level 0 to level 5. While such journeys are integral to SAIN's mission, two caveats apply. First, SAIN's focus is primarily on levels 0 to 3. For now, we can offer only limited support for the transition from level 3 to level 5, and we believe it is important to be transparent about this to prevent confusion. Second, we firmly believe that individuals who stop at an earlier level of the funnel are still valuable. One of the most important preconditions for the development and integration of AI going well is a broader society that is adequately AI (safety) literate. That is a meaningful outcome in its own right, and it deserves to be stated explicitly.


A National Ecosystem

SAIN's chapter model is the vehicle through which our vision scales. A strong central identity provides legal status, brand authority, financial infrastructure, shared resources, and proven operational playbooks. Local chapters run their own activities, build their own communities, and forge their own partnerships. Crucially, they never have to start from scratch.

A new chapter does not need to register a foundation, set up accounting, or develop a curriculum from nothing. It adopts the SAIN brand and hits the ground running. (For the full details of chapter requirements and standards, see SAIN’s Code of Conduct.)

This balance, local freedom within a shared national framework, is what allows SAIN to scale without losing the grassroots energy that makes local communities thrive. Each new chapter extends SAIN's reach, strengthens the national brand, and demonstrates the model's scalability.


Looking Ahead

SAIN is at the beginning of its journey as a national organization, but it builds on a strong foundation. The operational model has been proven in Groningen. The playbooks are being adopted in Amsterdam and Utrecht. The Research Hub is producing work at the highest academic level. The course has educated over a hundred individuals.

The Netherlands has world-class universities, a concentrated tech ecosystem, and a culture of institutional trust and collaboration. There is no reason this country should not have a thriving, authoritative, and impactful AI safety movement.

SAIN intends to be that movement.