Theory of Change
This document describes the causal logic behind SAIN's work: how and why our activities produce the change we seek. It complements the Vision (which describes what SAIN is and where it is going) and the Code of Conduct (which defines expectations for members and chapters).
1. The Problem
We identify four gaps that currently limit the Dutch AI safety landscape:
- A coordination gap. Local AI safety initiatives across Dutch cities are scarce, and those that do exist operate independently, duplicate effort, lack legal and financial infrastructure, and struggle to build the authority needed to influence institutions.
- A skill gap. Too few people have the knowledge to work on AI safety in research, policy, industry, or public discourse. The pipeline from "interested individual" to "impactful professional" is underdeveloped and fragmented.
- An awareness gap. The Dutch public, and many professionals in adjacent fields, have limited understanding of the full range of risks AI poses, from near-term societal harms to existential concerns. This limited awareness undermines the public support needed for societal change.
- An institutional gap. Universities, government ministries, and industry actors largely lack the frameworks, expertise, or external pressure to take AI safety seriously as an organizational priority. Even where individual champions exist, there is no trusted civil-society counterpart for them to engage with.
2. Our Theory
SAIN believes that lasting change in AI safety comes from building human capital: people who understand the risks, have the skills to address them, and occupy positions where they can act. The most effective way to build this human capital at national scale is through a unified organizational infrastructure that informs the public and systematically moves unaware individuals to meaningful contribution, while producing direct research and policy outputs along the way.
In short: if we build the right ecosystem of education, research, community, and visibility, and make it easy to enter, we will produce the people and the knowledge that make AI safer.
3. The Logic Model
Inputs
| Input | Description |
|---|---|
| Proven operational model | The AISIG blueprint: a tested 4-team structure (Education, Research, Events, PR) with documented playbooks, course curricula, and event formats |
| Legal entity | A registered Dutch foundation (stichting) providing fiscal sponsorship, contractual capacity, and institutional credibility |
| National brand | The SAIN identity, carrying the authority of a national organization |
| Funding | Grant funding enabling paid roles and program operations |
| Human capital | A Director, chapter co-directors, team leads, volunteer teams, and an Advisory Board |
| Academic partnerships | A network of PhD-level supervisors across Dutch and international universities willing to mentor research projects |
| International network | Connections to AI safety organizations at Georgia Tech, Berkeley, and others |
Activities
Organized by pillar (see Vision for full descriptions):
- Education: Deliver the AI Safety, Ethics, and Society course (Technical + Governance tracks) across chapters; run discussion groups; maintain a standardized national curriculum.
- Research: Operate the national Research Hub (student-supervisor matching, compute, open collaborations) at scale; run a fellowship program modeled on MATS, initially on a volunteer basis.
- Events: Host national conferences, hackathons, and seminars; organize local AI Safety Chats, socials, and speaker series.
- Public Relations: Produce content across LinkedIn, Instagram, Substack; manage SAIN's public narrative; develop multimedia at scale.
- Ecosystem Growth: Mentor and onboard new chapters; provide centralized infrastructure and fiscal sponsorship.
Outputs
| Output | Currently | Target (July 2027) |
|---|---|---|
| Course graduates per year | ~60 | 250+ |
| Active research projects | 6 | 25+ supervised, many open collaborations |
| Peer-reviewed publications | 6 | Growing annual count |
| Events hosted | ~1 per month locally | 1 per month per chapter, plus 2 national events per year |
| Active chapters | 3 (Groningen, Amsterdam, Utrecht) | Strong core chapters (Groningen, Amsterdam, Utrecht) plus 3 new chapters, adopted or newly founded |
| People reached | Hundreds directly, tens of thousands via content | Thousands directly, hundreds of thousands via content |
Outcomes
- A functioning talent pipeline. The SAIN Funnel (see Vision) becomes a proven mechanism that reliably moves individuals from first contact to positions of influence in EU institutions, frontier AI labs, government, and academia.
- A credible research contribution. The Research Hub produces a steady stream of publications at top-tier venues. The Netherlands becomes a recognized contributor to AI safety research.
- Increased public awareness. AI safety enters mainstream Dutch discourse. Decision-makers recognize SAIN as a trusted source of expertise.
- A self-sustaining national community. Chapters are active and self-reinforcing. New chapters emerge organically as the brand and model gain recognition.
- Institutional partnerships. Universities, municipalities, and industry actors engage with SAIN for consulting, joint research, and institutional AI safety frameworks.
Impact
AI is developed and integrated purposefully, justly, and safely for all of humanity, in the Netherlands and, through the people and knowledge SAIN produces, internationally.
4. Evidence and Validation
SAIN's theory builds on demonstrated proof-of-concept:
Education works. 100+ graduates across 9+ cohorts in Groningen. Replicated in Amsterdam (70+ participants in the first independent iteration). Curriculum based on Dan Hendrycks' material, adapted for interdisciplinary audiences.
Research works. Publications at NeurIPS (including a spotlight) and ICLR. 6+ active projects, ~20 people, 4+ PhD-level supervisors. Successfully recruited supervisors who previously had no involvement in the AI safety community, demonstrating high counterfactual value.
The operational model works. AISIG's playbooks and 4-team structure are already being adopted informally by other Dutch AISIs. Transformation from unstructured student group to professionalized organization accomplished in under one year.
The brand matters. Invited to speak at TEDx, EAGx Amsterdam, AiGrunn, and Samenwerking Noord. Municipal consulting partnership with the municipality of Westerkwartier. International connections to groups at Georgia Tech, Berkeley, and more.
Scaling has early traction. Three cities are confirmed or enthusiastic about adopting the SAIN brand. Amsterdam is already running activities using AISIG-derived playbooks. Strong interest from additional cities.
5. Scaling Logic
The chapter model produces a cycle that is central to the theory:
- More chapters → larger national footprint → stronger brand → more authority → more members
- More members → more researchers → more publications → more credibility
- More credibility → more funding → more paid roles → better retention → more impact
- More impact → more chapters wanting to join → cycle repeats
This cycle works because AI safety engagement is inherently local (courses, events, and community happen in person) but national coordination creates capabilities no local group can achieve alone (brand authority, legal infrastructure, grant management, a research hub with supervisor matching, professional media). The chapter model bridges both needs.
For new chapters, SAIN dramatically lowers the barrier to impact. Without SAIN, starting a serious AI safety group means registering a foundation (€500-1,000+), developing curricula from scratch, building a brand from scratch, and figuring out operations through trial and error. With SAIN, a motivated group can adopt the brand, receive fiscal sponsorship, use ready-made curricula, follow documented playbooks, and get direct mentorship. This turns months of setup into weeks of community building.
SAIN's role is to catalyze and sustain this cycle: maintaining central infrastructure quality, supporting chapters through growing pains, and keeping the mission front and center as the organization scales.