Powerful market and geopolitical forces are driving AI labs in a race towards smarter-than-human intelligence. We’re enjoying the benefits of the progress so far, but we’re fast approaching a point where AI becomes so capable and widely deployed that it could trigger massive societal disruption or even pose an existential threat. Many of us are deeply concerned but feel powerless against the industry’s momentum.
I built AI Safety Pledge (https://aisafetypledge.org) to channel that concern into measurable collective action. It’s a public registry where you pledge donations to AI safety organisations and we track the total. You donate directly; we never handle money.
Why this matters:
Donations directly support vital AI safety research, but the goal isn’t fundraising alone. The real power is in creating visible social proof: a public tally (“£500K pledged by 2,000 people and counting”) sends a clear signal to policymakers and AI labs that people are concerned about AI risk and demand more action on safety.
How it works:
– Browse 15+ vetted organisations (research labs, policy groups, educators)
– Donate directly through their own channels
– Record your donation to join the public count
Listed organisations include MIRI, Pause AI, Alignment Research Center, Doom Debates, and others working to reduce large-scale AI risks or raise awareness of them.
Working on AI safety and want to be listed? Drop me a line: https://aisafetypledge.org/contact
Tech: Built with Cloudflare Workers + D1, server-side rendered with Hono/JSX.
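For the curious, here’s roughly what that looks like. This is a minimal sketch, not the actual codebase: the `pledges` table, the `DB` D1 binding, and the route names are illustrative assumptions.

    // src/index.tsx — illustrative Hono app on Cloudflare Workers with a D1 binding.
    // Assumes a `pledges` table, a D1 binding named DB in wrangler config,
    // and jsxImportSource set to "hono/jsx".
    import { Hono } from 'hono'

    type Bindings = { DB: D1Database }

    const app = new Hono<{ Bindings: Bindings }>()

    // Record a pledge after the donor has given directly to an organisation.
    app.post('/pledges', async (c) => {
      const { org, amountGbp } = await c.req.json<{ org: string; amountGbp: number }>()
      await c.env.DB
        .prepare('INSERT INTO pledges (org, amount_gbp) VALUES (?, ?)')
        .bind(org, amountGbp)
        .run()
      return c.json({ ok: true }, 201)
    })

    // Server-side render the running tally with Hono's JSX.
    app.get('/', async (c) => {
      const row = await c.env.DB
        .prepare('SELECT COUNT(*) AS pledgers, COALESCE(SUM(amount_gbp), 0) AS total FROM pledges')
        .first<{ pledgers: number; total: number }>()
      return c.html(
        <p>
          £{row?.total ?? 0} pledged by {row?.pledgers ?? 0} people and counting
        </p>
      )
    })

    export default app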
So, if you’re spending $20 or $200 this month on AI subscriptions or API fees, consider matching it with a donation towards AI safety. Your pledge on our website helps send a powerful signal about AI risk to policymakers and AI labs.
Curious to hear what HN thinks about the approach.