I wrote a letter to my representatives about AI safety and wanted to share it. There are many official and unofficial resources online that can help you find and email your reps in minutes. Physical letters and phone calls are even more effective!
This letter is released under CC0 1.0, so feel free to adapt and reuse it.

[Your representative’s name],
My name is [your name], and I am a constituent from [your town, state]. I am writing because AI development has reached a critical turning point.
I urge you to advocate for an immediate, worldwide halt to advanced AI development to prevent a global catastrophe.
Anthropic, one of the world’s leading AI companies, has claimed that their latest model Claude Mythos has found and exploited code weaknesses in every essential system on the Web, in all major browsers and operating systems.[1] These flaws have, until now, escaped the notice of human reviewers and automated tools, in some cases for decades. These systems run the world’s entire digital infrastructure; if Mythos were used maliciously, banks, hospitals, air traffic, militaries, and more could all be compromised.
This is not a theoretical threat: Mythos has already demonstrated these capabilities. The danger is concrete and immediate. If Anthropic’s claims are true, they have built a weapon of mass destruction.
The capabilities of these models are only going to increase. Anthropic is already using Mythos (a model that, by their own admission, they cannot reliably control) to develop the next, more powerful model. Other AI companies are racing to catch up.
For decades, experts in AI safety have been warning us that advanced AI presents an existential threat to humanity on par with pandemics and nuclear weapons.[2][3] Current mainstream discourse, advocating for “guardrails” and “a balanced approach,” is ten years too late. The only safe policy remaining is a coordinated, international ban on advanced AI development.
Public support for a pause is strong. An open letter from 2023 calling for a 6-month pause gained over 30,000 signatures, including many executives and researchers from the AI companies themselves.[4] A more recent letter calling for an indefinite pause has over 130,000 signatures.[5] Polling has repeatedly shown that the majority of the population is uneasy about the progress of AI, and opposes developing superintelligence until we are able to do so safely. The only way to ensure safe superintelligence is for all nations to come together and agree to a pause.
The time for “caution” has passed. The time for action is now.
Thank you for your attention.
Sincerely,
[Your Name]
1. https://www.anthropic.com/glasswing
2. https://intelligence.org/files/AIPosNegFactor.pdf
3. https://en.wikipedia.org/wiki/Statement_on_AI_Risk
4. https://futureoflife.org/open-letter/pause-giant-ai-experiments/
5. https://superintelligence-statement.org/