Links

AI Safety

  • AISafety.com – An online hub run by a global team of volunteers and professionals from various disciplines who believe AI poses a grave risk of extinction to humanity. The site offers comprehensive, accurate, and up-to-date resources to educate and empower people working to mitigate existential risks from artificial intelligence, along with interactive features such as a 3-minute survey that helps shape its content strategy and priorities.
  • Model Evaluation & Threat Research (METR) – A research organization whose mission is to develop scientific methods to assess catastrophic risks stemming from AI systems’ autonomous capabilities and to enable good decision-making about their development.
  • Human-Centered AI (HAI) 2025 AI Index Report – The 2025 edition of the annual AI Index Report from the Stanford Institute for Human-Centered AI (HAI), an interdisciplinary institute established in 2019 to advance AI research, education, policy, and practice.
  • NIST AI Risk Management Framework – The Artificial Intelligence Risk Management Framework (AI RMF 1.0), published by the National Institute of Standards and Technology (NIST).
  • Safe Superintelligence Inc (SSI) – A company founded by a lean team of elite engineers and researchers and dedicated to developing safe superintelligence, which it treats as the most important technical challenge of our era. Its approach is to advance AI capabilities as rapidly as possible while ensuring that safety always stays ahead.

AI Community and Personal Sites

  • Future of Life Institute – A non-profit organization dedicated to reducing global catastrophic risks from powerful technologies, with a specific focus on artificial intelligence. It works to ensure that AGI (artificial general intelligence) is developed safely and remains beneficial to humanity through policy advocacy, safety research grants, and public outreach.
    • Contact your Legislators – If you live in the United States, you can use this form to contact your legislators about the issue of unsafe AI development.
  • LessWrong – A website and community with a strong interest in AI, and specifically in making powerful AI systems safe and beneficial.
  • P(doom) Calculator – An interactive online tool for estimating your personal probability of AI-induced catastrophe (“p(doom)”). You enter a best guess and an uncertainty range (± points) for each stage of AI development, and the tool produces outputs such as 10th/90th percentiles and highlights the stage with the widest uncertainty, without imposing a predefined methodology or assumptions (see the sketch after this list for one way such an aggregation can work). You can also share your parameters to contribute to aggregate community statistics, accessible via a dedicated stats page.
  • Sam Altman – The OpenAI CEO’s personal site, where he shares insights, updates, and reflections on artificial intelligence advancements, infrastructure developments, economic impacts, and ethical considerations in AI’s societal role.
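
The P(doom) Calculator’s own aggregation method isn’t documented on this page, so the following Python sketch only illustrates one plausible approach: treat each stage’s best guess and ± uncertainty as a clipped normal distribution, sample the stages independently, multiply them, and read off the 10th/50th/90th percentiles. The stage names, numbers, and Monte Carlo method are illustrative assumptions, not taken from the tool.

```python
import random

# Hypothetical stages: (best guess, +/- uncertainty), as probabilities in [0, 1].
STAGES = {
    "AGI is developed this century": (0.80, 0.10),
    "AGI becomes superintelligent": (0.70, 0.15),
    "Superintelligence ends up misaligned": (0.50, 0.25),
    "Misalignment leads to human extinction": (0.60, 0.20),
}

def sample_pdoom(stages, n=50_000, seed=0):
    """Monte Carlo aggregation: sample each stage, multiply, return percentiles."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        p = 1.0
        for best, spread in stages.values():
            # Treat the +/- spread roughly as one standard deviation, clip to [0, 1].
            p *= min(1.0, max(0.0, rng.gauss(best, spread)))
        samples.append(p)
    samples.sort()
    pick = lambda q: samples[int(q * (n - 1))]
    return pick(0.10), pick(0.50), pick(0.90)

p10, p50, p90 = sample_pdoom(STAGES)
print(f"p(doom) ~ {p50:.0%} (10th-90th percentile: {p10:.0%} to {p90:.0%})")
```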

Books

  • If Anyone Builds It, Everyone Dies – “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All” is a 2025 book by AI alignment pioneers Eliezer Yudkowsky and Nate Soares. It argues that the development of artificial superintelligence poses an existential threat to humanity, detailing how misaligned systems could lead to extinction through scenarios such as resource acquisition or unintended escalation. Combining theory, evidence, and a fictional extinction narrative with a call for an immediate global halt to superintelligent AI research, the authors emphasize that humanity must prioritize survival over innovation to avert this “suicide race.”

Podcasts

  • Doom Debates (www.youtube.com/@DoomDebates) – A podcast and YouTube channel hosted by Liron Shapira that features debates, interviews, and discussions with AI experts, ethicists, and thinkers on the existential risks of artificial general intelligence (AGI) and superintelligence, aiming to raise mainstream awareness of potential human extinction scenarios.
  • Future of Life Institute (https://www.youtube.com/@futureoflifeinstitute) – The YouTube channel of the Future of Life Institute (FLI), a nonprofit working to reduce global catastrophic and existential risk from powerful technologies, in particular artificial intelligence (AI), biotechnology, and nuclear weapons. The Institute’s work has three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world’s leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
