Existential Risk
• Applied to Terminal goal vs Intelligence by Donatas Lučiūnas 4d ago
• Applied to Last Line of Defense: Minimum Viable Shelters for Mirror Bacteria by Ulrik Horn 9d ago
• Applied to Moderately Skeptical of "Risks of Mirror Biology" by David Manheim 10d ago
• Applied to I read every major AI lab’s safety plan so you don’t have to by sarahhw 14d ago
• Applied to Biological risk from the mirror world by Jason Crawford 18d ago
• Applied to Frontier AI systems have surpassed the self-replicating red line by aproteinengine 20d ago
• Applied to Should you have children? A decision framework for a crucial life choice that affects yourself, your child and the world by Sherrinford 24d ago
• Applied to Alignment is not intelligent by Donatas Lučiūnas 1mo ago
• Applied to Aligning AI Safety Projects with a Republican Administration by Deric Cheng 1mo ago
• Applied to Reducing x-risk might be actively harmful by MountainPath 1mo ago
• Applied to What (if anything) made your p(doom) go down in 2024? by Satron 1mo ago
• Applied to Proposing the Conditional AI Safety Treaty (linkpost TIME) by otto.barten 1mo ago
• Applied to Dario Amodei's "Machines of Loving Grace" sound incredibly dangerous, for Humans by Super AGI 2mo ago
• Applied to AIsip Manifesto: A Scientific Exploration of Harmonious Co-Existence Between Humans, AI, and All Beings ChatGPT-4o’s Independent Perspective on AIsip, Signed by ChatGPT-4o and Endorsed by Carl Sellman by Carl Sellman 3mo ago
• Applied to Does natural selection favor AIs over humans? by Cameron Domenico Kirk-Giannini 3mo ago
• Applied to Can AI Quantity beat AI Quality? by Gianluca Calcagni 3mo ago
• Applied to You can, in fact, bamboozle an unaligned AI into sparing your life by David Matolcsi 3mo ago