Cause Prioritization
• Applied to "A case for donating to AI risk reduction (including if you work in AI)" by Trevor L, 19d ago
• Applied to "Reducing x-risk might be actively harmful" by MountainPath, 1mo ago
• Applied to "Two arguments against longtermist thought experiments" by Amaury LORIN, 2mo ago
• Applied to "Differential knowledge interconnection" by Roman Leventov, 2mo ago
• Applied to "Does 'Ultimate Neartermism' via Eternal Inflation dominate Longtermism in expectation?" by Jordan Arel, 4mo ago
• Applied to "How bad would AI progress need to be for us to think general technological progress is also bad?" by Jim Buhler, 5mo ago
• Applied to "Why I stopped working on AI safety" by Jack Koch, 8mo ago
• Applied to "Comparing Alignment to other AGI interventions: Basic model" by Martín Soto, 9mo ago
• Applied to "Attention on AI X-Risk Likely Hasn't Distracted from Current Harms from AI" by Erich Grunewald, 1y ago
• Applied to "Preserving our heritage: Building a movement and a knowledge ark for current and future generations" by rnk8, 1y ago
• Applied to "The (short) case for predicting what Aliens value" by Jim Buhler, 1y ago
• Applied to "Five Areas I Wish EAs Gave More Focus" by Prometheus, 2y ago
• Applied to "The Bunny: An EA Short Story" by JohnGreer, 2y ago
• Applied to "What's the best ratio for Africans to starve compared to Ukrainians not dying in the war?" by Adam Zerner, 3y ago
• Applied to "Prioritization Research for Advancing Wisdom and Intelligence" by Ozzie Gooen, 3y ago
• Applied to "Announcing the Nuclear Risk Forecasting Tournament" by MichaelA, 4y ago