AI ALIGNMENT FORUM
Future of Life Institute
Applied to AXRP Episode 38.7 - Anthony Aguirre on the Future of Life Institute by DanielFilan, 1mo ago
Dakara, v1.5.0, Dec 30th 2024 GMT (-38): changed name from Future of Life Institute (FLI) to Future of Life Institute
Applied to Can AI agents learn to be good? by Ram Rachum, 7mo ago
Applied to AI Existential Safety Fellowships by mmfli, 1y ago
Applied to FLI podcast series, "Imagine A World", about aspirational futures with AGI by Jackson Wagner, 1y ago
Applied to AI Risk & Policy Forecasts from Metaculus & FLI's AI Pathways Workshop by Will Aldred, 2y ago
Applied to We might need to rethink the Hard Reset, aka the AI Pause. by Jonas Kgomo, 2y ago
Applied to FLI Podcast: Connor Leahy on AI Progress, Chimps, Memes, and Markets (Part 1/3) by Andrea Miotti, 2y ago
Applied to A bridge to Dath Ilan? Improved governance on the critical path to AI alignment. by Jackson Wagner, 3y ago
Multicore, v1.4.0, Dec 29th 2021 GMT
Applied to Increased Availability and Willingness for Deployment of Resources for Effective Altruism and Long-Termism by Evan_Gaensbauer, 3y ago
v1.3.0, Dec 13th 2020 GMT (minor edit): changed description, added one (1) tag link.
Multicore, v1.2.0, Dec 4th 2020 GMT (+4/-3)
Multicore, v1.1.0, Dec 4th 2020 GMT (+262)
Applied to Research Priorities for Artificial Intelligence: An Open Letter by Multicore, 4y ago
Applied to [link] FLI's recommended project grants for AI safety research announced by Multicore, 4y ago
Applied to FLI awards prize to Arkhipov’s relatives by Multicore, 4y ago
Applied to FLI Podcast: On Superforecasting with Robert de Neufville by Multicore, 4y ago
Applied to 2019 AI Alignment Literature Review and Charity Comparison by Multicore, 4y ago