AI ALIGNMENT FORUM
Books of LessWrong
Alignment & Agency
An Orthodox Case Against Utility Functions · Abram Demski · 5y · 62 karma · 45 comments
The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables · johnswentworth · 4y · 53 karma · 34 comments
Alignment By Default · johnswentworth · 4y · 63 karma · 72 comments
An overview of 11 proposals for building safe advanced AI · Evan Hubinger · 4y · 68 karma · 31 comments
The ground of optimization · Alex Flint · 4y · 93 karma · 50 comments
Search versus design · Alex Flint · 4y · 34 karma · 30 comments
Inner Alignment: Explain like I'm 12 Edition · Rafael Harth · 4y · 58 karma · 12 comments
Inaccessible information · Paul Christiano · 4y · 44 karma · 9 comments
AGI safety from first principles: Introduction · Richard Ngo · 4y · 39 karma · 15 comments
Is Success the Enemy of Freedom? (Full) · alkjash · 4y · 39 karma · 0 comments