AI ALIGNMENT FORUM
ozhang
Posts (sorted by new)
- Announcing the Introduction to ML Safety course — 24 points, 2y, 3 comments
- $20K In Bounties for AI Safety Public Materials — 20 points, 2y, 0 comments
- Introducing the ML Safety Scholars Program — 30 points, 3y, 0 comments
- SERI ML Alignment Theory Scholars Program 2022 — 25 points, 3y, 0 comments
- [$20K in Prizes] AI Safety Arguments Competition — 19 points, 3y, 9 comments
- ML Alignment Theory Program under Evan Hubinger — 34 points, 3y, 2 comments
Wiki Contributions