AI Alignment Intro Materials
• Applied to AI Control: Improving Safety Despite Intentional Subversion, 5d ago
• Applied to 5-day Intro to Transformative AI course by Li-Lian Ang, 13d ago
• Applied to Doing Nothing Utility Function by k64, 3mo ago
• Applied to AI Alignment and the Quest for Artificial Wisdom by Madhusudhan Pathak, 5mo ago
• Applied to UC Berkeley course on LLMs and ML Safety by Ruben Bloom, 5mo ago
• Applied to So you want to work on technical AI safety by George Wang, 6mo ago
• Applied to Talk: AI safety fieldbuilding at MATS by Ryan Kidd, 6mo ago
• Applied to Podcast interview series featuring Dr. Peter Park by jacobhaimes, 9mo ago
• Applied to INTERVIEW: Round 2 - StakeOut.AI w/ Dr. Peter Park by jacobhaimes, 9mo ago
• Applied to INTERVIEW: StakeOut.AI w/ Dr. Peter Park by jacobhaimes, 10mo ago
• Applied to A starter guide for evals by Marius Hobbhahn, 1y ago
• Applied to Hackathon and Staying Up-to-Date in AI by jacobhaimes, 1y ago
• Applied to Interview: Applications w/ Alice Rigg by jacobhaimes, 1y ago
• Applied to Into AI Safety: Episode 3 by jacobhaimes, 1y ago
• Applied to Into AI Safety Episodes 1 & 2 by jacobhaimes, 1y ago
plex, v1.4.0, Nov 5th 2023 GMT (+51/-26)
• Stampy's AI Safety Info (extensive interactive FAQ)
• Scott Alexander's Superintelligence FAQ
• The MIRI Intelligence Explosion FAQ
• The Stampy.AI wiki project
• The AGI Safety Fundamentals courses
• Superintelligence (book)
• Applied to Into AI Safety - Episode 0 by jacobhaimes, 1y ago
• Applied to Documenting Journey Into AI Safety by jacobhaimes, 1y ago