All Questions — AI Alignment Forum
Top Questions

Score | Title | Authors | Age | Answers
52 | Have LLMs Generated Novel Insights? | abramdemski, Cole Wyeth, Kaj_Sotala | 1y | 20
33 | Why The Focus on Expected Utility Maximisers? | DragonGod, Scott Garrabrant | 3y | 1
37 | Does Agent-like Behavior Imply Agent-like Architecture? | Scott Garrabrant | 7y | 3
41 | Forecasting Thread: AI Timelines | Amandango, Daniel Kokotajlo, Ben Pace, datscilly | 6y | 33
17 | Are You More Real If You're Really Forgetful? | Thane Ruthenis, Charlie Steiner | 1y | 4
Recent Activity

Score | Title | Authors | Age | Answers
52 | Have LLMs Generated Novel Insights? | abramdemski, Cole Wyeth, Kaj_Sotala | 1y | 20
33 | Why The Focus on Expected Utility Maximisers? | DragonGod, Scott Garrabrant | 3y | 1
37 | Does Agent-like Behavior Imply Agent-like Architecture? | Scott Garrabrant | 7y | 3
7 | Is CIRL a promising agenda? | Chris_Leong | 4y | 0
41 | Forecasting Thread: AI Timelines | Amandango, Daniel Kokotajlo, Ben Pace, datscilly | 6y | 33
17 | Are You More Real If You're Really Forgetful? | Thane Ruthenis, Charlie Steiner | 1y | 4
45 | why assume AGIs will optimize for fixed goals? | nostalgebraist, Rob Bensinger | 4y | 3
27 | What convincing warning shot could help prevent extinction from AI? | Charbel-Raphaël, cozyfractal, peterbarnett | 2y | 2
8 | Egan's Theorem? | johnswentworth | 6y | 7
40 | Seriously, what goes wrong with "reward the agent when it makes you smile"? | TurnTrout, johnswentworth | 4y | 13
14 | Is weak-to-strong generalization an alignment technique? | cloud | 1y | 1
9 | What is the most impressive game LLMs can play well? | Cole Wyeth | 1y | 8