AI ALIGNMENT FORUM
Stances
• Applied to Humans aren't agents - what then for value learning? by JenniferRM 2y ago
• Applied to What is an agent in reductionist materialism? by JenniferRM 2y ago
• Applied to Mike Darwin on Kurzweil, Techno-Optimism, and Delusional Stances on Cryonics by JenniferRM 2y ago
• Applied to RFC: Philosophical Conservatism in AI Alignment Research by JenniferRM 2y ago
• Applied to Why is Everyone So Boring? By Robin Hanson by JenniferRM 2y ago
• Applied to Implications of the Doomsday Argument for x-risk reduction by JenniferRM 2y ago
• Applied to Death and Desperation by JenniferRM 2y ago
• Applied to Relative Abstracted Agency by JenniferRM 2y ago
• Applied to Encouraging New Users To Bet On Their Beliefs by JenniferRM 2y ago
• Applied to An Introduction to Current Theories of Consciousness by JenniferRM 2y ago
• Applied to Parasitic Language Games: maintaining ambiguity to hide conflict while burning the commons by JenniferRM 2y ago
• Applied to Conflict Theory of Bounded Distrust by JenniferRM 2y ago
• Applied to "Status" can be corrosive; here's how I handle it by JenniferRM 2y ago
• Applied to GPT-4 is easily controlled/exploited with tricky decision theoretic dilemmas. by JenniferRM 2y ago
• Applied to Smuggled frames by JenniferRM 2y ago
• Applied to Double-negation as framing by JenniferRM 2y ago
• Applied to Setting the Zero Point by JenniferRM 2y ago
• Applied to Deliberate Play by JenniferRM 2y ago
• Applied to A review of "Agents and Devices" by JenniferRM 2y ago
• Applied to Curating "The Epistemic Sequences" (list v.0.1) by JenniferRM 2y ago