Complexity of Value
• Applied to Don't want Goodhart? — Specify the variables more by Yan 1mo ago
• Applied to Don't want Goodhart? — Specify the damn variables 1mo ago
• Applied to Intent alignment as a stepping-stone to value alignment by Seth Herd 1mo ago
• Applied to Conflating value alignment and intent alignment is causing confusion by Seth Herd 1mo ago
• Applied to Sequence overview: Welfare and moral weights by MichaelStJules 4mo ago
• Applied to Value fragility and AI takeover by Raymond Arnold 5mo ago
• Applied to You don't know how bad most things are nor precisely how they're bad. by Gunnar Zarncke 5mo ago
• Applied to Utilitarianism and the replaceability of desires and attachments by MichaelStJules 5mo ago
• Applied to A (paraconsistent) logic to deal with inconsistent preferences by B Jacobs 5mo ago
• Applied to Aggregative Principles of Social Justice by Cleo Nardo 7mo ago
• Applied to The Pointer Resolution Problem by Arun Jose 10mo ago
• Applied to An even deeper atheism by jacobjacob 1y ago
• Applied to Why Do We Engage in Moral Simplification? by Wei Dai 1y ago
• Applied to Open-ended ethics of phenomena (a desiderata with universal morality) by Ryo 1y ago
• Applied to Evaluating the historical value misspecification argument by Writer 1y ago
• Applied to The Hidden Complexity of Wishes - The Animation by Writer 1y ago
• Applied to (Thought experiment) If you had to choose, which would you prefer? 1y ago
• Applied to The cone of freedom (or, freedom might only be instrumentally valuable) by dkl9 1y ago