Reflective Reasoning
Applied to Levels of Thought: from Points to Fields by HNX 4mo ago
Applied to notes on prioritizing tasks & cognition-threads by Emrik 4mo ago
Applied to Thought Experiments Website by minmi_drover 8mo ago
Applied to Replaceable Axioms give more credence than irreplaceable axioms by Yoav Ravid 1y ago
Applied to Anthropically Blind: the anthropic shadow is reflectively inconsistent by Christopher King 2y ago
Applied to Probabilistic Payor Lemma? by Abram Demski 2y ago
Yoav Ravid v1.2.0 Mar 11th 2023 GMT (+40)
Yoav Ravid v1.1.0 Mar 10th 2023 GMT (+49)
Applied to The Lens That Sees Its Flaws by Yoav Ravid 2y ago
Applied to "Arbitrary" by Yoav Ravid 2y ago
Applied to Mirrors and Paintings by Yoav Ravid 2y ago
Applied to Is Fairness Arbitrary? by Yoav Ravid 2y ago
Applied to The Meaning of Right by Yoav Ravid 2y ago
Applied to Squeezing foundations research assistance out of formal logic narrow AI. by Lauren (often wrong) 2y ago
Applied to Alignment allows "nonrobust" decision-influences and doesn't require robust grading by Alex Turner 2y ago
Applied to Reflection in Probabilistic Logic by Yoav Ravid 3y ago
Applied to What I'd change about different philosophy fields by Rob Bensinger 4y ago