Anthropics
Applied to Experimental testing: can I treat myself as a random sample? by Lucie Philippon 2d ago
Applied to A Philosophical Artifact: “Witnessing Without a Self” — A Dialogue Between Human and AI by Eric Rosenberg 13d ago
Applied to You don't actually need a physical multiverse to explain anthropic fine-tuning. by Ruben Bloom 1mo ago
Applied to For the Sake of Pleasure Alone by Greenless Mirror 2mo ago
Applied to What are the surviving worlds like? by KvmanThinking 2mo ago
Applied to Sleeping Beauty: an Accuracy-based Approach by Ruben Bloom 2mo ago
Applied to Chi Nguyen's Shortform by Chi Nguyen 2mo ago
Applied to Chance is in the Map, not the Territory by Gunnar Zarncke 3mo ago
Applied to Are You More Real If You're Really Forgetful? by Thane Ruthenis 5mo ago
Applied to Quantum Immortality: A Perspective if AI Doomers are Probably Right by Christopher King 6mo ago
Applied to Keeping it (less than) real: Against ℶ₂ possible people or worlds by quiet_NaN 7mo ago
Applied to Is there any rigorous work on using anthropic uncertainty to prevent situational awareness / deception? by Raymond Arnold 8mo ago
Applied to Does “Ultimate Neartermism” via Eternal Inflation dominate Longtermism in expectation? by Jordan Arel 8mo ago
Applied to A computational complexity argument for many worlds by Gunnar Zarncke 8mo ago
Applied to "Which Future Mind is Me?" Is a Question of Values by dadadarren 8mo ago
Applied to Relativity Theory for What the Future 'You' Is and Isn't by Gunnar Zarncke 9mo ago
Applied to Doomsday Argument and the False Dilemma of Anthropic Reasoning by RobertM 10mo ago
Applied to Datasets that change the odds you exist by Gunnar Zarncke 10mo ago