AI ALIGNMENT FORUM
Researchers in value alignment theory
Written by Eliezer Yudkowsky, Paul Christiano, et al. Last updated 23rd Feb 2016.
This page lists researchers in AI alignment.
Eliezer Yudkowsky (founder, MIRI)
Nick Bostrom (founder, FHI)
3q (MIRI; parametric polymorphism, the Procrastination Paradox, and numerous other developments in Vingean reflection)
Orthonormal (MIRI; modal agents)
Stuart Armstrong (FHI; utility indifference)
Paul Christiano (UC Berkeley; approval-directed agents, previously proposed a formalization of indirect normativity)
Stuart Russell (UC Berkeley; author of Artificial Intelligence: A Modern Approach; previously published on theories of reflective optimality; currently interested in inverse reinforcement learning)
Jessicat (MIRI; reflective oracles)
Andrew Critch (MIRI)
Scott Garrabrant (MIRI; logical probabilities)
So8res (previously a MIRI researcher, now Executive Director at MIRI)
Parents: AI alignment
Children: Nick Bostrom