AI ALIGNMENT FORUM
Yaakov T
Wiki Contributions
Transformative AI (1y) (+34/-19)
Transformative AI (1y) (+178/-169)
Language Models (2y) (+1122/-630)
Corrigibility (2y) (+761/-10)
Comments
[Intro to brain-like-AGI safety] 9. Takeaways from neuro 2/2: On AGI motivation
Yaakov T · 2y
But in that kind of situation, wouldn't those people also pick A over B for the same reason?