2021 MIRI Conversations
This sequence is a (chronological) series of chatroom conversation logs about artificial general intelligence. The conversations cover a wide range of topics, beginning with discussions of alignment difficulty.
Short summaries of each post, and links to audio versions, are available here. There are also two related posts released shortly before this sequence:
- Discussion with Eliezer Yudkowsky on AGI Interventions
- Comments [by Nate Soares] on Joe Carlsmith's "Is power-seeking AI an existential risk?"
Rob Bensinger edited and posted this sequence, and Matthew Graves helped with much of the formatting.
Part One (Primarily Richard Ngo and Eliezer Yudkowsky)
Part Two (Primarily Paul Christiano and Eliezer Yudkowsky)