Super interesting, thanks!
If you run it again, you might want to think about standardizing the wording of the questions - it varies from 'will / is' to 'is likely' to 'plausible', which makes it hard to compare between questions. 'Plausible' in particular is quite a fuzzy word: for some respondents it might mean a probability of 1% or more, while for others it might just mean it's not completely impossible (if a movie had that storyline, they'd be okay with it).
Fair. For better or worse, much of this variation came from piloting: pilot participants repeatedly nudged us toward framings that were perceived as controversial or up for debate.
I was part of a group that ran a PhilPapers-style survey and metasurvey targeting NLP researchers who publish at venues like ACL. Results are here (Tweet-thread version). It didn't target AGI timelines, but had some other questions that could be of interest to people here: