coo @ ought.org. By default, please assume I am uncertain about everything I say unless I say otherwise :)
I also appreciated reading this.
This was really helpful and fun to read. I'm sure it was nontrivial to get to this level of articulation and clarity. Thanks for taking the time to package it for everyone else to benefit from.
If anyone has questions for Ought specifically, we're happy to answer them as part of our AMA on Tuesday.
I think we could play an endless and uninteresting game of "find a real-world example for / against factorization."
To me, the more interesting discussion is around building better systems for updating on alignment research progress -
Access
Alignment-focused policymakers / policy researchers should also be in positions of influence.
Knowledge
I'd add a bunch of human / social topics to your list, e.g.
Research methodology / Scientific “rationality,” Productivity, Tools
I'd be really excited to have people use Elicit with this motivation. (More context here and here.)
Re: competitive games of introducing new tools, we ran an internal speed test, Elicit vs. Google, to see which tool was more efficient for finding answers or mapping out a new domain in 5 minutes. We're broadly excited to structure and support competitive knowledge work and to optimize research this way.
This is exactly what Ought is doing as we build Elicit into a research assistant using language models / GPT-3. We're studying researchers' workflows and identifying ways to productize or automate parts of them. In that process, we have to figure out how to turn GPT-3, a generalist by default, into a specialist that is a useful thought partner for domains like AI policy. We have to learn how to take feedback from the researcher and convert it into better results within a session, per person, per research task, and across the entire product. Another spin on it: we have to figure out how researchers can use GPT-3 to become expert-like in new domains.
We’re currently using GPT-3 for classification, e.g. “take this spreadsheet and determine whether each entity in Column A is a non-profit, government entity, or company.” Some concrete examples of alignment-related work that have come up as we build this:
We'd love to talk to people interested in exploring this approach to alignment!
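To make the spreadsheet-classification use case above concrete, here's a minimal sketch of how few-shot classification with a language model might look. The prompt format, the example entities, and the helpers `build_prompt` / `parse_label` are illustrative assumptions, not Ought's actual implementation; the raw completion would come from an API call (e.g. to GPT-3) that is omitted here.

```python
# Hypothetical sketch: classifying spreadsheet entities with a few-shot
# language-model prompt, as in the "non-profit / government entity / company"
# example above. Prompt layout and parsing are assumptions for illustration.

LABELS = ["non-profit", "government entity", "company"]

# Assumed few-shot examples; in practice these would come from labeled data.
FEW_SHOT = [
    ("Red Cross", "non-profit"),
    ("Department of Energy", "government entity"),
    ("Acme Corp", "company"),
]

def build_prompt(entity: str) -> str:
    """Assemble a few-shot classification prompt for one Column A entity."""
    lines = ["Classify each entity as non-profit, government entity, or company."]
    for name, label in FEW_SHOT:
        lines.append(f"Entity: {name}\nLabel: {label}")
    lines.append(f"Entity: {entity}\nLabel:")
    return "\n\n".join(lines)

def parse_label(completion: str) -> str:
    """Map the model's raw completion onto one of the allowed labels."""
    text = completion.strip().lower()
    for label in LABELS:
        if text.startswith(label):
            return label
    return "unknown"  # fall back rather than trusting a free-form answer
```

Constraining the output to a fixed label set, rather than accepting free-form text, is one small example of the feedback-and-reliability work described above.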
I generally agree with this but think the alternative goal of "make forecasting easier" is just as good, might actually make aggregate forecasts more accurate in the long run, and may require things that seemingly undermine the virtue of precision.
More concretely, if an underdefined question makes it easier for people to share whatever beliefs they already have, and then facilitates rich conversation among those people, that's better than if a highly specific question prevents people from making a prediction at all. At least as much of the value of making public, visual predictions like this comes from the ensuing conversation and feedback as from the precision of the forecasts themselves, if not more.
Additionally, a lot of assumptions get made at the time the question is defined more precisely, which could prematurely limit the space of conversation or ideas. There are good reasons why different people define AGI the way they do, or the moment of "AGI arrival" the way they do, that might not come up if the question askers had taken a point of view.
This was really interesting, thanks for running and sharing! Overall this was a positive update for me.
I think this just links to PhilPapers, not your survey results?