Ought will host a factored cognition “Lab Meeting” on Friday, September 16, from 9:30 to 10:30 AM PT.

We'll share the progress we've made using language models to decompose reasoning tasks into subtasks that are easier to perform and evaluate. This is part of our work on supervising process, not outcomes. It's easier for us to show you than to tell you about it in a post (though written updates will hopefully follow).
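To give a flavor of the pattern ahead of the meeting, here is a toy sketch of answering a question by decomposition. It is illustrative only, not our actual implementation or tooling: call_model is a hypothetical placeholder for a single language model call, and the prompts are made up for the example.

```python
import asyncio

async def call_model(prompt: str) -> str:
    # Hypothetical placeholder for a single language model call;
    # swap in whatever LM client you use.
    raise NotImplementedError

async def decompose(question: str) -> list[str]:
    # Ask the model for subquestions that are easier to answer and evaluate.
    response = await call_model(
        f"What subquestions would you answer first in order to answer:\n{question}\n"
        "List one subquestion per line."
    )
    return [line.strip("- ").strip() for line in response.splitlines() if line.strip()]

async def answer_by_decomposition(question: str) -> str:
    subquestions = await decompose(question)
    # Answer each subquestion independently; each step stays small enough to inspect.
    subanswers = await asyncio.gather(*(call_model(q) for q in subquestions))
    notes = "\n".join(f"Q: {q}\nA: {a}" for q, a in zip(subquestions, subanswers))
    # Compose the sub-answers into an answer to the original question.
    return await call_model(f"{notes}\n\nUsing the answers above, answer:\n{question}")
```

Because every subquestion and sub-answer is an explicit intermediate step, each one can be read and evaluated on its own, which is what makes supervising the process (rather than only the outcome) possible.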

Then we'll cover outstanding research directions we see and plan to work on, many of them almost shovel-ready. If the alignment community can parallelize this work across different alignment research teams, we can make progress faster. We'd love to coordinate with other alignment researchers thinking about task decomposition, process supervision, factored cognition, and IDA-like approaches (where it's efficient to do so). We want to save you time and mistakes if we can!

What is the agenda?

  1. 30 min | Updates on Ought’s work decomposing reasoning tasks
    1. The specific alignment problems we’re trying to solve and our vision of a solution that mitigates these risks (see also: Supervise Process, not Outcomes)
    2. Early progress on decomposing reasoning about evidence quality in randomized controlled trials
    3. Our tools for building and debugging reasoning traces of language models (preview)
  2. 15 min | Related research directions we’re excited about and how they fit in, e.g.
    1. Automating evaluation through critique models or verifier models
    2. Distillation
    3. Comparing scaling trends for process-based systems vs. end-to-end systems
    4. Testing process-based systems for adversarial robustness
  3. 15 min | Q&A

There will be more to discuss than we can fit into an hour. We’ll get to what we can and consider making this a regular meeting if there’s appetite (likely with more sharing from other researchers)! 

Who should attend?

You should attend if:

  1. You are interested in Ought’s research and want updates. 
  2. You want to build on what Ought has learned from doing this research.
  3. You want to use our tools for running and debugging compositional language model tasks.
  4. You want concrete research ideas in this domain. 
  5. You are not a researcher but want to learn how other backgrounds can support this work (engineers can build debugging infrastructure, non-ML researchers can help create datasets, etc.). 

How can I attend? 

You can register for the Lab Meeting here. Email jungwon@ought.org if you have any questions!

The meeting will be recorded & shared. 

1 comment:

The video from the factored cognition lab meeting is up:

Description:

Ought cofounders Andreas and Jungwon describe the need for process-based machine learning systems. They explain Ought's recent work decomposing questions to evaluate the strength of findings in randomized controlled trials. They walk through ICE, a beta tool for chaining language model calls together. Lastly, they outline concrete research directions and how others can contribute.

Outline:

0:00 - 2:00 Opening remarks
2:00 - 2:30 Agenda
2:30 - 9:50 The problem with end-to-end machine learning for reasoning tasks
9:50 - 15:15 Recent progress | Evaluating the strength of evidence in randomized controlled trials
15:15 - 17:35 Recent progress | Intro to ICE, the Interactive Composition Explorer
17:35 - 21:17 ICE | Answer by amplification
21:17 - 22:50 ICE | Answer by computation
22:50 - 31:50 ICE | Decomposing questions about placebo
31:50 - 37:25 Accuracy and comparison to baselines
37:25 - 39:10 Outstanding research directions
39:10 - 40:52 Getting started in ICE & The Factored Cognition Primer
40:52 - 43:26 Outstanding research directions
43:26 - 45:02 How to contribute without coding in Python
45:02 - 45:55 Summary
45:55 - 1:13:06 Q&A

The Q&A had lots of good questions.