We want to advance process-based supervision for language models. To make it easier for others to contribute to that goal, we're sharing code for writing compositional language model programs, and a tutorial that explains how to get started:

We've been using ICE as part of our work on Elicit and have found it useful in practice.

Interactive Composition Explorer (ICE)

ICE is an open-source Python library for writing, debugging, and visualizing compositional language model programs. ICE makes it easy to:

  1. Run language model recipes in different modes: humans, human+LM, LM
  2. Inspect the execution traces in your browser for debugging
  3. Define and use new language model agents, e.g. chain-of-thought agents
  4. Run recipes quickly by parallelizing language model calls
  5. Reuse component recipes such as question-answering, ranking, and verification
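
To make this concrete, here's a minimal sketch of a recipe, modeled on the opening examples in the Primer. It assumes ICE is installed with an API key configured, and that the agent interface is `recipe.agent().complete(prompt=...)` as used in the Primer; the exact API surface may differ across ICE versions.

```python
from ice.recipe import recipe


async def answer(
    question: str = "What is a compositional language model program?",
) -> str:
    # Ask the current agent (a human, a human+LM pair, or an LM,
    # depending on the mode ICE is run in) to answer the question.
    return await recipe.agent().complete(prompt=question)


recipe.main(answer)
```

Running a file like this with Python executes the recipe, and ICE records the agent calls as an execution trace that can then be inspected in the browser.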

ICE looks like this:

[Screenshot: an ICE execution trace rendered in the browser]

Factored Cognition Primer

The Factored Cognition Primer is a tutorial that explains (among other things) how to:

  1. Implement basic versions of amplification and debate using ICE
  2. Reason about long texts by combining search and generation
  3. Run decompositions quickly by parallelizing language model calls
  4. Use verification of answers and reasoning steps to improve responses
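
As a taste of the decomposition and parallelization ideas above, here is a hedged sketch of a one-level amplification step. The fixed subquestions and the `amplified_answer` helper are made up for illustration, and the agent call uses the same assumed `recipe.agent().complete(prompt=...)` interface as the earlier example.

```python
import asyncio

from ice.recipe import recipe


async def answer(question: str) -> str:
    # One agent call: the agent may be a human, human+LM, or LM.
    return await recipe.agent().complete(prompt=question)


async def amplified_answer(
    question: str = "Does creatine improve cognition?",
) -> str:
    # Illustrative fixed decomposition into subquestions.
    subquestions = [
        f"What background knowledge is needed to answer: {question}",
        f"What evidence bears on the question: {question}",
    ]
    # Awaiting the subquestion calls together runs the language model
    # requests concurrently, which is what speeds up decompositions.
    subanswers = await asyncio.gather(*(answer(q) for q in subquestions))
    context = "\n\n".join(
        f"Q: {q}\nA: {a}" for q, a in zip(subquestions, subanswers)
    )
    return await answer(
        f"{context}\n\nUsing the answers above, answer: {question}"
    )


recipe.main(amplified_answer)
```

The Primer develops this much further (recursive amplification, debate, verification of answers and reasoning steps); the sketch above only shows the basic shape of decomposing a question and running the pieces in parallel.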

The Primer looks like this:

[Screenshot: a page from the Factored Cognition Primer]

If you end up using either ICE or the Primer, consider joining our Slack. We think that factored cognition research parallelizes unusually well, and we'd like to collaborate with others who are working on recipes for cognitive tasks.

To learn more about how we've been using ICE, watch our recent Factored Cognition lab meeting.
