Shayne O'Neill

I have a bit of skepticism about the idea of using CoT reasoning for interpretability. If you really look at what CoT is doing, it's not doing much that a regular model doesn't already do; it's just optimized for a particular prompt that basically says "Show me your reasoning". The problem is, we still have to trust that it's being truthful in its reasoning. It still isn't accounting for the hidden states, the 'subconscious', to use a somewhat flawed analogy.
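
To make that concrete, here's a rough sketch of what eliciting CoT amounts to; the prompt wording and the `model.generate` call are stand-ins for illustration, not any particular library's API:

```python
# A rough sketch of how chain-of-thought output is usually elicited.
# The prompt template and `model.generate` are hypothetical stand-ins,
# not any specific library's interface.

def cot_prompt(question: str) -> str:
    # The "reasoning" is just extra text the model is asked to emit.
    return f"Q: {question}\nLet's think step by step, then give the final answer."

prompt = cot_prompt("A train travels 60 km in 1.5 hours. What is its average speed?")
# reasoning = model.generate(prompt)  # hypothetical call
# Whatever steps appear in `reasoning` come from the same forward passes and
# hidden activations as any other completion; nothing forces those steps to
# describe the computation that actually determined the answer.
```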

We are still relying on an entity we don't know whether we can trust to tell us whether it's trustworthy, and as far as ethical judgements go, that seems a little tautological.

As an analogy, we might ask a child to show their work on a simple maths problem, but that won't tell us much about the child's intuitions about the math.