LLMs for Alignment Research: a safety priority?