Conjecture (org)
v1.3.0 by Dakara, Dec 30th 2024 GMT
• Applied to Conjecture: A Roadmap for Cognitive Software and A Humanist Future of AI by Connor Leahy, 2mo ago
• Applied to Launching Applications for the Global AI Safety Fellowship 2025! by Aditya_S, 2mo ago
• Applied to My guess at Conjecture's vision: triggering a narrative bifurcation by Alexandre Variengien, 1y ago
• Applied to Barriers to Mechanistic Interpretability for AGI Safety by Connor Leahy, 1y ago
• Applied to Introducing EffiSciences’ AI Safety Unit by Charbel-Raphael Segerie, 2y ago
• Applied to Conjecture: A standing offer for public debates on AI by Andrea Miotti, 2y ago
• Applied to Critiques of prominent AI safety labs: Conjecture by Anonymous Omega, 2y ago
• Applied to Conjecture internal survey: AGI timelines and probability of human extinction from advanced AI by Maris Sala, 2y ago
• Applied to A technical note on bilinear layers for interpretability by Lee Sharkey, 2y ago
• Applied to Shah (DeepMind) and Leahy (Conjecture) Discuss Alignment Cruxes by Olivia, 2y ago
• Applied to A response to Conjecture's CoEm proposal by Kristian Freed, 2y ago
• Applied to Japan AI Alignment Conference Postmortem by Chris Scammell, 2y ago
• Applied to A couple of questions about Conjecture's Cognitive Emulation proposal by Igor Ivanov, 2y ago
• Applied to Japan AI Alignment Conference by Andrea Miotti, 2y ago
• Applied to Questions about Conjecure's CoEm proposal, 2y ago
v1.2.0 by Andrea Miotti, Mar 9th 2023 GMT