AI ALIGNMENT FORUM
Neuromorphic AI
Applied to "How do biological or spiking neural networks learn?" by Dom Polsinelli 2mo ago
Dakara v1.4.0 Dec 30th 2024 GMT (-2) LW 1
Applied to "GPT-4 implicitly values identity preservation: a study of LMCA identity management" by Ozyrus 2y ago
Applied to "Connectomics seems great from an AI x-risk perspective" by Steve Byrnes 2y ago
Applied to "[Intro to brain-like-AGI safety] 12. Two paths forward: “Controlled AGI” and “Social-instinct AGI”" by Gunnar Zarncke 2y ago
Applied to "Capabilities and alignment of LLM cognitive architectures" by Seth Herd 2y ago
Applied to "Correcting a misconception: consciousness does not need 90 billion neurons, at all" by bvbvbvbvbvbvbvbvbvbvbv 2y ago
Applied to "The alignment stability problem" by Seth Herd 2y ago
Applied to "EAI Alignment Speaker Series #1: Challenges for Safe & Beneficial Brain-Like Artificial General Intelligence with Steve Byrnes" by Ruben Bloom 2y ago
Applied to "Human preferences as RL critic values - implications for alignment" by Seth Herd 2y ago
Applied to "Clippy, the friendly paperclipper" by Seth Herd 2y ago
Applied to "Are you stably aligned?" by Seth Herd 2y ago
Applied to "Large Language Models Suggest a Path to Ems" by Richard Thiessen 2y ago
Applied to "Safety of Self-Assembled Neuromorphic Hardware" by RobertM 2y ago
Applied to "My take on Jacob Cannell’s take on AGI safety" by Steve Byrnes 2y ago
Applied to "AI researchers announce NeuroAI agenda" by Cameron Berg 2y ago