General Intelligence
• Applied to "How to solve the misuse problem assuming that in 10 years the default scenario is that AGI agents are capable of synthetizing pathogens" by jeremtti, 1mo ago
• Applied to "The default scenario for the next 50 years" by Julien, 1mo ago
• Applied to "How Should We Measure Intelligence Models: Why Use Frequency of Elemental Information Operations" by RobertM, 2mo ago
• Applied to "Singular Learning Theory for Dummies" by Rahul Chand, 2mo ago
• Applied to "The Hopium Wars: the AGI Entente Delusion" by Max Tegmark, 3mo ago
• Applied to "On Measuring Intellectual Performance - personal experience and several thoughts" by Alexander Gufan, 3mo ago
• Applied to "How does someone prove that their general intelligence is above average?" by Michael Y. Zuo, 3mo ago
• Applied to "What bootstraps intelligence?" by invertedpassion, 4mo ago
• Applied to "On agentic generalist models: we're essentially using existing technology the weakest and worst way you can use it" by Yuli_Ban, 4mo ago
• Applied to "Four Phases of AGI" by Gabe M, 5mo ago
• Applied to "Some comments on intelligence" by Ruben Bloom, 5mo ago
• Applied to "What can we learn from orcas?" by Jonas Braadbaart, 7mo ago
• Applied to "Towards a New Ontology of Intelligence" by Tara, 7mo ago
• Applied to "The Alignment Problem No One Is Talking About" by James Stephen Brown, 8mo ago
• Applied to "What are some posthumanist/more-than-human approaches to definitions of intelligence and agency? Particularly in application to AI research." by Eli Hiton, 9mo ago