Rohan Subramani

I aim to promote welfare and reduce suffering as much as possible. This has led me to work on AGI safety research. I am particularly interested in foundation model agents (FMAs): systems like AutoGPT and Devin that equip foundation models with memory, tool use, and other affordances so they can perform multi-step tasks autonomously.

Previously, I completed an undergrad in CS and Math at Columbia, where I helped run Columbia Effective Altruism and Columbia AI Alignment Club (CAIAC).
