Atlas
Biosecurity
- modeling the world as a vulnerable and fragile entity
- dual-use research: advancing our understanding also makes us vulnerable to more diseases -> gain-of-function research
AI Safety Cruxes:
- Why can’t complex models develop complex and ill-defined reward functions? (not a crux anymore; this is the problem)
- Why can’t we make good, measurable objectives to optimize for? (this is literally the alignment problem)
- Why won’t a superintelligent AI simply be indifferent to us? (indifference is weak protection)
CDT (Causal Decision Theory) / Newcomb’s problem
Copyright © 2019-2025 Uzay Girit