The Control Problem:
How do we ensure that future smarter-than-human artificial intelligence has a positive impact on the world? Many researchers consider this one of the most consequential questions of our age.
Other terms for what we discuss here include Superintelligence, AI Safety, AGI X-risk, and the AI Alignment/Value Alignment Problem.
"People who say that real AI researchers don’t believe in safety research are now just empirically wrong." —Scott Alexander
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." —Eliezer Yudkowsky
Some Guidelines
- Be respectful
- Stay on topic
- If you are unfamiliar with the Control Problem, read at least one of the introductory links or our wiki before submitting a text post.
- This applies especially to posts claiming to solve the Control Problem or dismissing it as a non-issue; such posts aren't welcome.
- Flair your posts
Introductions to the Topic
Video Links
Recommended Reading
Important Organizations
Related Subreddits