Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don’t lead to our extinction.