Clarke's First Law goes: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

But what Stuart Russell lacks in age, he makes up in distinction: he's a computer science professor at Berkeley, neurosurgery professor at UCSF, DARPA advisor, and author of the leading textbook on AI. His new book Human Compatible states that superintelligent AI is possible; Clarke would recommend we listen.

I'm only half-joking: in addition to its contents, Human Compatible is important as an artifact, a crystallized proof that top scientists now think AI safety is worth writing books about. Nick Bostrom's Superintelligence: Paths, Dangers, Strategies previously filled this role. But Superintelligence was in 2014, and by a philosophy professor. From the artifactual point of view, HC is just better – more recent, and by a more domain-relevant expert.

But if you also open up the books to see what's inside, the two defy easy comparison. Bostrom was going out on a very shaky limb to broadcast a crazy-sounding warning about what might be the most important problem humanity has ever faced, and the book made this absolutely clear. It explored various outrageous scenarios (what if the AI destroyed humanity to prevent us from turning it off? what if it put us all in cryostasis so it didn't count as destroying us? what if it converted the entire Earth into computronium?) with no excuse beyond that, outrageous or not, they might come true.