This is the so-called ‘hard takeoff’ scenario, also called the FOOM model by some in the singularity world. It’s the scenario where, in the blink of an eye, a ‘godlike’ intelligence bootstraps into being, either by upgrading itself or by being created by successive generations of ancestor AIs.
It’s also, with due respect to Vernor Vinge, of whom I’m a great fan, almost certainly wrong.
It’s wrong because most real-world problems don’t scale linearly. In the real world, the interesting problems are much, much harder than that.
Ramez Naam
Interesting article, which brings up aspects of AI development that are conveniently ignored elsewhere. Specifically, there is no reason to think an emergent AI would be able to improve itself exponentially to the point of becoming ‘godlike’. Quite the contrary: there are physical constraints on how fast an AI can evolve and on how capable it can become.
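To put rough numbers on that intuition, here’s a toy sketch (my own illustration, not anything from the article): assume each capability level costs exponentially more compute to reach, while the available compute also grows exponentially, Moore’s-law style. The constants DIFFICULTY_BASE and COMPUTE_GROWTH are invented for the example.

```python
import math

# Toy model with made-up numbers: reaching capability level c costs
# DIFFICULTY_BASE ** c units of compute (the problems get exponentially
# harder), while the compute budget itself also grows exponentially.
DIFFICULTY_BASE = 2.0   # cost to reach capability c: DIFFICULTY_BASE ** c
COMPUTE_GROWTH = 2.0    # compute budget doubles every generation

def capability_over_time(generations: int) -> list[float]:
    total_compute = 0.0
    budget = 1.0
    capabilities = []
    for _ in range(generations):
        total_compute += budget    # spend this generation's budget on self-improvement
        budget *= COMPUTE_GROWTH   # hardware improves exponentially
        # Capability is whatever level the accumulated compute can afford,
        # i.e. the c solving DIFFICULTY_BASE ** c = total_compute.
        capabilities.append(math.log(total_compute, DIFFICULTY_BASE))
    return capabilities

if __name__ == "__main__":
    for gen, cap in enumerate(capability_over_time(20)):
        print(f"generation {gen:2d}: capability ~ {cap:5.2f}")
```

Under those assumptions capability climbs by roughly one level per generation: exponentially growing effort buys only linear progress, not a runaway explosion. Real scaling curves are messier than this, but the qualitative point stands.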
Would you like a self-driving car that has its own opinions? That might someday decide it doesn’t feel like driving you where you want to go? That might ask for a raise? Or refuse to drive into certain neighborhoods? Or do you want a completely non-sentient self-driving car that’s extremely good at navigating roads and listening to your verbal instructions, but that has no sentience of its own? Ask yourself the same about your search engine, your toaster, your dishwasher, and your personal computer.
Many of us want the semblance of sentience. There would be lots of demand for an AI secretary who could take complex instructions, execute on them, be a representative to interact with others, and so on. You may think such a system would need to be sentient. But once upon a time we imagined that a system that could play chess, or prove mathematical theorems, or answer phone calls, or recognize speech, would need to be sentient. It doesn’t need to be. You can have your AI secretary or AI assistant and have it be all artifice. And frankly, we’ll likely prefer it that way.
Ramez Naam
Very true! In fact, it reminds me of one of the themes in Westworld, where Ford was secretly working to ignite a spark of sentience in the hosts, while the board was actively trying to keep them ‘dumb’, fearing that more independent hosts would go off-script more often and hurt their revenues (and customers!).