It is ironic that mainstream psychology has largely renounced behaviourism, which has been recognised as both inadequate and inhuman, while computer science, thanks to philosophical misconceptions such as inductivism, still intends to manufacture human-type cognition on essentially behaviourist lines.
AGIs will indeed be capable of self-awareness — but that is because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves. This does not mean that apes who pass the mirror test have any hint of the attributes of ‘general intelligence’ of which AGI would be an artificial version. Indeed, Richard Byrne’s wonderful research into gorilla memes has revealed how apes are able to learn useful behaviours from each other without ever understanding what they are for: the explanation of how ape cognition works really is behaviouristic.
David Deutsch
Insightful article about the fundamental problems faced by current research into artificial intelligence. I’m sharing it partly because I agree with the general conclusion, and partly because of the interesting parallels with a science-fiction novel I read recently, Blindsight by Peter Watts. There, the author plays with the idea that consciousness is not a prerequisite for intelligence, and may even be a hindrance under certain circumstances. Here, David Deutsch argues that it would be relatively easy for a computer to mimic human behavior, even self-awareness (because it would just follow complex software instructions coming from human operators), but much harder for it to become truly intelligent, capable of independent thought outside its existing programming.
Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough.
David Deutsch