Regardless of what you might think about AI, the reality is that just about every successful deployment has either one of two expedients: It has a person somewhere in the loop, or the cost of failure, should the system blunder, is very low. In 2002, iRobot, a company that I cofounded, introduced the first mass-market autonomous home-cleaning robot, the Roomba, at a price that severely constricted how much AI we could endow it with. The limited AI wasn't a problem, though. Our worst failure scenarios had the Roomba missing a patch of floor and failing to pick up a dustball.
That same year we started deploying the first of thousands of robots in Afghanistan and then Iraq to be used to help troops disable improvised explosive devices. Failures there could kill someone, so there was always a human in the loop giving supervisory commands to the AI systems on the robot.
Rodney Brooks
I very much appreciate Rodney Brooks’ grounded perspective on artificial intelligence – and I am a bit surprised to see that I haven’t quoted any of his articles on my blog before. I have certainly written about enough examples that fall neatly into these two categories: in military applications, human supervision is expected to continue for the foreseeable future, whereas for games or automated translation the cost of failure is low to nonexistent. And when companies overstep these clear boundaries, AI systems often fail spectacularly, as happened with the Apple Card algorithms and with automated piloting systems. Even Google’s Waymo, arguably the most advanced company in the field of self-driving cars, still employs remote overseers to guide its vehicles out of irregular situations.
AI systems power the speech and language understanding of our smart speakers and the entertainment and navigation systems in our cars. We, the consumers, soon adapt our language to each such AI agent, quickly learning what they can and can’t understand, in much the same way as we might with our children and elderly parents. The AI agents are cleverly designed to give us just enough feedback on what they’ve heard us say without getting too tedious, while letting us know about anything important that may need to be corrected. Here, we, the users, are the people in the loop. The ghost in the machine, if you will.
Ask not what your AI system can do for you, but instead what it has tricked you into doing for it.
The closing lines have strong echoes of Dune, pre-Butlerian Jihad.