11 November 2022

IEEE Spectrum: “For Better or Worse, Tesla Bot is Exactly What We Expected”

Most of what we saw in the presentation was hardware. And hardware is important and a necessary first step, but software is arguably a much more significant challenge when it comes to making robotics useful in the real world. Understanding and interacting with the environment, reasoning and decision-making, the ability to learn and be taught new tasks—these are all necessary pieces of the puzzle of a useful robot that Tesla is trying to put together, but they’re all also extremely difficult, cutting-edge problems, despite the enormous amount of work that the research community has put into them.

And so far, we (still) have very little indication that Tesla is going to be any better at tackling this stuff than anyone else. There doesn’t appear to be anything all that special or exciting from Tesla that provides any unique foundation for Musk’s vision in a way that’s likely to allow the company to outpace other companies working on similar things. I’ll reiterate what I said a year ago: The hard part is not building a robot, it’s getting that robot to do useful stuff.

Evan Ackerman

Interesting rundown of the announcements at this year’s Tesla AI Day, in particular the unveiling of the Tesla Bot prototype. Tesla seems to be going all-in on synergies between the humanoid robot and its existing automobile business, porting software components developed for its vehicles over to the robot and suggesting that, once fully functional, the robots would labor on Tesla’s assembly lines, among other uses. A solid plan in theory, although the road to achieving it appears murky and convoluted.

Images showing how the Optimus robot sees the world through cameras and computer-generated views
Tesla showed how sensing used in its vehicles can help the Optimus robot navigate

As the author stresses throughout the piece, obstacle avoidance is quite different for cars – where the goal is for the vehicle to stay reasonably far away from most objects at all times – than for bipeds – where a human-analog robot needs to decide which obstacles to avoid (like walls) and which to approach (like doors, tables, closets, and so on). As for populating Tesla factories with robots, most tasks could be performed just as easily (or better) by nonhumanoid robots. We’ll have to wait and see what sort of results Tesla can achieve in the coming years.

I still fundamentally disagree with the implied “humanoid robots are just cars with legs” thing, but it’s impressive that they were able to port much at all—I was highly skeptical of that last year, but I’m more optimistic now, and being able to generalize between platforms (on some level) could be huge for both Tesla and for autonomous systems more generally. I’d like more details on what was easy, and what was not.

What we’re seeing above, though, is one of the reasons I was skeptical. That occupancy grid (where the robot’s sensors are detecting potential obstacles) on the bottom is very car-ish, in that the priority is to make absolutely sure that the robot stays very far away from anything it could conceivably run into.

By itself, this won’t transfer well to a humanoid robot that needs to directly interact with objects to do useful tasks. I’m sure there are lots of ways to adapt the Tesla car’s obstacle-avoidance system, but that’s the question: How hard is that transfer, and is it better than using a solution developed specifically for mobile manipulators?
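The difference between the two costmap philosophies can be sketched in a few lines. This is a hypothetical one-dimensional illustration, not Tesla’s actual system: a car-style planner inflates cost around every occupied cell so the vehicle never gets close to anything, while a manipulation-aware planner would have to exempt cells tagged as interaction targets (a door, a table) so the robot is allowed to approach and touch them. All function and variable names here are invented for the example.

```python
def inflated_cost(occupied, radius):
    """Car-style costmap: high cost within `radius` cells of any obstacle."""
    n = len(occupied)
    cost = [0.0] * n
    for i, occ in enumerate(occupied):
        if occ:
            # Inflate: mark every cell within `radius` of the obstacle as costly.
            for j in range(max(0, i - radius), min(n, i + radius + 1)):
                cost[j] = 1.0
    return cost

def manipulation_cost(occupied, targets, radius):
    """Same inflation, but cells marked as interaction targets stay approachable."""
    cost = inflated_cost(occupied, radius)
    for t in targets:
        cost[t] = 0.0  # allow contact with objects the robot intends to manipulate
    return cost

# Two obstacles; the one at cell 6 is (say) a door the robot must reach.
occupied = [False, False, True, False, False, False, True, False, False, False]
car_map = inflated_cost(occupied, radius=1)               # avoids both obstacles
robot_map = manipulation_cost(occupied, [6], radius=1)    # cell 6 stays reachable
```

A car planner would treat both occupied cells identically; the manipulation-aware version drives the cost at the target cell back to zero while still inflating around everything else, which is exactly the kind of behavioral change a straight port of a vehicle costmap would not give you for free.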

Personally, I strongly suspect that Elon Musk secretly wants to build himself a robot body and transfer his mind into its processing unit à la Westworld – hence his parallel project with Neuralink – to become a near-immortal ruler of his Mars colony.
