Sammy Azdoufal claims he wasn’t trying to hack every robot vacuum in the world. He just wanted to remotely control his brand-new DJI Romo vacuum with a PS5 gamepad, he tells The Verge, because it sounded fun.
But when his homegrown remote control app started talking to DJI’s servers, it wasn’t just one vacuum cleaner that replied. Roughly 7,000 of them, all around the world, began treating Azdoufal like their boss.
On Tuesday, when he showed me his level of access in a live demo, I couldn’t believe my eyes. Tens, hundreds, thousands of robots reporting for duty, each phoning home MQTT data packets every three seconds to report their serial number, which rooms they’re cleaning, what they’ve seen, how far they’ve traveled, when they’re returning to the charger, and the obstacles they encountered along the way.
I watched each of these robots slowly pop into existence on a map of the world. Nine minutes after we began, Azdoufal’s laptop had already cataloged 6,700 DJI devices across 24 different countries and collected over 100,000 of their messages. If you add the company’s DJI Power portable power stations, which also phone home to these same servers, Azdoufal had access to over 10,000 devices.
Sean Hollister
Speaking of AI coding bots taking down AWS, this story is in several ways the opposite. On one hand, human programmers evidently can, and do, deliver applications riddled with bugs and security holes, and rather serious ones in this case: the article later mentions another vulnerability so bad it won’t even be detailed until DJI has had time to fix it. On the other hand, Azdoufal used Claude Code to reverse engineer DJI’s protocols, and that is what exposed the first issue described in the paragraphs above.
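To make the scale of the leak concrete, here is a minimal sketch of what handling one of those three-second status payloads might look like on the receiving end. Everything here is an assumption: the article does not publish DJI’s actual topic or field names, so the payload shape and keys below are hypothetical, loosely based on the data the article says each robot reports (serial number, current room, distance traveled, docking status, obstacles). In a real client this logic would live in the `on_message` callback of an MQTT library such as paho-mqtt.

```python
import json

# Hypothetical three-second status payload; real DJI field names are unknown.
SAMPLE_PAYLOAD = json.dumps({
    "serial": "ROMO-1234567890",
    "current_room": "kitchen",
    "distance_traveled_m": 48.2,
    "returning_to_charger": False,
    "obstacles": ["shoe", "cable"],
})


def parse_status(raw):
    """Decode one status message and extract the fields worth cataloging."""
    msg = json.loads(raw)
    return {
        "serial": msg["serial"],
        "room": msg.get("current_room", "unknown"),
        "distance_m": float(msg.get("distance_traveled_m", 0.0)),
        "docking": bool(msg.get("returning_to_charger", False)),
        "obstacles": list(msg.get("obstacles", [])),
    }


status = parse_status(SAMPLE_PAYLOAD)
print(status["serial"], status["room"], status["obstacles"])
```

The unsettling part of the story is how little is needed beyond this: once the servers answered Azdoufal’s client, cataloging thousands of devices is just this parse step in a loop, one message at a time.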
Putting aside how awful it is to have unsecured robot vacuums that basically anyone can access to see and listen inside your house, this interplay between insecure software and the capability of new coding agents to uncover weak spots signals looming issues that could lead to more sophisticated and larger-scale breaches. This specific story appears positive only because the person directing the coding agent did the right thing by reporting his findings to a journalist and later to the company itself. Some may even see this as a positive scenario, where coding agents are used to stress test applications faster and more thoroughly than human quality control can manage.
The flip side is that others may not be so forthright, opting instead to exploit these security holes to steal data, eavesdrop on strangers, extort them with secrets discovered in this way, or sell details of the vulnerabilities on the black market. Even if a mere fraction of the bugs uncovered by AI is weaponized, they could cause heavy disruption were they to target financial or energy systems, flight controls, water management, or any number of critical systems in our interconnected digital society. We may end up in an arms-race-like scenario where different actors, from companies to government organizations to smaller hacker groups, deploy various AI agents either aggressively or defensively: to scout out and pierce the enemy’s weak points, but also to assess the gaps in their own defenses and devise fixes and countermeasures.