14 October 2024

TechCrunch: “Meta confirms it may train its AI on any image you ask Ray-Ban Meta AI to analyze”

In other words, the company is using its first consumer AI device to create a massive stockpile of data that could be used to create ever-more powerful generations of AI models. The only way to “opt out” is to simply not use Meta’s multimodal AI features in the first place.

The implications are concerning because Ray-Ban Meta users may not understand they’re giving Meta tons of images – perhaps showing the inside of their homes, loved ones, or personal files – to train its new AI models. Meta’s spokespeople tell me this is clear in the Ray-Ban Meta’s user interface, but the company’s executives either initially didn’t know or didn’t want to share these details with TechCrunch. We already knew Meta trains its Llama AI models on everything Americans post publicly on Instagram and Facebook. But now, Meta has expanded this definition of “publicly available data” to anything people look at through its smart glasses and ask its AI chatbot to analyze.

Maxwell Zeff

I’ve been vaguely following the news around these new smart glasses from Meta to see how they would fare compared to Google’s failed attempt with Glass a decade ago, and Apple’s massive VR helmet. This reporting underscores what critics have repeatedly pointed out about ad-driven companies like Google and Facebook: their products are built, at their core, to collect data for ad targeting. The privacy implications are as egregious as they were with Google Glass: everything in the visual field of these glasses will likely end up in a facial recognition database and might get exposed publicly. In fact, some college students have already hacked the Ray-Ban Meta glasses to reveal the name, address, and phone number of anyone they look at.