In other words, the company is using its first consumer AI device to build a massive stockpile of data that could be used to train ever-more powerful generations of AI models. The only way to “opt out” is simply not to use Meta’s multimodal AI features in the first place.
The implications are concerning because Ray-Ban Meta users may not understand they’re giving Meta tons of images – perhaps showing the inside of their homes, loved ones, or personal files – to train its new AI models. Meta’s spokespeople tell me this is clear in the Ray-Ban Meta user interface, but the company’s executives either initially didn’t know or didn’t want to share these details with TechCrunch. We already knew Meta trains its Llama AI models on everything Americans post publicly on Instagram and Facebook. But now, Meta has expanded its definition of “publicly available data” to anything people look at through its smart glasses and ask its AI chatbot to analyze.
Maxwell Zeff, TechCrunch
I’ve been vaguely following the news around these new smart glasses from Meta to see how they would fare compared to Google’s failed attempt with Glass a decade ago, and Apple’s massive VR helmet. This reporting underscores what many have repeatedly observed about ad-driven companies like Google and Facebook: their products exist, at their core, to collect data for ad targeting. The privacy implications are as egregious as with Google Glass: everything in the visual field of these glasses will likely end up in a facial recognition database and might get exposed publicly – in fact, some college students have already paired the Ray-Ban Meta glasses with facial recognition software to reveal the name, address, and phone number of anyone they looked at.
And, of course, this is the reason Meta keeps up the charade that ‘EU regulatory uncertainty’ is preventing it from releasing this-and-that AI feature. EU regulations are pretty clear on the subject of privacy and consent, but Meta knows perfectly well that its products would not meet these requirements and is instead trying to skirt the rules and obfuscate the issue, probably lobbying aggressively for changes behind the scenes. Ironically, even in the laxer and more fragmented US regulatory landscape, some of Facebook’s data collection practices went too far:
Meta just paid the state of Texas $1.4 billion to settle a court case related to the company’s use of facial recognition software. The case concerned a Facebook feature rolled out in 2011 called “Tag Suggestions”. Facebook eventually made the feature explicitly opt-in and, in 2021, shut it down entirely, deleting the facial recognition data of more than a billion people. Notably, several of Meta AI’s image features are not being released in Texas.