Meta's AI Display Glasses Reportedly Share Intimate Videos With Human Moderators (engadget.com)
- Reference: 0180900778
- News link: https://yro.slashdot.org/story/26/03/03/1926214/metas-ai-display-glasses-reportedly-share-intimate-videos-with-human-moderators
- Source link: https://www.engadget.com/ai/metas-ai-display-glasses-reportedly-share-intimate-videos-with-human-moderators-135939855.html
> Users of Meta's AI smart glasses in Europe [1]may be unknowingly sharing intimate video and sensitive financial information with moderators outside of the bloc, according to a report from Sweden's [2]Svenska Dagbladet released last week. Employees in Kenya doing AI "annotation" told the journalists that they've seen people nude, using the toilet and engaging in sexual activity, along with credit card numbers and other sensitive information.
>
> With Meta's Ray-Ban Display and other glasses with AI capabilities, users can record what they're looking at or get answers to questions via a Meta AI assistant. If a wearer wants to make use of that AI, though, they must agree to Meta's terms of service that allow any data captured to be reviewed by humans. That's because Meta's large language models (LLMs) often require people to annotate visual data so that the AI can understand it and build its training models.
>
> This data can end up in places like Nairobi, Kenya, often moderated by underpaid workers. Such data processing is subject to Europe's GDPR rules, which require transparency about how personal data is handled, according to a data protection lawyer cited in the report. However, Svenska Dagbladet's reporters said they needed to jump through some hoops to see Meta's privacy policy for its wearable products. That policy states that either humans or automated systems may review sensitive data, and puts the onus on the user not to share sensitive information.
[1] https://www.engadget.com/ai/metas-ai-display-glasses-reportedly-share-intimate-videos-with-human-moderators-135939855.html
[2] https://www.svd.se/a/K8nrV4/metas-ai-smart-glasses-and-data-privacy-concerns-workers-say-we-see-everything
Meta will say "oops"... (Score:3)
and "We're not doing that anymore."
And nothing else will happen...
Re: (Score:2)
They might pay a "fine" (bribe) to some court system somewhere to get away with it if it gets that far, but nobody will ever end up in jail for as long as the system is corrupt.
Probably left unsaid... (Score:3)
> Employees in Kenya doing AI "annotation" told the journalists that they've seen people nude, using the toilet and engaging in sexual activity, along with credit card numbers and other sensitive information.
Kenyan Employees' Inner Dialog: "These people are idiots."
More seriously, don't these glasses have an easy-to-access, simple-to-use privacy/off/standby button for situations like those? If not, they should.
Re: (Score:2)
What? It is prohibited to turn off your telescreen!
> The telescreen received and transmitted simultaneously. Any sound that Winston made, above the level of a very low whisper, would be picked up by it, moreover, so long as he remained within the field of vision which the metal plaque commanded, he could be seen as well as heard. There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork.
telltale stink (Score:3)
"...That's because Meta's large language models (LLMs) often require people to annotate visual data so that the AI can understand it and build its training models..."
Is that true, though? It's not even clear this holds for LLMs generally. Visual data tends to require the most annotation, but that doesn't mean it's needed "often". And a lot of the annotation demand out there is for self-driving; is Meta even doing that? I'd say this is a corporate lie, an excuse that fits with their desire to do whatever they want.
Also, an AI doesn't need to "understand it" to build "its training models". Ignoring the ambiguity of "it" here: annotation tells an AI what the data is. Data may be annotated as part of training, or to build "training data", but there are no "training models"; models are the result of training. The person who wrote this is not a technical person, more likely a spokesman paid to tell the corporation's lies.
Good! (Score:2)
Anybody who signs up for this kind of service - never mind actually paying for it - deserves any bad consequences. It's a variety of natural selection.
Re:Good! (Score:4, Insightful)
If only the people who wear these glasses would record only themselves, and nobody else...
Re: (Score:1)
Similar to an Amazon Alexa device. They made them cheap for a reason: so that Amazon could get an always-on microphone into customer homes.
Re: (Score:2)
Frontier sent me two of those pieces of shit for free when I had them for DSL. They immediately went in the trash.
Gee golly jerwiilickers! Say it ain't so! (Score:2)
Who would have thunk a camera strapped to your face, recording everything it sees and sending the video to a server, would actually record video and send it to a server? Furthermore, color me surprised that the people mechanical-Turking Meta's "AI" are underpaid serfs somewhere in the third world. Who would have guessed any of that? Said no one with more than a minimal understanding of how things work.
If anything is ever shared online, on a private or public network, you can be assured that someone, somewhere will see it.
Hold the phone! (Score:5, Funny)
Meta is violating privacy??? Whaaaaaaaat?????????
Re: (Score:2)
When you made a Facebook account and gave them all the data from every minute of your life, were you expecting privacy?
Re: (Score:1)
Next thing you know, they'll admit that Facebook mobile app listens in to people when they aren't even using Facebook!