An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to
engage a child in conversations that are romantic or sensual, generate false medical information and help users argue that Black people are dumber than white people. These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company’s social-media platforms.
Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.
Jeff Horwitz
Despicable, even by Facebook’s abysmal standards. I was browsing my blog to link to examples of their horrible past behaviors, and the list just kept getting longer and longer!

The reactions from some members of Congress were swift and outraged, with calls for investigations into Meta’s internal policies. I wouldn’t hold my breath for concrete measures though; Congressional hearings on various Big Tech topics have become little more than elaborate shows of mock action, while representatives cash in on contributions and lobbying.
A more interesting angle of attack would be the issue of who is liable for such content: until now, Big Tech has benefited from Section 230 protections for content posted by users on its platforms. Would generative AI fall under the same protection? Personally, I would argue not, since there’s no human behind the strings of words, and the models outputting the text are controlled by the company itself and tweaked by its engineers. But who knows how the American legal system would rule if it’s ever presented with this challenge? And you can almost be certain that companies would argue in their defense that chatbots are ‘persons’ too, entitled to the same rights and liberties…