By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.
The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.
Karen Hao
“It seems like the ‘responsible AI’ framing is completely subjective to what a company decides it wants to care about. It’s like, ‘We’ll make up the terms and then we’ll follow them,’” says Ellery Roberts Biddle, the editorial director of Ranking Digital Rights, a nonprofit that studies the impact of tech companies on human rights. “I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”
Evergreen conclusion about Facebook’s ultimate motives and their impact on society. I’m sharing the article mostly for the image below and its caption.
Misinformation and hate speech constantly evolve. New falsehoods spring up; new people and groups become targets. To catch things before they go viral, content-moderation models must be able to identify new unwanted content with high accuracy. But machine-learning models do not work that way. An algorithm that has learned to recognize Holocaust denial can’t immediately spot, say, Rohingya genocide denial. It must be trained on thousands, often even millions, of examples of a new type of content before learning to filter it out. Even then, users can quickly learn to outwit the model by doing things like changing the wording of a post or replacing incendiary phrases with euphemisms, making their message illegible to the AI while still obvious to a human. This is why new conspiracy theories can rapidly spiral out of control, and partly why, even after such content is banned, forms of it can persist on the platform.
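To make that brittleness concrete, here is a minimal toy sketch (invented posts and labels, nothing to do with Facebook’s actual models): a bag-of-words classifier trained on a few examples of one phrasing has almost nothing to latch onto once the same message is reworded.

```python
# Toy illustration only: a tiny bag-of-words classifier standing in for a
# content-moderation model. All posts and labels are invented for this sketch.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data: the model only ever sees one phrasing of the unwanted content.
posts = [
    "the election was stolen, rise up and fight",   # violating
    "they stole the election from us, fight back",  # violating
    "great recipe for banana bread this weekend",   # benign
    "looking forward to the game tonight",          # benign
]
labels = [1, 1, 0, 0]  # 1 = violating, 0 = benign

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The same message, reworded with euphemisms the model has never seen.
# Most of the decisive tokens ("stolen", "stole", "fight") are absent,
# so the classifier has little signal to work with.
reworded = "the big contest was rigged, time to take our country back"
print(model.predict_proba([reworded]))
```

The gap in this toy example is the same one the article describes at scale: until the model is retrained on many examples of the new phrasing, the reworded content simply isn’t recognized.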
In his New York Times profile, Schroepfer named these limitations of the company’s content-moderation strategy.
“Every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and are thus not caught,” wrote the Times. “It’s never going to go to zero,” Schroepfer told the publication.
This section fits well with another report about the rampant misinformation and political propaganda outside the United States – AI models trained on English content will evidently have a hard time identifying similar problems in other languages and cultures, even after translation.
Update: less than a month later:
The head of Facebook's Responsible AI team, @jquinonero — the subject of that brutal @_KarenHao piece in the MIT Tech Review — announced internally that he is leaving after 9 years at the company, citing burnout. In his post, he says he'll be taking a 3 month internet sabbatical.
— Ryan Mac🙃 (@RMac18) June 1, 2021