Experiments showed that this change would impede the distribution of hateful, polarizing, and violence-inciting content in people’s News Feeds. But Zuckerberg
rejected this intervention that could have reduced the risk of violence in the 2020 election, Haugen's SEC filing says. An internal message characterizing Zuckerberg's reasoning says he wanted to avoid new features that would get in the way of "meaningful social interactions". But according to Facebook's definition, its employees say, engagement is considered "meaningful" even when it entails bullying, hate speech, and reshares of harmful content.

This episode, like Facebook's response to the incitement that proliferated between the election and January 6, reflects a fundamental problem with the platform. Facebook's megascale allows the company to influence the speech and thought patterns of billions of people. What the world is seeing now, through the window provided by reams of internal documents, is that Facebook catalogs and studies the harm it inflicts on people. And then it keeps harming people anyway.
Zuckerberg’s positioning of Facebook’s role in the insurrection is odd. He lumps his company in with traditional media organizations—something he’s ordinarily loath to do, lest the platform be expected to take more responsibility for the quality of the content that appears on it—and suggests that Facebook did more, and did better, than journalism outlets in its response to January 6. What he fails to say is that journalism outlets would never be in the position to help investigators this way, because insurrectionists don’t typically use newspapers and magazines to recruit people for coups.
Adrienne LaFrance
A subject I have not touched for some time, despite a growing number of investigations and revelations, partly because I myself have become almost completely uninterested in Facebook as a social network, and partly because none of these revelations have had tangible effects on the company and its behavior. Quite the opposite: Zuckerberg apparently thinks that a rebranding will be enough to wash away any stains on his company's image. At this point it seems that the only remedy to Facebook's malignancy could be tough regulation in the US, its home country, as neither regulation abroad, nor vague and unenforceable privacy standards, nor Apple's containment measures on iOS, nor employee criticism has managed to change its ways. And yet, hoping for the divided US Congress to take firm action seems about as foolish as expecting Mark Zuckerberg to suddenly grow a conscience.
Everything distasteful about Facebook, including its unmanageable size, flows from its business model, which is infinitely scalable. Yes, we should break Facebook up, but we also must break its business model of ever more granular user surveillance. Without the urge to record, shape, and monetize users’ every thought and click, the platform’s incentives change enormously — and perhaps for the better. But Facebook is not going to give up its cash cow willingly. Only sweeping policy changes can do that — a national data privacy law, regulators that can audit algorithms and police the data trade, wholesale prohibitions on the collection and sale of certain types of personal information, outlawing targeted advertising, nationalizing some tech firms and running them as public utilities.
Jacob Silverman
Looking back on its history, I think the change that snowballed into this huge mess was introducing the Timeline and making everyone's posts public by default. This led to the massive distribution of fringe content across people's feeds, a dynamic the company later monetized with ads and is now unwilling to roll back because doing so would threaten its lucrative business.
Facebook could ban reshares. It could consistently enforce its policies regardless of a user’s political power. It could choose to optimize its platform for safety and quality rather than for growth. It could tweak its algorithm to prevent widespread distribution of harmful content. Facebook could create a transparent dashboard so that all of its users can see what’s going viral in real time. It could make public its rules for how frequently groups can post and how quickly they can grow. It could also automatically throttle groups when they’re growing too fast, and cap the rate of virality for content that’s spreading too quickly.
Facebook could shift the burden of proof toward people and communities to demonstrate that they’re good actors—and treat reach as a privilege, not a right. Facebook could say that its platform is not for everyone. It could sound an alarm for those who wander into the most dangerous corners of Facebook, and those who encounter disproportionately high levels of harmful content. It could hold its employees accountable for preventing users from finding these too-harmful versions of the platform, thereby preventing those versions from existing.
It could do all of these things. But it doesn’t.
Adrienne LaFrance