Real-time detection, in this context, does not mean seconds. If a story does not spread, there may be no need to act at all. In practice, rapid response could mean minutes or hours: time enough for an algorithm to detect a suspicious wave of news gathering momentum, potentially from multiple sources, and enough of a window to gather evidence and have it reviewed by humans, who may choose to arrest the wave before it turns into a tsunami.
I know a thing or two about algorithms processing news. I built Google News and ran it for many years. It is my belief that detection is tractable.
I also know that it is probably not a good idea to run anything other than short-term countermeasures based solely on what the algorithm says. It is important to get humans in the loop — both for corporate accountability and to serve as a sanity check. In particular, a human arbiter would be able to do proactive fact-checking. In the above example, the Facebook or Twitter representative could have called the press office of the Holy See and established that the story is false. If there is no obvious person to call, they could check with top news sources and fact-checking sites to get their read on the situation.
Krishna Bharat
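The detection step Bharat sketches above — noticing a story that is both spreading fast and coming from multiple sources, then handing it to humans — can be illustrated with a toy momentum check. This is a hypothetical sketch, not anything from Google News: the thresholds, the `should_flag` function, and the hourly-count heuristic are all illustrative assumptions.

```python
from collections import defaultdict

def should_flag(mentions, min_sources=3, min_growth=2.0):
    """Flag a story cluster for human review.

    mentions: list of (hour, source) tuples for one story cluster.
    Flags when the story appears in at least `min_sources` distinct
    sources AND the latest hour's mention count grew by a factor of
    at least `min_growth` over the previous hour (i.e. it is gathering
    momentum, not just circulating).
    """
    per_hour = defaultdict(int)
    sources = set()
    for hour, source in mentions:
        per_hour[hour] += 1
        sources.add(source)
    if len(sources) < min_sources:
        return False          # single-source chatter: let it be
    hours = sorted(per_hour)
    if len(hours) < 2:
        return False          # no history yet to measure growth
    prev, last = per_hour[hours[-2]], per_hour[hours[-1]]
    return last >= min_growth * prev

# Example: a story picked up by four sources, accelerating in hour 2
wave = [(1, "siteA"), (1, "siteB"), (2, "siteA"), (2, "siteB"),
        (2, "siteC"), (2, "siteD"), (2, "siteC")]
print(should_flag(wave))  # True: 2 mentions in hour 1, 5 in hour 2
```

A real system would cluster articles into stories first and tune thresholds against labeled incidents; the point here is only that "suspicious and gathering momentum" reduces to cheap, checkable signals that can gate a human review queue.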
Still on the subject of fake news and online manipulation: fast detection and prevention doesn't look complicated if you think about it, and the article above comes from Krishna Bharat, the founder of Google News and a former Google employee.
The scale and success of our major platforms made this large-scale assault on truth possible in the first place. They are also best positioned to fix it. They can set up sensors, flip levers, and squash fake news by denying it traffic and revenue.
My concern is whether the leadership in these companies recognizes the moral imperative and has the will to take this on at scale, invest the engineering that is needed, and act with the seriousness that it deserves. Not because they are being disingenuous and it benefits their business — I genuinely believe that is not a factor — but because they may think it’s too hard and don’t want to be held responsible for errors and screw-ups. There is no business imperative to do this and there may be accusations of bias or censorship, so why bother?
And it is high time they acted decisively on this issue. Just today I read that some sites let anyone create and spread their own fake news, and unsurprisingly many are using this to harass people they don't like, primarily foreigners. People in developing countries, who are just discovering the Internet and social media, are particularly vulnerable to misinformation, as reports from Myanmar and South Sudan have shown. It gets even more complicated on messaging services (WhatsApp, for example, is very popular around the world), where false rumors are very difficult to track and debunk. If Silicon Valley tycoons genuinely believe they are making the world a better place, they should (re)start right here.