There is an undercurrent of fatalism in some of the responses to the publication of this study. “Man, if you expect Facebook to do anything other than shove a live rattlesnake up your arse in pursuit of profit, you’re a naive child.” I don’t agree with that. We should expect more, demand more, hope for more from those who act as custodians of our data, whether the law requires it or not. (European data protection laws are considerably more constraining than those in the US, in my opinion correctly, but acting only as decently as the law requires is the bare minimum, and we should ask for better.) But I honestly don’t see the difference between what Facebook did and what Target did. Yes, someone with depression could be adversely affected (perhaps very severely) by Facebook making their main channel of friendly communication markedly less friendly. But consider if the pregnant woman who hadn’t told her father had had a miscarriage, and then received a book of baby milk vouchers in the mail.
Stuart Langridge
In case you didn’t hear about it, Facebook recently published a research paper based on careful manipulation of the news feed, investigating how the mood of the stories in the feed influences the emotions of users. Its publication unleashed a seemingly endless wave of outrage on the internet. Personally, I don’t see much reason for the uproar; in our current media culture we are exposed to high levels of manipulation in the news every day, constantly shaping our opinions about politics, science and a number of other subjects. The fact that our emotions are influenced by what people close to us are experiencing is simply a sign of human empathy – people would likely react the same way if they found out about the sad or happy events through other channels, out of Facebook’s reach. So I largely agree with the article linked above, with one exception: I wouldn’t be as eager to use the ‘happiness button’ as the author seems to be.