Over the past year, there has been much hue and cry about Facebook's fake news problem. The company deferred dealing with it, first by saying that a better machine-learning model would fix the problem and then by saying it would rely on third-party fact checkers to flag "disputed" stories when they are shared. Both of these ideas are OK, but they are missing one crucial ingredient. That ingredient, as Charlton Heston screams in Soylent Green, is people.
Economist Brad DeLong has been saying for a while that robots may take over many jobs, but there are some things robots cannot do alone. Humans will always be needed to make decisions that require a nuanced understanding of how culture works, especially in political and social debates where context is everything. An algorithm might be able to learn some of the signs of fake news—certain hashtags perhaps, or a viral reach that starts with shares happening at bot-like speed. But a human is always going to be needed at some point to determine whether those signs point to fake news or real news that's blowing up organically because it's actually important. And these humans need to be well-trained in media analysis themselves, able to spot hoaxes and lies better than an average reader.
In short, Facebook needs a team of trained editors. But wait, you are saying. Facebook already had a group of editors, and the company fired them earlier this year. So obviously human workers couldn't solve the problem, right? Wrong. Very few of Facebook's editors were highly experienced, and none of them were full-time employees. They were contract workers, treated like outsiders at Facebook and given very little in-depth training or decision-making power. Not surprisingly, they grew disgruntled with their work, and a few who had been fired talked about Facebook's slapdash editorial policies in a tell-all with Gizmodo. The point is, Facebook has never made an honest, concerted effort to create an internal team of humans devoted to making the News Feed a good experience for users.