Your Internet Is About to Get a Lot More Censored
(ANTIMEDIA) — The term “fake news” sits at the heart of a raging battle between political dynasties, media conglomerates, and alternative Internet news sites. In the aftermath of the 2016 presidential election, which many establishment pundits and operatives argue was heavily affected by online propaganda and ‘weaponized’ information, few people can say with any authority what constitutes “fake news,” but that hasn’t stopped mainstream news outlets and tech giants from attempting to censor it.
The newest effort sees an alliance between the world’s two most powerful tech companies, which arguably control the vast majority of what people see online. Google and Facebook have partnered up with 75 other global news organizations and tech firms, including The Washington Post, to combat online misinformation with the ‘Trust Project.’
The ‘Trust Project’ will attempt to establish a ‘transparency’ system of rating and tagging the trustworthiness of an article and the news source that produced it. This system will feature “trust indicators” developed for each online platform.
According to the project leader, Sally Lehrman of Santa Clara University’s Markkula Center for Applied Ethics, the project will also include information about funding and details about specific journalists and their sources and references.
The project will involve a far-reaching collaboration between different companies. Launch partners include The Economist, The Globe and Mail, the Independent Journal Review, Mic, Italy’s La Repubblica and La Stampa, and The Washington Post. Tech giants like Facebook, Google, Twitter, and Microsoft will integrate trust indicators using “machine-readable signals” based on the Schema.org vocabulary and tailored to each platform. The plan is to eventually extend the effort to smaller news websites and non-profits by creating indicators for WordPress and Drupal.
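In practice, “machine-readable signals” of this kind are typically embedded in a page as schema.org structured data. The following is a hypothetical sketch of what such markup might look like for a news article; the property names (ethicsPolicy, diversityPolicy, correctionsPolicy, actionableFeedbackPolicy on NewsMediaOrganization) come from the schema.org vocabulary, but the exact markup each platform actually consumes is an assumption, and all names and URLs here are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example headline",
  "author": {
    "@type": "Person",
    "name": "Jane Reporter",
    "sameAs": "https://example-news.com/staff/jane-reporter"
  },
  "publisher": {
    "@type": "NewsMediaOrganization",
    "name": "Example News",
    "ethicsPolicy": "https://example-news.com/ethics",
    "diversityPolicy": "https://example-news.com/diversity",
    "correctionsPolicy": "https://example-news.com/corrections",
    "actionableFeedbackPolicy": "https://example-news.com/feedback"
  }
}
```

A crawler or platform could read these URLs to surface, say, a publisher’s corrections policy alongside an article, which maps loosely onto indicators like Best Practices and Actionable Feedback described below.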
According to the project’s website, there is a core group of 8 trust indicators that were whittled down from an initial 37. These core indicators are Best Practices, Author/Reporter Expertise, Type of Work, Citations and References, Methods, Locally Sourced, Diverse Voices, and Actionable Feedback.
Lehrman says her work on the ‘Trust Project’ dates back to 2012, when she asked a machine learning specialist at Twitter and Richard Gingras, head of Google News, whether it was possible to create ethical algorithms, that is, algorithms that could be used toward ethical ends. They said yes. Gingras, together with Craig Newmark, the founder of Craigslist, provided early financial and consulting assistance to the project.
Greg Sterling, a contributing editor for the Search Engine Land blog, believes the project may be too ambitious.
“Readers should be able to see what’s behind the labeling scheme, but they should be able to tell at a glance whether an item is from a credible source, not have to spend time evaluating it based on a range of factors that may be obscure to them,” he wrote.
On its homepage, the ‘Trust Project’ poses a question: “We all think we can tell the difference between opinion, advertising and accurate news. But how do we really know?”
A more pointed question may be: can an algorithm designed to promote ethical reporting be trusted when the gatekeepers deploying it already have track records of tweaking their algorithms to restrict the reach of independent media sites? That State Department-friendly outlets like The Washington Post are joining forces with tech giants already accused of algorithmic censorship is neither surprising nor inspiring. Such an endeavor may not garner much “trust” from independent publishers who already believe the “fake news” narrative has been weaponized to destabilize the alternative media landscape.