With reporting by Tamar Wilner

The Trust Project

Based at California’s Santa Clara University, The Trust Project is led by journalist Sally Lehrman, who conceived the effort with Google News executive Richard Gingras in 2014. Their idea: to develop digital tools and strategies that signal trustworthiness and ethical standards in reporting—for audiences, but also for news distribution platforms. Funded by Craigslist founder Craig Newmark, Google, and the Markkula Family Fund, the project collaborates with nearly 70 news organizations and media institutions worldwide, from The New York Times and Mother Jones to the American Press Institute and Poynter.

Working from the project’s interviews with the public, senior news executives developed and defined a list of 37 “trust indicators,” out of which they prioritized eight—including, among others, correction notices, ethics and diversity policies, conflict of interest disclosures, and labels to distinguish news from opinion and sponsored content. The project is working with news organizations to integrate the indicators into content management systems and to build plug-ins that present them to the public, possibly displaying check marks showing which indicators a given news outlet fulfills. Lehrman says the system will also provide signals to search and social media platforms, including Google, Bing, and Facebook, which have agreed to use the indicators to surface high-quality news. Lehrman is also working with the Engaging News Project at the University of Texas at Austin to test readers’ responses to selected trust indicators on mock-ups of potential user interfaces.
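The mechanics are easy to picture. Below is a minimal Python sketch of what a plug-in reading machine-readable trust indicators might do once a CMS has embedded them in a page; the field names and data model here are hypothetical illustrations, not The Trust Project’s published schema.

```python
# A hypothetical page record with embedded trust indicators.
# Field names are invented for illustration; they are not
# The Trust Project's actual schema.
page_metadata = {
    "outlet": "Example Daily",
    "trust_indicators": {
        "corrections_policy": True,            # publishes correction notices
        "ethics_policy": True,
        "diversity_policy": False,
        "conflict_of_interest_disclosure": True,
        "content_type_label": "news",          # news vs. opinion vs. sponsored
    },
}

def fulfilled_indicators(metadata: dict) -> list[str]:
    """Return the boolean indicators this outlet fulfills, e.g. to
    render check marks for readers or to pass along as a quality
    signal to a search or social platform."""
    indicators = metadata["trust_indicators"]
    return [name for name, value in indicators.items() if value is True]

print(fulfilled_indicators(page_metadata))
# ['corrections_policy', 'ethics_policy', 'conflict_of_interest_disclosure']
```

Because the indicators would live in the page markup itself, platforms and plug-ins alike could read them directly rather than inferring an outlet’s practices from the outside.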

Factmata

With Factmata, researchers from the University of Sheffield and University College London are taking an automated approach to statistical fact-checking and claim detection. Automation, the Factmata team believes, has the potential to keep pace with the speed of the Internet, minimize errors, and cut the expense and time of manual verification by independent fact-checkers and organizations like PolitiFact. For now, the project, which is backed by Google’s Digital News Initiative, is focusing solely on statistical claims—those that contain an entity (like a person or place), a statistical property (like population or the unemployment rate), and a numerical value for that property.

“The current administration in the U.S. has invested $4.5 billion into the energy sector” and “Greenhouse gases in the U.K. have fallen by 6.2% this year alone” are both examples of statistical claims, which are often used to support political statements. Factmata is building a widget that uses natural language processing to identify such claims and determine whether they are “fact-checkable,” meaning they can be validated against reliable databases. Last year, Factmata co-founder Andreas Vlachos and his Ph.D. student James Thorne built a prototype extension of their research in numerical fact-checking. The prototype was among more than 80 entries from around the world in a “fast and furious fact check challenge” put on by investigative journalist Diane Francis and HeroX, a crowdsourcing technology provider in the U.K., and was one of three finalists. However, after being asked to check 50 claims, none of the competitors met the contest’s goal of 80 percent accuracy—an indication of the technology’s current limitations.
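To make the entity/property/value formulation concrete, here is a toy Python sketch of checking an already-extracted statistical claim against a reference table. The extraction step is omitted and the reference data is invented, so this stands in for, rather than reproduces, Factmata’s system.

```python
# A toy illustration of statistical claim checking in the spirit of
# Factmata's entity/property/value formulation. The extraction step
# and the reference data are stand-ins, not Factmata's actual system.
from dataclasses import dataclass

@dataclass
class StatClaim:
    entity: str          # e.g. "U.K."
    stat_property: str   # e.g. "annual greenhouse gas change (%)"
    value: float         # e.g. -6.2

# Stand-in for the reliable databases the widget would consult.
REFERENCE = {
    ("U.K.", "annual greenhouse gas change (%)"): -6.2,
}

def check(claim: StatClaim, tolerance: float = 0.05) -> str:
    """Compare the claimed value with the reference value, allowing a
    small relative tolerance for rounding in the original statement."""
    key = (claim.entity, claim.stat_property)
    if key not in REFERENCE:
        return "not fact-checkable against this database"
    true_value = REFERENCE[key]
    if abs(claim.value - true_value) <= abs(true_value) * tolerance:
        return "supported"
    return "contradicted"

print(check(StatClaim("U.K.", "annual greenhouse gas change (%)", -6.2)))
# supported
```

The hard part, as the contest results suggest, is not the final comparison but reliably mapping free-form language onto the right entity, property, and database entry.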

RumorGauge

RumorGauge began in 2013 as an effort to predict the veracity of rumors on Twitter, using computational models that consider the content of tweets and how viral they become, as well as characteristics of the users involved. For his Ph.D., Soroush Vosoughi, currently a postdoctoral associate at MIT’s Lab for Social Machines, programmed RumorGauge to evaluate rumors associated with several events, such as the 2013 Boston Marathon bombings and the Ferguson riots in 2014.

Vosoughi is now focusing his work on political rumors, using filter bubbles as a starting point. Because social media users are embedded in webs of like-minded individuals, the same people tend to share fake political news again and again. So when a rumor is shared, RumorGauge examines the sharers’ histories—what they have shared in the past, and whether those items turned out to be true or false. It also considers how long each account has been active, since accounts are often set up for the express purpose of spreading misinformation. Integrating RumorGauge into the social media platforms themselves would be the most effective route, but that isn’t likely to happen. Instead, Vosoughi is considering a website where users could check rumors they’ve seen on social media. Therein lies a problem common to many projects tackling misinformation: before someone even visits such a website, they have to not only doubt the truth of something they’ve seen on social media, but know such a website exists to check that rumor—and trust it to deliver sound information.
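As a rough illustration of how such signals might combine, the Python sketch below scores a rumor from the sharers’ track records, their account ages, and the rumor’s virality. The features and weights are invented for illustration; they are not taken from Vosoughi’s models.

```python
# A simplified sketch of the kinds of signals a RumorGauge-style model
# weighs: sharer history, account age, and virality. The features and
# weights below are illustrative, not Vosoughi's actual model.
from dataclasses import dataclass

@dataclass
class Sharer:
    account_age_days: int
    past_shares_true: int    # past shared rumors that proved true
    past_shares_false: int   # past shared rumors that proved false

def sharer_reliability(s: Sharer) -> float:
    """Fraction of a user's past shares that proved true, discounted
    for very young accounts, which are often created to spread fakes."""
    total = s.past_shares_true + s.past_shares_false
    history = s.past_shares_true / total if total else 0.5  # no history: neutral
    age_factor = min(s.account_age_days / 365, 1.0)         # ramps up over a year
    return history * (0.5 + 0.5 * age_factor)

def rumor_score(sharers: list[Sharer], virality: float) -> float:
    """Blend average sharer reliability with a virality signal into a
    crude 0-to-1 veracity score; a real model would learn these weights."""
    avg = sum(sharer_reliability(s) for s in sharers) / len(sharers)
    return 0.8 * avg + 0.2 * (1.0 - virality)

# A young account with a bad track record and an older, reliable one.
sharers = [Sharer(30, 1, 9), Sharer(2000, 40, 5)]
print(f"veracity score: {rumor_score(sharers, virality=0.9):.2f}")
```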

First Draft

Formed in 2015 to address challenges relating to truth and trust in the digital age, First Draft trains journalists and others in finding and verifying content sourced from the social Web, providing resources, case studies, and best-practice recommendations. Last fall, the nonprofit launched the First Draft Partner Network, enlisting newsrooms, human rights organizations, and technology companies from around the globe to develop solutions for improving the ethical sourcing, verification, and reporting of stories that emerge online. Core partners include, among others, The New York Times, the Australian Broadcasting Corporation, the Associated Press, Facebook, Twitter, Amnesty International, YouTube, and ProPublica.

Among the initiatives to come out of the partner network is CrossCheck, a real-time collaborative verification tool. It is hosted on Check, a platform designed by Meedan—a company focused on global journalism and translation, and a founding partner of the network. CrossCheck lets news organizations establish and share common verification tasks to streamline their work: rather than dozens of newsrooms independently verifying whether a single photo is fake, one person does it quickly and shares the result with the network, freeing the others to focus on reporting and telling stories. During the French elections, CrossCheck is being tested by 35 newsrooms in France, including Agence France-Presse, Le Monde, and BuzzFeed, with the information to be verified coming from questions submitted by the public and from participating newsrooms monitoring Google Trends. All questions will be posted on a dedicated CrossCheck website, newsrooms will prepare reports on verified information for their own audiences, and Facebook will support the promotion of CrossCheck articles and resources on its platform.
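The workflow CrossCheck enables can be pictured as a shared task queue. The Python sketch below is an invented data model illustrating the verify-once, share-everywhere idea; it is not Check’s or Meedan’s actual API.

```python
# A toy sketch of the shared-task idea behind CrossCheck: one newsroom
# verifies an item and every partner sees the result instead of
# re-checking it. The data model is invented for illustration.
tasks: dict[str, dict] = {}  # item_id -> task record, shared by all partners

def open_task(item_id: str, description: str) -> None:
    """Register an item (e.g. a viral photo) that needs verification."""
    tasks.setdefault(item_id, {"description": description,
                               "status": "unverified",
                               "verdict": None,
                               "verified_by": None})

def verify(item_id: str, newsroom: str, verdict: str) -> None:
    """The first newsroom to finish records the verdict for everyone."""
    task = tasks[item_id]
    if task["status"] == "verified":
        return  # already done elsewhere; no duplicate work
    task.update(status="verified", verdict=verdict, verified_by=newsroom)

open_task("photo-123", "Viral photo said to show ballot stuffing")
verify("photo-123", "AFP", "fake: image predates the election")
print(tasks["photo-123"]["verdict"])
```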

Due to an editing error, Nieman Reports failed to acknowledge Tamar Wilner’s reporting when this story was first published.
