Maybe I’m desperate for trustworthy news, and for progress in that direction, but some possibly good news surfaced recently that I missed on first reading.
First, yes, I’m very much, and only, a news consumer, and I want to respect the boundary between consumer and news professional.
I just want news I can trust.
Several months ago I stumbled on a post about The Trust Project, an effort to define standards for ethical conduct (trustworthiness) and other signals of “quality” in news posts.
The deal is that a news outlet could tag news articles with code indicating its code of ethics and related accountability, a way for it to commit to trustworthy behavior.
Beyond that, an article could be tagged as original, as verified, as satire, or as not original but adding value through interpretation, and so on.
That is, the Trust Project looks for standardized signals that both search engines and audiences can identify. (For example, a bio is present and tagged.) This way, the signals are transparent, letting audiences make decisions for themselves rather than just trusting the algorithms.
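To make the idea concrete, here is a minimal sketch in Python of what such machine-readable signals might look like, and how a search engine or reader app could check for them. The field names and URLs are hypothetical illustrations, not the Trust Project’s actual markup.

```python
# Hypothetical sketch of machine-readable trust signals attached to an
# article. Field names and URLs are invented for illustration; they are
# not the Trust Project's actual schema.

article = {
    "headline": "City council approves new transit plan",
    "workType": "original reporting",  # vs. "satire", "opinion", "analysis"
    "publisher": {
        "name": "Example Gazette",
        "ethicsPolicy": "https://example.com/ethics",
        "correctionsPolicy": "https://example.com/corrections",
    },
    "author": {
        "name": "Jane Reporter",
        "bio": "https://example.com/staff/jane-reporter",  # bio present and tagged
    },
}

def has_trust_signals(a):
    """Return True if the basic trust indicators are present and non-empty."""
    publisher = a.get("publisher", {})
    author = a.get("author", {})
    return bool(publisher.get("ethicsPolicy")) and bool(author.get("bio"))

print(has_trust_signals(article))  # True for this example
```

The point is not the particular fields but that the signals are standardized and machine-readable, so the same check works across outlets.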
Big trust question: if a news org doesn’t declare trustworthiness, what are they telling us?
Jeff Jarvis summarizes the Project really well, and suggests that news aggregators like Google, Facebook, Apple, Twitter, and others could rank articles tagged as trustworthy more highly, or maybe just omit articles not so tagged. That can be done programmatically, with people, or with some combination of the two.
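As a sketch of what doing this programmatically might mean, here is hypothetical Python that boosts trust-tagged articles in a ranking, or omits untagged ones entirely. The field names and the boost weight are invented for illustration; a real aggregator’s ranking would be far richer.

```python
# Hypothetical sketch: surface trust-tagged articles first, or filter
# untagged ones out entirely. Fields and weights are invented for
# illustration, not any aggregator's actual ranking.

def rank(articles, omit_untagged=False):
    """Sort by relevance, boosting articles that carry a trust tag."""
    pool = [a for a in articles if a.get("trust_tagged") or not omit_untagged]
    return sorted(
        pool,
        key=lambda a: a["relevance"] + (0.5 if a.get("trust_tagged") else 0.0),
        reverse=True,
    )

articles = [
    {"id": "a", "relevance": 0.9},                        # untagged
    {"id": "b", "relevance": 0.6, "trust_tagged": True},  # tagged
]
print([a["id"] for a in rank(articles)])                      # ['b', 'a']
print([a["id"] for a in rank(articles, omit_untagged=True)])  # ['b']
```

Either policy, boosting or omitting, turns a publisher’s declared trustworthiness into something the distribution layer can act on.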
Recently, Ken Doctor surfaced comments from David Gehring from The Guardian which give me more hope.
The deal is that reporting tagged as original and trustworthy is what advertisers regard as “premium,” and they’re willing to pay more for it.
What if Google provided a persistent tag to be associated with any article originating with one of those 60,000 publishers? Those include thousands of legacy newspaper and magazine brands, but also the digital news startups that emphasize original content creation as well. As programmatic trading systems matched targetable content with advertisers, that apparatus could differentiate “premium” from “non-premium” audiences. Further, such premium content could still be found by category, like tech, sports, or health, increasing its value. Importantly, such tags wouldn’t only accompany articles in Google News itself, but on all news found throughout Google, including web search.
In all these efforts, the devil’s in the details. For example, what happens if a news org commits to a code of ethics, and then avoids accountability for lapses in trustworthy behavior?
How do we get trustworthy people who might call foul regarding a deceptive article, or a flat-out deceptive fake news site?
Some hope comes from two efforts building networks to verify video and related material via social media, a form of citizen journalism. That might in itself be useful regarding fake nonprofits running scams.
Along these lines, I’m impressed by the First Draft Coalition, which has the support of Google Labs, and that’s a big deal.
Imagine the following situation: you’re sitting at your computer or looking at your mobile phone, when suddenly you come across a powerful first-person video on YouTube, or a visceral image on Twitter, depicting a breaking news event. You want to share it, but it’s coming from a source or a person you’ve never heard of before so you find yourself wondering… “is this real?”
Also, check out the work done by Storyful, for Google Newswire and Facebook Newswire.
When I see or hear news coverage that features politicians and others lying to me, my brain hurts. Maybe I’m overly optimistic, but I think we can help the media do better by getting behind efforts like the ones I’ve reported on here. If we don’t do that, we’re saying it’s OK to lie to us.