Algorithms monitor marketplace of ideas
By Fu Tao and William A. Babcock
Algorithms are pervasive in our daily lives. Little of what we accomplish online could be achieved without them.
As evidence, simply thumb through this year’s New York Times articles and you will find coverage of algorithms designed to handle tasks that were not on our radar before. Algorithms customize cosmetics to each consumer’s skin tone. Algorithm-driven chatbots, a low-level application of artificial intelligence, take orders and field inquiries for consumer companies such as Domino’s. Sensors equipped with algorithms detect changes in a driver’s heart rate, eye movement and body temperature to provide “drowsy” warnings.
And of course there are better-known applications of algorithms, such as those search engines use to rank results.
Algorithms have also found their way into journalism and the media industry. Konstantin Dörr, a media researcher at the University of Zurich, calls this practice “algorithmic journalism.” The task of generating automated reports falls to bots. An Internet bot, or simply “bot” (short for software robot), is a software application that performs automated, sometimes repetitive, tasks, often based on artificial intelligence. Some call such software-generated, no-human-intervention stories “robo-journalism.”
In early 2014, the Los Angeles Times posted a story online about an earthquake in the Los Angeles area just eight minutes after the quake struck. The story carried a footnote: “This information comes from the USGS Earthquake Notification Service and this post was created by an algorithm written by the author,” according to Atlantic magazine. The report was generated automatically from a pre-written template by a bot designed by Ken Schwencke, a Los Angeles Times journalist and database producer.
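A minimal sketch in Python suggests how such template-driven generation might work. The field names, template wording and sample alert below are illustrative assumptions, not Quakebot’s actual code:

```python
# Illustrative sketch of template-based quake reporting.
# The template text and field names are assumptions for demonstration,
# not Quakebot's actual code or the USGS data format.

QUAKE_TEMPLATE = (
    "A magnitude {mag} earthquake struck {distance} miles from {place} "
    "at {time}, according to the USGS Earthquake Notification Service. "
    "This post was created by an algorithm."
)

def write_quake_story(event: dict) -> str:
    """Fill the pre-written template with fields from a quake alert."""
    return QUAKE_TEMPLATE.format(
        mag=event["magnitude"],
        distance=event["distance_miles"],
        place=event["nearest_city"],
        time=event["local_time"],
    )

# Hypothetical alert payload, for demonstration only.
alert = {
    "magnitude": 4.4,
    "distance_miles": 6,
    "nearest_city": "Westwood, California",
    "local_time": "6:25 a.m. Monday",
}
print(write_quake_story(alert))
```

The approach is simple: the human journalist writes the sentence structure once, and the bot merely slots verified data into the blanks, which is why such stories can publish within minutes of an event.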
In 2014, the Associated Press announced its use of a news-writing bot called Wordsmith for short U.S. corporate earnings stories. Wordsmith’s algorithm could spot trends in data and choose appropriate words to formulate reports in AP style. Last year, the AP expanded its use of artificial intelligence to its coverage of Minor League Baseball games.
The AP is not alone in journalistic automation. According to Nieman Reports, the New York Times, ProPublica, Forbes, Yahoo, and Oregon Public Broadcasting use algorithms to generate reports on business, sports, education, public safety, and earthquake impacts, respectively.
The New York Times also developed a content marketing bot called Blossom to help its social media editors decide which of the roughly 300 stories it publishes every day might trend on Facebook. In 2016, Heliograf, the Washington Post’s bot, made its debut at the Rio Olympics, churning out scores and schedules.
Writing bots at some major U.S. news organizations include:
News Organization | Launched | Bot’s Name | Purpose            | Developer
Los Angeles Times | 2014     | Quakebot   | Earthquake alerts  | Ken Schwencke
Associated Press  | 2014     | Wordsmith  | Corporate earnings | Automated Insights
New York Times    | 2015     | Blossom    | Content marketing  | The New York Times
Washington Post   | 2016     | Heliograf  | Sports coverage    | The Washington Post
Algorithms per se are supposedly neutral: specified sequences of logical operations, grounded in mathematics and statistics and designed to offer solutions. That said, last year saw social media algorithms go out of control.
Tay, Microsoft’s chatbot, designed with a cute persona to engage American millennials, was expected to learn from conversations over time. Trolls from 4chan, however, deliberately exploited a vulnerability in Tay’s design and taught her to tweet racist and genocidal slurs within hours of her launch, according to Microsoft’s blog. Microsoft shut her down, issued a public apology and promised to keep her offline until the algorithmic problems were solved.
Facebook, which had some 1.94 billion monthly active users worldwide as of the first quarter of this year, has drawn criticism over the algorithm behind Trending, a feature added in 2014 that gives users a personalized list of popular topics. A National Public Radio report noted that some Facebook users complained the Women’s March, a protest involving about 1 million people, never appeared in their Trending topics.
Monitoring claims of fake news
But the most damaging criticism Facebook received during last year’s presidential election concerned trending fake news. A recent Guardian report found that even after Facebook began flagging possible fake news with the help of users and third-party fact-checkers, the initiative proved ineffective. Facebook’s news feed algorithm has also been suspected of suppressing news representing conservative views, though Facebook denies any algorithmic bias.
News aggregators such as Google News and Reddit use algorithms to make customized news recommendations. On Reddit, the factors affecting whether a post reaches the front page include shares, keywords, the number of up-votes and down-votes received, the timing of submission and the number of comments.
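Reddit has published the core of its “hot” ranking, which combines net votes with submission time. A simplified Python sketch of that publicly released formula, ignoring the other factors above and not to be mistaken for Reddit’s production code, shows how the combination works:

```python
from datetime import datetime, timezone
from math import log10

# Reddit's reference epoch (Dec. 8, 2005) expressed in Unix seconds.
REDDIT_EPOCH = 1134028003

def hot_score(upvotes: int, downvotes: int, submitted: datetime) -> float:
    """Simplified 'hot' ranking: net votes count logarithmically
    (the first 10 votes weigh as much as the next 100), while newer
    submissions earn a steadily growing time bonus."""
    net = upvotes - downvotes
    order = log10(max(abs(net), 1))
    sign = 1 if net > 0 else -1 if net < 0 else 0
    seconds = submitted.timestamp() - REDDIT_EPOCH
    return round(sign * order + seconds / 45000, 7)

# Two posts with equal votes: the newer one ranks higher.
older = datetime(2017, 6, 1, 12, 0, tzinfo=timezone.utc)
newer = datetime(2017, 6, 2, 12, 0, tzinfo=timezone.utc)
print(hot_score(100, 10, older) < hot_score(100, 10, newer))  # True
```

The 45,000-second divisor means a post needs roughly 10 times the net votes to outrank an otherwise identical post submitted 12.5 hours later, which is why front pages constantly turn over toward fresh material.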
Still, there are concerns that algorithms amplify polarized opinions and create “filter bubbles,” a term coined by Eli Pariser, CEO of Upworthy, a website for “meaningful” viral content, to describe the “personal ecosystem of information that’s been catered by these algorithms.”
While one of the goals of journalism is to provide a marketplace of ideas, the door to this marketplace increasingly is guarded by an algorithmic doorkeeper. And this raises the question: since human beings create algorithms, isn’t it disingenuous to blame the technology itself rather than its creators?
Who’s watching the watchdog?
In his article “Bias in algorithmic filtering and personalization,” Engin Bozdag, Privacy by Design Lead at Philips in the Netherlands, writes that human beings design algorithms and can manually affect their results. When Facebook fired the human editors who had written descriptions for Trending stories and manually added news to Trending to ensure diversity and inclusiveness, a fake story claiming Fox News host Megyn Kelly had been fired for endorsing Hillary Clinton immediately trended.
But as GJR has reported in the past, fewer and fewer media watchdogs are left monitoring the marketplace of ideas. The New York Times recently eliminated the position of public editor (also called readers’ representative or ombudsman), claiming the position is now superfluous, and other U.S. newspapers have done the same. Nor are any news or press councils, citywide or national, left in America. Few news organizations have media reporters or editors, and most “media critics” have gone the way of the dodo bird. And media organizations that still have media ethics codes are hard pressed even to find them.
With the media’s ethics toolbox all but empty, algorithms, and those who create and monitor them, have become among the very few arbiters of what is right or truthful or responsible or fair in the media. That’s scary.
Last year the National Science and Technology Council released the National Artificial Intelligence Research and Development Strategic Plan, a guide for future artificial intelligence research and development. One of its strategies is to understand and address the ethical, legal and societal implications of AI. The plan warns that “many concerns have been voiced about the susceptibility of data-intensive AI algorithms to error and misuse, and the possible ramifications for gender, age, racial, or economic classes.”
Algorithm-driven AI clearly is the trend, and journalism and the media industry are following — and often leading — it. Accordingly, it is time to bring more ethical consideration to the design and use of algorithms in the news and mass media industry.