Automated journalism


Project log

Andreas Graefe
added 7 research items
In recent years, the use of algorithms to automatically generate news from structured data has shaken up the journalism industry, especially since the Associated Press, one of the world's largest and most well-established news organizations, started to automate the production of its quarterly corporate earnings reports. Once developed, algorithms can not only create thousands of news stories for a particular topic but also do so more quickly, cheaply, and potentially with fewer errors than any human journalist. Unsurprisingly, this development has fueled journalists' fears that automated content production will eventually eliminate newsroom jobs, while at the same time scholars and practitioners see the technology's potential to improve news quality. This guide summarizes recent research on the topic and thereby provides an overview of the current state of automated journalism, discusses key questions and potential implications of its adoption, and suggests avenues for future research.
We conducted two experiments to study people's prior expectations and actual perceptions of automated and human-written news. We found that, first, participants expected more from human-written news in terms of readability and quality, but not in terms of credibility. Second, participants' expectations of quality were rarely met. Third, when participants saw only one article, differences in the perception of automated and human-written articles were small. However, when presented with two articles at once, participants preferred human-written news for readability but automated news for credibility. These results contest previous claims that expectation adjustment explains differences in perceptions of human-written and automated news.
We conducted an online experiment to study people's perception of automated, computer-written news. Using a 2×2×2 design, we varied the article topic (sports, finance; within-subjects) and both the articles' actual and declared source (human-written, computer-written; between-subjects). A total of 986 subjects rated two articles on credibility, readability, and journalistic expertise. Varying the declared source had small but consistent effects: subjects always rated articles declared as human-written more favorably, regardless of the actual source. Varying the actual source had larger effects: subjects rated computer-written articles as more credible and higher in journalistic expertise, but less readable. Across topics, subjects' perceptions did not differ. The results provide conservative estimates for the favorability of computer-written news, which will likely further increase over time, and endorse prior calls for establishing an ethics of computer-written news.
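To make the 2×2×2 design concrete, the factor structure can be enumerated as follows. This is a minimal sketch, not code from the study; the variable names and labels are illustrative. Topic varies within subjects, while actual and declared source vary between subjects, so each subject belongs to one of four groups and rates one article per topic.

```python
from itertools import product

# Illustrative labels for the three binary factors described in the abstract.
topics = ["sports", "finance"]            # within-subjects factor
actual_sources = ["human", "computer"]    # between-subjects factor
declared_sources = ["human", "computer"]  # between-subjects factor

# Each subject is assigned one (actual, declared) combination,
# yielding four between-subjects groups.
groups = list(product(actual_sources, declared_sources))

# Every group rates one article per topic, giving eight design cells overall.
cells = [(topic, actual, declared)
         for (actual, declared) in groups
         for topic in topics]

print(len(groups))  # 4 between-subjects groups
print(len(cells))   # 8 design cells
```

Crossing actual and declared source is what lets the study separate the two effects reported above: holding the actual text constant while changing only its label isolates the effect of the declared source, and vice versa.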