How topics are covered in the news frames public debates and thus has a profound impact on collective decision making. News may be subtly biased through specific word choices or framing, or through the intentional omission or misrepresentation of details. Examples include the terms “illegal aliens” and “undocumented immigrants” in coverage of immigration-related topics. News authors can also bias coverage by including or omitting specific information to support a certain perspective on the reported topic, and hence influence their audience. In the most extreme cases, fake news presents entirely fabricated facts to intentionally manipulate public opinion on a given topic. A rich diversity of opinions is desirable, but systematically biased information, if not recognized as such, can be a problematic basis for decision making. It is therefore crucial to empower news readers to recognize relative biases in coverage by providing timely identification of media bias, delivered together with the actual news coverage – for example, through a specifically designed news aggregator platform.
In this project we connect a long tradition of social science research on media bias with state-of-the-art methodology from computer science. The first part of the project centers on achieving rapid automated assessment of media bias in the news from a technical, computer science point of view. The second, social science part of the project then systematically studies how information about (relative) bias in the news can be disseminated to enable – rather than hinder – consensus formation and, in turn, collective decision making.