Journal of Psychological Research | Volume 03 | Issue 03 | July 2021
https://ojs.bilpublishing.com/index.php/jpr
DOI: https://doi.org/10.30564/jpr.v3i3.3282
ARTICLE
The Left-liberal Skew of Western Media
Emil O. W. Kirkegaard1* Jonatan Pallesen2 Emil Elgaard3 Noah Carl4
1. Ulster Institute for Social Research, Denmark
2. Independent researcher, Denmark
3. Independent researcher, United States
4. Independent researcher, United Kingdom
*Corresponding Author: Emil O. W. Kirkegaard, Ulster Institute for Social Research, Denmark; Email: emil@emilkirkegaard.dk
ARTICLE INFO
Article history:
Received: 24 May 2021
Accepted: 28 June 2021
Published Online: 27 July 2021

ABSTRACT
We gathered survey data on journalists’ political views in 17 Western
countries. We then matched these data to outcomes from national
elections, and constructed metrics of journalists’ relative preference for
different political parties. Compared to the general population of voters,
journalists prefer parties that have more left-wing positions overall (r’s
-.47 to -.53, depending on the metric used), and that are associated with
certain ideologies, namely environmentalism, feminism, social liberalism,
socialism, and support for the European Union. We used Bayesian model
averaging to assess the validity of the predictors in multivariate models. We
found that, of the ideology tags in our dataset, ‘conservative’ (negative),
‘nationalist’ (negative) and ‘green’ (positive) were the most consistent
predictors with nontrivial effect sizes. We also computed estimates of the
skew of journalists' political views in different countries. Overall, our
results indicate that the Western media has a left-liberal skew.
Keywords:
Media
Journalists
Media bias
Political bias
Cross-national
Survey
Poll
1. Introduction
It is widely claimed that the media leans left or is
biased against non-left-wing views [1-4]. However, such
claims have been disputed by others [5,6]. One significant
limitation of the empirical literature on media bias is that
it is narrowly focused on the United States [7], a problem
with quantitative media research generally [8]. Hence we
attempted to quantify the political skew of the Western
media as a whole.
How can one study political bias in the media? We are
aware of three main approaches. First, one can analyze
who owns or funds the media. This approach is based on
the assumption that owners exert some kind of influence
over the outlets they own. Interestingly, both far-left and
far-right commentators have cited analyses of media
owners in support of their views. Far-left commentators
have highlighted that the media are owned almost entirely
by the wealthy, who tend to hold conservative views on
economic issues [9]. Hence if owners do influence the
outlets they own, it would tend to be in the direction of
maintaining the status quo, which is assumed to benefit
them. On the far right, commentators have taken a similar
approach, except that instead of emphasising owners’
wealth, they have focussed on their ethnicity. In particular,
it has been claimed that Jewish-owned media tend to
support specific interests such as defending Israel, or
trying to undermine nationalism in Western countries [10,11].
journalists comprising the lion’s share of the workforce.
As a matter of fact, our initial analysis of surveys of
media personnel indicated that journalists constitute
the vast majority of respondents to such surveys. As a
consequence, any survey-based approach would have to
focus on journalists, while analysing data on, say, editors
only if it happened to be reported.
We took the approach of analysing media personnel
themselves because we found that it was relatively
easy to track down surveys from a variety of countries,
including many that had received little previous attention,
especially in the English language literature (e.g. Polish
and Scandinavian surveys). Note that it would be much
more difficult to analyse data from many countries using
content analysis because the relevant models do not easily
generalize across languages. Hence our approach can be
considered particularly useful in this regard. We decided
to focus our attention solely on Western countries because
language barriers would have been prohibitive for non-
Western countries.
A number of previous cross-national surveys of
journalists have been carried out. Yet these did not
generally include questions about voting behavior (or
vote intentions), but only self-placement on a left-right
scale. We consider this unsatisfactory in the present
context because of the reference group effect, namely that
journalists might rate themselves in comparison to others
within their profession and extended network, rather
than with respect to the general population. Furthermore,
previous research has shown that self-placement is only
moderately correlated with more complex measures
of political views, such as factors derived from many
multiple-choice questions [19-21]. To avoid this issue, we
decided to collect data on journalists’ voting behaviour or
vote intentions.
1.1 How Media Bias Works: The Distortion Model
Before proceeding to the methods section, it is worth
outlining the main causal model for the relationship
between journalists’ political views and media bias, which
we consider to be the distortion model [7]. The model
can be divided into three parts, which we will discuss
in turn. First, survey evidence indicates that journalists
have considerable leeway as to which stories they write
and how they write them. By and large, journalists seek
out stories in the information stream that surrounds them,
which happens to include a lot of other journalists. Here
political leanings are relevant, given that journalists
are presumably more interested in stories that cast a
favourable light on persons, parties or organizations with
which they identify, as well as stories that cast a negative
A second approach to studying media bias is to analyze
the content of the media itself [12,13]. Traditionally, this
involved reading through media output, and then manually
coding it as supporting one ideology or another. Because this
method relies on the subjective judgment of coders [9], it is
open to the criticism that those coders themselves might
be biased, or that there are second-order effects whereby
sources seem right-wing while being neutral, due to
most other media being left-wing [7]. In addition to these
criticisms, manual coding is extremely labor intensive
and is therefore difficult to implement in practice. To get
around these issues, studies have increasingly relied on
machine learning analysis of media content [14].
One study ranked media outlets from liberal to
conservative by comparing the number of times they
cited various think tanks or policy groups to the number
of times those groups were cited by Democrat versus
Republican members of the US Congress. Media outlets
that tended to cite groups more often cited by Democrats
were classified as more liberal, whereas those that tended
to cite groups more often cited by Republicans were
classified as more conservative [15]. In a related study,
Gentzkow and Shapiro (2010) [16] trained algorithms to
identify the phrases that most differentiated Democrat
versus Republican members of Congress, and then
classified newspapers based on how frequently they
used the phrases that were typical of Democrats versus
Republicans. Their rank ordering of major US news
outlets was similar to the one obtained by Groseclose and
Milyo (2005) [15], and they found that most news outlets
had a left-wing tilt. (Interestingly, they also found that
ownership of media outlets was of comparatively minor
importance).
Another way to utilise machine learning is to train
algorithms to code media content based on text cues
which may be incomprehensible to humans but that do
show predictive validity [17]. This necessitates having a
text corpus with known (or assumed) political leanings,
which can be obtained either by recruiting humans to
evaluate a subset of the data, or by relying on sources with
known (or assumed) positions, such as politicians who
have given speeches. For example, Budak et al. (2016) [17]
used human judges to rate a subset of their data, and then
evaluated the remaining, very large dataset using complex,
trained models. Their method yielded a similar ranking
of US news outlets to those that have been reported in
previous studies.
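To make this approach concrete, here is a minimal, self-contained sketch (in Python, with a tiny invented corpus; this is not the cited authors' code) of the idea behind phrase-based slant measurement: learn which phrases separate the two parties' speech, then score news text by how "Democrat-like" its language is.

```python
# Minimal sketch of Gentzkow-Shapiro-style slant scoring. The four
# "speeches" below are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

speeches = [
    "death tax relief for small business",         # Republican
    "tax relief and strong national defense",      # Republican
    "estate tax fairness for working families",    # Democrat
    "invest in working families and health care",  # Democrat
]
party = [0, 0, 1, 1]  # 0 = Republican, 1 = Democrat

vec = CountVectorizer(ngram_range=(2, 3))  # phrases, not single words
X = vec.fit_transform(speeches)
clf = LogisticRegression().fit(X, party)

news = ["new bill promises death tax relief",
        "plan to invest in health care for families"]
print(clf.predict_proba(vec.transform(news))[:, 1])  # P(Democrat-like language)
```

A real application would train on a large corpus of congressional speech and validate the learned phrases before scoring outlets.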
A third approach to studying media bias is to
analyse data on media personnel themselves [18]. Media
organizations employ a variety of workers, the most
important of whom are journalists and editors, with
bias [22].
Together, the three tendencies outlined above result in a
consistent slant of media output in line with journalists'
political preferences. The model is illustrated in Figure 1.
At each stage, some level of political bias enters into the
journalistic production process, and that bias accumulates
across the stages, resulting in output that becomes
progressively closer to the journalists’ own views.
2. Data Collection and Metrics
2.1 Data Collection and Initial Coding
We searched the published literature for surveys of
journalists that included questions on voting behavior
or vote intentions. This search yielded comparatively
few relevant articles, and we therefore turned to works
such as dissertations, reports, and newspaper articles.
The reports were often written in the local language (e.g.
French in France), and were often published by journalist
associations or media organizations. In other cases,
newspapers themselves conducted surveys and reported
the results in their own pages, almost invariably in the
local language.
To collect these data, we were assisted by a diverse
team of international research assistants who could read
the local language, and knew where to look or whom
to ask. When we were unable to find anyone, we wrote
to local journalist associations and relevant academics
asking if they knew of any relevant sources. In general,
our search was multi-faceted: Google Scholar, Google
advanced search, asking friends from relevant countries,
asking for assistance on social media platforms such as
Twitter and Reddit, etc.
The resulting sources were saved to a publicly accessible
repository at the Open Science Framework (https://osf.
io/6uvnu/). Online sources were archived to prevent link
rot or deletion of primary source material. Data from the
sources were coded using a standardized format, and entered
into a publicly accessible spreadsheet at Google Drive.
Usually some adjustments to the data were needed, and these
were done in a dedicated sheet within the spreadsheet so
that everything was fully documented. The most common adjustment concerns respondents who declined to say, or did not know, whom they had voted for. These individuals were excluded, and the definite preferences were renormalized to 100% by dividing by the sum of the definite preferences. An example of this is shown in Table 1.
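A minimal sketch of this renormalization, using the Table 1 figures (parties with 0% omitted for brevity):

```python
# Respondents who abstained or answered "don't know" are dropped, and
# the remaining (definite) preferences are rescaled to sum to 100%.
raw = {  # % of surveyed Polish journalists, 2005 (10% abstained)
    "Social Democracy of Poland": 3, "Democratic Left Alliance": 4,
    "Democratic Party": 9, "Civic Platform": 60, "Law and Justice": 11,
    "Real Politics Union": 3,
}
total_definite = sum(raw.values())  # 90
normalized = {p: 100 * v / total_definite for p, v in raw.items()}
print(round(normalized["Civic Platform"], 2))  # 66.67, as in Table 1
```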
This method effectively assumes that people who didn’t
contribute data would have voted in the same relative
proportions as their counterparts who did. In reality, this
assumption is probably somewhat inaccurate, and one
light on persons, parties or organisations to which they are
opposed. This kind of bias has been termed gatekeeping
or selectivity in the previous literature [22]. Note, however, that a recent US study found no evidence of liberal bias in which news stories political journalists choose to cover [6].1
Second, journalists have many options concerning
which sources to seek out when writing a story. Suppose
a local university has just rolled out a new policy. If a
particular journalist happens to support the new policy, he
can choose to seek out only or predominantly sources that
are likely to speak in its favor. Because there are always
many relevant sources that could be sought out, some
kind of selection has to be made. And one would expect
this selection to produce a list of sources that comports
with the journalist’s own preferences. On some occasions,
journalists may seek out particularly ill-informed
members of the opposing side of the story, so as to make
that side “look bad”. When writing a story about the local
university’s new policy, the journalist could seek out a
well-spoken professor who supports it, and a dissenting
individual who is known to make particularly incoherent
arguments.
Third, journalists have to make decisions about which words
or images to use when writing a story [23]. How should a given
individual be introduced or labeled? Consider Charles
Murray, the author of the controversial book The Bell
Curve, which is about intelligence and social inequality
in the United States. Should he be described as ‘far-right’,
‘controversial’, or a ‘pseudoscientist’? These are certainly
labels that have been used for him. Or maybe he should
be described as ‘a scholar associated with the American
Enterprise Institute’, which emphasizes his political
association with the libertarian-leaning think tank, or
even as a ‘leading scholar of American inequality’, which
emphasizes his intellectual contributions.
Choices over how to describe particular individuals are
inevitable when writing about politics, and the distortion
model assumes that journalists’ choice of words reflects
their own political preferences. A journalist with left-wing
views will tend to see everybody else as comparatively
right-wing, while a journalist with right-wing views will
tend to see everybody else as comparatively left-wing.
This kind of bias has previously been labeled statement
1 The authors ran a correspondence experiment in which they emailed a large number of journalists on behalf of a fictitious candidate for the state legislature, and asked each one whether she would be able to cover the candidate. They found that journalists were not significantly more
likely to cover the candidate when he was described as conservative,
as compared to when he was described as liberal. However, we do not
believe this provides compelling evidence for the authors’ conclusions
because it could be ideologically advantageous for journalists to cover a
candidate from the opposing party. For example, it might allow them to
misrepresent the candidate or to cast his views in an unfavourable light.
a range of 72 to 1,338. The mean/median proportion of journalists who provided party preference data was 0.75/0.79 (SD = 0.17). There was a mean/median of 2.5/1.0 surveys per country.
2.2 Relative Preference Metrics
To estimate journalists’ political preferences, one needs
a reference population to serve as an anchor. Since we had
decided to focus on journalists’ voting behaviour or vote
intentions, we utilised data from the national election that
was nearest in time to the relevant journalist survey. In a
few cases, we averaged two elections equidistant in time
(details are given in the Calculations sheet). Political skew
was defined as any deviation of journalists’ preferences
from those of the general voting population. Of course,
such skew could be in either one of two directions:
journalists might prefer a given party more or less than the
general voting population.
Because of disagreement about the optimal metric to
use (the authors could not even agree), we decided on a
pluralistic approach, and employed several metrics. First, we
used the delta %point (d) metric. This is the simplest metric,
and is defined as journalist% - general%, i.e., the number
of %points that journalists vote more for a given party
than the general population. When negative, it means that
journalists vote for the party less than the general population.
The metric could be seen as problematic for smaller parties
because it fails to capture the relative aspect of party support.
If journalists are 5 ppts more likely to vote for a particular
party, it may matter whether that party enjoys 5% or 50%
support in the general population.
The second metric we used is the relative risk (RR), which
takes into account the relative party sizes. This is defined
as journalist% / general%, and captures the differences
in relative support. In the case of 0% support among the
can come up with hypotheses as to how the method might
bias results either to the left or to the right [7,24]. One might
expect a bias towards the political centre from the left
because very left-wing journalists will decline to state
their preferences, cognizant that the surveys will reveal
the overall leanings of their profession, which would
not be in their interest if they want to appear as neutral
reporters of the truth. Alternatively, one might expect a
bias toward the left from the right because right-wing
journalists might not dare to state their true preferences
even in anonymous surveys. It is possible to quantitatively
analyze the question by examining whether surveys with
more abstainers produced different findings than those with fewer, or by asking about voting intentions in more
circumspect ways [25,26].
The sample size of the surveys included in our analysis
had a mean/median of 542/500, with a range from 89 to
1,640. The effective sample size (number with definite
preference given) had a mean/median of 418/408 with
Figure 1. Sketch of the distortion model for the journalistic process.
Table 1. Calculation example for vote normalization. Polish journalists for the 2005 parliament election (1st round).

Party                                     % of journalists   % of votes cast
Social Democracy of Poland                3                  3.33
Democratic Left Alliance                  4                  4.44
Democratic Party                          9                  10
Civic Platform                            60                 66.67
Polish People's Party                     0                  0
Law and Justice                           11                 12.22
Self-Defence of the Republic of Poland    0                  0
League of Polish Families                 0                  0
Real Politics Union                       3                  3.33
Total voted                               90                 100
Abstained                                 10                 -
is that they are undefined when journalists have zero support
for a given party in our samples. This problem arises
due to sampling error for parties that have low levels of
support among journalists (assuming that journalists never
have exactly 0% support for a party). Thus, excluding the
undefined data points would result in a data bias because
of the excluded data’s relation to the outcome of interest
(i.e., there would be nonrandom missing data). We therefore
conducted a simulation study to investigate the best way
to adjust the data. We found that a local regression model
based on sample size performed well. We imputed the best
guess of support for parties where 0% was observed with a
given sample size. After that, the support for other parties
was adjusted downwards slightly so that the sum was again
100%. This essentially mimics a Bayesian approach with a
weak prior. See the supplementary materials for more details
about this procedure.
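As a rough illustration only: the sketch below substitutes a simple pseudo-count rule for the authors' local-regression imputation (the 100/(2n) figure is our assumption, not theirs), then rescales so the shares again sum to 100%.

```python
# Impute a small positive value where 0% support was observed, then
# renormalize. The 100 / (2 * n) rule is an assumed stand-in for the
# paper's sample-size-based local regression model.
def adjust_zeros(support, n):
    """support: {party: observed %} summing to 100; n: journalist sample size."""
    imputed = {p: 100 / (2 * n) if v == 0 else v for p, v in support.items()}
    total = sum(imputed.values())
    return {p: round(100 * v / total, 2) for p, v in imputed.items()}

print(adjust_zeros({"A": 70, "B": 30, "C": 0}, n=250))
# {'A': 69.86, 'B': 29.94, 'C': 0.2} -- C gets a small value, A and B shrink
```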
2.3 Party Data
Figure 2. Infobox on Wikipedia for Sweden Democrats (Sverigedemokraterna, https://en.wikipedia.org/wiki/Sweden_Democrats).
general population, the metric would be undefined. However,
given that journalists are part of the general population, this
scenario is impossible, and never occurred in our data. One
problem with the relative risk is that it is harder for larger parties to have high ratios than for smaller parties. If a party is
already at 20% general population support, the maximum RR
is 5 because journalists cannot support it more than 100%.
For a party with 5% support among the general population, a
relative risk of 20 is possible.
The third metric we used, which takes into account the
reciprocal of support for each party, is the odds ratio (OR),
defined as (journalist-support% / journalist-non-support%)
/ (general-support% / general-non-support%). This metric
is commonly used when modeling binary outcomes for
statistical reasons (e.g. as log odds in a logistic regression),
but is less intuitive. The relative risk and odds ratio measures
suffer from a non-linearity problem relating to the direction
of coding. The RR can theoretically be almost infinitely
large, but cannot be lower than 0. A solution for this is to
log10 transform the metric. This results in a linear (i.e.,
interval) scale where a 2-unit decrease in the score has the
same meaning as a 2-unit increase. Table 2 shows a few
examples of the metrics.
Table 2. Example calculations of preference metrics.

Journalist%   General%   d     RR     OR     log10RR   log10OR
45            20         25    2.25   3.27   0.35      0.51
35            10         25    3.50   4.85   0.54      0.69
15            10         5     1.50   1.59   0.18      0.20
10            10         0     1.00   1.00   0.00      0.00
10            15         -5    0.67   0.63   -0.18     -0.20
10            35         -25   0.29   0.21   -0.54     -0.69
20            45         -25   0.44   0.31   -0.35     -0.51
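A minimal sketch that computes all five metrics for a single journalist%/general% pair and reproduces the first row of Table 2:

```python
import math

def metrics(j, g):
    """j, g: party support in percent among journalists / general voters."""
    d = j - g                                 # percentage-point difference
    rr = j / g                                # relative risk
    odds = (j / (100 - j)) / (g / (100 - g))  # odds ratio
    return d, rr, odds, math.log10(rr), math.log10(odds)

print([round(x, 2) for x in metrics(45, 20)])
# [25, 2.25, 3.27, 0.35, 0.51] -- first row of Table 2
```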
None of the metrics are entirely satisfactory. For
instance, while the log10RR takes into account the relative
support without nonlinearity problems, it does not give any
information about the overall importance of the difference. If
a party has 1% support among the general population and 3%
support among journalists, this would constitute a threefold
difference in attitudes, but would not be very important in
terms of overall voting behavior. The d metric would clearly
show this, however, while providing less information about
the differences in relative support. A difference in party
support of 80% versus 90% might not be taken to have the
same importance as a difference of 1% versus 11%, say, even
though both differences have a d-value of 10% points. We
decided to report detailed results from the d and log10RR
metrics in the main text. Results for the other metrics can be
found in our supplementary materials.
One particular problem with the log transformed metrics
Table 3. Political party ideology tag tabulation. LW = left-wing, RW = right-wing.

Rank   Tag                     Proportion   Count
1      conservative            0.45         86
2      liberalism              0.40         76
3      EU_positive             0.23         43
4      EU_skeptic              0.21         41
5      populism                0.18         35
6      green                   0.18         34
7      nationalism             0.17         33
8      social_democracy        0.17         32
9      social_liberalism       0.17         32
10     christian               0.15         28
11     socialism               0.14         27
12     RW_populism             0.13         25
13     democratic_socialism    0.12         22
14     national_conservatism   0.11         21
15     agrarianism             0.08         15
16     feminism                0.07         13
17     communism               0.07         13
18     libertarianism          0.05         10
19     centrism                0.04         7
20     direct_democracy        0.04         7
21     LW_populism             0.03         5
wing to far-right”). In almost every case, we removed
any other descriptions given, and converted the political
position into a numerical scale from -3 to 3, reflecting
the 7 possible descriptors used.2 The descriptors were
then averaged for each party. For parties with two descriptors listed, this resulted in half-integer values (e.g. 2.5 for "right to far right"). Figure 3 shows the distribution of political
positions in the data. The mean/median position was
0.05 with a standard deviation of 1.40 and skew of 0.01.
Thus, it was nearly perfectly symmetrical despite having
a bimodal shape. This seems to indicate a relative lack
of bias in Wikipedia’s positioning of parties, since bias
would have presumably skewed the distribution in a
particular direction.
2.4 Independent Party Ratings
To check the validity of Wikipedia’s party ratings,
we recruited 25 individuals to rate all 197 parties in
our dataset on a 7-point scale from “far-left” to “far-
right” (including non-integer values, if desired). These
individuals were recruited online via Facebook groups for
people interested in politics, and via participant referral
(snowballing). Each individual received approximately
300 DKK (45 USD) for participating. Raters were told
that they could use any approach they wanted, except that
they should not use Wikipedia, and should not simply
copy-and-paste ratings from another source, including
another participant. They were not told the purpose of the
study. 23 out of 25 raters were Danish (the remaining two
were Dutch and Portuguese, respectively); 60% were male;
and they were aged between 17 and 30. The raters were
2 One party was described as “syncretic” which we also coded as 0.
By combining data from surveys of journalists and
general elections, we computed metrics of relative support
for political parties. There were 151 parties with at least
one relative preference datapoint in our sample. By itself, however, this information is not very informative: one also needs information about the parties themselves.
Instead of relying on the authors’ judgment of party
political ideology and relative placement (which could
of course be biased), we relied on the English language
Wikipedia as an external source. English Wikipedia has
pages for all of the parties in our dataset (n = 197), and
provides ideological and relative placement data in a
semi-structured format called the infobox. We retrieved
and processed this information automatically using a web
scraper. Political left-right position data were available for
93% of the parties (n = 184), and political ideology data
for 97% (n = 191). Missing data were mostly confined
to small or defunct parties. Figure 2 shows a part of the
infobox for a party (Sweden Democrats, from Sweden).
The information of interest is given by Ideology and
Political position. For political ideology, we cleaned
the references (i.e., the numbers in brackets) and any
explanatory text in parentheses (not shown in example).
This results in a tag set for every party. The tags across
pages were not entirely standardized, so to reduce the
number of tags to a more manageable quantity, we
recoded and merged a few of them. The details of this
procedure are given in the supplementary materials. Table
3 shows the frequency distribution for the ideology tags.
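A minimal sketch of this kind of scraping, assuming only the standard Wikipedia infobox markup (a table with class "infobox" whose rows pair a label cell with a value cell); this is an illustration, not the authors' scraper:

```python
import re
import requests
from bs4 import BeautifulSoup

def ideology_tags(url):
    """Return the Ideology entries from a party's English Wikipedia infobox."""
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    infobox = soup.find("table", class_="infobox")
    if infobox is None:
        return []
    for row in infobox.find_all("tr"):
        header = row.find("th")
        if header and header.get_text(strip=True) == "Ideology":
            text = row.find("td").get_text("|", strip=True)
            # drop footnote markers like [3] and split into separate tags
            return [t for t in re.sub(r"\[\d+\]", "", text).split("|") if t]
    return []

print(ideology_tags("https://en.wikipedia.org/wiki/Sweden_Democrats"))
```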
For political positions, nearly all the descriptors refer
to a relative position between far-left and far-right,
sometimes with two descriptors being used (e.g. “right-
Figure 3. Political positions of parties according to English Wikipedia position data.
recruited by a research assistant who was also not aware
of the study’s purpose.
There was no evidence of cheating on the part of
raters. No two raters were suspiciously similar in ratings
(maximum r = .91, 2nd highest, r = .80, mean = .63),
suggesting they had not copied one another. And none of
the raters gave ratings that were suspiciously similar to
Wikipedia’s positions (maximum r = .85, mean r = .66),
suggesting that they had complied with our instruction
not to use Wikipedia. Measures of internal consistency for
the average party ratings were good, although two of the
raters gave ratings that were only weakly correlated with
the others’ (r’s .15 and .36). The intraclass correlation was
= .54 (.61 without two poor raters), Chronbach’s alpha was
.97 (.97 without two poor raters), and the median correlation
was .61 (.63 without two poor raters). Clustering the ratings
by similarity did not reveal any obvious effects of age, sex or
Danish nationality.
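For readers who want to reproduce such consistency checks, here is a minimal sketch computing Cronbach's alpha and the median inter-rater correlation for a raters-by-parties matrix (random data standing in for the real 25 x 197 ratings):

```python
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.normal(size=(25, 197))  # rows = raters, columns = parties

def cronbach_alpha(X):
    """Treat each rater as an 'item' and each party as an observation."""
    k = X.shape[0]
    item_vars = X.var(axis=1, ddof=1)       # each rater's variance
    total_var = X.sum(axis=0).var(ddof=1)   # variance of summed ratings
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

corr = np.corrcoef(ratings)                  # 25 x 25 inter-rater correlations
off_diag = corr[np.triu_indices(25, k=1)]
print(cronbach_alpha(ratings), np.median(off_diag))
# With random data both values are near 0; real ratings give much higher values.
```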
Overall, the mean party ratings were strongly correlated
with Wikipedia’s ratings: across 183 parties for which
Wikipedia ratings were available, r = .86, shown in Figure 4.3
3 The discrepancy with the number of parties in the Wikipedia data, 184,
is that we mistakenly omitted one party from the list of parties given to
the raters, and therefore have no rating data for this party (Pirate Party
Germany). However, since the party in question is relatively minor, this
One interesting difference between Wikipedia’s ratings
and participants’ ratings is that participants labelled
parties as “far-right” less often than Wikipedia. (Note the
relative lack of parties in the top right of the plot). This
may be due to our raters being slightly more right-wing
than average, Wikipedia having a left-wing bias, or raters
interpreting “far-right” as referring to Neo-Nazi parties,
of which there are none in our dataset (their voter support
was too low, and they were outlawed in some countries).
Overall, however, the average participant ratings strongly
corroborate the measures derived from Wikipedia.
2.5 Data Exclusion Rules
We excluded parties that received less than 2% of the vote in the
general election. The journalist samples are generally too
small to calculate accurate preference metrics for such
parties, and they would therefore mostly contribute noise
to the results. They are also of little practical importance
since they enjoy no real political power in the countries
where they are found.4 This resulted in 19 excluded cases
(of 151, 13%). As a robustness test, we analyzed the effect
omission is unlikely to have affected our results.
4 Most countries have election thresholds at higher values than 2%, with
an average of about 4% across the countries in our dataset.
The simplest way to analyze the data is to examine the
left-right position of parties, and the relative preferences
of journalists for or against those parties. We report results
based on both participants’ party ratings and Wikipedia’s
party ratings. Our two preferred metrics, d and log10RR,
are shown in Figures 5-8.
The figures indicate that there is a relatively strong
relationship, whereby journalists tend to prefer parties with
more left-wing positions overall. For Wikipedia’s ratings:
r = -.47 for d, r = -.53 for log10RR. And for participants’
ratings: r = -.50 for d, r = -.53 for log10RR. In fact, the
relationship was about equal in strength across all of our
metrics and rating sources. For Wikipedia’s ratings: r = -.48
for RR, r = -.40 for OR, and r = -.53 for logOR. And for
participants’ ratings: r = -.53 for RR, r = -.47 for OR, and r
= -.54 for logOR. The relationship is somewhat nonlinear
when using the Wikipedia data: the maximum differential
in support is for center-left parties, rather than for far-
left parties. A likelihood ratio test of a linear vs. nonlinear
(spline) model supports this conclusion for the logRR metric, and is borderline-significant for the d metric (d: adj. R2 21.3% vs. 24.4%, p = .052; logRR: 27.4% vs. 31.7%, p = .016). The nonlinearity was not evident in the rater
data (model comparisons p’s .48 and .17, for d and logRR
metrics, respectively).
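A minimal sketch of such a linear-versus-spline comparison on synthetic data (the paper reports likelihood ratio tests; the F-test below, via statsmodels, is a close analogue):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
df = pd.DataFrame({"position": rng.uniform(-3, 3, 130)})
# preference made mildly nonlinear by construction
df["pref"] = -0.3 * df["position"] - 0.1 * df["position"] ** 2 \
    + rng.normal(0, 0.5, 130)

linear = smf.ols("pref ~ position", data=df).fit()
spline = smf.ols("pref ~ bs(position, df=4)", data=df).fit()  # patsy B-spline
print(linear.rsquared_adj, spline.rsquared_adj)
print(anova_lm(linear, spline))  # a small p-value indicates nonlinearity
```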
of this data exclusion, to be discussed later.
3. Analyses
There are a number of ways to analyze the dataset we
assembled. The analyses reported here are to some degree
exploratory because it was not clear how best to analyze
the data before we gathered it, and we were therefore
unable to pre-register our analyses [27].
To avoid giving more weight to parties for which we
had multiple surveys, all voting data were averaged at
the party level. In addition, parties were weighted so that
each country had the same overall weight, at least initially
(i.e. each party was assigned a weight of 1/n, where n is
the number of parties with data for that country). This
prevented countries with more parties from exerting a
disproportionate influence on the results.5
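A minimal sketch of this weighting scheme (the column names are our own):

```python
import pandas as pd

surveys = pd.DataFrame({
    "country": ["DK", "DK", "DK", "SE"],
    "party":   ["A", "A", "B", "C"],
    "log10rr": [0.30, 0.20, -0.10, 0.15],
})
# 1) average multiple surveys of the same party
parties = surveys.groupby(["country", "party"], as_index=False)["log10rr"].mean()
# 2) weight each party by 1 / (number of parties in its country)
parties["weight"] = 1 / parties.groupby("country")["party"].transform("count")
print(parties)  # DK parties weigh 0.5 each, the lone SE party weighs 1.0
```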
3.1 Left-right Position
5 The election threshold (minimum vote% needed to gain any seat)
has an impact on the number of parties that make it into a country’s
parliament. Since countries differ with respect to this threshold, some
countries have a lot more unique, smaller parties, and others fewer
and larger ones. The USA and the Netherlands represent the extreme
positions on this since the USA has a de facto 2 party system and the
Netherlands has no election threshold with 150 seats (i.e., one needs
0.67% of the vote to gain a seat).
Figure 4. Wikipedia political positions vs. rater political estimates.
Figures 5-8. Party political position and journalists' relative preference for the d and log10RR metrics. Orange line = OLS fit, blue line = LOESS fit.
3.2 Ideology Tags
Next, we turn to our ideology tags. Here we begin
by taking a univariate approach, calculating an average
(central tendency) for each tag. We chose the weighted
mean and median as our estimators.6 Figure 9 shows the
results for the logRR metric.
Compared to the general voting population, journalists
prefer parties that are associated with the following
ideologies: green parties/environmentalism, feminism,
support for the European Union, socialism. Conversely,
journalists are less likely than the general voting
population to support parties associated with the following
ideologies: national conservatism, libertarianism,
populism, nationalism and conservatism. Tables S1 and
S2 in the Supplementary Information give the numerical
results.
In some cases, the magnitude of journalists’ relative
preferences was quite large. Across the two versions of
RR, the general population votes about 6.1 times more for
national conservative parties than journalists do, whereas
journalists vote about 3.0 times more for green parties.7
For robustness, we analysed the data using unweighted
versions as well as using the log10OR metric. However,
doing so made relatively little difference to the results.
The correlations across combinations of metrics were very
strong with a mean/median correlation of r = .89/.91.
3.3 Multivariate Analyses
Having seen that both left-right position and most
ideology tags are associated with journalists’ relative
preferences, we now tackle the more complicated question
of how to combine the predictors into a single model. In
particular, we have only about 120 cases with complete
data, but 22 interrelated predictors. The predictors consist
of the 21 tags and the overall political position, of which
we have two versions. We also have three different
outcome variables. Given these limitations, we did not
expect to get useful results from OLS regression.
6 The unweighted median is the middle datapoint in a set of numbers
ranked by value. If there are an even number of datapoints, the mean of
the two middle datapoints is used. The weighted median works the same
way, but applies the weights (1/number of parties in that country) to
increase or decrease the relative size of each datapoint along the ranking,
and then chooses the middle one as usual.
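A minimal implementation of the weighted median as described in footnote 6:

```python
def weighted_median(values, weights):
    """Sort values, accumulate weights, and return the value at which the
    running total first reaches half of the total weight."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2
    running = 0.0
    for value, weight in pairs:
        running += weight
        if running >= half:
            return value

print(weighted_median([1, 2, 3, 10], [0.25, 0.25, 0.25, 0.25]))  # 2
print(weighted_median([1, 2, 3, 10], [0.1, 0.1, 0.1, 0.7]))      # 10
```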
7 The outlier in the figure for the green tag is Youth Party – European
Greens, from Slovenia (https://en.wikipedia.org/wiki/Youth_
Party_%E2%80%93_European_Greens), which obtained anomalously
low support among journalists in a sample of 300 journalists from 2009.
As a matter of fact, the result is probably due to sampling error, given
that the party received only 2.6% among the general population and 0.3%
among the journalists. This is a difference of only a few individuals in
the sample, and illustrates the extreme sampling error problem with the
RR and OR metrics when the level of support for a party is low.
In macroeconomics, a similar issue arises when using
country-level data. Consequently, some economists
have begun using a Bayesian model averaging (BMA)
approach8 [28-31]. This method involves fitting all the
possible regressions (where possible, otherwise sampling
1000s of them), and seeing which predictors tend to be
included in the best models, and how strong they are in
these models. It is conceptually similar to best subset
selection [32], and is a form of meta-analysis [33]. We fit
BMA to our dataset using the BMS package [34]. We used
the default settings for the package, and analyzed the
complete set of models since this was computationally
feasible. In our case, there were 21-22 predictors yielding
2-4 million models to evaluate (runtime on a laptop was a
few minutes for each set). We did this for each of the three
outcome metrics (d, log10RR, and log10OR), and each
of the two sets of political position data (Wikipedia and
rater-based). We left out the populism tag because of its
redundancy with respect to the directional populism tags
(right-wing populism and left-wing populism).
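BMS is an R package; as a language-agnostic illustration of the underlying BACE idea, the sketch below enumerates all predictor subsets, fits OLS to each, weights models by exp(-BIC/2), and sums model weights to obtain each predictor's posterior inclusion probability (PIP). The tag names in the demo are arbitrary, and this is not the authors' BMS workflow.

```python
import itertools
import numpy as np

def bace(X, y, names):
    """Posterior inclusion probabilities via BIC-weighted all-subsets OLS."""
    n, k = X.shape
    subsets, bics = [], []
    for r in range(k + 1):
        for subset in itertools.combinations(range(k), r):
            Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta) ** 2)
            bics.append(n * np.log(rss / n) + Xs.shape[1] * np.log(n))
            subsets.append(set(subset))
    bics = np.array(bics)
    weights = np.exp(-(bics - bics.min()) / 2)  # BIC approximation to posterior
    weights /= weights.sum()
    return {names[j]: float(weights[[j in s for s in subsets]].sum())
            for j in range(k)}

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 5))                     # 120 cases, 5 tags
y = 2 * X[:, 0] - X[:, 1] + rng.normal(size=120)  # only tags 0 and 1 matter
print(bace(X, y, ["green", "conservative", "feminism", "christian", "agrarian"]))
# The first two tags get PIPs near 1; the irrelevant tags get low PIPs.
```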
Because the output from these analyses is rather
lengthy, we have confined it to the supplementary
materials (see Tables S3-S6). In an ideal world, there
would be variables that are clearly important in all
models, and variables that are not. In addition, the same
variables would be important no matter which outcome
metric, or which version of the political position data,
we use. Unfortunately, our output tables showed that
reality is not quite so clear. For example, in the first meta-analysis (Wikipedia data + d outcome) the 'conservative' tag variable was included in 97% of the best models. The effect size was quite large at -9.7 (i.e., journalists' vote%
was 9.7% lower for parties tagged as ‘conservative’,
holding the other variables constant). However, when we
compared these results to the parallel results based on
the rating data, we found that this tag was only included
in 85% of the best models, though it still had a sizable
beta of -7.7. And when we looked at the results based on
log10RR, the same tag was only included in 93% and 66% of the best models (for the Wikipedia and rating data, respectively). This shows that modelling choices
data, respectively). This shows that modelling choices
matter for the stability of the results.
For the sake of simplicity, we looked for variables
that appeared to be useful across the four meta-analyses
corresponding to our preferred specifications (Tables S3-S4). We somewhat arbitrarily defined 'useful' variables as those that 1) had a posterior inclusion probability (PIP) of at least 10%, and 2) had a posterior mean effect size larger than trivial (d > 1, |log10RR|
8 This method is also called Bayesian averaging of classical estimates
(BACE), where classical refers to the frequentist approach in the source
regressions.
the general population of voters. The strength of this skew
varies from -0.17 to -0.96, with a mean/SD of -0.58/0.26.
If instead we look at the rater data, we see a left-bias for
16 out of 17 countries, with a mean/SD of -0.52/0.26. The
only notable difference is that Slovenia has a very slight
right-bias in the rater data, probably related to the issue
with small parties we discussed earlier. It can also be
observed that countries differ substantially in their mean
political position, and that these differences make intuitive
sense. Poland stands out in our dataset as particularly
right-wing (0.98 and 1.11, respectively) and indeed, it is
generally known as a conservative, Catholic country. On
the other hand, the general populations in the Netherlands
and Germany are rated as left-of-center in our dataset.
This is somewhat puzzling for Germany, given that the
country has been governed by center-right parties since
2005 (headed by Angela Merkel). Generally speaking,
the results in Table 4 should be taken as a first attempt at quantifying the skew of journalists in different countries, and not as something definitive.
3.5 Robustness Checks
We have already seen that the tag-based results were
fairly robust to the outcome metric and the use of weights
> .05, i.e. 10% increase). Given these criteria, the most
important variables were: ‘conservative’ (negative),
‘nationalism’ (negative), and ‘green’ (positive). Thus, our
multivariate analysis shows that these seem to be the most
useful variables that have an appreciable effect size. The
conclusion is not necessarily that other variables don’t
matter, but just that their effects are difficult to detect with a high degree of confidence in the current dataset.
3.4 Left-right Position by Country
Using the party preference data and the political
position of parties, it is possible to calculate the overall
political position of the general population in a country,
the position of journalists in that country, and the
difference between them. The latter may be taken as an
overall estimate of the left-right skew of the journalists
in a particular country. However, given that some of
the data on journalists were obtained from ad hoc
samples, individual estimates are subject to considerable
uncertainty, and should be interpreted with caution. Table
4 shows the values for the countries with available data,
ordered by the magnitude of the skew.
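A minimal sketch of how such country-level positions can be computed (the numbers are invented): each group's mean position is the vote-share-weighted average of party positions, and the skew is the difference.

```python
parties = [  # (party position, journalist %, general %)
    (-1.5, 40, 25),
    ( 0.0, 35, 30),
    ( 1.5, 25, 45),
]
journo = sum(pos * j for pos, j, _ in parties) / 100   # -0.225
general = sum(pos * g for pos, _, g in parties) / 100  # 0.30
print(journo, general, journo - general)               # skew = -0.525
```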
Based on the Wikipedia data, it can be observed that, in
16 out of 16 countries, journalists are more left-wing than
Figure 9. Journalists’ relative support for parties by political ideology. Red diamonds correspond to the weighted
median for each tag in the log10 RR metric.
very low levels of support in the general population on the
grounds that these data would be afflicted by substantial
sampling error. We examined the effect of trying different
thresholds for exclusion, including none. Figure 10 shows
the results across method choices.
Here we see that changing the threshold from 0 to 10%
leads to an increase in the effect size, presumably due to
removal of cases with large errors. At the 10% threshold, only 59 cases out of the original 151 remain in the
analysis. Thus our decision to only remove parties with
less than 2% support seems to be a rather conservative
choice that tended to weaken the results slightly.
Third, we tried dropping data from older sources.
While we sought to identify the newest possible sources,
especially surveys from the last 20 years, we sometimes
had to rely on older sources. The publication dates of the
surveys included in our analysis range from 1997 to 2017,
but most of the data were of more recent origin: mean/
SD = 2008/5.6. Did the inclusion of older data affect our
results? If we drop the data from before 2005, the sample
size changes from 132 to 100, and the results change from
-.50 to -.55 (rating data, d metric), -.55 to -.58 (rating,
log10RR), -.47 to -.50 (Wikipedia, d), and -.54 to -.57
(cf. Section 3.2). However, there are other decisions that
might have influenced the results. (Note that we also
compiled data on the political attitudes of other media
personnel. These are provided in Tables S7-S8 in the
Supplementary Information.)
First, recall that we used a pseudo-Bayesian approach
to move the 0 values away from exact zero, so as to
ensure that our RR values would be meaningful. (Observed values of 0 result in relative risks (RRs) of 0, and thus infinitely negative values under the log transformation.) We re-ran
the main left-right analysis using the unadjusted values.
This is straightforward for the d metric, and yielded
very similar results, as expected (for the ratings data:
r = -.50 before and after; for Wikipedia data, r = -.47
before and after). For the log10RR metric, this alternative
specification yielded a small increase in effect sizes, due
to the removal of many datapoints corresponding to right-
wing parties with observed 0% support among journalists
(n dropped from 132 to 120). The observed changes were
quite minor, however: the log10RR left-right correlations
changed from -.53 to -.55 for the Wikipedia data, and
from -.53 to -.56 for the ratings data.
Second, recall that we excluded data for parties with
Table 4. Average political position of journalists and the general population of voters, by country. Participant ratings were used to calculate positions. The ratings derived from the Wikipedia data correlated at r = .86 with these, but did not include the USA.

                    Wikipedia-based political position        Rater-based political position
    Country          Journalist   General pop.   Bias         Journalist   General pop.   Bias
                     mean         mean                        mean         mean
1   Austria          -0.36        0.62           -0.97        -0.77        0.33           -1.10
2   France           -0.73        0.20           -0.93        -1.08        -0.27          -0.81
3   Switzerland      -0.22        0.66           -0.89        -0.06        0.63           -0.69
4   Denmark          -0.75        0.10           -0.86        -0.69        0.01           -0.69
5   Ireland          -0.66        0.18           -0.84        -0.85        -0.26          -0.58
6   Sweden           -0.65        0.06           -0.71        -0.85        -0.18          -0.67
7   Norway           -0.37        0.18           -0.56        -0.21        0.25           -0.46
8   United Kingdom   -0.31        0.23           -0.54        0.01         0.52           -0.51
9   Poland           0.48         0.98           -0.50        0.70         1.11           -0.42
10  Finland          -0.26        0.24           -0.50        -0.64        -0.03          -0.61
11  Belgium          -0.46        0.02           -0.48        -0.71        -0.14          -0.57
12  Netherlands      -0.59        -0.14          -0.45        -0.71        -0.21          -0.50
13  Australia        -0.11        0.24           -0.35        -0.17        0.25           -0.42
14  Canada           -0.30        0.01           -0.31        0.28         0.38           -0.10
15  Germany          -0.59        -0.38          -0.21        -0.69        -0.29          -0.40
16  Slovenia         0.19         0.35           -0.17        -0.06        -0.09          0.03
    USA              -            -              -            0.51         0.88           -0.37
Of course, our analyses have a number of important
limitations. First, not all the surveys of journalists had
large sample sizes. Ideally, one would want a large,
representative sample of journalists from each country.
However, we often had to rely on ad hoc samples of journalists (e.g. a survey from a particular region or city, or one based on a limited number of outlets). The sample
sizes varied from small (<100) to large (>1,000). The
smaller surveys may of course yield uncertain estimates of
journalists’ preference for or against a given party. When
we analyzed the effect of inclusion threshold, an indirect
way of evaluating the effects of sampling error, we found
that increasing the threshold did not weaken the results.
Second, many of the surveys were somewhat older
than we would have liked. We attempted to find samples collected within the last 22 years (1998 onward) to reduce drift in party ideology between the time of the survey and the time party data were added to Wikipedia. We only
included older surveys when we were unable to find newer
ones, based on the assumption that some slightly older
data is preferable to no data at all for a given country. In
addition, we did not find a substantial effect of sample age
in our analyses.
Third, despite collecting data for multiple years, there
were still some Western countries for which we were
(Wikipedia, log10RR). Hence there does not appear to be
a notable effect of source age on our results. In fact, our
inclusivity tended to weaken the results slightly.
4. Discussion and Conclusions
We have attempted to quantify the political skew of
Western media by comparing survey data on journalists’
voting behaviour to national election results. Our results
showed that journalists lean left overall (Section 3.1),
and that they are particularly unsupportive of national conservatism, while being particularly supportive of environmentalism, feminism and the EU (Section 3.2). In
multivariate analysis using Bayesian model averaging
(Section 3.3), we found that three ideology tags were
consistent predictors: conservatism (negative), nationalism
(negative), and green (positive). The findings we observed
were generally robust to alternative specifications and
sensitivity checks (Section 3.5). We believe they are
unlikely to change much upon collection of additional
data. Indeed, we wrote most of the analysis code early on
in the process of data collection, and monitored the results
as new data came in. The findings presented here are
quite similar to those observed when data on only a few
countries were available.
Figure 10. Left-right correlation results across minimum party support exclusion rule values.
the scientific process itself, given that findings reported
in the media receive more attention from scientists and
tend to get cited more [39-41]. The academic performance of authors whose work is selected for coverage (quantified by means of citation indexes) will increase, and they will be likely to receive more funding for that line of research. This gives the authors an incentive to produce more research in the same vein. The process we have just outlined is illustrated in the flowchart shown in Figure 11.
It is important to mention that the potential impact of
political bias is only statistical. Despite being left-leaning,
the media and academia obviously produce many stories
and research findings that are not "friendly" to left-wing
causes. However, it is plausible that they produce fewer
of these stories and findings than they would do in the
absence of the observed political skew. This model is not a
conspiracy theory because it does not postulate any secret
coordination between large numbers of actors in different
areas.9
4.2 Increasing Media Bias
Several studies have documented an increase in the
left-liberal skew of Western media in recent decades [42-45].
But what factors may have given rise to such an increase,
if it has in fact occurred? Shafer and Doherty (2017) [42]
argue that the increase in political bias in the media is
attributable to deep-rooted economic factors. They show
that media personnel increasingly work in coastal areas
which are left-wing. According to their calculations based
on the U.S. Bureau of Labor Statistics employment data,
the percentage of newspaper and internet-publishing
workers working in a county where Democrats won
increased from 61% in 2008 to 72% in 2016. Furthermore,
the percentage of these individuals who worked in a
county that was won by more than 30% points increased
from 32% to 51%. Their results are even starker for
internet media personnel: 90% of such individuals worked
in a county won by Clinton, and 75% worked in a county
where she won by more than 30% points. The reason for
the increasing urbanization of journalists seems to be the
expansion of national media at the expense of local media,
which is presumably tied at least in part to the decline in
advertising revenues for newspapers.
9 Note that there has been at least one case of large-scale secret
collaboration, namely JournoList (Calderone, 2009). Ezra Klein (the
former editor of Vox) ran a secret discussion forum (a Google Group)
for several hundred left-leaning “bloggers, political reporters, magazine
writers, policy wonks and academics”. The individuals on this list
sometimes worked together on pieces that later appeared in the news,
and even plotted to collectively kill stories they considered damaging to
their political goals (Strong, 2010).
unable to obtain any relevant survey data. Unfortunately,
these were not randomly located, but rather concentrated
in southern and eastern Europe. The Southern European
countries (Greece, Italy, Spain) were center stage in both
the Eurozone debt crisis and the European migrant crisis,
while the eastern European countries (Hungary, Czech
Republic) have featured prominently in the news due to
their opposition to accepting migrants. Hence it would
be particularly interesting to assess the political skew of
journalists in these countries. And indeed, it is possible
that the political skew we detected would have been lower
if data from more countries had been available. We hope
that the present study will inspire further research on
journalists’ voting behavior, and reveal data sources that
we missed.
Fourth, voting patterns reflect voters’ political
preferences, but they are by no means a perfect gauge of
such preferences. While we were able to study several
aspects of journalists’ political attitudes indirectly through
reported voting behaviour and vote intentions, some
dimensions of their political attitudes were not covered
at all, making it difficult to say precisely which way
journalists lean on the relevant issues.
4.1 Journalists and Academics
Notwithstanding the limitations outlined above, we
believe the empirical results we have presented are
relevant to understanding the general flow of information
within society. To explain why, it is necessary to expand
our discussion to the political leanings of academics. Like
journalists, academics mostly produce words for a living.
Whereas journalists write news articles about current
events, academics write reports about current research.
Over the last few years, there has been a surge of interest
in the political leanings of academics [24,35-38]. The results
from this literature mirror those seen for journalists, but
generally reveal even larger skews towards left-wing
parties and political attitudes. For example, Langbert
(2018) [24] found that the ratio of Democrat to Republican
professors was 17.4:1 in History, 43.8:1 in Sociology and
133:1 in Anthropology.
The effects of political skews in journalism and
academia may exert synergistic effects insofar as many
news stories relate to findings from new studies published
by academics. As a result, the process we described in
the introduction (Figure 1) may lead to biased coverage
of new research findings. One would expect journalists to preferentially report findings that comport with their own political views, and to interview sources who they suspect will give a favourable interpretation of the importance or validity of the findings. This tendency may interact with
in the newsroom means that anyone with a story that conservatives might prefer to see in print has a designated go-to person. Although this cannot by itself counteract the overall slant of a newsroom, it can at least ensure that every important "conservative" story has a chance of being told. Depending on how hiring usually works, this kind of low-level affirmative action (keep at least one right-winger in the newsroom at left-wing newspapers, and vice versa at right-wing newspapers) might be a fruitful option for newspapers to consider.
Rather than trying to alter the ideological composition
of the newsroom, one could attempt to forestall biases that
may arise during the journalistic production process itself.
In science, many such proposals have been made [35], and
some have been partially implemented (e.g., registered
reports). For example, one way bias distorts science is
through what is termed researcher degrees of freedom, i.e.,
researchers can analyze their data in many different ways,
and then only report the analyses that produced results
favourable to their hypothesis [47,48]. Those results are then
published, while the results from the alternative analyses,
which yielded null or perhaps even negative results,
remain unpublished. Because of the low evidentiary
standards in social science, one can almost always find
something in a given dataset that could be construed as
supporting a particular hypothesis.11 When the hypothesis
under consideration has some relevance to public policy,
as is often the case, this tendency may give rise to a
general slant in the research findings.
Policies aiming to reduce bias in the journalistic
4.3 Proposals for Reducing Political Bias
Proposals for increasing political diversity among
academics have focused on raising awareness and
creating a more hospitable environment for dissidents
[35]. We are not aware of any general attempts to increase
political diversity among journalists, presumably because
people prefer to choose from among an assortment of
media outlets, each with a relatively obvious slant. One
exception is the proposal mentioned by Groseclose (2012)
[7]. Specifically, the Minneapolis Star Tribune ran an
experiment where they hired a self-identified conservative
to increase viewpoint diversity in their newsroom. As
explained by Lambert (2007) [46],
When the tinny tinkle of “Joy to the World, the
Lord Is Come” begins playing on the cell phone,
everyone in range in the Star Tribune newsroom
knows who’s getting a call. It is Katherine Kersten,
the paper’s unapologetically religious and fiercely
conservative metro columnist.
Since May 2005, the Star Tribune has been
engaged in what its top editor freely describes as “an
experiment.” The test has Katherine Kersten, a fty-
ve-year-old former banker, and think-tank denizen,
now an opinion writer, playing the role of an alien
element injected into a tradition-bound newspaper
culture.
Long battered by conservative critics as the “Red
Star” for its alleged knee-jerk liberalism … the
Star Tribune decided it had to answer. For the last
twenty months, Kersten has been an one-woman
solution, applying a decidedly different, and perhaps
revolutionary, face to the role of big-city reporter
and metro columnist.
The presence of a single self-identified conservative
Figure 11. Flowchart of the scientic process with political bias of journalists and scientists.10
10 We acknowledge that a similar version of this chart was originally
created by J.P. de Ruiter.
11 The reader can see this for himself by playing with the interactive
p-hacking simulator at https://vethirtyeight.com/features/science-isnt-
broken/.
Supplementary Material and Acknowledgments

Supplementary materials, including tables, code, high-quality figures and data, can be found at https://osf.io/6uvnu/.

We would like to thank the numerous people who helped us gather data for the present study. Some of those who should have been mentioned by name declined due to fear of political retaliation by journalists, academics or both, underlining some of the points made in this article.
References
[1] Farhi, P. (Apr. 27, 2012) How biased are the media, really? Washington Post.
[2] Eberl, J.-M. (2018) Lying press: Three levels of perceived media bias and their relationship with political preferences [J]. Communications, Vol. 0, No. 0, Art. no. 0. 10.1515/commun-2018-0002.
[3] Stern, K. (Oct. 21, 2017) Former NPR CEO opens up about liberal media bias. New York Post.
[4] Gainor, D. (Apr. 21, 2018) Media war on Trump continues around the clock, and other proof of media bias. Fox News.
[5] Alterman, E. (2003) What liberal media? The truth about bias and the news [M].
[6] Hassell, H. J. G., Holbein, J. B., and Miles, M. R. (Apr. 2020) There is no liberal media bias in which news stories political journalists choose to cover [J]. Science Advances, Vol. 6, No. 14, eaay9344. 10.1126/sciadv.aay9344.
[7] Groseclose, T. (2012) Left Turn: How Liberal Media Bias Distorts the American Mind [M].
[8] Tsfati, Y. and Ariely, G. (Aug. 2014) Individual and Contextual Correlates of Trust in Media Across 44 Countries [J]. Communication Research, Vol. 41, No. 6, Art. no. 6. 10.1177/0093650213485972.
[9] Miljan, L. A. (2000) The backgrounds, beliefs, and reporting practices of Canadian journalists [D].
[10] Boatsinker, C. (Mar. 11, 2018) Bonniers terrorkampagne mod alternative medier [Bonnier's terror campaign against alternative media]. Dagens Blæser.
[11] MacDonald, K. (2002) A People That Shall Dwell Alone: Judaism as a Group Evolutionary Strategy, with Diaspora Peoples [M]. First edition.
[12] Maranto, R., Hess, F., Redding, R., Agresto, J., Balch, S. H., Brown, H., et al. (2009) The Politically Correct University: Problems, Scope, and Reforms [M].
[13] Eberl, J.-M., Boomgaarden, H. G., and Wagner, M. (Dec. 2017) One Bias Fits All? Three Types of Media Bias and Their Effects on Party Preferences [J]. Communication Research, Vol. 44, No. 8, 1125–1148. 10.1177/0093650215614364.
[14] Hamborg, F., Donnay, K., and Gipp, B. (Dec. 2019) Automated identification of media bias in news articles: an interdisciplinary literature review [J]. International Journal on Digital Libraries, Vol. 20, No. 4, Art. no. 4. 10.1007/s00799-018-0261-y.
[15] Groseclose, T. and Milyo, J. (Nov. 2005) A Measure of Media Bias [J]. The Quarterly Journal of Economics, Vol. 120, No. 4, Art. no. 4. 10.1162/003355305775097542.
[16] Gentzkow, M. and Shapiro, J. M. (2010) What Drives Media Slant? Evidence From U.S. Daily Newspapers [J]. Econometrica, Vol. 78, No. 1, Art. no. 1. 10.3982/ECTA7195.
[17] Budak, C., Goel, S., and Rao, J. M. (Jan. 2016) Fair and Balanced? Quantifying Media Bias through Crowdsourced Content Analysis [J]. Public Opinion Quarterly, Vol. 80, No. S1, Art. no. S1. 10.1093/poq/nfw007.
[18] Patterson, T. E. and Donsbach, W. (Oct. 1996) News decisions: Journalists as partisan actors [J]. Political Communication, Vol. 13, No. 4, 455–468. 10.1080/10584609.1996.9963131.
[19] Feldman, S. and Johnston, C. (2014) Understanding the Determinants of Political Ideology: Implications of Structural Complexity [J]. Political Psychology, Vol. 35, No. 3, Art. no. 3. 10.1111/pops.12055.
[20] Kirkegaard, E. O. W., Bjerrekær, J. D., and Carl, N. (Feb. 2017) Cognitive ability and political preferences in Denmark [J]. Open Quantitative Sociology & Political Science, Vol. 1, No. 1, Art. no. 1.
[21] Malka, A., Lelkes, Y., and Soto, C. J. (Jul. 2019) Are Cultural and Economic Conservatism Positively Correlated? A Large-Scale Cross-National Test [J]. British Journal of Political Science, Vol. 49, No. 3, Art. no. 3. 10.1017/S0007123417000072.
[22] D’Alessio, D. and Allen, M. (Dec. 2000) Media bias in presidential elections: a meta-analysis [J]. Journal of Communication, Vol. 50, No. 4, Art. no. 4. 10.1111/j.1460-2466.2000.tb02866.x.
[23] Peng, Y. (Oct. 2018) Same Candidates, Different Faces: Uncovering Media Bias in Visual Portrayals of Presidential Candidates with Computer Vision [J]. Journal of Communication, Vol. 68, No. 5, Art. no. 5. 10.1093/joc/jqy041.
[24] Langbert, M. (Apr. 24, 2018) Homogeneous: The Political Affiliations of Elite Liberal Arts College Faculty.
[25] Agnoli, F., Wicherts, J. M., Veldkamp, C. L. S., Albiero, P., and Cubelli, R. (Mar. 2017) Questionable research practices among Italian research psychologists [J]. PLOS ONE, Vol. 12, No. 3, Art. no. 3. 10.1371/journal.pone.0172792.
[26] Gervais, W. M. and Najle, M. B. (Jan. 2018) How Many Atheists Are There? [J]. Social Psychological and Personality Science, Vol. 9, No. 1, Art. no. 1. 10.1177/1948550617707015.
[27] Moore, D. A. (2016) Preregister if you want to [J]. American Psychologist, Vol. 71, No. 3, Art. no. 3. 10.1037/a0040195.
[28] Białowolski, P., Kuszewski, T., and Witkowski, B. (Feb. 2014) Bayesian averaging of classical estimates in forecasting macroeconomic indicators with application of business survey data [J]. Empirica, Vol. 41, No. 1, 53–68. 10.1007/s10663-013-9227-x.
[29] Jones, G. and Schneider, W. J. (Mar. 2006) Intelligence, Human Capital, and Economic Growth: A Bayesian Averaging of Classical Estimates (BACE) Approach [J]. Journal of Economic Growth, Vol. 11, No. 1, Art. no. 1. 10.1007/s10887-006-7407-2.
[30] Sala-i-Martin, X., Doppelhofer, G., and Miller, R. I. (2004) Determinants of Long-Term Growth: A Bayesian Averaging of Classical Estimates (BACE) Approach [J]. American Economic Review, Vol. 94, No. 4, Art. no. 4.
[31] Simo-Kengne, B. D. (2016) What explains the recent growth performance in Sub-Saharan Africa? Results from a Bayesian Averaging of Classical Estimates (BACE) Approach. Working Papers, No. 578.
[32] James, G., Witten, D., Hastie, T., and Tibshirani, R., Eds. (2013) An introduction to statistical learning: with applications in R [M].
[33] Vaitsiakhovich, T., Drichel, D., Herold, C., Lacour, A., and Becker, T. (Jan. 2015) METAINTER: meta-analysis of multiple regression models in genome-wide association studies [J]. Bioinformatics, Vol. 31, No. 2, Art. no. 2. 10.1093/bioinformatics/btu629.
[34] Feldkircher, M. and Zeugner, S. (2015) BMS: Bayesian Model Averaging Library [M].
[35] Duarte, J. L., Crawford, J. T., Stern, C., Haidt, J., Jussim, L., and Tetlock, P. E. (Jan. 2015) Political diversity will improve social psychological science [J]. Behavioral and Brain Sciences, Vol. 38. 10.1017/S0140525X14000430.
[36] Zigerell, L. J. (Jan. 2017) Reducing Political Bias in Political Science Estimates [J]. PS: Political Science & Politics, Vol. 50, No. 1, Art. no. 1. 10.1017/S1049096516002389.
[37] Carl, N. (Jan. 2018) The Political Attitudes of British Academics [J]. Open Quantitative Sociology & Political Science, Vol. 1, No. 1, Art. no. 1.
[38] van de Werfhorst, H. G. (Jan. 2020) Are universities left-wing bastions? The political orientation of professors, professionals, and managers in Europe [J]. The British Journal of Sociology, Vol. 71, No. 1, 47–73. 10.1111/1468-4446.12716.
[39] Chapman, S., Nguyen, T. N., and White, C. (Feb. 2007) Press-released papers are more downloaded and cited [J]. Tobacco Control, Vol. 16, No. 1, 71. 10.1136/tc.2006.019034.
[40] Liang, X., Su, L. Y.-F., Yeo, S. K., Scheufele, D. A., Brossard, D., Xenos, M., et al. (Dec. 2014) Building Buzz: (Scientists) Communicating Science in New Media Environments [J]. Journalism & Mass Communication Quarterly, Vol. 91, No. 4, 772–791. 10.1177/1077699014550092.
[41] Manisha, M. and Mahesh, G. (2015) Citation pattern of newsworthy research articles [J]. Journal of Scientometric Research, Vol. 4, No. 1, 42. 10.4103/2320-0057.156022.
[42] Shafer, J. and Doherty, T. (Apr. 25, 2017) The Media Bubble Is Real — And Worse Than You Think. POLITICO Magazine.
[43] Silver, N. (Mar. 10, 2017) There Really Was A Liberal Media Bubble. FiveThirtyEight.
[44] Asp, K. (2012) Journalistkårens partisympatier. Svenska journalister 1989–2011 [The journalist corps' party sympathies: Swedish journalists 1989–2011].
[45] Willnat, L., Weaver, D. H., and Wilhoit, G. C. (Feb. 2019) The American Journalist in the Digital Age: How journalists and the public think about journalism in the United States [J]. Journalism Studies, Vol. 20, No. 3, 423–441. 10.1080/1461670X.2017.1387071.
[46] Lambert, B. (Jan. 29, 2007) Katherine Kersten: The One-Woman Solution. The Rake.
[47] Simmons, J. P., Nelson, L. D., and Simonsohn, U. (Nov. 2011) False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant [J]. Psychological Science, Vol. 22, No. 11, Art. no. 11. 10.1177/0956797611417632.
[48] Wicherts, J. M., Veldkamp, C. L. S., Augusteijn, H. E. M., Bakker, M., van Aert, R. C. M., and van Assen, M. A. L. M. (Nov. 2016) Degrees of Freedom in Planning, Running, Analyzing, and Reporting Psychological Studies: A Checklist to Avoid p-Hacking [J]. Frontiers in Psychology, Vol. 7. 10.3389/fpsyg.2016.01832.
[49] Miller, P. (2013) Media law for producers [M].