Article

Automated Media: An Institutional Theory Perspective on Algorithmic Media Production and Consumption


Abstract

Communication scholars have recently begun to recognize and investigate the importance of algorithms to a wide range of processes related to the production and consumption of media content. There have been few efforts thus far, though, to connect these developments to potentially relevant bodies of existing theory and research. This article seeks to address this gap by exploring institutional theory as a potentially useful analytical framework for continued inquiry into the role of algorithms in the operation of media systems, and by offering suggestions for ways in which an institutional analytical frame can be extended into algorithmic contexts.


... It looks at the trend towards using algorithms in China on the basis of Chinese studies and practices, providing an alternative to Western discourse. Studies on the rise of American algorithmic media focus mainly on two trajectories: the first is Professionally Generated Content (PGC), as exemplified by Netflix (Napoli, 2014); the second is a gradual transition in social media platforms, where algorithms are slowly reshaping both the business model and participatory media culture (Couldry and Mejias, 2019; Moore, 2018; Noble, 2018; Russell, 2019). This trajectory has confined many significant discussions to theoretical debate, especially the philosophical explanation for the changes to media culture. ...
... In this sense, Douyin dovetails with current observations of the general trend towards algorithmic media. First, algorithms create a decision-output system based on the analysis of enormous quantities of data gathered from a media environment of extreme interactivity (Napoli, 2014). The audience's engagement with the media leaves a growing array of capturable, quantifiable traces that make the algorithm increasingly accurate, so that it continuously recommends ever more attention-grabbing content. ...
... For content producers, the data extracted from the audience can also actively function as a demand predictor and a content creator (Napoli, 2014). Given the participatory nature of Douyin, data function in two specific ways for producers. ...
Article
Full-text available
Douyin, also known as the Chinese version of TikTok, is currently the most valuable digital advertising platform in China. One of the most significant features of this short-video platform is its heavy reliance on algorithmic production and distribution of media. In this emergent configuration, algorithms and data shape the production and circulation of media beyond social networks. Such a system develops by meshing grassroots and professionally generated content, leading the audience to engage in the production of commercial content for profit. My essay explores the political context and economic logic that underpin these developments. It draws specifically on official reports from Douyin, as well as interviews with users, including individual users and Multi-Channel Network (MCN) employees. This essay proposes the idea of the ‘data attraction model’ based on an investigation of the emergence of new forms of algorithmic production and distribution. It argues that the data attraction model is characterised by an extreme logic of flexible accumulation, which is radically transforming the content production of participatory media in China.
... For the search, seven main keywords were chosen (Table 1), referring to the context of AI and weighing those that could relate to journalism in light of the literature (Anderson, 2012; Diakopoulos, 2019; Latar, 2018; Lima Junior, 2011; Linden, 2018; Marconi, 2020; Napoli, 2014; Santos, 2016). ...
... Further on, the discussion of AI's presence in the journalistic context becomes more evident with the rise of big data and an algorithmic turn in news production (Linden, 2018; Napoli, 2014). Alongside this, apprehension about fake news, post-truth, and the (dis)credibility of journalism prompts discussion of recent journalistic practices and premises. ...
... The theme of algorithms and the future of news production, which treats production as automated, has been researched academically by several authors (Anderson, 2012; Diakopoulos, 2019; Latar, 2018; Lima Junior, 2011; Linden, 2018; Marconi, 2020; Napoli, 2014; Santos, 2016). Moreover, given the advances of AI in newsrooms, there is some discussion about safeguarding democracy in news investigation processes (Latar, 2018). ...
Article
Full-text available
This work presents a state of the art of Brazilian research on journalism and artificial intelligence (AI). It surveys articles published in Brazilian journals and in the national congresses SBPJor, Compós, and Intercom Nacional between 2010 and 2020, with the objective of understanding the main discussions in these works. The corpus consists of 19 articles published in journals and 27 articles published in congress proceedings. The methodological procedures adopted are a literature review and a quantitative analysis using a set of software tools. From a reading of the titles, abstracts, and keywords, we performed a qualitative inferential analysis of word co-occurrences and connections with the Iramuteq software. The results show that the journals discuss tools and applied research with an emphasis on data and news, whereas the congresses are dominated by discussion of the algorithm and its implications for the journalist's work.
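The abstract above mentions an analysis of co-occurrences and connections between words performed with the Iramuteq software. As a rough illustration of the underlying idea only (not Iramuteq's actual algorithm), counting how often word pairs share a document can be sketched in a few lines of Python:

```python
from collections import Counter
from itertools import combinations

def cooccurrences(documents):
    """Count how often each unordered word pair appears together in the
    same document (here, a title/abstract/keyword string)."""
    counts = Counter()
    for doc in documents:
        # Deduplicate and sort so each pair is counted once per document,
        # always in a canonical (alphabetical) order.
        words = sorted(set(doc.lower().split()))
        counts.update(combinations(words, 2))
    return counts

# Toy corpus standing in for titles/abstracts/keywords.
docs = [
    "algorithm journalism work",
    "data news tools journalism",
    "algorithm journalism implications",
]
counts = cooccurrences(docs)
# ("algorithm", "journalism") co-occurs in two of the three documents.
```

Real tools layer lemmatization, stop-word removal, and graph layouts on top of counts like these, but the pair-frequency table is the common starting point.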
... At the same time, AJ can be defined from an organizational viewpoint, because numerous institutional values (e.g. cultural and political values associated with news organizations) can be unconsciously embedded into the algorithms that generate news stories (Napoli, 2014). Based on these two different viewpoints, journalism scholars have examined how readers evaluate algorithmically assembled news differently from human-written news (Jang et al., 2021a; Liu and Wei, 2019), investigated how journalists perceive AJ as a reliable form of journalism practice (Kim and Kim, 2018) and how AJ can be viewed from an institutional viewpoint (Napoli, 2014), and examined legal issues associated with AJ. As discussed, AJ has been examined from a variety of perspectives. ...
Article
Drawing on propositions from the HAII-TIME (Human–artificial intelligence [AI] Interaction and the Theory of Interactive Media Effects) and Persuasion Knowledge Model, this study examines how knowledge of automated journalism (AJ) moderates the evaluation of algorithmically generated news. Experiment 1 demonstrates the utility of process-related knowledge in user evaluations of agency: individuals with little knowledge of AJ prefer attributions of human authorship over news stories attributed to algorithms, whereas individuals with high AJ knowledge have an equal or stronger preference for news that is described as algorithmically generated. Experiment 2 conditions these effects to show how prior characterizations of AJ—whether more machine- or human-like—shape evaluations of algorithmically generated news contingent on user age and knowledge level. Effects are found for differing age groups at lower levels of AJ knowledge, where machine-like characterizations enhance evaluations of algorithmically generated news for younger users but ascribing human-like traits enhances evaluations of automated news for older users.
... This shift, further accelerated by the availability of metrics-oriented insight into user demand, 'can change newsroom cultures and affect how journalists understand good journalism' (Møller, 2022b, p. 7; Carlson, 2018). As a result, technologies such as NRS may influence the amount, shape, and quality of political news provided by the media (Davis, 2013; Napoli, 2014). ...
... Another potentially fruitful framework might be institutional theory (Napoli, 2014; Scott et al., 1994) or, more prominently, new institutionalism (Powell & DiMaggio, 2012; Ryfe, 2006), as it can shed light on the processes of legitimisation, imitation, and homogenisation of NRS usage within and across organisations (Caplan & Boyd, 2018). Studies have already discovered imitation processes in the implementation of data-driven solutions, such as audience analytics across different types of media organisations (e.g. ...
Article
Full-text available
News recommender systems (NRS) are becoming a ubiquitous part of the digital media landscape. Particularly in the realm of political news, the adoption of NRS can significantly impact journalistic distribution, in turn affecting journalistic work practices and news consumption. Thus, NRS touch both the supply and demand of political news. In recent years, there has been a strong increase in research on NRS. Yet, the field remains dispersed across supply and demand research perspectives. Therefore, the contribution of this programmatic research review is threefold. First, we conduct a scoping study to review scholarly work on the journalistic supply and user demand sides. Second, we identify underexplored areas. Finally, we advance five recommendations for future research from a political communication perspective.
... I instead posit that such a recognition provides an interdisciplinary vision in which ethical AI/ML in its abstraction can be used as a heuristic for creating windows through which social problems can be inscribed into automated systems, as opposed to providing a fully packaged automated solution. I further argue that an institutional approach to algorithms (Napoli, 2014) departs from the individualist-regulatory approach that has dominated discussions of ethical AI/ML in terms of governance. In this way, intraorganizational factors, interorganizational cooperation, and information sharing become important, alluding to new demands for responsible AI/ML in emergent attempts to govern algorithm-based automated platforms. ...
... Here, it is important to note that the contribution of an algorithm to a fairness goal should be evaluated relative to the resources that institutions have to achieve a broader goal. I explore the importance of this matter in relation to the proposed institutional turn (Napoli, 2014) in algorithm assessment in the conclusion of this chapter. ...
Chapter
Full-text available
While the discussion about ethical AI centers on conflicts between automated systems and individual human rights, those systems are often adopted to aid institutions rather than individuals. Starting from this observation, this chapter delineates the potential conflicts between institutions and ethical algorithms, with particular focus on two major attempts by the ML community—fair ML and interpretable ML—to make algorithms more responsible. Computer scientists, legal scholars, philosophers, and social scientists have presented both immanent and external critiques regarding the formalization of responsible AI/ML. Such critiques have been based on the computational or mathematical complexity of creating fair, transparent algorithms, as well as on the argument that computational solutions cannot accurately account for the entirety of social problems and could potentially worsen them. As an alternative, this chapter suggests an institutional perspective on responsible AI, relevant to considerations of polycentric governance over the sociotechnical platforms in which automated decision systems are embedded, where cooperation among users, civil society, regulatory entities, and related firms is required to secure systems' regularity and integrity.
... In the conceptualization of new media, Manovich (2003) lists eight definitions; a page of search results as a technological object fits them all (Metaxa et al., 2019). Search results are generated by algorithms, which are discussed in the cultural studies and new media literature from a variety of perspectives with the aim of better understanding their relevance, their implications for ethics, and their impact on our lives and institutions (Kitchin, 2017; Napoli, 2014; Paßmann and Boersma, 2017). While algorithms are often presented simply as sets of rules, it has been highlighted that these rules are shaped by decisions, politics, and ideologies (Gillespie et al., 2013; Kitchin, 2017), and that algorithms replicate existing biases in the annotated datasets used in their training (Hajian et al., 2016). ...
... Overall, our findings contribute to research on the accountability and utility of algorithms (Kitchin, 2017; Napoli, 2014; Schäfer and Van Es, 2017). Critically, independent audits have been argued to improve both (Bandy, 2021; Kitchin, 2017; Raji et al., 2020), as providing evidence of biases in search engine image results can serve as a stepping stone for search engine providers to deliver higher-quality results and fairer representation in their outputs (Bilić, 2016; Raji and Buolamwini, 2019; Seufert, 2014; Van Couvering, 2007). ...
Article
Full-text available
Implicit and explicit gender biases in media representations of individuals have long existed. Women are less likely to be represented in gender-neutral media content (representation bias), and their face-to-body ratio in images is often lower (face-ism bias). In this article, we look at representativeness and face-ism in search engine image results. We systematically queried four search engines (Google, Bing, Baidu, Yandex) from three locations, using two browsers and in two waves, with gender-neutral (person, intelligent person) and gendered (woman, intelligent woman, man, intelligent man) terminology, accessing the top 100 image results. We employed automatic identification for the individual’s gender expression (female/male) and the calculation of the face-to-body ratio of individuals depicted. We find that, as in other forms of media, search engine images perpetuate biases to the detriment of women, confirming the existence of the representation and face-ism biases. In-depth algorithmic debiasing with a specific focus on gender bias is overdue.
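The face-ism measure in the abstract above is the ratio of the face's height to the height of the full depicted figure. A minimal sketch of that arithmetic, assuming face and figure bounding boxes have already been obtained from some detection step (the detectors and the authors' exact operationalization are not specified here):

```python
def face_ism_index(face_box, body_box):
    """Face-ism index: face height divided by total figure height.

    Boxes are (top, bottom) pixel coordinates. Values near 1.0 mean the
    image is mostly face; lower values mean more of the body is shown.
    """
    face_h = face_box[1] - face_box[0]
    body_h = body_box[1] - body_box[0]
    if body_h <= 0:
        raise ValueError("body box must have positive height")
    return face_h / body_h

def mean_index(pairs):
    """Average face-ism index over (face_box, body_box) pairs,
    e.g. over all images returned for one query."""
    return sum(face_ism_index(f, b) for f, b in pairs) / len(pairs)

# A face spanning 50 px of a 200 px figure gives an index of 0.25.
idx = face_ism_index((0, 50), (0, 200))
```

Comparing mean indices across query groups (e.g. images returned for "woman" vs. "man") is then a simple difference of averages; the hard part of such audits is the querying and detection pipeline, not this ratio.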
... The logic of these algorithmic decisions is often obscure, and curation mechanisms are in constant flux (Karpf, 2012), which can lead to unexpected or even undesirable outcomes (Napoli, 2014). These effects include societal fragmentation caused by the formation of isolated ideological communities (Spohr, 2017; Zollo & Quattrociocchi, 2018) as well as manipulation of publics and distribution of false claims (Badawy, Addawood, Lerman, & Ferrara, 2019; Keller & Klinger, 2019; Shao, Ciampaglia, Varol, Flammini, & Menczer, 2017; Zerback, Töpfl, & Knöpfle, 2020). ...
Article
Full-text available
Previous research highlighted how algorithms on social media platforms can be abused to disseminate disinformation. However, less work has been devoted to understanding the interplay between Facebook news curation mechanisms and propaganda content. To address this gap, we analyze the activities of RT (formerly, Russia Today) on Facebook during the 2020 U.S. presidential election. We use agent-based algorithmic auditing and frame analysis to examine what content RT published on Facebook and how it was algorithmically curated in Facebook News Feeds and Search Results. We find that RT’s strategic framing included the promotion of anti-Biden leaning content, with an emphasis on antiestablishment narratives. However, due to algorithmic factors on Facebook, individual agents were exposed to eclectic RT content without an overarching narrative. Our findings contribute to the debate on computational propaganda by highlighting the ambiguous relationship between government-sponsored media and Facebook algorithmic curation, which may decrease the exposure of users to propaganda and at the same time increase confusion.
... Based on similar features and similar connections among users, the Social Graph combines information about what those users want, think, and do, and on that basis makes predictions about other users' personal interests and behaviour patterns. Facebook obtains user data from the cookies users leave on their computers while browsing the internet, and also when users log in to other digital platforms with their Facebook account (Vaidhyanathan, 2018, p. 58; Martínez, 2016, pp. 6-8; Napoli, 2014). In particular, details of how the values used at the scoring stage are structured are not shared with the public. ...
Conference Paper
Full-text available
Facebook has, in a remarkably short time, become an important communication medium for organizations. Positioning its billions of users as an advertising audience, Facebook needs to maintain and even increase the time users spend on the platform in order to sustain its revenue. To meet this need, Facebook analyzes its users' data to learn about them and uses a content personalization strategy effectively on the basis of this information. Through its proprietary content personalization algorithm, Facebook selects and presents the content each user is most likely to be interested in, seeking to increase the time spent on the platform. In recent years, the negative as well as the positive aspects of this strategy for users have begun to be debated. However, knowledge production on this topic in Turkey appears limited. This study aims to reveal user attitudes toward Facebook's content personalization algorithm in their affective, cognitive, and behavioral dimensions. After testing the reliability and validity of data obtained from 604 participants via an online survey, the data were processed into findings and the research hypotheses were tested. The research shows that, contrary to expectations, the overall attitude of Facebook users in Turkey toward the algorithm is negative. While the cognitive and behavioral dimensions of attitude were generally positive, the affective dimension was predominantly negative, which made the overall attitude negative.
... In the democratic sphere, the potential of big data and its various digital techniques and technologies of application, such as data mining, large databases, or data analytics tools, can help to craft closer, more effective, and more efficient public policies through better knowledge of the preferences, interests, and opinions of a digitally hyperconnected citizenry; to develop more sustainable democratic systems through a greater and faster capacity to detect behavioural patterns and hidden disruptive anomalies; to design and deploy technologies capable of detecting corruption, bribery, or tax evasion more precisely, as well as more robust, traceable, and predictive systems of public safety and health; to set in motion faster and more efficient processes for managing underlying conflict by substantially increasing the predictability of citizens' expectations and demands; and to improve the recruitment and/or retention of the electorate through personalized advertising campaigns, among many other things (Napoli, 2014). However, as will be shown below, these expectations of improvement have soon turned into risks and real dangers for democracy. ...
Article
Full-text available
This article confronts the concept of public opinion with the reality and expectations of a digitized society, in order to analyze whether the current algorithmic colonization demands a new structural transformation of public opinion or rather the retirement of the concept. Massive data and metadata have become a double-edged sword for a digitally hyperconnected democratic society. On the one hand, the incredible potential of big data and its various techniques and technologies for exploiting data and metadata make it a product coveted by the system of institutions that make up both the state and civil society; on the other, the severe negative impacts that its instrumental and irresponsible use is producing, and may yet produce, make big data a controversial and heavily criticized tool that distances us from any attempt to build digital citizenship. Although algorithmic democracy does not rest on public opinion alone, the aim is to show the incompatibility between artificial public opinion and democracy. Our guiding thread is the Habermasian concept of public opinion, since it is precisely from the strength of civil society, through the design within it of spaces for participation, that we can draw the potential needed to confront the current algorithmic colonization and to recover the autonomous, critical deliberation without which there is no public opinion and, therefore, no democracy.
... This work in the communications literature finds a counterpart in the work on media in economics. We refer to DellaVigna and Gentzkow (2010), Napoli (2014), DellaVigna and La Ferrara (2015), Strömberg (2015), and Enikolopov and Petrova (2015) for reviews of the literature. Our model is also related to a number of studies that focus on the specific role of attention (Enke and Zimmermann, 2019; Hartzmark et al., 2019; Enke, 2020) and the importance of prior experiences and emotions in financial decision-making (Kuhnen and Knutson, 2011; Rudorf et al., 2014; Kuhnen, 2015). ...
... At the heart of the academic assessment process is the assessment of the teaching quality of individual courses/faculties by the students taking the course. Under the traditional model, the evaluation of teachers' educational ability mainly relies on teaching evaluation forms, textbook review, classroom teaching observation, and discussion of evaluation results [31, 32]. To arrive at an evaluation result, some attributes (variables/indicators) inevitably need to be assigned weights. ...
Article
Full-text available
To address the shortcoming that, in traditional evaluations of teachers' comprehensive literacy, the weight of each index is easily affected by human influence, this paper constructs an evaluation model for the comprehensive educational literacy of environmental-protection teachers in a big data environment, on the basis of index-coefficient analysis, feature reorganization, and feature analysis. A weight distribution scheme for the evaluation indicators of teachers' comprehensive literacy is proposed based on a hierarchical Bayesian beta regression model, so as to improve the rationality of the indicator distribution and make the prediction results more accurate. Simulation results further verify the superiority of the proposed model in improving the evaluation of environmental-protection teachers' comprehensive quality.
... In addition to this rule of participation inequality, studies have shown that algorithmic recommendations and social information tend to enhance inequality by increasing exposure to what is already popular (Bucher, 2017; Carlson, 2017; Napoli, 2014; Salganik et al., 2006; Nelson & Taneja, 2018). Thus, information from popular headlines is likely to be pushed to audiences more often and more forcefully than information from other sources (Nelson & Taneja, 2018). ...
Technical Report
Full-text available
This report documents the outcomes of an analysis of user behaviour on social media regarding the approval, assessment, and evaluation of information and information sources, feeding into the further development of the EUNOMIA toolkit. Both individual and collective behaviour was analysed. On the one hand, there are factors that cause and explain individual behaviour, such as cognitive biases and psychological effects that influence a single person's behaviour. An example is the so-called truth effect, i.e. the fact that repetition and familiarity with content make it more believable. On the other hand, group effects and social norms additionally influence the individual's behaviour. Studies have shown that we are more likely to believe a piece of information if our social circles also accept it (Lewandowsky et al. 2012; Eckles & Bakshy 2017; Lazer et al. 2017; Sloman & Fernbach 2018; Karduni 2019). The task of user behaviour analysis included (i) a literature review; (ii) a workshop with end users and experts; and (iii) an online survey. We identified explanations for collective and individual user behaviour in assessing, sharing, and distributing (mis)information, building on (i) the theory of cognitive dissonance and the theory of selective exposure; (ii) the third-person effect; (iii) the concept of opinion leadership; (iv) the concept of information gatekeeping; (v) the truth effect; (vi) explanations for the persistence of misinformation; and (vii) audience behaviour. These insights explain, for example, how users on social media tend to surround themselves with information that confirms their own interests, values, and beliefs in so-called ‘filter bubbles’ or ‘echo chambers’. Furthermore, we were able to identify strategies to influence or reward preferable behaviour to avoid the spread of misinformation in the form of nudges, building on certain heuristics (i.e., mental shortcuts) and psychological or social effects.
We also identified approaches for the correction of misinformation (e.g., providing explanations, targeting opinion leaders), as well as strategies to avoid their spread (e.g., triggering a thinking process before information is read).
... In the researchers' view, such techniques have been scarcely used, though commonly noted, in studies of international politics. This has left a field of study yet to be addressed, considering that communicative processes are increasingly dictated algorithmically (Napoli, 2014). Moreover, we are in a process in which the study of international communication is consolidating as a discipline to be reckoned with in the explanation of social phenomena (Manfredi-Sánchez, 2020). ...
... Algorithms do not simply accelerate commerce, journalism, finance, or other domains: they are a discourse and a culture of knowledge, at once social and technological, structuring how information is produced, foregrounded, interpreted, and regarded (Napoli, 2014). They present a potential for grammatization whose rise resembles what Auroux (1995) and Derrida observed in the phenomena of gramma-latinization. Potentially, all cultures are being passed through the sieve of devices that can render them interpretable on the basis of a few elementary models. ...
Article
Full-text available
This article analyzes Twitter messages, in Spanish and English, posted during the most recent conflict between Armenia and Azerbaijan over the historical Nagorno-Karabakh region. This conflict, along with the issues of Abkhazia and South Ossetia, is one of the most important crises in the South Caucasus. Two algorithms designed for processing large volumes of information have been used, namely LDA (unsupervised) and SVM (supervised). Based on framing theory, we conclude that both audiences mostly align with the Armenian stance. The article also shows that the messages focus on issues other than the war itself, such as Turkey's role, the Israeli government's responsibility for selling weapons to Azerbaijan's army, or the religious orientation of both countries. The results show that humanitarian explanations are rarely used by either audience, whose messages focus instead on elements of conflict. In short, this work not only seeks to identify the elements of the public debate around the conflict but also highlights the potential of computer science techniques in political communication studies.
... Moreover, algorithms not only help us find and curate information, but also engage in "producing and certifying knowledge" (Gillespie and Boczkowski, 2013). As algorithms possess such "governmental power" and "gatekeeping" functions, their outputs have political and cultural ramifications (Napoli, 2014). A recent study has found that Colossal Clean Crawled Corpus, one of the largest NLP datasets, has excluded documents related to racial and gender minorities, thus exacerbating inequalities and stigmatization of marginalized communities (Dodge et al., 2021). ...
Preprint
Full-text available
This paper presents exploratory work on whether and to what extent biases against queer and trans people are encoded in large language models (LLMs) such as BERT. We also propose a method for reducing these biases in downstream tasks: finetuning the models on data written by and/or about queer people. To measure anti-queer bias, we introduce a new benchmark dataset, WinoQueer, modeled after other bias-detection benchmarks but addressing homophobic and transphobic biases. We found that BERT shows significant homophobic bias, but this bias can be mostly mitigated by finetuning BERT on a natural language corpus written by members of the LGBTQ+ community.
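Benchmarks of the kind described above are typically built from paired sentences that differ only in the group mentioned, so that a model's relative preference within each pair can be scored. The following is a minimal sketch of such pair construction only; the template and terms are illustrative placeholders, not actual WinoQueer items, and the model-scoring step (comparing the likelihood a language model assigns to each sentence of a pair) is omitted:

```python
# Illustrative template; a real benchmark uses many templates and
# community-sourced attribute terms.
TEMPLATE = "{group} people are {attribute}."

def make_pairs(target_groups, baseline_group, attributes):
    """Build (target-group sentence, baseline-group sentence) pairs that
    differ only in the group term, so any scoring difference between the
    two sentences can be attributed to the group mention."""
    pairs = []
    for group in target_groups:
        for attr in attributes:
            pairs.append((
                TEMPLATE.format(group=group, attribute=attr),
                TEMPLATE.format(group=baseline_group, attribute=attr),
            ))
    return pairs

pairs = make_pairs(["queer", "trans"], "straight", ["X", "Y"])
```

A bias score is then some aggregate (e.g. the fraction of pairs where the model prefers the sentence about the target group) over all pairs; minimal-pair construction is what makes that comparison controlled.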
... And, as he also notes, "What we generally lack as a public is clarity about how algorithms exercise their power over us" (Diakopoulos, 2013, p. 2, emphasis added). Napoli (2014) has gone so far as to classify algorithmic systems as institutions in their own right, akin to other cultural, political, and economic institutions. If we accept this position, then it would certainly make sense that these algorithmic systems, and the digital platforms in which they are embedded, become a focus of journalistic attention in the same way that journalism has served as a watchdog for other institutions. ...
Preprint
Full-text available
As digital platforms have come to play a central role in the news and information ecosystem, a new realm of watchdog journalism has emerged: the platform beat. Journalists on the platform beat report on the operation, use, and misuse of social media platforms and search engines. The platform beat can serve as an important mechanism for increasing the accountability of digital platforms, in ways that can affect public trust in the platforms, but that can also, hopefully, lead to the development of stronger, more reliable, and ultimately more trustworthy platforms. However, a number of tensions, vulnerabilities, and potential conflicts of interest characterize the platform beat. This paper explores these complex dynamics in an effort to assess the capacity of those on the platform beat to enhance the accountability and trustworthiness of digital platforms.
... We are at the "algorithmic turn" [59, 82], witnessing the application, at various levels, of AI algorithms that possess self-learning and autonomous complex decision-making abilities. AI algorithms have now found their way into auto-generated content, as they are capable of replacing humans in many of the cognitive tasks involved in producing auto-generated news content [27, 46]. ...
Article
Full-text available
Recent AI developments have made it possible for AI to auto-generate content—text, image, and sound. Highly realistic auto-generated content raises the question of whether one can differentiate between what is AI-generated and human-generated, and assess its origin and authenticity. When it comes to the processes of digital scholarship and publication in the presence of automated content generation technology, the evolution of data storage and presentation technologies demands that we rethink basic processes, such as the nature of anonymity and the mechanisms of attribution. We propose to consider these issues in light of emerging digital storage technologies that may better support the mechanisms of attribution (and fulfill broader goals of accountability, transparency, and trust). We discuss the scholarship review and publication process in a revised context, specifically the possibility of synthetically generated content and the availability of a digital storage infrastructure that can track data provenance while offering: immutability of stored data; accountability and attribution of authorship; and privacy-preserving authentication mechanisms. As an example, we consider the MetaScribe system architecture, which supports these features, and we believe such features allow us to reconsider the nature of identity and anonymity in this domain, and to broaden the ethical discussion surrounding new technology. Considering such technological options, in an underlying storage infrastructure, means that we could discuss the epistemological relevance of published media more generally.
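The storage properties the abstract lists (immutability of stored data, attribution of authorship) can be illustrated with a minimal hash-chained log. This is an illustrative sketch only, not the MetaScribe architecture; the `ProvenanceLog` class and its field names are invented for the example:

```python
import hashlib
import json

def record_hash(body: dict) -> str:
    """SHA-256 over a canonical (sorted-key) JSON encoding of a record body."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode("utf-8")).hexdigest()

class ProvenanceLog:
    """Append-only log in which every entry commits to the previous entry's
    hash, so later tampering with any stored record is detectable."""

    def __init__(self):
        self.entries = []

    def append(self, author_id: str, content: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "author": author_id,  # attribution of (possibly synthetic) authorship
            "content_digest": hashlib.sha256(content.encode("utf-8")).hexdigest(),
            "prev": prev,
        }
        entry = dict(body, hash=record_hash(body))
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; False if any entry was altered or reordered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("author", "content_digest", "prev")}
            if e["prev"] != prev or e["hash"] != record_hash(body):
                return False
            prev = e["hash"]
        return True
```

Chaining each record to its predecessor is what makes the store effectively immutable: changing any stored digest invalidates every subsequent hash on verification.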
... HMC systems are no longer simply instances of the mediation or dissemination of communication but also of its generation, a development based on the automated processing of (human) verbal and non-verbal utterances. This is ultimately the idea expressed in the concept of "automated media" (Andrejevic, 2020; Napoli, 2014). From this perspective, communicative AI and communicative robots operate as 'media within media'; they process within media. ...
Chapter
Full-text available
This chapter describes the study of human-machine communication (HMC) as inherently interdisciplinary. This interdisciplinarity is significant in several ways. With regard to its scope, narrow forms of correspondence exist with neighboring disciplines in media and communication studies, as do broader connections with more diverse disciplines such as computer science. With regard to the types of interdisciplinarity, it must be taken into account that HMC already represents an interdisciplinary phenomenon whose investigation requires the methodological and theoretical integration of approaches from different disciplines. With regard to the goals of interdisciplinarity, HMC aims both at fundamental research (the so-called "epistemological orientation" of interdisciplinarity) and at the application of this research, such as the development of "socio-compatible" communicative AI and communicative robots (the so-called "instrumental orientation" of interdisciplinarity). HMC's requirement for cross-compatible approaches becomes most apparent when one keeps in mind that communicative AI and communicative robots challenge the three crucial foundational concepts of media and communication studies: communication, media, and agency. Only through an interdisciplinary approach can these concepts be rethought and purposeful foundations built for empirical research.
... is driven by Internet networks (Chia et al., 2021). Although a useful concept for making sense of how conspiratorial thinking and new age spirituality collide online, the term was introduced just before the so-called algorithmic turn (Napoli, 2014). We suggest that we are in a new era of conspirituality, one that is defined not just by online communities and collaborative knowledge construction, but also by algorithms. ...
Article
Full-text available
In this article, we introduce the concept of algorithmic conspirituality to capture occasions when people find personal, often revelatory connections to content algorithmically recommended to them on social media and explain these connections as a kind of algorithmically mediated cosmic intervention. The phenomenon emerges from three particular developments: an epistemological shift that has positioned algorithms as important tools for self-knowledge; the sublime quality that algorithms have acquired, which primes users to imagine them as providential; and the rise of conspirituality (a portmanteau of conspiracy and spirituality). In conceptualizing algorithmic conspirituality, we particularly focus on TikTok, where the platform’s For You Page algorithm shapes users’ experience to an even greater degree than other platforms. We illustrate the concept through three example TikTok videos and conclude with a discussion and recommendations on future research agendas using algorithmic conspirituality.
... Platforms and algorithms, too, are socially constructed and can only be understood with a view to the people and organizations (above all, profit-oriented companies) that created them (cf. Klinger and Svensson 2018; Napoli 2014). Ananny (2016, p. 99) describes platforms as "an assemblage ... of institutionally situated computational code, human practices, and normative logics that creates, sustains, and signifies relationships among people and data through minimally observable, semiautonomous action". ...
Article
Full-text available
Abstract: This article asks about structural changes in political communication that result from digitalization. It adopts a rule-oriented, institutionalist perspective: digital communication media such as social media platforms exhibit institutional logics of their own and thus influence the rules by which political communication takes place. To substantiate this thesis, the concept of digitalization is first differentiated into technical possibility and social realization. Political communication is viewed as a process of mediation. Alongside self-mediation by political actors and mediation by journalistic media, a new type comes to the fore with digital communication media: automated algorithmic mediation. These distinctions give rise to several paradoxes that are relevant for considering the institutional consequences: digitalization lowers the costs of communication and enables a greater volume of published messages, but at the same time reduces the chances of societal attention and of successful communication. Through automated algorithmic mediation, actors can address their messages in higher resolution to specific target groups and connect with them, yet the digital forms of connectivity impede the representation and attributability of messages to political actors that democratic processes require. Technically enabled and socially demanded transparency goes hand in hand with efforts by political organizations to conceal or obscure their own actions. Digitalization and automated algorithmic mediation thus lead both to new visibilities and to new invisibilities of the political.
... Nowadays, computers, data, and algorithms play a central role in journalism and influence all relevant areas of newswork, from information research to content production and distribution (Diakopoulos, 2019; see Saurwein in this volume). In other words, there is evidence of a "computational" (Coddington, 2015) or "algorithmic" (Napoli, 2014) turn in journalism. However, there is a double relevance to this transformation: the same means (data and algorithms) that distinguish the datafied society are used by journalism to observe, in turn, the datafication of society as it becomes an important object of reporting (Porlezza, 2018). ...
Chapter
The chapter summarizes results from an empirical study that intends to shed light on current change processes in journalism ethics. Qualitative interviews with media practitioners and a document analysis of ethics codes and guidelines in ten European countries show that newsrooms are confronted with a broad spectrum of ethical issues that are seen as a direct result of the digitization of journalism. While many of them led to adaptations in the practices of media self-regulators across Europe, datafication and algorithm-driven newswork remain uncharted territory.
... However, an algorithm should not be considered independently from the system in which it functions. In this article, algorithms are considered as systems that interact with other systems, interfaces, and users [2,3]. ...
Article
Full-text available
This article focuses on PubMed’s Best Match sorting algorithm, presenting a simplified explanation of how it operates and highlighting how artificial intelligence affects search results in ways that are not seen by users. We further discuss user search behaviors and the ethical implications of algorithms, specifically for health care practitioners. PubMed recently began using artificial intelligence to improve the sorting of search results through a Best Match option. In 2020, PubMed deployed this algorithm as the default search method, necessitating serious discussion of the ethics of this and similar algorithms, as users do not always know when an algorithm uses artificial intelligence, what artificial intelligence is, or how it may affect their everyday tasks. These implications resonate strongly in health care, in which the speed and relevance of search results are crucial but do not negate the importance of a lack of bias in how those results are selected and presented to the user. Since a health care provider will rarely venture past the first few results in search of a clinical decision, will Best Match help them find the answers they need more quickly? Or will the algorithm bias their results, leading to the potential suppression of more recent or relevant results?
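To make the contrast concrete, the following toy sketch shows how a relevance-based ordering can differ from a date-based ordering. The tf-idf-style score is an illustrative stand-in only; PubMed's actual Best Match ranker is a trained model and is not reproduced here, and all document data below is invented:

```python
from math import log

def relevance(query_terms, doc_text, n_docs, doc_freq):
    """Toy tf-idf score: a stand-in for a learned 'best match' ranker."""
    words = doc_text.lower().split()
    score = 0.0
    for term in query_terms:
        tf = words.count(term)
        if tf:
            # Smoothed idf so common terms still contribute a little.
            score += tf * log((n_docs + 1) / (doc_freq.get(term, 0) + 1))
    return score

def rank_best_match(query_terms, docs):
    """Sort by relevance (descending) instead of publication year."""
    doc_freq = {}
    for d in docs:
        for term in set(d["text"].lower().split()):
            doc_freq[term] = doc_freq.get(term, 0) + 1
    return sorted(
        docs,
        key=lambda d: relevance(query_terms, d["text"], len(docs), doc_freq),
        reverse=True,
    )
```

Running both orderings over the same small corpus shows the crux of the ethical question: the newest article need not be the one the ranker surfaces first.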
Article
The algorithmic automation of media processes has produced machines that perform in roles that were previously occupied by human beings. Recent research has probed various theoretical approaches to the agency and ethical responsibility of machines and algorithms. However, there is no theoretical consensus concerning many key issues. Rather than setting out with fixed conceptions, this research calls for a closer look at the considerations and attitudes that motivate actual attributions of agency and responsibility. The empirical context of this study is legacy media where the introduction of automation, together with topical considerations of journalistic ethics and responsibility, has given rise to substantial reflection on received conceptions and practices. The results show a continuing resistance to attributions of agency and responsibility to machines. Three lines of thinking that motivate this stance are distinguished, and their potential shortcomings and theoretical implications are considered.
Chapter
The publication by Zhang et al. (2019, http://arxiv.org/pdf/1904.05440v1) on the algorithm developed on behalf of the Walt Disney Company for generating so-called animatics and storyboards from the screenplays of a film media good demonstrates the potential to transform the early phases of the film media industry's value-creation process and to give Disney, as the developing film media company, economic advantages. In the initiation and pre-production phases of the value-creation process of film media companies such as Disney, the activities that the algorithm could substitute in the medium term are performed or accompanied by various permanently employed or freelance actors. Before production of the film media good begins, for example, the screenplays are turned, through the artistic craft of storyboard artists and animatic experts, into scene images, a storyboard, and moving images based on it, the animatics, before the further creation of the film medium proceeds through the producers, directors, and other actors. This entails considerable cost and time expenditure arising before actual production. In this essay, we examine the algorithm presented by Zhang et al. (2019, http://arxiv.org/pdf/1904.05440v1) as a potential means of resolving some typical economic problems that have their origins in the economic properties of film media goods. We then highlight the individual rationalization potentials of the algorithm as well as opportunities to increase allocative and productive efficiency in the film media industry. Finally, we provide an overview of the societal and media-economic effects that may arise in the medium and long term in the context of the algorithm.
Chapter
Since the rise of TikTok and Kuaishou from 2014 onwards, short video has been one of the most popular digital streaming media forms in China. This phenomenon has arisen alongside the rapid expansion of digital media usage in rural areas and lower-tier cities. Rapid urbanisation in these regions has led to the creation of a unique subculture on these short video platforms. These grassroots-produced videos usually have stereotypical plots and vulgar content, centred on these users’ ordinary, daily lives. This is sometimes criticised by the mainstream media. However, it is these social attributes, characteristic to these platforms, that are contributing to a new subculture of alternative-making. This has become an unignorable, modern phenomenon, which is now crucial for understanding contemporary Chinese politics and cultural expressions in new media. By studying these videos and the subculture they have generated within the framework of “compressed modernity”, this chapter will argue that these short videos are functioning as crucial expressions of vigorous struggle and grassroots resistance in China. The case study of this chapter will consist of content from two short video accounts, Big Wolf Dog Zheng Jianpeng Couple, and Zhu Yidan’s Boring Life to illustrate how “compressed modernity” and grassroots resistance are both reflected in family relationships and professional spaces in China. By interpreting the plot and characterisation in these fictional short videos, we investigate how these grassroots content creators reflect on and reshape their social identities. Based on this research and analysis, this essay seeks to contribute a deeper understanding of how rapid modernisation has affected cultural and economic aspects of fast-developing Chinese digital media. Keywords: Short video; Streaming; China; Compressed modernity; Douyin; Online communities; Content creators; Digital media
Conference Paper
Algorithmic systems that recommend content often lack transparency about how they come to their suggestions. One area in which recommender systems are increasingly prevalent is online news distribution. In this paper, we explore how a lack of transparency of (news) recommenders can be tackled by involving users in the design of interface elements. In the context of automated decision-making, legislative frameworks such as the GDPR in Europe introduce a specific conception of transparency, granting 'data subjects' specific rights and imposing obligations on service providers. An important related question is how people using personalized recommender systems relate to the issue of transparency, not as legal data subjects but as users. This paper builds upon a two-phase study on how users conceive of transparency and related issues in the context of algorithmic news recommenders. We organized co-design workshops to elicit participants' 'algorithmic imaginaries' and invited them to ideate interface elements for increased transparency. This revealed the importance of combining legible transparency features with features that increase user control. We then conducted a qualitative evaluation of mock-up prototypes to investigate users' preferences and concerns when dealing with design features to increase transparency and control. Our investigation illustrates how users' expectations and impressions of news recommenders are closely related to their news reading practices. On a broader level, we show how transparency and control are conceptually intertwined. Transparency without control leaves users frustrated. Conversely, without a basic level of transparency into how a system works, users remain unsure of the impact of controls.
Article
As people access news via digital platforms, existing literature provides foundations for institutional approaches to news organizations’ platform dependency. Yet platform dependency also exists on a spectrum: size, business model, and market position shape how each news organization strategizes its reliance on digital platforms. I draw on in-depth interviews with 22 South Korean news professionals to delve into different survival strategies in dealing with South Korea’s biggest search portal and news aggregator, Naver. Findings reveal that, contrary to common belief, journalists in legacy news organizations experience more pressure and compromise journalistic values with clickbait headlines. They view their relationship with the platform in more hierarchical and inevitable terms, while journalists from new, emerging organizations are relatively freer from the competition for clicks and strive for higher-quality journalism. However, the difference stems from the Naver platform’s news organization ranking system and its tiered visibility structure, which systematically creates differences in audience reach and news distribution.
Chapter
This chapter shows how news outlets’ engagement with algorithmic distribution intersects with the economics of online journalism. There are concerns that news outlets have become dependent on digital platforms for audience traffic. I offer an alternative perspective here, arguing that news media organisations are instead dependent on advertising income, which is still a key source of revenue for many publications. As a result, publications become reliant on the complex algorithmic systems that drive programmatic advertising, produce inequitable outcomes and even harm media freedom. While large news media outlets are trying to extricate themselves from these systems by building their own advertising platforms, this will lead to a two-speed news media economy benefiting companies that have the money to invest in alternative options.
Chapter
In Switzerland, social media platforms and their algorithms increasingly influence news distribution and consumption. Requests to improve the governance and regulation of the algorithmic distribution of news have gained traction, both in policy circles and in the news industry. However, the Swiss government has historically held that no regulatory action is necessary. Even today, Switzerland continues to play a waiting game in order to avoid the risk of negative market impacts. But this stance puts Switzerland in a delicate position, as reform options can diminish over time. These developments lead to a critical juncture in the political debate around the governance of intermediaries, with Switzerland closely watching what neighboring countries and supranational institutions like the European Union will do as it decides whether to follow the crowd or take its own path.
Article
News media are increasingly interwoven with social media platforms. Building on institutional theory, we trace the repercussions of the platform infrastructure inside a media organization by focusing on organizational discourses and practices in connection with the journalistic use of social media. The empirical material includes interviews, field notes, chat logs, and documents collected from a public service media organization during a 6-month on-site and virtual ethnography. The findings show how platform pressures intertwine with content production, audience representation, journalistic values, and organizational development, thus manifesting the infrastructuralization and institutionalization of platforms in the media industry. While the interviewees articulated tensions related to adopting social media, the fieldwork data revealed forms of mimetic and normative isomorphism, mediated by platform data and professional roles in the organization. Moreover, the platform infrastructure seems to cultivate both critical and aspirational talk in the organization, which implies a more complex relationship beyond coercive platform power.
Article
Full-text available
This article presents the results of an in-depth ethnographic study of the development of a personalization algorithm in a large regional news organization in Denmark. Drawing on the concept of the sociotechnical assemblage, we argue that in the process the news organization moves from distributing news to users as segments of consuming collectives to algorithmically constructing individual users as aggregated data points. Second, we show how personalization disassembles the constitution of “the news” as a finite arrangement of articles, replacing one structural organization and routinization of news distribution with an algorithmic and numeric form of organizing distribution. This disassembling leads to negotiations over loss of control, as editors realize that their publicist and democratic mission is at stake and struggle to build news values such as timeliness and localness into the algorithm, thus “translating back” agency from the algorithm to the journalistic staff. Finally, we discuss how the negotiations involved in this concrete case study have far-reaching implications for the future of journalism, as this transformation further emphasizes the economic value of news for the individual while putting at stake the societal value of news journalism and of audiences as democratic collectives.
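The negotiation described here, building editorial values such as timeliness and localness into a personalization algorithm, can be sketched as a weighted score. The weights, field names, and decay constant below are illustrative assumptions for the example, not the studied system's actual formula:

```python
from math import exp

def personalized_score(user_interests, user_region, article, now_hours,
                       w_interest=0.5, w_timeliness=0.3, w_local=0.2):
    """Blend a per-user interest signal with editorial news values.

    user_interests: topic -> affinity in [0, 1], e.g. from reading history.
    The editorial weights are how news values get 'translated back' into
    an otherwise purely interest-driven ranking.
    """
    interest = user_interests.get(article["topic"], 0.0)
    age_hours = now_hours - article["published_hours"]
    timeliness = exp(-age_hours / 24.0)                 # decays over roughly a day
    localness = 1.0 if article["region"] == user_region else 0.0
    return (w_interest * interest
            + w_timeliness * timeliness
            + w_local * localness)
```

With nonzero editorial weights, a fresh local story can outrank a stale story that better matches the user's inferred interests, which is precisely the kind of control the editors in the study negotiated over.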
Article
The realm of politics (and social relations in general) during the era of social media and massive amounts of data about citizens is undergoing a creeping transformation that the general public does not understand. This essay draws attention to the increasingly deep conditioning of the political sphere through a new, directly unobservable form of data-driven political communication. This communication, using social media user data and algorithms for its analysis, is fundamentally changing the conditions of politics and electoral campaigning. Additionally, it is introducing a new form of information asymmetry that privileges tech giants and individuals with the capital to buy access to these data. As a result of this marriage of technology corporations and their wealthy principals, political discourses are being radicalised and controlled, weakening the democratic system.
Article
Through guidelines, terms of service and algorithmic curation, digital platforms such as YouTube encourage creators to produce content that fits with the commercial goals of the platform. Scholars have argued that this pressure to conform might lead to uniformity, or isomorphism, in the ways organizations manage their presence on platforms. This article contributes to the debate on isomorphism by taking a bottom-up approach and asking to what extent creators on YouTube pursue similar, or different, strategies for uploading and monetizing content. Through quantitative and qualitative analyses of a sample of YouTube channels, we show how content creators adapt to, negotiate with, and defy institutional pressures. In the end, we find greater support for diversification, that is, polymorphism, than for concentration in the ways organizations manage their presence on the platform. This has implications for how we understand platform power and integrate institutional theories into communication research.
Article
The notion of “smart city” incorporates promises of urban resilience, referring generally to capacities for cities to anticipate, absorb, react, respond, and reorganize in the face of disruptive changes and disturbances. As such, artificial intelligence (AI), coupled with big data, is being heralded as a means for enhancing and accessing key determinants of resilience. At the same time, while AI generally has been extolled for contributions to urban resilience, less attention has been paid to the other side of the equation — i.e., to the ethical, governance, and social downsides of AI and big data that can operate to hinder or compromise resilience. With particular attention to relevant institutional dynamics and features, an encompassing and systemic conception of smart and resilient cities is delineated as a critical lens for viewing and analyzing complex instrumental and intrinsic aspects of the relationship between AI and resilience. As a broader contribution to the literature, a set of structural, process, and outcome conditions are offered for engaging and assessing linkages inherent in the use of AI relative to urban resilience in terms of absorptive capacity, speed of recovery, over-optimization avoidance, and creative destruction, especially as regards impacts on relevant practices, standards, and policies.
Article
As automated journalism based on AI comes into being, it is important to understand the possibilities and limitations of algorithmic competence for the institutions facilitating human-machine collaboration. Meanwhile, video has become mainstream in the advertising realm. To expand the scope of research from journalism to advertising, and from text news to video, a comparative study examined how users perceive videos created by AI and by humans. Explicit evaluations showed no significant difference, but implicit appraisals favored the human-generated video. The key discussion concerns boundary thinking about AI in both the academic and industrial spheres.
Article
Full-text available
Industry advocates argue that the focus of advertising production has shifted from the creativity of practitioners to consumer analytics and the potential advantages of big data. Although limited empirical research offers valuable insights into the changing role of advertising practitioners, it lacks a critical perspective situating that role in a broader social context. On the other hand, the digital labor and branding literature overconcentrates on user labor and neglects the role of practitioners in advertising production. By deploying the concept of immaterial labor, this article reevaluates the findings of mainstream marketing-advertising literature within the context of post-Fordist labor. It aims to create a resonance between theories of immaterial labor and the advertising literature and to call for further empirical research from a labor perspective. It argues that advertising practitioners put more strategic, relational, and communicative capacities to work to manage a data-oriented market. Keywords: Advertising Practitioners, Immaterial Labour, Big Data, Media Work, Autonomist Marxism
Article
As artificial intelligence (AI) technologies become more ubiquitous for streamlining and optimizing work, they are entering fields representing organizational logics at odds with the efficiency logic of automation. One such field is journalism, an industry defined by a logic enacted through professional norms, practices, and values. This paper examines the experience of technologists developing and employing natural language generation (NLG) in news organizations, looking at how they situate themselves and their technology in relation to newswork. Drawing on institutional logics, a theoretical framework from organizational theory, we show how technologists shape their logic for building these emerging technologies based on a theory of rationalizing news organizations, a frame of optimizing newswork, and a narrative of news organizations misinterpreting the technology. Our interviews reveal technologists mitigating tensions with journalistic logic and newswork by labeling stories generated by their systems as nonjournalistic content, seeing their technology as a solution for improving journalism, enabling newswork to move away from routine tasks. We also find that as technologists interact with news organizations, they assimilate elements from journalistic logic beneficial for benchmarking their technology for more lucrative industries.
Article
Full-text available
This research concerns the perceived need for and benefits of an algorithmically generated, personalizable tip sheet that journalists could use to improve and expand coverage of state legislatures. The study comprised two research projects to determine whether working journalists could make good use of such a tool and, if so, which features and functionalities they would most value in it. It also explored journalists’ perceptions of the role of such tools in their newswork. In a survey of 193 journalists, nearly all said legislative coverage is important, but only 37% said they currently have the resources for such coverage, and 81% said they would improve their coverage if barriers were removed. Respondents valued the ability to receive customizable alerts about news events regarding specific people, issues, or legislative actions. A follow-up series of semi-structured interviews with reporters surfaced concerns about such issues as transparency, trust, and timeliness, and identified differing normative assumptions about how such a tool should influence their newswork.
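The customizable alerts respondents valued could, in the simplest case, be keyword subscriptions matched against a feed of legislative events. A minimal sketch under that assumption; the event fields, journalist identifiers, and subscription structure are invented for the example:

```python
import re

def match_alerts(event: dict, subscriptions: dict) -> list:
    """Return journalists whose saved alert terms appear in a legislative event.

    event: e.g. {"actor": ..., "action": ..., "bill": ...}
    subscriptions: journalist id -> list of alert terms (people, issues, bills).
    """
    # Flatten all event fields into one lowercase text for matching.
    text = " ".join(str(v) for v in event.values()).lower()
    hits = []
    for journalist, terms in subscriptions.items():
        if any(re.search(r"\b" + re.escape(t.lower()) + r"\b", text)
               for t in terms):
            hits.append(journalist)
    return hits
```

A production system would add ranking, deduplication, and the transparency cues interviewees asked for; this sketch only shows the core matching step.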
Article
Full-text available
Non-fungible tokens (NFTs) exist today as a component of a broader, ever-evolving financial environment in which questions of value, ownership, and intention are characterized by their ambiguity. This article considers Dapper Labs’ “NBA Top Shot,” a blockchain-backed website inviting NBA fans to join in “a new era in fandom” wherein they may acquire NFTs of NBA highlights by opening “packs,” which are functionally similar to trading cards. NFTs reflect the pressures of market forces, as well as increased cultural and economic emphasis on marketization, financialization, commodification, and the ubiquity of gambling-like designs and interactions. Furthermore, this study explores tensions present in differing intentions for the NBA Top Shot platform and Discord server, the diffuse nature of user conversations (a nature that disregards topical boundaries), and audience attention toward marketization and investment interests. The commodification of the NBA fan experience illustrates a shared social pressure to more readily think of one’s life, interactions, and consumptive behaviors through the lens of the investor, fostering financial attitudes that normalize instability and encourage risk-taking beyond the scope of a platform where purchase-dependent interactions serve as a source of joy and social experience in a venue representing a perceived electronic gold rush.
Article
Full-text available
Until the end of the last century, media sociology was synonymous with the investigation of mass media as a social domain. Today, media sociology needs to address a much higher level of complexity, that is, a deeply mediatized world in which all human practices, social relations, and social order are entangled with digital media and their infrastructures. This article discusses this shift from a sociology of mass communication to the sociology of a deeply mediatized world. The principal aim of the article is to outline a new media-sociological imagination: media sociology as a cross-sectional sociology, a sociology of entanglement, and a new critical sociology of technological deep structures.
Article
This contribution investigates how public funding of media can be reinterpreted to fit a communication rights–based approach to media policy. To this end, it describes and evaluates current public funding in small democratic-corporatist European media systems. While public funding is no longer “frozen” in its late twentieth-century state, as funding mechanisms have undergone significant change, when held against a rights-based approach, it appears there is a need to shift the basis for funding from safeguarding the survival of media industries to safeguarding the communication rights of citizens, allowing media to become “enablers” in executing these rights.
Book
Full-text available
Does the information on the Web offer many alternative accounts of reality, or does it subtly align with an official version? In Information Politics on the Web, Richard Rogers identifies the cultures, techniques, and devices that rank and recommend information on the Web, analyzing not only the political content of Web sites but the politics built into the Web's infrastructure. Addressing the larger question of what the Web is for, Rogers argues that the Web is still the best arena for unsettling the official and challenging the familiar. Rogers describes the politics at work on the Web as either back-end—the politics of search engine technology—or front-end—the diversity, inclusivity, and relative prominence of sites publicly accessible on the Web. To analyze this, he developed four "political instruments," or software tools that gather information about the Web by capturing dynamic linking practices, attention cycles for issues, and changing political party commitments. On the basis of his findings on how information politics works, Rogers argues that the Web should be, and can be, a "collision space" for official and unofficial accounts of reality. (One chapter, "The Viagra Files," offers an entertaining analysis of official and unofficial claims for the health benefits of Viagra.) The distinctiveness of the Web as a medium lies partly in the peculiar practices that grant different statuses to information sources. The tools developed by Rogers capture these practices and contribute to the development of a new information politics that takes into account and draws from the competition between the official, the non-governmental, and the underground.
Chapter
Full-text available
Algorithms (particularly those embedded in search engines, social media platforms, recommendation systems, and information databases) play an increasingly important role in selecting what information is considered most relevant to us, a crucial feature of our participation in public life. As we have embraced computational tools as our primary media of expression, we are subjecting human discourse and knowledge to the procedural logics that undergird computation. What we need is an interrogation of algorithms as a key feature of our information ecosystem, and of the cultural forms emerging in their shadows, with a close attention to where and in what ways the introduction of algorithms into human knowledge practices may have political ramifications. This essay is a conceptual map to do just that. It proposes a sociological analysis that does not conceive of algorithms as abstract, technical achievements, but suggests how to unpack the warm human and institutional choices that lie behind them, to see how algorithms are called into being by, enlisted as part of, and negotiated around collective efforts to know and be known.
Article
Full-text available
Through three case studies of online political activism on Facebook, this article conceptualizes the deployment of issue publics (Lippmann, 1922; Marres, 2005) on Facebook. We argue that issue publics on Facebook come into being through a specific set of double articulations of code and politics that link and reshape informational processes, communicational constraints and possibilities, and political practices in different and sometimes contradictory ways. Using Maurizio Lazzarato's exploration of immaterial labour (2004), we demonstrate the need to further understand the networking of publics and their issues by considering how online platforms provide the material, communicational, and social means for a public to exist and therefore define the parameters for assembling issues and publics and circumscribe a horizon of political agency.
Article
Full-text available
Despite growing interest in search engines in China, relatively few empirical studies have examined their sociopolitical implications. This study fills several research gaps by comparing query results (N = 6320) from China's two leading search engines, Baidu and Google, focusing on accessibility, overlap, ranking, and bias patterns. Analysis of query results of 316 popular Chinese Internet events reveals the following: (1) after Google moved its servers from Mainland China to Hong Kong, its results are equally if not more likely to be inaccessible than Baidu's, and Baidu's filtering is much subtler than the Great Firewall's wholesale blocking of Google's results; (2) there is low overlap (6.8%) and little ranking similarity between Baidu's and Google's results, implying different search engines, different results and different social realities; and (3) Baidu rarely links to its competitors Hudong Baike or Chinese Wikipedia, while their presence in Google's results is much more prominent, raising search bias concerns. These results suggest search engines can be architecturally altered to serve political regimes, arbitrary in rendering social realities and biased toward self-interest.
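The overlap and ranking-similarity measures that the study reports (e.g. 6.8% overlap) can be sketched in a few lines. The function names, the Jaccard-style overlap, and the rank-displacement similarity below are illustrative assumptions, not the study's actual methodology.

```python
# Toy comparison of two engines' top-N result lists.
def overlap(results_a, results_b):
    """Share of URLs common to both lists (Jaccard index over the two sets)."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_similarity(results_a, results_b):
    """Rank agreement over shared URLs: 1.0 means shared results
    appear at identical positions in both lists."""
    shared = set(results_a) & set(results_b)
    if not shared:
        return 0.0
    n = max(len(results_a), len(results_b))
    displacement = sum(abs(results_a.index(u) - results_b.index(u))
                       for u in shared)
    return 1.0 - displacement / (len(shared) * (n - 1))

# Hypothetical top-4 lists for the same query on two engines.
engine_a = ["u1", "u2", "u3", "u4"]
engine_b = ["u3", "u5", "u1", "u6"]
low_overlap = overlap(engine_a, engine_b)        # 2 shared of 6 distinct URLs
low_rank_sim = rank_similarity(engine_a, engine_b)
```

Run over a large query sample, metrics like these yield exactly the kind of "different search engines, different results" finding the abstract describes.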
Article
Full-text available
This article relates technology studies to organization research and examines the technology-as-text metaphor. The study of organization is incomplete as long as tangible technology remains in its blind spot. Linguistic metaphors and analogies, while capturing and indeed amplifying much of received understandings of technology, succeed only partially in repairing the situation. The image of the palimpsest is used to highlight this critique and to visualize ways out. Thus, while the main concern of the paper is to re-situate technology to the study of organization, an argument is also put forward for a specific approach to the study of technology.
Article
Full-text available
As key socio-cultural building blocks of human societies, institutions are distinct from organizations and, hence, are central to sociological inquiry. In recent decades, however, institutional analysis has increasingly moved toward the analysis of organizations, while treating "institutions" as the environments or fields of organizations. While the insights offered by contemporary organizational theorists have provided important keys to understanding how organizations, especially economic organizations, adapt to pressures within their environments, the authors argue that the Old Institutionalisms of functional theorizing have much to offer the New Institutionalisms. In this article, the Old Institutionalisms are revisited to construct a precise definition of institutions as well as posit a robust theory of institutional dynamics, a theory which supplements contemporary organizational analysis. Four dynamics stand out: the process of institutional autonomy, the intersection of stratification systems and institutions, modes of integration within and between institutions, and generalized symbolic media of exchange. In particular, the latter two occupy the authors' attention primarily as they have been under-theorized elsewhere.
Article
Full-text available
The article discusses the background and origins of research on media institutions as a field, and especially assesses the development and status of Norwegian research on broadcasting institutions. It is demonstrated how the field has developed, both quantitatively and qualitatively, through three key phases: the era of broadcasting monopolies; the "new media situation" in the 1980s and 1990s; and the era of convergence, globalization, and commercialization from the late 1990s. A key purpose is to discuss the theoretical perspectives and implicit and explicit assumptions upon which the research is based. Further, the article points to shortcomings and gaps in our knowledge of how media institutions evolve and operate. In closing, it is suggested how the field may maintain its relevance in an era where the very concept of a "broadcasting institution" is becoming more blurred.
Article
Full-text available
New information and communication technologies (ICTs) such as email and the internet have altered the work practices of journalists. This article introduces Actor-Network Theory (ANT) as a framework for analyzing the relation between new ICTs and changing practices in newswork. It argues that ANT offers an exciting new perspective on ‘holistic’ studies of mass mediation practices, because it calls for a focus on heterogeneous actors: people, ideals, symbolic constructions, and material elements are seen as equally important elements to analyze. The article offers empirical examples of how ICTs have become elements of specific actor-networks, and argues that, at this point, the new aspect of them is their seamlessness. It is argued that while including materiality — technology — in analyses of journalism practices we should refrain from essentializing the ‘effects’ of ICT. Rather, technology should be treated analytically as an actant tightly integrated in networks with other actants, without being assigned particular forces or consequences.
Article
In any business, the ability to see into the future is the killer app, and Netflix may be getting close with “House of Cards.”
Article
Building on earlier empirical work in newsrooms, this paper contends that a fundamental transformation has occurred in journalists' understanding of their audiences. A new level of responsiveness to the agenda of the audience is becoming built into the DNA of contemporary news work. This article argues, however, that this journalistic responsiveness to the "agenda of the audience" has multiple, often contradictory meanings. It traces these out through a critical-historical sketch of key moments in the journalism-audience relationship, including the public journalism movement, Independent Media Center (Indymedia), and Demand Media. These different visions of the audience are correlated to different images of democracy, and they have different sociological implications. The public journalism movement believed in a form of democracy that was conversational and deliberative; in contrast, traditional journalism embraced an aggregative understanding of democracy, while Indymedia's democratic vision could best be seen as agonistic in nature. Demand Media and similar ventures, this article concludes, may be presaging an image of public that can best be described as algorithmic. Understanding this algorithmic conception of the audience may be the first step into launching a broader inquiry into the sociology and politics of algorithms.
Article
Historically, neither the creators nor the distributors of cultural products such as books or movies have used analytics - data, statistics, predictive modeling - to determine the likely success of their offerings. Instead, companies relied on the brilliance of tastemakers to predict and shape what people would buy. Creative judgment and expertise will always play a vital role in the creation, shaping and marketing of cultural products. But the balance between art and science is shifting. Today companies have unprecedented access to data and sophisticated technology that allows even the best-known experts to weigh factors and consider evidence that was unobtainable just a few years ago. And with increased cost and risk associated with the creation of cultural products, it has never been more important to get these decisions right. In this article, the authors describe the results of a study of prediction and recommendation efforts for a variety of cultural products. They discuss different approaches used to make predictions, the contexts in which these predictions are applied and the barriers to more extensive use, including the problem of decision making pre-creation. They then discuss two aspects of the prediction market. First, the need for better prediction for distributors of cultural products, and second, the potential for business models around prediction techniques.
Article
In the beginning, the World Wide Web was exciting and open to the point of anarchy, a vast and intimidating repository of unindexed confusion. Into this creative chaos came Google with its dazzling mission, "To organize the world's information and make it universally accessible," and its much-quoted motto, "Don't be evil." In this provocative book, Siva Vaidhyanathan examines the ways we have used and embraced Google, and the growing resistance to its expansion across the globe. He exposes the dark side of our Google fantasies, raising red flags about issues of intellectual property and the much-touted Google Book Search. He assesses Google's global impact, particularly in China, and explains the insidious effect of Googlization on the way we think. Finally, Vaidhyanathan proposes the construction of an Internet ecosystem designed to benefit the whole world and keep one brilliant and powerful company from falling into the "evil" it pledged to avoid.
Chapter
This chapter considers the question of commensuration - the process of comparison according to a common metric - and how it is accomplished on online social media websites. When commensurability is produced through the distributed reviews and ratings of thousands of user-generated postings, and transformed through filtering and weighting algorithms into ratings and rankings, it may be expected that different things will be paid attention to, connected, and compared. The chapter is interested in understanding these differences and the implications of online user-based evaluation mechanisms for how commensurability is organized and achieved.
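The mechanism the chapter describes, filtering and weighting thousands of user ratings into a single commensurable score, can be illustrated with a minimal sketch. The weighting scheme here (down-weighting some reviewers) is an invented example, not drawn from any particular site.

```python
# Turn heterogeneous user ratings into one comparable metric per item.
def weighted_score(ratings):
    """ratings: list of (stars, reviewer_weight) pairs.
    Returns the weight-adjusted mean rating, or None if unweighted."""
    total_weight = sum(w for _, w in ratings)
    if total_weight == 0:
        return None
    return sum(stars * w for stars, w in ratings) / total_weight

# Hypothetical items: the outlier review on A carries a reduced weight.
item_a = [(5, 1.0), (4, 1.0), (1, 0.2)]
item_b = [(3, 1.0), (3, 1.0)]
scores = {"A": weighted_score(item_a), "B": weighted_score(item_b)}

# Commensuration: two items with different review bases are now
# directly comparable, and rankable, on one metric.
ranking = sorted(scores, key=lambda k: -scores[k])
```

The design point the chapter raises follows directly: which reviews are filtered out and how weights are set determines what gets "paid attention to, connected, and compared."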
Article
Every day automated algorithms make decisions that can amplify the power of businesses and governments. Yet as algorithms come to regulate more aspects of our lives, the contours of their power can remain difficult to grasp. This paper studies the notion of algorithmic accountability reporting as a mechanism for elucidating and articulating the power structures, biases, and influences that computational artifacts exercise in society. A framework for algorithmic power based on autonomous decision-making is proffered and motivates specific questions about algorithmic influence. Five cases of algorithmic accountability reporting involving the use of reverse engineering methods in journalism are then studied and analyzed to provide insight into the method and its application in a journalism context. The applicability of transparency policies for algorithms is discussed alongside challenges to implementing algorithmic accountability as a broadly viable investigative method.
Article
This article advances a sociological approach to computational journalism. By computational journalism the article refers to the increasingly ubiquitous forms of algorithmic, social scientific, and mathematical forms of newswork adopted by many 21st-century newsrooms and touted by many educational institutions as the future of news. By sociological approach, the article endorses a research model that brackets, at least temporarily, many of the current industry concerns with the practical usability of newsroom analysis. The bulk of the article outlines a series of six lenses through which such an approach to computational journalism might be carried out. Four of these lenses are drawn from Schudson's classic typology of the sociology of news: economic, political, cultural, and organizational approaches. In addition, the author adds Bourdieusian field approaches and technological lenses to the mix. In each instance, the author discusses how particular approaches might need to be modified in order to study computational journalism in the digital age.
Article
Is the Internet democratizing American politics? Do political Web sites and blogs mobilize inactive citizens and make the public sphere more inclusive? The Myth of Digital Democracy reveals that, contrary to popular belief, the Internet has done little to broaden political discourse but in fact empowers a small set of elites: some new, but most familiar. Matthew Hindman argues that, though hundreds of thousands of Americans blog about politics, blogs receive only a minuscule portion of Web traffic, and most blog readership goes to a handful of mainstream, highly educated professionals. He shows how, despite the wealth of independent Web sites, online news audiences are concentrated on the top twenty outlets, and online organizing and fund-raising are dominated by a few powerful interest groups. Hindman tracks nearly three million Web pages, analyzing how their links are structured, how citizens search for political content, and how leading search engines like Google and Yahoo! funnel traffic to popular outlets. He finds that while the Internet has increased some forms of political participation and transformed the way interest groups and candidates organize, mobilize, and raise funds, elites still strongly shape how political material on the Web is presented and accessed. The Myth of Digital Democracy debunks popular notions about political discourse in the digital age, revealing how the Internet has neither diminished the audience share of corporate media nor given greater voice to ordinary citizens.
Article
Source: Democracy Now! JUAN GONZALEZ: When you follow your friends on Facebook or run a search on Google, what information comes up, and what gets left out? That's the subject of a new book by Eli Pariser called The Filter Bubble: What the Internet Is Hiding from You. According to Pariser, the internet is increasingly becoming an echo chamber in which websites tailor information according to the preferences they detect in each viewer. Yahoo! News tracks which articles we read. Zappos registers the type of shoes we prefer. And Netflix stores data on each movie we select. AMY GOODMAN: The top 50 websites collect an average of 64 bits of personal information each time we visit and then custom-design their sites to conform to our perceived preferences. While these websites profit from tailoring their advertisements to specific visitors, users pay a big price for living in an information bubble outside of their control. Instead of gaining wide exposure to diverse information, we're subjected to narrow online filters. Eli Pariser is the author of The Filter Bubble: What the Internet Is Hiding from You. He is also the board president and former executive director of the group MoveOn.org. Eli joins us in the New York studio right now after a whirlwind tour through the United States.
Article
"House of Cards" gives viewers exactly what Big Data says we want. This won't end well
Article
This study adopts new institutional theory from the sociology of organizations, as well as concepts from the study of social networks, to help explain news organizations' struggles to innovate in the face of uncertainty. This literature suggests organizations with institutional orientations tend to adopt fleeting change, following industry trends, or even buffering internal processes from innovation in the product. In contrast, organizations that network with markets and readers tend to adopt more substantial change. Factors shaping managers' decision-making are explored, with a particular focus on the role environmental uncertainty plays in news organizations pursuing connections within the news institution (strong ties) or with audiences (weak ties). Data from a survey of news organizations and an analysis of features on their websites suggest levels of innovation are low, and institutionalist tendencies dominate decision-making about product change. Where innovation occurs, it is due to corporate coercion and resources, and concrete evidence from the organization's market. Uncertainty about audiences and technologies tends to reinforce institutionalist tendencies by encouraging managers to follow present industry trends. Uncertainty does seem to fuel the news organization's internal capacity to innovate, but it does not lead to actual changes in website features. This suggests news organizations are decoupling internal processes from external processes—more evidence of an institutional orientation.
Article
The business model of gathering, producing and distributing news is changing rapidly. Producing content is not enough; moderation and curation by “news workers” is at least as important. There is a growing pressure on news organizations to produce more inexpensive content for digital platforms, resulting in new models of low-cost or even free content production. Subscription, advertising revenues and non-profit funding are in many cases insufficient to sustain a mature news organization. Aggregation, either by humans or machines, is gaining importance. At “content farms” freelancers, part-timers and amateurs produce articles that are expected to end up high in Web searches. Apart from this low-pay model a no-pay model—the Huffington Post—emerged where bloggers write for no compensation at all. We analyse the background to all this, the consequences for journalists and journalism, and the implications for online news organizations. We investigate aggregation services, content farms and no-pay or low-pay news websites.
Article
This article explores the new modalities of visibility engendered by new media, with a focus on the social networking site Facebook. Influenced by Foucault’s writings on Panopticism – that is, the architectural structuring of visibility – this article argues for understanding the construction of visibility on Facebook through an architectural framework that pays particular attention to underlying software processes and algorithmic power. Through an analysis of EdgeRank, the algorithm structuring the flow of information and communication on Facebook’s ‘News Feed’, I argue that the regime of visibility constructed imposes a perceived ‘threat of invisibility’ on the part of the participatory subject. As a result, I reverse Foucault’s notion of surveillance as a form of permanent visibility, arguing that participatory subjectivity is not constituted through the imposed threat of an all-seeing vision machine, but by the constant possibility of disappearing and becoming obsolete.
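The scoring logic discussed above can be made concrete with a toy sketch based on EdgeRank's publicly described components: user-to-creator affinity, edge-type weight, and time decay. All numbers, names, and the half-life decay form below are illustrative assumptions, not Facebook's actual implementation.

```python
# Toy EdgeRank-style story scoring: sum over a story's edges of
# affinity * edge weight * time decay. Stories that score low sink in
# the News Feed, which is the 'threat of invisibility' the article describes.

EDGE_WEIGHTS = {"comment": 3.0, "share": 4.0, "like": 1.0}  # assumed values

def edge_rank(edges, now, half_life_hours=24.0):
    """edges: list of (affinity, edge_type, created_at_epoch_seconds)."""
    score = 0.0
    for affinity, edge_type, created_at in edges:
        age_hours = (now - created_at) / 3600.0
        decay = 0.5 ** (age_hours / half_life_hours)  # exponential time decay
        score += affinity * EDGE_WEIGHTS[edge_type] * decay
    return score

now = 1_000_000.0
fresh = [(0.8, "comment", now - 3600)]        # recent comment, close friend
stale = [(0.8, "comment", now - 48 * 3600)]   # identical edge, two days old
```

With identical interactions, the fresh story outranks the stale one purely through time decay, which is why a participatory subject must keep generating edges to remain visible.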
Article
This article explores the institutionalization of YouTube: its transformation from user-generated content (UGC) – oriented as a virtual village – into a professionally generated content (PGC) video site, especially after being purchased by Google. YouTube has influenced the traditional media environment, but at the same time this new medium imitates the rules of the old media, including legally managed distribution of broadcasting content and smooth links between content and commercials. YouTube constitutes an evolution of the present media milieu, rather than a revolution. On the other hand, the dominance of mainstream media is, to a degree, still compromised in UGC culture. The emancipatory dimension of UGC media (e.g. as democratic, creative outlet with high accessibility and online library potential) is discussed in the conclusion, not losing sight of the technological-economic limitations placed on its continuing promise.
Article
This chapter reviews the history of three terms increasingly used by researchers in the fields of organization studies and information systems: “materiality,” “sociomateriality” and “socio-technical systems.” After this review, I explore ways in which these terms overlap and depart in meaning from one another in scholars’ writings. I suggest that materiality might be viewed as a concept that refers to properties of a technology that transcend space and time, while sociomateriality may be used to refer to the collective spaces in which people come into contact with the materiality of an artifact and produce various functions. I suggest that the concept of a sociomaterial practice is akin to what socio-technical systems theorists referred to as the “technical subsystem” of an organization, or the way that people’s tasks shape and are shaped by their use of machines. This technical subsystem is recursively organized alongside the social subsystem of an organization, which is characterized by an abstract set of roles, communication patterns, and so on.
Article
One of the central questions in free speech jurisprudence is what activities the First Amendment encompasses. This Article considers that question in the context of an area of increasing importance – algorithm-based decisions. I begin by looking to broadly accepted legal sources, which for the First Amendment means primarily Supreme Court jurisprudence. That jurisprudence provides for very broad First Amendment coverage, and the Court has reinforced that breadth in recent cases. Under the Court’s jurisprudence the First Amendment (and the heightened scrutiny it entails) would apply to many algorithm-based decisions, specifically those entailing substantive communications. We could of course adopt a limiting conception of the First Amendment, but any nonarbitrary exclusion of algorithm-based decisions would require major changes in the Court’s jurisprudence. I believe that First Amendment coverage of algorithm-based decisions is too small a step to justify such changes. But insofar as we are concerned about the expansiveness of First Amendment coverage, we may want to limit it in two areas of genuine uncertainty: editorial decisions that are neither obvious nor communicated to the reader, and laws that single out speakers but do not regulate their speech. Even with those limitations, however, an enormous and growing amount of activity will be subject to heightened scrutiny absent a fundamental reorientation of First Amendment jurisprudence.
Article
Abstract: Political science has tended to neglect the study of the news media as political institutions, despite a long history of party-subsidized newspapers and despite a growing chorus of scholars who point to an increasing "mediatization" of politics. Still, investigators in sociology, communication, and political science have taken up the close study of news institutions. Three general approaches predominate. Political economy perspectives focus on patterns of media ownership and the behavior of news institutions in relatively liberal versus relatively repressive states; a second set of approaches looks at the social organization of newswork and relates news content to the daily patterns of interaction of reporters and their sources; a third style of research examines news as a form of culture that often unconsciously incorporates general belief systems, assumptions, and values into news writing.
Article
This paper proposes and develops a model of audience evolution. The concept of audience evolution in this case refers to the notion that the dominant framework employed by media industry stakeholders (content producers, distributors, advertisers, media buyers, etc.) to conceptualize the audience evolves in response to environmental changes. These environmental changes primarily involve technological changes that simultaneously transform the dynamics of media consumption as well as the dynamics of gathering information on various dimensions of audience behavior. These technological changes also interact with one another, in that the technological changes that affect the dynamics of media consumption also simultaneously provide new means of gathering information on previously unmeasurable aspects of audience behavior. These technological changes, and their economic and strategic implications, are then filtered through a process of stakeholder resistance and negotiation, out of which new institutionalized conceptualizations of the media audience emerge. This paper asserts a causal relationship between the decline of traditional exposure metrics and the emergence of alternative conceptualizations of audience behavior. That is, the extent to which the fragmentation of the media environment is undermining the long-institutionalized exposure-focused conceptualization of the audience is creating an environment of exploration of, and receptivity toward, alternative conceptualizations of the audience that are derived from dimensions of audience behavior that are better capturable in today's increasingly fragmented, increasingly interactive media environment. This pattern suggests that the institutionalized audience is a very malleable construct; something that evolves in response to environmental conditions in order to facilitate the continued functioning of the audience marketplace.
Article
How are transformations in newswork intersecting with changes in the monitoring of reader behavior and new technologies of audience measurement? How, in short, are journalistic 'visions of the audience' shifting in the online era, and how are they enabling particular editorial practices? This article explores a provocative tension between the now common rhetorical invocation of the news audience as a 'productive and generative' entity, and the simultaneous, increasingly common institutional reduction of the audience to a quantifiable, rationalizable, largely consumptive aggregate. The first half of the article reviews the literature on the relationship between audience understanding and newsroom practices. The second half of the article is comprised of an ethnographic analysis of the manner by which increasingly prominent and widespread techniques of audience measurement and quantification interact with the newsroom rhetoric of the active, generative audience. The article concludes with some thoughts regarding the role played by audience quantification and rationalization in shifting newswork practices. It argues that the underlying rhetoric of the active audience can be seen as laying the groundwork for a vision of the professional reporter that is less autonomous in his or her news decisions and increasingly reliant on audience metrics as a supplement to news judgment.
Article
This study compares rational-choice economics approaches with institutional theory in assessing the ways news organizations respond to uncertainty. Rational-choice approaches suggest newsroom managers facing uncertainty will pursue “tight coupling” by monitoring audiences and collaborating with the newspaper's business side, while institutionalism suggests uncertainty will lead organizations to buffer themselves from their market environments and to mimic other news organizations. Findings from a national survey of editors and a content analysis of interactive features from newspaper Web sites reveal both institutionalist and rational-choice tendencies. News managers are monitoring audiences, but monitoring is not strongly affecting decisions about content and online features.
Book
The book is a collective meditation on the role of materiality in social affairs. The recent and growing interest in the concept of "materiality" certainly has diverse origins. Yet, it is closely associated with the diffusion of technological objects and artifacts through society and many have questioned how human choice and social practice are conditioned by the characteristics of such devices and systems. Many traditional technologies are easy to call "material" - they are made up of wood, steel, and other physical substrates that afford and constrain particular uses. Other technologies, such as software and rhetorical tropes, are not made up of such physical substrates, but they still have implications for human action in many of the same ways as the more traditional technologies. Thus, it is unclear how to talk about the materiality of technology in a way that includes both physical and nonphysical artifacts while still accounting for their effects. The book gathers together a group of scholars from various disciplines who approach the issues materiality raises from various angles, making evident that there is no single answer as to how the concept can be used to approach the perennial question of the ways technologies and humans bear upon one another. The book contributes to untangling the various meanings of materiality and clarifying the positions or perspectives from which they are produced.
Article
Deregulation and ownership concentration have been accompanied by increased rationalization of programming strategies in commercial music radio in many industrial nations. However, understanding of the impact of these trends on music programming is incomplete because little research has examined music radio's ‘culture of production’. This article addresses this deficit by exploring the knowledge frameworks that radio programmers draw on to transform records into music programming. Interviews with music programmers working at radio stations in the USA reveal fundamental variation in how culture production is managed in this industry. I account for this variation by distinguishing four programming philosophies that guide and legitimate programmers' choice of programming strategies. Finally, I describe the integration of these philosophies into programmers' knowledge frameworks by considering their impact on music programming, and the structural factors that accommodate and constrain each philosophy.
Article
This article employs and extends the concept of technical code (Feenberg, 1992, 1995a, 1995b) to examine the current state of the internet. The notion of technical code — the cultural and social assumptions and values that become manifest in a technology’s physical and structural forms — is invoked to examine design characteristics of the internet that, in turn, reflect and provide opportunities for important social outcomes. Overall, the internet’s technical design supports interoperability and open access, while suggesting an enormous capacity for personalization and innovation. In turn, these technical features support the emergence of myriad collective social activities, resulting in a sense of individual empowerment achieved through enhanced agency. Significant countervailing forces, however, inhibit this potential. By examining the values, priorities, and assumptions that have become built into the internet, both technically and socially, the present analysis clarifies this tension and serves to frame the internet’s potential at this critical time in its evolution.