The SAGE Handbook of Digital Society
... We recruited self-identified activists via purposeful and snowball sampling [18], given the close-knit nature of activist communities. We limited recruitment to participants who primarily used Instagram for activism, as it is recognized as the most popular platform for contemporary activists (e.g., [3,52,56,74,141]). While we ensured that all our participants used Instagram, the semi-structured nature of interviews resulted in our discussions naturally addressing broader social media use and activist practices as well. ...
Through interviews with 16 social justice activists, we explore their challenges in adapting to Instagram, particularly in light of the platform's evolving algorithm. Our findings reveal that frequent changes to these algorithms significantly impact activists' ability to engage effectively and disproportionately affect visibility, especially for those with fewer resources and less algorithmic expertise. Our contributions encompass discussions of activists' challenges in adapting to platform changes and their strategic shifts towards gaining broader visibility. We also address the expectations of being a "good digital activist" amidst algorithmic mediation on Instagram, emphasizing participants' need to navigate platform-mediated complexities while maintaining authenticity. Finally, we suggest design implications, advocating features, for both existing platforms and alternative systems built exclusively for activism, that reduce activists' concerns about quantitative metrics, promote selective privacy, tie amplification to thoughtful engagement, and foster community building through contextual moderation and communication.
... In particular, the oceans should be able to provide massive amounts of energy through a variety of different processes, e.g., ocean thermal energy, waves, osmotic pressure, salinity gradients and currents [2]. The review by Sitoe, Hoguane and Haddout [3] of ocean renewable energy in Africa concluded that mini tidal power plants and salt gradient power are the most suitable coastal power sources. An earlier review of the situation off the east coast of South Africa by Schumann [4] highlighted the potential posed by the Agulhas Current. ...
The Agulhas Current is a major western boundary current flowing polewards along the southeast coast of South Africa. This analysis assesses its characteristics and its suitability as a source of clean renewable energy. An extensive set of current measurements was obtained along a roughly 400 km section of coastline over a period spanning more than five years. These data confirmed that south-westward currents with speeds greater than 1.2 m s–1 occurred over more than 60% of the recorded time; such ocean current speeds compare very favourably to the wind speeds required for energy generation. These currents occurred at the continental shelf break in water depths of around 100 m, in the upper 50 m of the water column. Occasional current slowdowns and reversals did occur, the major influence being 'Natal Pulses', large-scale meanders in the Current that temporarily reversed the currents at the measurement sites. However, because of the surface temperature structure of the relevant water masses, such meanders can be identified in satellite imagery, giving a few days' advance warning of current reversals. The characteristics of western boundary currents have been known for many years, but at present there is no operational system in which this source of power is being utilised. The Agulhas Current has tremendous potential for renewable energy generation, but it is symptomatic of the many engineering challenges that still have to be solved before such generation becomes economically viable.
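The claim that 1.2 m s–1 ocean currents "compare very favourably" to the winds required for energy generation follows from the standard kinetic power-density formula P/A = ½ρv³: because seawater is roughly 800 times denser than air, a slow current carries the power density of a strong wind. The sketch below illustrates this with typical density values (the seawater density of 1025 kg/m³ is an assumption, not a figure from the study):

```python
RHO_SEAWATER = 1025.0  # kg/m^3, typical seawater density (assumed value)
RHO_AIR = 1.225        # kg/m^3, air density at sea level

def power_density(rho, v):
    """Kinetic power per unit cross-sectional area, P/A = 0.5 * rho * v**3 (W/m^2)."""
    return 0.5 * rho * v ** 3

# Power density of the 1.2 m/s threshold current reported in the study:
p_current = power_density(RHO_SEAWATER, 1.2)  # about 886 W/m^2

# Wind speed that would deliver the same power density:
v_wind_equivalent = (2 * p_current / RHO_AIR) ** (1 / 3)  # about 11.3 m/s

print(f"current: {p_current:.0f} W/m^2, equivalent wind: {v_wind_equivalent:.1f} m/s")
```

A 1.2 m/s current thus matches the power density of an ~11 m/s wind, i.e. near the rated speed of many wind turbines, which is the sense in which the measured currents compare favourably.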
Rapid and significant technological advances related to artificial intelligence (AI) have generated a broad debate on political, social, and ethical impacts, raising important questions that require multidisciplinary analysis and investigation. One of the issues under discussion is whether the integration of AI in the political context represents a promising opportunity to improve the efficiency of democratic participation and policy-making processes, as well as to increase institutional accountability. The aim of this article is to propose a theoretical reflection that allows us to fully understand the implications and potential consequences of the application of AI in the political field without neglecting its social and ethical effects: can such uses really be considered democratic, or do they represent a dangerous trend of using algorithms for manipulative purposes? To achieve this, a deductive approach is adopted based on theories, imaginaries, and expectations concerning AI in the specific context of politics. This analysis contributes to the understanding of the complex dynamics related to the use of AI in politics by offering a critical perspective and a picture of the different connections at play.
Digital technology is considered a broad and primary motivator of cultural capital development in the academic field. It has made tangible progress as an open social framework that supports students' knowledge development and facilitates their integration into the scientific system. This chapter aims to study future dimensions of the scientific field in light of the rapid development of digital technology, and to understand the extent of artificial intelligence's contribution to the production and reproduction of cultural capital and scientific authority in the scientific field, using a qualitative methodology and the techniques of observation and structured interviews with students from Mohammed I University in Oujda, Morocco. Through this sociological study, it is concluded that artificial intelligence enhances the cultural capital of students in the scientific field, and that the possibility of a new balance of scientific authority in the scientific field arising from artificial intelligence is very low.
Large Language Models (LLMs) and generative Artificial Intelligence (AI) have become the latest disruptive digital technologies to breach the dividing lines between scientific endeavour and public consciousness. LLMs such as ChatGPT are platformed through commercial providers such as OpenAI, which provide a conduit through which interaction is realised via a series of exchanges in written natural language text known as 'prompt engineering'. In this paper, we use Membership Categorisation Analysis to interrogate a collection of prompt engineering examples gathered from the endogenous ranking of prompting guides hosted on emerging generative AI community and practitioner-relevant social media. We show how both formal and vernacular ideas surrounding 'natural' sociological concepts are mobilised in order to configure LLMs for useful generative output. In addition, we identify some of the interactional limitations and affordances of using role prompt engineering for generating interactional stances with generative AI chatbots and (potentially) other formats. We conclude by reflecting on the consequences of these everyday socio-technical routines and the rise of 'ethno-programming' for generative AI that is realised through natural language and everyday sociological competencies.
In the context of modernity, the world is, to a large extent, experienced as a point of aggression by those who inhabit it. In accordance with the functionalist logic inherent in systemic imperatives (such as rationalization, optimization, growth, expansion, competition, and profit-maximization), the modern world is dominated by the structural principle of dynamic stabilization and the cultural principle of the expansion of humanity’s reach. A society that can stabilize itself only dynamically—and seeks to achieve this through constant economic growth, technological acceleration, and cultural innovation—is bound to generate pathological levels of desire for endless escalation.
In just a few decades, the concept of the information society has transitioned from a vaguely established idea attempting to predict the future of humanity to a tangible reality that encompasses the entire globe. This rapid progress in information technology and communication has resulted in the majority of our everyday devices, such as computers, televisions, home appliances, and even vehicles, being imbued with smart technology intended to benefit society. While the evolution of the internet is linked to technological advancements, it is also heavily influenced by social factors that give rise to new consequences for our society. Given the fast-paced evolution of the information society, it is essential to keep up with these changes. In Romania, the .RO domains play a crucial role in shaping the country's information society. However, these domains are often targeted by attacks, and continuous measures must be implemented to ensure the Internet operates smoothly in the country. The management of .RO domains involves several specialized applications, and their development and security are an ongoing project. Despite the remarkable technological advancements made in recent decades, the physical location of a business or organization remains crucial. In this regard, national domain names are of significant importance. The Romanian Top Level Domain is a vital component of the information society, and its dependability and safety are fundamental for social and economic activities and the smooth operation of online services. Although the Domain Name System is often viewed as merely a technical task, its administration involves many factors, such as infrastructure stability, system security, and resource allocation. DNS is not solely a function of Internet governance, as it encompasses technologies that contribute to the functioning of the Internet.
The DNS embeds content, and conflicts over property rights may arise concerning domain names that contain text (letters and/or numbers). The system must have several checkpoints that sanitize and verify content access. Security plays a key role in Internet governance, and this issue is a significant concern at both the country and ccTLD level, as well as for Internet governing bodies such as ICANN and IANA. Organizations responsible for Internet governance continuously update their security policies and offer training and advisory sessions at various levels to counter security threats. At the ccTLD level, security is a continuous topic as the risks associated with it cannot be tolerated.
Opinions are divided on digitalisation and the ever-greater space AI occupies in schools. Not infrequently, this leads to a rather polarised debate in which human values are pitted against economic ones. In the present article, this space is problematised from the standpoint of special needs education, linked to three overarching themes: digitalisation, AI and machine learning, and the teacher's role. The questions the article more specifically revolves around are: What problems arise from external actors and increased digitalisation within the field of special needs education? What happens to the special needs education profession in a school increasingly shaped by AI? This is an exploratory study that takes as its starting point a Foucault-inspired approach to analysing the consequences of AI in education (AIed) within the educational field. The material consists of interviews, newspaper articles, features from SVT (Swedish public television), and the companies' websites and reports. The results indicate that the EdTech industry has consequences for the teacher's role, not least in connection with the special needs education profession. In many respects, it is unclear who (the school, research, or the companies) governs what happens at the policy level as well as in the individual classroom and for the individual. This, in turn, raises a series of questions concerning AI and ethics.
The main purpose of this paper is to assess the validity of the contention that, over the past few decades, the public sphere has undergone a new structural transformation. To this end, the analysis focuses on Habermas’s recent inquiry into the causes and consequences of an allegedly ‘new’ or ‘further’ [erneuten] structural transformation of the political public sphere. The paper is divided into two parts. The first part considers the central arguments in support of the ‘new structural transformation of the public sphere’ thesis, shedding light on its historical, political, economic, technological, and sociological aspects. The second part offers some reflections on the most important limitations and shortcomings of Habermas’s account, especially with regard to key social developments in the early twenty-first century. The paper concludes by positing that, although the constitution of the contemporary public sphere is marked by major—and, in several respects, unprecedented—structural transformations, their significance should not be overstated, not least due to the enduring role of critical capacity in highly differentiated societies.
Social media, in general, and Facebook in particular, have been clearly identified as important platforms for the dissemination of mis- and disinformation and related problematic content. However, the patterns and processes of such dissemination are still not sufficiently understood. We detail a novel computational methodology that focusses on the identification of high-profile vectors of “fake news” and other problematic information in public Facebook spaces. The method enables examination of networks of content sharing that emerge between public pages and groups, and external sources, and the study of longitudinal dynamics of these networks as interests and allegiances shift and new developments (such as the COVID-19 pandemic or the US presidential elections) drive the emergence or decline of dominant themes. Through a case study of content captured between 2016 and 2021, we demonstrate how this methodology allows the development of a new and more comprehensive picture of the overall impact of “fake news,” in all its forms, on contemporary societies.
In recent years, the field of natural language processing has seen substantial developments, resulting in powerful voice-based interactive services. The quality of the voice and interactivity are sometimes so good that the artificial can no longer be differentiated from real persons. Thus, discerning whether an interactional partner is a human or an artificial agent is no longer merely a theoretical question but a practical problem society faces. Consequently, the ‘Turing test’ has moved from the laboratory into the wild. The passage from the theoretical to the practical domain also accentuates understanding as a topic of continued inquiry. When interactions are successful but the artificial agent has not been identified as such, can it also be said that the interlocutors have understood each other? In what ways does understanding figure in real-world human–computer interactions? Based on empirical observations, this study shows how we need two parallel conceptions of understanding to address these questions. By departing from ethnomethodology and conversation analysis, we illustrate how parties in a conversation regularly deploy two forms of analysis (categorial and sequential) to understand their interactional partners. The interplay between these forms of analysis shapes the developing sense of interactional exchanges and is crucial for established relations. Furthermore, outside of experimental settings, any problems in identifying and categorizing an interactional partner raise concerns regarding trust and suspicion. When suspicion is roused, shared understanding is disrupted. Therefore, this study concludes that the proliferation of conversational systems, fueled by artificial intelligence, may have unintended consequences, including impacts on human–human interactions.