Clemens Apprich’s research while affiliated with University of Applied Arts Vienna and other places


Publications (5)


‚Das könnte Sie auch interessieren‘ – Methoden zur Erforschung algorithmischer Empfehlungssysteme [‘You might also be interested in this’ – Methods for Researching Algorithmic Recommender Systems]
  • Chapter

June 2024 · 8 Reads

Inga Luchs · Clemens Apprich

When Achilles met the tortoise: Towards the problem of infinitesimals in machine learning
  • Chapter
  • Full-text available

December 2023 · 12 Reads

Figure 3: Machine Learning Crash Course (screenshot from Google Developers 2020).

When Achilles met the tortoise: Towards the problem of infinitesimals in machine learning

November 2023 · 26 Reads

How do artificial neural networks and other forms of artificial intelligence interfere with methods and practices in the sciences? Which interdisciplinary epistemological challenges arise when we think about the use of AI beyond its dependency on big data? Not only the natural sciences, but also the social sciences and the humanities seem to be increasingly affected by current approaches to subsymbolic AI, which master problems of quality (fuzziness, uncertainty) in a hitherto unknown way. But what are the conditions, implications, and effects of these (potential) epistemic transformations, and how must research on AI be configured to address them adequately?



Learning machine learning: On the political economy of big tech's online AI courses

February 2023 · 22 Reads · 18 Citations

Big Data & Society

Machine learning (ML) algorithms are still a novel research object in the field of media studies. While existing research focuses on concrete software on the one hand and the socioeconomic context of the development and use of these systems on the other, this paper studies online ML courses as a research object that has received little attention so far. By pursuing a walkthrough and critical discourse analysis of Google's Machine Learning Crash Course and IBM's introductory course to Machine Learning with Python, we shed light not only on the technical knowledge, assumptions, and dominant infrastructures of ML as a field of practice, but also on the economic interests of the companies providing the courses. We demonstrate how the online courses help Google and IBM consolidate and even expand their position of power by recruiting new AI talent and by securing their infrastructures and models as the dominant ones. Further, we show how the companies not only greatly influence how ML is represented, but also how these representations in turn influence and direct current ML research and development, as well as the societal effects of their products. Here, they project an image of fair and democratic artificial intelligence, which stands in stark contrast to the ubiquity of their corporate products and to the advertised directives of efficiency and performativity the companies strive for. This underlines the need for alternative infrastructures and perspectives.

Citations (1)


... All these challenges indicate the urgent need for cautious oversight regarding wider access to AI. Further, AI learning needs democratization to equitably impact AI development (Luchs et al. 2023). A rapid rise in the capabilities of AI and its utilization are not guaranteed. ...

Reference:

Democratizing Artificial Intelligence for Social Good: A Bibliometric–Systematic Review Through a Social Science Lens
Learning machine learning: On the political economy of big tech's online AI courses

Big Data & Society