Fig 2. Aggregated average values of perceived realistic and identity threat before and after the intervention.

Source publication
Preprint
Artificial intelligence's progress holds great promise for helping society address pressing societal issues. In particular, Large Language Models (LLMs) and the chatbots derived from them, such as ChatGPT, have greatly improved the natural language processing capabilities of AI systems, allowing them to process an unprecedented amount of unstructured data. T...

Contexts in source publication

Context 1
... this measure a dependent t-test revealed a significant difference (p < 0.05) between the pre (mPre = 4.17, SD = 1.39) and post (mPost = 3.73, SD = 1.42) questionnaires (Fig 2). This suggests that participants' perceived realistic threat from AI decreased after the intervention. ...
Context 2
... dependent t-test revealed a significant difference (p < 0.05) between the pre (mPre = 4.08, SD = 1.39) and post (mPost = 3.57, SD = 1.54) questionnaires (Fig 2). This finding indicates that participants' perceived AI identity threat decreased significantly after the intervention. ...
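The pre/post comparisons above rely on a dependent (paired) t-test, which compares each participant's own pre- and post-intervention scores rather than two independent groups. A minimal sketch in Python using scipy; the arrays are illustrative placeholders, not the study's actual data:

```python
# Minimal sketch of a dependent (paired) t-test, assuming one pre and one
# post questionnaire score per participant. The data below are hypothetical.
import numpy as np
from scipy import stats

pre = np.array([5.0, 4.5, 3.8, 4.9, 4.2, 3.6, 4.8, 4.1])   # hypothetical pre scores
post = np.array([4.2, 4.0, 3.5, 4.6, 3.7, 3.4, 4.3, 3.9])  # hypothetical post scores

# ttest_rel pairs each participant's pre and post score before testing
# whether the mean difference is zero.
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# p < 0.05 would indicate a significant pre/post difference, mirroring the
# results reported above for realistic and identity threat.
```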

Similar publications

Preprint
When answering questions, LLMs can convey not only an answer, but a level of confidence about the answer being correct. This includes explicit confidence markers (e.g. giving a numeric score) as well as implicit markers, like an authoritative tone or elaborating with additional knowledge. For LLMs to be trustworthy knowledge sources, the confidence...
Preprint
Language model (LM) post-training (or alignment) involves maximizing a reward function that is derived from preference annotations. Direct Preference Optimization (DPO) is a popular offline alignment method that trains a policy directly on preference data without the need to train a reward model or apply reinforcement learning. However, typical pre...
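For context, the standard DPO objective that such methods optimize can be written as follows (notation from the original DPO paper: $\sigma$ is the logistic function, $\beta$ a scaling hyperparameter, $\pi_{\mathrm{ref}}$ the frozen reference policy, and $(x, y_w, y_l)$ a prompt with preferred and dispreferred responses):

$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]$$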
Chapter
Artificial intelligence's (AI) progress holds great promise for tackling pressing societal concerns such as health and climate. Large Language Models (LLMs) and the chatbots derived from them, such as ChatGPT, have greatly improved the natural language processing capabilities of AI systems, allowing them to process an unprecedented amount of unstructured data. Howe...

Citations

Article
In the 21st century, thanks to technological progress, the range of skills that shape our everyday lives, that we must possess as employees, and that teachers and educators need to develop during the teaching-learning process is expanding rapidly. With the emergence of generative artificial intelligence, the skill of prompting has come to the fore, alongside the need to raise critical thinking to a higher level. Creating high-quality content requires knowing the steps of prompt creation, the factors that influence it, and its linguistic characteristics. In this paper, we explore what knowledge is needed to build and develop this skill.