Chapter

Asking Right Questions: Towards Engaging and Inclusive Learning Environment to Enhance Creativity Using Generative AI


Abstract

In the rapidly evolving educational landscape, the integration of Generative AI into learning environments holds the potential to revolutionize the way educators engage with students and foster creativity. This chapter explores the transformative role of Generative AI in crafting an inclusive and engaging learning atmosphere that encourages students to ask the right questions. By leveraging the capabilities of Generative AI, educators can personalize learning experiences, adapt to diverse learning styles, and stimulate critical thinking. This chapter delves into the methodologies for implementing AI tools that prompt students to formulate insightful questions, thereby enhancing their creative problem-solving skills. It also examines the pedagogical frameworks necessary to ensure that the use of Generative AI aligns with the goals of equitable education. Through empirical data, the chapter demonstrates how asking the right questions in an AI-enhanced environment can lead to a deeper understanding of the subject matter and a more profound development of creativity among learners. The findings suggest that when students are guided to inquire effectively, the boundaries of innovation are expanded, making education not only a process of learning but also a journey of discovery and creation.

No full-text available



Article
Full-text available
The increasing integration of artificial intelligence (AI) in visual analytics (VA) tools raises vital questions about the behavior of users, their trust, and the potential of induced biases when provided with guidance during data exploration. We present an experiment where participants engaged in a visual data exploration task while receiving intelligent suggestions supplemented with four different transparency levels. We also modulated the difficulty of the task (easy or hard) to simulate a more tedious scenario for the analyst. Our results indicate that participants were more inclined to accept suggestions when completing a more difficult task despite the AI's lower suggestion accuracy. Moreover, the levels of transparency tested in this study did not significantly affect suggestion usage or subjective trust ratings of the participants. Additionally, we observed that participants who utilized suggestions throughout the task explored a greater quantity and diversity of data points. We discuss these findings and the implications of this research for improving the design and effectiveness of AI-guided VA tools.
Article
Full-text available
Studies show that only 27% of graduates believe that universities and colleges taught them how to ask their own questions. The Question Formulation Technique (QFT) teaches students to think critically every time they read, to connect concepts, and to decide whether to take facts and information at face value or to dig a little deeper. It is generally reported that students ask less than a fifth of the questions teachers estimate would be elicited and deem desirable. Poor participation by students in questioning during the teaching and learning process has often led to poor learning outcomes, manifested by poor academic performance. The study was instituted to evaluate the equipping of secondary school students with 21st-century skills using QFT-trained teachers in ten schools in the South Eastern Region of Kenya. The teachers and students were trained to develop skills in producing questions, categorizing questions, prioritizing questions, and in reflection. The study found that teachers were eager to be trained in QFT skills so as to address observed low student engagement and poor performance. The assessment of the implementation of QFT in content delivery found that students had many questions to ask when given the opportunity and not judged during the teaching and learning process. The analysis of the questions showed that the QFT sparked students' potential for divergent, convergent, and metacognitive thinking during and after the teaching and learning process. The teachers faced the challenge of focusing students' class questions on the lesson objectives within the stipulated lesson time. However, online engagement between students and teachers was observed to be key in spurring learners' curiosity, developing patterns in their thinking and questioning, and facilitating lifelong learning.
Article
Full-text available
The disruptive potential of artificial intelligence (AI) augurs a requisite evolutionary concern for artificial wisdom (AW). However, given both a dearth of institutionalized scientific impetus and culturally subjective understandings of wisdom, there is currently no consensus surrounding its future development. This article provides a succinct overview of wisdom within various cultural traditions to establish a foundational common ground for both its necessity and global functioning in the age of AI. This is followed by a more directed argument in favor of pedagogical practices that inculcate students with a theoretical/practical wisdom in support of individual/collective critical capacities directed at democratic planetary stewardship in the age of AI education. The article concludes with a distilled synthesis of wisdom philosophies as principles that establish a framework for the development of a new planetary ethics built upon a symbiotic relationship between humans-technology and nature.
Article
Full-text available
The interest in artificial intelligence (AI) in education has erupted during the last few years, primarily due to technological advances in AI. It is therefore argued that students should learn about AI, although it is debated exactly how it should be applied in education. AI literacy has been suggested as a way of defining competencies for students to acquire to meet a future everyday- and working life with AI. This study argues that researchers and educators need a framework for integrating AI literacy into technological literacy, where the latter is viewed as a multiliteracy. This study thus aims to critically analyse and discuss different components of AI literacy found in the literature in relation to technological literacy. The data consists of five AI literacy frameworks related to three traditions of technological knowledge: technical skills, technological scientific knowledge, and socio-ethical technical understanding. The results show that AI literacy for technology education emphasises technological scientific knowledge (e.g., knowledge about what AI is, how to recognise AI, and systems thinking) and socio-ethical technical understanding (e.g., AI ethics and the role of humans in AI). Technical skills such as programming competencies also appear but are less emphasised. Implications for technology education are also discussed, and a framework for AI literacy for technology education is suggested.
Article
Full-text available
Purpose: Artificial intelligence (AI) chatbots, such as ChatGPT and GPT-4, developed by OpenAI, have the potential to revolutionize education. This study explores the potential benefits and challenges of using ChatGPT in education (or "educative AI").
Design/Approach/Methods: This paper proposes a theoretical framework called "IDEE" for educative AI, such as using ChatGPT and other generative AI in education, which includes identifying the desired outcomes, determining the appropriate level of automation, ensuring ethical considerations, and evaluating effectiveness.
Findings: The benefits of using ChatGPT in education, or more generally educative AI, include a more personalized and efficient learning experience for students as well as easier and faster feedback for teachers. However, challenges such as the untested effectiveness of the technology, limitations in the quality of data, and ethical and safety concerns must also be considered.
Originality/Value: This study explored the opportunities and challenges of using ChatGPT in education within the proposed theoretical framework.
Article
Full-text available
This article discusses OpenAI's ChatGPT, a generative pre-trained transformer based on large language model, which uses natural language processing to fulfill text-based user requests (i.e., a “chatbot”). The history and principles behind ChatGPT and similar large language models are discussed. This technology is then discussed in relation to its potential impact on academia and scholarly research and publishing. ChatGPT is seen as a potential model for the automated preparation of essays and other types of scholarly manuscripts. Potential ethical issues that could arise with the emergence of large language models like GPT-3, the underlying technology behind ChatGPT, and its usage by academics and researchers, are discussed and situated within the context of broader advancements in artificial intelligence, machine learning, language models, and natural language processing for research and scholarly publishing.
Article
Full-text available
Artificial intelligence (AI) technologies are used in many dimensions of our lives, including education. Motivated by the increasing use of AI technologies and the current state of the art, this study examines research on AI from the perspective of online distance education. Following a systematic review protocol and using data mining and analytics approaches, the study examines a total of 276 publications. Time trend analysis shows a steady increase in publications, peaking in recent years, and China, India, and the United States are the leading countries in research on AI in online learning and distance education. Computer science and engineering are the research areas contributing the most, followed by the social sciences. t-SNE analysis reveals three dominant clusters showing thematic tendencies, which are as follows: (1) how AI technologies are used in online teaching and learning processes, (2) how algorithms are used for the recognition, identification, and prediction of students' behaviors, and (3) adaptive and personalized learning empowered through artificial intelligence technologies. Additionally, the text mining and social network analysis identified three broad research themes, which are (1) educational data mining, learning analytics, and artificial intelligence for adaptive and personalized learning; (2) algorithmic online educational spaces, ethics, and human agency; and (3) online learning through detection, identification, recognition, and prediction.
Article
Full-text available
Applications of artificial intelligence in education (AIEd) are emerging and are new to researchers and practitioners alike. Reviews of the relevant literature have not examined how AI technologies have been integrated into each of the four key educational domains of learning, teaching, assessment, and administration. The relationships between the technologies and learning outcomes for students and teachers have also been neglected. This systematic review study aims to understand the opportunities and challenges of AIEd by examining the literature from the last 10 years (2012–2021) using matrix coding and content analysis approaches. The results present the current focus of AIEd research by identifying 13 roles of AI technologies in the key educational domains, 7 learning outcomes of AIEd, and 10 major challenges. The review also provides suggestions for future directions of AIEd research.
Article
Full-text available
This study aimed to identify STEM PCK competence and provide a barometer for Gen Z and millennial teachers in West Kalimantan, the prospective stewards of future education, with an evaluation stage expected as a further step toward realizing a progression scale with implications for educational advancement, especially in West Kalimantan. Respondents involved in this study amounted to 41 teachers aged 23 to 30 years spread over 14 districts/cities in West Kalimantan, selected using a probability sampling technique. The instrument used is the STEM PCK questionnaire adopted from Yildirim and Sahin (2019), which consists of 56 statements: 12 statements related to pedagogy, 14 statements on knowledge of 21st-century skills, and 30 other questions related to STEM PCK knowledge. The data were obtained through a Likert rating scale, and each respondent was classified into one of three levels (high, medium, or low) using the interpretation of the average score on the Likert scale. The results showed that, in general, 5 respondents (12.2%) were in the high category, 36 respondents (87.8%) were in the medium category, and no respondents were in the low category. This can be interpreted to mean that although no respondents had low STEM PCK competence, many millennial teachers in West Kalimantan are at a medium level and need support to implement every aspect of STEM PCK in learning optimally, so that dynamic educational standards are realized in line with global demands in the modern era.
Article
The ability of children to ask curiosity-driven questions is an important skill that helps improve their learning. For this reason, previous research has explored designing specific exercises to train this skill. Several of these studies relied on providing semantic and linguistic cues to train children to ask more of such questions (also called divergent questions). But despite showing pedagogical efficiency, this method is still limited, as it relies on generating the said cues by hand, which can be a very long and costly process. In this context, we propose to leverage advances in the natural language processing (NLP) field and investigate the efficiency of using a large language model (LLM) for automating the production of key parts of pedagogical content within a curious question-asking (QA) training. We study generating the said content using the "prompt-based" method, which consists of explaining the task to the LLM in natural text. We evaluate the output using human expert annotations and comparisons with hand-generated content. Results indeed suggested the relevance and usefulness of this content. We then conduct a field study in primary school (75 children aged 9–10), where we evaluate children's QA performance when receiving this training. We compare 3 types of content: 1) hand-generated content that proposes "closed" cues leading to predefined questions; 2) GPT-3-generated content that proposes the same type of cues; 3) GPT-3-generated content that proposes "open" cues leading to several possible questions. Children were assigned to one of these groups. Based on human annotations of the questions generated, we see a similar QA performance between the two "closed" trainings (showing the scalability of the approach using GPT-3), and a better one for participants with the "open" training.
These results suggest the efficiency of using LLMs to support children in generating more curious questions, using a natural language prompting approach that affords usability by teachers and other users who are not specialists in AI techniques. Furthermore, results also show that open-ended content may be more suitable for training curious question-asking skills.
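The "prompt-based" method described above amounts to handing the LLM a plain-text task description. As a minimal sketch, the template below assembles a hypothetical "open cue" prompt of the kind the study contrasts with predefined "closed" cues; the function name and wording are illustrative assumptions, not the study's actual prompts:

```python
def build_open_cue_prompt(story: str, topic: str) -> str:
    """Assemble a natural-language prompt asking an LLM for an "open"
    cue: a hint that can lead children toward several different
    curiosity-driven (divergent) questions, rather than one predefined
    question. The wording is a hypothetical illustration."""
    return (
        "You are helping primary-school children practise asking "
        "curiosity-driven questions.\n"
        f"Here is a short story: {story}\n"
        f"Give one open-ended hint about '{topic}' that could lead the "
        "children to ask several different questions of their own. "
        "Do not ask the question for them."
    )

# Example usage with an invented story and topic:
prompt = build_open_cue_prompt(
    story="Maya found a strange seed that glows at night.",
    topic="why the seed glows",
)
print(prompt)
```

The resulting text would be sent to an LLM as-is; a "closed" variant would instead instruct the model to produce a single specific question, which is the contrast the field study evaluates.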
Article
It is essential for items in assessments of mathematics teachers' knowledge to evoke the desired response processes – to be interpreted and responded to by teachers as intended by item developers. In this study, we sought to unpack evidence that middle school mathematics teachers were not consistently interacting as intended with constructed response (i.e. open-ended) items designed to assess their pedagogical content knowledge (PCK). We analyzed recent data derived from think-aloud interviews with 13 teachers involving 38 assessment items designed to tap PCK regarding proportional reasoning. Five key issues associated with undesired response processes were identified: (1) scenarios provided insufficient information, (2) content knowledge (CK) and PCK elements were confounded, (3) questions asked about the scenarios lacked specificity, (4) items contained distracting text and/or visual elements, and (5) differences between math education research and classroom teacher work cultures led to unanticipated interpretations of items. These issues were associated with teacher responses that were problematic (e.g. vague, off topic, etc.). In addition, we suggest that obtaining response process evidence is critical, and the way it is obtained may impact the average difficulty of the final pool of assessment items developed.
To solve a tough problem, reframe it
  • J Binder
  • M D Watkins
Asking the right questions: A guide to critical thinking
  • M N Browne
  • S M Keeley
Metacognition, learning, & Socrates: asking questions to foster entrepreneurial minds
  • P Tarasanski
Improving Socratic question generation using data augmentation and preference optimization. arXiv preprint
  • N A Kumar
  • A Lan
AI-driven adaptive learning platforms in education
  • A Johnson
Unlocking creativity through unconventional questions
  • A Jones
Culturally responsive questioning in inclusive classrooms
  • M Thomas
Iterative questioning and feedback loops
  • L Wilson
The role of curiosity in learning outcomes
  • K Brown
AI-guided problem-solving approaches in education
  • J Smith
Exploring the use of AI-generated content in creative writing workshops
  • X Chen
Revolutionizing education: The dynamic intersection of technology and learning
  • L Kokkinos
Enhancing gamified learning experiences with AI-powered adaptation
  • S Lee
Aligning question-framing with Bloom's taxonomy
  • L Wilson
Analogical questions for conceptual understanding
  • L Wilson
Dynamic and adaptive questioning systems
  • J Smith
AI-mediated collaborative questioning platforms
  • A Jones