Project
Understanding the impacts of informal science education on the public
Project log
Astrobiology is an ideal context for engaging students with the processes of science. However, measuring the effectiveness of astrobiology experiences whose learning outcome is to improve students' views of science is difficult. Most studies report little to no change in students' views of science, especially for short experiences lasting hours or days, and most rely on quantitative methods such as numerical scores derived from survey rating scales and responses. We propose that, hidden beneath those figures, students' written survey responses reveal how effective astrobiology outreach is at improving understanding of science. We sampled 483 students from multiple high schools involved in an established astrobiology outreach program in Australia, using pre- and post-intervention data collected from an open- and closed-form survey to identify the program's impacts on students' views of science. We applied both conventional quantitative score analysis and computer-based qualitative analysis using the NVivo and Linguistic Inquiry and Word Count (LIWC) programs. While there was little difference in post-survey scores, the qualitative data show that the astrobiology program creates cognitive conflict in students: a trigger to the learning process that can open students to the first steps in understanding the creative, subjective, and tentative nature of science.
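To make the LIWC-style approach concrete, the sketch below counts how often words from predefined psychological categories appear in an open-ended survey response and reports each category's share of total words. This is a minimal illustration of the technique, not the actual LIWC software: the category dictionaries and the sample response are hypothetical, since the real LIWC word lists are proprietary and the study's student data are not reproduced here.

```python
import re
from collections import Counter

# Hypothetical category dictionaries for illustration only; real LIWC
# uses much larger, licensed word lists.
CATEGORIES = {
    "cognitive": {"think", "because", "know", "question", "wonder"},
    "tentative": {"maybe", "perhaps", "might", "possibly", "guess"},
    "certainty": {"always", "never", "definitely", "absolutely", "proven"},
}

def category_rates(text: str) -> dict:
    """Return each category's share of the total word count, as a percentage."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    counts = Counter()
    for word in words:
        for cat, vocab in CATEGORIES.items():
            if word in vocab:
                counts[cat] += 1
    return {cat: 100.0 * counts[cat] / total for cat in CATEGORIES}

# Invented example response, not taken from the study's data.
response = "I think science might never be finished because we always question results"
rates = category_rates(response)
```

Comparing such category rates between pre- and post-intervention responses is one way word-count analysis can surface shifts (for example, toward more tentative language) that aggregate survey scores miss.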
To date, true/false, multiple-choice and short-answer exams have been the standard tools
for summative assessment in online courses. These techniques are easy to administer
and relatively quick ways to assess large groups of students, but they are not the most
sensitive and accurate assessment methods. We report on the development and testing
of a new assessment tool, a digital game, designed to measure student learning and
scientific literacy, in this case conceptual and critical thinking about the nature of science
(how science is done). It was tested in a third-level undergraduate astrobiology course at
the University of New South Wales, whose key learning outcome is getting students to
think like scientists.
The tool uses concept maps and Teachable Agents (TAs). Concept maps are visual
representations of cognitive structures, used here to measure changes in students' learning pre
and post course. Concept maps have been shown to be powerful and effective evaluation
tools for both formative and summative assessment (Novak & Cañas, 2008; Novak &
Gowin, 1984). Developed at the Stanford Graduate School of Education, a Teachable Agent
(TA) is a learning technology that uses the social metaphor of teaching a computer agent
by creating a concept map that serves as the agent's 'brain' (Schwartz & Arena, 2009).
The teaching metaphor draws on students' sense of responsibility for their agent, known as
the 'protégé effect', to motivate them to put more effort into producing their concept
maps than they would for themselves (Chase, Chin, Oppezzo, & Schwartz, 2009; Schwartz
& Arena, 2009). This game-like instrument measures the choices (concept maps) that
students make in 'teaching' their TA about the scientific process, to detect any changes pre
and post course. Because concept maps are complex and individualised representations
of students' cognitive structures, they can be unmanageable as summative
assessment, especially for large groups of students, since they are time-consuming
to assess and score. To allow scalability to large online courses, the new tool
assesses students' concept maps automatically via an algorithm, which makes it faster and easier to use than true/false and multiple-choice exams.
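The abstract says the maps are scored "via an algorithm" but does not describe it, so the sketch below illustrates one common automatic approach: representing each map as a set of propositions (concept, linking phrase, concept) and scoring a student map by the fraction of an expert reference map it reproduces. The expert and student maps here are invented examples, not the course's actual materials.

```python
# Hypothetical expert reference map: each proposition is a
# (concept, linking phrase, concept) triple.
EXPERT_MAP = {
    ("hypothesis", "is tested by", "experiment"),
    ("experiment", "produces", "evidence"),
    ("evidence", "supports or refutes", "hypothesis"),
    ("theory", "is revised by", "evidence"),
}

def proposition_score(student_map: set) -> float:
    """Fraction of expert propositions present in the student's map."""
    return len(student_map & EXPERT_MAP) / len(EXPERT_MAP)

# Invented pre- and post-course maps for a single student.
pre_map = {("hypothesis", "is tested by", "experiment")}
post_map = {
    ("hypothesis", "is tested by", "experiment"),
    ("experiment", "produces", "evidence"),
    ("theory", "is revised by", "evidence"),
}

learning_gain = proposition_score(post_map) - proposition_score(pre_map)
```

Set intersection makes the scoring instantaneous per map, which is what allows this kind of assessment to scale to large online cohorts; a production system would also need to normalise wording so that equivalent phrasings of a proposition match.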
This study indicates that the newly developed tool may be more sensitive and accurate
than traditional assessment methods at measuring how students integrate the learning
outcomes of online courses into their overall learning, rather than rote-learning content.
Validation of the assessment tool, as well as further testing, is required.
What evidence is there that any public communication of space science is effective? In 2001, Sless and Shrensky pointed out that the evidence for the effectiveness of science communication in general is about as "… strong as the evidence linking rainmaking ceremonies to the occurrence of rain" [1]. In 2017, very little has changed in measuring the success of the intended outcomes of Education and Public Outreach activities in space science, or in any other area of science. There have still been few attempts to formally measure the success of public engagement activities, such as public talks, science cafes, interactive events and festivals, against clear indicators of success. The focus of this research is to measure the effectiveness of science education and outreach activities in achieving their objectives; that is, changing or influencing participants' understanding, attitudes and perceptions of science. We report on a pilot study of four education and outreach activities held at a large museum in a major Australian capital city. Pre- and post-questionnaires containing validated Likert-scale items were used to measure participants' trust in science and scientists, their understanding of scientific practice, and their opinions on its relevance and value to society. A total of 46 pre and post surveys were matched; 37 of the 46 data sets were from space science events. The results show that after the events, participants demonstrated more positive attitudes and an increase in trust, but a decrease in understanding of scientific practice. These results suggest that the way we are communicating space science misleads the public into perceiving science as absolute, instead of the tentative and evolving endeavour that it actually is. We argue that we need to change the way we communicate space science by focussing more on revealing how science is practiced.
We need to be more open about the way conclusions are reached in order to increase the public's understanding of scientific practice. We also argue that increasing the public's understanding of scientific practice is key to understanding science itself and to increasing trust in science and scientists. The results of this pilot study also point to the need for new instruments that are more sensitive in assessing the public's understanding of scientific practice and the impact of space science outreach and education efforts.
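The core analysis described above pairs each participant's pre- and post-event Likert scores and examines the within-person change per scale. The sketch below shows that matched-pairs calculation; the scores are invented for illustration (the study's real data are not reproduced here), and a full analysis would add a significance test such as a Wilcoxon signed-rank test.

```python
from statistics import mean

# Hypothetical matched responses: (pre, post) on a 1-5 Likert scale,
# one tuple per participant. Not the study's actual data.
trust_pairs = [(3, 4), (4, 4), (2, 4), (3, 5)]
practice_pairs = [(4, 3), (3, 3), (4, 2), (3, 3)]  # understanding of practice

def mean_change(pairs):
    """Average post-minus-pre difference across matched participants."""
    return mean(post - pre for pre, post in pairs)

trust_change = mean_change(trust_pairs)        # positive: trust increased
practice_change = mean_change(practice_pairs)  # negative: understanding fell
```

Matching pre and post responses per participant, rather than comparing group averages, is what lets a small pilot sample (46 matched surveys) detect opposing movements such as rising trust alongside falling understanding of scientific practice.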
What evidence is there that any public communication of astrobiology is effective in changing or influencing understanding, attitudes and perceptions of science? In 2001, Sless and Shrensky pointed out that the evidence for the effectiveness of science communication in general is about as “… strong as the evidence linking rainmaking ceremonies to the occurrence of rain” [1]. In 2017, very little has changed, and there have been very few attempts to formally measure the success of public engagement activities, such as public talks, science cafes, interactive events and festivals, against clear indicators of success.
We report on a pilot study of four science education and outreach activities held at the Museum of Applied Arts and Sciences in Sydney, Australia. Pre- and post-questionnaires containing validated Likert-scale items were used to measure participants’ trust in science and scientists, their understanding of scientific practice, and their opinions on its relevance and value to society. A total of 46 pre and post surveys were matched. The results show that after the events, participants demonstrated more positive attitudes and an increase in trust, but a decrease in understanding of scientific practice.
The results of this pilot study suggest that the way we are communicating science misleads the public into perceiving science as absolute, instead of the evolving endeavour that it actually is. We argue that we need to change the way we communicate science, including astrobiology, by focussing more on revealing how science is practiced and being more open about the way conclusions are reached, in order to increase the public’s understanding of scientific practice. We also argue that increasing the public’s understanding of scientific practice is key to understanding science itself and to increasing trust in science and scientists.
The results of this pilot study will inform further research into the effectiveness of science education and outreach at achieving objectives, and will help identify the types of activities and formats that are most effective at achieving them.
References:
[1] Sless, D. and Shrensky, R. (2001). Science Communication in Theory and Practice, 97–105.