Book

Quasi-Experimentation: Design & Analysis Issues for Field Settings

Authors: Thomas D. Cook, Donald T. Campbell
... We discuss the limitations to our approach in this Section. We group these limitations as threats to Conclusion Validity, External Validity, Internal Validity, and Construct Validity (Cook and Campbell 1979; Feldt and Magazinius 2010; Wohlin et al. 2012). Threats to validity frequently involve the treatments and outcome measures used in the study as well as the higher-level constructs the treatments and outcomes represent (Cook and Campbell 1979; Wohlin et al. 2012; Ralph and Tempero 2018). ...
... We group these limitations as threats to Conclusion Validity, External Validity, Internal Validity, and Construct Validity (Cook and Campbell 1979; Feldt and Magazinius 2010; Wohlin et al. 2012). Threats to validity frequently involve the treatments and outcome measures used in the study as well as the higher-level constructs the treatments and outcomes represent (Cook and Campbell 1979; Wohlin et al. 2012; Ralph and Tempero 2018). In our study, the two primary outcome constructs we intended to observe were effectiveness (RQ1) and efficiency (RQ2). ...
... Conclusion Validity is about whether conclusions are based on statistical evidence (Cook and Campbell 1979; Wohlin et al. 2012). While we have empirical results for RQ1, a single case study is insufficient to draw statistically significant conclusions for effectiveness. ...
Article
Full-text available
Context: Applying vulnerability detection techniques is one of many tasks using the limited resources of a software project. Objective: The goal of this research is to assist managers and other decision-makers in making informed choices about the use of software vulnerability detection techniques through an empirical study of the efficiency and effectiveness of four techniques on a Java-based web application. Method: We apply four different categories of vulnerability detection techniques – systematic manual penetration testing (SMPT), exploratory manual penetration testing (EMPT), dynamic application security testing (DAST), and static application security testing (SAST) – to an open-source medical records system. Results: We found the most vulnerabilities using SAST. However, EMPT found more severe vulnerabilities. With each technique, we found unique vulnerabilities not found using the other techniques. The efficiency of manual techniques (EMPT, SMPT) was comparable to or better than the efficiency of automated techniques (DAST, SAST) in terms of Vulnerabilities per Hour (VpH). Conclusions: The vulnerability detection technique practitioners should select may vary based on the goals and available resources of the project. If the goal of an organization is to find “all” vulnerabilities in a project, they need to use as many techniques as their resources allow.
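As a side note on the efficiency measure reported above, the short Python sketch below illustrates how a Vulnerabilities-per-Hour comparison across the four technique categories can be computed; the effort and yield figures are placeholders, not the study's data.

```python
# Minimal sketch of the Vulnerabilities per Hour (VpH) efficiency metric.
# The figures below are placeholders for illustration, not the study's data.

def vph(vulnerabilities_found: int, hours_spent: float) -> float:
    """Efficiency = vulnerabilities found divided by effort in hours."""
    return vulnerabilities_found / hours_spent

# Hypothetical (vulnerabilities found, hours spent) pairs per technique category.
techniques = {
    "SMPT": (10, 40.0),
    "EMPT": (12, 20.0),
    "DAST": (5, 8.0),
    "SAST": (45, 100.0),
}

for name, (found, hours) in techniques.items():
    print(f"{name}: {vph(found, hours):.2f} VpH")
```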
... Regarding the causal relationship between variables in quantitative research, randomized controlled experiments, in which participants are randomly assigned to experimental and control groups (also referred to as "RCT" in clinical professions), provide higher-quality results than nonrandomized controlled quasi-experiments, in which participants are not randomly assigned to experimental and control groups (also referred to as "non-RCT"), and quasi-experiments outperform surveys [49]. Field experiments conducted in real-world environments, however, exhibit more favorable ecological validity than laboratory experiments [50]. If surveys were used to collect objective outcomes such as health indicators, they were also included. ...
... First, noncompliance with an ITT analysis might result in an unduly liberal estimate of the treatment effect [100]. Second, results obtained from unrepresentative participants might prevent observed effects from being generalizable to a larger population [49], but generalizability is improved more by many heterogeneous small experiments than by only a few large experiments [50]. Third, when outcome assessors were aware of participant allocation, outcomes might be assessed differently [101]. ...
... Not blinding participants to research questions [82] might affect their responses [105]. Lack of individual-level allocation [58,81,86], in which each participant did not have an equal opportunity of being assigned to groups, might result in incomparable groups before intervention [50]. Inconsistency of intervention (within and between groups) was an issue in several studies as the intervention included more than one treatment [53,56,58,66,71,72,77,81,94], such as various plant colors. ...
Article
Full-text available
The influences of indoor plants on people have been examined by only three systematic reviews and no meta-analyses. The objective of this study was therefore to investigate the effects of indoor plants on individuals’ physiological, cognitive, health-related, and behavioral functions by conducting a systematic review with meta-analyses to fill the research gap. The eligibility criteria of this study were (1) any type of participants, (2) any type of indoor plants, (3) comparators without any plants or with other elements, (4) any type of objective human function outcomes, (5) any type of study design, and (6) publications in either English or Chinese. Records were extracted from the Web of Science (1990–), Scopus (1970–), WANFANG DATA (1980–), and Taiwan Periodical Literature (1970–). Therefore, at least two databases were searched in English and in Chinese—two of the most common languages in the world. The last search date of all four databases was on 18 February 2021. We used a quality appraisal system to evaluate the included records. A total of 42 records was included for the systematic review, which concluded that indoor plants affect participants’ functions positively, particularly those of relaxed physiology and enhanced cognition. Separate meta-analyses were then conducted for the effects of the absence or presence of indoor plants on human functions. The meta-analyses comprised only 16 records. The evidence synthesis showed that indoor plants can significantly benefit participants’ diastolic blood pressure (−2.526, 95% CI −4.142, −0.909) and academic achievement (0.534, 95% CI 0.167, 0.901), whereas indoor plants also affected participants’ electroencephalography (EEG) α and β waves, attention, and response time, though not significantly. The major limitations of this study were that we did not include the grey literature and used only two or three records for the meta-analysis of each function. In brief, to achieve the healthy city for people’s health and effective functioning, not only are green spaces needed in cities, but also plants are needed in buildings.
... We discuss the limitations to our approach in this Section. We group these limitations as threats to Conclusion Validity, External Validity, Internal Validity, and Construct Validity [17,30,102]. ...
... Conclusion Validity is about whether conclusions are based on statistical evidence [17,102]. While we have empirical results for RQ1, a single case study is insufficient to draw statistically significant conclusions for effectiveness (RQ1). ...
... Construct Validity concerns the extent to which the treatments and outcome measures used in the study reflect the higher level constructs we wish to examine [17,102,85]. In our study, the cause construct of the vulnerability detection technique being used is reflected in our treatment of the four categories of vulnerability detection techniques. ...
Preprint
CONTEXT: Applying vulnerability detection techniques is one of many tasks using the limited resources of a software project. OBJECTIVE: The goal of this research is to assist managers and other decision-makers in making informed choices about the use of software vulnerability detection techniques through an empirical study of the efficiency and effectiveness of four techniques on a Java-based web application. METHOD: We apply four different categories of vulnerability detection techniques – systematic manual penetration testing (SMPT), exploratory manual penetration testing (EMPT), dynamic application security testing (DAST), and static application security testing (SAST) – to an open-source medical records system. RESULTS: We found the most vulnerabilities using SAST. However, EMPT found more severe vulnerabilities. With each technique, we found unique vulnerabilities not found using the other techniques. The efficiency of manual techniques (EMPT, SMPT) was comparable to or better than the efficiency of automated techniques (DAST, SAST) in terms of Vulnerabilities per Hour (VpH). CONCLUSIONS: The vulnerability detection technique practitioners should select may vary based on the goals and available resources of the project. If the goal of an organization is to find "all" vulnerabilities in a project, they need to use as many techniques as their resources allow.
... Notably, the numbers of bibliometric studies in FT50 management and marketing journals have steadily increased over the decades (Table 1: period-wise distribution of bibliometric research in FT50 journals, 1950-2020), and all disciplines covered by FT50 continue to publish bibliometric research (2010-2022; Fig. 1), thereby evidencing that bibliometric research is valued by premier business outlets. ...
... To deepen our understanding of how bibliometric research can enrich theoretical contributions, we first must define theory. Although some scholars have defined theory with regard to the relationships between independent and dependent variables to allow scientists to effectively understand and predict outcomes of interest (Cook et al., 1979), others view it as a tool that enables them to describe a process or a sequence of events (Colquitt & Zapata-Phelan, 2007; DiMaggio, 1995). Colquitt and Zapata-Phelan (2007, p. 1281) further observed that "a theory is evaluated primarily by the richness of its account, the degree to which it provides a close fit to empirical data, and the degree to which it results in novel insights." ...
Article
Full-text available
Bibliometric research presents unique opportunities to contribute to theory and practice. Top journals from various disciplines have published numerous highly impactful articles utilizing bibliometric techniques to study different fields’ evolutionary nuances and capture emerging trends. However, studies using bibliometric techniques have often attracted criticism for failing to adequately link their derived analytical and visual outputs with theory building and practice improvement. Consequently, we ask the following question: How can bibliometric research contribute to theory and practice? To this end, this editorial (i) premiers the characteristics and distinct contributions of bibliometric research and (ii) proposes a multifaceted approach that (a) researchers can utilize to develop and demonstrate the potential contributions of their bibliometric research and (b) referees (e.g., editors and reviewers) can rely on to effectively decipher and evaluate the framing, positioning, and contributions of bibliometric research. In doing so, we hope to enhance the understanding and contributions of bibliometric research in advancing theory and practice.
... We adopted the nonequivalent control group quasi-experimental research design, with a two-wave (pretest and post-test) survey. This type of design involves observing one or more dependent variables (pretests) and manipulating one or more treatment/independent variables, followed by observing the effects of the treatment on one or more dependent variables (post-tests) (Cook & Campbell, 1979; Gall et al., 2007). In this type of design, two groups are given nonequivalent treatment such that the experimental (intervention) group receives the new or unusual treatment while the control group receives the usual treatment (Cook & Campbell, 1979; Gay et al., 2011). ...
... This type of design involves observing one or more dependent variables (pretests) and manipulating one or more treatment/independent variables, followed by observing the effects of the treatment on one or more dependent variables (post-tests) (Cook & Campbell, 1979; Gall et al., 2007). In this type of design, two groups are given nonequivalent treatment such that the experimental (intervention) group receives the new or unusual treatment while the control group receives the usual treatment (Cook & Campbell, 1979; Gay et al., 2011). ...
Article
Full-text available
This study investigates the effects of a career-related practical skill-based training intervention on job search intention via vocational identity statuses (viz. exploration, commitment, and reconsideration), job search self-efficacy behavior (JSSE-B), and career-related practical skills possessed among university students. The participants (N = 79) were electrical/electronic technology education students in Nigeria. Results showed that the intervention influenced the students’ vocational identity statuses, JSSE-B, career-related practical skills, and job search intention. Contrary to our expectation, vocational identity statuses and JSSE-B failed as theorized mediators in our model; while career-related practical skills possessed is upheld as a mediator in the model.
... When randomized experiments are simply impractical or interventions are natural (like ours), regression discontinuity design (RDD) is a quasi-experimental solution that allows us to identify the causal effect of a policy intervention by assigning a cutoff or threshold above or below which an intervention is assigned [68,69]. The basic idea behind this research design is that observations with the assignment variable just below the known cutoff (i.e., those who did not receive the treatment) are good comparisons to, and therefore serve as a valid counterfactual for, those just above the cutoff (i.e., those who did receive the treatment) [70,71]. ...
... Following the reasoning in RDD, the assignment variable is a deterministic function of time, and every consumer is treated sharply at the time when the promotional activities are delivered. Therefore, to distinguish the treatment effect from a time trend or seasonality, a single-group interrupted time-series experimental design is a viable alternative to true experiments, in which the pretreatment observations of the dependent variables prior to the policy change are used as a baseline to assess the impact on the same outcomes after the policy change (i.e., post-treatment observations) [68]. Notice that, distinct from most RDD studies that compare different subjects above or below some threshold value, we use a regression discontinuity in time (RDiT) framework that regards time as a cutoff point to control for the potential time-varying confounders at the user level and compare the same subject, that is, the same decision-related rationality degree of one consumer, prior to and after facing the promotional activity release to see if this change has any causal effect [76][77][78]. ...
Article
Full-text available
Alibaba's annual online shopping carnival is well known for being one of the most successful promotion campaigns, during which marketers often deliver as many informational incentives and promotion activities as possible to inspire consumers' fanatical participation and purchases. Nonetheless, there is a dearth of studies that have examined the effect of such rationality manipulation on consumers' decision-making process using real-world behavioral evidence, which gives us an opportunity to fill this research gap. Using a unique shopping log dataset generated by consumers on the Tmall platform, we regard the promotional activities' release date as a source of exogenous shock and conduct a regression discontinuity in time design to examine the change in consumers' rationality degree during the carnival. The empirical results show that consumers tend to deal with more external cues and stick more to their original options within a shorter decision cycle during the carnival, which indicates their decreasing rationality degrees and thus verifies the effectiveness of marketers' rationality manipulation. Interestingly, we also found an in-group bias: such rationality manipulation has different influences on consumer subgroups of different genders and ages. Among them, of particular note is that the consumer group younger than 24 years old not only has the biggest gender difference within the group, but also has the biggest difference from other age groups. Findings emerging from this study will help marketers improve promotion effectiveness and deliver a rational allocation of information resources on the e-commerce platform.
... RDD is a technique used to model the extent of a discontinuity at the moment of intervention and long after the intervention. The technique is based on the assumption that if the intervention does not affect the outcome, there would be no discontinuity, and the outcome would be continuous over time (Cook and Campbell 1979). The statistical model behind RDD is fitted over the observation period for each project (24 months). ...
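For orientation, a common sharp regression-discontinuity-in-time specification around the adoption point is sketched below; this is a generic textbook form, not necessarily the exact model fitted in the cited study.

```latex
% Generic sharp RDD-in-time specification (a sketch, not necessarily the
% model fitted in the cited study).
\begin{equation}
  y_t = \alpha + \beta D_t + \gamma\,(t - c) + \delta\, D_t (t - c) + \varepsilon_t,
  \qquad D_t = \mathbf{1}[t \ge c]
\end{equation}
% y_t: monthly activity indicator; c: adoption month (cutoff);
% \beta: discontinuity (treatment effect) at the cutoff;
% \gamma and \delta: pre- and post-adoption time trends.
```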
Article
Full-text available
Software bots have been facilitating several development activities in Open Source Software (OSS) projects, including code review. However, these bots may bring unexpected impacts to group dynamics, as frequently occurs with new technology adoption. Understanding and anticipating such effects is important for planning and management. To analyze these effects, we investigate how several activity indicators change after the adoption of a code review bot. We employed a regression discontinuity design on 1,194 software projects from GitHub. We also interviewed 12 practitioners, including open-source maintainers and contributors. Our results indicate that the adoption of code review bots increases the number of monthly merged pull requests, decreases monthly non-merged pull requests, and decreases communication among developers. From the developers’ perspective, these effects are explained by the transparency and confidence the bot comments introduce, in addition to the changes in the discussion focused on pull requests. Practitioners and maintainers may leverage our results to understand, or even predict, bot effects on their projects.
... The phenomena as observed within our case study (e.g. the processes, the attributes of involved actors etc.) correspond well with what we know from theory. A study with clear rationale for the case selection, including details on the study context (Cook and Campbell, 1979;Coviello and Jones, 2004) and additional qualitative preliminary interviews (see Supplementary material) enables us to gain substantial insights in understanding our setting as well as to reconcile the results with the theory. The weak point remains our inability to account for potential heterogeneity in terms of the comparison between different organizations, although we did perform several robustness checks (reminiscent of the nested approach) which indicated that the observed relationships are stable across the sample and in time (with technology transfer literature often citing the lack of insights from longitudinal data as limitations (e.g. ...
Article
Experience, defined in terms of time, scope, type, density and timing, affects the performance of highly skilled administrative staff. We apply a multidimensional model to the field of science commercialization as a typical multi-goal oriented process. We identify how different conceptualizations of experience models lead to diverse conclusions regarding their effects on facets of performance such as speed, efficiency and revenue. Acknowledging the multifaceted goals of science commercialization, we further contribute to the body of work on individual-level factors regarding universities' commercialization performance. In this paper we provide evidence from the context of universities' commercialization efforts, relying on administrative records of a Japanese university including 845 transfer cases over a 13-year period (2004–2016). By focusing on coordinators working in a technology transfer office, and the various measurement modes of their experience, we detect several important characteristics. While several experience components affect the speed and efficiency of technology transfer, our results show that revenue is determined by interaction components.
... Such concerns encompass both participant-related and interventionist-related issues. The former category includes, among others, novelty, Hawthorne, placebo, and John Henry effects that accompany the introduction of the intervention; see, for example, Cook and Campbell (1979). The latter category includes such matters as the interventionist offering differential performance feedback between conditions or phases, along with differences in the interventionist's provision of various verbal or nonverbal cues (whether intentional or not); see, for example, Rosenthal (1966). ...
Article
In this article we respond to the recent recommendation of Slocum et al. (2022), who provided conceptual and methodological recommendations for reconsidering the credibility and validity of the nonconcurrent multiple-baseline design. We build on these recommendations and offer replication and randomization upgrades that should further improve the status of the nonconcurrent version of the design in standards and single-case design research. Although we suggest that the nonconcurrent version should be an acceptable methodological option for single-case design researchers, the traditional concurrent multiple-baseline design should generally be the design of choice.
... The value of Recall_Final becomes important since when using only one digital library in the experimental group (SCOPUS), there is a risk that a relevant study will not be found. Wohlin et al. (2012) present a checklist with threats to validity as characterized by Cook and Campbell (1979). We use this checklist to describe which threats apply to the experiment and how we planned to mitigate each threat. ...
Article
Full-text available
A Secondary Study (SS) is an important research method used in several areas. A crucial step in the Conduction phase of a SS is the search of studies. This step is time-consuming and error-prone, mainly due to the refinement of the search string. The objective of this study is to validate the effectiveness of an automatic formulation of search strings for SS. Our approach, termed Search String Generator (SeSG), takes as input a small set of studies (as a Quasi-Gold Standard) and processes them using text mining. After that, SeSG generates search strings that deliver a high F1-Score on the start set of a hybrid search strategy. To achieve this objective, we (1) generate a structured textual representation of the initial set of input studies as a bag-of-words using Term Frequency and Document Frequency; (2) perform automatic topic modeling using LDA (Latent Dirichlet Allocation) and enrichment of terms with a pre-trained dense language representation (embedding) called BERT (Bidirectional Encoder Representations from Transformers); (3) formulate and evaluate the search string using the obtained terms; and (4) use the developed search strings in a digital library. For the validation of our approach, we conduct an experiment—using some SS as objects—comparing the effectiveness of automatically formulated search strings by SeSG with manual search strings reported in these studies. SeSG generates search strings that achieve a better final F1-Score on the start set than the searches reported by these SS. Our study shows that SeSG can effectively supersede the formulation of search strings, in hybrid search strategies, since it dismisses the manual string refinements.
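To make the pipeline above more concrete, the following rough Python sketch illustrates the general idea of deriving a Boolean search string from a small seed set via term counting and LDA topic terms (using scikit-learn). It is not the SeSG implementation and omits, among other steps, the BERT-based term enrichment; the seed texts are made up for illustration.

```python
# Rough sketch of the general idea behind automatic search-string formulation:
# weight terms in a small seed set of studies, extract topic terms with LDA,
# and assemble a Boolean search string. NOT the SeSG implementation
# (the BERT-based term enrichment step, among others, is omitted).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

seed_studies = [
    "automatic search string generation for systematic reviews",
    "text mining support for secondary studies in software engineering",
    "topic modeling of primary studies for search strategy refinement",
]

vectorizer = CountVectorizer(stop_words="english")
term_matrix = vectorizer.fit_transform(seed_studies)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(term_matrix)

terms = vectorizer.get_feature_names_out()
top_terms = set()
for topic in lda.components_:
    top_terms.update(terms[i] for i in topic.argsort()[-5:])  # top-5 terms per topic

search_string = " OR ".join(f'"{t}"' for t in sorted(top_terms))
print(search_string)
```

In SeSG itself, the resulting string is then evaluated (e.g., by its F1-Score on the start set of a hybrid search strategy) and refined, as the abstract describes.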
... We next discuss the most relevant threats to the validity of our study design and evaluation based on the threat types by Cook and Campbell (1979). ...
Article
Full-text available
Previous work has shown that taint analyses are only useful if correctly customized to the context in which they are used. Existing domain-specific languages (DSLs) allow such customization through the definition of deny-listing data-flow rules that describe potentially vulnerable or malicious taint-flows. These languages, however, are designed primarily for security experts who are expected to be knowledgeable in taint analysis. Software developers, however, consider these languages to be complex. This paper thus presents fluentTQL, a query specification language particularly for taint-flows. fluentTQL is an internal Java DSL and uses a fluent-interface design. fluentTQL queries can express various taint-style vulnerability types, e.g. injections, cross-site scripting or path traversal. This paper describes fluentTQL's abstract and concrete syntax and defines its runtime semantics. The semantics are independent of any underlying analysis and allow evaluation of fluentTQL queries by a variety of taint analyses. Instantiations of fluentTQL, on top of two taint analysis solvers, Boomerang and FlowDroid, show and validate fluentTQL's expressiveness. Based on existing examples from the literature, we have used fluentTQL to implement queries for 11 popular security vulnerability types in Java. Using our SQL injection specification, the Boomerang-based taint analysis found all 17 known taint-flows in the OWASP WebGoat application, whereas with FlowDroid 13 taint-flows were found. Similarly, in a vulnerable version of the Java Spring PetClinic application, the Boomerang-based taint analysis found all seven expected taint-flows. In seven real-world Android apps with 25 expected malicious taint-flows, 18 taint-flows were detected. In a user study with 26 software developers, fluentTQL reached a high usability score. In comparison to CodeQL, the state-of-the-art DSL by Semmle/GitHub, participants found fluentTQL more usable, and with it they were able to specify taint analysis queries in a shorter time.
... In this section, we discuss threats to the validity of our results using Cook and Campbell's classification [9, 43-45]. The threats are classified as conclusion, internal, construct and external validity. ...
Article
Context: Managing technical debt and developing easy-to-maintain software are very important aspects for technological companies. Integrated development environments (IDEs) and static measurement and analysis tools are used for this purpose. Meanwhile, gamification is also gaining popularity in professional settings, particularly in software development. Objective: This paper aims to analyse the improvement in technical debt indicators due to the use of techniques to raise developers’ awareness of technical debt and the introduction of gamification into technical debt management. Method: A quasi-experiment that manipulates a training environment with three different treatments was conducted. The first treatment was based on training in the concept of technical debt, bad smells and refactoring, while using multiple plugins in IDEs to obtain reports on quality indicators of both the code and the tests. The second treatment was based on enriching the previous training with the use of a technical debt management tool to continuously raise awareness of technical debt. The third was based on adding a gamification component to technical debt management based on a contest with a top-ten ranking. The results of the first treatment are compared with the use of such a tool for continuously raising developers’ awareness of technical debt, while the possible effect of gamification is compared with the results of the previous treatment. Results: It was observed that continuously raising awareness using a technical debt management tool significantly improves the technical debt indicators of the code developed by the participants versus using multiple code and test quality checking tools. On the other hand, incorporating some kind of competition between developers by defining a contest and creating a ranking does not bring about any significant differences in the technical debt indicators. Conclusion: Investment in staff training through tools to raise developers’ awareness of technical debt and incorporating them into continuous integration pipelines does bring improvements in technical debt management.
... Notes 1. Within economics, relatively early contributions include the work around the National Supported Work Demonstration summarized in Hollister et al. (1984), Ferber and Hirsch (1981) on social experiments, as well as the non-experimental "old testament" of Heckman and Robb (1985). Outside economics, see, for example, Cook and Campbell (1979). 2. See, for example, Heckman et al. (2000) for more on imperfect compliance and responses thereto. ...
Article
This paper considers recent methodological developments in the treatment effects literature, describes their value for applied evaluation work, and suggests next steps. It pays particular attention to documenting the presence of treatment effect heterogeneity, to the quest to attach treatment effect heterogeneity to particular subgroups and other moderators, and to the recent application of machine learning methods in this domain.
... This work aims to measure the energy consumption of browsers in the Android environment and compare the results to determine which browser is more energy-efficient. We present in this section some threats to the validity of our study, separated into four categories [38]. a) Conclusion Validity: this category contains the threats that may influence our capacity to draw correct conclusions. ...
Preprint
Full-text available
This paper presents an empirical study regarding the energy consumption of the most used web browsers on the Android ecosystem. In order to properly compare the web browsers in terms of energy consumption, we defined a set of typical usage scenarios to be replicated in the different browsers, executed in the same testing environment and conditions. The results of our study show that there are significant differences in terms of energy consumption among the considered browsers. Furthermore, we conclude that some browsers are energy efficient in several user actions, but energy greedy in other ones, allowing us to conclude that no browser is universally more efficient for all usage scenarios.
... The convergent validity assessment is required for the empirical evaluation of formative measurement models in PLS-SEM (Hair et al., 2021). It determines the extent to which different items intended to measure the same construct agree (Cook & Campbell, 1979; Hair et al., 2019). We conducted four tests to determine convergent validity: reliability of items, composite reliability (CR) of constructs, average variance extracted (AVE), and Cronbach's alpha. ...
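For reference, the two statistics named here are conventionally computed from the standardized indicator loadings; the textbook definitions below are a sketch for the reader and are not reproduced from the cited study.

```latex
% Textbook definitions based on the standardized loadings \lambda_i of the
% k items of a construct (not reproduced from the cited study).
\begin{align}
  \mathrm{AVE} &= \frac{\sum_{i=1}^{k} \lambda_i^{2}}{k}, &
  \mathrm{CR}  &= \frac{\bigl(\sum_{i=1}^{k} \lambda_i\bigr)^{2}}
                       {\bigl(\sum_{i=1}^{k} \lambda_i\bigr)^{2} + \sum_{i=1}^{k}\bigl(1 - \lambda_i^{2}\bigr)}.
\end{align}
% Common rules of thumb: AVE > 0.50 and CR > 0.70.
```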
Article
Full-text available
Understanding students’ privacy concerns is an essential first step towards effective privacy-enhancing practices in learning analytics (LA). In this study, we develop and validate a model to explore the students’ privacy concerns (SPICE) regarding LA practice in higher education. The SPICE model considers privacy concerns as a central construct between two antecedents—perceived privacy risk and perceived privacy control, and two outcomes—trusting beliefs and non-self-disclosure behaviours. To validate the model, data were collected through an online survey, and 132 students from three Swedish universities participated in the study. Partial Least Squares results show that the model accounts for high variance in privacy concerns, trusting beliefs, and non-self-disclosure behaviours. They also illustrate that students’ perceived privacy risk is a firm predictor of their privacy concerns. The students’ privacy concerns and perceived privacy risk were found to affect their non-self-disclosure behaviours. Finally, the results show that the students’ perceptions of privacy control and privacy risks determine their trusting beliefs. The study results contribute to understanding the relationships between students’ privacy concerns, trust, and non-self-disclosure behaviours in the LA context. A set of relevant implications for LA systems’ design and privacy-enhancing practices’ development in higher education is offered.
... Despite the evolution of RTD studies, a recent review by Lenzholzer, Nijhuis and Cortesão (2018) of the academic literature arguably employing RTD indicates that a wide range of interpretations of the meaning of RTD has emerged in academia. The results of that study showed that only a small number of academic publications dealt with RTD in a scholarly sense, that is, meeting the requirements of academic research such as originality, validity (internal and external), transparency and reliability (e.g., Cook and Campbell 1979; Jong and van der Voordt 2002; Creswell 2011; Lenzholzer, Duchhart and Koh 2013; Prochner and Godin 2022). Furthermore, the study revealed a general misappropriation of the terms 'research' and 'research by/through design', which seem to be used simply to describe a design process instead of a structured and in-depth reflection on the design products created. ...
Article
This study takes stock of how research through design (RTD) is interpreted in urban and landscape design practice in relation to the scholarly meaning of RTD. The results indicate that the term ‘RTD’ in Dutch practice largely refers to the typical procedures and resources of a practical design process. This interpretation differs from definitions of scholarly RTD, which have more focus on the rigid testing of design alternatives. Such a scholarly RTD approach is advisable to ensure the validity and robustness of design products. This study recommends that this approach to RTD is adopted in urban and landscape design practice.
... Sigh. We don't have the space to go into great depth so we'll mention just a few … #1 is construct validity, which is whether the operationalization measures the intended construct (Cook & Campbell, 1979; Cronbach & Meehl, 1955); it determines whether we can draw valid conclusions from the research (Cronbach & Meehl, 1955; Hinkin, 1995). This is a BIG deal (Schwab, 1980) and the item generation stage is critical to "adequately capture the specific domain of interest …" (Hinkin, 1995, p. 969). ...
... There are two sets of limitations, or threats, to the validity of the results presented. As we indicated previously, one threat to the validity of any of the effects discussed here is that of confounding factors (Cook & Campbell, 1979). Confounding factors are those that occur prior to or contemporaneously with the event investigated (in this case, occupation) and that could cause the results even in the absence of the event. ...
... The current study addresses this overarching question through the use of a quasi-experimental design (Cook & Campbell, 1979). ... 3. How do the trainees evaluate the quality and effectiveness of online language teacher education and the English pronunciation pedagogy course? ...
Thesis
Full-text available
Many specialists have put out a call to action for more specialized training in L2 pronunciation pedagogy, as many language educators are ill-equipped to meet the instructional demands that pronunciation teaching requires (Foote et al., 2011; Henderson et al., 2015; Murphy, 2014, 2017). While language teacher professional development has started to receive attention in the field of L2 pronunciation (e.g., Burri, 2016; Buss, 2017), this conversation has not made much headway into online domains. That is, would language teacher trainees ‘walk’ out of an online pronunciation pedagogy course with the knowledge, skills, and abilities needed to teach pronunciation to their learners? To address this gap, this study investigated the development of second language teacher cognitions throughout four separate versions of an eight-week online L2 pronunciation pedagogy course using different modes of instructional delivery (i.e., PowerPoint and vignettes). In particular, this study sought to (1) track the overall development of teacher cognitions throughout the course, (2) assess which delivery of instruction led to the most significant results, if any, and (3) track the overall efficacy of the online medium in terms of professional development in English pronunciation pedagogy. To answer the first two research questions, data collected from a knowledge questionnaire, beliefs survey, weekly narrative frames, and selected interviews were used. To answer the third research question, data collected from the beliefs survey, weekly narrative frames, and selected interviews were used. Findings showed that the participants’ knowledge of English pronunciation pedagogy significantly increased, particularly in their knowledge of phonological processes and practical teaching applications. However, one dimension tested in the study (i.e., analyzing and categorizing learner errors) did not improve significantly, which suggests this knowledge base may require a more hands-on approach with experiential learning. Survey data showed that there was minimal change (only 2 of 25 beliefs showed significant change) in the participants’ perceptions and beliefs about language teaching and pronunciation pedagogy. This may be due in part to the participants’ experiences with language teaching, and that their beliefs had already been formed before the course started. Of particular interest was the analysis of the different sections that received various delivery of instruction. The results of a three-way ANOVA showed that course section was not significant in terms of their scores on the knowledge questionnaire. This indicates that, regardless of delivery of instruction method, participants were able to gain the knowledge delivered throughout the course. However, this study only looked at the participants’ declarative knowledge (e.g., definitions) and not their procedural knowledge (e.g., actual ability to teach pronunciation). This study showed that the current professional development opportunity in English pronunciation pedagogy was overall effective, though there remains future work of improving the course based on questionnaire scores and participant feedback. Analyzing and categorizing learner errors is one problem area that future iterations of the course need to address with a more hands-on approach. Additionally, participants mentioned struggling with assessing learner progress and incorporating technology into their teaching practices—two areas that were not addressed in the current iteration. 
However, for pronunciation teaching, the current professional development opportunity was not only seen as desirable by language professionals across the globe, but it was also able to improve their knowledge about pronunciation terminology and teaching practices.
... The main limitation of the study is that both MST and FFT were carried out as single-group pre-post designs; thus, changes may also be due to, for example, maturation, history, and regression to the mean (Cook & Campbell, 1979). Furthermore, we did not have item-level data, and consequently, we were not able to estimate reliability for the YLS/CMI subscales. ...
Article
Background: Multisystemic Therapy (MST) and Functional Family Therapy (FFT) are evidence-based Blueprint programs shown to be effective towards youth problem behaviors. Purpose: The present study aimed to investigate treatment outcomes following MST and FFT among Norwegian youths with serious behavior problems. Research design: Routine Outcome Monitoring (ROM) data of the Youth Level of Service/Case Management Inventory at intake and post-test was used along with measures of five national treatment goals. Study sample: The study is based on two samples of youths assigned to MST (n = 2018) and FFT (n = 453). Analysis: Data were analyzed separately for MST and FFT, to explore changes during treatment and accomplishment of the treatment goals. Results: At intake, youths in MST showed a significantly higher level of risk factors compared to those referred to FFT. Significant reductions in risk factors and behavioral problems were evident for both interventions. Follow-up results demonstrated sustained reductions of problem behaviors. Conclusion: Both treatments decrease risk factors and increase the completion of outcome goals. Implications of the results are discussed.
... The before-and-after analysis (B/A analysis) consisted of a non-experimental design, which is very suitable for the purposes of this study, since it is a reasonable option for an evaluation to provide preliminary evidence of an intervention's effectiveness [43, 44]. ...
Article
Full-text available
Cities are often characterised by the presence of universities and by a greater number of students, often commuters, with an average age of less than 30 years. The study focused on the city of Enna (Italy), where university students represent a significant percentage of residents and account for a considerable share of local travel demand. The survey campaign was conducted over a period of more than one year. A bivariate statistical method was applied, highlighting significant variables regarding several features of a car sharing system. Additionally, non-parametric statistics and a before-and-after analysis were performed to evaluate the influence of the implementation of the shared transport service. The results can also offer insights into the improvement of transport supply in urban contexts and the possible implementation of co-creation actions between the companies managing the service and the end-users.
... This subsection presents the main threats to the validity of the Monitor-IoT evaluation results that were mitigated during the development of the quasi-experiment. These threats are organized according to the classification proposed by Cook and Campbell [42]. ...
Article
Full-text available
The Internet of Things (IoT) is a technological paradigm involved in a diversity of domains with favorable impacts on people’s daily lives and the development of industry and cities. Nowadays, one of the most critical challenges is developing software for IoT systems since the traditional Software Engineering methodologies and tools are unproductive in the face of the complex requirements resulting from the highly distributed, heterogeneous, and dynamic scenarios in which these systems operate. Model-Driven Engineering (MDE) emerges as an appropriate approach to abstract the complexity of IoT systems. However, there are no domain-specific languages (DSLs) aligned to standardized reference architectures for IoT. Furthermore, existing DSLs have an incomplete language to represent the IoT entities that may be needed at the edge, fog, and cloud layers to monitor IoT environments. Therefore, this paper proposes a domain-specific language named Monitor-IoT, which supports developers in designing multi-layer monitoring architectures for IoT systems with high abstraction, expressiveness, and flexibility. Monitor-IoT consists of a high-level visual modeling language and a metamodel aligned with the ISO/IEC 30141:2018 reference architecture. In addition, it provides a language capable of modeling architectures with a wide variety of digital entities and dataflows (synchronous and asynchronous) between them across the edge, fog, and cloud layers to support the monitoring of a diversity of IoT scenarios. The empirical evaluation of Monitor-IoT through the application of an experiment, which contemplates the use of the Technology Acceptance Model (TAM), demonstrates the intention of the participants to use this tool in the future since they consider it easy to use and useful.
... Some threats to validity were identified in this study. We grouped these threats into the four categories proposed by Cook and Campbell (1979). The threats and the strategies adopted to mitigate them are discussed below: ...
Article
Nowadays, SPOCs (Small Private Online Courses) have been used as complementary methods to support classroom teaching. SPOCs are courses that apply the usage of MOOCs (Massive Open Online Courses), combining classroom with online education, making them an exciting alternative for contexts such as emergency remote teaching. Although SPOCs have been continuously proposed in the software engineering teaching area, it is crucial to assess their practical applicability via measuring the effectiveness of this resource in the teaching-learning process. In this context, this paper aims to present an experimental evaluation to investigate the applicability of a SPOC in a Verification, Validation, and Software Testing course taught during the period of emergency remote education during the COVID-19 pandemic in Brazil. Therefore, we conducted a controlled experiment comparing alternative teaching through the application of a SPOC with teaching carried out via lectures. The comparison between the teaching methods is made by analyzing the students' performance during the solving of practical activities and essay questions on the content covered. In addition, we used questionnaires to analyze students' motivation during the course. Study results indicate an improvement in both motivation and performance of students participating in SPOC, which corroborates its applicability to the software testing teaching area.
... We ensure construct reliability as the average variance extracted (AVE) of all constructs is above 0.50, and the composite reliability (CR) is above 0.70 (Fornell and Larcker, 1981; Table 4 in Appendix). We ensure convergent validity because items of the same construct are highly correlated, and discriminant validity since the items load more highly on their intended construct than on other constructs (Cook and Campbell, 1979). We attest that the data passes reliability and validity evaluation. ...
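A related, commonly used discriminant-validity check is the Fornell–Larcker criterion; in its standard form (again not reproduced from the cited study), each construct should share more variance with its own items than with any other construct:

```latex
% Fornell–Larcker criterion for discriminant validity (standard form,
% not reproduced from the cited study).
\begin{equation}
  \sqrt{\mathrm{AVE}_j} \;>\; \max_{k \neq j} \lvert r_{jk} \rvert
\end{equation}
% r_{jk}: correlation between latent constructs j and k.
```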
Conference Paper
Organizations invest considerable effort and cost in reducing technostress, as it harms their employees’ well-being and reduces their work performance. Therefore, it is imperative to mitigate technostress. We suppose each individual has a unique digital mindset, a malleable factor describing their specific ways of thinking and awareness, which guides how to react to techno-stressors. We build on the transactional model of stress and survey 151 employees to test the role of the digital mindset. Our results show that individuals with a strong digital mindset respond less strongly to techno-stressors with reduced job performance, reduced job satisfaction, and increased turnover intention. We contribute to research by carving out how individuals react to techno-stressors in line with their digital mindset, reflecting that a digital mindset might buffer the adverse impacts of techno-stressors on individuals and organizations.
... Although the uncontrollability of the data from various groups is mentioned as a limitation of the design (Lackeus et al., 2015), it has several advantages. The design was chosen because it creates temporal precedence of the independent variable over the dependent variable (Cook & Campbell, 1979) and consequently allows a cause-effect relationship to be established. The learning progress and any changes among the study participants were compared with the learning outcomes of the course Entrepreneurship and Small Business Management, which lasted four months, comprised 48 contact credit hours, and was delivered in a semester-based linear format. ...
Book
Full-text available
We are very happy to publish this issue of the International Journal of Learning, Teaching and Educational Research. The International Journal of Learning, Teaching and Educational Research is a peer-reviewed open-access journal committed to publishing high-quality articles in the field of education. Submissions may include full-length articles, case studies and innovative solutions to problems faced by students, educators and directors of educational organisations. To learn more about this journal, please visit the website http://www.ijlter.org. We are grateful to the editor-in-chief, members of the Editorial Board and the reviewers for accepting only high quality articles in this issue. We seize this opportunity to thank them for their great collaboration. The Editorial Board is composed of renowned people from across the world. Each paper is reviewed by at least two blind reviewers. We will endeavour to ensure the reputation and quality of this journal with this issue.
... We report the threats to the validity of our work following the classification by Cook and Campbell (1979) suggested for software engineering by Wohlin et al. (2012). Additionally, we discuss the reliability as suggested by Runeson and Höst (2009). ...
Article
Full-text available
Context: Tangled commits are changes to software that address multiple concerns at once. For researchers interested in bugs, tangled commits mean that they actually study not only bugs, but also other concerns irrelevant for the study of bugs. Objective: We want to improve our understanding of the prevalence of tangling and the types of changes that are tangled within bug fixing commits. Methods: We use a crowdsourcing approach for manual labeling to validate which changes contribute to bug fixes for each line in bug fixing commits. Each line is labeled by four participants. If at least three participants agree on the same label, we have consensus. Results: We estimate that between 17% and 32% of all changes in bug fixing commits modify the source code to fix the underlying problem. However, when we only consider changes to the production code files this ratio increases to 66% to 87%. We find that about 11% of lines are hard to label, leading to active disagreements between participants. Due to confirmed tangling and the uncertainty in our data, we estimate that 3% to 47% of data is noisy without manual untangling, depending on the use case. Conclusion: Tangled commits have a high prevalence in bug fixes and can lead to a large amount of noise in the data. Prior research indicates that this noise may alter results. As researchers, we should be skeptics and assume that unvalidated data is likely very noisy, until proven otherwise.
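The consensus rule described in the Methods sentence above fits in a few lines; the sketch below illustrates that majority rule and is not code from the cited study (label names are made up).

```python
# Illustration of the consensus rule described above: a line's label counts as
# consensus when at least three of its four annotators chose the same label.
# (Not code from the cited study; label names are hypothetical.)
from collections import Counter

def consensus(labels, threshold=3):
    """Return the majority label if it reaches the threshold, else None."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= threshold else None

print(consensus(["bugfix", "bugfix", "bugfix", "refactoring"]))  # -> bugfix
print(consensus(["bugfix", "test", "refactoring", "bugfix"]))    # -> None
```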
... RDD is a technique used to model the extent of a discontinuity at the moment of intervention and long after the intervention. The technique is based on the assumption that if the intervention does not affect the outcome, there would be no discontinuity, and the outcome would be continuous over time [7]. The statistical model behind RDD is ...
Preprint
Full-text available
Automated tools are frequently used in social coding repositories to perform repetitive activities that are part of the distributed software development process. Recently, GitHub introduced GitHub Actions, a feature providing automated workflows for repository maintainers. Understanding and anticipating the effects of adopting such kind of technology is important for planning and management. Our research investigates how projects use GitHub Actions, what the communities discuss about them, and how activity indicators change after their adoption. Our results indicate that a considerable number of projects adopt GitHub Actions (almost 30% of our sample) and that developers frequently ask for help with them. Our findings also suggest that the adoption of GitHub Actions leads to more rejections of pull requests (PRs), more communication in accepted PRs and less in rejected PRs, fewer commits in accepted PRs and more in rejected PRs, and more time to accept a PR. We found similar results in the Utility Actions but we found fewer rejected PRs for the Code Quality Actions. Our results are especially relevant for practitioners to consider these effects when adopting GitHub Actions on their projects.
... This section describes the problems that can affect the validity of the quasi-experiment. For this, the four types of validity proposed by [28] are considered: statistical conclusion validity, internal validity, construct validity and external validity. ...
... The second problem pertained to the inaccuracy of data collected from the interviews, document study and questionnaire. The study employed mixed-methods research to address this limitation; however, Cook and Campbell (1979) advise that participants and respondents tend to report what they believe the researcher wants to hear or see, or to report their own opinions, knowledge, and abilities more positively than is realistic. Based on this, it is possible that some data derived from the same school might contradict each other across the individual participants and respondents of the school. ...
Thesis
Full-text available
The introduction of technologies in our lifetime necessitates transformation in our lifestyles. The education system is no exception to this expectation. The curriculum and the ways of teaching and learning are affected the most by new technologies. It is therefore imperative that schools, educational officials and teachers change in tandem with these new technologies. The transition to technologies, therefore, tends to make it obligatory for schools, principals, deputy principals, heads of departments and teachers to be competent in these new innovations and the accompanying digital strategies. The aim of this study is to investigate the implementation of blended learning as one of the technological platforms in Sekhukhune District schools in Limpopo Province, South Africa. The study was guided by a blend of the Connected Learning Theory (CLT), Technology Acceptance Model (TAM) and Connectivism Theory (CT) frameworks. The theories assisted in the formulation of the research questions which led to the study findings. The research questions of the study included: “How do teachers perceive the usefulness of blended learning approaches in teaching and learning?”, “How do teachers connect information using technology resources in blended learning?”, “To what extent do teachers display the necessary skills for successful implementation of blended learning?”, “What are teachers’ recommendations for the introduction and improvement of blended learning in rural schools?” and “What are the elements to be considered for the design of a blended learning model?” The study used mixed methods research (MMR) to achieve the aim of the study. A convergent parallel design was used to collect, analyse and interpret data. The study was guided by a pragmatic paradigm where 10 schools were purposively sampled for the QUAN strand while 4 schools were purposively sampled for the QUAL strand. The participants of the study comprised principals, deputy principals and teachers. For the QUAN strand 10 principals, 9 deputy principals, 35 heads of departments and 123 teachers participated, while for the QUAL strand 4 principals, 4 heads of departments and 4 teachers took part in the investigation. The total sample was 177 participants for the QUAN strand and 12 participants for the QUAL strand. A questionnaire was used to collect data in the QUAN strand, while for the QUAL strand interviews and document study were used. Data gathered through questionnaires were analysed using IBM SPSS version 28. Thematic, content and narrative analyses were used to assess data collected from the interviews and document study. The results of the two strands were merged to obtain the final results of the study. The study established that teachers embraced the introduction and implementation of blended learning in schools. However, challenges such as lack of e-technological supply and internet connection; inadequacies in the use of classroom technologies; lack of e-tech policies; lack of teachers’ digital training; insufficient teachers’ technological competencies; and inadequate teacher support in technologies impede the effective implementation of blended learning in Sekhukhune-Limpopo schools. The study therefore recommends that the Limpopo Department of Education (LDE) should prioritise the supply of e-tech in Sekhukhune-Limpopo through fiscal policies.
The study further recommends that the Department of Basic Education, through the provincial education departments and districts, train and develop officials and teachers in digital technologies for the successful implementation of blended technologies in teaching and learning. The study also proposed a Blended Learning Model (BLM), which might assist in the implementation of blended learning in schools. Keywords: blended learning, blended learning model, E-Tech, E-Education policy, teachers’ digital framework
... (Lijphart, 2012, p. 137) Therefore, unless there is a major change in the electoral system, the type of government generally remains stable. This circumstance has made it difficult if not impossible to investigate the effects of differences in government type in a quasi-experimental design (Cook and Campbell, 1979). ...
Thesis
Full-text available
The type of government, whether the cabinet is a single-party majority, a multiparty coalition, or a minority government, is often claimed to be one important factor shaping bureaucratic reform. Masashi’s research empirically investigates whether this claim applies to the case of New Zealand, where the government type has shifted from majoritarian to consensual since the introduction of the MMP electoral system in the mid-1990s. By investigating the impact of differences in cabinet type on government organisational restructuring, the research shows that New Zealand did not follow the commonly claimed pattern.
... many diverse, concrete individual phenomena and viewpoints that are, to some extent, sometimes absent" (Weber, 1904). The ideal type is not a "hypothesis", but it offers guidelines for constructing hypotheses (Weber, 1904), a position echoed by (Bacharach, 1989) (Blalock, 1979) (Whetten, 1989) (Miles, et al., 1987) (Mintzberg, 1989). Greater resemblance to the ideal type is assumed to increase effectiveness, because the relevant contextual, structural, or strategic factors are believed to be consistent within each ideal type, an assumption made by both (Mintzberg, 1989) and (Miles, et al., 1987) (Cook & Campbell, 1979), (Lave & March, 1975) (Mintzberg, 1979). ...
Article
The development and testing of theory are central to the advancement of human resource management as a scientific field. For almost three decades, scholars have borrowed theories from various disciplines; these metaphors enhance the power and relevance of research results. To achieve greater accuracy and relevance in future research, more attention must be paid to the context of its investigations, which leads to an understanding of the nature, dynamics, uniqueness and limitations of that context and thus enriches future studies. This paper describes common problems revealed in research on HRD (Human Resources Development) theory construction and discusses methods for building theory better: recognising the importance of theory building in this field, reviewing models, understanding what theory is and how complex, multi-level theories are structured, and examining the basic assumptions for building valid models and their interactions, theory building, correlation factors, the challenges of theory building, and how to overcome these challenges in the field of HRD. Building theory is a process of creativity and imagination. It requires careful consideration of the significance and uniqueness of the phenomena presented, the questions explored, and the context of the research. Theories act as benchmarks that tell us what is important, why it is important, what determines its importance, and what outcomes to expect. Theories let a reader know why a finding enriches or even challenges our understanding and what was discovered ("Contextualizing theory building in entrepreneurship research"). Studies grounded in theory pay special attention to the context of their research and explain its complexity, uniqueness, and richness. They also present convincing arguments, offer a fair examination of these arguments, and use the results to refine and enrich the theory they have invoked. The paper shows that models are complex theoretical constructs that must undergo quantitative modelling and rigorous empirical testing; models are distinguished from classification systems, have been shown to meet many important criteria for theories, and can hold multiple levels of theory. Keywords: theory building, models, models and theory building, human resources development, challenges of theory building, theory and application.
... The size of the sample corresponded to 62 granadilla producers based on the formula by Cook and Campbell (1979), out of a target population of 374 farmers according to data provided by the Ministry of Agriculture in the Oxapampa site, surveying 69 producers, seven surveys more than the minimum necessary. ...
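As a purely illustrative aside, the excerpt above cites a sample-size formula from Cook and Campbell (1979) without stating it. The minimal sketch below assumes Cochran's formula with a finite-population correction, a common choice in survey sampling; the margin of error and confidence level are assumptions, and the exact formula and parameters needed to reproduce the reported minimum of 62 are not given in the excerpt.

```python
# Hedged illustration only: the exact sample-size formula used in the cited study
# is not stated. This sketch uses Cochran's formula with a finite-population
# correction; the margin of error and confidence level below are assumptions.
import math

def sample_size(population, margin_of_error=0.1, z=1.96, p=0.5):
    """Cochran's sample size with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)   # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))     # correct for finite N

# Target population of 374 granadilla producers, as reported in the excerpt.
print(sample_size(374))  # the parameters that reproduce the reported n = 62 are not given
```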
Article
Full-text available
A socioeconomic characterization and a description of the granadilla (Passiflora ligularis) production system of the Oxapampa District in the Pasco region of Peru were conducted, and producers' needs for innovation were identified. A survey was applied to 69 farmers, key informants were interviewed, and soil samples were taken to analyze fertility as well as the presence of fungi and nematodes. Granadilla producers mostly work on a parcel of their own, have secondary-school education, typically cultivate 1.5 hectares of granadilla, work with their own capital, and have access to electricity but insufficient drainage-service coverage. The technological level of crop management moved from basic to medium with the adoption of the trellis since the year 2000, compared with the traditional system associated with pacay (Inga feuilleui). All of the producers adopted the trellis conduction system, the Colombian ecotype, and the selection of fruits by category for commercialization. The application of organic matter is occasional, pesticide use is widespread, and cultural, ethological and biological control are not taken into account. Most producers purchase saplings without any quality guarantee, and they do not receive extension services for the crop. The main needs for innovation and training concern safety and fertilization.
Preprint
Full-text available
The research focused on the causes of misconceptions about transformations in three selected Wedza schools in rural Zimbabwe. The study probed why Ordinary Level students fail to answer transformation questions in mathematics (Nziramasanga Commission, 1999; Chakanyuka, Chung & Stevenson, 2009; Chirume, 2012). A total of 60 respondents from the three selected schools comprised a sample obtained through stratified random sampling. Stata statistical analysis and open-ended questions were used. About 80% of students passed mathematics without tackling transformation, and only 40% of students chose transformation questions. Insufficient textbooks, a lack of background in transformations, teachers lacking content knowledge, and too little time allocated to the topic were noted as contributing to the failure to grasp the concept. The study recommends that sufficient lesson time be allocated, that the textbook-sharing ratio be brought down to one textbook per student, and that experienced mathematics teachers be employed.
Thesis
This research falls within the field of the didactics of French as a Foreign Language and evaluates the effectiveness of a blended training arrangement in improving the reading-comprehension of pedagogical content produced in French as a foreign language, by comparing it with a traditional training situation. We assume that a blended training arrangement enables students to acquire quality learning in a second language (L2), by further improving and developing their reading comprehension of the pedagogical content offered in their programmes of study, particularly in core subjects. To test our main research hypothesis, we designed a blended training arrangement in two stages. First, we created and didactised a digital pedagogical content using the Scénari Opale editorial chain. Second, we made it available to the students participating in our project through a learning platform (MoodleCloud), integrating several activities considered necessary for interaction, knowledge construction, and assessment. The data were collected through an experiment comparing two different training situations, a blended one combining face-to-face and distance teaching and a traditional one followed exclusively face-to-face, conducted with students enrolled in the first year of the French Licence at Badji Mokhtar University, Annaba (Algeria), together with a post-experiment questionnaire administered only to the group that took part in the blended training. The corpus analysed here thus consists of the answers given to a test by twenty-six (26) students divided into two groups (experimental group and control group) and of a questionnaire addressed to the thirteen (13) students who took part in the blended training (experimental group). To verify our first set of hypotheses, concerning the effects of the blended arrangement on improving reading comprehension of pedagogical content in L2, we assessed the test answers using analysis grids built on Van Dijk and Kintsch's (1982) comprehension model and Biggs and Collis's (1983) SOLO taxonomy. Our second set of hypotheses, concerning the perceived effects of the blended arrangement on the quality of learning, was measured with the SPSS statistical software. Our analytical approach is essentially descriptive and analytical. The processing and interpretation of the corpora show that this training mode, articulating face-to-face and distance learning, is a solution for improving the quality of learning. Indeed, the results highlight a substantial rise in the reading-comprehension level of the students who took part in the blended training compared with their counterparts in the traditional group, as well as a positive evaluation of the arrangement by all the students surveyed, which led us to conclude that the future and smooth functioning of such a learning mode depend strongly on the investment made by the students who use it.
Thesis
The general objective of the thesis is to examine the effects of a career-counselling practice with people under judicial supervision. Employment is recognised as a lever in the desistance process, yet people leaving detention struggle to access it. Guidance schemes are deployed in prison to foster vocational integration at the end of the sentence. Beyond the search for concrete solutions, current career-counselling practices aim to develop autonomy, self-construction, and the capacity to build one's life. These perspectives reflect the ambition of fostering a change of trajectory for incarcerated people. A study was conducted with 32 men enrolled in a guidance scheme during their incarceration. Changes were assessed from two types of data: on the one hand, initial characteristics linked to participants' trajectories and their knowledge of the project, and on the other hand, pre- and post-assessment self-presentations on psychological dimensions related to self-perception, namely self-efficacy, self-determination, causal attributions, self-esteem, and self-image. Initially, the participants' self-presentations did not appear functional for accessing employment or training. The results suggest that the intervention leads to a more realistic and more adaptive stance with a view to future reintegration. The thesis concludes with perspectives for supporting the vocational integration of the prison population more effectively.
Thesis
Falls affect one in three people over the age of 65 and can have serious consequences in this population. Scientific evidence indicates that physical activity is the most effective method for preventing falls in older adults. Yet, although there is now a consensus on the usefulness of physical activity for fall prevention, many barriers to participation remain. Adherence to fall-prevention programmes is the central issue. Social marketing has proven useful in designing prevention programmes; however, its use in prevention programmes for older adults, particularly in the field of physical activity, remains limited. Our general question is: could the social-marketing method be effective in promoting and increasing participation in a fall-prevention programme for older adults in France? In this context, we showed, notably through a systematic literature review, that social marketing has strong potential for promoting physical activity among older adults. In addition, the market study we conducted and the development of the communication campaign suggest that the needs and expectations of the target audience must be better taken into account in the design of prevention programmes. This thesis paves the way for social-marketing strategies for this specific population in France and confirms that adherence to physical-activity programmes remains the major issue for their effectiveness.
Chapter
Gamification-based reward systems are a key part of the design of modern adaptive instructional systems and can have substantial impacts on learner choices and engagement. In this paper, we discuss our efforts to engineer the rewards system of Kupei AI, an adaptive instructional system used by elementary and middle school students in afterschool programs to study English and Mathematics. Kupei AI’s rewards system was iteratively engineered across four versions to improve student engagement and increase progress, involving changes to how many points were awarded for success in different activities. This paper discusses the design changes and their impacts, reviewing the impacts (both positive and negative) of each generation of re-design. The end result of the design was improved learning and more progress for students. We conclude with a discussion of the implications of these findings for the design of gamification for adaptive instructional systems.
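As a purely illustrative aside, a reward system of the kind described is often driven by a versioned mapping from activities to point values; the sketch below uses hypothetical activity names and point values, since Kupei AI's actual configuration is not reported here.

```python
# Illustrative-only sketch of a versioned reward configuration; the activity names
# and point values are assumptions, not Kupei AI's actual design.
REWARD_POINTS = {
    "v1": {"practice_item_correct": 1, "lesson_completed": 5},
    "v2": {"practice_item_correct": 2, "lesson_completed": 5},  # hypothetical rebalancing
}

def award(version: str, activity: str) -> int:
    """Return the points a learner earns for an activity under a given design version."""
    return REWARD_POINTS[version].get(activity, 0)

print(award("v2", "practice_item_correct"))  # 2 under the hypothetical rebalanced design
```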
Article
Understanding the sources and sinks of methane (CH4) is critical to both predicting and mitigating future climate change. There are large uncertainties in the global budget of atmospheric CH4, but natural emissions are estimated to be of a similar magnitude to anthropogenic emissions. To understand CH4 flux from biogenic sources in the United States (US) of America, a multi-scale CH4 observation network focused on CH4 flux rates, processes, and scaling methods is required. This can be achieved with a network of ground-based observations that are distributed based on climatic regions and land cover. To determine the gaps in physical infrastructure for developing this network, we need to understand the landscape representativeness of the current infrastructure. We focus here on eddy covariance (EC) flux towers because they are essential for a bottom-up framework that bridges the gap between point-based chamber measurements and airborne or satellite platforms that inform policy decisions and global climate agreements. Using dissimilarity, multidimensional scaling, and cluster analysis, the US was divided into 10 clusters distributed across temperature and precipitation gradients. We evaluated dissimilarity within each cluster for research sites with active CH4 EC towers to identify gaps in existing infrastructure that limit our ability to constrain the contribution of US biogenic CH4 emissions to the global budget. Through our analysis using climate, land cover, and location variables, we identified priority areas for research infrastructure to provide a more complete understanding of the CH4 flux potential of ecosystem types across the US. Clusters corresponding to Alaska and the Rocky Mountains, which are inherently difficult to capture, are the most poorly represented, and all clusters require a greater representation of vegetation types.
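As an illustrative aside, the clustering workflow described above (dissimilarity, multidimensional scaling, and cluster analysis yielding 10 clusters) could be sketched roughly as follows; the placeholder feature table, Euclidean distance metric, and choice of k-means are assumptions, not the article's exact procedure.

```python
# Rough sketch of a dissimilarity + MDS + clustering pipeline under assumed inputs:
# a site-by-variable table of climate (temperature, precipitation) and land-cover features.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                 # placeholder for climate/land-cover features
X_std = StandardScaler().fit_transform(X)      # put variables on a comparable scale

D = squareform(pdist(X_std, metric="euclidean"))          # pairwise dissimilarity matrix
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)              # multidimensional scaling
labels = KMeans(n_clusters=10, n_init=10,
                random_state=0).fit_predict(coords)        # 10 clusters, as in the article
```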
Article
Introduction: Greece was hit particularly hard by the latest economic recession. Method: Using a quasi-experimental design, we examined whether and how psychosocial resources promoted and/or protected youth's school adjustment (academic achievement, school engagement, and conduct) and psychological well-being (absence of emotional symptoms) during the economic crisis. We focused on three family resources (family economic well-being, parental education, and school involvement) and one personal resource (self-efficacy). Data were collected with multiple methods and informants. We compared two cohorts of adolescents, closely matched through Inverse Probability of Treatment Weighting, who lived in the same neighborhoods, one before (2005; N = 1057; age M = 12.7 years) and the other during (2013; N = 1052; age M = 12.6 years) the economic recession. Results: Variable- and person-focused analyses revealed that in the context of the economic recession parental education and parental school involvement promoted and/or protected youth's school adjustment, and families' economic wellbeing was linked to both externalizing and internalizing symptoms. Another key finding is that youth who exhibited positive adaptation during the economic crisis were equally well adjusted as youth who were well adjusted before the economic crisis, even though they had fewer resources. Finally, youth with more adequate psychosocial resources were able to keep the same high level of adaptation during the crisis as well-adjusted youth had before the crisis. The findings were robust regarding variations in gender and immigrant status. Conclusion: The results suggest that psychosocial resources are important in understanding the diversity in youth's school adjustment and well-being during a major economic crisis.
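As an illustrative aside, cohort matching via Inverse Probability of Treatment Weighting, as mentioned above, typically fits a propensity model for cohort membership and weights each observation by the inverse of its estimated probability; the covariate names and the logistic model in the sketch below are assumptions rather than the study's actual specification.

```python
# Minimal IPTW sketch with simulated data; column names are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cohort": rng.integers(0, 2, 500),           # 1 = 2013 (recession) cohort, 0 = 2005
    "parental_education": rng.normal(size=500),  # assumed matching covariates
    "family_income": rng.normal(size=500),
    "gender": rng.integers(0, 2, 500),
})

covariates = ["parental_education", "family_income", "gender"]
ps_model = LogisticRegression().fit(df[covariates], df["cohort"])
ps = ps_model.predict_proba(df[covariates])[:, 1]   # propensity of being in the 2013 cohort

# Inverse-probability weights: 1/ps for the 2013 cohort, 1/(1 - ps) for the 2005 cohort.
df["iptw"] = np.where(df["cohort"] == 1, 1 / ps, 1 / (1 - ps))
```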
Article
There is currently limited research on student peer leadership in the social‐emotional literature. This paper used exploratory methods of social network analysis to understand the structure of school peer relationships, peer leadership, and school climate. Self‐report measures of perceptions of peer leadership and climate were given to students during the 2016–2017 school year. Data collected from a peer leadership survey were used to calculate closeness and indegree centrality values. The results showed that student Ambassadors have higher peer nominated leadership scores compared to non‐Ambassador controls and the rest of the school. Additionally, Ambassadors did not demonstrate a change in centrality scores, non‐Ambassador students increased in centrality scores, and school climate was not correlated with the leadership centrality score. Results suggest that influence spreads, and that good leadership may be emulated among students, leading to a diffusion effect. This supports the need for good leaders in schools. Additionally, climate may not be associated with leadership centrality scores due to the length of the intervention. Future studies should look toward behavioral data to unravel what comprises positive and negative influences in Social‐Emotional and Character Development interventions.
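As an illustrative aside, the closeness and indegree centrality values mentioned above can be computed from a directed peer-nomination network; the toy edge list below is hypothetical, not the study's data.

```python
# Toy sketch of peer-nomination centrality; node names and edges are made up.
import networkx as nx

# Each edge points from a nominating student to the peer they nominated as a leader.
nominations = [("s1", "s2"), ("s3", "s2"), ("s4", "s2"), ("s2", "s5"), ("s4", "s5")]
G = nx.DiGraph(nominations)

indegree = nx.in_degree_centrality(G)    # share of peers nominating each student
closeness = nx.closeness_centrality(G)   # how quickly a student is reached by others

print(sorted(indegree.items(), key=lambda kv: -kv[1]))  # most-nominated students first
```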
Article
Full-text available
With the ongoing need for water conservation, the American Southwest has worked to increase harvested rainwater efforts to meet municipal needs. Concomitantly, environmental pollution is prevalent, leading to concerns regarding the quality of harvested rainwater. Project Harvest , a co-created community science project, was initiated with communities that neighbor sources of pollution. To better understand how a participant’s socio-demographic factors affect home characteristics and rainwater harvesting infrastructure, pinpoint gardening practices, and determine participant perception of environmental pollution, a 145-question “Home Description Survey” was administered to Project Harvest participants ( n = 167) by project promotoras (community health workers). Race/ethnicity and community were significantly associated ( p < 0.05) with participant responses regarding proximity to potential sources of pollution, roof material, water harvesting device material, harvesting device capacity, harvesting device age, garden amendments, supplemental irrigation, and previous contaminant testing. Further, the study has illuminated the idiosyncratic differences in how underserved communities perceive environmental pollution and historical past land uses in their community. We propose that the collection of such data will inform the field on how to tailor environmental monitoring efforts and results for constituent use, how community members may alter activities to reduce environmental hazard exposure, and how future studies can be designed to meet the needs of environmentally disadvantaged communities.
Thesis
Fostering epistemic beliefs is considered an important task in teacher education. In the accompanying educational-science component of their studies, however, students tend toward unfavourable beliefs: compared with disciplinary knowledge, educational-science knowledge is judged to be poorly systematised, subjective, and of little practical relevance. How students can be supported in developing appropriate beliefs about knowledge in the educational sciences has not yet been sufficiently investigated empirically. For this reason, three short interventions were developed on the basis of the literature, each grounded in a specific support strategy: (1) direct, explicit addressing of epistemic beliefs, (2) indirect, implicit addressing, and (3) a combination of direct and indirect approaches. The suitability of the three interventions was examined in a quasi-experimental mixed-methods design comprising two sub-studies. The quantitative sub-study involved a measurement of change and showed that students in the direct and the combined interventions tended to develop toward more reflective beliefs, whereas the indirect intervention remained ineffective. The qualitative sub-study comprised the analysis of follow-up interviews conducted with participating students after the interventions. The results, first, supported the findings of the quantitative sub-study, with the combined intervention showing particular potential for change. Second, the mechanisms underlying the different interventions could be identified. Third, further insights into the basic mechanisms of change in epistemic beliefs were gained, allowing the existing model of change in epistemic beliefs to be refined and extended with individual factors.