
A Framework for Examining
Research Practice Partnerships for
K-12 Computer Science Education
February 2022
Monica M. McGill, CSEdResearch.org
Amanda Menier, SageFox Consulting Group
Stacey Sexton, SageFox Consulting Group
Rebecca Zarch, SageFox Consulting Group
Alan Peterfreund, SageFox Consulting Group
Maral Kargarmoakhar, Florida International University
This material is based upon work supported by the U.S. National Science
Foundation under Grant No. 1745199. Any opinions, findings, and
conclusions or recommendations expressed in this material are those of
the author(s) and do not necessarily reflect the views of the National
Science Foundation.
Suggested Citation: McGill, M.M., Menier, A., Sexton, S., Zarch, R., Peterfreund, A., Kargarmoakhar,
M. (2022). A Framework for Examining Research Practice Partnerships for K-12 Computer Science
Education. CSEdResearch.org: Peoria, IL USA
Acknowledgement: As our partners on this NSF-funded work, we acknowledge and thank the CS-
forALL team for their support and discussions on this work.
Copyright © 2022. All Rights Reserved.
Published by CSEdResearch.org with encouragement and support from SageFox Consulting Group.
Licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License (the “License”). You
may not use this file except in compliance with the License. You may obtain a copy of the License
at https://creativecommons.org/licenses/by-nc-sa/4.0. Unless required by applicable law or agreed
to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
First printing, February 2022
Contents
1 Introduction ...................................................... 5
2 Research Practice Partnerships .................................... 8
2.1 Definition and Key Components .................................... 8
2.2 Process Commonalities and Framework Implementations .............. 9
2.2.1 Process Commonalities .......................................... 9
2.2.2 Implementation Frameworks ...................................... 11
2.3 Roles and Responsibilities ....................................... 11
2.4 General RPP Benefits ............................................. 13
2.5 General Challenges ............................................... 14
2.6 Assessing RPPs and Their Value ................................... 16
2.6.1 The Five Dimensions of Effectiveness Model ..................... 16
2.6.2 The Wilder Collaboration Factors Inventory ..................... 17
2.6.3 The RPP for CS Health Assessment ............................... 17
2.6.4 The Wentworth et al. Survey .................................... 17
2.6.5 SWOT Analysis .................................................. 18
2.6.6 Student Outcomes Assessment Model .............................. 18
3 Study Design and Author Reflexivity ................................ 19
3.1 Study Design ..................................................... 19
3.2 Researcher Description and Author Reflexivity .................... 19
4 Framework for Analyzing Partnerships ............................... 21
5 Component Analysis ................................................. 23
5.1 Structure for Evaluating the RPP’s Theory of Change .............. 23
5.2 Data Collection .................................................. 24
5.3 Data Analysis .................................................... 24
5.3.1 Phase I Analysis ............................................... 24
5.3.2 Phase II Analysis .............................................. 24
6 Component Analysis Results ......................................... 26
6.1 Phase I Analysis: Goals .......................................... 26
6.2 Phase II Analysis: Exploratory & Rigorous Replication Analyses ... 28
6.2.1 Actions ........................................................ 28
6.2.2 Equity Dimensions .............................................. 29
6.2.3 Target Groups of Actions ....................................... 30
6.2.4 Revised Codebook ............................................... 30
6.2.5 Theory of Change ............................................... 32
7 Discussion ......................................................... 34
7.1 Central Contributions ............................................ 34
7.2 Similarities and Differences from Prior Theories/Research Findings 34
7.2.1 Major Components ............................................... 34
7.2.2 Actions ........................................................ 35
7.2.3 Targets Acted Upon ............................................. 35
7.2.4 Equity Dimensions .............................................. 36
7.2.5 Theory of Change ............................................... 36
7.3 Alternative Explanations of Findings ............................. 36
7.4 Strengths and Limitations of this Study .......................... 37
7.5 Ethical Dilemmas or Challenges Encountered ....................... 37
7.6 Implications for Future Work ..................................... 38
8 Conclusion ......................................................... 39
9 References ......................................................... 40
Appendices ........................................................... 46
A Selected Projects .................................................. 46
B Revised Theory of Change Codebook .................................. 48
B.1 Theory of Change Codes ........................................... 48
B.2 Equity Dimensions Codes .......................................... 49
B.3 Target of Actions ................................................ 50
B.4 Action Codes ..................................................... 51
1. Introduction
Research Practice Partnerships (RPPs) have been increasingly used in K-12 education to address
general problems of practice through a unique collaboration that includes various stakeholders (i.e.,
researchers and practitioners) committed to designing and implementing solutions. Recent examples
of the types of problems that RPPs seek to address include reducing student disciplinary infractions
and the number of failed courses, improving student grades and attendance (Cannata et al., 2019), obtaining
evidence that could inform policy and help students make decisions about their future educational and
career paths (Wentworth et al., 2017), and examining and addressing issues of inequity and access to
equalize status and increase student engagement (Wentworth et al., 2017).
Although the formal concept of and implementation frameworks for RPPs have evolved over the
last 30 years, in the context of K-12 computing education RPPs are relatively new. In 2017, the U.S.
National Science Foundation (NSF) issued a call for proposals for implementing Research Practice
Partnerships (RPP) for Computer Science (CS) for All (National Science Foundation, 2020).¹ The
intent of this K-12 initiative was to foster research into and development of curricula based on mutual
partnerships between researchers and practitioners and to learn from these implemented projects.
It is worth noting that this funding program is currently a large portion of the federal funding
streams for researchers seeking to do pre-K-12 Computer Science Education work. Thus, much of
federally-funded CS work is by necessity happening via an RPP modality. Given the relative newness
of pre-K-12 computing education and computing education research, it is likely that this will have
impacts on the overall trajectory of the field, the speed with which it can propose, test, and refine
new ideas and approaches, as well as the nature of work around issues of equity, among others. Thus,
work to analyze, understand, and compare RPPs could also provide insight into the development of the
pre-K-12 computing education discipline overall.
Since 2017, 117 unique projects have been born of this initiative, yielding many new RPPs for CS
in various stages of progress (see Table 1.1). Projects ranged from the early elementary grades to high
school, as well as pathways to post-secondary schools, and sought to address such problems as:
• Challenges associated with providing every high school student a high-quality, introductory CS
¹ The program synopsis reads: “This program aims to provide all U.S. students the opportunity to participate in computer
science (CS) and computational thinking (CT) education in their schools at the preK-12 levels. With this solicitation, the
National Science Foundation (NSF) focuses on researcher-practitioner partnerships (RPPs) that foster the research and
development needed to bring CS and CT to all schools. Specifically, this solicitation aims to provide high school teachers
with the preparation, professional development (PD) and ongoing support that they need to teach rigorous computer science
courses; preK-8 teachers with the instructional materials and preparation they need to integrate CS and CT into their teaching;
and schools and districts the resources needed to define and evaluate multi-grade pathways in CS and CT.”
Year    Awarded Funding Amount    Number of Projects
2017    $27,588,674               34
2018    $24,952,423               37
2019    $20,111,122               26
2020    $25,324,183               54
2021    $24,842,137               49
Total   $122,818,539              200 (158 unique²)

Table 1.1: U.S. National Science Foundation’s Computer Science for All Initiative, 2017-2021.
course (Henrick et al., 2019),
• Lack of engagement, learning, and student participation in computer science education, including the role of relationships with peers, staff, and other mentors, as well as whether digital micro-credentials can be used to increase engagement and learning (Denner et al., 2019),
• Lack of teacher and school capacity to implement maker activities (Fancsali et al., 2019),
• Lack of computational thinking and computer science education in middle school (Gilbert et al., 2018; Wiebe et al., 2019),
• Lack of equitable access to Advanced Placement (AP) Computer Science Principles (CSP) courses for all students (Mark et al., 2020), and
• Lack of computer science curriculum to teach students with learning differences (Wille et al., 2016).
Within these projects, RPPs have been conceptualized and implemented in various ways across
computing education. Coburn et al. recognize a need for comparative studies across RPPs to better
understand them (Coburn et al., 2015). However, there are no existing frameworks to serve as a means
for deconstructing these partnerships in a formal manner that would allow for consistent comparisons
and analysis. To do this holistically, three frameworks are needed for analyzing and comparing:
• The partnership portion of the RPP,
• The educational context (i.e., computer science) of its implementation, and
• The output of the RPPs (e.g., contributions to their institutions and to the broader CS education research community, long-term versus short-term impacts).
In this research study, we create the first framework based on the following research questions:
1. What would a broad framework for analyzing and comparing the partnership aspect of RPPs for primary and secondary computer science education entail?
2. If we investigated one component of the framework, what would we learn from its analysis?
To explore these questions, we provide a literature review that covers the different facets of RPPs,
including how they can be structured and their known benefits and challenges. Using this knowledge
as well as our deep understanding of RPPs in CS, we propose a framework for RPPs that could be
used for analysis. To gain an understanding of how one of the components of the framework can be
used, we conducted a content analysis of the project descriptions, broader impact statements,
and intellectual merit statements from a subset of the 117 funded NSF CSforAll: RPP projects. This
provided insight into how this one concept can be examined and some of the other considerations that
must be made when comparing data within the other components.
This study is relevant to researchers, practitioners and other stakeholders within the CS education
community who want to understand how to compare and analyze multiple RPPs or how to compare a
single RPP across different years or projects. In this article, we provide a background of RPPs, followed
by our proposed framework and the results of our analysis of the first component, Theory of Change.
This is followed by a discussion of what we learned about the framework and its components and how
they can be used to conduct comparative analyses.
2. Research Practice Partnerships
Much has been written recently about RPPs in the context of education, including their benefits and
challenges. Here, we discuss central tenets of RPPs and their implementation across education.
2.1 Definition and Key Components
Though practitioners and researchers both are outcome focused and are interested in increasing academic
achievement among students, the gulf between the two has often been (and is still) very wide (Wanzer,
2019). Formerly referred to as School-University Partnerships (Brookhart and Loadman, 1992; Gifford,
1986; Schlechty, Whitford, et al., 1988), these partnerships were established 40 years ago for many of
the same reasons they are today: to solve the problems that arose from the deep separation of research
from practice. Once the research concluded, the findings would then be disseminated to practitioners.
The research was often conducted in silos and the hand-off from researcher to practitioner did not
always meet the critical needs that practitioners faced or may not have adequately considered the
context of their work (Hod et al., 2018; Penuel and Farrell, 2017).
Full participation by practitioners in the course of conducting research could mitigate these problems
and ensure that the practitioners’ voices, context, and experience are considered. Likewise, practitioners
learn how to study and learn from research, building the knowledge needed for practitioners to leverage
research in their decision-making within a particular context (Coburn et al., 2013; López Turley and
Stevens, 2015; Penuel et al., 2015; Resnick and Kazemi, 2019; Tseng et al., 2017). Through RPPs, the
gap between researchers and practitioners lessens (Boser and McDaniels, 2018). Ghiso et al. point out
nuances in the formation of RPPs, noting:
Research-practice partnerships (RPPs) call on forms of professional knowledge that may
have traditionally been less visible or valued in the academy. Collaborative research teams
are engaged in deeply relational intellectual and emotional labor: They have to develop
methodological sensibilities and skills that are attentive to issues of power and have to
negotiate social and institutional boundaries. (Ghiso et al., 2019, p. 1)
In the context of education, Coburn et al. defined RPPs as “...long-term collaborations between
practitioners and researchers that are organized to investigate problems of practice and solutions for
improving schools and districts” (Coburn et al., 2013, p. 48). Their intent is to “...leverage research
to address persistent problems of practice” (Henrick et al., 2017, p. 1) for improving districts and
schools (Coburn et al., 2013). RPPs are intentionally organized and can be focused within a single
school, but typically they involve several schools, a single school district, multiple school districts and
even supporting agencies. They also can be formed across distributed networks (e.g., special education
providers across a state) (Coburn and Penuel, 2016; Coburn et al., 2013).
Three basic tenets of RPPs are that they are long-term collaborations, that they are mutualistic, and that
they build and maintain trust among their participants (Henrick et al., 2016). The long-term
component of the structure and intent of RPPs is indeed paramount to their success. The long-term
approach allows for the time and space needed to institute a continuous improvement paradigm
(Shakman et al., 2017), including the Plan, Do, Study, Act (PDSA) cycle that needs to be continually
repeated to identify promising practices and bring those practices to scale.
In addition to being collaborative, long-term, and focusing on problems of practice, by their
very nature RPPs are designed to be mutualistic, to equalize the power structure between researchers
and practitioners, and to elevate the concept of joint work where researchers and practitioners work
collaboratively to design and implement solutions, study their impact, and act by redesigning and
refining their solutions in order to increase impact (Penuel et al., 2015). The investigation of the
problems and their potential solutions are co-created by both researchers and practitioners (Bevan,
2017). Trust is a key element of a successful partnership, and reliance on roles and responsibilities
that are established upfront helps ensure that proper boundaries are set and trust is maintained. This
trust is built upon the discourse around the problems which they seek to solve mutually and for similar
interests (Hod et al., 2018).
RPPs also involve original analysis of data, a practice that involves the collection of data within the
context of the problems being solved, within the context of the district(s) or school(s), and/or within the
context of an intervention, program, or reform strategy (Coburn et al., 2013). This enables district
leaders to become familiar with the data and to analyze and interpret it in a way that considers the
unique district frameworks within which they operate.
2.2 Process Commonalities and Framework Implementations
2.2.1 Process Commonalities
There are similar and shared functions among different ways in which RPPs are implemented. Figure
2.1 shows the various steps of how RPPs function and the important key processes within them (Lash
et al., 2019). We briefly highlight each here.
Establish an Equitable Partnership. Connolly notes that even with RPPs, "everything grows from
a strong foundation" (Connolly, 2019, p. 1). Part of this is also recognizing that the ecosystem
of connected academic enterprises and institutions can result in positive change that impacts youth
(Connolly, 2019; Wiebe et al., 2019).
Create a Memorandum of Understanding. To facilitate the partnership, rules of engagement can help lay
the groundwork of expectations, roles and responsibilities for the RPP (Lash et al., 2019).
Collaboratively Identify the Pressing Problems. Collaboration strengthens the RPP, demonstrates
its value, and can help institutionalize the work (Connolly, 2019). It can also ensure that the right
problems of practice are being addressed (Wiebe et al., 2019). Identifying and decomposing the
pressing problems can be aided by the use of Edelson’s design methodology and other stepwise
processes that include grounding the decomposition in practice through the RPP team members’ vision
(such as "techquity"), by function, and in relation to the contexts to which it applies (Kalir, no date;
Muñoz, 2016; Resnick and Kazemi, 2019; Thompson et al., 2019). This requires a range of perspectives
and can further identify relevant stakeholders who should be included in the RPP (Resnick and Kazemi,
2019).
Include all Relevant Stakeholders. Creating the partnership should be thoughtfully based on address-
ing power imbalances, addressing issues of trust, sharing of information, and strong communication as
Figure 2.1: A Guide Map to Research Practice Partnerships
well as ensuring that collaborative researchers and practitioners are at the table and are active informers
of the research (Wentworth et al., 2017; Wiebe et al., 2019).
Identify Possible Solutions & Research Questions. A critical step in the RPP is identifying solutions
and implementing them (Muñoz, 2016) as well as formalizing the research questions that are to
be addressed. Ecosystems help in this process by offering a "...powerful lens for researchers and
stakeholders as they can answer the key problems of practice" (Wiebe et al., 2019).
Establish Shared Language. Inter-organizational practices for the RPP can ensure better communi-
cation and understanding across the research and partnership communities, including meeting routines
to encourage communication and professional support (Frumin, 2019; Santo et al., 2017a).
Conduct Cycles of Collaborative Inquiry. Collaborative inquiry can be performed through a variety
of methodological approaches that are iterative in nature and test and refine the new educational
approaches (Muñoz, 2016; Santo et al., 2017a; Schools, 2019). Methodologies can include exploratory
research (Carroll-Miranda et al., 2019), narrative ethnography analytic approaches (Denner et al., 2019),
comparative studies (Fall et al., 2019), qualitative approaches (Harrison et al., 2017), descriptive case
studies (Kalir, no date), and a variety of other qualitative and quantitative methods (Cannata et al.,
2019).
Generate Key Findings. A collaboratively developed research agenda is necessary for identifying
how findings will be discovered (Boser and McDaniels, 2018; Coburn and Penuel, 2016; Fall et al.,
2019). Findings are often generated using shared tools and common practical measurements (Frumin,
2019; Thompson et al., 2019), some of which may need to be developed for the RPP. Collaboration is
also key in conducting the research within schools and school districts to collect the data needed for
the findings. It is important to find meaningful ways to share findings as well as recommendations for
change and action (Muñoz, 2016).
Communicate & Sustain the Work. Sharing implementation processes, communicating key findings
for those who will implement the practices, and sharing key findings with other districts, researchers,
and practitioners are all key aspects of an RPP (Hod et al., 2018; Muñoz, 2016). Sustaining the work
via a continuous improvement model is necessary for the longevity of forming promising practices.
2.2.2 Implementation Frameworks
Primary models of partnerships (Penuel and Farrell, 2017) include:
• RPP Research Alliances - Typically focused on a specific district, region, or state for ongoing problems of similar interest (Cannata et al., 2019; Henrick et al., 2017)
• RPP Design/Co-Design models - Typically focused on the fully collaborative model of designing, studying, improving, and then scaling classroom practices (to, for example, the entire school district), often based on promising practices as defined by empirical evidence (Cannata et al., 2019; Henrick et al., 2017; Henrick et al., 2016; Severance et al., 2014)
• Networked Improvement Communities (NICs) - Often short-cycle improvement efforts, these communities engage education professionals, researchers, and designers in using a continuous improvement model to explore the usage and refinement of promising practices that address shared problems (Cannata et al., 2019; Henrick et al., 2017)
• Hybrid - Two or more of these models combined (Henrick et al., 2017).
These frameworks will often contain the steps in the previous section, although their structure and
organization may differ.
2.3 Roles and Responsibilities
RPPs are a long-term strategy in which practitioners and researchers come together to work in a highly
collaborative manner to solve problems of practice and find solutions for improving schools
and districts (Cannata et al., 2019; Farrell et al., 2019). There are a number of studies discussing how
researchers and practitioners can collaborate in RPPs and which common attributes lead to
success (Henrick et al., 2016; Jacob et al., 2019; Stokes et al., 2018). Researchers and practitioners in
RPP projects can work together to identify the problems and research questions (Bevan, 2017). They
can access the information they need through data collection and analysis; after analyzing the data, they
can answer their research questions and find solutions to the problems (Bevan, 2017). To close out the
research cycle, researchers and practitioners work together to identify new problems of practice and
research questions (Tseng et al., 2017).
Each study discusses how researchers’ and practitioners’ roles and responsibilities influence the
outcome of the study. In all RPPs, researchers and practitioners plan collaboratively to address
researchers’ interests and practitioners’ needs (Boser and McDaniels, 2018). To achieve mutual benefit
and meet the goals of an RPP, it is important for both sides to know and fulfill their negotiated roles and
responsibilities throughout the project (Connolly, 2019). According to a recent study, the quality of
relationships between the two groups are "...important explanatory variables of evidence use above
and beyond research relevance and rigor" (Wanzer, 2019, p. 3). To meet the goals for RPPs, and
regardless of the engagement strategy or specific roles and responsibilities, the partnership between
researchers and practitioners must be honest, transparent, and trusting (Connolly, 2019; Harrison et al.,
2017; Stokes et al., 2018). Depending on each project and the framework of the study, the impacts of
meaningful partnerships have been shown to include:
• Positive changes in teachers’ self-efficacy and sense of ownership by answering questions that matter to them,
• Improvement in the quality of teaching, the scaling of new approaches for teaching, and the building of bi-directional knowledge between researchers and practitioners,
• Researchers’ deeper understanding of school contexts,
• Expanded professional communities that comprise practice-informed researchers, and
• Improvement in students’ engagement and learning (Jacob et al., 2019; Santo et al., 2017a; Stokes et al., 2018).
Researchers and practitioners often have diverse roles. Researchers can provide the research plan,
take a leadership role in structuring the shared learning, establish roles and responsibilities, support
teachers’ development of pedagogical content knowledge through balancing researcher and practitioner
needs, collaborate with district leaders, put effort into being of service to practitioners, and provide
evidence to support a strong model (Fall et al., 2019; Henrick et al., 2016; Schools, 2019). They act as
knowledge brokers, connecting practitioners to other knowledge in real time as needed (Davidson and
Penuel, 2019), and often bring connections to external supports for implementation and evaluation and
disseminate findings (Connolly, 2019; Fancsali et al., 2019; Stokes et al., 2018).
While the term practitioner implies an array of practice-organization roles (Kali et al., 2018), teachers
are often regarded as a special population. They occupy a dual space as both the recipient of project
interventions and a critical voice within the project. The goal of teacher engagement in the RPP is to
“...foster and grow teacher leaders that participate in research in a variety of ways” (Wortel-London
et al., 2019, online). Teachers may participate in design work to create classroom materials or take on
leadership roles within the RPP, acting as conduits to their colleagues and representing the classroom
perspective.
Roles and responsibilities of researchers and practitioners depend on the RPP type. In Research
Alliances, their roles are distinct, and collaboration between them happens at the start and end of
the project. The main responsibility of practitioners is designing and implementing the policies and
programs, while researchers’ responsibility is to evaluate the policies and programs (Penuel and Farrell,
2017). Research Alliances maintain an ’insider-outsider’ perspective by acting as "independent voices
in a community that document implementation and effectiveness" (Penuel and Farrell, 2017, p. 15).
The Design Research model shares elements with, but is distinct from, the Research Alliances
model. The partnerships in this model are long-term collaborations. This model uses a co-design
approach, and researchers and leaders work together in an iterative process of identifying challenges,
testing strategies, and finding solutions (Penuel and Farrell, 2017). Kali et al. note that these tasks
require Design-Centric (DC) RPP¹ participants to take on more than the traditional roles and often
share responsibilities of consultant/facilitator, designer, and researcher (Kali et al., 2018).
In a NIC, there is no clear delineation of who is a researcher and who is a practitioner (Penuel,
2019; Penuel et al., 2015). However, others have identified some basic functions of researchers and
practitioners in NICs: researchers can take on the work of facilitating and guiding members through
the improvement process (Thompson et al., 2019). Practitioners, then, take on responsibilities for
developing measures, gathering, and analyzing data. In other words, in NICs it is assumed that researchers’
and practitioners’ roles are counter-normative to their routine responsibilities (Coburn et al., 2013;
Penuel and Farrell, 2017).
From the cumulative literature discussed above, we understand that researchers and practitioners
can be assigned varied types of roles and responsibilities. Since RPPs are collaborative projects,
the roles of researchers and practitioners are not always distinct and can often be blurred. Participants
can also define and redefine their roles and responsibilities through the project (Farrell et al., 2019), with
practitioners taking roles as researchers, and conversely researchers taking roles as practitioners (Ghiso
et al., 2019). Additionally, both practitioners and researchers can be responsible for data collection,
monitoring the fit of roles and responsibilities, gaining insight to problems of practice, and assessing
needs through observations and listening. They may also bring stakeholders into the development of
the project, its implementation and evaluation, and reporting the findings (Connolly, 2019; Fancsali
et al., 2019; Stokes et al., 2018).
2.4 General RPP Benefits
The benefits of RPPs are multi-faceted, and researchers and practitioners can both be positively
impacted due to the participatory knowledge-building process (Santo et al., 2017a). RPPs result in
higher quality research that builds capacity among the researchers, practitioners, and their institutions
that is more likely to have a positive, timely impact on students (Henrick et al., 2016; Muñoz, 2016;
Stokes et al., 2018). By their very nature, they are more equitable and ethical since they leverage
ideas, assets, and "...community stakeholder experiences and perspectives to inform research questions,
methods, and meaning-making" (Bevan et al., 2019, p. 1) (Bevan, 2017; Henrick et al., 2016). This
has the potential to discover interventions that have a higher adoption rate due to their usability and
relevance in the local context (Bevan et al., 2019; Coburn and Penuel, 2016; Henrick et al., 2016; Hod
et al., 2018; Stokes et al., 2018; Wille et al., 2017), since the rigorous research often provides better
assurance that the new practices solve the targeted problem and are institutionalized (Bevan et al.,
2019; Connolly, 2019; Stokes et al., 2018). This collaborative partnership also provides the platform
for participants to "...self-reflect about how their own expectations influenced the RPP [which] has
resulted in an honest description of the challenges that must be negotiated" (Denner et al., 2019, p.10),
including those difficult challenges that district leaders face "...when attempting to make system wide
improvements in complex education settings, particularly in high-needs priority schools" (Henrick
et al., 2016, p. 26).
The outcomes of these many benefits include improved academic achievement among students
(Coburn and Penuel, 2016; Schools, 2019; Stokes et al., 2018), student engagement (Stokes et al.,
2018), and other social-emotional factors that have been shown to impact learning (Stokes et al., 2018).
The networked community of those involved in the RPP is able to access and interpret the research,
and decision making can then be based on the interpretation of this research (Boser and McDaniels,
2018; Coburn and Penuel, 2016; Henrick et al., 2016). Tools and resources for improving curriculum
can be provided and shared more widely (Stokes et al., 2018), and this generalized knowledge can
extend beyond those involved in the RPP (Kali et al., 2018; Quartz et al., 2017).
1 Kali et al. (2018) use the term Design-Centric Research Practice Partnerships (DC-RPPs). This includes,
but is not limited to, Design Research. Other practices included in this term are design experiments, design-based research,
design-based implementation research, and educational design research.
The adoption of the continuous improvement model as a whole helps ensure continued use of "social
resources" via continued networking as well as the continued sharing of ideas, processes, materials, and
tools (Coburn and Penuel, 2016; Kali et al., 2018). The long-term nature and open-ended commitment
of RPPs lead to the acceptance and use of a continuous improvement model dedicated to addressing persistent
problems of practice (Coburn and Penuel, 2016; Dettori et al., 2018; Santo et al., 2017a) and result in a
significant amount of original data produced over time (Boser and McDaniels, 2018). Districts
and state-wide policymakers then build "...their own capacity to use and generate research effectively"
(Boser and McDaniels, 2018, p. 6).
In addition to their general benefits, RPPs have been shown to have a positive impact on individual
researchers and practitioners (see Table 2.1). Benefits to teachers include increases in confidence
and self-efficacy (Fancsali et al., 2019; Jacob et al., 2019; Stokes et al., 2018), improved classroom
practices (Stokes et al., 2018), increases in sense of ownership of research (Jacob et al., 2019), and
more awareness of advances in scholarship on improved teaching (Coburn and Penuel, 2016; Fancsali
et al., 2019; Stokes et al., 2018). Researchers also share in benefits, including a deeper understanding
of the realities of school contexts and practices (Kali et al., 2018; Stokes et al., 2018) and an increased
confidence in the value of their work (Stokes et al., 2018).
There may be another class of benefits that has yet to be documented by others or that
otherwise goes unstated, particularly at the macro level (e.g., policy, procedure, culture, or subsets of
the organizations participating in the RPP). These may include partnerships extending to new challenges
and opportunities, development of trust allowing difficult conversations to occur, and acknowledgement
and open discussion of power dynamics/power relationships by participants.
2.5 General Challenges
The first two hurdles that RPP initiators face are 1) forming the collaboration and infrastructure
for the RPP that can sustain change and 2) decomposing the problem of practice, taking into account
the holistic needs of learners, to generate the RPP's focus (Kali et al., 2018; Resnick and Kazemi,
2019; Santo et al., 2017a; Santo et al., 2017b; Wiebe et al., 2019; Wille et al., 2016). Differing priorities,
shifting goals, and differing visions and approaches can all contribute to tensions among the RPP members
(Boser and McDaniels, 2018; Denner et al., 2019; Henrick et al., 2017; Severance et al., 2014; Wanzer,
2019).
Over the multi-year course of RPPs, funding for sustaining the long-term collaborations often
presents a challenge (Bevan, 2017; Boser and McDaniels, 2018; Coburn and Penuel, 2016; Dettori et al.,
2018). Likewise, RPPs can face organizational and knowledge management issues that plague any
institution: finding and potentially hiring qualified researchers (Boser and McDaniels, 2018), employee
turnover, sharing of research across the community of those involved, turnover in leadership (Coburn
and Penuel, 2016; Henrick et al., 2017; Santo et al., 2017a), time constraints (Coburn and Penuel,
2016; Henrick et al., 2017), complexities of the institutional and RPP hierarchies (Cannata et al., 2019),
including leadership structure (Dettori et al., 2018), social dynamics (Farrell et al., 2019), the needs of
special interest groups external to the RPP (e.g., parents) (Cannata et al., 2019), lack of focus on the
guiding goals (Santo et al., 2017a), and political influences within the RPP (Coburn and Penuel, 2016;
Henrick et al., 2017). RPPs also face decisions about whether the benefits
of the RPP outweigh the expenditure of funds to conduct the research (Muñoz, 2016).
RPPs may also bring to the forefront cultural gaps and differences, including those practices and
Teachers:
- Access to usable research (Stokes et al., 2018)
- Affirmation for long-term collaboration (Frumin, 2019)
- Classroom practices (Stokes et al., 2018)
- Confidence (Fancsali et al., 2019; Jacob et al., 2019; Stokes et al., 2018)
- Creating opportunities to develop and apply new knowledge (Coburn and Penuel, 2016)
- Engagement in professional learning (Stokes et al., 2018)
- Expanded professional communities (Stokes et al., 2018)
- Knowledge and awareness of important advances in scholarship (Coburn and Penuel, 2016; Fancsali et al., 2019; Stokes et al., 2018)
- Leadership capability related to STEM improvement (Stokes et al., 2018)
- Self-efficacy (Jacob et al., 2019)
- Sense of ownership (Jacob et al., 2019)
- Personal identity (Frumin, 2019)
- Professional renewal (Frumin, 2019)
Administrators:
- Expanded professional communities (Stokes et al., 2018)
- Personal identity (Frumin, 2019)
- Professional renewal (Frumin, 2019)
- Receipt of yearly feedback to support improvement (Henrick et al., 2016)
Researchers:
- Deepened understanding of the realities of school contexts and practices (Kali et al., 2018; Stokes et al., 2018)
- Expanded professional communities (Stokes et al., 2018)
- Increased confidence in the value of their work (Stokes et al., 2018)
- Increased confidence in the outcomes of their research (Kali et al., 2018)
- Personal identity (Frumin, 2019)
- Professional renewal (Frumin, 2019)
- Receipt of yearly feedback to support improvement (Henrick et al., 2016)
Table 2.1: Impacts of RPPs on practitioners and researchers based on previously gathered evidence.
policies that are inflexible, along with awareness by the RPP of which practices and policies can be
changed (Denner et al., 2019; Hazzan et al., 2018; Henrick et al., 2017). Likewise, they introduce
a multi-party problem, which is amplified when the practitioners and researchers have no or only a
limited history of interactions (Henrick et al., 2017) and have not been trained to work together (Wanzer,
2019). Many of these organizational complexities multiply as more members are added to the RPP
(Tseng et al., 2017).
Power imbalances can inhibit the goals for equity and inclusivity in RPPs as well as inhibit the
building of trust among RPP members (Bevan, 2017; Bevan et al., 2019; Denner et al., 2019; Ghiso
et al., 2019; Henrick et al., 2017; Lash et al., 2019). This is further complicated by the complexities of
communication among RPP team members (Wanzer, 2019), including issues of shared language (Santo
et al., 2017a) and even communication about the partnership itself (Kali et al., 2018). Maintaining a
focus on the local context of the partnership work can also be a challenge, particularly when there are other forces at
play (e.g., politics, external fixtures that influence and can stress the partnership) (Henrick et al., 2017).
Equity within various aspects of the research, including the students, can be addressed in RPPs, but
often there are "...complex and interrelated problems of practice associated with the creation and scale
of new practices that aim to position educators as techquity designers and brokers" (Kalir, no date, p.
6). In this regard, working towards justice also means that challenges can arise when considering if and
when research should be conducted (Denner et al., 2019).
Building and maintaining trust among the RPP members can require significant time and commit-
ment from the researchers and practitioners (e.g., teachers and district leaders) (Boser and McDaniels,
2018; Denner et al., 2019; Henrick et al., 2017; Henrick et al., 2016; Wanzer, 2019), which can be
difficult when time is a known burden on the staff and students who participate in the activities (Muñoz,
2016). RPP teams must also be flexible and adaptable, since at times the focus of the work must be
shifted2 (Santo et al., 2017a).
Sharing the knowledge gained from the original data produced, as well as the lessons learned, throughout
the team and across the RPP can be a challenge (Santo et al., 2017a; Santo et al., 2017b). Obtaining
usable data, ensuring that practitioners understand the inquiry process and the scientific methods
involved, balancing teacher facilitation with data collection, and deciding what data constitutes evidence
must also be navigated (Denner et al., 2019; Henrick et al., 2017; Jacob et al., 2019; Schools, 2019).
Equitable sharing that presents the practitioners' voices is also problematic, since findings are often
presented at academic conferences and practitioners may not have the time or resources to commit to
this endeavor (Ghiso et al., 2019). Research findings can also raise "...unanticipated and/or politically
charged issues" (Henrick et al., 2017, p. 6) that must be navigated, and data collection involves the
establishment and maintenance of ethics related to privacy and confidentiality (Muñoz, 2016).
Research findings may also challenge practitioners' fundamental beliefs (Coburn and Penuel,
2016), encounter institutional obstacles (Boser and McDaniels, 2018), and require that teachers take the
time to shift their teaching to include practices related to the findings (Stokes et al., 2018). These can
affect implementation fidelity and quality (Bevan et al., 2019; Dettori et al., 2018). Teacher capacity
to engage in the RPP and to implement findings may also be difficult to build (Dettori et al., 2018).
2.6 Assessing RPPs and Their Value
Assessment of RPPs is important in ensuring that the key components and the value of RPPs are being
continually addressed. In this section we highlight several assessment methods.
2.6.1 The Five Dimensions of Effectiveness Model.
Although relatively new and not specifically designed for RPPs in CS, the Five Dimensions of Effec-
tiveness assessment model (Henrick et al., 2017) has already been used and referenced across a variety
of projects (Connolly, 2019; Henrick et al., 2019; Jacob et al., 2019; Lash et al., 2019) and has evidence
of validity. In this model, RPP progress is measured across the following five dimensions:
Building trust and cultivating partnership relationships
Conducting rigorous research to inform action
Supporting the partner practice organization in achieving its goals
Producing knowledge that can inform educational improvement efforts more broadly
Building the capacity of participating researchers, practitioners, practice organizations, and
research organizations to engage in partnership work.
2 A perfect example of this is the shifting required to address the impact of COVID-19 on the RPP team, the RPP's goals,
and the impact on students.
Various indicators are used across these dimensions to actually provide assessment measures. For
example, for the first dimension, Building trust and cultivating partnership relationships, there are five
indicators (Henrick et al., 2017, p. 5-6):
Researchers and practitioners routinely work together
The RPP establishes routines that promote collaborative decision making and guard against
power imbalances
RPP members establish norms of interaction that support collaborative decision making and
equitable participation in all phases of the work
RPP members recognize and respect one another’s perspectives and diverse forms of expertise
Partnership goals take into account team members’ work demands and roles in their respective
organizations
By reading this assessment model, which is carefully aligned to best practices in establishing and
implementing RPPs, one can derive a strong sense of how RPPs should be structured to acknowledge,
support, and embrace the equal partnership RPPs seek to achieve.
2.6.2 The Wilder Collaboration Factors Inventory.
The Wilder Collaboration Factors Inventory can be used to assess the collaboration and partnership
qualities among groups involved in an RPP (https://wilderresearch.org/tools/cfi-2018/start) (Connolly,
2019; Mattesich and Johnson, 2018). This vetted instrument has 44 questions across 23 factors.
Factors include the history of collaboration/cooperation, flexibility, ability to
compromise, open and frequent communication, and shared vision.
2.6.3 The RPPforCS Health Assessment.
Based on the Five Dimensions of Effectiveness model, the RPPforCS Health Assessment Tool offers
a matrix for evaluating the RPP design process over time to assess the maturity of the
RPP (Zarch and Sexton, 2019). The Tool asks participants to consider the five dimensions of RPP
effectiveness and their corresponding indicators and to state whether they have a) designed for each
indicator, b) included it in their documentation plan, and c) included it in their reflection strategy. The
Tool then asks for an example of how the project has designed for and documented some element of
each dimension (if relevant). The current iteration of the Tool was modified from the pilot, which asked
teams to rate their progress on each indicator based on their RPP community feedback. The Tool was
produced as a Google spreadsheet, which allows easy inter-team collaboration and sharing with the
RPPforCS research team.
Healthy partnerships will be proactive in giving their partnership attention. The Tool can help
facilitate the design of the RPP and reflection among partners as a part of the trust building process.
For RPPs that are struggling to function as healthy partnerships, the Tool may facilitate difficult
conversations around where and how to improve the partner dynamics.
Early in the project, the Tool can help frame discussions and set the intentions of the partnership.
Engaging the RPP project evaluator early in the process allows them to design an evaluation that is
aligned with the Health Assessment. Note, however, that the Health Assessment is comprehensive; not
all dimensions or indicators will be appropriate at any one time, so teams should prioritize the areas of importance.
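As an illustration only, the matrix structure described above might be represented as a simple data structure; the field names and the example entry below are hypothetical sketches, not taken from the actual Tool.

```python
# Hypothetical sketch of a Health-Assessment-style matrix: for each indicator
# under an RPP effectiveness dimension, a team records whether they (a) designed
# for it, (b) included it in their documentation plan, and (c) included it in
# their reflection strategy, plus a free-text example. Field names are illustrative.
health_matrix = {
    "Building trust and cultivating partnership relationships": {
        "Researchers and practitioners routinely work together": {
            "designed": True,
            "in_documentation_plan": True,
            "in_reflection_strategy": False,
            "example": "Monthly co-design meetings with shared notes",
        },
    },
}

# Count how many indicators the team has designed for, across all dimensions.
designed_count = sum(
    entry["designed"]
    for indicators in health_matrix.values()
    for entry in indicators.values()
)
print(designed_count)  # 1
```

A spreadsheet (as the actual Tool uses) serves the same purpose; the point is that each indicator carries three yes/no flags plus an evidence example.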
2.6.4 The Wentworth et al. Survey.
The assessment framework provided by Wentworth et al. can be used to examine the impact of RPPs on
behaviors, "such as educators’ evidence-based decision-making, in the context of school and district
improvement efforts" (Wentworth et al., 2017, p. 250). To support this assessment framework, the
authors developed a survey instrument that measures several key components of RPPs, including the
"utility of the research and the partnership itself, the quality of the research produced by the partnership,
the relationships within the partnership, the operations of the partnership, access to the research by
practitioners and, generally, the amount of time it takes participants to interact within the partnership"
(Wentworth et al., 2017, p. 250).
2.6.5 SWOT Analysis
A Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis was used to conduct an
evaluation of RPPs in STEM in Israel (Groff, 1983; Hazzan et al., 2018). SWOT analysis is a
well-known assessment method used in businesses to help identify strengths, mitigate weaknesses,
seize opportunities, and recognize threats, all in an effort to improve the processes and functions of
an organization. In their analysis, for example, Hazzan et al. found several strengths of RPPs across Israel,
including that there were "...(a) multiple activities in STEM education, (b) a large research community
in higher education departments in the universities that specialize in STEM subjects, and (c) other
research institutions that engage in STEM education research and development" (Hazzan et al., 2018,
p. 52). Although this particular use of SWOT was focused on multiple RPPs across one country,
SWOT as a snapshot of each component at a particular point in time could be
useful for principal investigators/directors of RPPs seeking to improve their processes.
2.6.6 Student Outcomes Assessment Model.
The Cannata et al. Student Outcomes Assessment Model uses a "difference-in-differences estimation
strategy", a way to "...compare student outcomes among the innovations schools to
the remaining schools in the district" (Cannata et al., 2019, p. 5). Outcome measures can (and should)
be defined collaboratively with stakeholders (such as district leaders) based on the jointly desired
outcomes. This can include a mixture of quantitative and qualitative measures, but should take the
context of those partnership participants into account. To aid in the assessment process, the authors
created a survey that can be found in Cannata et al.
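As a sketch of the difference-in-differences logic (with made-up numbers, not data from the study), the estimate is the pre-to-post change in the innovation schools minus the corresponding change in the comparison schools:

```python
# Minimal difference-in-differences sketch with illustrative (made-up) means.
# Groups: "innovation" schools (treated) vs. remaining district schools
# (comparison), each measured before and after the intervention.
mean_outcome = {
    ("innovation", "pre"): 62.0,
    ("innovation", "post"): 71.0,
    ("comparison", "pre"): 60.0,
    ("comparison", "post"): 64.0,
}

def diff_in_diff(means):
    """Change in the treated group minus change in the comparison group."""
    treated_change = means[("innovation", "post")] - means[("innovation", "pre")]
    comparison_change = means[("comparison", "post")] - means[("comparison", "pre")]
    return treated_change - comparison_change

print(diff_in_diff(mean_outcome))  # 9.0 - 4.0 = 5.0
```

Subtracting the comparison group's change nets out district-wide trends that would have affected the innovation schools anyway, isolating the effect attributable to the intervention (under the usual parallel-trends assumption).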
3. Study Design and Author Reflexivity
3.1 Study Design
After reviewing the literature, it became apparent that, if the framework were to be carefully created,
even one component would require extensive analysis and testing given the complexities of RPPs. In
its entirety, this could easily result in a multi-chapter book. To better scope our research, we revised
our methodology to manage this complexity in relation to our needs. We took the following
approach:
1. Carefully consider the literature and previously referenced components of an RPP
2. Blend this with our own understanding of RPPs to create a compositional structure for the framework
3. Consider what the first component should contain based on an analysis of the literature
4. Test the first component's structure for feasibility by vetting the components against a subset of NSF-funded RPPs
5. Prepare future work to further vet the proposed component structure
6. Prepare future work to build structures for each of the other components
We chose to complete the first four steps and leave steps five and six for future work.
3.2 Researcher Description and Author Reflexivity
We developed our research questions and chose to conduct this research based on our intensive work
with the NSF CS for All: RPP community as computer science education researchers and evaluators.
Within this capacity, we have relied upon our theoretical and practical knowledge of RPPs in general,
our work within the CS education community, and the intersection of the two. The authors include
education and computer science education researchers and practitioners (including former K-12 CS
practitioners).
Our research analysis spans the next two sections, with Section 4 examining the creation of the
broader framework to answer the first research question and Section 5 examining the first component
of the framework. The former relies on our literature review and understanding of RPPs. The latter
also relies on this information, and we also use a secondary content analysis to strengthen our work.
The coding for the content analysis was conducted by four of the authors. Two authors conducted
the first phase of analysis: one with experience conducting content analysis and
other qualitative studies in computer science education, and the other a PhD student who worked
alongside them as a research assistant. Two other authors conducted the second phase of analysis, one
of whom is a PhD student with a focus on organizational studies, gender inclusivity, and qualitative
research, and the other who has been evaluating education programs and research projects for five years
and who is closely involved with the NSF CS for All: RPP community. Both of the authors involved in
the second portion of the analysis have education policy research interests.
We have brought our knowledge into this research in several ways. This research study itself
was first born from our desire to understand the uniqueness of RPPs for computer science education,
particularly how they differ from RPPs in general disciplines. We understood that a more comprehensive
literature review would need to be conducted to ensure that no other framework exists. This knowledge
helped us focus and frame our research on key aspects of RPPs.
In creating the compositional structure for the broader framework (Section 4), we were able to
draw upon our understanding of the structure of RPPs, their benefits, and their challenges to consider
what a compositional structure might entail. Although the section is brief, this was necessary for us as
we relied upon our previous knowledge to guide the development of this structure. In developing a
framework for one of the components, Theory of Change, we again relied on our experience and
knowledge of this theory (although we let the actual construction be guided by previous
research). In vetting the Theory of Change framework using project data, we utilized our
existing collection of project abstract data to expedite this process. We randomly selected 15 projects.
Throughout the remainder of this article, we note when we rely on this knowledge more
heavily, to better inform the reader of when our own perspectives shape our understanding and
interpretation of the content.
4. Framework for Analyzing Partnerships
To answer the first research question, What would a broad framework for analyzing and comparing the
partnership aspect of RPPs for primary and secondary computer science education entail?, we first
considered previous research as referenced in our background section (Section 2). In particular, we
looked for previous research articles that presented categories for discussing and/or requirements for an
RPP–not necessarily processes, but fluid, moving parts. Of the many articles reviewed in Section 2,
works of two sets of authors stood out as a potential starting point:
Penuel and Farrell state that the shared "DNA of RPPs" comprises the problems of practice they seek to
resolve, mutualism, strategies to foster partnerships, and original analysis of data (Penuel and
Farrell, 2017).
Tseng et al. provide their own analysis of RPPs by defining four primary categories, includ-
ing structuring the partnership (including long-term goals), developing shared commitments,
producing and using research, and funding and similar partnerships (Tseng et al., 2017).
We combined these two to form the base compositional structure for our framework. We also
drew from our own categorizations for partnership structure and organization that were built from the
literature. The resulting components included the following:
Problem(s) of practice in K-12 computing education that the RPP seeks to solve
Partnership structure and organization
RPP framework
Roles and responsibilities
Long-term goals
Strategies employed that foster partnerships
Development of shared commitments
Mutualism
Original analysis of data
Production and use of research
Funding to create and sustain the RPP
Once we stepped back to reflect on this, we noticed that a core component was missing that
enabled the RPP to motivate change. We refer to this as an RPP's Theory of Change (Organizational
Research Services, 2004), with the knowledge that establishing a Theory of Change is a critical piece of
structuring and evaluating projects designed to change educational (and other types of) programs.
Further, although assessment measures and outcomes are not part of the categories established by
Table 4.1: Compositional Structure for Analyzing an RPP. The remainder of this report focuses on
Theory of Change.
- Theory of Change
- Mutualism
- Resources to Create/Sustain
- Problems of Practice
- Original Analysis of Data
- Assessment Methods
- Partnership Structure
- Production and Use of Research
- Outcomes
either Penuel and Farrell or Tseng et al., our and others’ previous research in this area (see Section
2.6) indicates that assessment is a critical part of the continuous improvement process for long-term
collaborations such as RPPs (Connolly, 2019; Henrick et al., 2017; Mattesich and Johnson, 2018; Zarch
and Sexton, 2019).
Likewise, outcomes were not part of the original categories, but given our goals for
this article and the future need to see the pathways that RPPs take to meet intended goals, looking at
the outcomes produced by the RPPs to date will provide insight into their early findings.
Finally, analysis of outcomes can also lead to a more comprehensive understanding of
the types of outcomes funded by the NSF, how these correspond to what we know about academic
achievement in the context of computer science and computational thinking, and whether there are gaps
in this understanding that remain to be filled. This also provides a component dedicated to examining
intended and unintended outcomes, similar to those described in Sections 2.4 and 2.5.
We folded these additional categories (Theory of Change, Assessment Methods, and Outcomes)
into the compositional structure, resulting in nine possible components that could be used to analyze
and compare one RPP across multiple years or multiple RPPs (see Table 4.1). Based on the literature,
our own deep knowledge of RPPs, a thorough review by the authors, and further discussion of these
components as they align to the literature presented in Section 2, we consider these categories to be
comprehensive and a valid starting point for vetting.
5. Component Analysis
To answer the second research question, If we investigated one component of the framework, what
would we learn from its analysis?, we turned our attention to the first component, Theory of Change.
5.1 Structure for Evaluating the RPP’s Theory of Change
A Theory of Change is an essential road map for achieving change in any organization, and the structure
for creating these theories is particularly well-suited for non-profits and educational collaborations like
RPPs. According to Organizational Research Services (Organizational Research Services, 2004), there
are six primary steps for constructing a Theory of Change:
1. Clarify the goals
2. Identify powerful strategies to reach the goals
3. Create "so that" chains to define the outcomes
4. Link strategies with outcomes and goals
5. Test the logic and relevance
6. Articulate assumptions
We then examined each of these steps for constructing a Theory of Change to determine to what degree
they could be used to analyze an RPP. We noted that step 5, test the logic and relevance of the Theory
of Change, is a process of vetting the Theory of Change and not part of the Theory of Change itself.
What remains are five steps that can be transformed into five guiding questions:
1. What are the goals of the RPP?
2. What strategies will (or did) the RPP use to meet those goals?
3. What are the pre-defined outcomes of the goals?
4. How do the strategies map to the outcomes and goals?
5. What are the underlying assumptions of the RPP (e.g., principles about how the RPP will operate or belief systems in place at the schools or within the communities)?
We postulated that these questions could potentially be used to analyze the Theory of Change in an
RPP, such as its initial state, or across multiple RPPs. They could be answered by collecting several
data points, including examining the original NSF proposals, examining materials from RPP strategy
meetings, and interviewing the primary investigators of the project.
5.2 Data Collection
We decided to analyze secondary data to test our hypothesis and further analyze the Theory of Change
component against the five questions. From the 77 unique RPPs in the first three funded cohorts, we
randomly sampled five from each year (2017, 2018, and 2019), for a total of 15 RPPs.1 To perform
the sampling, we divided the number of unique projects in each year by 5 to get the nth value. We
listed the RPPs alphabetically by year based on their titles and chose every nth project for that
particular year, yielding the projects in the Appendix.
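The systematic sampling procedure above can be sketched as follows; the cohort size and project titles are hypothetical placeholders, not the actual NSF awards:

```python
# Sketch of the systematic sampling described above: for each cohort year,
# sort projects alphabetically by title and select every nth one, where
# n = (number of projects in that year) // 5, yielding 5 projects per year.
def sample_every_nth(titles, k=5):
    """Return k projects chosen by taking every nth title alphabetically."""
    ordered = sorted(titles)
    n = max(1, len(ordered) // k)
    # Take the nth, 2nth, ... items (1-indexed), up to k selections.
    return [ordered[i] for i in range(n - 1, len(ordered), n)][:k]

# Hypothetical cohort of 25 projects with placeholder titles.
cohort = [f"Project {c}" for c in "ABCDEFGHIJKLMNOPQRSTUVWXY"]
print(sample_every_nth(cohort))  # every 5th title: E, J, O, T, Y
```

Applying this per year keeps the selection spread evenly across the alphabetical list rather than clustering at one end.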
Of these 15 projects, primary investigators from seven had previously shared their
project descriptions. For the remaining eight projects, we relied only on the project abstracts that appear
on the NSF website for awarded projects.
5.3 Data Analysis
5.3.1 Phase I Analysis
We decided to analyze the secondary data that we had (15 abstracts and seven project descriptions)
by coding each against the five guiding questions. To familiarize ourselves with the data and gain
a better understanding of the types of codes that might be generated, we listed the project titles
in a spreadsheet along with their project summary or abstract, research questions, intellectual merit
statements and broader impact statements. We added five columns, one for each of the Theory of
Change guiding research questions shown above. Once this was completed, we manually analyzed the
data across each of the five categories (based on the questions above) to determine if this was a feasible
way to disaggregate the data.
Once the data was established across all of these categories for each project, we placed the goals
on a separate worksheet and summarized each into subgoals. For example, for one of the projects,
the goal was stated as "A key project goal is to transform a collection of individual teachers into a
South Carolina virtual community of ’activists’ for broadening participation in CS education and to
create a vibrant computing education community that is ready to generate interest and excitement
about CS among all students." This goal became three subgoals, Create Virtual Community of Practice
for Teachers, Recruit Teachers to Participate in Community of Practice, and Broaden interest in CS.
In total, 41 subgoals emerged. We classified the subgoals into different
categories (e.g., Build teacher capacity), as further discussed in Section 6.1.
After this analysis started, we noticed that it was inherently difficult to separate the "goals" text from
other text in the project documentation. We quickly learned that a project’s goals are not necessarily
independent, and therefore, coding multiple goals per project made it difficult to assess the overall
diversity of intended actions and equity dimensions planned for each project.
Many of the abstracts were rich in content related to Theory of Change, prompting us to consider
that we may not actually need project descriptions for analysis, and that we might be able to use the
project abstracts alone to build this framework. This led us down a different exploratory analysis path.
5.3.2 Phase II Analysis
We conducted another analysis of the data, this time focusing only on abstracts and formalizing the
process as follows:
The 15 abstracts were uploaded and reviewed using Dedoose (a qualitative analysis application)
1 We did not include 2020 in this analysis, since the 2020 abstracts were not released until after this analysis took place.
Subcodes for the Parent codes of Theory of Change, Actions, and Equity were generated based
on the questions and coding scheme described in Phase I Analysis, with additional codes added
for equity dimensions.
The abstract as a whole was assessed for the presence of each code.
Multiple subcodes from each parent code could be applied to each project.
Each abstract was read a minimum of three times
New codes were added when the existing categories did not seem to capture an action.
New codes were integrated into existing subcodes, while other times they were created as a
stand-alone subcode as appropriate. (See Table 6.6 for the complete list of derived codes.)
Code frequencies were calculated for each subcode and parent code and reported.
A limited number of code co-application dimensions were investigated.
Another pass of the codes was conducted for verification of the codes generated.
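The tallying step above can be sketched in code. The following is a minimal illustration, not the actual Dedoose workflow: the project IDs, parent codes, and subcodes are invented stand-ins for the real coding data, and a code counts once per project regardless of how many excerpts carry it.

```python
# Hypothetical sketch of the per-project code tally described above.
# Each abstract is coded with subcodes grouped under parent codes; a code
# is counted once per project no matter how many excerpts carry it.
from collections import Counter

# Illustrative coding results: project id -> set of (parent, subcode) applied.
coded_projects = {
    "P01": {("Actions", "Create"), ("Actions", "Examine"), ("Equity", "Gender")},
    "P02": {("Actions", "Create"), ("Equity", "Rural learners")},
    "P03": {("Actions", "Examine"), ("Equity", "Gender"), ("Equity", "Disability")},
}

def code_frequencies(projects):
    """Count, for each subcode and each parent code, how many projects it appears in."""
    sub_counts, parent_counts = Counter(), Counter()
    for codes in projects.values():
        for parent, sub in codes:
            sub_counts[(parent, sub)] += 1
        for parent in {p for p, _ in codes}:  # a parent counts once per project
            parent_counts[parent] += 1
    return sub_counts, parent_counts

subs, parents = code_frequencies(coded_projects)
print(subs[("Actions", "Create")])   # projects applying the Create subcode -> 2
print(parents["Equity"])             # projects with any Equity subcode -> 3
```

The same tally generalizes to any number of parent codes, which is why the subcode and parent frequencies reported below can be computed in one pass over the coded projects.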
After promising results in the Phase II analysis, we formalized the codebook and identified additional
areas of inquiry. We used consensus coding (Cascio et al., 2019) to assure inter-rater agreement. Each
abstract was read and coded separately, and the coders met to resolve discrepancies in code application. For
example, if only one rater applied the "Tailor" code, the coders would discuss this difference, come to
an agreement about whether the code application was warranted, and update their records to reflect this.
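The discrepancy-flagging part of consensus coding can be sketched as follows; the coder data and the function are hypothetical illustrations (the "Tailor" example mirrors the scenario above), not part of any published tooling.

```python
# Hypothetical sketch of the consensus-coding step: two coders' code
# applications are compared per abstract, and any code applied by only
# one coder is flagged for discussion before the records are reconciled.
def discrepancies(coder_a, coder_b):
    """Return, per abstract, the codes the two coders disagree on."""
    flagged = {}
    for abstract in coder_a.keys() | coder_b.keys():
        a = coder_a.get(abstract, set())
        b = coder_b.get(abstract, set())
        diff = a ^ b  # symmetric difference: applied by exactly one coder
        if diff:
            flagged[abstract] = diff
    return flagged

# Illustrative data: coder A applied "Tailor" to P01 but coder B did not.
coder_a = {"P01": {"Create", "Tailor"}, "P02": {"Examine"}}
coder_b = {"P01": {"Create"}, "P02": {"Examine"}}
print(discrepancies(coder_a, coder_b))  # {'P01': {'Tailor'}}
```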
While the frequency of codes was generally stable between the first and second passes of the data
in Phase II, some differences emerged. We attribute most of these changes to the formalization of the
codebook between the passes. For example, in the first pass, nine abstracts were identified as
including the Collaborate action family. In the second, five abstracts were coded with the Collaborate
code and four with the new Create: RPP code, as we felt this was a better fit. In other cases,
differences were a result of applying the codebook criteria more strictly. This is most obvious
with the Theory of Change elements (Table 6.7), where fewer abstracts were given credit for having
clear outcomes and assumptions.
6. Component Analysis Results
6.1 Phase I Analysis: Goals
We first identified 41 subgoals from our Phase I analysis. Once these were sorted and grouped by
similarity, the following categories emerged: Type of Action to be Taken, Issues of Equity/Equity
Dimensions, RPP Activities, Administrator Capacity Building, Teacher Capacity Building, Curriculum,
and Community. We viewed the first two, the type of action to be taken and equity dimensions, as
complementary to the five remaining categories (see Table 6.1). That is, every subgoal in the Teacher
Capacity Building category has a type of action ("Create," "Engage," etc.) and may also have an
equity focus (e.g., girls, low-performing schools, students with disabilities).
Components Subcomponents N %
Action to be Taken Create, Build, Design, Develop, Establish, or Implement 14 34%
Examine, Investigate, Identify, Address 11 27%
Teach, Train, Prepare 9 22%
Support, Sustain, Strengthen, Refine 8 20%
Recruit 1 2%
Engage 1 2%
Broaden 1 2%
Equity Dimensions Unspecified 5 12%
Girls 2 5%
Low Performing Schools 2 5%
Minoritized Students 1 2%
Low Socio-Economic Status 1 2%
Diverse Learners 1 2%
Disabilities 1 2%
Table 6.1: Phase I Analysis: Components of the goals are complementary to the components shown in
Table 6.2.
When analyzing the 41 subgoals, we learned that 14 (34%) involve creating or building and 11 (27%)
involve examining or investigating. Equity dimensions included unspecified target demographics (12%),
girls (5%), and low-performing schools (5%). Table 6.1 lists the types of actions and equity dimensions found.
With respect to the target of the activities within the goals, we uncovered those related to the RPP
itself (e.g., forming a new one or strengthening an existing one), teacher capacity, principal capacity,
Grouping                Subcomponent                    Count  %
Administrator Capacity  CT/CT Activities, CT Pedagogy   4      10%
                        Principals                      1      2%
Community               Parents/Caregivers              1      2%
Curriculum              Culturally Relevant Pedagogy    1      2%
                        CS Relevance                    1      2%
                        CT Relevance                    1      2%
                        Lesson plans to integrate CT    1      2%
                        Interest in CS                  1      2%
                        Scaffolded Instruction          1      2%
                        CT Instructional Materials      1      2%
                        CS Principles                   1      2%
                        Pathways                        1      2%
RPP                     New RPP                         3      7%
                        Existing RPP                    2      5%
Teacher Capacity        PD in CS/CT                     5      15%
                        Community of Practice           2      5%
                        PD to integrate CS              1      2%
                        Culturally Relevant Pedagogy    1      2%
                        Targeted/differentiated PD      1      2%
                        Ongoing support                 1      2%
Table 6.2: Phase I Analysis: These components of the goals show the target of the actions.
curriculum, and community engagement. We found that Professional Development in CS/CT was the
most prominent focus of activities (5 or 15%) and CT/CT Activities/CT Pedagogy was next (4 or 10%).
Table 6.2 lists the target activities found among the subgoals.
The goals grouping starts to shape a framework for analyzing the RPP activities (see Figure 6.1),
which lends itself to understanding whether the NSF is meeting its planned objectives for the program
as well as whether this aligns with what is known about building educational programs that boost
academic achievement.
Figure 6.1: Phase I Analysis: Target of actions (N=31, 34.5%), equity dimensions (N=13, 14.4%), and
actions (N=46, 51.1%) in the goals.
                    First Pass          Second Pass
Action Family       N   % of Projects   N   % of Projects
Create 13 87% - -
Context: CS Education - - 13 87%
Context: RPP - - 4 27%
Implement - - 4 27%
Leverage - - - 0
(Leverage, expand, enhance) 10 67% - -
(Leverage, transform, enhance) - - 10 67%
Examine 9 60% 13 87%
(Collaborate) 9 60% 5 33%
Support 8 53% 5 33%
Teach 7 47% 8 53%
Engage 7 47% 3 20%
(Empower) 6 40% - -
(Empower, give voice, mobilize) - - 4 27%
(Raise awareness, disseminate) - - 5 33%
Broaden (transform) 5 33% - -
Broaden (expand) - - 3 20%
(Embed) 4 27% 1 7%
(Tailor) 2 13% 3 20%
(Hold accountable) 2 13% 1 7%
Recruit 0 0% 0 0%
Table 6.3: Frequency of Unique Actions.
6.2 Phase II Analysis: Exploratory & Rigorous Replication Analyses
Following the methodology described in Section 5.3, for the Phase II analysis we used the project as the
level of analysis, and all code counts were performed at the project level. The rationale for this decision
was that doing so provided a more holistic analysis of each project's Theory of Change.
6.2.1 Actions
Seven unique actions were found in the first analysis, which looked at project goals only
(see Table 6.1). However, in the Phase II analysis, which examined the actions more holistically across
projects, an additional six emerged, as shown in Table 6.3 (with the number of actions per project shown
in Figure 6.2). When we look at the total number of action elements used by projects, the
percentages are much lower here, even though the raw counts are similar. For example, the "Create" category
had 13 codes in our sample and 14 in the original analysis. This means that while 87% of the 15 projects in
our tabulation planned to create as part of their project, only 34% of the 41 goals in the goal-level analysis
were related to creation in the original coding application.
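The shift in denominators is worth making concrete. A small sketch, using the report's own counts for the Create family (14 of 41 goals vs. 13 of 15 projects):

```python
# The same raw count reads very differently against the two denominators:
# 41 subgoals (Phase I, goal level) vs. 15 projects (Phase II, project level).
def pct(count, total):
    """Percentage rounded to the nearest whole number."""
    return round(100 * count / total)

# "Create" appeared in 14 of 41 goals but in 13 of 15 projects.
print(pct(14, 41))  # goal-level percentage -> 34
print(pct(13, 15))  # project-level percentage -> 87
```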
Since the Phase II analysis was more holistic, we would expect to find more codes. We note that
only one action, Recruit, was found in the first analysis and not in the second. This indicates that
the Recruit action was found in one of the project descriptions, but not in the project abstracts. This
leaves us with the hypotheses that 1) many of the projects include the majority of their actions in their
abstracts and 2) additional codes may be added upon further investigation of additional documentation
or interviews with primary investigators.

Figure 6.2: Number of Actions Found per Project. The yields from the first and second passes appear
closely aligned.
We note that the Create action family was split into parts based on context. This was done during the
second pass, as we thought it important to distinguish whether the action related to creating an RPP
or to creating new CS/CT education efforts. We also thought it important to distinguish contexts where
the team was making a new product from scratch from those where the team was putting an existing
product in place. By increasing the precision of our codes, we can more easily create a typology of
actions in future applications of the framework. We also recognize that contextualization can be added
to all of the actions, and future use of the codes may depend on the ultimate goals of the researchers.
6.2.2 Equity Dimensions
Equity occupies significant space in formal RPP practices (Denner et al., 2019; Henrick
et al., 2017; Kalir, no date; Lash et al., 2019). Table 6.4 and Figure 6.3 further quantify the framework
for specifying the degree to which projects are pursuing multiple courses of action for multiple equity
stakeholder groups. We note that the percentages in Table 6.1 appear much lower, since the equity
dimensions in the first analysis were based on 41 subgoals rather than the total number of projects (15).
                      First Pass          Second Pass
Equity Dimensions     N   % of Projects   N   % of Projects
Race/ethnicity 8 53% 8 53%
Unspecified underrepresented 7 47% 2 13%
Economic disadvantage 6 40% 6 40%
All students 6 40% - -
All (including) students - - 6 40%
All (only) students - - 0 0%
Rural learners 5 33% 6 40%
Gender 4 27% 2 13%
Disability 3 20% 3 20%
English language learners 2 13% 2 13%
Other equity - - 3 20%
Table 6.4: Frequency of Equity Dimensions. Codes added during the Phase II analysis appear in parentheses.
Figure 6.3: Number of Equity Dimensions in Projects. Passes 1 and 2 resulted in similar counts.
When analyzing the abstracts, we learned that many included multiple equity dimensions, even
if only by mentioning reaching unspecified additional disadvantaged students or expressing interest in
reaching all students in addition to a specified population. Again, we recognized the overlap
in codes between the Phase I and II analyses, which led us to believe that these would be a reasonable
addition to the framework. Another pass produced similar results, though some equity dimensions
were less frequent due to stricter criteria for coding those dimensions after the codebook was revised.
Interviewing primary investigators or investigating more project documentation would provide more
complete pictures of individual projects.
6.2.3 Target Groups of Actions
Going into our second pass, we made the decision to include Target Groups of Actions as a unique set
of potential codes (Table 6.5). We initially created the codebook based on our collective knowledge of typical
CS education and RPP project aims, and later added formal definitions for each code for reference
by future coders. Both people (e.g., Teachers) and objects/concepts/activities (e.g., Curriculum; PD;
Pedagogy) were included as potential targets. After we applied these codes to the 15 abstracts, we
revised our codebook. First, we clarified the existing codes. For example, we made the decision to
modify our code for Principals and Other School Leadership to read Principals and Other School
Personnel to be more inclusive of non-instructional staff such as librarians, counselors, and curriculum
support specialists. We then looked at all the excerpts to which we had applied the code "Other Target"
to see if there were any new codes that emerged. We identified Barriers and Policy and Governance as
potential codes to "promote" to the codebook in future analyses using this framework.
Since Targets of Action naturally correspond to Actions, two coders re-coded the set of 15 abstracts
with the formalized codebook and coded all four dimensions of interest. When analyzing by number of
targets, we found that all 15 projects incorporated four or more target groups for the actions (see Figure
6.4), indicating a multi-focus tendency.
6.2.4 Revised Codebook
These analyses only used project abstracts to ensure an equivalent amount of information was used to
evaluate each project. Abstracts are publicly available and are often the first point of contact for an
outsider to understand the project. Of the original codes that emerged from the Phase I analysis, only one
code (Recruit) did not emerge from the Phase II analysis (see Table 6.6). This indicates significant
overlap between the codes generated in the Phase I analysis, which drew on goals from both the abstracts
and project descriptions, and the Phase II analysis, in which only the abstracts were analyzed at the project
level. This gave us confidence that a Phase II analysis based on abstracts could yield
sufficient results to form a comprehensive codebook for the Theory of Change (the first component in
Grouping          Target Group                             # of Projects   % of Projects
Administrators District-level administrators 8 53%
Principals and Other School Leadership 10 67%
Community Community organizations 3 20%
Families 2 13%
Pedagogy Curriculum 9 60%
Pedagogy 6 40%
Research Research or Researchers 10 67%
Teachers Teachers 13 87%
Professional Development 10 67%
RPP RPP or RPP members 11 73%
Students Students 11 73%
Other Target Other 6 40%
Table 6.5: Unique Targets of Action per Project.
Figure 6.4: Number of Targets per Project.
Figure 4.1).
The codes also differ from what is presented in Table 6.1 in another way. We added the five
categories of Theory of Change (clarity of goals, strategies, outcomes, strategy-outcome relationships,
and assumptions) as stand-alone items in the codebook. This provided us with the ability to include
specific elements of the Theory of Change (see Section 5.1) and to determine the completeness of the
Theory of Change presented by a given project. These five categories were drawn from the six elements
of a Theory of Change outlined in Section 5.1. Item five (test the logic and relevance) was considered too
subjective to code and was excluded from the codebook and analysis. This also set the stage for our
rigorous replication pass, which confirmed and clarified the codes.
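Scoring completeness against the five retained elements could look like the following sketch; the element names and example data are illustrative stand-ins, not the actual coded dataset.

```python
# Sketch of a Theory of Change completeness score: one point per element
# coded as clearly present, out of the five stand-alone codebook items.
TOC_ELEMENTS = ["goals", "strategies", "outcomes", "strategy_outcome", "assumptions"]

def completeness(project_codes):
    """Number of the five Theory of Change elements coded as clear (0-5)."""
    return sum(1 for element in TOC_ELEMENTS if element in project_codes)

# Illustrative project: goals, strategies, and outcomes coded as clear.
example = {"goals", "strategies", "outcomes"}
print(completeness(example))  # 3 of 5 elements present
```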
It is critical to note that this analysis does not necessarily point to deficiencies in the project
abstracts. Abstracts are written at a different time than the project description and without clearly
delineated expectations. Instead, the purpose of this analysis was to determine whether this would be a
valid method for analyzing the Theory of Change in an RPP as a whole.
Grouping Code
Actions to be taken Create, build, design, develop, establish, implement
Examine, investigate, identify, address* (evaluate, assess)
Teach, train, prepare (introduce)
Support, sustain, strengthen, refine (inform, assist)
Recruit
Engage
Broaden (transform)
(Collaborate, bring together, partner)
(Embed, infuse, integrate, formalize)
(Empower, give voice, mobilize, raise awareness, disseminate)
(Hold accountable, address)*
(Leverage, expand, enhance)
(Tailor)
Equity Dimensions Disability
Economic disadvantage
English language learners
Gender
Race/ethnicity
Rural learners
(All students)
(Unspecified underrepresented)
Theory of Change Goals clear
Strategies clear
Outcomes clear
Strategy-outcome relationship clear
Assumptions clear
Table 6.6: Revised Codebook. Codes added during the Phase II analysis appear in parentheses.
6.2.5 Theory of Change
Table 6.7 shows one way to analyze the Theory of Change in a project: assessing how clearly each
element is present. The goals were clear in all project abstracts, and 14 (93%) had clear strategies.
Of the four projects that had both clear strategies and outcomes, two had a clear relationship between
the strategies and outcomes.
Figure 6.5 shows the degree to which projects have a complete Theory of Change. Most project
abstracts included at least two or three elements of a Theory of Change. Another pass with a formalized
codebook and stricter guidelines for coding produced similar results. This assured us that it is possible
to include these elements within a framework for the Theory of Change, as it is likely that we could
distinguish more of these elements through interviews with the primary investigators or through project
descriptions.
                             First Pass          Second Pass
Theory of Change Element     N   % of Projects   N   % of Projects
Goals clear 15 100% 15 100%
Strategies clear 14 93% 14 93%
Outcomes clear 6 40% 4 27%
Strategy-outcome agreement 2 13% 3 20%
Assumptions clear 4 27% 1 7%
Table 6.7: Theory of Change by Element (n=15).
Figure 6.5: The Number of Theory of Change Elements across Projects.
7. Discussion
7.1 Central Contributions
The framework shown in Figure 4.1 provides a starting point for creating the foundation for analyzing an
RPP. Once we analyzed the Theory of Change component and developed a way to analyze the CS for All:
RPP project abstracts, we derived a comprehensive codebook (see Appendix B) that was tested against
15 project abstracts with meaningful results. This work is significant, since no framework for analyzing
RPPs individually or for comparing and contrasting across RPPs previously existed.
The Theory of Change was the first component that we analyzed and we let the data guide the
emergence of the codes. We learned that what was being done, for whom, and how were captured
in many project abstracts. From our first analysis of goals, we learned that a given project may have
equity expressed in each subgoal, showing the strength of integration into the overall project.
Though we used a small sample size, we were able to establish a grouping of the projects’ goals
(Table 6.1). This grouping starts to illustrate what types of RPPs are being funded, which lends itself to
understanding whether the NSF is meeting its planned objectives and whether this aligns with what
is known about building educational programs that boost academic achievement. This prompts other
questions: is this what is generally found in all RPPs or for all RPPs funded by the NSF? Or is there a
uniqueness here that can be attributed to its context of computing education?
7.2 Similarities and Differences from Prior Theories/Research Findings
As articulated earlier, the field of RPP research is still developing. Though there has been
previous work analyzing RPPs, little has been done in the way of analyzing RPPs as a whole.
Therefore, this section examines our derived codes against the literature discussed in Section 2 to
determine whether parallels exist. Section 7.2.1 revisits how the major components materialized, and
Sections 7.2.2, 7.2.3, and 7.2.4 examine the Actions, Targets Acted Upon, and Equity Dimensions,
respectively, as they relate to the first component, the Theory of Change.
7.2.1 Major Components
The major components of the framework (Figure 4.1) were developed through the literature and our
knowledge of RPPs. Although no previous research provides a framework as complete as the one we
offer, the components were derived directly from previous research, with the basic components drawn
from Penuel and Farrell (2017) and Tseng et al. (2017). The Assessment Methods and Outcomes
components were added after revisiting the literature and acknowledging the important role they play
in RPPs (Connolly, 2019; Henrick et al., 2017; Mattesich and Johnson, 2018;
Zarch and Sexton, 2019). Theory of Change plays an important role in designing programs intent on
solving problems of practice (Organizational Research Services, 2004).
7.2.2 Actions
Reflecting on the actions discovered and how those relate to previous research, we start with the Create
family of actions. One of the first steps of RPPs is the establishment of the partnership (Connolly, 2019;
Lash et al., 2019). Further, the identification of possible solutions can require the creation of capacity
building for teachers and schools to offer CS and CT instruction (Fancsali et al., 2019; Wille et al.,
2016). Implementation considers those components of pedagogy, curriculum, PD or other existing
material that only needs implementing to meet the RPP’s objectives and can lend itself to capacity
building (Fancsali et al., 2019).
It is no surprise that Teach family actions appear frequently within these RPPs, given that they are
situated within the context of education; our coding found roughly half of the projects take this action.
More unique, however, is the empower, give voice, mobilize code, which is an action that is measurable
and driven by the NSF call for proposals while at the same time being a critical piece of an RPP
(Penuel and Farrell, 2017; Santo et al., 2019). The Engage code is also key here, as it is a necessary
part of mutualistic collaborations (Henrick et al., 2016; Penuel et al., 2015) as well as an effective
teaching mode. Both of these appeared less often in the abstracts than we might expect, given the
CS for All calls' emphasis. We cannot, however, be sure whether this was merely unstated in the
abstracts, or whether such efforts were not being made in the projects, either intentionally or through
oversight.
As indicated by their name, research is a critical part of an RPP's foundation (Hod et al., 2018; Lash
et al., 2019; Penuel and Farrell, 2017), and this is considered in our codes (Examine, Investigate, Identify,
Evaluate, Assess). This is necessary for the continuous improvement science integrated into
RPPs (Shakman et al., 2017). Likewise, raising awareness and disseminating findings is an important
part of the RPP framework; it is included in the NSF proposal call and is among the fundamental steps
for an RPP (Hod et al., 2018; Lash et al., 2019; Muñoz, 2016).
7.2.3 Targets Acted Upon
Santo et al., Stokes et al., and Jacob et al. all suggest that RPPs can have positive impacts on teachers’
self efficacy and sense of ownership of the work; improve the quality of teaching and ability to scale
new approaches; expand professional communities; and lead to improvement in students’ engagement
and learning (Jacob et al., 2019; Santo et al., 2017a; Stokes et al., 2018). Looking at the Targets for
these RPP actions, the sample of RPPs appears well positioned to achieve similar benefits: 13 projects
target teachers, 11 target students, and 10 each target Principals and Other School Leadership, PD, and
researchers themselves. These intended targets align with work known to impact student
learning: teachers, instruction, and building-level support for teachers (Farrington et al., 2012; J. Lee
and Shute, 2010).
Though the NSF hosted several workshops to support teams writing proposals for the CS for All:
RPP solicitation, we were still slightly surprised to see such a large portion of the sample (73%)
state a focus on the RPP or RPP members. This suggests an awareness that the RPP approach requires
intentional focus, effort, and planning (Henrick et al., 2016).
Six of the 15 abstracts coded identified at least one target for which we had not generated a code. Of
these "Other Targets," four related to Policy, State Administrators, or governance structures.
This aligns with what Penuel and Farrell offer as a potentially important impact that RPPs can
have, given RPPs' production and utilization of data that can be applied to the policy process.
7.2.4 Equity Dimensions
A lack of focus on guiding goals is a known challenge to RPPs (Santo et al., 2017a). Projects generally
can be seen to take on multiple actions (Tables 6.3 and 6.4) and multiple dimensions of equity they
hope to positively impact. Projects clustered around five or six actions, impacting between five and
seven targets, and addressing two to three equity dimensions. It is unclear how projects intend to manage
and navigate these multiple goals and actions. It may be necessary, however, for RPPs to take on such
complex work to make progress against their problems of practice. As Kalir reminds us, "complex
interrelated problems of practice are associated with the creation and scale of ... 'techquity'" (Kalir,
no date). This implies that even one area around which an RPP may have goals, or a Theory of
Change to impact, may itself be composed of many component problems of practice, demonstrating
the inherent complexity of RPP work.
7.2.5 Theory of Change
Having a well-defined and agreed upon Theory of Change or Action is crucial to the functioning of
the RPP and for being able to assess the RPP as part of a continuous improvement cycle (Davidson
and Penuel, 2019; Henrick et al., 2016; Penuel and Farrell, 2017). The Theory of Change needs to
be well-defined to articulate how each element of the proposed work fits into the overall Theory of
Change for the program (Davidson and Penuel, 2019). Penuel and Farrell (2017) acknowledge that
projects' Theories of Change can also be influenced by the funding organization. In this work we did
not formally examine the extent to which projects' actions, targets, and equity targets were aligned
with NSF priorities. In our sample of 15 abstracts, we found that the two most commonly articulated
elements of a Theory of Change were the Goals and Strategies. We imagine that in the full proposal
projects are able to articulate each of the other elements, though this would require future work to
examine empirically.
7.3 Alternative Explanations of Findings
While we are confident in our theoretical reasoning, we also entertain the possibility that another
path of influence has led almost every project we reviewed to include the same actions,
targets, and basic pattern of presenting the Theory of Change. Taking this viewpoint, we could view the
"NSF abstract" and the proposal from which it is drawn as its own genre of academic writing. Since NSF
applications are formal, structured, and competitive, we can borrow from institutional theory (Meyer
and Rowan, 1977) and suggest that authors strive to highlight the legitimacy of their proposals
and are not necessarily concerned with providing complete, concise summaries of their projects in their
abstracts. Because they are concerned with appearing legitimate, they may over-rely on the elements
included in the abstracts of previously funded projects; thus, the isomorphic forces (DiMaggio and
Powell, 1983) acting in the field of NSF proposal writing, rather than any rational reason related to the
needs and successes of RPPs, are responsible for the degree of similarity.
Another possibility, which would require further study, is the degree to which the NSF abstracts
correspond with the specific language used in the NSF solicitations, or whether we could see a shift over
time in response to shifts in solicitation language and priority areas. This type of research cannot
be extricated from the policy process, and NSF, as a funder, can shape the direction of fields of study
through its language, guidance, and calls for proposals (Penuel and Farrell, 2017).
7.4 Strengths and Limitations of this Study
Though we performed an extensive review of literature about RPPs for Section 2, we did not perform
a formal systematic literature review. Given this, there may be additional articles with results that
could supplement or contradict findings from other studies. A systematic literature review could lead
to modifications to the components in the compositional structure (Figure 4.1). Additionally, as each
component’s composition is considered, this may add to insight about the compositional structure
itself, including whether there are too many or too few components, or if components have overlapping
features.
Coding in Phase I analysis was limited to seven project descriptions and 15 abstracts, and coding of
the Phase II analysis was limited to the abstracts. By limiting the analysis to the NSF abstracts, some
key information for which codes could be derived could have been omitted. For example, a project by
Chicago Public Schools aims to make CS instruction available to all high school students. While the
abstract specifically mentions the equity dimensions of gender and race/ethnicity, it does not explicitly
mention other diversity dimensions that would be included in a population of "all high school students,"
such as students with disabilities and English language learners. The Phase II analysis also counted a code
only once per project, even if an abstract made multiple references to a concept or action. This could
be remedied in future code applications to determine the relative focus on a particular action in the text.
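One hypothetical way to implement that remedy is to count every mention of a code's keywords rather than recording only presence. The keyword list and abstract text below are invented for illustration; real code application would rely on human judgment rather than keyword matching.

```python
# Illustrative contrast between presence coding (used in Phase II) and
# frequency coding (the proposed remedy): frequency preserves how often a
# concept recurs, giving a sense of relative emphasis within an abstract.
import re

def presence_vs_frequency(text, keywords):
    """Return presence flags and raw mention counts for each keyword."""
    mentions = {k: len(re.findall(rf"\b{k}\b", text, re.IGNORECASE)) for k in keywords}
    presence = {k: n > 0 for k, n in mentions.items()}
    return presence, mentions

abstract = "The project will create a hub, create new curricula, and examine outcomes."
presence, mentions = presence_vs_frequency(abstract, ["create", "examine", "recruit"])
print(presence["create"], mentions["create"])  # present, mentioned twice
```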
The amount of information that can be gleaned from the abstracts may not be enough or may not
be accurate enough for the intended purpose of the analysis. Further, there is no requirement from NSF
that the abstract must contain all details of the proposer’s Theory of Change. However, looking at a
set of 15 and then comparing this to future projects can help ensure that this model works at the level
needed for understanding what the RPPs are trying to achieve holistically. Additionally, we do not
currently have access to a dataset which would allow us to examine project-level outcomes to map
against a project’s Theory of Change, thus we cannot speak to the quality of the Theories of Change
themselves or their efficacy in structuring the work to produce said outcomes.
An interesting question about this framework is whether it is specific to CS RPPs or if it could be
generalized to other disciplines. Any RPP focused on equity, for example, may be able to use the same
codes. Likewise, the Theory of Change codes are general and can easily be applied to RPPs. What
remains unclear is how the actions could be specific to the RPPs for CS or whether they may be more
generalized. To determine the transferability of these codes, a study would need to be conducted with
the framework to explore abstracts from other disciplines and determine if they produce similar results.
7.5 Ethical Dilemmas or Challenges Encountered
One significant dilemma we encountered was the degree to which we, as the coders, should read into the
abstracts or instead give the project team the benefit of the doubt when assessing the completeness
of their conceptualization. We understand that proposals and projects, and RPP projects especially due
to their complex team composition, cannot be miniaturized to an abstract in a way that completely
captures their thinking, intentions, and capabilities.
We also realized that this method of relying on the abstract may advantage writers who can provide
a compelling description but whose projects nonetheless fail to meet their well-communicated goals and
expectations. Therefore, we position this framework as developed from abstracts, cautioning that,
without further validation, it would be misguided to use it to make value judgments about
the projects or to predict project outcomes. To mitigate this, we have begun the process of vetting the
Theory of Change disaggregation model via additional analyses.
7.6 Implications for Future Work
Additional analysis is currently underway to refine the codebook and to examine all 117 unique
NSF CSforAll project abstracts for completeness against it. Further research could include:
1) adding descriptive categories to each project to look for intersections between RPP type, funding
amount, age, etc., and the number and type of elements present in each NSF abstract; 2) reducing
the action codes into theoretically meaningful categories (for example, concrete vs. abstract, or
research-focused vs. implementation-focused); and 3) examining why RPPs need comparative
frameworks, and how, when, and by whom such analysis would be performed, with what external
outcomes. This would provide greater focus on which of the many possible facets of RPPs should be
included in an analytic framework.
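As a sketch of item 1), the snippet below groups coded projects by a descriptive category and averages how many Theory of Change elements each group's abstracts contained. The category names, values, and counts are invented for illustration only; they are not drawn from the actual CSforAll data.

```python
from collections import defaultdict

# Hypothetical records: one per project abstract, with descriptive
# categories and the count of Theory of Change elements coded present
# (goals, strategies, outcomes, strategy-outcome link, assumptions; max 5).
projects = [
    {"rpp_type": "district-wide", "funding_band": "large", "elements_present": 5},
    {"rpp_type": "district-wide", "funding_band": "small", "elements_present": 3},
    {"rpp_type": "networked", "funding_band": "large", "elements_present": 4},
    {"rpp_type": "networked", "funding_band": "small", "elements_present": 2},
]

def mean_elements_by(projects, category):
    """Average number of Theory of Change elements present,
    grouped by one descriptive category (e.g., RPP type)."""
    groups = defaultdict(list)
    for p in projects:
        groups[p[category]].append(p["elements_present"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

print(mean_elements_by(projects, "rpp_type"))
# e.g. {'district-wide': 4.0, 'networked': 3.0}
```

The same grouping could be repeated for any descriptive category added to the dataset, making the intersection analysis described above a straightforward cross-tabulation.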
Our review also surfaced additional elements that might be part of the Theory of Change but may
instead be appropriate for one of the other components. These include project implementation
personnel (e.g., researchers, school personnel), intervention target (e.g., student, parent, teacher, school),
theoretical basis for the intervention (e.g., self-efficacy, unplugged computing), and aspects of equity
that the intervention addresses (e.g., economic disadvantage, race/ethnicity).
Within the larger compositional structure of the broader framework, analyzing each component as
we have done for the Theory of Change component will provide a thorough method for analyzing
projects. Researchers could then investigate one component across several projects (e.g., in a
comparison/contrast case study approach) or all components within one project to learn more about
how that project functions. This, in turn, can provide additional information to funding agencies like
the NSF about what their monies are funding and whether funded efforts are meeting the agencies'
objectives.
8. Conclusion
This qualitative research study was conducted to explore how to analyze RPPs to learn more about
them. We proffer a framework for comparing and contrasting RPPs as well as for analyzing a single
RPP (potentially using case study research methods). The framework's compositional structure of
independent components allows each component to be analyzed separately or the framework to be
applied as a whole. Further, within one of the components, Theory of Change, we created a framework
that could be used to analyze a single RPP or to compare multiple RPPs, and that can serve as a
model for creating analysis frameworks for the other components. The framework was created through
two methods of analysis with overlapping results, which provides further confirmation that our
methodology and the framework are robust. Future analysis includes examining all 117 unique RPP
project descriptions (2017-2020) to see whether the Theory of Change framework holds.
The value of this work extends beyond the RPP for CS community. A way to learn why some RPPs
make more progress than others, and which variables and factors contribute to that progress, has the
potential to redefine how RPPs are structured and how they function. As we continue this work, we
are also keen to learn how RPPs for CS differ from RPPs in other disciplines and fields.
9. References
Asbell-Clarke, J., Rowe, E., & Kidwell, R. (2017). Personalized computational thinking for grades 3-8.
https://www.nsf.gov/awardsearch/showAward?AWD_ID=1738572. (Cited on page 46)
Bevan, B. (2017). Research and practice: One way, two way, no way, or new way? Curator: The
Museum Journal,60(2), 133–141 (cited on pages 9, 11, 13–15).
Bevan, B., Henrick, E. C., McGee, S., & Dettori, L. (2019). Rpps: Love ‘em or leave ‘em? 2019
Research on Equity and Sustained Participation in Engineering, Computing, and Technology
(RESPECT), 1–4 (cited on pages 13, 15, 16).
Boser, U., & McDaniels, A. (2018). Addressing the gap between education research and practice: The
need for state education capacity centers. Center for American Progress (cited on pages 8,
11–14, 16).
Brookhart, S. M., & Loadman, W. E. (1992). School-university collaboration and perceived professional
rewards. Journal of Research in Education,2(1), 68–76 (cited on page 8).
Cannata, M., Redding, C., & Nguyen, T. D. (2019). Building student ownership and responsibility:
Examining student outcomes from a research-practice partnership. Journal of Research on
Educational Effectiveness,12(3), 333–362 (cited on pages 5, 11, 14, 18).
Carroll-Miranda, J., Ordonez, P., Orozco, E., Bravo, M., Borrero, M., Lopez, L., Houser, G., Gerena, E.,
Reed, D., Santiago, B., et al. (2019). This is what diversity looks like: Making cs curriculum
culturally relevant for spanish-speaking communities. Proceedings of the 50th ACM Technical
Symposium on Computer Science Education, 647–648 (cited on page 11).
Cascio, M. A., Lee, E., Vaudrin, N., & Freedman, D. A. (2019). A team-based approach to open coding:
Considerations for creating intercoder consensus. Field Methods,31(2), 116–130 (cited on
page 25).
Che, S., Sitaraman, M., & Kraemer, E. (2017). Cs for all: Rpp: A scalable rpp for preparing and
supporting teachers to teach culturally responsive and rigorous cs courses in sc high schools.
https://www.nsf.gov/awardsearch/showAward?AWD_ID=1738760. (Cited on page 46)
Coburn, C. E., & Penuel, W. R. (2016). Research–practice partnerships in education: Outcomes,
dynamics, and open questions. Educational Researcher,45(1), 48–54 (cited on pages 9, 11,
13–16).
Coburn, C. E., Penuel, W. R., & Farrell, C. C. (2015). Practice partnerships in education: Outcomes,
dynamics, and open questions. National Center on Scaling Up Effective Schools (cited on
page 6).
Coburn, C. E., Penuel, W. R., & Geil, K. E. (2013). Practice partnerships: A strategy for leveraging
research for educational improvement in school districts. William T. Grant Foundation (cited
on pages 8, 9, 13).
Connolly, F. (2019). Measuring the value of a research-practice partnership. https://nnerppextra.rice.edu/measuring-the-value-of-an-rpp/. (Cited on pages 9, 12, 13, 16, 17, 22, 34, 35)
Blitz, C., Nguyen, T., Trees, F., & Duncan, T. (2018). Addressing issues of equity and
engagement in computer science (cs) through a research practice partnership: The cs teaching
and learning collaboratory. https://www.nsf.gov/awardsearch/showAward?AWD_ID=1837305&HistoricalAwards=false. (Cited on page 46)
Davidson, K. L., & Penuel, W. R. (2019). The role of brokers in sustaining partnership work in education.
The Role of Knowledge Brokers in Education: Connecting the Dots Between Research and
Practice, 154 (cited on pages 12, 36).
Denner, J., Bean, S., Campe, S., Martinez, J., & Torres, D. (2019). Negotiating trust, power, and culture
in a research–practice partnership. AERA Open,5(2), 2332858419858635 (cited on pages 6,
11, 13–16, 29).
Dettori, L., Yanek, D., Hu, H., & Brylow, D. (2018). The role of researcher-practitioner partnerships
in cs4all: Lessons from the field. Proceedings of the 49th ACM Technical Symposium on
Computer Science Education, 674–675 (cited on pages 14, 16).
DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and
collective rationality in organizational fields. American Sociological Review,48(2), 147–160
(cited on page 36).
Fall, R., Hoffman, B., Rosato, J., Freeman, S., & Kaiser, D. (2019). Cs through ce: Broadening
participation and building pathways in computer science through concurrent enrollment. 2019
Research on Equity and Sustained Participation in Engineering, Computing, and Technology
(RESPECT), 1–2 (cited on pages 11, 12).
Fancsali, C., Mirakhur, Z., Klevan, S., & Rivera-Cash, E. (2019). Making science relevant for the 21st
century: Early lessons from a research-practice partnership. Proceedings of fablearn 2019
(Pages 136–139). (Cited on pages 6, 12–15, 35).
Farrell, C. C., Harrison, C., & Coburn, C. E. (2019). “what the hell is this, and who the hell are
you?” role and identity negotiation in research-practice partnerships. AERA Open,5(2),
2332858419849595 (cited on pages 11, 13, 14).
Farrington, C. A., Roderick, M., Allensworth, E., Nagaoka, J., Keyes, T. S., Johnson, D. W., & Beechum,
N. O. (2012). Teaching adolescents to become learners: The role of noncognitive factors in
shaping school performance–a critical literature review. ERIC. (Cited on page 35).
Frumin, K. (2019). Researchers and practitioners in partnership: Co-design of a high school biology
curriculum (Doctoral dissertation). (Cited on pages 10, 11, 15).
Ghiso, M. P., Campano, G., Schwab, E. R., Asaah, D., & Rusoja, A. (2019). Mentoring in research-
practice partnerships: Toward democratizing expertise. AERA Open,5(4), 2332858419879448
(cited on pages 8, 13, 15, 16).
Gifford, B. R. (1986). The evolution of the school-university partnership for educational renewal.
Education and Urban Society,19(1), 77–106 (cited on page 8).
Gilbert, B. B., Moix, D., Su, H.-C., Hammerand, E., & Kim, D. (2018). The enhancement of computa-
tional thinking and computer science in the middle grades: A research to practitioner approach
with a focus on professional development for teachers of middle grades. Society for Information
Technology & Teacher Education International Conference, 15–20 (cited on page 6).
Gordon, A., Kolodner, J., Barbato, S., Hacker, M., & Buelin-Biesecker, J. (2019). Exploring computation
integrated into technology and engineering (excite). https://www.nsf.gov/awardsearch/showAward?AWD_ID=1923552. (Cited on page 47)
Groff, W. H. (1983). Strategic planning for economic development. (cited on page 18).
Harrison, C., Davidson, K., & Farrell, C. (2017). Building productive relationships: District leaders’
advice to researchers. International Journal of Education Policy and Leadership,12(4), n4
(cited on pages 11, 12).
Hazzan, O., Heyd-Metzuyanim, E., Even-Zahav, A., Tal, T., & Dori, Y. J. (2018). Research–practice
partnerships in stem education: An organizational perspective. Application of management
theories for stem education (Pages 43–74). Springer. (Cited on pages 15, 18).
Henrick, E. C., Cobb, P., Penuel, W. R., Jackson, K., & Clark, T. (2017). Assessing research-practice
partnerships. http://wtgrantfoundation.org/library/uploads/2017/10/Assessing-Research-Practice-Partnerships.pdf. (Cited on pages 8, 11, 14–17, 22, 29, 34)
Henrick, E. C., McGee, S., Greenberg, R. I., Dettori, L., Rasmussen, A. M., Yanek, D., & Reed,
D. F. (2019). Assessing the effectiveness of computer science rpps: The case of cafecs. 2019
Research on Equity and Sustained Participation in Engineering, Computing, and Technology
(RESPECT), 1–5 (cited on pages 6, 16).
Henrick, E. C., Munoz, M. A., & Cobb, P. (2016). A better research-practice partnership. Phi Delta
Kappan,98(3), 23–27 (cited on pages 9, 11–16, 35, 36).
Hod, Y., Sagy, O., Kali, Y., et al. (2018). The opportunities of networks of research-practice partnerships
and why cscl should not give up on large-scale educational change. International Journal of
Computer-Supported Collaborative Learning,13(4), 457–466 (cited on pages 8, 9, 11, 13, 35).
Hodge, L., Sadovnik, A., & Rosenberg, J. (2019). Cs for appalachia: A research-practice partnership
for integrating computer science into east tennessee schools. https://www.nsf.gov/awardsearch/showAward?AWD_ID=1923509&HistoricalAwards=false. (Cited on page 46)
Hollis, S. (2018). Collaborative research: Identifying participation barriers to computer science edu-
cation in rural mississippi. https://nsf.gov/awardsearch/showAward?AWD_ID=1837407&
HistoricalAwards=false. (Cited on page 46)
Hutchison, A., Offutt, J., Gutierrez, K., Evmenova, A., & Colwell, J. (2018). Preparing k-5 teachers to
integrate the computer science standards of learning in inclusive classrooms to support students
with high incidence disabilities. https://nsf.gov/awardsearch/showAward?AWD_ID=1837380&
HistoricalAwards=false. (Cited on page 46)
Jacob, S., Nguyen, H., Richardson, D., & Warschauer, M. (2019). Developing a computational thinking
curriculum for multilingual students: An experience report. 2019 Research on Equity and
Sustained Participation in Engineering, Computing, and Technology (RESPECT), 1–2 (cited
on pages 11, 12, 14–16, 35).
Kali, Y., Eylon, B.-S., McKenney, S., & Kidron, A. (2018). Design-centric research-practice partner-
ships: Three key lenses for building productive bridges between theory and practice. Learning,
design, and technology. Cham: Springer. https://doi.org/10.1007/978-3-319-17727-4_122-1
(cited on pages 12–15, 36).
Kalir, R. (no date). Metaproblems about techquity in a research-practice partnership. https://dml2016.
dmlhub.net/wp-content/uploads/2016/02/15_Revised_Kalir_DMLBrokering.pdf. (Cited on
pages 9, 11, 16, 29, 36)
Lash, T., Wortel-London, S., Delyser, L. A., & Wright, L. (2019). Building trust in computer sci-
ence research-practice partnerships: A theme study. Proceedings of the 50th ACM Technical
Symposium on Computer Science Education, 1266–1266 (cited on pages 9, 15, 16, 29, 35).
Lee, J., & Shute, V. J. (2010). Personal and social-contextual factors in k–12 academic performance: An
integrative perspective on student learning. Educational Psychologist,45(3), 185–202 (cited
on page 35).
Lee, V., Recker, M., & Clarke-Midura, J. (2018). Developing board games and learning materials to
support 5th grade students’ connected learning around computational thinking and coding.
https://www.nsf.gov/awardsearch/showAward?AWD_ID=1837224&HistoricalAwards=false.
(Cited on page 46)
López Turley, R. N., & Stevens, C. (2015). Lessons from a school district–university research partner-
ship: The houston education research consortium. Educational Evaluation and Policy Analysis,
37(1_suppl), 6S–15S (cited on page 8).
Mark, J., Klein, K., & Mitchell, T. (2020). Teacher and school supports to promote equitable imple-
mentation of ap csp in nyc. Proceedings of the 51st ACM Technical Symposium on Computer
Science Education, 1341–1341 (cited on page 6).
Mark, J., & Patel, A. (2018). Equitable computer science implementation in all new york city (nyc)
schools. https://nsf.gov/awardsearch/showAward?AWD_ID=1837280&HistoricalAwards=
false. (Cited on page 46)
Warschauer, M., Richardson, D., & Barquin, B. (2019). Collaborative network of grades 3-5
educators for computational thinking for english learners. https://www.nsf.gov/awardsearch/showAward?AWD_ID=1923136&HistoricalAwards=false. (Cited on page 46)
Mattesich, P., & Johnson, K. (2018). The wilder collaboration factors inventory. https://www.wilder.
org/wilder-research/research-library/collaboration-factors-inventory-3rd-edition. (Cited on
pages 17, 22, 34)
McGee, S. (2017). Collaborative research: Chicago alliance for equity in computer science (cafecs).
https://www.nsf.gov/awardsearch/showAward?AWD_ID=1738572. (Cited on page 46)
Meyer, J. W., & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth and
ceremony. American Journal of Sociology,83(2), 340–363 (cited on page 36).
Militello, M., Smith, R., Reardon, R., & Frye, D. (2017). Integrating the computer science and
computational thinking in three rural eastern north carolina school districts. https://www.nsf.gov/awardsearch/showAward?AWD_ID=1738767. (Cited on page 46)
Muñoz, M. A. (2016). Building research-practice partnerships as a district-led initiative: A high-leverage
strategy for system improvement. Planning & Changing,47 (cited on pages 9–11, 13, 14, 16,
35).
National Science Foundation. (2020). Computer science for all (csforall: Research and rpps). https:
//nsf.gov/funding/pgm_summ.jsp?pims_id=505359. (Cited on page 5)
Organizational Research Services. (2004). Theory of change: A practical tool for action, results and
learning. https://www.aecf.org/resources/theory-of-change/#key-takeaway. (Cited on pages 21,
23, 35)
Penuel, W. R. (2019). Infrastructuring as a practice of design-based research for supporting and studying
equitable implementation and sustainability of innovations. Journal of the Learning Sciences,
28(4-5), 659–677 (cited on page 13).
Penuel, W. R., Allen, A.-R., Coburn, C. E., & Farrell, C. (2015). Conceptualizing research–practice
partnerships as joint work at boundaries. Journal of Education for Students Placed at Risk
(JESPAR),20(1-2), 182–197 (cited on pages 8, 9, 13, 35).
Penuel, W. R., & Farrell, C. C. (2017). Practice partnerships and essa: A learning agenda for the coming
decade. http://learndbir.org/resources/160812-RPP-chapter.pdf. (Cited on pages 8, 11–13, 21,
22, 34–36)
Quartz, K. H., Weinstein, R. S., Kaufman, G., Levine, H., Mehan, H., Pollock, M., Priselac, J. Z., &
Worrell, F. C. (2017). University-partnered new school designs: Fertile ground for research–
practice partnerships. Educational Researcher,46(3), 143–146 (cited on page 14).
Resnick, A. F., & Kazemi, E. (2019). Decomposition of practice as an activity for research-practice
partnerships. AERA Open,5(3), 2332858419862273 (cited on pages 8, 9, 14).
Rosato, J., & Treichel, C. (2019). K12 cs pathways for rural and tribal schools. https://www.nsf.gov/
awardsearch/showAward?AWD_ID=1923369. (Cited on page 47)
Santo, R., Ching, D., Peppler, K., & Hoadley, C. (2017a). Participatory knowledge building within
research-practice partnerships in education. SAGE Publications Ltd. (Cited on pages 10–16,
35, 36).
Santo, R., Ching, D., Peppler, K., & Hoadley, C. (2017b). Messy, sprawling, and open: Research–
practice partnership methodologies for working in distributed inter-organizational networks.
Connecting research and practice for educational improvement (Pages 100–118). Routledge.
(Cited on pages 14, 16).
Santo, R., DeLyser, L. A., Ahn, J., Pellicone, A., Aguiar, J., & Wortel-London, S. (2019). Equity in
the who, how and what of computer science education: K12 school district conceptualizations
of equity in 'cs for all' initiatives. 2019 Research on Equity and Sustained Participation in
Engineering, Computing, and Technology (RESPECT), 1–8 (cited on page 35).
Schlechty, P. C., Whitford, B. L. et al. (1988). Shared problems and shared vision: Organic collaboration.
School-university partnerships in action: Concepts, cases, and concerns, 191–204 (cited on
page 8).
Schools, S. P. (2019). Using data in evidence-based policy processes through building research-practice
partnerships (cited on pages 11–13, 16).
Severance, S., Leary, H., & Johnson, R. (2014). Tensions in a multi-tiered research-practice partnership.
Boulder, CO: International Society of the Learning Sciences. (Cited on pages 11, 14).
Shakman, K., Bailey, J., & Breslow, N. (2017). A primer for continuous improvement in schools and
districts. Teacher & Leadership Programs (cited on pages 9, 35).
Skuratowicz, E., Wilson, J., Vanderberg, M., & Krause, G. (2019). Rui: Empowering k-5 teachers
in southern oregon through computational thinking. https://www.nsf.gov/awardsearch/showAward?AWD_ID=1923633&HistoricalAwards=false. (Cited on page 47)
Stokes, L., Carroll, B., Helms, J. V., Mitchell, H., Phillips, M., St John, M., & Tambe, P. (2018).
Combining researchers' and practitioners' intelligences for stem improvement: A study of the
local labs of the research and practice collaboratory. Executive summary. Research + practice
partnerships. Inverness Research (cited on pages 11–16, 35).
Thompson, J., Richards, J., Shim, S.-Y., Lohwasser, K., Von Esch, K. S., Chew, C., Sjoberg, B., &
Morris, A. (2019). Launching networked plcs: Footholds into creating and improving knowledge
of ambitious and equitable teaching practices in an rpp. AERA Open,5(3), 2332858419875718
(cited on pages 9, 11, 13).
Tseng, V., Easton, J. Q., & Supplee, L. H. (2017). Research-practice partnerships: Building two-way
streets of engagement. Social Policy Report,30(4), 1–17 (cited on pages 8, 11, 15, 21, 22, 34).
Wanzer, D. L. (2019). Improving evidence use: The importance of relationship quality in research-
practice partnerships (Doctoral dissertation). The Claremont Graduate University. (Cited on
pages 8, 12, 14–16).
Wentworth, L., Mazzeo, C., & Connolly, F. (2017). Research practice partnerships: A strategy for
promoting evidence-based decision-making in education. Educational research,59(2), 241–
255 (cited on pages 5, 10, 17, 18).
Wiebe, E., Barnes, T., Freeman, S., Frye, D., Maher, M. L., Cao, L., Dorodchi, M. M., Pugalee, D.,
Rorrer, A. S., Boulden, D., et al. (2019). Developing a systemic, scalable model to broaden
participation in middle school computer science. 2019 Research on Equity and Sustained
Participation in Engineering, Computing, and Technology (RESPECT), 1–2 (cited on pages 6,
9, 10, 14).
Wille, S., Century, J., & Pike, M. (2016). Computer science principles (csp) and students with learning
differences: Expanding opportunities for a hidden underrepresented group. 2016 Research on
Equity and Sustained Participation in Engineering, Computing, and Technology (RESPECT),
1–8 (cited on pages 6, 14, 35).
Wille, S., Century, J., & Pike, M. (2017). Exploratory research to expand opportunities in computer
science for students with learning differences. Computing in Science & Engineering,19(3),
40–50 (cited on page 13).
Wortel-London, S., DeLyser, L. A., & Muralidhar, D. (2019). Rppforcs theme study: Teacher roles.
https://www.csforall.org/projects_and_programs/rppforcs-theme-study-teachers/. (Cited on
page 12)
Yadav, A., Schwarz, C., Bouck, E., & Shah, N. (2017). Ct4edu: Broadening pathways into computing
by developing computational thinking competencies in elementary classrooms. https://www.
nsf.gov/awardsearch/showAward?AWD_ID=1738677&HistoricalAwards=false. (Cited on
page 46)
Zarch, R., & Sexton, S. (2019). Research practice brief: The health assessment tool. (Cited on pages 17,
22, 34).
A. Selected Projects
CAFECS: Collaborative Research: Chicago Alliance For Equity in Computer Science (McGee, 2017)
CS For All: RPP: A Scalable RPP for Preparing and Supporting Teachers to Teach Culturally Responsive and Rigorous CS Courses in SC High Schools (Che et al., 2017)
CT4EDU: Broadening Pathways into Computing by Developing Computational Thinking Competencies in Elementary Classrooms (Yadav et al., 2017)
Integrating the Computer Science and Computational Thinking in Three Rural Eastern North Carolina School Districts (Militello et al., 2017)
Personalized Computational Thinking for Grades 3-8 (Asbell-Clarke et al., 2017)
Addressing Issues of Equity and Engagement in Computer Science (CS) through a Research Practice Partnership: The CS Teaching and Learning Collaboratory (Blitz et al., 2018)
Collaborative Research: Identifying Participation Barriers to Computer Science Education in Rural Mississippi (Hollis, 2018)
Developing Board Games and Learning Materials to Support 5th Grade Students' Connected Learning Around Computational Thinking and Coding (V. Lee et al., 2018)
Equitable Computer Science Implementation in All New York City (NYC) Schools (Mark and Patel, 2018)
Preparing K-5 Teachers to Integrate the Computer Science Standards of Learning in Inclusive Classrooms to Support Students with High Incidence Disabilities (Hutchison et al., 2018)
Collaborative Network of Grades 3-5 Educators for Computational Thinking for English Learners (Warschauer et al., 2019)
CS for Appalachia: A Research-Practice Partnership for Integrating Computer Science into East Tennessee Schools (Hodge et al., 2019)
Exploring Computation Integrated into Technology and Engineering (ExCITE) (Gordon et al., 2019)
K12 CS Pathways for Rural and Tribal Schools (Rosato and Treichel, 2019)
RUI: Empowering K-5 Teachers in Southern Oregon Through Computational Thinking (Skuratowicz et al., 2019)
B. Revised Theory of Change Codebook
B.1 Theory of Change Codes
Each code is applied when the project team does the following:

Goals clear: Clearly describes at least one goal for the project
Strategies clear: Clearly describes at least one activity that will be used to meet project goal(s)
Outcomes clear: Clearly describes the outcome(s) that will be used to determine progress against the goal(s)
Strategy-outcome relationship clear: Describes both strategies and outcomes, and the coder can interpret a realistic way that the outcomes could be reached through the described strategies
Assumptions clear: Mentions at least one factor that underlies success or may hinder attainment of the goal(s)

Table B.1: Theory of Change Codes
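To illustrate how these five codes could be recorded per abstract, the minimal Python sketch below models one coding as five booleans and counts how many elements are present. The class and field names are ours, introduced for illustration; they are not part of the published codebook.

```python
from dataclasses import dataclass, fields

@dataclass
class TheoryOfChangeCoding:
    """One abstract's coding against the five Theory of Change codes
    in Table B.1 (hypothetical representation)."""
    goals_clear: bool
    strategies_clear: bool
    outcomes_clear: bool
    strategy_outcome_link_clear: bool
    assumptions_clear: bool

    def completeness(self) -> int:
        """Number of the five elements coded present (0-5)."""
        return sum(getattr(self, f.name) for f in fields(self))

coding = TheoryOfChangeCoding(True, True, True, False, False)
print(coding.completeness())  # 3
```

A completeness count like this is one simple way to compare abstracts against the codebook, as described in the future work above.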
B.2 Equity Dimensions Codes
All (only) students: Only mentions all students
All (including) students: Mentions all students as well as one or more specific equity dimensions
Disability: Mentions students with disabilities
Economic disadvantage: Mentions students with an economic disadvantage
Gender: Mentions girls, women, or otherwise addresses gender issues
Race/ethnicity: Mentions race and/or ethnicity
Rural learners: Mentions rural learners
Unspecified underrepresented: Only mentions unspecified underrepresented students
Other Equity: An equity dimension not otherwise covered in the codebook

Table B.2: Equity dimensions codes that rest within the Theory of Change.
B.3 Target of Actions
Community organizations: Those outside of the direct instruction or school administrative structures (e.g., local businesses or non-profits)
Curriculum: The content of instruction
District-level administrators: Any official operating within the administrative structure of a district
Families: Parents and care-givers of students
Pedagogy: The process of instruction
Principals and Other School Personnel: Any school-level personnel or administrators (including counselors, librarians, and curriculum support specialists)
RPP or RPP members: In aggregate, the enterprise of "partnership" as implied in "RPP," or any singular component or member thereof
Students: K-12 students
Teachers: Classroom instructors
Other Target: A target not otherwise covered in the codebook

Table B.3: Target of Actions that rest within the Theory of Change.
B.4 Action Codes
Each action family is listed with its example action verbs in parentheses; the definition describes what the project team will do.

Broaden (broaden, expand): Expand access to a program or fundamentally change a key aspect of a project
Collaborate (collaborate, bring together, partner): Facilitate interaction among parties that may not have done so without this intervention
Create (create, build, design, develop, establish): Undertake at least one new activity that involves a new curriculum, working group, product, or method, etc.
Embed (embed, infuse, integrate, formalize): Work to integrate intervention changes into the practitioner spaces so that they last beyond the project's end
Engage (engage): Motivate individuals to participate in the project as researchers, practitioners, or students
Empower (empower, give voice, mobilize): Encourage action among and beyond the researchers, practitioners, and/or students in the project
Examine (examine, investigate, identify, evaluate, assess): Collect and analyze data to understand the situation
Hold accountable (hold accountable): Ensure that researchers and/or practitioners uphold the goals of the project
Implement (implement): Deploy existing material
Leverage (leverage, transform, enhance): Use existing resources as a springboard from which to launch a new or changed initiative
Raise awareness (raise awareness, disseminate): Encourage knowledge sharing among and beyond the researchers, practitioners, and/or students in the project
Recruit (recruit): Seek out new individuals to join the project as practitioners or students
Support (support, sustain, strengthen, refine, inform, assist): Either directly or indirectly impact structures or practices
Tailor (tailor): Mindfully adapt the intervention to the specific needs of an educational space
Teach (teach, train, prepare, introduce): Transfer knowledge to researchers, practitioners, and/or students or otherwise develop practitioner capacity to meet goals

Table B.4: Action Codes that rest within the Theory of Change.
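The example verbs in Table B.4 suggest a simple first-pass screen for abstracts. The hypothetical sketch below tags an abstract with action families via keyword matching on a subset of those verbs. Actual coding in this study was performed by human coders, so this is only an illustrative aid, not the study's method.

```python
# Keyword lists mirror the "examples of actions" column of Table B.4;
# only a subset of the fifteen families is shown for brevity.
ACTION_FAMILIES = {
    "Broaden": ["broaden", "expand"],
    "Collaborate": ["collaborate", "bring together", "partner"],
    "Create": ["create", "build", "design", "develop", "establish"],
    "Embed": ["embed", "infuse", "integrate", "formalize"],
    "Examine": ["examine", "investigate", "identify", "evaluate", "assess"],
    "Teach": ["teach", "train", "prepare", "introduce"],
}

def tag_actions(abstract: str) -> set[str]:
    """Return the action families whose example verbs appear in the text."""
    text = abstract.lower()
    return {family for family, verbs in ACTION_FAMILIES.items()
            if any(verb in text for verb in verbs)}

sample = ("The project team will partner with three districts to develop "
          "a culturally responsive curriculum and train K-8 teachers.")
print(sorted(tag_actions(sample)))
# ['Collaborate', 'Create', 'Teach']
```

A screen like this could surface candidate codes for human coders to confirm, but simple substring matching will over- and under-tag, which is one reason the study relied on human consensus coding.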