PRECALCULUS CONCEPT ASSESSMENT: A PREDICTOR OF AP CALCULUS
AB AND BC SCORES
Rusen Meylani
Arizona State University
rusen.meylani@asu.edu
Dawn Teuscher
Brigham Young University
dawn.teuscher@asu.edu
This study establishes a theoretical framework for predicting the American College Testing
(ACT) mathematics sub-score and AP Calculus AB and BC scores from Precalculus Concept
Assessment (PCA) exam results, and suggests a total of 16 different regression-based models
to perform the prediction. The strong positive correlations between the actual and predicted
values confirm that the PCA is a powerful tool for identifying students who are at risk of not
passing the AP Calculus AB or BC tests, thus helping teachers, parents and students take the
necessary measures in a timely manner.
Introduction
Assessments are a major practice in the K-12 educational system as well as in post-secondary
education. High school students are required to take more and more assessments to
demonstrate what they have learned. Most states require high school students to take an
end-of-course (mathematics) exam and/or a graduation exam (with a portion being
mathematics) to complete a course or to graduate from high school (Teuscher, Dingman,
Nevels, & Reys, 2008). In addition, most colleges require students to take a mathematics
placement exam to direct them into the appropriate first-year mathematics course. Thus, even
though students are required to send official transcripts and take one of the college entrance
exams (e.g., ACT, SAT), they are also asked to demonstrate their knowledge of mathematics
on multiple assessments.
Mathematics placement exams vary in content, in the number and type of questions
(multiple choice, open-ended, etc.), in calculator use, and in time limits. The number of
different mathematics placement exams used by institutions across the country continues to
increase. However, there are only two commonalities among these placement exams: (1) the
exam items focus on content and procedures taught in remedial mathematics classes,
which satisfy general education requirements or serve as prerequisites, such as college
algebra and precalculus; and (2) the results are not used to inform students or teachers of
possible deficiencies in student knowledge.
This article reports research results on how high school students performed on the AP
Calculus AB or BC exam, on the mathematics portion of the standardized American College
Testing (ACT) exam, and on the Precalculus Concept Assessment (PCA), a research-developed
instrument based on college students’ common misconceptions of functions (Carlson,
Oehrtman, & Engelke, 2010), after completing four years of college preparatory mathematics
and AP Calculus.
Theoretical Framework
Precalculus Concept Assessment
The PCA is a 25-item multiple choice exam that helps researchers and instructors learn
what students think and understand about the foundational concepts of precalculus and
beginning calculus (see Carlson et al., 2010, for released items). The PCA was developed
based on research with collegiate level mathematics classes, and was piloted and revised over
the past 15 years.
The PCA is based on a taxonomy developed to determine students’ understandings and
reasoning about foundational concepts learned during precalculus (Carlson et al., 2010).
Although the PCA was not created to be used as a placement exam for Calculus, Carlson et
al. (2010) reported that 77% of college students who scored a 13 or higher on the PCA passed
a first semester calculus course with a C or better. The correlation coefficient for college
student PCA scores and their calculus grades was 0.47.
The PCA was validated with college students enrolled in College Algebra and
Precalculus. The study reported in this paper provides a different sample of students, those
who are in high school and enrolled in Advanced Placement (AP) Calculus. Students took the
ACT, PCA and the AP Calculus AB or BC exams during their high school years. Although
one might assume that precalculus at the college level is equivalent to precalculus at the high
school level, the PCA had not been used to analyze student thinking at the high school level.
Content of AP Calculus courses and exams
The AP Calculus AB course focuses on limits, derivatives, and an introduction to
integrals, content typically taught in a first-semester calculus course in college. The AP
Calculus BC course covers the topics studied in AB; however, more depth is given to
integration, and students are introduced to sequences and series as well as parametric and
polar functions, content typically taught in a two-semester calculus sequence in college. The
AP Calculus exams assign students a score from one to five, with five being the highest.
On each of the AB and BC exams, students who score four or five pass the exam and may
use their scores to receive college credit for calculus courses when they enter college. A
strong foundation in precalculus is essential for success in calculus; therefore, the PCA can
be a valuable tool for identifying students’ weaknesses in precalculus if administered before
students enter calculus.
Content of ACT mathematics exam
The ACT mathematics exam is a 60-question, 60-minute test designed to measure the
mathematical skills students have typically acquired in courses taken by the end of 11th grade
(ACT, 2011). Students receive an overall score between one and 36 inclusive and three sub-
scores based on six content areas: pre-algebra (23%), elementary algebra (17%), intermediate
algebra (15%), coordinate geometry (15%), plane geometry (23%) and trigonometry (7%).
All of these topics are closely related to the content of the PCA; used together with the PCA,
the ACT mathematics score can therefore serve as a diagnostic tool for predicting how
students are likely to succeed in the AP Calculus system.
Regression Analysis
Regression analysis includes techniques for modeling and analysis of several variables,
when attention is focused on the relationship between a dependent variable and one or more
independent variables. More specifically, regression analysis helps understand how the
typical value of the dependent variable changes when any of the independent variables is
varied, while the other independent variables remain fixed. Usually, regression analysis
estimates the conditional expected value of the dependent variable given the independent
variables (i.e., the average value of the dependent variable when the independent variables
are held fixed). Regression analysis is widely used for estimation and prediction. It is also
used to explore and comprehend the causal relationships that exist among the independent
variables in relation to the dependent variable. In this study, regression analysis is the primary
means of inquiry to explore the relationships between the AP Calculus AB and BC exam
scores, PCA results and ACT mathematics test sub-scores.
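As a minimal illustration of this idea (not drawn from the study's data), the following Python sketch fits an ordinary least squares model to synthetic 0/1 predictors and reads off the estimated conditional means; every variable name and value here is invented.

```python
# Minimal sketch: OLS estimates the conditional mean of y given X.
# Synthetic data only, for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 3))                 # three hypothetical 0/1 predictors
y = 1.0 + X @ np.array([0.5, 1.2, 0.8]) + rng.normal(0, 0.3, size=100)

model = sm.OLS(y, sm.add_constant(X)).fit()           # E[y | X] modeled as linear in X
print(model.params)                                   # fitted intercept and slopes
print(model.predict(sm.add_constant(X))[:5])          # predicted conditional means
```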
Research Questions
In light of the scope of the PCA, the ACT mathematics test as well as the AP Calculus AB
and BC exams, this study specifically seeks to answer the following research questions:
1) How are high school students’ PCA scores and AP Calculus AB or BC scores related?
2) Can students’ PCA scores predict their performance on the AP Calculus AB or BC
exams?
3) Can the prediction be improved when the ACT scores are available?
Methodology
In this study, 16 different regression schemas are built upon three regression-based
models, namely the Multiple Linear Regression Model, the Multinomial Logistic Regression
Model, and the Cumulative Odds (CO) Ordinal Regression Model.
Regression Models
Regression models were used in this study to predict students’ AP Calculus AB or BC
scores (i.e., an integer between 1 and 5 inclusive). Students’ AP Calculus AB or BC scores
were the dependent variables, and their responses to the 25 individual PCA questions,
each coded 1 (correct answer) or 0 (incorrect answer), were the independent variables. In
some of the regression models, students’ ACT mathematics scores were used as an
additional independent variable.
The Multiple Linear Regression Model. This model assumes that a linear relation exists
between the dependent and the independent variables, where the random errors are assumed
to be independent and normally distributed with zero mean and constant standard deviation
(i.e., the assumptions of normality, linearity, and homogeneity of variance are met). The
dependent variable is students’ AP Calculus AB or BC score, and the independent variables
are the responses to the 25 PCA questions, with or without the ACT mathematics score.
Thus, depending on the regression model, there are 25 (without the ACT mathematics
score) or 26 (with the ACT mathematics score) independent variables.
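A sketch of how this linear model could be set up with statsmodels, assuming the data sit in a pandas DataFrame with hypothetical columns pca_q1 … pca_q25 (0/1 item responses), act_math, and ap_score; these names are assumptions, not taken from the paper.

```python
import pandas as pd
import statsmodels.api as sm

def fit_linear_model(df: pd.DataFrame, include_act: bool = False):
    """OLS on the 25 PCA item indicators, optionally adding the ACT mathematics score."""
    predictors = [f"pca_q{i}" for i in range(1, 26)]   # 25 binary PCA items
    if include_act:
        predictors.append("act_math")                  # 26th predictor when ACT is used
    X = sm.add_constant(df[predictors].astype(float))
    return sm.OLS(df["ap_score"], X).fit()
```

Calling fit_linear_model(df, include_act=True) corresponds to the 26-predictor variant described above.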
The Multinomial Logistic Regression Model. Multinomial logistic regression does not
require any assumptions of normality, linearity, and homogeneity of variance for the
independent variables (Kutner et al., 2005). Because this regression model is less stringent,
it is often preferred over discriminant analysis when the data do not satisfy these assumptions.
Suppose the dependent variable has M nominal (unordered) categories. One value of the
dependent variable is chosen as the reference category and the probability of membership in
each of the other categories is compared to the probability of membership in the reference
category. For the dependent variable with M categories, this requires the calculation of M – 1
equations, one for each category relative to the reference category, in order to describe the
relationship between the dependent and the independent variables. Note that the
multinomial logistic regression model ignores any ordinal structure that might exist among
the levels of the dependent variable and treats each category in the same manner.
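A hedged sketch of such a multinomial logit in statsmodels, using the same assumed column names; since MNLogit treats the lowest outcome label as its baseline, the AP score of 5 is recoded here so that it serves as the reference category, as in this study.

```python
import pandas as pd
import statsmodels.api as sm

def fit_multinomial(df: pd.DataFrame):
    """Multinomial logit: M - 1 equations relative to a reference category."""
    predictors = [f"pca_q{i}" for i in range(1, 26)]   # assumed column names
    X = sm.add_constant(df[predictors].astype(float))
    # Recode so that an AP score of 5 becomes the lowest label and hence the
    # reference category against which the other four levels are compared.
    y = df["ap_score"].replace({5: 0})
    return sm.MNLogit(y, X).fit(disp=False)
```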
The Cumulative Odds (CO) Ordinal Logistic Regression Model. The CO ordinal
regression model calculates the probability of being at or below category m of an ordinal
dependent variable with M categories (Kutner et al., 2005). Ordinal logistic regression is
different from multinomial logistic regression in that it takes into account the ordinal nature
inherent within the levels of the dependent variable, which might be useful in some cases.
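A corresponding sketch for the cumulative odds model, using the OrderedModel class available in recent versions of statsmodels with a logit link; the outcome is the AP score treated as five ordered categories, and the column names remain assumptions.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_cumulative_odds(df: pd.DataFrame):
    """Cumulative odds (proportional odds) ordinal logit for the AP score."""
    predictors = [f"pca_q{i}" for i in range(1, 26)]   # assumed column names
    X = df[predictors].astype(float)                   # no constant: the thresholds play that role
    # AP score 1-5 as an ordered categorical outcome (M = 5 levels).
    y = df["ap_score"].astype(pd.CategoricalDtype(categories=[1, 2, 3, 4, 5], ordered=True))
    return OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
```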
For the two logistic regression models (multinomial or CO ordinal), the AP Calculus AB or
BC score had five levels (i.e., an integer between one and five inclusive). For the
multinomial logistic regression model, the last level (an AP Calculus AB or BC score of 5)
was selected as the reference category.
The independent variables were again the 25 PCA items used as categorical variables
(factors). The ACT mathematics test scores could be used as both categorical and ordinal
variables. When the ACT mathematics test scores were used as categorical variables (factors),
each level inherent within the score was a separate independent variable; when they were
used as ordinal variables (covariates), they constituted a single independent variable.
Participants
At the end of a school year, 193 high school students from two high schools in a mid-western
town were administered the PCA to assess their understandings and reasoning abilities prior
to entering AP Calculus (Teuscher & Reys, in press). Of the 193 students who took the PCA
and enrolled in AP Calculus AB or BC the following year, 143 took the AP Calculus exam at
the end of the school year; 80 of these students were enrolled in the AP Calculus AB course,
while the remaining 63 were enrolled in the AP Calculus BC course.
The AP Calculus exam is scored and then students are given a grade of one to five.
Typically, students who receive a four or five receive college credit for at least one
semester of calculus. Students who take the AP Calculus BC exam receive a BC grade
and an AB sub-grade. It is possible for a student who takes the BC exam not to receive a
four or five on the BC exam but to receive a four or five on the AB portion of the test, which
can be interpreted as having passed the Calculus AB exam, but not the BC exam.
Prediction Process
The models that used the PCA results to predict the AP Calculus AB or BC scores are
based on the three regression models (Multiple Linear Regression, Multinomial Logistic
Regression, and CO Ordinal Logistic Regression). The predictors in all three regression
models were the actual results of each of the 25 questions in the PCA test (i.e., each test
question was associated with one of two categorical values: 1 if it was answered correctly
and 0 if it was answered incorrectly).
The ACT mathematics score can theoretically take ordinal values between 1 and 36
inclusive (ACT, 2011). In statistics, higher-level variables can always be downgraded to
lower-level ones; for example, a metric-scale variable can be downgraded to an ordinal or a
nominal variable, a process that sometimes requires defining categories within the data
and/or creating discrete values from the continuous scale (Kent, 2001). The ACT
mathematics score already takes discrete values, which alone justifies treating it as ordinal
or nominal as well. While it is theoretically possible for a student to score anywhere between
one and 36 inclusive (ACT, 2011), this is usually not the case in practice; for instance, the
scores of a group of high school students attending the same school may exhibit a certain
pattern. The scores of the students in our analyses were between 19 and 34, and none of the
students scored 22. This is a further justification for treating the ACT
mathematics score as an ordinal or categorical variable.
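By way of illustration only (invented scores, assumed column name), the same ACT values can be carried as a single metric/ordinal column or downgraded to one indicator column per observed score level:

```python
import pandas as pd

# Hypothetical ACT mathematics scores, purely for illustration.
act = pd.Series([19, 24, 27, 31, 34], name="act_math")

act_as_covariate = act.astype(float)               # single ordinal/metric predictor
act_as_factor = pd.get_dummies(act, prefix="act")  # one 0/1 column per observed score level
print(act_as_factor.columns.tolist())              # e.g. ['act_19', 'act_24', 'act_27', ...]
```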
Dependent Variable    Independent Variable(s)
AP Calculus           PCA
AP Calculus           PCA and ACT mathematics sub-scores
Table 1. The variables used for the two linear regression models to predict students’ AP Calculus (AB or BC) scores.
Two different linear regression models used students’ PCA results with or without the
ACT mathematics scores to predict students’ AP Calculus AB or BC scores; when the ACT
mathematics scores were used, they were treated as ordinal metric variables. The variables
used for the linear regression models used in this study are summarized in Table 1.
The Logistic Regression models (Multinomial and Ordinal) predict a categorical or an
ordinal dependent variable using categorical predictors as factors with or without ordinal
variables as covariates. These two models were employed to predict the AP Calculus AB or
BC scores separately using students’ PCA results with or without the ACT mathematics
scores; the ACT mathematics scores were used as categorical variables (predictors) or as
ordinal variables (covariates). The logistic regression models used in this study are
summarized in Table 2.
Model Specification   Dependent Variable            Categorical Variables (Factors)     Ordinal Variables (Covariates)
A                     AP Calculus AB or BC score    PCA                                 (none)
B                     AP Calculus AB or BC score    PCA and ACT mathematics scores      (none)
C                     AP Calculus AB or BC score    PCA                                 ACT mathematics scores
Table 2. The logistic regression models (multinomial or CO ordinal) used to predict students’ AP Calculus AB or BC scores.
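The three specifications in Table 2 could be realized as design matrices roughly as follows; this is a sketch with assumed column names, where "factor" coding expands the ACT score into indicator columns and "covariate" coding keeps it as a single numeric column.

```python
import pandas as pd

def design_matrix(df: pd.DataFrame, spec: str) -> pd.DataFrame:
    """Build predictors for model specification A, B, or C of Table 2 (assumed columns)."""
    X = df[[f"pca_q{i}" for i in range(1, 26)]].astype(float)  # spec A: PCA items only
    if spec == "B":
        # ACT mathematics score as a factor: one dummy per observed score level
        # (dropping one level to avoid redundancy with an intercept).
        X = X.join(pd.get_dummies(df["act_math"], prefix="act", drop_first=True).astype(float))
    elif spec == "C":
        # ACT mathematics score as a covariate: a single ordinal/metric column.
        X["act_math"] = df["act_math"].astype(float)
    return X
```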
Results
As might be expected, students enrolled in AP Calculus BC scored higher (mean of
17.51 and standard deviation of 3.18) than students in AP Calculus AB (mean of 15.69 and
standard deviation of 3.21) on the PCA (out of a possible total score of 25) prior to entering
AP Calculus. Eighty-one percent of the students in this study who took one of the AP
Calculus exams passed it with a four or five.
A statistically significant positive Pearson correlation existed between students’ PCA scores
and their AP Calculus exam grades (r = 0.40, p < .001); that is, students who receive a high
PCA score are likely to receive a high AP Calculus exam grade. A statistically significant
positive Pearson correlation also existed between students’ PCA scores and their ACT
mathematics test scores (r = 0.28, p = 0.02); that is, students who receive a high PCA score
are also likely to receive a high ACT mathematics test score. The AP Calculus AB scores
were available for 80 students; the mean score was 4.00 and the standard deviation was 0.95.
The AP Calculus BC scores were available for 63 students; the mean score was 4.13
and the standard deviation was 0.96.
The AP Calculus AB scores were predicted using the two multiple linear regression
models given in Table 1, and the Pearson correlations were calculated between the actual and
predicted values. The actual values of students’ ACT mathematics scores were also used to
assess whether or not their inclusion would in fact improve the prediction of the AP Calculus
AB and BC scores. The results indicate strong positive correlations and are summarized in
Table 3, which can be interpreted as follows: AP Calculus AB scores can be predicted with
48% (100 × 0.69² = 48%) accuracy when using students’ PCA results alone, or with 75%
(100 × 0.87² = 75%) accuracy when using students’ PCA results along with their ACT
mathematics test scores.
Model                  AB scores predicted from the PCA    AB scores predicted from the PCA and actual ACT mathematics scores
Pearson Correlation    0.69                                0.87
N                      80                                  48
M (SD)                 3.74 (0.83)                         3.84 (0.91)
Table 3. AP Calculus AB test scores predicted using the multiple linear regression models.
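The accuracy percentages quoted in the text appear to follow the convention of multiplying the squared Pearson correlation between actual and predicted scores by 100; the snippet below reproduces that arithmetic on invented numbers.

```python
import numpy as np

# Invented actual and predicted AP scores, purely to show the calculation.
actual = np.array([3, 4, 5, 4, 2, 5, 3, 4])
predicted = np.array([3.2, 3.9, 4.6, 4.1, 2.5, 4.8, 3.4, 3.7])

r = np.corrcoef(actual, predicted)[0, 1]             # Pearson correlation
print(f"r = {r:.2f}, accuracy = {100 * r**2:.0f}%")  # e.g. an r of 0.69 gives about 48%
```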
The AP Calculus AB scores were predicted using the three distinct model specifications
for the multinomial logistic regression models given in Table 2 and the Pearson correlations
were calculated between the actual and predicted values. The results indicate strong positive
correlations and are summarized in Table 4. Model specifications B and C yielded perfect
correlations with 100% accuracy in predicting the AP Calculus AB test scores. The results
summarized in Table 4 can be interpreted as follows: AP Calculus AB scores can be
predicted with 91% (100 × 0.95² = 91%) accuracy when using students’ PCA results alone, or
with 100% (100 × 1² = 100%) accuracy when using students’ PCA results along with their
ACT mathematics scores, entered as factors or covariates depending on the model.
Model Specification    A             B             C
Pearson Correlation    0.95          1.00          1.00
N                      80            48            48
M (SD)                 3.80 (0.99)   3.99 (0.99)   3.97 (0.95)
Table 4. AP Calculus AB test scores predicted using the multinomial logistic regression models.
The AP Calculus AB scores were also predicted using the three distinct model
specifications for the CO ordinal logistic regression models given in Table 2 and the
Pearson correlations were calculated between the actual and predicted values. The results
indicate strong positive correlations and are summarized in Table 5. Model specification B
yielded a perfect correlation, with 100% accuracy in predicting the AP Calculus AB test
scores. The results summarized in Table 5 can be interpreted as follows: AP Calculus AB
scores can be predicted with 40% (100 × 0.63² = 40%) accuracy using the PCA results alone;
with 100% (100 × 1² = 100%) accuracy using both the PCA results and ACT mathematics
scores as factors; or with 70% (100 × 0.84² = 70%) accuracy using the PCA results and ACT
mathematics scores as covariates.
Model Specification    A             B             C
Pearson Correlation    0.63          1.00          0.84
N                      80            48            48
M (SD)                 3.76 (1.07)   3.97 (1.06)   3.94 (1.01)
Table 5. AP Calculus AB test scores predicted using the CO ordinal regression models.
The AP Calculus BC scores were predicted using the two multiple linear regression
models given in Table 1 and the Pearson correlations were calculated between the actual and
predicted values. The results indicate strong positive correlations and are summarized in
Table 6. The results can be interpreted as follows: the AP Calculus BC scores can be
predicted with 57% (100 × 0.75² = 57%) accuracy using the PCA results alone, or with 95%
(100 × 0.97² = 95%) accuracy using the PCA results along with the ACT mathematics test scores.
Model                  BC scores predicted from the PCA    BC scores predicted from the PCA and actual ACT mathematics scores
Pearson Correlation    0.75                                0.97
N                      63                                  25
M (SD)                 4.15 (0.71)                         3.92 (0.92)
Table 6. AP Calculus BC test scores predicted using the multiple linear regression models.
The AP Calculus BC scores were predicted using the three distinct model specifications
for the multinomial logistic regression models given in Table 2 and the Pearson correlations
were calculated between the actual and predicted values. The results indicate strong positive
correlations and are summarized in Table 7. Model specifications B and C yielded perfect
correlations with 100% accuracy in predicting the AP Calculus BC test scores. The results
can be interpreted as follows: AP Calculus BC scores can be predicted with 69% (100 × 0.83²
= 69%) accuracy using the PCA results alone, or with 100% (100 × 1² = 100%) accuracy using the
PCA results along with the ACT mathematics scores, which are used as factors or covariates
depending on the model.
Model Specification    A             B             C
Pearson Correlation    0.83          1.00          1.00
N                      63            25            25
M (SD)                 4.10 (0.98)   3.89 (0.93)   3.89 (0.93)
Table 7. AP Calculus BC test scores predicted using the multinomial logistic regression models.
The AP Calculus BC scores were predicted using the three distinct model specifications
for the CO ordinal logistic regression models given in Table 2 and the Pearson correlations
were calculated between the actual and predicted values. The results indicate strong positive
correlations and are summarized in Table 8. Model specifications B and C yielded perfect
correlations with 100% accuracy in predicting the AP Calculus BC test scores. The results
can be interpreted as follows: the AP Calculus BC scores can be predicted with 49% (100 ×
0.70² = 49%) accuracy using the PCA results alone, or with 100% (100 × 1² = 100%) accuracy
using the PCA results along with the ACT mathematics scores as factors or covariates,
depending on the model.
Model Specification    A             B             C
Pearson Correlation    0.70          1.00          1.00
N                      63            25            25
M (SD)                 4.23 (0.94)   3.89 (0.93)   3.89 (0.93)
Table 8. AP Calculus BC test scores predicted using the CO ordinal regression models.
Each of the 16 regression models summarized above, as well as each of the reported Pearson
correlation values, was statistically significant at the 0.01 level.
Discussion
Assessments are the standard by which teachers and institutions judge students’
knowledge. Under the No Child Left Behind Act of 2001 (NCLB, 2001), K-12 students
take assessments each year to demonstrate adequate yearly progress. However, the majority
of these exams were not developed with the intention of providing students or teachers with
feedback on deficiencies in students’ knowledge. The college mathematics placement exams
were also not developed with the end goal of assessing relevant and connected concepts that
are foundational for calculus.
The results of this study provide evidence that the PCA may be an exam that could be
used in multiple settings across high schools and colleges in the United States. The PCA was
found to be significantly correlated with the AP Calculus AB and BC exams, and
correspondingly students’ PCA scores were a statistically significant predictor of the AP
exam scores. The results verify that multiple linear, multinomial logistic and CO ordinal
logistic regression models can successfully be used in one or more of these predictions. As
for the generalizability of the results, the means and standard deviations calculated for the
actual and predicted AP Calculus AB or BC scores were very close, meaning that the results
were indeed generalizable.
When predicting the AP Calculus AB and BC scores, using the ACT mathematics scores
as factors or covariates improved the prediction; in particular, using the actual ACT
mathematics scores as ordinal variables (or factors) in the logistic regressions yielded very
strong, and sometimes perfect, positive correlations between the actual and the predicted
values.
These findings are consistent with research reported by Carlson et al. (2010), who found
that the PCA was a predictor of college students’ ability to receive a passing grade in calculus
at the college level. The PCA was specifically created to provide feedback to instructors on
what their students understand and do not understand about functions. Instructors could use
the results from the PCA to determine what prior knowledge or, more importantly, what
misconceptions students hold when entering a course that may prevent them from understanding
and grasping the new material they encounter. The PCA could also be used to provide instructors
with diagnostic feedback on the specific precalculus topics that students did not understand
during their precalculus classes, so that they can modify their curriculum for future
classes.
A considerable amount of time and taxpayer money is spent every year on students who
retake calculus in college because they are not able to pass the college mathematics
placement test. A vast number of students also drop out of calculus in college and change
their majors simply because they believe they are unable to succeed in mathematics; this is
not a new problem, nor has it been solved (Ma & Willms, 1999). Thus, an early detection
system could be part of the solution, assisting students, parents and teachers in taking the
necessary measures early (i.e., while students are still in high school). This is why a powerful
tool like the PCA can be used to identify students who need to spend more time on
precalculus and are likely to struggle in AP Calculus or college-level calculus, by predicting
their AP Calculus AB or BC test scores even before they enter the AP Calculus system.
However, it must be noted that the timing of the PCA is an important factor in producing
results that enable prediction of AP Calculus AB or BC scores. The PCA should ideally be
administered immediately after students complete the precalculus content courses and before
they start the AP Calculus AB or BC courses.
In closing, it is important to realize that without a purpose, assessments will become
something that students do rather than something that is useful to them or their instructors.
The PCA is a practical, focused examination that can provide students and instructors with
important feedback to improve students’ understandings of the common mathematical topics
that are necessary for students to be successful in calculus.
References
ACT. (2011). ACT Test Prep - Math Test Description. Retrieved February 11, 2011 from
http://actstudent.org/testprep/descriptions/mathdescript.html.
Carlson, M., Oehrtman, M., & Engelke, N. (2010). The precalculus concept assessment: A
tool for assessing students' reasoning abilities and understandings. Cognition and
Instruction, 28(2), 113-145.
Kent, R. (2001). Data analysis and data construction for survey research. New York:
Palgrave.
Kutner, M. H., Nachtsheim, C. J., Neter, J., & Li, W. (2005). Applied linear statistical models
(5th ed.). New York: McGraw-Hill Irwin.
Ma, X., Willms, J.D. (1999). Dropping out of advanced mathematics: How much do students
and schools contribute to the problem? Educational Evaluation and Policy Analysis,
21(4), 365 - 383.
NCLB. (2001). Public law no. 107-110. Retrieved December 16, 2009, from
http://www.ed.gov/policy/elsec/leg/esea02/index.html.
Peng, C.-Y. J., Lee, K. L., & Ingersoll, G. M. (2002). An introduction to logistic regression
analysis and reporting. The Journal of Educational Research, 96(1), 3-14.
Teuscher, D., Dingman, S. W., Nevels, N., & Reys, B. J. (2008). Curriculum standards,
course requirements, mandated assessments for high school mathematics: A status report
of state policies. Journal of Mathematics Education Leadership, Fall, 50-55.
Teuscher, D., & Reys, R. E. (in press). Rate of change: AP calculus students' understandings
and misconceptions after completing different curricular paths. School Science and
Mathematics.