Making Peanut Butter and Jelly Sandwiches: Do Students From
Different Disciplines Approach This Exercise Differently?
Cheryl L. Coyle, Bell Laboratories, 4 Hazelwood Ct., Howell, NJ, 07731, USA,
coyle@alcatel-lucent.com, 732-901-6815
Heather Vaughn, Bell Laboratories, 318 Daniele Dr., Ocean, NJ, 07712, USA,
vaughn@alcatel-lucent.com, 732-493-0419
In this practice-oriented paper for human factors education, we describe our experiences
piloting a variation of a classroom activity reported elsewhere. We conducted the
activity with two different groups of students: psychology majors and software
engineering majors. Focusing students on the simplest of algorithms is a fruitful activity
to introduce them to biases and variations that occur in practical field studies. Students
enjoyed the activity, and we took away learning for our industrial research context.
INTRODUCTION
The use of peanut butter and jelly (PB&J)
sandwiches as a teaching tool is not new
(Lewandowski & Morehead, 1998). Recently, it
was reported that PB&J sandwich-making can be part of
successful Human-Computer Interaction (HCI)
teaching activities (Hourcade, Garcia & Perry,
2007; Davis & Rebelsky, 2007).
The activity introduces students to the complexity
of use-case writing for something as simple as
“sandwiches.” From our telecommunications
background, we saw the value of this research.
Imagine those students 10 years later as HCI
professionals. These individuals will need to
design communication services that end users can
personalize using time-dependent, conditionally-
branching instructions. Given that general
consumers today may be frustrated programming the
clocks on their devices (which have few or no
dependencies yet), or have only sparse knowledge of
how telecom networks interoperate because the phone,
service, or device should simply "work for me,"
effective use-case writing remains paramount. HCI
professionals need to define how individuals specify
preferences. Use-case writing is an effective tool
for exposing variations and specifics; trends then
come to light.
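As a concrete illustration of the kind of time-dependent, conditionally-branching personalization rule described above, consider the following sketch. The rule, the group names, and the function are hypothetical, not drawn from any real service:

```python
from datetime import time

# Hypothetical personalization rule for a communication service:
# route an incoming call based on caller group and time of day.
# All names here are illustrative, not from any real system.
def route_call(caller_group: str, now: time) -> str:
    """Return the action a user-defined rule selects for a call."""
    if caller_group == "family":
        return "ring"                         # family always rings
    if time(9, 0) <= now <= time(17, 0):      # working hours
        return "ring" if caller_group == "work" else "voicemail"
    return "voicemail"                        # after hours

print(route_call("work", time(10, 30)))  # prints "ring"
```

Even this three-branch rule already forces the writer to decide what happens at the boundaries (exactly 17:00, unknown callers), which is precisely the kind of variation that use-case writing exposes.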
Given that HCI students in the prior studies we
referenced are “mostly computer science majors,”
we wondered about other disciplines. Can this
exercise benefit other students who enter careers in
HCI? What follows is our experience.
PRACTICE INNOVATION
Human factors and ergonomics attracts psychology
majors, but it lures in software engineers, too. How
would software engineering (SE) students react to a
modified PB&J sandwich-making activity that required
them to define the use case? What about psychology
(Psych.) majors? Would there be differences between
the two types of students?
The literature in psychology and engineering
education has suggested that different disciplines
attract students with different inclinations toward
instruction (Felder & Brent, 2005;
Ramsden, 1997; Whitmire, 2002; Wu, Custer &
Dyrenfurth, 1996). “Learning tasks in [hard]
science are typically described as hierarchical,
logical, heterogeneous, and rule-based and
procedure governed. … Arts and social sciences
tasks are seen to require interpretation, comparison,
generalization, and to be more self-governed…”
(Ramsden, 1997, pp. 208-209). We were curious
whether these differences would be reflected in
differences in performing the PB&J activity.
The PB&J activity provides a compelling
demonstration of the difference between procedural
and abstract knowledge. For Americans, making a
PB&J sandwich is one of the most basic algorithms
in our cultural lexicon, and while the task seems
obvious, observing the differences is intriguing.
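The "basic algorithm" framing can be made literal. One illustrative decomposition of the task into discrete, ordered steps might look like the sketch below; the specific steps are our own example, and students' actual lists varied widely in granularity:

```python
# One illustrative decomposition of the PB&J task into discrete,
# ordered steps; real student lists varied widely in granularity.
PBJ_STEPS = [
    "Gather bread, peanut butter, jelly, a knife, and a plate",
    "Place two slices of bread on the plate",
    "Open the peanut butter jar",
    "Spread peanut butter on one slice with the knife",
    "Open the jelly jar",
    "Spread jelly on the other slice",
    "Press the slices together, spreads facing inward",
]

def print_algorithm(steps):
    """Print the steps as a numbered procedure."""
    for i, step in enumerate(steps, start=1):
        print(f"{i}. {step}")

print_algorithm(PBJ_STEPS)
```

Note how many judgment calls even this short list hides (How much peanut butter? What if a jar is sealed?), which is what makes observing real performances of the task so instructive.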
In addition to our curiosity about potential
differences based on students' fields of study, we
wondered whether the effects of observation could be
quantified. Could we document what the students
learned by giving them a before-observation and an
after-observation task and comparing the two? In
the field of human factors, we often
catalog user task steps, and then revise them after
observing a population performing those tasks
(alongside engineers revising requirements and
code as people use the design). The task steps and
design frequently change after we’ve observed
users. Would task steps change for both student
types?
METHODS
We conducted a PB&J sandwich-making and
observation activity with students in two different
courses. The first was a classroom of upper-level
Psych. students taking a course on “Qualitative
Methods.” The second was a class of upper-level
SE students in a “Software Practicum” course.
Table 1 presents their majors, gender, and age
ranges.
Students          Gender*         Age Range
Psych., n = 11    F = 10, M = 1   19-23 (Mdn = 21)
SE, n = 6         F = 2, M = 4    20-22 (Mdn = 21)
Table 1. Demographics.
* F = female, M = male.
We ran the exercise on two different days in
separate classrooms, but we followed the same
procedure for both. We supplied jars of peanut
butter (two different brands), jars of jelly (two
different brands) and loaves of white bread (two
different brands). We also provided plastic utensils,
paper plates, napkins and water bottles.
We wanted to collect “before” and “after” data.
Before beginning the activity of making and
observing others make PB&J sandwiches, we asked
the students to “Please write in order, and in as
much detail as possible, all the steps involved in
making a peanut butter and jelly sandwich.” Our
goal was to compare these descriptions with
descriptions written after observation. We
distributed worksheets with demographic questions,
a place to write the steps, and the question: “Have
you ever made a peanut butter and jelly sandwich?”
After about five minutes of writing task steps,
students handed in their worksheets (so they could
not refer to them during the observation activity),
and separated into small groups. The Psych. class
had two groups with four participants each and one
with three; the SE class had two groups of three
participants. We provided each small group with
the PB&J sandwich-making materials and
instructed them to take turns making sandwiches.
The non-sandwich makers were told to carefully
observe the sandwich maker and to take notes. We
encouraged the sandwich makers to think aloud and
explain what they were doing. We included this step
because "think-aloud" protocols are common in HCI
user studies.
Every student made a sandwich and observed at
least two other students. After the observation
activity, we asked students to fill out a second
worksheet. They were asked to write all the activity
steps again, this time referring to the notes taken
during observation. The worksheet also ended with
this question: “How did it feel to be observed?”
When the worksheets were complete, we collected
them and we held a brief class discussion about
students’ experiences with the activity.
FINDINGS
In this pilot experience, we provide numbers only
to guide other researchers wishing to replicate this
activity with larger populations. Our sample was
small and asymmetric, and as noted in our Methods
section, there are potential confounds. The question
we were exploring is whether instructors can learn
about the thinking styles and behaviors of different
types of students, while also providing a training
activity that prepares SE and Psych. students who
may enter the professional context of HCI.
Surprisingly, 3 out of the 11 Psych. students had
never made a PB&J sandwich before. This was
unusual, given that all the students were raised in
the United States. All the SE students had made
PB&J sandwiches before, and all were U.S.-raised.
Task Steps (before and after)
We counted the number of task steps identified by
each student before and after they observed others
making PB&J sandwiches. There were no perceivable
differences in the number of task steps students
provided before versus after observing how to build
a PB&J sandwich. About a third of the
students across groups identified more steps, a third
identified the same number, and a third identified
fewer steps after observing than before. Refer to
Table 2 for declared majors and task-step ranges.
Students      Range Before       Range Afterward
Psychology    4-7 (Mdn = 6)      4-8 (Mdn = 5)
SE            10-20 (Mdn = 12)   6-21 (Mdn = 11)
Table 2. Task-step ranges before and after observations.
Every SE student identified more task steps than
every Psych. student. We saw a trend of SE
students providing more detailed instructions for
building a PB&J sandwich. Because the before and
after step counts were highly correlated across the
groups (Spearman R = .67, p = .030, n = 17), we
collapsed them into a single variable: the mean of
the task steps written before and after the PB&J
activity. A one-way ANOVA on the mean steps with
major as a factor yielded F = 28.96, df = 1,
p < .001. In other words, SE students were more
detailed overall. We return to this trend in the
Discussion, given our small, asymmetric sample with
potential confounds.
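For researchers replicating the activity with larger samples, the two tests reported above can be reproduced with standard statistical tooling. The sketch below uses SciPy with hypothetical placeholder step counts (the paper reports only ranges and medians, not raw data, so these values are invented for illustration):

```python
from scipy import stats

# Hypothetical before/after task-step counts for 17 students
# (placeholders only; the paper does not publish raw data).
# The first 11 values are Psych. students, the last 6 are SE.
before = [5, 6, 6, 7, 4, 6, 5, 7, 6, 4, 6, 12, 10, 14, 12, 11, 20]
after  = [5, 5, 6, 8, 4, 5, 5, 7, 6, 4, 6, 11,  9, 15, 12, 10, 21]

# Spearman correlation between before and after counts.
rho, p_rho = stats.spearmanr(before, after)

# Collapse before/after into a mean-steps variable, then run a
# one-way ANOVA with major (Psych. vs. SE) as the factor.
mean_steps = [(b + a) / 2 for b, a in zip(before, after)]
psych, se = mean_steps[:11], mean_steps[11:]
f_stat, p_anova = stats.f_oneway(psych, se)

print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
print(f"ANOVA F = {f_stat:.2f} (p = {p_anova:.3f})")
```

With real data, the same two calls (`spearmanr`, `f_oneway`) suffice; only the input lists change.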
Taking Notes
Psychology students commented on the difficulty of
taking notes while observing. They wanted to
observe without having to take notes at the same time. Many
reported using a shorthand. One student suggested
videotaping the activity would be better than
observing and note-taking simultaneously. One
stated that it is “much easier to miss something
while taking notes.” In fact, another observer didn’t
take notes. Others remarked that they “knew ahead
of time” what the sandwich maker was going to do
next, so they didn’t have to record it carefully.
SE students did not describe note-taking trouble,
but we did not collect students' notes for either
class, so we cannot report here on the quantity or
quality of notes for each class. Others seeking to
expand on our activity might wish to collect and
analyze notes.
Feelings on Being Observed by Others
In their written responses, most of the Psych.
students indicated that it felt “weird” or “strange” to
be observed. Eight out of 11 responses included a
description of an uncomfortable feeling. Only three
out of the six SE students’ written responses
indicated they felt uncomfortable being observed;
one wrote that it was “fine” and one even replied
that “it felt good.” In class discussion, there
appeared to be a bigger distinction between the two
groups: there was general agreement among the
Psych. students that being observed by others made
them feel uncomfortable. However, in the SE class,
none said they felt weird or strange being observed.
Some mentioned they were more deliberate in their
actions since they knew they were being observed,
but nobody volunteered that they had experienced
discomfort, even when we prompted them.
Student Reactions
SE students seemed to have more fun with this
activity than the Psych. students. SE students
enjoyed the opportunity to think aloud and
demonstrate how they make PB&J. The Psych.
students giggled a little, but seemed embarrassed.
Figure 1. Three different sandwiches.
During observation, SE students frequently teased
others about “doing it wrong.” Discussion revealed
that many of the individual SE students believed
that their way was ‘correct’ when there were
alternate ways of doing something. The Psych.
students, on the other hand, were more inclined to
watch the variations they observed without
commentary. During the post-activity discussion, one
Psych. student expressed surprise that "something so
simple had so many different ways to do it" (see
Figure 1).
DISCUSSION
This was a good activity for teaching observation
skills to both the Psych. and the SE majors. It is
important that all students experience being
observed; this part of the activity alone makes it
worthwhile.
We did not perceive differences in individual
students’ styles for documenting task steps before
versus after observing others. This was surprising
to us because the literature on students writing
algorithms for PB&J states that watching a
professor perform the task steps (incorrectly) helps
improve the algorithm (Lewandowski & Morehead,
1998). However, our sample was small and our
task was different. We did not ask students to
follow the steps that peers or professors wrote. Nor
did we provide an explanation for the purpose of
documenting task steps. We suggest running a
study in which a specific use case is provided. We
continue to believe that students can learn a great
deal from observing others perform a familiar task,
but we need a better method for quantifying that.
For example, a response-time measure such as time
to complete writing the procedure steps might be
more sensitive in showing differences due to
learning that occurs while observing others.
We saw interesting trends between the two
populations. SE majors were more precise and
thorough in their task steps. As a group, they
identified more task steps for the same activity than
the Psych. students. They were more animated
during the observation activity and the discussion
that followed. The SE students had a propensity for
including END statements in their task steps, such as
"DONE, EAT, ENJOY" (perhaps from their coding
training?). Far fewer of the Psych. students included
such steps.
Our observation is that the groups were sized right.
It is best to have at least three people in a group, so
students can observe more than one person making
a sandwich. A group of two is too small. Four
students to a group is the maximum, though. Larger
groups take too much time and boredom can set in.
There are a few ways we would enhance the
experimental design for these different majors who
may enter HCI. First, we might use a validated,
reliable inventory to account for learning styles,
such as the Kolb Learning Style Inventory or the
Felder & Soloman Index of Learning Styles, both of
which have been heavily tested in engineering
contexts; or the Myers-Briggs Type Indicator, if
there is a strong theoretical construct about what
will be done with the data (Felder & Soloman, 2008;
Kolb, 2008; Myers & Briggs, 2008). We believe that
for our pilot, placing students into "SE" versus
"Psych." categories was sufficient, but for larger
sample sizes these instruments could help manage
the numbers and characterize differences.
The second way we would refine our experimental
design is to apply inter-rater reliability analysis
to the written instructions the students gave before
and after observation. We had the impression that
the SE students made discrete statements, much as
they would when writing computer code or
requirements, while the Psych. students wrote out
instructions in a prose-like fashion, as if they
were describing observed phenomena or reviewing
notes. Qualitative-analysis tools (e.g., Ethnograph,
Atlas.ti) might speed this analysis and refine
findings about distinct use-case writing styles.
Finally, we would enhance our experimental design
by bringing the students from the different classes
together to run this use-case writing activity. It is
often a reality that HCI professionals from different
training backgrounds need to work together, and the
future probably requires more of this multi-
disciplinary interaction. This activity in a mixed
group has potential in the academic context to
enhance each discipline’s curriculum, but more
importantly, it prepares students from each
discipline to work efficiently with “others” as they
enter HCI professions.
This study may seem unrelated to industrial design
at first blush. Nevertheless, there are a few
connections worth mentioning.
Writing use cases for the next generation of
personalized, technology-enabled services is
complicated. Whether the domain is
telecommunications, medicine, or transportation,
use cases require a partnership of abstract and
theoretical thinking and documentation. This
requires both types of students - those inclined
toward hierarchies and logic, and those inclined
toward observation and generalities - to define what
users will need in order to instruct services to
behave as expected. End-user personalization needs
robust engineering.
What if you had a team of people from different
training backgrounds who needed to build a design?
You could use the PB&J activity in an Agile
environment, especially early in the
conception/design phase, to expose differences in
thinking and documentation. You could also overlay
this activity on your particular domain problem,
especially if the problem is a variation on
something that came before. Using an industrial
paradigm: imagine that HCI people, graphic
designers, system testers, and managers all need to
come together to design the "next" smartphone. Do
you think they all have the same use case in mind?
What if you built a device or application for a
small set of users, and then discovered that an
unexpected population was using it? Perhaps you
could use a domain-specific variation of the PB&J
activity to capture the new users' expectations and
thinking; fast redesigns to accommodate the new set
of users could follow.
ACKNOWLEDGMENTS
Thank you to Dr. Janice Stapley and Dr. Daniela
Rosca for sharing your classrooms. We appreciate
the support of Dr. Allen Milewski in finding us a
software engineering classroom. We thank the
students of Monmouth University. And thanks to
Juan Pablo Hourcade for inspiring us at CHI 2007.
REFERENCES
Davis, J. & Rebelsky, S. A. (2007). Food-first
computer science: Starting the first course right
with PB&J. Proceedings from SIGCSE ’07: The
38th SIGCSE Technical Symposium on
Computer Science Education. (pp. 372-376).
New York: ACM Press.
Felder, R.M. & Brent, R. (2005). Understanding
student differences. Journal of Engineering
Education, 94(1), 57-72.
Felder, R.M & Soloman, B. (retrieved 2008). Index
of Learning Styles Questionnaire.
http://www.engr.ncsu.edu/learningstyles/ilsweb.
html.
Hourcade, J. P., Garcia, O. I., & Perry, K. B.
(2007). Learning observation skills by making
peanut butter and jelly sandwiches. Proceedings
from SIGCHI ’07: CHI ’07 Extended Abstracts
on Human Factors in Computing Systems. (pp.
1753-1758). New York: ACM Press.
Kolb, D. A. (retrieved 2008). Kolb Learning Style
Inventory, Version 3.1.
http://www.haygroup.com/tl/Questionnaires_W
orkbooks/Kolb_Learning_Style_Inventory.aspx
#Ordering.
Lewandowski, G. & Morehead, A. (1998).
Computer science through the eyes of dead
monkeys: Learning styles and interaction in CS
1. In D. Joyce & J. Impagliazzo (Eds.), The 29th
SIGCSE Technical Symposium on Computer
Science Education. (pp. 312-316). New York:
ACM Press.
Myers, I. B. & Briggs, K. C. (retrieved 2008). The
Myers-Briggs Type Indicator.
http://www.myersbriggs.org/my%2Dmbti%2Dp
ersonality%2Dtype/take%2Dthe%2Dmbti%2Di
nstrument/.
Ramsden, P. (1997). The context of learning in
academic departments. In F. Marton, D.
Hounsell & N.J. Entwistle (Eds.), The
Experience of Learning: Implications for
Teaching and Studying in Higher Education.
(pp. 198-216). Edinburgh: Scottish Academic
Press.
Whitmire, E. (2002). Disciplinary differences and
undergraduates’ information-seeking behavior.
Journal of American Society for Information
Science & Technology, 53(8), 631-638.
Wu, T. F., Custer, R. L. & Dyrenfurth, M. J. (1996).
Technological and personal problem-solving
styles: Is there a difference? Journal of
Technology Education, 7(2), 55-71.
this article's instrumentation section, it is important to note that Low scores on the PSI indicate high levels of problem solving self confidence, high approach behavior and high levels of personal control.) The difference between personal problem solving and technological problem solving scores within the individual disciplines was found to be significant for humanities students and technology students, but not for engineering students. Humanities students had the highest scores (least positive) in technological problem solving and the lowest scores in personal problem solving. Technology students had the lowest scores (most positive) in technological problem solving and medium scores in personal problem solving (see Figure 4). The data were also analyzed at the sub-scale level. Significant differences were found when comparing the two problem solving style subscales (problem solving confidence, and approach/avoidance) for both PSI-PSYCH and PSITECH scores across the three disciplines. Further comparisons of scores on each of the technological problem solving confidence, technological approach/avoidance, and personal control subscales among the three purposeful samples of students revealed that humanities students had the highest scores (i.e., were least positive) on all of the three technological subscales, while engineering students had medium scores and technology students had the lowest scores (i.e., were most positive) on each of the three subscales