Clinical Science Training at the University of Colorado: A
Measurement of Trainee Satisfaction
Deborah A. Hall, Jeanelle Sheeder, Tamara Box, and A. Laurie Shroyer
Background: Clinical science (CLSC) research education differs from basic science education in that many CLSC
programs have an added goal of creating successful academicians. CLSC programs have expanded curricula that include
teaching career development techniques, such as manuscript and grant writing, and helping young investigators
establish successful mentor-mentee relationships.
Methods: A group of K30 CLSC training program students coordinated a pilot survey to determine if the CLSC training
programs at the University of Colorado were meeting the needs of the participants in both didactic courses and in other
aspects of academic medicine, including research. The small group survey was conducted as part of a clinical outcomes
assessment course. Opportunities for improvement in the CLSC training programs were explored based on the results.
Results: Of 117 CLSC training program participants surveyed, 56% responded. Overall, there was a positive improvement
found for the didactic CLSC research constructs. Participants also reported success in manuscript publication and grant
writing applications. The CLSC program, however, was not successful in coordinating faculty mentor support for student
research projects for 78% of respondents. Once a mentoring relationship was established, students were satisfied with
the mentoring they received.
Conclusion: In general, CLSC trainees were satisfied that the K30 clinical research curriculum was meeting their needs.
Many of the trainees were successful in developing academic skills during the program. Establishing a mentor
relationship was the missing ingredient within the K30 CLSC training program. This may be an important component
that should be considered when developing programs to create the next generation of clinician-scientists.
Key words: clinical research, clinical science, mentor, K30, training
In the late 1990s, a dramatic fall in the number of
investigator-initiated grants submitted by physicians to
the National Institutes of Health (NIH) was recognized
and termed a crisis for medical research.1 In 1994, the
National Center for Research Resources’ Committee
on Addressing Career Paths for Clinical Research
reported that the major barriers to the development of
clinical researchers included insufficient training in
research methods, inadequate mentoring, and inappropriate timing of training.2 The committee’s report
concluded that research training programs that focus
on the formal training of physicians in the skills and
knowledge needed to conduct clinical research should
be developed and implemented. The NIH responded
to this need by funding clinical research curriculum
awards (CRCAs) to develop didactic training programs through a K30 mechanism.
The long-term intent of the CRCA was to produce clinical researchers who could successfully compete for research support and were knowledgeable about the
complex, multidisciplinary issues associated with conducting scientifically rigorous clinical research.

From the Departments of Neurology (D.A.H.) and Pediatrics (J.S.), University of Colorado at Denver Health Sciences Center, Denver, CO; and Cardiovascular Outcomes Research (T.B.) and Division of Cardiac Research (A.L.S.), Eastern Colorado Health Care System, Denver, CO.

This work was supported by an American Academy of Neurology Clinical Research Fellowship (to D.A.H.), National Institutes of Health K23 NS052487 (to D.A.H.), and a National Institutes of Health K30 award (to A.L.S.).

Presented in part at the Clinical Research 2006 meeting, Washington, DC, March 16–18, 2006.

Address correspondence to: Dr. Deborah Hall, 4200 East Ninth Avenue, B183, Denver, CO 80262; e-mail:

Journal of Investigative Medicine 2007;55:181–186.

In 2004, the Association of Clinical Research Training Program Directors’ Evaluation Committee conducted
a survey to determine the early capacity of the 59 NIH K30 programs to produce clinical investigators trained in the skills required to conduct quality clinical research.3 The survey concluded that the K30 programs successfully fulfilled this mission. However, neither the quality, efficacy, implementation, and contribution of the training program to the success of its trainees4 nor the role of mentorship has been well studied.
The survey reported here evaluating K30 trainee
satisfaction was exploratory and had two purposes. The
K30 program leadership team wanted to determine if
the K30 clinical science (CLSC) training programs at
the University of Colorado at Denver Health Sciences
Center (UCDHSC) were meeting the needs of the
students in the following areas: learning clinical
research methodology, building career development
skills (including grant and manuscript writing techniques), and establishing a research mentor relationship.
The second purpose was for the CLSC students to
complete a small group project to fulfill course
requirements for a ‘‘clinical outcomes assessment’’
course. The students were instructed by the course
directors to decide which key outcomes, as well as
other process and structural measures, could be used to
determine the best K30 training programs and where
the NIH could redirect resources to support the goal of
training future clinician-scientists.
The study design was cross-sectional: CLSC training program participants were asked about their
satisfaction with their skills before (retrospectively)
and after CLSC coursework. Additional questions
relating to career development and mentoring were
also included. Students were surveyed using a Web-based survey.
The survey was developed to fulfill a requirement
of the UCDHSC CLSC course ‘‘Clinical Outcomes
Assessment.’’ As part of the course, students learn to
assess not only patient-based clinical outcomes but also
participant satisfaction in various programs using
evaluation performance metrics. The instructions given
to the students conducting the survey were that they
were to act as if they had been hired as consultants to
the NIH to evaluate K30 programs. The ‘‘consultants’’
were to decide which key outcomes, as well as other
process and structural measures, could be used to
determine the best K30 training programs. Although
this project was conducted by the students of the
CLSC course to learn how to assess outcomes, the
educational program’s stakeholders were also attempting to measure CLSC program effectiveness. The
results of the students’ learning experience (those who
conducted this survey) are not reported in this study.
The hypothesis was that 70% of program enrollees would report improved competency in the skills obtained during their coursework, such as the ability to conduct a clinical research project. Other hypotheses tested
were that students would be successful in manuscript
publication and grant submissions after coursework was
completed and that the program had established a
successful mentoring relationship for the student. The
educational program research was approved by the
Colorado Multiple Institutional Review Board.
Using the Competency Domains for Success in Clinical Research from Henry and Bland and colleagues,5,6 the Accreditation Council for Graduate Medical Education (ACGME) research–related competencies, and additional questions regarding mentoring developed by the students, the student group designed a 99-question pilot online survey. Examples of these
questions are contained in Table 1. The questions
were organized by domain. A 5-point Likert scale was
used to evaluate the characteristics of the trainees and
the structures, processes, and outcomes that might be
anticipated from any successful NIH K30–funded
program. The Likert scale measured level of agreement with each statement, with 5 indicating ‘‘strongly agree’’ and 1 indicating ‘‘strongly disagree.’’ The survey evaluated the following structures, processes, and outcomes: that
students were able to independently design, imple-
ment, and complete a research project (process);
whether the program provided sufficient support and
mentorship (structure); and the perceived level of
proficiency of the student and numbers of manuscripts
and grants (outcomes). The psychometric properties of
the survey were not assessed as that was not the
purpose of the group project. An electronic survey was
used as many of the responders were on different
campuses and hospitals. The five student team
members who developed the survey pretested the
Web-based survey prior to release. Questions were not
eliminated after the pilot survey, but some of the
questions needed rewording and clarification. One hundred seventeen students received the survey in its final form.
The Web survey was based on the principles of
Schonlau and colleagues for designing effective Web-
based surveys.7 Their principles include techniques for
surveying with electronic media, such as password-
protecting data and using media tools (hypertext, color,
and interactive tasks). User-friendly input mechanisms,
such as listing only a few questions per screen and
allowing respondents to interrupt or reenter the
survey,7 were used. The pilot survey in this study
was designed to raise questions in the following
domains: manuscript and grant writing, faculty mentorship, career development goals and planning, and
self-assessments related to ACGME research–related
competencies. Specifically, respondents were asked
details about their level of manuscript- and grant-writing productivity and their number of publications
and successful grant applications. Faculty mentorship
questions included questions regarding the program
finding a mentor for the student, communication
between the mentor and mentee, and the effectiveness
of the mentor relationship. The career development
domain addressed activities related to promotion
(besides grants and publications), such as teaching and
being recognized in the field. CLSC participants were
asked to self-assess their confidence in their abilities to
achieve research-related competencies after completing
CLSC coursework in areas such as ethics in human
subjects research and research study design and analysis.
The mean and standard deviation were used to describe the quantity of grant submissions and manuscripts. Wilcoxon signed rank tests were used to determine significant differences (p < .05) between mean responses to survey questions (before and after coursework was completed) for the four domains (graduate medical education competencies, grant and manuscript preparation and submission, faculty mentorship, and career development).
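The paired before/after comparison described above can be sketched as a Wilcoxon signed rank test; a minimal illustration follows, assuming SciPy is available. The score lists are invented for illustration and are not the trainee data.

```python
# Paired Wilcoxon signed rank test, as applied to the before/after
# domain totals above. Scores are hypothetical, not the survey data.
from scipy.stats import wilcoxon

# Retrospective pre-program vs time-of-survey domain totals, one pair
# per respondent (e.g., a 5-question domain summed on a 5-25 scale).
before = [14, 17, 15, 19, 12, 16, 18, 13, 15, 17]
after = [21, 22, 19, 24, 18, 20, 23, 19, 21, 22]

stat, p = wilcoxon(before, after)
print(f"W = {stat}, p = {p:.4f}")
```

Because every hypothetical respondent improves, the rank-sum statistic is 0 and the p value falls well below the .05 threshold used in the study.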
The online survey was sent to all University of Colorado CLSC program participants in the clinical research training programs, including the Clinical Research Training Program, the Certificate program, and the PhD in Clinical Science program. Each participant was invited up to three times, until a response was received. For the latter two programs,
the respondents’ discipline of study was noted based on
their track-specific enrolment (eg, clinical investigation, health services research, or health information
technology tracks). The basic Clinical Research
Training Program is the most condensed of these
programs, containing only six introductory CLSC
courses, with a focus on human subjects ethics,
research methods, evidence-based health care, biosta-
tistics, clinical outcomes, and responsible conduct of
research. The results were not analyzed by program
type owing to the small number of respondents in the
Clinical Research Training Program and the high
overlap in coursework between the CLSC PhD and Certificate programs.
The CLSC program participants vary in their
backgrounds, with doctorally trained (MD or PhD) health professionals (eg, physicians from multiple specialties) and other master’s-level health professionals
(public health fields, epidemiology, biostatistics, health
administration, physical therapy, and nursing).
For this course-related student project, 117
students were invited by e-mail to participate. The
online survey was conducted during July and August
2005. Of the 117 invitees, 65 (56%) students responded
and completed the online survey assessment. Of the
respondents, 45% were CLSC PhD students, 34% were
CLSC Certificate students, and 14% were Clinical
Research Training Program participants. The remaining respondents were not affiliated with the CLSC program directly but were taking the courses as electives for other UCDHSC graduate programs.

Table 1 Sample Survey Questions for Each of the Four Domains Tested

GME research–related competencies
  I can devise and rigorously test experimental hypotheses.
  I can use high legal and ethical standards in the development of new research projects.
  I can select and apply the appropriate study designs and statistics to research problems.
Manuscript and grant writing
  I can prepare grant proposals that are judged successfully competitive.
  I can effectively communicate detailed information in manuscript form.
  I have submitted ____ manuscripts as the first author.
Faculty mentorship
  My mentor provided enjoyable challenges.
  My mentor suggested specific strategies for achieving research goals.
  My mentor serves as a sounding board for me.
Career development
  I am able to teach and communicate scientific knowledge through oral presentations and other teaching activities.
  My colleagues understand the goals and benefits of my participating in the research program.
  I can lead and manage a productive career in clinical research.

GME = graduate medical education.
Although many students were in the early phases
of their training program activities, only one student
expressed dissatisfaction with the lack of knowledge
gained in the core research–related competency areas.
From the time of entry into the program to the date of
the survey, the results showed statistically significant
improvement (p < .001) in student self-perceptions of
their overall CLSC research–related competencies and
in three of the four domains evaluated (Table 2).
The participants’ responses increased from ‘‘unsure’’/‘‘disagree’’ prior to program enrolment toward ‘‘agree’’ at the time of the survey for the skill-based domains, including the following applied CLSC
research constructs: (1) devise and rigorously test
experimental hypotheses, (2) relate clinical research
to the development of new modalities, (3) comply
with ethical standards, (4) successfully conduct a
clinical research project, and (5) select or apply the
appropriate research methods or statistical approach to
a given research question. The ethical and responsible
conduct of research was the most uniform area in
which a positive impact had been reported (3.87 vs
4.63; p < .001). The mean number of manuscripts submitted by students as the first author was 2.79 (± 4.4) and as the coauthor was 3.6 (± 6.0), and the mean number of abstracts was 6.4 (± 8.7). The mean number of grants submitted as the principal investigator (PI) was 2.18 (± 2.9) and as the co-PI was 0.91.
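Productivity summaries of this kind are mean ± sample standard deviation calculations; a minimal sketch using Python’s standard library, with placeholder counts rather than the actual survey responses:

```python
# Mean +/- SD summary of self-reported counts, as in the productivity
# results above. The counts are placeholders, not the survey data.
from statistics import mean, stdev

first_author_manuscripts = [0, 1, 2, 2, 3, 5, 0, 1, 8, 6]
m = mean(first_author_manuscripts)
sd = stdev(first_author_manuscripts)  # sample standard deviation
print(f"{m:.2f} (± {sd:.1f})")  # → 2.80 (± 2.7)
```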
Based on the responses to the mentoring domain
questions, the CLSC program was not successful in
coordinating faculty mentor support for student
research projects (Table 3). Approximately 78% of
the student survey respondents noted a lack of
satisfaction with the support provided to assist them
to find a primary faculty mentor who met their
research project needs and long-term career goals.
Interestingly, however, students who stated that the
program had helped them find a mentor were more
likely to report that their mentor was effective in that
role than students who stated that the program did not
help them find a mentor (100% vs 59.4%; Fisher exact p = .37).
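The comparison above is a Fisher exact test on a 2 × 2 table (program-found vs self-found mentor, rated effective or not). A sketch follows, assuming SciPy; the cell counts are invented only to match the reported percentages, since the denominators are not given in the text.

```python
# Fisher exact test on a 2x2 table, as in the mentor-effectiveness
# comparison above. Cell counts are hypothetical (chosen to match the
# reported 100% and 59.4% proportions; true totals are not reported).
from scipy.stats import fisher_exact

#                   effective  not effective
table = [[5, 0],    # program helped find mentor
         [19, 13]]  # found mentor on their own (19/32 ~ 59.4%)

odds_ratio, p = fisher_exact(table)
print(f"two-sided p = {p:.3f}")
```

The exact p value depends on the (unreported) cell totals, which is why this sketch does not reproduce the p = .37 quoted above.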
This survey was an exploratory study that was used to determine if the UCDHSC K30 training programs were meeting the needs of the students and to fulfill a small group project requirement for a clinical outcomes assessment course.

Table 2 Wilcoxon Signed Rank Tests for Three of the Domains

Domain | Total Score at Beginning of Program* | Total Score at Time of Survey | p Value
GME competencies (5 questions) | 17.36 ± 4.35 | 21.47 ± 2.74 |
Career development (5 questions) | 16.59 ± 3.23 | 19.63 ± 2.87 |
Grants/manuscripts (3 questions) | 9.43 ± 2.49 | 12.2 ± 1.94 |

GME = graduate medical education.
*This was collected retrospectively.

Table 3 Results of Survey Questions Related to Mentoring (Likert Scale)

Question | Mean ± SD
The program helped me find a primary mentor. | 2.52 ± 1.09
My mentor lets me have a lot of input in deciding how our relationship | 4.00 ± 0.85
My mentor allows me to express my opinions. | 4.20 ± 0.76
My mentor takes my ideas and opinions into consideration. | 4.13 ± 0.79
My mentor provided enjoyable challenges. | 4.15 ± 0.89
My mentor provided information about how to do research. | 4.10 ± 0.88
My mentor gave me advice on how to attain recognition as a researcher. | 3.98 ± 0.95
My mentor suggested specific strategies for achieving research goals. | 3.98 ± 0.89
My mentor brings my accomplishments to the attention of important | 3.83 ± 0.98
My mentor serves as a sounding board for me. | 4.03 ± 0.92
My mentor guides my professional development. | 4.11 ± 0.98
I am satisfied with my mentor. | 4.08 ± 1.00
My mentor has been effective in his/her role. | 4.03 ± 0.95

Overall, the UCDHSC
CLSC program students indicated that the curriculum
successfully met their needs. This result was expected
given the success reported by students in other K30
programs. It is important to note that the survey
developed by the students was preliminary and was not
validated, nor was psychometric testing conducted.
This should be considered in interpreting survey
findings. The survey finding that many students were missing the desired involvement of a faculty mentorship relationship was not anticipated because the CLSC program currently has over 78 graduate school faculty, representing a wide diversity of clinical and analytic disciplines.
Although this pilot survey was conducted for a
course-related project, these preliminary findings
suggest that in addition to evaluating the process and
outcomes in clinical research programs, mentorship
should be evaluated as well. Without this course-related survey at UCDHSC, the challenges and barriers
related to participants identifying appropriate mentors
may not have been recognized.
Given this survey’s findings, the CLSC program now has an established strategic plan to more formally evaluate the clinical skills obtained and the quality of mentoring, using the University of Wisconsin’s skill inventory (Bakken’s measure) and the Berk et al mentorship evaluation scales.4,8 Bakken’s
measure includes several categories of learning objectives that are evaluated during the program: biostatistics and study design, ethics, leadership/management, presentation and teaching, scientific writing, and research.4 Several characteristics in each of these
categories are assessed using multiple methods, such
as expert panel review, postprogram tracking, and
portfolio assessments. The Berk mentorship scale is a
Likert-type summated rating scale that was generated
by the Ad Hoc Faculty Mentoring Committee at Johns
Hopkins University School of Nursing to reflect a
comprehensive assessment of the effectiveness of a mentoring relationship.
A mentoring relationship has been described in
studies as ‘‘a dynamic, reciprocal relationship in a work
environment between an advanced career incumbent
(mentor) and a beginner (protege), aimed at promoting
the development of both.’’9 Unfortunately, most successful physician-scientists may leave teaching and mentoring to others as they concentrate their efforts solely on scientific pursuits.10 Concerns have been
raised that mentorship duties are then delegated to
more junior faculty, many of whom are still trying to
develop their own investigational skills and clinical
expertise. A study of maternal-fetal medicine fellows reported that faculty members with mentors had significantly higher career satisfaction scores than those without mentors.11 In addition, obstetrics-gynecology fellows who reported having a mentor were more likely to achieve promotion.12 Although other NIH
funding options for ‘‘Mentoring a Mentor’’ programs
exist, the mentor-mentee relationship is a critical
component of most career development awards, as
well as the NIH loan repayment program. Many of the
departments at UCDHSC coordinated mentorship for the CLSC program students directly, but this coordination was not consistent or uniform across disciplines.
The remaining students were asked to identify a mentor in their field of research and to establish a commitment from the mentor prior to entry into the training program. The variability and/or success of
these student-based approaches used to find a mentor
were not surveyed in this study. It is possible that some students had previously identified faculty mentors who did not meet their needs and replaced them prior to enrolment in the CLSC program.
reported that they had a mentor identified, their
satisfaction level was noted as high for the mentoring
they received. However, those who had mentors provided by the CLSC program had more positive
feelings about their mentors than those who found
mentors on their own.
A recent review on mentoring in academic
medicine suggested that two techniques would be
useful to investigate mentoring in academics. The
first would involve randomizing participants to a multifaceted mentoring intervention or to a single-component mentoring intervention and evaluating outcomes.13 The second approach would be to use a
prospective cohort study design following trainees or
faculty forward in time.13The CLSC program student
group conducting this survey recommended that the NIH support K30 programs in incorporating mentoring outreach activities, provide a small fiscal incentive to K30 programs, and enhance recognition for mentors in the future. As a result of the survey, the
UCDHSC program has now established a formal
mentor-mentee training opportunity with online
matching offered to faculty and students of the CLSC program.
The CLSC K30 training programs at UCDHSC were found to meet the needs of the students
in the domains of research-related competencies,
manuscript and grant writing, and career development.
Formal assessment within K30 programs should be conducted regularly to identify and improve unanticipated program deficiencies, such as the lack of formal mentor matching by the CLSC program.
References
1. Holcombe R. Crisis in medical research. Acad Med 2007;
2. Kelley W, Randolph M, editors. Careers in clinical
research: obstacles and opportunities. Washington (DC):
National Academy Press; 1994.
3. Bakken LL, Lichtenstein M; ACRTPD Evaluation Committee. Survey of the impact of National Institutes of Health clinical research curriculum awards (K30) between 1999 and 2004. J Invest Med 2005;53:123–7.
4. Bakken LL. An evaluation plan to assess the process and
outcomes of a learner-centered training program for clinical
research. Med Teacher 2002;24:162–8.
5. Henry R. Developing research skills for medical school
faculty. Fam Med 1997;29:258–61.
6. Bland CJ, Schmitz CC, Stritter FT, et al. Successful faculty
in academic medicine. Essential skills and how to acquire
them. New York: Springer; 1990.
7. Schonlau M, Fricker R, Elliott M. Conducting research surveys via email and the Web. Santa Monica (CA): RAND; 2002.
8. Berk RA, Berg J, Mortimer R, et al. Measuring the effectiveness of faculty mentoring relationships. Acad Med 2005;80:66–71.
9. Healy CC, Welchert AJ. Mentoring relations: a definition to
advance research and education. Educ Res 1990;19:17–21.
10. Kaushansky K. ASCI Presidential Address: mentoring and teaching clinical investigation. J Clin Invest 2004;114:1165–8.
11. Sciscione AC, Colmorgen GH, D’Alton ME. Factors
affecting fellowship satisfaction, thesis completion, and
career direction among maternal-fetal medicine fellows.
Obstet Gynecol 1998;91:1023–6.
12. Wise MR, Shapiro H, Bodley J, et al. Factors affecting
academic promotion in obstetrics and gynaecology in
Canada. J Obstet Gynaecol Can 2004;26:127–36.
13. Sambunjak D, Straus SE, Marusic A. Mentoring in
academic medicine. JAMA 2006;296:1103–15.