Error rates in physician dictation:
quality assurance and medical
record production
Gary C. David
Department of Sociology, Bentley University, Waltham,
Massachusetts, USA, and
Donald Chand and Balaji Sankaranarayanan
Information and Process Management, Bentley University, Waltham,
Massachusetts, USA
Purpose – The purpose of the paper is to determine the incidence of errors made in physician dictation
of medical records.
Design/methodology/approach – Purposive sampling method was employed to select medical
transcriptionists (MTs) as “experts” to identify the frequency and types of medical errors in dictation
files. Seventy-nine MTs examined 2,391 dictation files during one standard work day, and used a
common template to record errors.
Findings – The results demonstrated that, on average, on the order of 315,000 errors per one
million dictations were surfaced. This shows that medical errors occur in dictation, and quality
assurance measures are needed to deal with those errors.
Research limitations/implications – It was not possible to assess inter-coder reliability or to
confirm the error codes assigned by individual MTs. This study only examined the presence of
errors in the dictation-transcription model. Finally, the project was done with the cooperation of
MTSOs and transcription industry organizations.
Practical implications – Anecdotal evidence points to the belief that records created directly by
physicians alone will have fewer errors and thus be more accurate. This research demonstrates this is
not necessarily the case when it comes to physician dictation. As a result, the place of quality
assurance in the medical record production workflow needs to be carefully considered before
implementing a “once-and-done” (i.e. physician-based) model of record creation.
Originality/value – No other research has been published on the presence of errors or classification
of errors in physician dictation. The paper questions the assumption that direct physician creation of
medical records in the absence of secondary QA processes will result in higher quality documentation
and fewer medical errors.
Keywords Six Sigma, Quality assurance, Error analysis, Medical records
Paper type Research paper
1. Introduction
Patient medical records are a critical component of the US healthcare system. In many
ways, medical records are the linchpins that tie together all the disparate functions that
take place within the healthcare industry. “It binds the individual to the organization,
and, by means of a selective and well-practiced process, constructs a biography for
them” (Rees, 1981). Beyond the obvious uses for treatment, medical records also are
used for coding, billing, reimbursement, audit, legal disputes and inquiries,
epidemiological research, physician and hospital scores, amongst others.

[Received 5 June 2012. Revised 5 November 2012. Accepted 8 March 2013. International Journal of Health Care Quality Assurance, Vol. 27 No. 2, 2014, pp. 99-110. © Emerald Group Publishing Limited. DOI 10.1108/IJHCQA-06-2012-0056]

The
centrality of the medical record has led to the adage, “If it isn’t written down, it didn’t
happen.” Thus, medical records are not just important for patient care; they are vital
for all of health care. Given increasing documentation burdens, a major challenge for
care providers is how to produce records more quickly and cheaply without reducing
accuracy and quality. This paper examines instances of errors in physician dictation of
patient encounters, and the potential impact this can have on health care in the absence
of quality assurance (QA) processes. In doing so, the paper raises the question of
whether records created only by physicians without any QA function can be trusted to
be reliable and accurate.
2. Medical record quality
Medical record production technologies are increasingly seen as the tools for creating
better and more timely medical records, while at the same time increasing cost
effectiveness. Hieb (2007) forecasts that the “once-and-done” model of documentation
production, where records are created solely by the physician (either through speech
recognition technology (SRT) or electronic health record (EHR) systems), is the optimal
and most efficient approach to use. In this model, the “physician is responsible to correct
any errors in speech recognition, as well as to format the document appropriately” (Hieb,
2007). In fact, much has been made of the efficiency gains and potential in using SRT and
EHR systems. For instance, President Obama’s 2009 address to the Joint Session of
Congress stated, “Our recovery plan will invest in electronic health records and new
technology that will reduce errors, bring down costs, ensure privacy, and save lives.”
This view is part of “the widespread belief that the implementation of electronic medical
record systems will naturally eliminate medical errors and make healthcare safer” (Lin,
2010). In the physician-as-editor model, it is assumed that the physician will find errors,
edit the document, and do the proper formatting.
There is evidence, however, that this assumption does not necessarily hold, and that
physicians do not take the time to proof-read and edit their records. In fact, rather than
SRTs and EHRs eliminating errors, errors can persist and new types of errors arise.
Campbell et al. (2006) found that “When busy clinicians cannot readily find the
“correct” data entry location, they tend to enter data where it might fit.” Along with
issues related to cost and integration, system usability continues to confound doctors
and be a barrier to implementation (Ash and Bates, 2005; Shields et al., 2007), which
can have a direct impact on data quality and record errors. In terms of SRT systems,
even when the software is able to capture physician dictation exactly as spoken, this
does not mean that what is said is accurate. In other words, if an SRT system is 100
percent accurate in recognizing what a dictator says, but what the dictator says is
inaccurate, the record is still inaccurate. Furthermore, there also is the secondary issue
of whether it is the best use of physician time.
This raises the question of, “How good is ‘good enough’?” While it can be said that it
is the physician’s job to create accurate records, physicians often want their records to
be good enough to support their own work without recognition of the more global
import of these documents. So, while we might think medical records should be
completely accurate, such a view is not necessarily shared, nor is it necessarily
attainable. Medical transcription service organizations (MTSOs) claim rates of
“98 percent accuracy,” a rate demanded by hospitals. At the same time, despite
the stated importance of accurate records, within a healthcare setting there is not
necessarily a single role whose primary job it is to perform quality assurance in order
to identify errors in medical records. Depending upon the workflow model employed,
the QA activity could fall to medical transcriptionists (MT), clinical documentation
specialists (CDS), coder or office manager. In any of these cases, however, it is not
necessarily the principle job of these roles. For instance, MTs technically are employed
to turn what the doctor dictates into a usable document either by transcribing directly
from a voice file or editing from a speech recognition generated draft. CDSs review
documentation to make sure it contains a diagnosis and relevant major
complications/comorbid conditions (MCCs) or complications/comorbid conditions
(CCs) in order to support accurate coding and billing, and Case Mix Indices for
hospitals. Beyond these tasks, both can routinely point out (or “flag”) the presence of
errors so that the originating physician can double check and clarify any
inconsistencies. The same can be said for others who might make use of the record
in the patient care setting (Martin and Wall, 2011). However, it can be a challenge to
catch errors before the physician signs the record, as once the record is signed it is a
legal document and typically only can be modified through an addendum. In other
words, once a document is final, any remaining errors will likely stay.
Furthermore, once an error appears in a record there is the likelihood that it will be
replicated in an electronic system. “Copy-and-paste” usage of electronic records has
resulted in less quality review and the production of records that have errors repeated
(Hirschtick, 2006; O’Donnell et al., 2009). Hartzband and Groopman (2008) state, “we have
observed the [electronic medical record] become a powerful vehicle for perpetuating
erroneous information, leading to diagnostic errors that gain momentum when passed on
electronically.” Siegler and Adelman (2009) concur that “with each iteration, notes
lengthen and errors accumulate.” The ability to share medical records with an
unspecified number of users, especially in a nationwide health information network,
raises the stakes on maintaining quality in medical records at the point of creation.
Furthermore, potential uses beyond patient care (such as with health insurance) create
the potential for a one-time error in a record to have impacts beyond the immediate
healthcare setting and for anyone who uses the medical record. Thus, along with
physician use of SRTs, “additional research is needed on overcoming the EHR’s
limitations on dependably achieving higher quality at affordable costs” (Sidorov, 2006).
Since workflows are being recommended based on technological advances that
remove any secondary review from the process, this paper examines the occurrence of
physician errors in medical record dictation. As Cao et al. (2003) note, “The critical step
for managing and preventing medical errors is identifying them.” This was done
through the use of medical transcriptionists examining raw dictation files for errors
during the routine course of their work. The results indicate that errors can occur on
the order of 315,000 errors in one million dictations (or 0.315 errors per dictation). We
thus discuss the issue of secondary review of medical records in the QA process, the
potential impact to health care and patient safety that an absence of a QA process in
medical record production could have.
3. Method
Purposive sampling method (Tashakkori and Teddlie, 2003) was employed to select
medical transcriptionists as “experts” to identify the frequency and types of medical
errors in dictation files. Examining errors in medical dictations requires the expertise
of trained professionals. Therefore, our sample consisted of medical transcriptionists
(MTs) whose job entails transcribing dictations from voice files and speech-recognition
software. Additionally, as a result of their training, MTs are also well-suited to
examine errors in dictation files.
For benchmarking purposes, our study utilized the quality measure developed by
the Association for Healthcare Documentation Integrity (AHDI), Medical Transcription
Industry Association (MTIA), and the American Health Information Management
Association (AHIMA) for assessing medical errors in dictations (Table I). The quality
measures identify two types of errors: critical errors and major errors. Critical errors
are those types of medical errors that compromise the safety of the patient. Common
critical errors include referring to a wrong patient during a dictation, citing incorrect
drug name, interchanging the patient’s left side and the right side, and providing
medically inconsistent medications. Major errors on the other hand are those types of
medical errors that compromise the integrity of the document, without any risk to
patient care or continuity of care. Common major errors include gender mismatch, age
and date-of-birth mismatch, incorrect citing of event dates, and made-up words and
abbreviations.

Table I. Dictation errors by type of error

Instructions: Enter a hash mark (#) for each dictation error you identify. Include errors you correct, send to QC, or flag for physician review. There may be multiple hash marks for a single job.

Critical errors
- Wrong patient: patient demographics don’t match dictation
- Wrong drug name or dosage: dictates “salmeterol” instead of Solu-Medrol; “mcg” vs “mg”
- Wrong lab values: dictates same lab value for two different tests
- Left/right discrepancy: refers to “left knee” and further down “right knee”
- Medical discrepancy: drug listed in both meds and allergies; “no neck pain” in ROS when CC is neck pain
- Speech recognition error: translation error by SRT software (critical)
- Other: other errors that could affect patient safety

Major errors
- Age mismatch: dictated age doesn’t match DOB
- Sex mismatch: refers to patient as both “he” and “she”
- Wrong name: dictates “patient’s mother” when it’s “patient’s …”
- Wrong doctor: misidentifies referring doctor name
- Wrong date: dictates “had surgery Sept. 2009”
- Made-up words, acronyms: dictates “patient was surgerized”
- Speech recognition error: translation error by SRT software (major)
- Other: other errors that could affect document integrity

Examples (optional): use this space to record actual examples of errors encountered during the study, or copy and paste into an addendum to send in with your study results.
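The coding sheet described above is essentially a per-category tally of hash marks that is later collapsed into critical and major totals. As a minimal sketch only (the study used a paper/spreadsheet template; the category names below are paraphrased from Table I, not the study's own identifiers):

```python
from collections import Counter

# Paraphrased error categories from the AHDI/MTIA/AHIMA-based coding sheet (Table I).
CRITICAL = {"wrong_patient", "wrong_drug_or_dosage", "wrong_lab_values",
            "left_right_discrepancy", "medical_discrepancy",
            "speech_recognition_critical", "other_critical"}
MAJOR = {"age_mismatch", "sex_mismatch", "wrong_name", "wrong_doctor",
         "wrong_date", "made_up_words", "speech_recognition_major",
         "other_major"}

def summarize(tallies):
    """Collapse per-category hash-mark counts into critical/major totals."""
    critical = sum(n for cat, n in tallies.items() if cat in CRITICAL)
    major = sum(n for cat, n in tallies.items() if cat in MAJOR)
    return {"critical": critical, "major": major, "total": critical + major}

# One MT's work day: one hash mark per error found, whether corrected
# immediately or flagged for review by QC or the dictating physician.
day = Counter({"wrong_drug_or_dosage": 2, "made_up_words": 3, "age_mismatch": 1})
```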
The study also involved several important controls to limit the impact of potential
biases. First, to avoid any response bias due to mandatory
inclusion (Furnham, 1986), MTs were instructed that participation in the study was
entirely voluntary. Therefore, although purposive sampling was used, MTs
voluntarily chose to participate in the study. Second, our earlier interaction with
MTSOs revealed that most MTs telecommute to work, i.e. connect from home to the
hospital’s information systems. Therefore, rather than forcing MTs into an artificial
experimental setting or asking them to respond to simulated scenarios, we chose to
use MTs in their natural work setting and have them surface dictation errors during
their daily routines of transcribing dictations and/or editing speech-recognition
generated drafts. The study thus required no intervention on the part of the authors
with any medical records or dictations. As the work was performed using the playback
equipment and transcription platform provided by each MTSO organization and/or
hospital, no patient health information was taken outside of the normal workflow of
worksites or viewed by the authors. Integrating the study into the MTs’ natural job
setting, coupled with minimal involvement by the study authors, contributed to a close
approximation of unobtrusive measurement of research data (Webb et al., 2000).
Finally, we sought to generate a representative sample of the type of work MTs
experience by utilizing three techniques: choosing sites that were geographically
distributed in the US; providing two types of dictations (raw dictations that were
transcribed, and speech-recognition-generated drafts that were edited); and including a
range of work types (inpatient jobs and outpatient jobs). These techniques ensured that
a random assignment of medical transcription files was provided to the participating MTs.
To ensure that the instructions and coding sheet were consistently interpreted, a pilot
study with a convenience sample of MTs was conducted prior to the data-gathering work
day. Through the coordinators of eight medical transcription service organizations
(MTSOs) and one in-house transcription department in a multi-hospital system, MTs
who could potentially participate in our study were identified. A request soliciting
participation in the study was sent to individual MTs. Once MTs accepted the invitation,
specific instructions regarding types of errors and how to record them were provided.
Study participants completed the error coding sheet (Table I) for each dictation job they
handled during one standard work day. They were instructed that when they discovered
an error they should check the error coding types to establish whether it is a critical or
major error, and whether it was due to physician misstatement (“dictation error”) or
mistranslation by speech recognition software (“speech recognition error”). Participants
were also instructed to record all errors whether they were corrected immediately or
“flagged” for review to be corrected later by quality control or the dictating physician.
A total of 79 medical transcriptionists processed 2,489 dictation files. Because coding
protocols were not followed for 98 files, we eliminated them from the study database. Therefore,
a final sample of 2,391 dictations was used in the study. The study results were
stratified on job types and collated within each job type into inpatient transcription and
outpatient transcription (Table II). Transcription jobs are those where the
transcriptionist processed a doctor’s dictation file by listening to it and typing what
she heard the physician saying. Speech recognition jobs are those in which a dictated
voice file is first processed through a back-end speech recognition system creating a
draft document, which then is paired with the original voice file and edited by an MT.
4. Findings
The incidence of dictation errors by job types is reported in Table III. The main
findings of this study are: doctors can make significant errors in dictations, and quality
assurance processes performed by medical transcriptionists can surface approximately
0.315 errors per dictation (755 errors in 2,391 dictations). Using the Six Sigma[1] quality
norm of errors per one million opportunities (Goyal, 2010), our data suggest that
doctors make, on average, 315,000 errors per 1 million dictations. This can be broken
down into a critical error rate of 0.099 errors per dictation (99,958 critical errors per 1
million dictations) and a major error rate of 0.216 errors per dictation (215,809 major
errors per 1 million dictations).
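The rates above follow the standard defects-per-million-opportunities (DPMO) arithmetic, treating each dictation as a single opportunity. A sketch of the calculation from the reported counts (the sigma-level conversion, with its customary 1.5-sigma shift, is our addition for context, not a figure from the study):

```python
from statistics import NormalDist

# Error counts reported in Section 4 (239 critical + 516 major = 755 errors).
critical, major, dictations = 239, 516, 2391
total = critical + major

# Defects per million opportunities, with one dictation = one opportunity.
dpmo = total / dictations * 1_000_000              # ~315,767
critical_dpmo = critical / dictations * 1_000_000  # ~99,958
major_dpmo = major / dictations * 1_000_000        # ~215,809

# Conventional short-term sigma level (includes the customary 1.5-sigma shift).
sigma_level = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5
```

At roughly 315,000 DPMO the process sits near a two-sigma level, far from the 3.4 DPMO that defines Six Sigma performance.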
The most commonly occurring critical errors were wrong patient name (n = 58),
wrong drug name or dosage (n = 52), and “other” critical errors (n = 58). Commonly
occurring major errors were made-up words or acronyms (n = 123), followed by
gender mismatch (n = 41) and age mismatch (n = 35). The general “other” major errors
category accounted for 259 of the errors.
Table II. Dictation jobs by job type and patient setting

Job type                    Inpatient    Outpatient    Total (%)
Transcription jobs          367          1,058         1,425 (60)
Speech recognition jobs     608          358           966 (40)
Total (%)                   975 (41)     1,416 (59)    2,391 (100)

Table III. Incidence of dictation errors by work type
(TI = transcription inpatient, TO = transcription outpatient,
SI = speech recognition inpatient, SO = speech recognition outpatient)

                             TI     TO     SI     SO    Total
Critical errors
Wrong patient                 8     10     38      2     58
Wrong drug name/dosage        4     15     29      4     52
Wrong lab values              1      3     26      0     30
Left/right discrepancy        1     10      5      1     17
Medical discrepancy           3      3     15      3     24
Other                         3     12     40      3     58
Total critical errors        20     53    153     13    239
Major errors
Age mismatch                  5     10     11      9     35
Gender mismatch               3      6     28      4     41
Wrong name                    1      7      5      3     16
Wrong doctor                  0      6     12      2     23
Wrong date                    3      7      6      3     19
Made-up words                11     28     76      8    123
Other                        39    100    112      8    259
Total major errors           62    164    250     37    516
Total errors                 82    217    403     50    752
We further conducted Fisher’s exact test (Tables IV and V) on critical and major errors,
by job type (transcription, speech recognition) and record type (inpatient, outpatient).
The results indicate that the proportion of critical versus major errors differs
significantly across job and record types; that is, different types of errors are more
prevalent in different types of jobs and records.
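The reported test statistics can be reproduced from the Table IV cross-tabulation. A pure-Python illustration of the Pearson chi-square computation (in practice a library routine such as scipy.stats.chi2_contingency would be used; this reconstruction is ours, not the authors' code):

```python
# Table IV cross-tabulation: rows are job/record types,
# columns are (critical errors, major errors).
observed = [
    (153, 250),  # speech recognition, inpatient
    (13, 37),    # speech recognition, outpatient
    (20, 62),    # transcription, inpatient
    (53, 164),   # transcription, outpatient
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(row[j] for row in observed) for j in range(2)]
grand = sum(row_totals)  # 752 valid cases

# Pearson chi-square: sum of (O - E)^2 / E over all cells,
# where E = row_total * col_total / grand_total.
chi_square = 0.0
min_expected = float("inf")
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        min_expected = min(min_expected, expected)
        chi_square += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(observed[0]) - 1)  # 3 degrees of freedom
```

This reproduces the chi-square of 15.363 on 3 degrees of freedom and the minimum expected count of 15.89 reported in Table V.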
However, the numbers themselves only indicate the extent to which errors were
made, but do not completely demonstrate the nature of the errors. Along with tracking
the general classification of errors, MTs recorded specific examples, some of which are
provided in Table VI. The first part of the table lists actual
doctor dictation and compares it to the text generated by the speech recognition
technology. The second part of the table lists doctor dictation and compares it to
contradictory information found in the medical record, and/or errors that were
identified by MTs through their professional experience.
Without going into details on each discrepancy or issue, some important instances
stand out. First, in terms of speech recognition errors, the first instance is fairly
significant. The doctor dictated in a cesarean post-operative note that something was
“followed easily by the rest of the infant.” While we do not know the exact context of
this utterance, we do know that it is significantly different from “with arrest of the
infant.” In medical terminology, an “arrest” means a stoppage or cessation of some
function, as in “cardiac arrest” (or heart stoppage). To say “with the arrest of the
infant” would be to indicate a significant problem, which is much different from
“followed easily by the rest of the infant.” A second example from the SRT list is the
dosage of Heparin, which typically is used as an anticoagulant. A total of 5,000 units of
Heparin is a typical dosage for adults, especially as an initial administration of the
drug. Of course, it would depend on the specific situation and reason for using the
drug. However, it is clear that 5,000 is not the same as 1,000, and that hospital
medication errors are a significant problem in health care. Thus the difference between
5,000 and 1,000 units of Heparin should be cause for concern. Furthermore, since it is
not necessarily unusual for 1,000 units of Heparin to be administered (especially as a
maintenance dose), this error may not have been caught in the absence of the original
voice file.

Table IV. Cross-tabulation of critical vs major errors

                                  Critical errors    Major errors    Total
Speech recognition, inpatient           153               250          403
Speech recognition, outpatient           13                37           50
Transcription, inpatient                 20                62           82
Transcription, outpatient                53               164          217
Total                                   239               513          752

Table V. Fisher’s exact test

                        Value     df    Asymptotic significance (2-sided)
Pearson chi-square      15.363     3    0.002
Likelihood ratio        15.542     3    0.001
Fisher’s exact test     15.210
No. of valid cases      752

Note: 0 cells (0.0 percent) have an expected count less than 5; the minimum expected count is 15.89.
In the second set of examples, we have errors that were identified as a result of
discrepancies in the patient medical record. The first two examples are issues
regarding dates. Again, without knowing the context of the dates it is difficult to know
what impact the error would have. One potential impact is the ability to search for past
medical records by date. The wrong date obviously makes this more difficult to do.
Also, if the wrong date is recorded in conjunction with a procedure, this can cause problems in
terms of how follow-up care is administered, especially in the instance where the date is
off by three months. Another example has the patient being “seen by her
gastroenterologist during this pregnancy.” However, the patient also is listed as
nulligravida, i.e. she has never been pregnant. Furthermore, the record indicates that she
had a negative pregnancy test on her hospital visit. While the impact of this is not
known, it is enough of a discrepancy to result in an error.
A final example is for a patient who had the description of “Cranial nerves II
through XII intact.” At the same time, the patient has a diagnosis of being deaf. Cranial
nerve VIII (the vestibulocochlear nerve) is concerned with hearing. Since the patient is
deaf, there is the possibility that cranial nerve VIII is not intact. While it also is possible
that the deafness is the result of something other than damage to this cranial nerve, it
does raise a possible discrepancy in the record. Furthermore, no matter the cause of the
deafness, the examination of cranial nerve VIII would not be “intact”, since the only test
performed is to ascertain whether the patient can hear normally in both ears.

Table VI. Examples of errors in medical record sample

Speech recognition technology (original dictation vs speech recognition output):
1. (In cesarean op note) “followed easily by the rest of the infant”; rendered as “with arrest of the infant”
2. “5000 units of Heparin”; rendered as “1000 units of Heparin”
3. “or vascular intervention”; rendered as “of left pleuritic intervention”
4. “white count 9”; rendered as “white count 90”
5. “Iliofemoral”; rendered as “Aortoiliofemoral”

Dictation errors (original dictation vs corrected or contrary information):
6. 8/20/09; corrected to 5/20/09
7. 4/23/09; corrected to 5/20/09
8. Milliequivalents; corrected to millicuries
9. “Blood pressure is 98.4”; the 98.4 is the temperature
10. “The patient has been seen by her gastroenterologist during this pregnancy”; the patient is actually nulligravida and is scheduled for surgery, and had a negative pregnancy test on this hospital visit
11. “Cranial nerves II through XII intact”; the patient has been diagnosed with deafness, and as cranial nerve VIII is concerned with hearing, nerves II through XII are likely not intact
12. Weight given without unit of measurement (pounds or kg?)
13. Current meds were listed under allergies
It can be difficult to gauge the severity and impact of any errors out of context of the
record’s use. A “major error” may have no actual impact on patient treatment because
the physician recognizes the error, or the error is not consequential to treatment.
However, this lack of harm does not negate the fact that the record was wrong. At the
same time, the physician is not the only person who will use the record. The potential
users of the medical record include other healthcare professionals, caregivers, family
members, billers, coders, auditors, researchers, lawyers, etc. Thus, while the error may
not have had any real impact in one situation, it does not mean it will not have an
impact in another situation.
This means that the overall goal must be to have records that are accurate and
error-free. We do know that errors occur in dictation, that there is less likelihood of
finding those errors when the QA function is diminished, and errors in medical records
can have negative impacts in patient care and other uses of medical records. Thus,
additional attention needs to be given regarding the shift from supported record
production to autonomous record production with no secondary quality assurance.
5. Discussion
Efficiency gains and cost reductions in the medical record production process may not
mean much if there is a loss in quality. As public policy discussions focus increased
attention on medical records, the quality of those records is being called
into question. Weir and Nebeker (2007) studied the quality of medical records produced
when physicians enter data directly into an electronic medical record (EMR) system,
and they found that 84 percent of all notes had at least one documentation error and an
average of 7.8 errors per inpatient chart. In Birmingham (UK), “One in ten electronic
medical records contain errors” (Smith, 2010), such as out-of-date information that
affected current medication lists. As Fernandes (2009) notes, “poor data quality within
a system has always been kryptonite to success, and it still threatens attempts to
achieve significant reform.”
This paper demonstrates what is well known to virtually all who routinely review
medical records: errors happen. This fundamental point is demonstrated by the data in
this study. While the efficiencies gained (in terms of turn-around time) through
physician direct entry might be significant, these gains have the potential to be off-set
by errors in the record, which might give rise to numerous other problems. These
potential problems are not limited to patient care. Since so much of what occurs in the
American healthcare system is based on the content of medical records, much of the
healthcare system can be impacted by errors in the records. This includes elements
such as coding, billing, reimbursement, audit, research, and legal proceedings. Thus
the “once and done” design philosophy of EHR/EMR tools overlooks the quality
assurance role of medical transcriptionists (amongst others such as clinical
documentation specialists). Removing a QA step in the workflow thus can have
important repercussions on documentation quality.
This raises the question of what can be done to improve the quality of medical
records. As demonstrated by this project, MTs can and have provided a QA element to
the production of medical records. However, their role has been ignored by those in the
healthcare setting as well as academics researching health care. David et al. (2009)
demonstrated the professional knowledge that MTs employ in the course of doing their
work. Garcia et al. (2010) similarly demonstrated that MTs do more than “just type”.
Rather, an MT “hears and interprets what the doctor has said; subsequently she sends
the completed transcript back to the physician, along with any queries or flags needed
to resolve ambiguities and correct suspected errors” (Garcia et al., 2010).
Furthermore, QA is an explicit phase in the transcription workflow, performed
by someone whose job is to concurrently (before the record is delivered to the customer)
and retrospectively (after the record is delivered) review a certain percentage of
completed records. In some organizations every transcribed record is QA’d, and in
others QA is performed periodically on typically a 20 percent random sample of these
finished reports (although the actual percentage can vary across MTSOs). Because the
MTSOs have demonstrated a capability to achieve a Six Sigma level of quality (Chand
and David, 2009), they are well positioned to play an important role in validating
medical records.
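Retrospective QA of this kind is a simple random audit of finished reports. A minimal sketch of drawing such a sample (the 20 percent default and the function name are illustrative, since actual sampling policies vary across MTSOs):

```python
import random

def qa_sample(report_ids, fraction=0.20, seed=None):
    """Draw a simple random sample of finished reports for retrospective QA review."""
    rng = random.Random(seed)
    k = max(1, round(len(report_ids) * fraction))  # at least one report is reviewed
    return rng.sample(list(report_ids), k)

# Example: select 20 percent of 500 finished reports for review.
reports = [f"report-{i:04d}" for i in range(500)]
to_review = qa_sample(reports, seed=42)
```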
At the same time, it is not possible to provide a single QA solution for every setting.
This paper is not arguing that every medical record needs to be transcribed, as this
is clearly not the case. However, regardless of how the record is created, this paper
recommends that healthcare systems carefully examine how quality assurance of
medical records is performed. Furthermore, hospital administrators need to consider
how best to maintain QA functions when the method of medical record production
undergoes drastic transformation, as when once-and-done production technologies are adopted.
6. Limitations and future directions
There are some potential limitations associated with the study that should be
considered. First, no corresponding voice files were collected in the project. Thus, there
was no potential for inter-coder reliability and confirming the error codes assigned by
individual MTs. Since this work was done during the normal course of a workday, and
all work was subject to the normal internal QA review process within each
organization, there is no expectation of bias in the scoring. However, this cannot be
independently verified by the researchers without the potential loss of patient
confidentiality through the sharing of patient dictation and medical records. Future
research would benefit from a comparison between the scoring sheets and actual
dictation files.
Another limitation is that this study only examined the presence of errors in the
dictation-transcription model. Given the increasing number of methods by which
records are created, more comparison needs to be done to determine the presence of
errors in other methods. It is not known whether one production approach produces
more errors than another. This was not the focus of this study, but nonetheless is an
important consideration given the findings presented here. We do know that errors do
occur, as previous research has found that “[t]he vast majority of reviewed records
contained physician errors” (Lloyd and Rissing, 1985). It is difficult to compare error
rates across studies because of potentially divergent definitions of what constitutes an
error. Without a uniform definition, such comparisons may not be possible. What these
various studies do show is that there are elements in records that can be deemed to be
erroneous and thus degrade the quality and usefulness of these records.
Finally, given that this project was done with the cooperation of MTSOs and
transcription industry organizations, there is the potential to see the study as biased or
advocating for the transcription industry. While our research has identified ways in
which MTSOs and medical transcription can add value to the production of medical
records, it is not the aim of the paper to say that transcription is the only suitable option
for medical record production. Our central point is a simple, yet often overlooked, one:
errors occur in physician dictation. Despite the fact that physicians do make errors
when creating records, there is a growing sentiment that physicians should be entrusted to do their
own editing. This paper, rather than advocating for a particular industry or approach,
simply shows how QA typically occurs in a particular documentation production
process. Our key recommendation from this study is that as the QA function is
removed through the implementation of new technologies, more attention needs to be
paid to the potential impacts of this decision on the quality of the documentation
produced.
Note
1. Six Sigma, originally developed by Motorola, seeks to improve process performance
through continuous improvement techniques. A standard Six Sigma process is expected
to produce at least 99.99966 percent of its output free of defects, i.e. only 3.4
defects/errors per million are allowed.
References
Ash, J.S. and Bates, D.W. (2005), “Factors and forces affecting EHR system adoption”, Journal of
the American Medical Informatics Association, Vol. 12, pp. 8-12.
Campbell, E.M., Sittig, D.F., Ash, J.S., Guappone, K.P. and Dykstra, R.H. (2006), “Types of
unintended consequences related to computerized provider order entry”, Journal of the
American Medical Informatics Association, Vol. 13, pp. 547-556.
Cao, H., Stetson, P. and Hripcsak, G. (2003), “Assessing explicit error reporting in the narrative
electronic medical record using keyword searching”, paper presented at American Medical
Informatics Association, November 8-12, Washington, DC.
Chand, D.R. and David, G.C. (2009), “Unpacking CMMI-SVC for medical transcription services”,
paper presented at: SIG-SVC Workshop; Pre-2009 ICIS, December 12, Phoenix, AZ.
David, G.C., Garcia, A.C., Rawls, A.W. and Chand, D.R. (2009), “Listening to what is said –
transcribing what is heard: the impact of speech recognition technology on the practice of
medical transcription”, Sociology of Health and Illness, Vol. 31, pp. 924-938.
Fernandes, L. (2009), “It’s time to enter the age of interoperability”, For The Record, Vol. 7, pp. 8-9.
Furnham, A. (1986), “Response bias, social desirability and dissimulation”, Personality and
Individual Differences, Vol. 7, pp. 385-400.
Garcia, A.C., David, G.C. and Chand, D.R. (2010), “Understanding the work of medical
transcriptionists in the production of medical records”, Health Informatics Journal, Vol. 16,
pp. 87-100.
Goyal, N. (2010), “Using Six Sigma to reduce medical transcription errors: an iSixSigma case
study”, available at:
Hartzband, P. and Groopman, J. (2008), “Off the record-avoiding the pitfalls of going electronic”,
New England Journal of Medicine, Vol. 358, pp. 1656-1658.
Hieb, B.R. (2007), The Evolving Model of Clinical Dictation and Transcription, Gartner, Stamford, CT.
Hirschtick, R.E. (2006), “Copy-and-paste”, Journal of the American Medical
Association, Vol. 295, pp. 2335-2336.
Lin, K. (2010), “Electronic medical records: no cure-all for medical errors”, available at: http://
medical-records-no-cure-all-for-medical-errors (accessed 23 December 2010).
Lloyd, S.S. and Rissing, P. (1985), “Physician and coding errors in patient records”, Journal of the
American Medical Association, Vol. 254, pp. 1330-1336.
Martin, N. and Wall, P. (2011), “Behind the scenes: the business side of medical records”,
in Szymanski, M. and Whalen, J. (Eds), Making Work Visible: Ethnographically Grounded
Case Studies of Work Practice, Cambridge University Press, Cambridge, pp. 147-160.
O’Donnell, H.C., Kaushal, R., Barrón, Y., Callahan, M.A., Adelman, R.D. and Siegler, E.L. (2009),
“Attitudes towards copy and pasting in electronic note writing”, Journal of General
Internal Medicine, Vol. 24, pp. 63-68.
Rees, C. (1981), “Records and hospital routines”, in Atkinson, P. and Heath, C. (Eds), Medical
Work: Realities and Routines, Gower, Farnborough, pp. 55-70.
Shields, A.E., Shin, P., Leu, M.G., Levy, D.E., Betancourt, R.M., Hawkins, D. and Proser, M. (2007),
“Adoption of health information technology in community health centers: results of a
national survey”, Health Affairs, Vol. 5, pp. 1373-1383.
Sidorov, J. (2006), “It ain’t necessarily so: the electronic health record and the unlikely prospect of
reducing health care costs”, Health Affairs, Vol. 4, pp. 1079-1085.
Siegler, E.L. and Adelman, R. (2009), “Copy and paste: a remediable hazard of electronic health
records”, American Journal of Medicine, Vol. 122, pp. 495-496.
Smith, R. (2010), “One in ten electronic medical records contain errors: doctors”, The Telegraph,
available at:
medical-records-contain-errors-doctors.html (accessed 17 July 2010).
Tashakkori, A. and Teddlie, C. (Eds) (2003), Handbook of Mixed Methods in Social and
Behavioral Research, Sage, Thousand Oaks, CA.
Webb, E.J., Campbell, D.T., Schwartz, R.D. and Sechrest, L. (2000), Unobtrusive Measures,
Sage Publications, Thousand Oaks, CA.
Weir, C. and Nebeker, J.R. (2007), “Critical issues in an electronic documentation system”, paper
presented at American Medical Informatics Association, November 10-13, Chicago, IL.
Corresponding author
Gary C. David can be contacted at: