Article

Assessing pragmatic competence in oral proficiency interviews at the C1 level with the new CEFR descriptors


Abstract

The study of pragmatic competence has gained increasing importance within second language assessment over the last three decades. However, research on it in L2 language testing remains scarce. The aim of this paper is to research the extent to which pragmatic competence as defined by the Common European Framework of Reference for Languages (CEFR) has been accommodated in the task descriptions and rating scales of two of the most popular Oral Proficiency Interviews (OPIs) at a C1 level: Cambridge’s Certificate in Advanced English (CAE) and Trinity’s Integrated Skills in English (ISE) III. To carry out this research, OPI tests are first defined, highlighting their differences from L2 pragmatic tests. After pragmatic competence in the CEFR is examined, focusing on the updates in the new descriptors, CAE and ISE III formats, structure and task characteristics are compared, showing that, while the formats and some characteristics differ, the structures and task types are comparable. Finally, we systematically analyse CEFR pragmatic competence in the task skills and rating scale descriptors of both OPIs. The findings show that the task descriptions incorporate mostly aspects of discourse and design competence. Additionally, each OPI is found to prioritise different aspects of pragmatic competence within its rating scale, with CAE focusing mostly on discourse competence and fluency, and ISE III on functional competence. Our study shows that the tests fail to fully accommodate all aspects of pragmatic competence in the task skills and rating scales, although the aspects they do incorporate follow the CEFR descriptors on pragmatic competence. It also reveals a mismatch between the task competences being tested and the rating scale. To conclude, some research lines are proposed.


... Eizaga-Rebollar and Heras-Ramirez (2020) research the extent to which pragmatic competence as defined by the CEFR has been accommodated in the task descriptions and rating scales of two of the most popular Oral Proficiency Interviews at a C1 level [9]. The findings show that the task descriptions incorporate mostly aspects of discourse and design competence. ...
Article
The aim of this study is to analyse 8th grade students' English speaking levels in the 2020–2021 academic year according to the speaking criteria of the Common European Framework of Reference for Languages (CEFR). This study also aims to identify the speaking difficulties experienced by the students and the possible problems related to carrying out tasks that require sharing information on familiar topics and activities, such as describing experiences, events, hopes and ambitions, understanding what a discussion is about, and being able to keep the discussion going successfully. Being able to speak in English or any target language is a vital skill and can be difficult at times. For the speaking analysis, a questionnaire was administered to 32 8th grade students of the N20 R. Isetov school in Turkistan, Kazakhstan, in the 2020–2021 academic year. The results of the questionnaire were assessed statistically. The findings indicate that students consider themselves most competent in the A1 speaking criteria of the CEFR. In other words, as the students reach higher levels of competencies (A1, A2, B1), the means reflecting their self-assessed speaking performance tend to go down.
Article
One of the historical foundations of the Mängilik El (Eternal Nation) idea is Alashtanu (Alash studies), which has become our national ideology. The notion of Alashtanu is intertwined with the categories of life studies and human studies, and the concept is measured against moral values within the philosophy of worldview. The article examines how, on the basis of Alashtanu, civic consciousness is stirred and depicted in prose works. Alashtanu, the traditional path of life studies in literature, finds artistic expression in the short prose genre. The ways in which the idea of Alashtanu is represented in literary texts are analysed comprehensively through the study of the contemporary short genre.
Book
Full-text available
Despite the spread of the term "pragmatics" over the past two decades in Arabic educational literature, its essence remains obscure to many of those concerned with teaching Arabic to speakers of other languages. And although pragmatic competence is a principal component of most theories of language proficiency, the curricula and teaching materials that prepare learners to acquire pragmatic competence are limited, if not rare. Hence the idea of this research book, which aims to move "pragmatics" from the theoretical domain to the practical one. To that end, the book comprises three chapters, a conclusion, and two appendices. Chapter One presents the research plan. Chapter Two presents the theoretical framework of the research in three sections: the first defines pragmatics linguistically and terminologically; the second defines the dimensions of pragmatics; the third defines pragmatic competence, focusing on its place in the Common European Framework of Reference for Languages. Chapter Three covers the research procedures and results. The conclusion summarizes the research and offers some practical and research suggestions. Appended to the book is a list of the CEFR descriptors of pragmatic competence and the indicators for them that resulted from this research. This work is expected to be useful in designing curricula, teaching materials, and language tests for speakers of languages other than Arabic.
Article
Full-text available
Paired speaking tasks are now commonly used in both pedagogic and assessment contexts, as they elicit a wide range of interactional skills. The current study aims to offer an investigation of the interaction co-constructed by learners at different proficiency levels who are engaged in a paired speaking test, and to provide insights into the conceptualization of interactional competence and key distinguishing interactional features across levels. The findings suggest that our understanding of interactional competence both in the classroom and as a construct underlying tests and assessment scales needs to broaden to include not just interactional features such as topic development organization, but also listener support strategies and turn-taking management. A more comprehensive understanding of interactional competence has the potential to complement available descriptions of interactional skills in assessment scales of speaking and aid learners and teachers in communicative classrooms.
Article
Full-text available
This paper discusses an advanced language assessment tool designed to assess university students' linguistic competence in L2. The tool serves the final exam of a course offered to second-year students of English language and literature. Of the three parts of the exam, (1) reading, (2) language awareness, and (3) writing, the second part (language awareness) explicitly addresses students' pragmatic competence. In the light of our work, assessing students' pragmatic competence is shown to involve the identification of certain levels of competence ensuing from the interpretive routes learners follow in their attempt to (a) interpret the communicator's intention, (b) identify the linguistic devices that lead them to this interpretation, and (c) explicitly verbalize the link between linguistic devices and interpretation. The suggested ranking of levels draws on data from statistical analysis of 190 final exam scripts. The proposed assessment of pragmatic competence manifested when dealing with written discourse is considered to be an innovative testing tool, because it is a discourse-based approach to testing pragmatic competence, where both explicit and implicit meanings are retrieved by drawing on a naturally-occurring wide range of lexical and grammatical features. More importantly, the proposed assessment constitutes an accurate testing tool because it allows for levels of pragmatic, hence linguistic, competence to naturally unveil in an authentic reading context requesting the reader's spontaneous reaction and contribution to the process of meaning making in L2.
Chapter
Full-text available
In this chapter I describe a theoretical rationale for and, where possible, empirical research into criteria to be adopted when progressively increasing the cognitive demands of second language (L2) tasks. These criteria, I argue, provide a basis for decisions about sequencing tasks in a task-based syllabus as well as a framework for studying the effects of increasing L2 task complexity on production, comprehension and learning. I distinguish task complexity (the task dependent and proactively manipulable cognitive demands of tasks) from task difficulty (dependent on learner factors such as aptitude, confidence, motivation, etc.) and task conditions (the interactive demands of tasks), arguing that these influences on task performance and learning are different in kind, and have not been sufficiently distinguished in previous approaches to conceptualizing the options in, and consequences of, sequencing tasks from the syllabus designer's perspective. My focus in this chapter is on the issue of task complexity, which I argue should be the sole basis of prospective sequencing decisions since most learner factors implicated in decisions about task difficulty can only be diagnosed in situ and in process, so cannot be anticipated in advance of implementation of a syllabus and therefore can be of no use to the prospective materials and syllabus designer. Those learner factors which can be diagnosed in advance of syllabus implementation (e.g., aptitude and cognitive style) have not to date been shown to have stable effects on task performance at the different levels of complexity proposed here.
Article
Full-text available
The oral fluency level of an L2 speaker is often used as a measure in assessing language proficiency. The present study reports on four experiments investigating the contributions of three fluency aspects (pauses, speed and repairs) to perceived fluency. In Experiment 1 untrained raters evaluated the oral fluency of L2 Dutch speakers. Using specific acoustic measures of pause, speed and repair phenomena, linear regression analyses revealed that pause and speed measures best predicted the subjective fluency ratings, and that repair measures contributed only very little. A second research question sought to account for these results by investigating perceptual sensitivity to acoustic pause, speed and repair phenomena, possibly accounting for the results from Experiment 1. In Experiments 2–4 three new groups of untrained raters rated the same L2 speech materials from Experiment 1 on the use of pauses, speed and repairs. A comparison of the results from perceptual sensitivity (Experiments 2–4) with fluency perception (Experiment 1) showed that perceptual sensitivity alone could not account for the contributions of the three aspects to perceived fluency. We conclude that listeners weigh the importance of the perceived aspects of fluency to come to an overall judgment.
Book
Speaking is a central yet complex area of language acquisition. The assessment of this crucial skill is equally complex. This book takes teachers and language testers through the research on the assessment of speaking as well as through current tests of speaking. The book then guides language testers through the stages of test tasks, rating practices and design.
Chapter
Intercultural Pragmatics is concerned with the way the language system is put to use in social encounters between human beings who have different first languages, communicate in a common language, and, usually, represent different cultures (cf. Kecskes 2013). The communicative process in these encounters is synergistic in the sense that existing pragmatic norms and emerging, co-constructed features are both present in them to varying degrees. Intercultural Pragmatics represents a socio-cognitive perspective in which individual prior experience and actual social situational experience are equally important in meaning construction and comprehension.
Article
This article on interactional competence provides an overview of the historical influences that have shaped theoretical conceptualisations of this construct as it relates to spoken language use, leading to the current view of it as involving both cognitive and social dimensions, and then describes its operationalisation in tests and assessment scales, and the challenges associated with this activity. Looking into the future, issues that need to be dealt with include developing a fuller representation of the construct and of more contextually relevant assessments, deciding upon additional assessment criteria and the appropriate interpretation thereof, and determining how technology can be applied in assessment practice and the extent to which technology fundamentally changes the construct itself. These all have implications for testing if it is to be relevant and fit for purpose.
Article
Oral Proficiency Interviews (OPIs) are widely used to measure speaking ability in a second or foreign language. The Michigan English Language Assessment Battery (MELAB) Speaking Test is an OPI used for academic and professional purposes around the world. However, little research on this or other OPIs has quantitatively compared test takers’ speech with the target domains of the test. Such a comparison could be used as evidence for the validity argument (Kane, 2013) for the MELAB. In this study we use corpus-based register analysis and Multi-Dimensional (MD) analysis, investigating a large number of linguistic features to determine the extent to which the language of the MELAB is similar to conversational, academic, and professional spoken discourse, specifically nurse–patient interactions, since many of the test takers are preparing for nursing licensure. The results show that the MELAB has similarities with conversation in its use of stance, and is closely aligned with academic registers and nurse–patient interactions in the use of language for informational exchange, which provides support for the validity argument of the MELAB. However, the use of narrative features and discussion of future possibilities and suggestions are important aspects of both conversation and academic and professional registers but may be harder to evaluate through the MELAB and other similar OPIs.
Chapter
Pragmatics is a key domain in language assessment. For more than two decades, advances have been made in conceptualizing the domain, developing assessment instruments, and applying current methods of data treatment to the analysis of test performance. This book, the first edited volume on the topic, brings together empirical studies on a range of well-established and innovative strategies for the assessment of pragmatic ability in a second language. In this introductory chapter, we will first offer an overview of key concepts, situate theoretical models of pragmatic competence within the larger frameworks of communicative language ability and interactional competence, and consider the relationship between pragmatics and language testing. We will then introduce the chapters, organized into two Parts. The chapters assembled in Part I investigate assessment instruments and practices for a variety of assessment constructs, purposes, and contexts, guided by different theoretical outlooks on pragmatics. Part II comprises studies of interaction in different forms of oral proficiency interview, conducted from the perspective of conversation analysis.
Chapter
The ACTFL Proficiency Guidelines — Speaking and the oral proficiency interview (OPI) have had a long-standing impact on foreign language pedagogies in the United States. The Guidelines have been widely adopted as a curriculum benchmark in foreign language programs in colleges and secondary schools, and the ACTFL OPI, which measures foreign language speaking proficiency based on the criteria described in the Guidelines, has been used for various assessment purposes, such as program evaluation and accreditation, entrance and exit requirement, placement test, and teacher certification (Chambless, 2012; Houston, 2005; Kagan & Friedman, 2003; Kondo-Brown, 2012; Rifkin, 2003; Wetzel & Watanabe, 1998). Today, the Guidelines and the OPI have considerable influence not only on how teachers and professionals evaluate L2 speakers’ oral proficiency, but also on how foreign language programs are developed, implemented, evaluated, and revised. As such, it is the responsibility of ACTFL and the users of the Guidelines to examine and assure the adequacy of level descriptions in the Guidelines and the quality of the OPI.
Chapter
This study investigates the extent to which oral proficiency interviews (OPIs) may be described as interactional varieties, which may then be compared with interaction in university and L2 classroom settings. The three varieties are related in terms of preparing students for a next stage in an educational process. L2 classroom interaction may prepare students for examinations such as OPIs, which provide access to universities, which in turn may prepare students for the world of work. Therefore, it is legitimate to examine the varieties in terms of whether the interactional experiences of students align or not in the different settings. OPIs in general are intended to assess the language proficiency of non-native speakers and to predict their ability to communicate in future encounters.
Chapter
In this chapter, the potential utility and limitations of teacher-based assessment are explored in the Japanese-as-a-foreign-language classroom context. Teacher-based assessment constitutes “a more teacher-mediated, context-based, classroom-embedded assessment practice,” which is situated in opposition to traditional formal assessment that is often externally set and administered (Davison & Leung, 2009, p. 395). Teacher-based assessment is sometimes termed alternative assessment, classroom(-based) assessment, or authentic assessment (e.g., Brown & Hudson, 1998; O’Malley & Valdes-Pierce, 1996; Rea-Dickins, 2008). Despite the rigorous efforts to measure learners’ pragmatic competence (e.g., Ahn, 2005; Brown, 2001; Enochs & Yoshitake-Strain, 1999; Hartford & Bardovi-Harlig, 1992; Hudson 2001; Hudson, Detmer, & Brown, 1992, 1995; Itomitsu, 2009; Liu, 2006; Rintell & Mitchell, 1989; Rose, 1994; Roever, 2005; Yamashita, 1996), their application to everyday classrooms long remained underdeveloped (Hudson, 2001). The assessment of learners’ pragmatic competence in classroom contexts has only begun to be explored recently even though assessment is an integral part of instruction. From a teachers’ perspective, we need to know how to implement effective assessment for L2 pragmatics in the classroom; the same concern is true for researchers and teacher educators if pragmatics is to be promoted in L2 instruction and teacher development.
Article
This study investigates the validity of assessing L2 pragmatics in interaction using mixed methods, focusing on the evaluation inference. Open role-plays that are meaningful and relevant to the stakeholders in an English for Academic Purposes context were developed for classroom assessment. For meaningful score interpretations and accurate evaluations of interaction-involved pragmatic performances, interaction-sensitive data-driven rating criteria were developed, based on the qualitative analyses of examinees’ role-play performances. The conversation analysis performed on the data revealed various pragmatic and interactional features indicative of differing levels of pragmatic competence in interaction. The FACETS analysis indicated that the role-plays stably differentiated between the varying degrees of the 102 examinees’ pragmatic abilities. The raters showed internal consistency despite their differing degrees of severity. Stable fit statistics and distinct difficulties were reported within each of the interaction-sensitive rating criteria. The findings served as backing for the evaluation inference in the validity argument. Finally, implications of the findings in operationalizing interaction-involved language performances and developing rating criteria are discussed.
Article
Testing of second language pragmatic competence is an underexplored but growing area of second language assessment. Tests have focused on assessing learners' sociopragmatic and pragmalinguistic abilities but the speech act framework informing most current productive testing instruments in interlanguage pragmatics has been criticized for under-representing the construct. In particular, the assessment of learners' ability to produce extended monologic and dialogic discourse is a missing component in existing assessments. This paper reviews existing tests and argues for a discursive re-orientation of pragmatics tests. Suggestions for tasks and scoring approaches to assess discursive abilities while maintaining practicality are provided, and the problematicity of native speaker benchmarking is discussed.
Article
While the viva voce (oral) examination has always been used in content-based educational assessment (Latham 1877: 132), the assessment of second language (L2) speaking in performance tests is relatively recent. The impetus for the growth in testing speaking during the 19th and 20th centuries is twofold. Firstly, in educational settings the development of rating scales was driven by the need to improve achievement in public schools, and to communicate that improvement to the outside world. Chadwick (1864, see timeline) implies that the rating scales first devised in the 1830s served two purposes: providing information to the classroom teacher on learner progress for formative use, and generating data for school accountability. From the earliest days, such data was used for parents to select schools for their children in order to 'maximize the benefit of their investment' (Chadwick 1858). Secondly, in military settings it was imperative to be able to predict which soldiers were able to undertake tasks in the field without risk to themselves or other personnel (Kaulfers 1944, see timeline). Many of the key developments in speaking test design and rating scales are linked to military needs.
Article
The paper examines pragmatic competence re-defined in terms of an open-ended array of pragmatically inferred implicatures rather than a fixed set of routines (e.g. speech acts) or isolated implicatures. The data draws on L2 students of English Language and Literature, University of Athens, who are exposed to and assessed by pragmatic awareness and meta-pragmatic awareness types of task. Longitudinal evidence is used to assess the development of pragmatic competence in students first exposed to a pragmatic awareness task in fall 2009 and re-assessed in spring 2011 after explicit instruction. Cross-sectional data from a pragmatic test tapping into different aspects of pragmatic competence, namely (a) speech acts, (b) implicatures in a constrained linguistic context, (c) pragmatic inference in a global context, show differential results on the types of pragmatic ability assessed for two groups of learners. Performance achievement in the pragmatic trial under (c) is attributed to preceding explicit instruction in the case of group 1, and to instruction offered 12 months before the pragmatic trial in the case of group 2. Short-term and long-term effects of explicit intervention are confirmed.
Article
The paper investigates how ELF speakers improve their pragmatic competence by using the discourse markers yes/yeah, so and okay as expressions of (inter)subjectivity and connectivity. The data discussed in this paper stems from university consultation hours, and it is part of a larger project conducted at the University of Hamburg on multilingualism and multiculturalism in the international university. Findings of the case studies described in this paper suggest that speakers of English as a lingua franca in academic consultation hours tend to strategically re-interpret certain discourse markers in order to help themselves improve their pragmatic competence and thus function smoothly in the flow of talk.
Article
Despite increasing interest in interlanguage pragmatics research, research on assessment of this crucial area of second language competence still lags behind assessment of other aspects of learners’ developing second language (L2) competence. This study describes the development and validation of a 36-item web-based test of ESL pragmalinguistics, measuring learners’ offline knowledge of implicatures and routines with multiple-choice questions, and their knowledge of speech acts with discourse completion tests. The test was delivered online to 267 ESL and EFL learners, ranging in proficiency from beginner to advanced. Evidence for construct validity was collected through correlational analyses and comparisons between groups. The effect of browser familiarity was found to be negligible, and learners generally performed as previous research would suggest: their knowledge of speech acts increased with proficiency, as did their knowledge of implicature. Their knowledge of routines, however, was strongly dependent on L2 exposure. Correlations between the sections and factor analysis confirmed that the routines, implicatures, and speech act sections are related but that each has some unique variance. The test was sufficiently reliable and practical, taking an hour to administer and little time to score. Limitations and future research directions are discussed.
Article
Part I. A Theory of Speech Acts: 1. Methods and scope 2. Expressions, meaning and speech acts 3. The structure of illocutionary acts 4. Reference as a speech act 5. Predication Part II. Some Applications of the Theory: 6. Three fallacies in contemporary philosophy 7. Problems of reference 8. Deriving 'ought' from 'is' Index.
Article
Whilst claims to validity for conversational oral interviews as measures of nontest conversational skills are based largely on the unpredictable or impromptu nature of the test interaction, ironically this very feature is also likely to lead to a lack of standardisation across interviews, and hence potential unfairness. This article addresses the question of variation amongst interviewers in the ways they elicit demonstrations of communicative ability and the impact of this variation on candidate performance and, hence, raters’ perceptions of candidate ability. Through a discourse analysis of two interviews involving the same candidate with two different interviewers, it illustrates how intimately the interviewer is implicated in the construction of candidate proficiency. The interviewers differed with respect to the ways in which they structured sequences of topical talk, their questioning techniques, and the type of feedback they provided. An analysis of verbal reports produced by some of the raters confirmed that these differences resulted in different impressions of the candidate’s ability: in one interview the candidate was considered to be more ‘effective’ and ‘willing’ as a communicator than in the other. The paper concludes with a discussion of the implications for rater training and test design.
Article
This study, framed within sociocultural theory, examines the interaction of adult ESL test-takers in two tests of oral proficiency: one in which they interacted with an examiner (the individual format) and one in which they interacted with another student (the paired format). The data for the eight pairs in this study were drawn from a larger study comparing the two test formats in the context of high-stakes exit testing from an Academic Preparation Program at a large Canadian university. All of the test-takers participated in both test formats involving a discussion with comparable speaking prompts. The findings from the quantitative analyses show that overall the test-takers performed better in the paired format in that their scores were on average higher than when they interacted with an examiner. Qualitative analysis of the test-takers' speaking indicates that the differences in performance in the two test formats were more marked than the scores suggest. When test-takers interacted with other students in the paired test, the interaction was much more complex and revealed the co-construction of a more linguistically demanding performance than did the interaction between examiners and students. The paired testing format resulted in more interaction, negotiation of meaning, consideration of the interlocutor and more complex output. Among the implications for test theory and practice is the need to account for the joint construction of performance in a speaking test in both construct definitions and rating scales.
Article
Unlike other areas of second language study, which are primarily concerned with acquisitional patterns of interlanguage knowledge over time, most studies in interlanguage pragmatics have focused on second language use rather than second language learning. The aim of this paper is to profile interlanguage pragmatics as an area of inquiry in second language acquisition research, by reviewing existing studies with a focus on learning, examining research findings in interlanguage pragmatics that shed light on some basic questions in SLA, exploring cognitive and social-psychological theories that might offer explanations of different aspects of pragmatic development, and proposing a research agenda for the study of interlanguage pragmatics with a developmental perspective that will tie it more closely to other areas of SLA.
Article
A pervasive feature of oral proficiency interviewing is the interviewer's management of candidate comprehension of questions and tasks used for assessment. Interaction troubles ensuing from interview candidates misconstruing or mishearing questions may impede the process of gathering evidence of candidate proficiency and arriving at a justifiable rating of proficiency. In order to forestall potential comprehension problems, and in some instances, as reactions to specific troubles, many OPI question turns include multiple questions on the same topic of talk. The situated relevance of both reactive and proactive multiple questions is the focus of the present paper. The data for the analysis of multiple questions come from a corpus of more than 100 instances of oral proficiency interview questions and tasks. Attention is placed on the interactional relevance of multiple questions with the goal of identifying features of interaction that motivate both reactive (or vertical) repeated questions and proactive (or horizontal) question repetitions. Vertical multiple questions are shown to be sequenced according to immediate troubles in candidate uptake of prior question content, corresponding to the phenomena of repair, missing rejoinders, or problematic answers. Horizontal multiple questions are in contrast situated in 'fragile' environments where the probability of mishearing may be large.
Article
This study aims to discuss role-play activity in oral proficiency interviews (OPIs) in terms of its construct validity; that is, whether it correlates with what it is supposed to measure. The data for the analysis were obtained from 71 role-play activities conducted during an OPI. The analysis is based on conversation analytic (CA) methodology and invokes the analytic frameworks of interactive footing and interactional competencies. Conversational analyses performed on the data revealed that candidates executed not only the role-play instructions but also the interviewers’ explicit and implicit requirements of the next desirable action. In doing so, candidates employed and displayed their interactional competencies in role-play interactions. The role-play activity in the OPI being managed by the interviewer created an asymmetrical relationship between the interviewers and the candidates in terms of speaking rights (i.e., turn-taking and topical organization). Nevertheless, competencies that candidates displayed in performing a role-play activity did not seem different from those employed in ordinary conversations. Thus, role-play activity in OPIs, if recognized as an interactional phenomenon co-constructed by participants’ display of their turn-by-turn practical evaluation of each other's actions, seems to be a valid instrument for assessing the candidates’ performance of the given tasks.
Article
The oral proficiency interview (OPI) is often used in the domain of business English as a criterion for access to overseas assignments and job promotions. Little, however, is known about variation in interaction style across interviewers, which motivates this study's contrastive analysis of two oral proficiency interviews used for gatekeeping purposes. The two interviews were conducted with the same candidate three months apart, and provide a rare glimpse of contrastive interviewer strategies with a single candidate. The analysis examines evidence that the candidate backslid from an earlier successful interview to a categorically lower level of performance in the second interview. Analyses of the candidate's differential establishment of footing in the interview, misalignments to the tone of the interviewer, and differential tendencies of the two interviewers to accommodate to the candidate are featured. Interviewer differences in proclivity to backchannel indicate how facilitative accommodation in scaffolding the interaction may influence the initial rating. The micro-analyses of interview discourse suggest that differences in interviewer style potentially lead to divergent outcomes in the two interviews. In spite of considerable variation in the interaction styles of the two interviewers, consistency in the outcomes of five repeated second ratings of the candidate's performance suggests a rating system robust against even large differences in interviewer style.
Assessment commentary and marks
Cambridge Assessment English (CAE). 2012. Assessment commentary and marks. Commentary on CAE Speaking test: Meritxell and Stefan. Available at: http://readeralexey.narod.ru/ENGLISH/marks_and_commentary_merixtell_stefan.pdf (accessed
From communicative competence to communicative language pedagogy
  • Michael Canale
Canale, Michael. 1983. From communicative competence to communicative language pedagogy. In Jack C. Richards & Richard W. Schmidt (eds.). Language and Communication, 1-27. London and New York: Routledge.
An investigation into assessing ESL learners’ pragmatic competence at B2-C2 levels
  • Edit Ficzere
Ficzere, Edit. 2019. An investigation into assessing ESL learners' pragmatic competence at B2-C2 levels. PhD thesis. Available at: https://uobrep.openrepository.com/handle/10547/623986 (accessed 2 April, 2020).
Developing pragmatic competence in English as a lingua franca
  • Juliane House
House, Juliane. 2002. Developing pragmatic competence in English as a lingua franca. In Karlfried Knapp & Christiane Meierkord (eds.), Lingua Franca Communication, 245-267. Frankfurt am Main: Peter Lang.
Measuring Pragmatic Knowledge: Issues of Construct Underrepresentation or Labeling?
  • Jianda Liu
Liu, Jianda. 2006. Measuring Pragmatic Knowledge: Issues of Construct Underrepresentation or Labeling? Frankfurt am Main: Peter Lang.
The Equivalence of Direct and Semi-Direct Speaking Tests
  • Kieran O'Loughlin
O'Loughlin, Kieran. 2001. The Equivalence of Direct and Semi-Direct Speaking Tests. Cambridge: Cambridge University Press.
Interlanguage pragmatics: A historical sketch and future directions
  • Naoko Taguchi
Taguchi, Naoko. 2017. Interlanguage pragmatics: A historical sketch and future directions. In Anne Barron, Yueguo Gu & Gerard Steen (eds.), The Routledge Handbook of Pragmatics, 153-167. Oxford/New York: Routledge.
Videos - ISE III (C1)
  • Trinity College London
Trinity College London. 2020. Videos - ISE III (C1). Available at: https://www.trinitycollege.com/qualifications/english-language/ISE/ISE-III-C1-resources/ISE-III-C1-videos (accessed 15 June, 2020).
Handbook of Second Language Assessment, 203-218
  • Nivja H. De Jong
Interactional Competence and L2 Pragmatics. The Routledge Handbook of Second Language Acquisition and Pragmatics
  • Richard Frederick Young