CALA 2017 Winter Event
Evaluation of the French language: Innovations and discoveries
Hosts: Canadian Association of Language Assessment and the Language Assessment Group (LARG) of the Official Languages and Bilingualism Institute (OLBI) of the University of Ottawa
Date: Friday, February 24, 2017, 1–5 p.m.
Location: Room 5028 at 120 University (Social Sciences Building – FSS) of the University of Ottawa (http://www.uottawa.ca/maps/CampusMap.pdf).
Free (registration not required), open to all.
- 1:00–1:10 p.m. – Welcome remarks
- 1:10–1:40 p.m. – Adelehsadat Mahdavi (Université Laval), PowerPoint presentation
The role of vocabulary knowledge, syntactic awareness, and metacognitive awareness in the reading comprehension of academic texts in French as a second language among Iranian students at Université Laval
- 1:45–2:15 p.m. – Angel Arias, Michel Laurier, and Jean-Guy Blais (Université de Montréal and University of Ottawa)
Validating the listening comprehension component of the TCF using confirmatory factor analysis
- 2:20–2:50 p.m. – Zahra Mahdavi (Université Laval), PowerPoint presentation
Analysis of the language needs of international teaching assistants pursuing graduate studies in science and engineering at francophone universities: Implications for assessing their language skills
- 2:50–3:20 p.m. – Coffee break
- 3:20–3:50 p.m. – Michel Laurier (University of Ottawa)
From a proficiency scale to the development of instruments for assessing the language proficiency of adult immigrants in Quebec
- 4:00–4:40 p.m. – Invited speaker: Ahlem Ammar (Université de Montréal)
Written corrective feedback: Is it simply a question of techniques?
- 4:40–5:00 p.m. – Discussion
2017 Fall Event
LARG, along with Carleton University and the Canadian Association of Language Assessment, was pleased to sponsor a research talk by Carol A. Chapelle (Iowa State University) and John Read (University of Auckland), "After Twenty Years: Vocabulary Assessment in Applied Linguistics," at Carleton University on October 2, 2017.
Workshop by Amelia Hope and Carla Hall on January 29, 2016 at 9:30 am
Enriching Speaking Assessment: Interactional Competence in Test Rubrics?
Interactional competence (IC) is widely recognized as an important component of speaking proficiency, but has proven an elusive criterion both to define and to measure in speaking tests. This workshop addresses how to incorporate the measurement of IC into speaking test rubrics. After a brief overview of the definition of IC and the challenges in measuring it, participants will review IC in existing checklists and grids, including some developed by the presenters. Participants will then observe test-taker performance from recorded speaking tests in an effort to describe IC abilities at different levels of proficiency. Drawing on these observations and existing rubrics, participants will be guided in producing descriptors that can document observable IC behavior. Finally, implications for task design will be considered.
Amelia Kreitzer Hope is Head of Language Testing Services at the Official Languages and Bilingualism Institute at the University of Ottawa.
Carla Hall is a Language Teacher at the Official Languages and Bilingualism Institute at the University of Ottawa.
Talk by Liying Cheng on October 16, 2015
Teachers’ Grading Decision-Making: Validating the Interface between Teaching and Assessment
Grading is one of the most challenging aspects of assessment for teachers, as it is a complex decision-making process that requires them to make professional judgments. Various factors shape this process, such as the grade level at which teachers teach (Randall & Engelhard, 2009), the assessment training they receive (Brookhart, 1993), and the subject matter they teach (McMillan, 2001). Further, teachers tend to consider confounding factors such as effort, work habits, and achievement when assigning grades (Guskey, 2011; Yesbeck, 2011). This is at odds with measurement recommendations that grades should be based solely on students' academic achievement. Brookhart (1993, 2004) suggests that this discrepancy is a symptom of a validity problem that can best be framed by Messick's (1989) framework. Such framing entails exploring teachers' interpretations of what a grade represents, how they think about grade use and consequences, and what values they place on grades. Despite the importance of grading in the interface between assessment and teaching/learning, only a few studies on grading have been conducted in language assessment, and even fewer within the Asian context, where non-achievement factors are valued (Cheng & Wang, 2007). This study employs a survey design with mixed-mode analysis to address this research gap. A questionnaire survey was conducted with 350 Chinese English language teachers. First, the questionnaire measures the extent to which teachers consider different factors and use different assessment methods to determine grades. Second, it provides three grading scenarios to explore the meaning and values associated with grades assigned by the teachers. Finally, it gathers demographic data on the participants.
Together, these findings shed light on the validity of teachers' grading in contexts where non-achievement factors are valued and highlight the influence of social and educational values on teachers' grading decision-making within the Asian context.
Liying Cheng (程李颖), Ph.D., is Professor and Director of the Assessment and Evaluation Group (AEG) at the Faculty of Education, Queen's University. Her primary research interests are the impact of large-scale testing on instruction, the relationships between assessment and instruction, and the academic and professional acculturation of international and new immigrant students, workers, and professionals to Canada. She conducts the majority of her research within the context of teaching and learning English as a second/foreign language (including immersion and bilingual contexts). Since 2000, she has obtained research funding totalling more than 1.5 million Canadian dollars. In addition, she has given more than 170 conference presentations and has 120 publications in journals including Language Testing, Language Assessment Quarterly, Language Testing in Asia, Assessment in Education, and Assessment & Evaluation in Higher Education. Her recent books are Language Classroom Assessment (single-authored, TESOL English Language Teacher Development Series, 2013); English Language Assessment and the Chinese Learner (co-edited with A. Curtis, Taylor & Francis, 2010); Language Testing Reconsidered (co-edited with J. Fox et al., University of Ottawa Press, 2007); Changing Language Teaching through Language Testing (single-authored, Cambridge University Press, 2005); and Washback in Language Testing: Research Contexts and Methods (co-edited with Y. Watanabe and A. Curtis, Lawrence Erlbaum Associates, 2004).
LARG hosted the Canadian Association of Language Assessment (CALA) 2015 Symposium and Annual General Meeting.
Discussant: Janna Fox (Carleton)
Léonard P. Rivard & Ndeye R. Gueye (Université de Saint-Boniface, Canada): Summary Writing in Secondary and University Students: A Multi-Variable Comparative Analysis
Jake Stone, Keren Oded, & Jill Fu (Paragon Testing Enterprises): Scale Anchoring: An examination of the Correspondence between CELPIP-General Level Scores and Canadian Language Benchmarks
Monika Jezak (OLBI, University of Ottawa) & Élissa Beaulieu (Centre des Niveaux de compétence linguistique canadiens): Assessment based on the Canadian Language Benchmarks: Achievement test battery and "I am able to..." checklists
Joselyn Brooksbank, Irina Goundareva, Valerie Kolesova, Jessica McGregor, Mélissa Pesant & Beverly Baker (OLBI, University of Ottawa): Oral admissions testing for university entrance: Interactive functions and perceptions of anxiety
Event held April 24, 2015:
Evolution and practice in classroom-based assessment (formative assessment-FA, assessment for learning-AFL, learning-oriented assessment-LOA): Where are we now?
What comes to mind when the word assessment is mentioned for language classrooms? How is it to be interpreted? For many years, it has mainly been viewed as a tool to record student achievement through the types of items and tasks employed in traditional large-scale testing. In reality, however, much more happens in classrooms in terms of using assessment to support learning and inform teaching. With the growing awareness of assessment activities internal to the classroom and managed by teachers, classroom assessment has become an emerging paradigm of its own with a focus on learning and an evolving research agenda (Turner, 2012). This talk/workshop underscores this paradigm and concentrates on the local context of classrooms, where assessment has the potential to serve the learning process. Specifically, the focus is on L2 classrooms, where it has been claimed that assessment can serve as the “bridge” between teaching and learning (Colby-Kelly & Turner, 2007). This culture in L2 classrooms, where assessment is central, has been referred to as learning-oriented assessment (LOA) (Purpura 2004, 2009; and more recently Purpura and Turner, forthcoming).
Carolyn Turner is Associate Professor of Second Language Education in the Department of Integrated Studies in Education at McGill University, where she teaches assessment and research methods. Her research interests include language testing/assessment in educational settings and in healthcare contexts concerning access for linguistic minorities. With James Purpura, she is currently co-authoring the book “Learning-oriented L2 assessment.” A former President of the International Language Testing Association (ILTA) and an Associate Editor of Language Assessment Quarterly, her work appears in journals such as Language Testing, Language Assessment Quarterly, TESOL Quarterly, Canadian Modern Language Review, and Health Communication, and in chapters in edited collections concerning language assessment.