The evaluation of language programmes and the quality of instruction is relevant everywhere. To test the effectiveness of a language intervention programme, one needs to take a holistic approach. For a language intervention to be effective, the designer has to bring five components into harmony: policy prescription, curriculum, instruction, learning and assessment. When these are aligned, we have the golden pentagon of language intervention design. Where to begin?
The Code of Ethics of the International Language Testing Association (ILTA) is a guide for language testers on how to conduct their business in ways that are caring and compassionate, and at the same time deliberate and professional. It is complemented by locally formulated Codes of Practice. The Code of Ethics is already available in eleven languages.
A team of South African translators, Sanet Steyn and Gini Keyser, tasked by the Network of Expertise in Language Assessment (NExLA), did the initial translation of the Code of Ethics into Afrikaans. Colleen du Plessis, Albert Weideman, and language policy specialist Theo du Plessis then produced a further two drafts. The fourth draft of the Code is now being presented to the language testing community at large, and has been placed on the NExLA website for comment.
Returning to the still unresolved question of how best to conceptualize test validation and validity, I attempt an answer in a special issue of Language & Communication that commemorates the work of the late Alan Davies. In particular, I argue that responsible test design encompasses ethicality and accountability, and is a conceptually clearer way of thinking about the quality of a language test.
Elsevier, the publisher of the journal, has generously, though for a limited period, provided unlimited access to the article that I contributed to this commemorative issue. The final published version of the article, “Does responsibility encompass ethicality and accountability in language test design?”, is available until 17 December to anyone who clicks on the following link: https://authors.elsevier.com/a/1Vy-wzlItpy~5. No sign-up, registration or fees are required – you can simply click and read.
If you were a scientist working in the 1950s, you would claim that your work, the theory that you subscribed to, and the results of your academic endeavours were all neutral and objective. In the heyday of modernism, the mere suggestion that there were any external, non-scientific influences on your work would have implied a threat to the integrity of that work.
Fast forward 60 years, and you would now find it difficult to maintain that your scientific analyses are purely scientific, uninfluenced by any prejudice, and untainted by subjective concerns.
Is a theory of applied linguistics desirable? And if so, is it possible? My new book, Responsible design in applied linguistics: theory and practice (2017; Springer), proceeds from the thesis that applied linguistics needs a theoretical foundation. It is indeed possible to delineate its work (and specifically to distinguish it from linguistics). Providing it with a theoretical foundation might additionally yield new insight into the principles that underlie applied linguistic designs. We encounter those designs as the interventions we call language courses, language tests and language policies.
Avasha Rambiritch of the University of Pretoria and I have just written a chapter for a book edited by John Read (Post-admission Language Assessment of University Students, Springer, 2016) that shows how making sufficient information available about the conception, design, development, refinement and eventual administration of a test of language ability — in other words “telling the story of a test” — is the first step towards ensuring accountability for such tests. The test in question, the Test of Academic Literacy for Postgraduate Students (TALPS), is used to determine the academic literacy of prospective postgraduate students. For the full reference, see the bibliography on this site.
The assessment of the 11 “home languages” at the end of secondary school in South Africa is patently unfair. That is the finding of a recent investigation that Colleen du Plessis (UFS), Sanet Steyn (NWU) and I report on in an article just published in LitNet Akademies. The Grade 12 exit examinations are high-stakes assessments, since the Home Language mark contributes disproportionately to the index on the basis of which access is granted to higher education (or entry into the world of work). They are unfair because they are not equivalent: in some languages one has a much better chance of passing than in others.