Returning to the still unresolved issue of how best to conceptualize test validation and validity, I attempt an answer in a special issue of Language & Communication that commemorates the work of the late Alan Davies. In particular, I argue that responsible test design encompasses both ethicality and accountability, and offers a conceptually clearer way of thinking about the quality of a language test.
Elsevier, the publisher of the journal, has generously, though for a limited period, provided unlimited access to the article that I contributed to this commemorative issue. The final published version of the article, “Does responsibility encompass ethicality and accountability in language test design?”, is available until 17 December to anyone who clicks on the following link: https://authors.elsevier.com/a/1Vy-wzlItpy~5. No sign-up, registration or fees are required: you can simply click and read.
If you were a scientist working in the 1950s, you would have claimed that your work, the theory that you subscribed to, and the results of your academic endeavours were all neutral and objective. In the heyday of modernism, the mere suggestion that there were any external, non-scientific influences on your work would have implied a threat to the integrity of that work.
Fast forward 60 years, and you would now find it difficult to maintain that your scientific analyses are purely scientific, uninfluenced by any prejudice, and untainted by subjective concerns. Continue reading →
Is a theory of applied linguistics desirable? And if so, is it possible? My new book, Responsible design in applied linguistics: theory and practice (2017; Springer), proceeds from the thesis that applied linguistics needs a theoretical foundation. It is indeed possible to delineate its work (and specifically to distinguish it from linguistics). Providing it with a theoretical foundation might additionally yield new insight into the principles that underlie applied linguistic designs: the interventions we encounter as language courses, language tests and language policies. Continue reading →
Avasha Rambiritch of the University of Pretoria and I have just written a chapter for a book edited by John Read (Post-admission Language Assessment of University Students, Springer, 2016) that shows how making sufficient information available about the conception, design, development, refinement and eventual administration of a test of language ability — in other words “telling the story of a test” — is the first step towards ensuring accountability for such tests. The test in question, the Test of Academic Literacy for Postgraduate Students (TALPS), is used to determine the academic literacy of prospective postgraduate students. For the full reference, see the bibliography on this site. Continue reading →
The assessment of the 11 “home languages” at the end of secondary school in South Africa is patently unfair. That is the finding of a recent investigation that Colleen du Plessis (UFS), Sanet Steyn (NWU) and I report on in an article that has just been published on LitNet Akademies. The Grade 12 exit examinations are high-stakes assessments, since the Home Language mark contributes disproportionately to the index on the basis of which access is granted to higher education (or entry into the world of work). They are unfair because they are not equivalent: in some languages one has a much better chance of passing than in others. Continue reading →