Developing one’s own language assessments: taking responsibility, ensuring appropriateness, taking ownership

Singapore Institute of Technology

There is something reassuring for university administrators and decision-makers in using the results of large-scale tests. They seldom worry about their contextual appropriateness, or about their cost, or even enquire about their quality. In their minds, the large reach of the test ensures its reputation. As to costs? Well, the argument goes, if students wish to undertake studies at this university, they must be prepared to pay for that privilege.

But do institutions of higher education get what they want from large-scale commercial tests, some of which have a global reach? Do they find enough diagnostic information in them, for example, to help them devise focussed language courses that would overcome the problems identified? Are the tests specific enough to be contextually appropriate measures of, in this instance, the academic literacy levels of their particular students? Are they to be trusted when it comes to deciding whether to place students on the language development interventions they are providing, or, in those cases where placement on more than one intervention is available, selecting who should be placed where?

… many tertiary education institutions are now taking responsibility to develop and design their own instruments to measure language ability

It is telling that many tertiary education institutions are now taking responsibility to develop and design their own instruments to measure language ability. In most cases, that means they become competent in making adequate assessments: they acquire what is called assessment literacy, a much investigated and topical field at the moment. Making their own language test gives them immediate access not only to the test results, but also to the statistical analyses of the empirical data that the test yields, and teaches them to interpret those analyses. What is more, they can make the test available at a very small fraction of the cost of a commercial test (as little as one two-hundred-and-fortieth). So even if they have to ask students to pay for it, the impact would be minimal. A further benefit is the diagnostic information they gain, information for which they previously had no empirical backing. Finally, they can ensure that the test is wholly contextually appropriate, tailored to their exact needs. In short: they have gained in many respects by taking ownership of the assessment.

Centre for Communication Skills (CCS)

This week and last, an academic literacy test development team from the Centre for Communication Skills (CCS) of the Singapore Institute of Technology (SIT) came together to design and develop a test for their specific use. I was fortunate to have been asked to act as advisor to the team, headed by the CCS director, Xudong Deng, and deputy director Chien Ching Lee. Together, the team members developed a total of eight subtests that will make up the first tier of the test. From the more than 180 items that they devised they will, after piloting, be able to select the most productive ones for the final version of the test. They have also developed a second-tier test, which can be used as a second-chance test for borderline results. That ensures greater fairness in making decisions based on the test results. Earlier this year, at a similar venture at the Open Access College of the University of Southern Queensland, I was equally fortunate to head up a design team that developed a contextually appropriate and relevant academic literacy assessment for them. They are about to start piloting it.

A two-tier test of academic literacy

The provisional name of the assessment we developed over the last two weeks is the Academic Literacy Test of the Singapore Institute of Technology (ALTSIT). The test, which still needs to be piloted, will have two tiers, the first of which comprises an extensive 60-item test of 75 minutes' duration, in multiple-choice format. Then there will be a further, second-chance test for those who might have been misclassified by the first-tier test. The tier-two test will require candidates to produce a written argument on a topical issue related to the overall theme of the ALTSIT, and is likely to be scored manually, whereas the plan is to machine-score the first tier.

Pictured from left to right in the first photograph are several key members of the team: Hwee Yang Neo, Suchen Wang, Hwee Hoon Lee, and Albert Weideman. In the others, you will also spot Su Jia Gan, Victor Cole and Padma Rao.

Albert Weideman with ALTSIT team

Albert Weideman with test development team

Albert Weideman at SIT
