Abstracts from the 2004 Asian Language Testing Conference

    1. Linking Validity and Test Use in Language Assessments

    Lyle F. Bachman
    University of California, Los Angeles
     

    Abstract

      The fields of language testing and educational and psychological measurement have not, as yet, developed a set of principles and procedures for linking test scores and score-based inferences to test use and the consequences of test use. While Messick (1989) discusses test use and consequences, his framework provides virtually no guidance on how to go about investigating these in the course of practical test development. Argument-based formulations of validity (e.g. Kane, 1992, 2001, 2002; Kane, Crooks, & Cohen, 1999; Mislevy, in press; Mislevy, Steinberg, & Almond, 2003) provide a logic and set of procedures for investigating and supporting claims about score-based inferences, but do not address issues of test use and the consequences of test use. Recent formulations in language testing (e.g. Bachman & Palmer, 1996; Kunnan, 2003; Lynch, 2001) are essentially lists of more or less independent qualities and questions, with no clear mechanism for integrating these into a set of procedures for test developers and users to follow.
      Discussions of validity and test use in the language testing literature have generally failed to provide an explicit link between these two essential considerations. The extensive research on validity and validation has tended to ignore test use, on the one hand, while discussions of test use and consequences have tended to ignore validity, on the other. To their credit, those researchers who have attempted to link validity and test use have enlarged our perspective beyond Messick’s unitary validity model. In articulating the test qualities that they believe to be important, these researchers have opened up the lines of debate and expanded the dialogue about what should be the overarching concern in language assessment—the way language assessments get used and the consequences of these uses. Nevertheless, what we have at present, as a basis for justifying test use, are essentially lists of qualities and questions that test developers and test users need to consider, with no clear logical mechanism for integrating these into a set of procedures for test developers and users to follow.
      In this presentation I describe what I believe to be a means for providing this logical linkage between validity and test use. I describe how an argument for linking validity to test use might be articulated following Toulmin’s (2003) argument structure. This assessment use argument consists of two parts: 1) an assessment validity argument, linking test performance to score-based interpretations, and 2) an assessment utilization argument, linking interpretations to intended uses or decisions. I argue that an assessment use argument can guide the design and development of assessments, and can also lead to a much more focused, efficient program for collecting the most critical evidence (backing) in support of the interpretations and uses for which the assessment is intended.
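      The two-part argument structure described above can be sketched as a simple data structure. This is only an illustrative reading of the abstract, not Bachman's own formalization; the class and field names (Inference, AssessmentUseArgument, evidence_gaps) are assumptions introduced here:

```python
from dataclasses import dataclass, field

@dataclass
class Inference:
    """One Toulmin-style inference: a claim supported by a warrant,
    which rests on backing (evidence) and may face rebuttals."""
    claim: str
    warrant: str
    backing: list = field(default_factory=list)
    rebuttals: list = field(default_factory=list)

@dataclass
class AssessmentUseArgument:
    """Two linked chains, following the abstract:
    validity    - test performance -> score-based interpretation
    utilization - interpretation  -> intended use or decision."""
    validity: list      # list[Inference]
    utilization: list   # list[Inference]

    def evidence_gaps(self):
        """Claims still lacking backing -- where evidence collection
        should be focused first."""
        return [i.claim for i in self.validity + self.utilization
                if not i.backing]

# Hypothetical example: one backed validity inference, one
# utilization inference that still needs backing.
aua = AssessmentUseArgument(
    validity=[Inference(claim="Scores reflect reading ability",
                        warrant="Tasks sample the reading construct",
                        backing=["content review"])],
    utilization=[Inference(claim="Scores support placement decisions",
                           warrant="Cut scores match course demands")],
)
```

      One design point this makes concrete: the structure directly yields the "focused, efficient program" the abstract mentions, since `evidence_gaps()` lists exactly the inferences for which backing has not yet been collected.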

    2. The standard setting procedure and the Rasch analysis for
    the English basic competency assessment in Hong Kong

    Guanzhong Luo, Gregory Chan and Peter Hill
    Hong Kong Examinations and Assessment Authority

    Abstract

      The Territory-wide System Assessment (TSA) administered to Primary 3 pupils in July 2004 covered three major subjects, including English Language. In the assessment, several sub-papers were designed to cover the entire English Language curriculum. In addition, for standard-setting purposes, two groups of judges were convened to estimate, for each item, the percentage of students at the minimum required competency level who would answer it correctly. This paper describes the standard-setting procedure and the associated analysis using Item Response Theory (IRT), in particular the Rasch models.
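      The arithmetic behind such a procedure can be sketched under the dichotomous Rasch model: a judge's estimated success probability for a minimally competent student on an item of known difficulty can be inverted into an ability cut score on the logit scale. The function names and example values below are illustrative assumptions, not taken from the paper:

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Probability of a correct response under the dichotomous Rasch
    model, for ability theta and item difficulty b (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def cut_from_judgement(p: float, b: float) -> float:
    """Invert the Rasch model: the ability theta at which the expected
    success probability on an item of difficulty b equals p."""
    return b + math.log(p / (1.0 - p))

# Hypothetical example: judges estimate a 70% success rate for a
# minimally competent pupil on an item of difficulty -0.5 logits.
item_cuts = [cut_from_judgement(0.7, -0.5)]
# Averaging per-item cuts over items (and judges) gives one way to
# arrive at an overall ability cut score.
theta_cut = sum(item_cuts) / len(item_cuts)
```

      The logit inversion is exact: plugging a derived cut score back into `rasch_prob` with the same item difficulty recovers the judged percentage, which gives a simple consistency check on the calculation.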
