
Test theory for a new generation of tests / ed. Norman Frederiksen, Robert J. Mislevy, Isaac I. Bejar

Secondary Authors: Frederiksen, Norman; Mislevy, Robert J.; Bejar, Isaac I.
Country: United States
Publication: Hillsdale : Lawrence Erlbaum Associates, 1993
Description: XII, 404 p. : ill. ; 24 cm
ISBN: 0-8058-0593-1
UDC: 371.26
Holdings

Item type: Monograph
Current location: Biblioteca de Ciências da Educação
Call number: BCE1 371.26 - T
Status: Available
Barcode: 135112
Total holds: 0

Enhanced descriptions from Syndetics:

The editors of this volume suggest that there are missing elements in the conceptualization upon which standard test theory is based. Those elements are models for just how people know what they know and do what they can do, and the ways in which they increase these capacities. Different models are useful for different purposes; therefore, broader or alternative student models may be appropriate. The chapters in this volume consider a variety of directions in which standard test theory might be extended. Topics covered include: the role of test theory in light of recent work in cognitive and educational psychology, test design, student modeling, test analysis, and the integration of assessment and instruction.

Table of contents provided by Syndetics

  • Introduction (p. ix)
  • References (p. xi)
  • 1 Cognitive Psychology, New Test Design, and New Test Theory: An Introduction (p. 1)
  • Conclusion and Challenge (p. 12)
  • 2 Foundations of a New Test Theory (p. 19)
  • Introduction (p. 19)
  • Conclusion (p. 34)
  • References (p. 36)
  • 3 Cognitive Diagnosis: from Statistically Based Assessment Toward Theory-Based Assessment (p. 41)
  • Introduction (p. 41)
  • References (p. 69)
  • Comments on Chapters 1-3 (p. 72)
  • Conclusion (p. 77)
  • 4 Repealing Rules That No Longer Apply to Psychological Measurement (p. 79)
  • References (p. 96)
  • 5 Toward Intelligent Assessment: An Integration of Constructed-Response Testing, Artificial Intelligence, and Model-Based Measurement (p. 99)
  • References (p. 122)
  • 6 Psychometric Models for Learning and Cognitive Processes (p. 125)
  • Comments on Chapters 4-6 (p. 151)
  • 7 Assessing Schema Knowledge (p. 155)
  • References (p. 180)
  • 8 Learning, Teaching, and Testing for Complex Conceptual Understanding (p. 181)
  • Introduction (p. 181)
  • Conclusion (p. 213)
  • Acknowledgments (p. 215)
  • 9 New Views of Student Learning: Implications for Educational Measurement (p. 219)
  • Introduction (p. 219)
  • Conclusion (p. 239)
  • References (p. 241)
  • 10 Addressing Process Variables in Test Analysis (p. 243)
  • Introduction (p. 243)
  • Acknowledgments (p. 267)
  • References (p. 268)
  • Comments on Chapters 7-10 (p. 269)
  • References (p. 274)
  • 11 Application of a Hybrid Model to a Test of Cognitive Skill Representation (p. 275)
  • References (p. 294)
  • 12 Test Theory and the Behavioral Scaling of Test Performance (p. 297)
  • References (p. 322)
  • 13 A Generative Approach to Psychological and Educational Measurement (p. 323)
  • Conclusions (p. 347)
  • References (p. 352)
  • 14 Representations of Ability Structures: Implications for Testing (p. 359)
  • Introduction (p. 359)
  • Conclusions (p. 382)
  • References (p. 384)
  • Comments on Chapters 11-14 (p. 385)
  • Author Index (p. 391)
  • Subject Index (p. 401)

Reviews provided by Syndetics

CHOICE Review

An edited book with a format much like a research seminar. Authors in each section address a particular measurement issue, and a "discussant" evaluates their contributions to test theory and practice. The rationale for the work is that classical test and item response theories rely on oversimplified theories of human abilities that emphasize status rather than the evidence or direction of change. Current concerns about construct validity in testing, often expressed as a desire for more "authentic assessment," underscore the need for test theory and practice that reflects a more contemporary understanding of human ability, of learning, and of the structure of knowledge. The discussion ranges from a critique of classical test and item response theories (IRT) to an extension of the capabilities of IRT, and finally to evaluations of response generative modeling (RGM), an approach that permits constructed responses to be computer scored. Recommended. Junior/senior undergraduates; graduate students; faculty; test practitioners. D. E. Tanner; California State University, Fresno
