CMART Lecture and Workshop Series 2012-2013
Alina von Davier, Ph.D.
Center for Advanced Psychometrics
Educational Testing Service
Peter Halpin, Ph.D.
New York University
TITLE: Collaborative Problem Solving: Definition, Psychometric Models, and Assessment
DATE AND TIME: Wednesday, March 27, 2013, 4:00 – 5:30 PM
PLACE: Peter/McKenna Rooms, University Center – 2nd Floor, Carnegie Mellon University
This presentation discusses the considerations involved in building an assessment of domain skills through collaborative problem solving tasks. Collaborative problem solving tasks are part of a new generation of learning frameworks that involve innovative items, better use of technology, integrated tasks, and both cognitive and noncognitive skills (such as teamwork). To address the challenges of skill assessment in these learning frameworks, new psychometric models are required. The research ideas presented here focus on the analysis of both process data and outcome (summative) data, with an emphasis on the measurement of cognitive skills as well as teamwork interactions.
Collaborative problem solving requires that individuals work together to complete a complex task. We propose some broad principles and specific models that can be used to generalize traditional psychometric methods to the collaborative problem solving context. Analyzing data from an assessment that includes collaborative problem solving tasks involves several modeling aspects that are not encountered in traditional tests. Two assumptions suggested by prior research are (1) that people behave differently when they interact in teams than when they work alone, and (2) that their individual domain skills might not correlate highly with the team’s outcome. Assessing these differences in behavior may lead to augmenting the individual domain score obtained in isolation with an individual domain score obtained in collaboration and a team score. The data from such an assessment will, therefore, contain both process data and outcome data. Traditional assessment issues, such as reliability, validity, and comparability of tasks, are discussed, and modeling strategies are presented. For the process data, the usefulness of dyadic interactions, dynamic models, and hidden Markov models is discussed. For the outcome data, we argue for the need for measures of (a) individual performance, (b) group performance, and (c) the contribution of each individual to the group performance. Item response theory (IRT) based models are considered for the outcome data.
All are encouraged to attend.
CMART: Carnegie Mellon and RAND Traineeship in Methodology and Interdisciplinary Education Research
Sponsored by the Institute of Education Sciences