Please join us Monday, March 9 at 10:00 a.m. in 232M Baker Hall for this talk given by Jeremy Koster, PhD, from the University of Cincinnati
Title: “Multilevel Item Response Models of Ethnobiological Knowledge Among Indigenous Nicaraguans”
A common assumption among anthropologists is that individuals continue to accumulate ethnobiological knowledge throughout their lives, resulting in greater expertise among the elder generations. Alternative theoretical perspectives suggest that ethnobiological knowledge about animals should peak earlier in life, paralleling and facilitating the emergence of foraging proficiency among younger adults. In a study conducted among the Mayangna and Miskito of Nicaragua, I assessed knowledge about fish behavior in three ways: (1) via a free listing exercise, (2) a photo recognition task, and (3) a 50-question instrument about fish behavior, as developed from biologists’ reports on fish in the region. I analyze the data with multilevel logistic regression models, as estimated via MCMC methods, incorporating cross-classified random effects for the informants and the questions/species. The results indicate that individuals exhibit considerable domain knowledge as relatively young adults. Related models reveal a positive correlation between knowledge and fishing ability, suggesting that knowledge both promotes and develops from specialization and the allocation of effort to fishing. Finally, a comparison of responses to the questions about fish behavior suggests that parents and their offspring exhibit similar beliefs, providing novel support for anthropological models in which cultural transmission from parents to children is central to the ontogeny of ethnobiological knowledge.
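As a rough illustration of the cross-classified structure described in the abstract, the sketch below simulates a Rasch-style logistic response model in which informant and item effects are crossed rather than nested. All numbers here are hypothetical; the talk's actual analysis estimates such a model from field data via MCMC.

```python
import numpy as np

rng = np.random.default_rng(0)
n_informants, n_items = 50, 20

# Crossed random effects: every informant answers every item, so
# informant and item effects are cross-classified, not nested.
theta = rng.normal(0.0, 1.0, size=n_informants)   # informant knowledge
beta = rng.normal(0.0, 0.8, size=n_items)         # item easiness

# Logistic response model: logit P(correct_ij) = theta_i + beta_j
logits = theta[:, None] + beta[None, :]
p = 1.0 / (1.0 + np.exp(-logits))
responses = rng.binomial(1, p)                    # 0/1 answer matrix

print(responses.shape)
```

An MCMC fit would place priors on the informant and item variances and recover `theta` and `beta` jointly from the `responses` matrix.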
Please join us for the first of the 2015 CMART Speaker Series
Kosuke Imai will be speaking at 4:00 p.m. this Monday, March 2.
125 Scaife Hall
Kosuke is a professor in the Department of Politics at Princeton University and has written on a wide array of topics in causal inference.
Title: Causal Interaction in High Dimension
Abstract: Estimating causal interaction effects is essential for the exploration of heterogeneous treatment effects. In the presence of multiple treatment variables with each having several levels, researchers are often interested in identifying the combinations of treatments that induce large additional causal effects beyond the sum of separate effects attributable to each treatment. We show, however, that the standard approach to causal interaction suffers from a lack of invariance to the choice of baseline condition and from difficulty of interpretation beyond two-way interactions. We propose an alternative definition of the causal interaction effect, called the marginal treatment interaction effect, whose relative magnitude does not depend on the choice of baseline condition while maintaining an intuitive interpretation even for higher-order interactions. The proposed approach enables researchers to effectively summarize the structure of causal interaction in high dimension by decomposing the total effect of any treatment combination into the marginal effects and the interaction effects. We also establish the identification condition and develop an estimation strategy for the proposed marginal treatment interaction effects. Our motivating example is conjoint analysis, where the existing literature largely assumes the absence of causal interaction. Given the large number of interaction effects, we apply a variable selection method to identify significant causal interactions. Our analysis of a survey experiment on immigration preferences reveals substantive insights that the standard conjoint analysis fails to discover. The paper is available at http://imai.princeton.edu/research/int.html
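As a toy numerical illustration of the baseline-dependent decomposition the abstract critiques, consider two binary treatments. The numbers below are invented; note that the paper's proposed marginal treatment interaction effects are defined differently (by averaging over treatment conditions rather than fixing a baseline) precisely to avoid the baseline dependence shown here.

```python
import numpy as np

# Hypothetical mean outcomes for two binary treatments A and B.
# y[a, b] is the average outcome under treatment combination (a, b).
y = np.array([[1.0, 3.0],
              [2.0, 6.0]])

# Standard decomposition relative to the baseline (0, 0): two main
# effects plus a two-way interaction.
main_A = y[1, 0] - y[0, 0]                          # effect of A alone
main_B = y[0, 1] - y[0, 0]                          # effect of B alone
interaction = y[1, 1] - y[1, 0] - y[0, 1] + y[0, 0]

# The total effect of the combination decomposes exactly, but every
# quantity above changes if a different cell is chosen as baseline.
total = y[1, 1] - y[0, 0]
print(total, main_A + main_B + interaction)  # 5.0 5.0
```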
Please join us Monday, December 1 at 10:00 a.m. in 232M Baker Hall for this talk given by J.R. Lockwood, PhD, of The Educational Testing Service
Title: Inferring Constructs of Effective Teaching from Classroom Observations: An Application of Bayesian Exploratory Factor Analysis Without Restrictions
Abstract: The dramatic public policy shifts toward increasing teacher accountability have generated numerous sets of student outcome, teaching process, and teacher knowledge-based instruments, all seeking to measure the quality of teaching. These instruments are being used to assess and pay teachers without a clear understanding of what aspects of teaching are being assessed, and how the many dimensions that comprise any one measure relate to those of other measures. We use data from multiple instruments collected from approximately 450 middle school mathematics and English language arts teachers and their students to inform research and practice on teacher performance measurement by modeling the underlying constructs of high-quality teaching. We make inferences about these constructs using a novel approach to Bayesian exploratory factor analysis (EFA) that, unlike commonly used approaches for identifying factor loadings in Bayesian EFA, is invariant to how the data dimensions are ordered. Using this approach with our data reveals two distinct teaching constructs in both mathematics and English language arts: 1) practices used by teachers to instruct and engage students; and 2) teacher management of classrooms. We demonstrate the relationships of these constructs to other indicators of teaching quality, including teacher content knowledge and student performance on standardized tests.
Please join us Monday, November 17 at 10:00 a.m. in 232M Baker Hall for this talk given by Ilya Goldin, PhD, of Pearson Education
Title: Individual differences in identifying sources of science knowledge
Joint work with Maggie Renken, April Galyardt, and Ellen Litkowski
Abstract: We have developed an instrument to assess students’ proficiencies in identifying sources of science knowledge (SoK) in text passages. We describe the new web-based instrument and our evaluation of the instrument with a sample (n = 338) of children in grades 2-8. By creating and validating this tool, we aim to establish a learning progression, inform science teaching, and tailor instruction to individual differences. Our findings suggest that students demonstrate differential ability in identifying SoK and thus imply the need for instruction to accommodate individual student perspectives on SoK. We expect that highlighting student ability in identifying SoK as a distinct skill will enable differentiated, adaptive instruction. We further expect this instrument to make explicit a component of what it means to think like a scientist, and in doing so facilitate conversations among teachers and students about the practice of science.
Please join us Monday, October 20 at 10:00 a.m. in 232M Baker Hall for this talk given by Dan McCaffrey of the Educational Testing Service
Title: Uncovering Multivariate Structure in Classroom Observations in the Presence of Rater Errors
Abstract: We examine the factor structure of scores from the CLASS-S protocol obtained from observations of middle school classroom teaching. Factor analysis has been used to support both interpretations of scores from classroom observation protocols, like CLASS-S, and the theories about teaching that underlie them. However, classroom observations contain multiple sources of error, most prominently rater errors. We demonstrate that errors in scores made by two raters on the same lesson have a factor structure that is distinct from the factor structure at the teacher level. Consequently, the ‘standard’ approach of analyzing teacher-level average dimension scores can yield incorrect inferences about the factor structure at the teacher level and possibly misleading evidence about the validity of scores and theories of teaching. We consider alternative hierarchical estimation approaches designed to prevent the contamination of estimated teacher-level factors. These alternative approaches find a teacher-level factor structure for CLASS-S that consists of strongly correlated support and classroom management factors. Our results have implications for future studies using factor analysis on classroom observation data to develop validity evidence and test theories of teaching, and for practitioners who rely on the results of such studies to support their use and interpretation of classroom observation scores.
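A small hypothetical simulation illustrates why rater error matters for factor structure: when rater errors are nearly uncorrelated across dimensions but teacher-level scores are strongly correlated, correlations computed from averaged observed scores understate the teacher-level correlation. All parameters below are invented for illustration and are not CLASS-S estimates.

```python
import numpy as np

rng = np.random.default_rng(3)
n_teachers, n_lessons = 1000, 4

# Teacher-level scores on two dimensions are strongly correlated (0.8);
# rater errors have their own, much weaker cross-dimension correlation.
teacher_cov = np.array([[1.0, 0.8],
                        [0.8, 1.0]])
rater_cov = np.array([[1.0, 0.2],
                      [0.2, 1.0]])

teacher = rng.multivariate_normal([0, 0], teacher_cov, size=n_teachers)

# Observed teacher score: average over lessons of teacher effect + rater error.
obs = np.zeros_like(teacher)
for _ in range(n_lessons):
    obs += teacher + rng.multivariate_normal([0, 0], rater_cov, size=n_teachers)
obs /= n_lessons

r_true = np.corrcoef(teacher.T)[0, 1]
r_obs = np.corrcoef(obs.T)[0, 1]
# Averaging raters dilutes but does not remove the error, attenuating
# the observed cross-dimension correlation relative to the teacher level.
print(round(r_true, 2), round(r_obs, 2))
```

Hierarchical approaches like those in the talk model the lesson/rater level explicitly instead of averaging it away.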
Please join us Monday, October 6 at 10:00 a.m. in 232M Baker Hall for this talk given by Brian Junker (CMU)
Title: Predictive Inference Using Latent Variables with Covariates
Joint with Dan A. Black (University of Chicago), Lynne Steuerle Schofield (Swarthmore), and Lowell J. Taylor (Carnegie Mellon)
Abstract: Plausible Values (PVs) have been a standard multiple imputation tool for latent proficiency variables in large-scale education survey data since their implementation in the National Assessment of Educational Progress (NAEP) in the 1980s. Today PVs are used widely in many national and international education surveys. When latent proficiency is the dependent variable in an analysis, well-constructed PVs provide guarantees of unbiasedness for inferences about latent proficiency. We review the well-known results that provide these guarantees, and try to extend them to the case in which latent proficiency is one of the independent variables in an analysis. We show that the same guarantees are impossible in the latter case, and provide an alternative approach, based on Schofield’s (2008) mixed effects structural equations (MESE) model. An example using data from the 1992 National Adult Literacy Survey (NALS) illustrates our results.
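For readers unfamiliar with the mechanics, here is a minimal sketch of the PV workflow in the well-behaved case (proficiency as the dependent variable): analyze each set of plausible values separately, then pool estimates with Rubin's rules. The data are simulated and the combining formulas are the standard multiple-imputation ones, not anything specific to this talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n, M = 200, 5

# Hypothetical data: x is an observed covariate; the M columns of pv are
# plausible values (multiple imputations) of latent proficiency, with a
# true regression slope of 0.5 on x.
x = rng.normal(size=n)
pv = 0.5 * x[:, None] + rng.normal(scale=1.0, size=(n, M))

# Rubin's rules: run the analysis once per PV set, then pool.
X = np.column_stack([np.ones(n), x])
slopes, variances = [], []
for m in range(M):
    beta = np.linalg.lstsq(X, pv[:, m], rcond=None)[0]
    resid = pv[:, m] - X @ beta
    sigma2 = resid @ resid / (n - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    slopes.append(beta[1])
    variances.append(cov[1, 1])

q_bar = np.mean(slopes)                  # pooled slope estimate
u_bar = np.mean(variances)               # within-imputation variance
b = np.var(slopes, ddof=1)               # between-imputation variance
total_var = u_bar + (1 + 1 / M) * b      # Rubin's total variance
print(round(q_bar, 2))
```

The talk's point is that this unbiasedness breaks down when the PVs appear on the right-hand side of the regression instead, motivating the MESE alternative.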
The schedule for talks is as follows:
Sept 22: Adam Sales
Oct 6: Brian Junker
Oct 20: Dan McCaffrey
Nov 3: Ally Thomas
Nov 17: Ilya Goldin
Dec 1: JR Lockwood
Looking forward to seeing you!
Please join us Monday, Sept 22 at 10:00 a.m. in 232M Baker Hall for this talk given by Adam Sales (CMART post-doc, CMU/RAND)
Title: A Useful Model? Using Covariates to Test the Usefulness of the Randomization Assumption.
Social scientists often look for “natural experiments” to estimate causal effects. They claim, based on quirks in a known part of the data-generating process, that a treatment was assigned haphazardly. It is good practice in these analyses to test covariate balance: significantly different distributions of baseline covariates between treatment and control groups will falsify the random treatment model. But as we know from Stats 101, failing to reject the hypothesis that covariates are balanced does not imply that they are. In this talk, I will propose a modified procedure where instead of testing the hypothesis that covariates are balanced, researchers can test whether they are imbalanced enough to invalidate the study.
I will use as an example data from Deming (2009), which investigated the effects of Head Start.
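One way to formalize “imbalanced enough to invalidate” is an equivalence-style procedure (two one-sided tests), where the null hypothesis is that imbalance exceeds a tolerance and rejecting it supports approximate balance. The sketch below uses simulated data, a normal approximation, and an assumed tolerance; it is illustrative only and not necessarily the speaker's exact procedure.

```python
import math
import numpy as np

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

rng = np.random.default_rng(2)

# Hypothetical covariate in treatment and control groups of a "natural
# experiment"; a small true imbalance of 0.05 SD is built in.
treat = rng.normal(0.05, 1.0, size=2000)
ctrl = rng.normal(0.00, 1.0, size=2000)

# Tolerance: the largest standardized imbalance deemed ignorable (assumed).
delta = 0.25

diff = treat.mean() - ctrl.mean()
pooled_sd = math.sqrt((treat.var(ddof=1) + ctrl.var(ddof=1)) / 2.0)
se = pooled_sd * math.sqrt(1.0 / len(treat) + 1.0 / len(ctrl))
margin = delta * pooled_sd

# Two one-sided tests against the equivalence bounds +/- margin;
# a small p_tost supports "imbalance is within tolerance".
p_lower = 1.0 - norm_cdf((diff + margin) / se)  # H0: diff <= -margin
p_upper = norm_cdf((diff - margin) / se)        # H0: diff >= +margin
p_tost = max(p_lower, p_upper)
print(round(p_tost, 4))
```

The key reversal relative to the Stats 101 balance test is the burden of proof: here the data must actively demonstrate that imbalance is small, rather than merely fail to demonstrate that it is large.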
After a bit of a break, please join us Monday, August 11 at 10:00 a.m. in 232M Baker Hall for this talk given by David Choi, a Professor of Statistics and Information Systems at Heinz College, CMU.
Title: Estimation of monotone treatment effects in network experiments
Abstract: Randomized experiments in social network settings are a trending research topic. In addition to the logistical difficulties of running a large social experiment, there may also be statistical challenges in analyzing the data. We discuss the statistical challenge of analyzing experimental data in social networks when the network cannot be divided into smaller non-interacting subgroups, so that interference between units must be taken into account. We present work in progress on how to rigorously analyze such data, assuming that the treatment effect is nonnegative but otherwise making no further assumptions on the flow of influence between units.
Please join us Monday, June 30 at 10:00 a.m. in 232M Baker Hall for this talk and workshop led by Leah Clark (CMU Economics and Public Policy)
Title: Patterns of Student Enrollment and Teacher Staffing in Allegheny County Schools since 1997.
**This will be a data analysis workshop. Leah will present her research problem, data resources and questions of interest, and we will talk through some of the data analysis issues as a group.**
Abstract: Enrollment in Pittsburgh Public Schools has declined every year since 1997. Meanwhile, some public schools have closed, some charter schools have opened, and the school-age population in Pittsburgh has declined. I am investigating whether school- and district-level data can provide insight into the choices parents make about where to enroll their children, and the choices teachers make about where to work. The data will permit analyses of other similarly situated U.S. cities.