
Bias and noise in proportion estimation: A mixture psychophysical model

Publisher: Cognition


AUTHORS

Gouet, C., Jin, W., Naiman, D. Q., Peña, M., Halberda, J.


ABSTRACT

The importance of proportional reasoning has long been recognized by psychologists and educators, yet we still do not have a good understanding of how humans mentally represent proportions. In this paper we present a psychophysical model of proportion estimation, extending previous approaches. We assumed that proportion representations are formed by representing each magnitude of a proportion stimulus (the part and its complement) as Gaussian activations in the mind, which are then mentally combined in the form of a proportion. We next derived the internal representation of proportions, including bias and internal noise parameters, capturing respectively how our estimations depart from true values and how variable estimations are. Methodologically, we introduced a mixture of components to account for contaminating behaviors (guessing and reversal of responses) and framed the model hierarchically. We found empirical support for the model by testing a group of 4th grade children in a spatial proportion estimation task. In particular, the internal density reproduced the asymmetries (skewness) seen in this and in previous reports of estimation tasks, and the model accurately described wide between-subject variation in behavior. Bias estimates were in general smaller than those obtained with previous approaches, owing to the model's capacity to absorb contaminating behaviors. This property of the model can be of special relevance for studies aimed at linking psychophysical measures with broader cognitive abilities. We also recovered higher levels of noise than those reported in discrimination of spatial magnitudes and discuss possible explanations for this. We conclude by illustrating a concrete application of our model to study the effects of scaling in proportional reasoning, highlighting the value of quantitative models in this field of research.
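The generative process the abstract describes can be sketched as a simulation: each magnitude is encoded as a Gaussian, the two samples are combined into a proportion, a bias transform warps the estimate, and a small mixture of guessing and reversal responses contaminates the output. This is a minimal illustrative sketch; all parameter names, the scalar-variability noise, and the log-odds-power bias form are assumptions for demonstration, not the paper's actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_estimates(part, complement, w=0.2, beta=1.0,
                       p_guess=0.05, p_reversal=0.05, n=10_000):
    """Simulate noisy proportion estimates under a mixture model.

    Hypothetical parameterization: each magnitude is represented as a
    Gaussian with scalar variability (Weber fraction w); the samples are
    combined as part / (part + complement); a log-odds power transform
    with exponent beta models bias; with probability p_guess the response
    is a uniform guess, and with probability p_reversal it is the
    reversal (1 - estimate) of the internal value.
    """
    # Gaussian internal representations of part and complement
    a = np.clip(rng.normal(part, w * part, n), 1e-9, None)
    b = np.clip(rng.normal(complement, w * complement, n), 1e-9, None)
    est = a / (a + b)                                   # combine as a proportion
    est = est**beta / (est**beta + (1 - est)**beta)     # bias (no bias when beta=1)
    # Mixture of contaminating behaviors: guessing and response reversal
    u = rng.random(n)
    est = np.where(u < p_guess, rng.random(n),
          np.where(u < p_guess + p_reversal, 1 - est, est))
    return est

ests = simulate_estimates(3.0, 7.0)  # true proportion = 0.3
print(float(np.mean(ests)))
```

Because the guessing component pulls responses toward 0.5 and reversals toward 0.7, the mean estimate sits slightly above the true value of 0.3; fitting such a model would absorb those contaminants into the mixture weights rather than into the bias parameter, which is the property the abstract highlights.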

