Most categorization models are insensitive to the order in which stimuli are presented. However, studies have shown that the sequence of stimuli received during learning can influence how categories are formed. In this work, we develop a transfer model, the Ordinal General Context Model (OGCM), that incorporates ordinal information. The OGCM, based on the well-known Generalized Context Model, integrates serial order as a feature alongside ordinary physical features, distorting the psychological space in which stimuli are embedded. We show that integrating serial order during learning in the OGCM provides the best account of stimulus classification in our data sets.
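As a rough illustration of the idea, the sketch below implements the standard Generalized Context Model choice rule; the function name, parameters, and example values are hypothetical, not taken from the paper. Appending each exemplar's serial position as an extra feature column (with its own attention weight) is one way to realize the OGCM-style distortion of the psychological space described above.

```python
import numpy as np

def gcm_choice_probs(probe, exemplars, labels, weights, c=1.0, r=1):
    """Generalized Context Model: category choice probabilities for a probe.

    probe     : 1-D feature vector (may include a serial-position feature)
    exemplars : (n, d) array of stored training items
    labels    : length-n array of category labels
    weights   : attention weights over the d dimensions (sum to 1)
    c, r      : sensitivity and metric (r=1 city-block, r=2 Euclidean)
    """
    # Weighted Minkowski distance from the probe to every stored exemplar
    d = (weights * np.abs(exemplars - probe) ** r).sum(axis=1) ** (1 / r)
    s = np.exp(-c * d)  # exponential similarity gradient
    cats = np.unique(labels)
    evidence = np.array([s[labels == k].sum() for k in cats])
    return dict(zip(cats, evidence / evidence.sum()))
```

In this sketch the third column of each exemplar would hold its presentation rank, so two physically identical stimuli shown at distant points in the sequence end up farther apart in the model's similarity space.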
Computational models are spreading across several branches of cognitive science, making it essential to develop robust methods for comparing them. In this work, we propose a general inference method based on a specific hold-out strategy for selecting among learning models. This method retrieves the model that best fits the learning strategy of a single individual while taking into account the dependency within the data. We then apply this individual-level approach to two category learning models (ALCOVE and Component-Cue) on data sets manipulating presentation order. An accompanying figure shows the artificial neural network structure of ALCOVE.
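The sketch below illustrates one way such a hold-out comparison can respect sequential dependency: folds are contiguous blocks of trials rather than random shuffles. The function names and the blocked-fold simplification are assumptions for illustration, not the specific strategy developed in the paper.

```python
import numpy as np

def blocked_holdout_score(fit, loglik, trials, responses, n_folds=5):
    """Score one learning model for one individual with a blocked hold-out.

    Trials within a participant are sequentially dependent, so each fold is
    a contiguous block of trials (a hypothetical simplification).

    fit(trials, responses)          -> fitted parameters
    loglik(params, trials, resps)   -> held-out log-likelihood
    """
    all_idx = np.arange(len(trials))
    scores = []
    for held in np.array_split(all_idx, n_folds):
        train = np.setdiff1d(all_idx, held)          # fit on the other blocks
        params = fit(trials[train], responses[train])
        scores.append(loglik(params, trials[held], responses[held]))
    return float(np.sum(scores))  # higher = better out-of-sample fit
```

Running this for each candidate model (e.g. ALCOVE vs. Component-Cue) on one participant's trial sequence and keeping the model with the highest held-out score gives an individual-level selection of the kind described above.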
Investigating interactions between types of order in categorization
A large array of studies has shown that presentation order can influence learning speed and retention. However, little effort has been made to investigate how different types of order interact with one another. In this work, we use a full factorial design with three order manipulations (within-category, between-category, and across-blocks), each factor having two levels (rule-based vs. similarity-based, blocked vs. interleaved, and constant vs. variable orders, respectively), to study how concurrent types of order influence category learning and generalization.
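For concreteness, the full crossing of the three factors can be enumerated as below; the dictionary keys are ad hoc labels for the factors named above, not identifiers from the study.

```python
from itertools import product

# Three order manipulations, two levels each (from the design above)
factors = {
    "within_category":  ["rule-based", "similarity-based"],
    "between_category": ["blocked", "interleaved"],
    "across_blocks":    ["constant", "variable"],
}

# Full factorial crossing: 2 x 2 x 2 = 8 experimental conditions
conditions = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(conditions))  # -> 8
```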
How to fit transfer models to learning data: a segmentation/clustering approach
Transfer models in categorization cannot evolve over time, which constrains them to account only for participants' generalization patterns. In this study, we propose a statistical framework that allows transfer models to be applied to learning data. The framework rests on a segmentation/clustering technique specifically tailored to category learning data. We apply this adjusted technique to a well-known transfer model (the Generalized Context Model) across three novel experiments. An accompanying figure shows the technique applied to classification performance during the learning phase.
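A minimal sketch of the segmentation step is shown below, assuming a single changepoint found by minimizing within-segment squared error on a trial-by-trial accuracy series; the actual framework's segmentation and clustering are richer, and the function name and `min_len` parameter are hypothetical. Once the learning phase is cut into (approximately) stationary segments, a static transfer model like the GCM can be fit to each segment separately.

```python
import numpy as np

def segment_accuracy(acc, min_len=5):
    """Find the single split of a trial-by-trial accuracy series that most
    reduces within-segment squared error (one-changepoint sketch; a
    clustering step would then group segments with similar response
    profiles before fitting a transfer model to each)."""
    acc = np.asarray(acc, dtype=float)
    best_k, best_cost = None, np.inf
    for k in range(min_len, len(acc) - min_len):
        left, right = acc[:k], acc[k:]
        cost = (((left - left.mean()) ** 2).sum()
                + ((right - right.mean()) ** 2).sum())
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```

Applied recursively to each resulting segment, the same rule yields multiple changepoints, which is the usual way binary segmentation is extended.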