This study proposes that recognizing objects through configural cues, i.e., perceiving the spatial relationships among an object's components, provides a more robust recognition strategy than relying on local featural cues. This hypothesis was tested with identification tasks using composite letter stimuli and with neural network models trained on either local or configural cues. For researchers interested in replicating this study or exploring its findings further, the stimulus set and evaluation code are available. A brief summary is provided below; for further inquiries, please contact hojin4671@korea.ac.kr.

**data**:
- The data include synthetic letter stimuli (EMNIST) and face images (MakeFace). The EMNIST database and the MakeHuman toolbox are required to generate these datasets. The script **analysis_create_dataset_emnist_v1.py** creates datasets for the "local," "configural," and "local-plus-configural" conditions.
- Tasks 1, 3, and 4 correspond to the "local," "configural," and "local-plus-configural" datasets, respectively. Tasks 5, 7, and 8 refer to control versions of these datasets, in which the individual local features consist of new sets of letters (see the **Results** section "*Configural processing is independent of individual local features*").

**models**:
- Both feedforward and recurrent neural network models were tested, including ResNet18, ResNet34, ResNet50, CORnet-S, BLnet, BLTnet, and ConvLSTM. Each model was slightly modified from its original version (see the **Methods** section "*Neural network architectures*").

**scripts**:
- **analysis_v1_accuracy.py**: Evaluates the accuracy of the trained networks on the "local" and "configural" tasks.
- **analysis_v1_accuracy_task4.py**: Evaluates the accuracy of the trained networks on the "local-plus-configural" task.
- **analysis_v1_accuracy_shuffle.py** and **analysis_v1_accuracy_task4_shuffle.py**: Evaluate the accuracy of the trained networks under a random shuffling strategy (see **Supplementary Figures 2-3**).
- **analysis_v1_bias.py** and **analysis_v1_bias_task4.py**: Measure the sensitivity of individual neurons to local and configural cues (see the **Methods** section "*Layerwise neural sensitivity analysis*").
- **train_v1_task?.py**: Trains the network with the prototypical loss function.
- **train_v2_task?.py**: Trains the network with the standard cross-entropy classification loss.
- **train_v5_task?.py**: Trains the network with the prototypical loss function and a random shuffling strategy.
- **train_v6_facescrub.py**: Trains the network on the FaceScrub database.
- **train_v10_imagenet.py**: Trains the network on the ImageNet database.

**figures**:
- Notebook code for generating the five main figures and four supplementary figures presented in the manuscript is provided.
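Several of the training scripts above use a prototypical loss. As a rough illustration only (the repository's exact implementation may differ), a minimal PyTorch sketch of a prototypical loss in the style of Snell et al. (2017) is shown below; the function name and tensor shapes here are assumptions, not taken from the scripts:

```python
import torch
import torch.nn.functional as F

def prototypical_loss(embeddings, labels, n_classes):
    """Illustrative prototypical loss (not the repository's exact code).

    embeddings: (N, D) tensor of network output embeddings
    labels:     (N,)  integer class labels in [0, n_classes)
    """
    # Prototype of each class = mean embedding over that class's samples.
    prototypes = torch.stack(
        [embeddings[labels == c].mean(dim=0) for c in range(n_classes)]
    )  # shape: (n_classes, D)
    # Squared Euclidean distance from every sample to every prototype.
    dists = torch.cdist(embeddings, prototypes) ** 2  # shape: (N, n_classes)
    # Softmax over negative distances: nearer prototypes get higher probability.
    return F.cross_entropy(-dists, labels)
```

Unlike the plain cross-entropy setup of **train_v2_task?.py**, this loss classifies a sample by its distance to class prototypes in embedding space rather than by a fixed linear readout.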