Pigeons' Pixel Pattern Recognition
Description: Six pigeons were trained to simultaneously discriminate two different patterns — resembling the letters Z and H — displayed on a pair of horizontal miniature 5 x 7 red light-emitting diode (LED) arrays. Once they had learned the task to quasi-perfection (>90% correct choices), they were successively exposed to 84 further sessions that included interspersed non-reinforced presentations of variously degraded, modified, and translocated pattern pairs, intended to examine the pigeons' generalization in responding to these 84 modified stimulus pairs. Their choice scores during these tests ranged from 50% to 100% for one or the other stimulus of the various pattern pairs presented. Analysis of these data showed that patterns consisting of few lit LEDs generally led to low % choice scores, whereas those consisting of many lit LEDs yielded a broad range of low to high % choice scores. Those generalization patterns that shared a high number of lit pixels with the positive (rewarded) training stimulus were most preferred, and those that shared a high number of lit pixels with the negative (non-rewarded) training pattern were most avoided; a discrimination index based on this circumstance correlated quite highly (r = 0.74) with the patterns' % choice scores. A correlational pixel-by-pixel analysis revealed that three pixel locations common to both the positive and negative training patterns were nearly neutral, and that eight pixels of the Z pattern and six pixels of the H pattern played a prominent role in the pigeons' choice behavior. These pixels were disposed in four and two clusters of neighboring locations, respectively. A summary index based solely on these locations still yielded an r = 0.73 correlation with the pattern % choices. However, when referring back to the individual pigeons' data, it was found that the eminence of these clusters was largely due to an averaging mirage.
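The description above does not give the exact formula of the discrimination index, only that it is based on a test pattern's lit pixels coinciding with the positive versus the negative training stimulus. As a minimal sketch, assuming the index is simply the overlap with S+ minus the overlap with S-, normalized by the test pattern's lit-pixel count (function names and the normalization are illustrative assumptions, not the published method):

```python
# Hedged sketch of a pixel-overlap discrimination index.
# Patterns are flat 0/1 sequences (e.g. 35 values for a 5 x 7 array).
# The exact published formula may differ; this only illustrates the idea.

def lit_overlap(pattern, reference):
    """Count pixel positions lit in both patterns."""
    return sum(p & r for p, r in zip(pattern, reference))

def discrimination_index(test, s_plus, s_minus):
    """Assumed index: (overlap with S+ - overlap with S-) / lit pixels of test."""
    n_lit = sum(test)
    if n_lit == 0:
        return 0.0
    return (lit_overlap(test, s_plus) - lit_overlap(test, s_minus)) / n_lit
```

Under this assumption, a test pattern identical to S+ and disjoint from S- scores +1, the reverse scores -1, and patterns overlapping both equally score near 0, which is the kind of quantity one would then correlate with the % choice scores.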
A much more detailed account of the experiment is available here: Delius, J. D., & Delius, J. A. M. (2019). Systematic analysis of pigeons' discrimination of pixelated stimuli: A hierarchical pattern recognition system is not identifiable. Scientific Reports, 9: 13929. https://doi.org/10.1038/s41598-019-50212-1
The pigeons' performance almost certainly rests on a deep learning process supported by a midbrain-forebrain neuronal network with millions of variable synapses. The information processing in such networks may be tractable through simulations with smaller artificial neural networks capable of deep learning. We thus freely offer the data herewith to practitioners of such simulations, on the sole condition that they share the results of their efforts with us and are willing to consider publishing them jointly with us.
The present dataset is composed of 9 separate data tables and 4 tables displaying the stimulus pattern pairs in different presentation styles.
Table 1. Average results of all 6 pigeons over all 84 tests. 1 = lit diodes, 0 = unlit diodes of the stimulus pattern pairs; to the right, the average % choice of the first-listed stimuli.
Table 2. The ensemble of training pairs (1 = frequent; 2 = rare) and the 84 test stimulus pairs in order of use over the 84 test sessions, digitized.
Table 3. The stimuli of Table 2 as the pigeons saw them.
Table 4. The stimuli of Table 2 decomposed pixel-wise with reference to the training stimuli.
Table 5. The same stimulus pairs ordered as the pigeons chose them (50% to 100%).
Tables 6 & 7. Average results of the 3 H+Z- team pigeons and, below, of the 3 Z+H- team pigeons.
Tables 8–13. Six tables showing the separate results for the 6 individual (3 Z+H- plus 3 H+Z-) pigeons.
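Table 2 stores the patterns digitized as 0/1 values while Table 3 shows the same stimuli as the pigeons saw them. A minimal sketch of how a digitized row could be rendered as a 5 x 7 grid (the row-major pixel ordering and the glyphs used are assumptions, not the tables' actual layout):

```python
# Hedged sketch: render a flat 0/1 digitized pattern (1 = lit diode,
# 0 = unlit) as a 5-wide, 7-tall character grid. The ordering of the
# 35 values in the actual data tables may differ from row-major.

def render_pattern(bits, width=5, height=7, lit="#", unlit="."):
    """Return the 0/1 sequence as height lines of width characters each."""
    assert len(bits) == width * height
    lines = []
    for r in range(height):
        row = bits[r * width:(r + 1) * width]
        lines.append("".join(lit if b else unlit for b in row))
    return "\n".join(lines)
```

Such a helper makes it easy to eyeball a digitized pattern pair before feeding it to an artificial-network simulation.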
Please note that the Z and H qualities of the test patterns were defined by the 6-pigeon average responses, but that the two separate 3-pigeon teams, and even more so the individual pigeons, did occasionally depart from that definition; this is noted by a corresponding green/blue switch in the relevant data rows. (J.D.D. can provide an alternative table suitable for colorblind (Daltonian) readers.) For any further information, please contact email@example.com
Note. The authors thank Andreas M. Brandmaier, Max Planck Institute for Human Development, Berlin, for his helpful comments on this project.
Konstanz, Berlin, September 2019