ABSTRACT
Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface, suggesting a columnar functional organization. Some layers have long-range lateral connections, suggesting interactions between columns. Similar patterns of connectivity exist in all regions, but their exact role remains a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Columns integrate their changing inputs over time to learn complete models of observed objects. Lateral connections across columns allow the network to infer objects more rapidly, based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns: we propose that a representation of location, relative to the object being sensed, is calculated within the sub-granular layers of each column. Pairing sensory features with locations is a requirement for modeling objects and therefore must occur somewhere in the neocortex; we propose it occurs in every column in every region. Our network model contains two layers and one or more columns. Simulations show that even small, single-column networks can learn to recognize hundreds of complex multi-dimensional objects. Given the ubiquity of columnar and laminar connectivity patterns throughout the neocortex, we propose that columns and regions have more powerful recognition and modeling capabilities than previously assumed.
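The core ideas in the abstract — each column stores objects as pairings of sensory features with object-centric locations, and lateral connections let columns resolve ambiguity by pooling their partial evidence — can be illustrated with a toy sketch. This is not the authors' implementation; the object names, features, and the set-intersection "voting" below are simplified stand-ins for the network dynamics the paper describes.

```python
# Hypothetical toy sketch of feature-location learning and lateral voting.
# A column stores each object as a set of (location, feature) pairs;
# recognition keeps only objects consistent with every sensation so far,
# and multiple columns "vote" by intersecting their candidate sets.

class Column:
    def __init__(self):
        self.objects = {}  # object name -> set of (location, feature) pairs

    def learn(self, name, pairs):
        self.objects[name] = set(pairs)

    def candidates(self, sensations):
        # Objects consistent with all (location, feature) pairs sensed so far.
        return {name for name, pairs in self.objects.items()
                if all(s in pairs for s in sensations)}

def recognize(columns, sensations_per_column):
    # Lateral "voting": intersect each column's candidate objects.
    result = None
    for col, sensations in zip(columns, sensations_per_column):
        cands = col.candidates(sensations)
        result = cands if result is None else result & cands
    return result

# Two columns (e.g. two fingers), each learning the same two objects.
c1, c2 = Column(), Column()
for col in (c1, c2):
    col.learn("cup", [("rim", "curved"), ("side", "smooth"), ("handle", "loop")])
    col.learn("can", [("rim", "curved"), ("side", "ribbed")])

# One ambiguous sensation leaves a single column undecided, but a second
# column's sensation disambiguates immediately through voting.
ambiguous = recognize([c1], [[("rim", "curved")]])
resolved = recognize([c1, c2], [[("rim", "curved")], [("side", "ribbed")]])
print(ambiguous, resolved)
```

A single column could also disambiguate on its own by sensing further locations over time; the sketch shows why multiple columns converge faster, which is the role the abstract assigns to long-range lateral connections.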
Footnotes
Emails: jhawkins@numenta.com, sahmad@numenta.com, ycui@numenta.com