Study identifies our ‘inner pickpocket’

Researchers have identified how the human brain determines the properties of an object using purely statistical information: a result that suggests there is an ‘inner pickpocket’ in all of us.

The researchers, from the University of Cambridge, the Central European University, and Columbia University, found that one reason successful pickpockets are so efficient is that they can identify objects they have never seen before just by touching them. Similarly, we can anticipate what an object in a shop window will feel like just by looking at it.

In both scenarios, we are relying on the brain’s ability to break up the continuous stream of information received by our sensory inputs into distinct chunks. The pickpocket is able to interpret the sequence of small depressions on their fingers as a series of well-defined objects in a pocket or handbag, while the shopper’s visual system is able to interpret the pattern of light reaching the eye as distinct objects in the window.

Our ability to extract distinct objects from cluttered scenes by touch or sight alone and accurately predict how they will feel based on how they look, or how they look based on how they feel, is critical to how we interact with the world.

By performing clever statistical analyses of previous experiences, the brain can both identify objects immediately, without the need for clear-cut boundaries or other specialised cues, and predict the unknown properties of new objects. The results are reported in the open-access journal eLife.

“We’re looking at how the brain takes in the continuous flow of information it receives and segments it into objects,” said Professor Máté Lengyel from Cambridge’s Department of Engineering, who co-led the research. “The common view is that the brain receives specialised cues, such as edges or occlusions, about where one thing ends and another begins, but we’ve found that the brain is a really smart statistical machine: it looks for patterns and finds building blocks to construct objects.”
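To make the idea concrete, here is a minimal toy sketch (in Python, and not the study’s actual model) of how purely statistical regularities can reveal object-like ‘chunks’ in a continuous stream that contains no boundary cues: elements belonging to the same building block reliably follow one another, while transitions between blocks are far less predictable. The chunk names, stream length, and 0.5 threshold are all illustrative assumptions.

```python
import random
from collections import defaultdict

# Toy sketch only, not the study's model: recovering object-like
# "chunks" from transition statistics alone, with no boundary cues.

# Hypothetical building blocks: three two-element "objects".
chunks = [("A", "B"), ("C", "D"), ("E", "F")]

random.seed(0)
# A continuous stream made by concatenating randomly chosen chunks.
stream = [element for _ in range(500)
          for element in random.choice(chunks)]

# Count how often each element follows each other element.
pair_counts = defaultdict(int)
elem_counts = defaultdict(int)
for a, b in zip(stream, stream[1:]):
    pair_counts[(a, b)] += 1
    elem_counts[a] += 1

# Transition probability P(next = b | current = a): roughly 1.0 inside
# a chunk, roughly 1/3 between chunks, so a simple threshold suffices.
def transition_prob(a, b):
    return pair_counts[(a, b)] / elem_counts[a]

within = [pair for pair in pair_counts if transition_prob(*pair) > 0.5]
print(sorted(within))  # [('A', 'B'), ('C', 'D'), ('E', 'F')]
```

The analogy is deliberately loose, but it captures the paper’s central claim: chunk boundaries can be inferred from the statistics of experience rather than from explicit edge or occlusion cues.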

Lengyel and his colleagues designed scenes of several abstract shapes without visible boundaries between them, and asked participants to either observe the shapes on a screen or to ‘pull’ them apart along a tear line that passed either through or between the objects.

Participants were then tested on their ability to predict the visual properties of these jigsaw pieces (how familiar real pieces appeared compared with abstract pieces constructed from the parts of two different pieces) and their haptic properties (how hard it would be to physically pull apart new scenes in different directions).

The researchers found that participants were able to form the correct mental model of the jigsaw pieces from either visual or haptic (touch) experience alone, and were able to immediately predict haptic properties from visual ones and vice versa.
