A new way of looking at vision

Visual processing has always been one of the topics in cognitive science that interests me the least, but after reading the paper “Constructing Meaning” by Seana Coulson, I’ve changed my mind (at least a tiny bit). Instead of subscribing to the folk psychological view of vision as a “passive registration process in which people record optical information that exists in the world,” she suggests that it’s an “active, constructive process in which people exploit optical information to interact with the world.”

Early accounts of vision represented it as a hierarchical, feed-forward process, but more recent studies have revealed that there are in fact a number of backward connections, in which information is passed from higher-level areas to lower, as well as a number of lateral information transfers. Vision isn’t as simple as was once thought.

As if this wasn’t already complicated enough, vision is even more complex than this!
Image: http://www.webmd.com/eye-health/picture-of-the-eyes

Further demonstrating this point is the notion of context sensitivity. One example is the phenomenon of color constancy: even when lighting conditions change, the color we perceive objects to be remains constant. Another example is neural filling-in. We all have a blind spot, the region of the retina where the optic nerve attaches and there are no photoreceptors, but we don’t perceive the small hole in our visual field that we would if our brains weren’t somehow filling in the gap. This is a specific instance of the more general puzzle that despite frequent blinks (about every 5 seconds), we don’t experience perceptual discontinuity. And a final example that I thought was earth-shattering the first time I read Noë’s account of it: when we look at an object, we’re actually only seeing it in two dimensions, yet we perceive the whole of it in its three-dimensional glory.

The short story: we don’t perceive what’s actually there, but instead construct a representation of what we’re seeing based on context and prior knowledge of the world around us.

Then Coulson likens the process of visual perception to language processing. Making meaning out of an utterance is not simply a decoding process based solely on the linguistic information, just as visual perception isn’t simply passively absorbing visual stimuli. Instead, perceiving linguistic meaning involves a complex interplay between linguistic and nonlinguistic knowledge. After reviewing quite a bit of cognitive neuroscience data, Coulson concludes that the studies “argue against a deterministic, algorithmic coding and decoding model of meaning, and for a dynamic, context sensitive one.” I thought this paper was a really cool way of saying that context is crucial for making meaning, whether out of visual or linguistic input, rather than an incidental property of the stimuli.


Imagery Colors Perception: The Perky Effect

Ben Bergen’s book Louder than Words: The New Science of How the Mind Makes Meaning advocates for embodiment as the source of meaning in language. The argument is that we understand language by simulating: creating experiences of perception and action in our minds in the absence of their external manifestations.


One piece of evidence that supports the embodiment hypothesis is the Perky effect. In this famous study, participants were asked to fixate on a screen and visualize various objects. While they were doing so, Perky projected a faint patch of color onto the screen (just above the visibility threshold, and the same color as the object each participant was visualizing). Afterwards, every participant believed that what they “saw” on the screen was solely a product of their imagination, rather than an actual, physically present stimulus.

Bergen points out that the Perky effect is common in everyday life: when you daydream, your eyes are open and you’re completely awake, yet you’re imagining something else that isn’t there. While doing so, you don’t process as much of the visual world around you as you might otherwise.

The results suggest that there isn’t a black-and-white difference between the experiences of perceiving and imagining; instead, the difference between imagining a stimulus and actually perceiving it is a matter of degree. This makes simulation as a means for understanding language seem quite plausible.

The results also suggest that imagining can interfere with perception. This is really cool to me, because it demonstrates the importance of context for cognition. If the participants in Perky’s experiment hadn’t been visualizing objects, they most likely would have reported seeing the faint colors projected on the screen. Similarly, if I’m daydreaming while driving down the street, I’ll perceive my surroundings quite differently than if I were fully focused on my driving. This is a reminder to me that context is inescapable and will always color our perceptions of reality.