The most informative home video

My family has some home videos. Some are on actual cassettes, and others are on our iPhones or in the cloud. They’re mostly short, and like photographs, they’re somewhat staged. In many cases, they show premeditated or sugar-coated shots of our lives.

But MIT researcher Deb Roy has some home videos that break the mold I just described. Roy and his wife set up video cameras to record every room of their house for about 10 hours a day during the first three years of their son’s life, amassing more than 200,000 hours of data. Analyzing it is a mammoth and ongoing task, but it has already helped answer some highly debated, longstanding questions about how humans develop language. For example, one paper describes how the data can be used to predict the “birth” of a spoken word.

What factors facilitate word learning?

Not surprisingly, the child produced shorter words before longer ones, words that tended to occur in shorter sentences before those that occurred in longer ones, and frequently heard words before rarer ones. To me, these are intuitive features that make words easier to learn.

But some less intuitive features also predicted how early the child would produce certain words. These features were more contextual, taking into account when and where he heard a word in the time leading up to his first production of it.

One feature that predicted a word’s birth was how often the boy heard the word in different rooms throughout the house. Some words were spatially distinct (for example, “breakfast” usually occurred in the kitchen), while others, like “toy,” were more spatially dispersed. Spatial distinctiveness tended to help him learn words faster.

The researchers also measured temporal distinctiveness: when during the day the toddler was likely to hear the word. Again, “breakfast” was temporally distinct, occurring almost exclusively in the morning, while “beautiful” was dispersed throughout the day. As with spatially distinct words, temporally distinct words, those most often said around the same time of day, were learned sooner than those whose uses were spread across a typical day.

Finally, they looked at the contextual distinctiveness of each word: the variation in the language the child tended to hear alongside the word of interest. The word “fish” was contextually distinct, for example, often occurring with other animal words or words related to stories. “Toy,” on the other hand, occurred with a much greater variety of words and topics, so it was less contextually distinct. As with spatial and temporal distinctiveness, contextual distinctiveness made a word easier to learn.
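
If I understand the approach, each flavor of distinctiveness can be quantified by comparing a word’s distribution over contexts (rooms, times of day, surrounding topics) to the baseline distribution of all the speech the child heard; KL divergence is one natural measure. Here’s a toy sketch of the spatial case with a made-up corpus (my own illustration, not the authors’ code):

```python
import math
from collections import Counter

def kl_divergence(p, q):
    """KL(P || Q): how far a word's context distribution deviates
    from the baseline distribution of speech overall."""
    return sum(p[k] * math.log2(p[k] / q[k]) for k in p if p[k] > 0)

def room_distribution(utterances):
    """Normalize counts of utterances per room into probabilities."""
    counts = Counter(room for room, _ in utterances)
    total = sum(counts.values())
    return {room: n / total for room, n in counts.items()}

# Hypothetical corpus: (room, word) pairs standing in for transcribed speech.
corpus = (
    [("kitchen", "breakfast")] * 40
    + [("bedroom", "breakfast")] * 2
    + [("living room", "breakfast")] * 1
    + [("kitchen", "toy")] * 15
    + [("bedroom", "toy")] * 14
    + [("living room", "toy")] * 13
    + [("kitchen", "water")] * 20
    + [("bedroom", "water")] * 20
)

baseline = room_distribution(corpus)  # where speech in general happens

for word in ("breakfast", "toy"):
    word_dist = room_distribution([(r, w) for r, w in corpus if w == word])
    print(word, round(kl_divergence(word_dist, baseline), 3))
# "breakfast" (concentrated in the kitchen) scores higher than "toy"
# (spread across rooms), i.e., it is more spatially distinct.
```

The same recipe works for temporal distinctiveness by swapping rooms for time-of-day bins.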

This TED talk blew my mind.

Why does distinctiveness affect word learning?

Children learn language through conversations that are inseparable from the everyday contexts they occur in. Those contexts are not just incidental features of word learning; they are crucial variables affecting how language is learned. This work is a reminder that language use and development are about much more than language, just as thinking requires much more than just a brain. We humans are inseparable from our environments, and those environments play a big role in shaping how we think and navigate the wonderfully messy world we live in.


CogSci 2016 Day 1 Personal Highlights

I stepped out of the airport Wednesday night and my glasses fogged up. Ah, what a reminder of the world that awaits outside southern California, where I’m immersed in my PhD work. I had arrived in Philadelphia for CogSci 2016, ready to be bombarded by fascinating new work on the mind and behavior and by the clever researchers responsible for it.

With 9 simultaneous talks at any time and over 150 posters on display during each poster session, I of course only got to learn about a fraction of all that was there. Nonetheless, here are some projects that are still on my mind after day 1:

  • Cognitive biases and social coordination in the emergence of temporal language (Tessa Verhoef, Esther Walker, Tyler Marghetis): Across languages, people use spatial language to talk about time (e.g., looking forward to a meeting, or reflecting back on the past). How does this practice come about? To investigate language evolution on a much faster time scale than occurs in the wild, this team had pairs of participants use a vertical tool (I believe the official term was bubble bar; see below) to create a communication system for time concepts like yesterday and next year. The pairs were in separate rooms, so this new system was their only way of communicating. Each successive pair inherited the previous pair’s system, allowing the researchers to observe the evolution of the bubble bar language for temporal concepts. Over the generations, participants became more accurate at guessing the term their partner was communicating (as the language was honed), and systematic mappings between space and time emerged. Although each chain ended up with a different system, within a single chain people tended to use the top part of the bar for the same types of concepts (e.g., past or future) and used systematic motions (for example, small rapid oscillations for close times like tomorrow and yesterday, and larger, slower oscillations for more temporally distant concepts).

    [Figure: The bubble bar]
  • Deconstructing “tomorrow”: How children learn the semantics of time (Katharine Tillman, Tyler Marghetis, David Barner, Mahesh Srinivasan): This team had children of varying ages place time points (like yesterday and last week) on a timeline, then analyzed features of the kids’ timelines to investigate at what age kids begin to understand three different concepts of time in the ways adults do. The first was whether a time is in the past or future relative to now (did kids place it to the left or right of the now mark?). The second was whether kids understand sequences of times, for example, that last week comes before (to the left of) yesterday, regardless of where those events were placed relative to now. Finally, they compared how kids’ timelines showed remoteness, how temporally distant different events are from now, to how adults showed the same concept. Adults, for example, will place tomorrow quite close to the now mark and next year significantly farther away. They found that kids acquired an adult-like sense of remoteness much later than the first two concepts, deictic (past vs. future) and sequence. While those two reliably emerged by age 4, knowledge of remoteness wasn’t present until much later, after age 7. These data suggest that while kids can pick up a lot about what time words mean from the language they encounter, they may need formal education to really grasp that tomorrow is much closer to today than last year was.
  • Gesture reveals spatial analogies during complex relational reasoning (Kensy Cooperrider, Dedre Gentner, Susan Goldin-Meadow): After reading about positive feedback systems (an increase in A leads to an increase in B, which leads to a further increase in A…) and negative feedback systems (an increase in A leads to an increase in B, which leads to a decrease in A), participants had to explain these complicated concepts. Even though the material they read contained almost no spatial language, spatial gestures were extremely common during their explanations, often occurring without any accompanying spatial language in speech. These gestures often built on each other, acting as a way to convey relational information through space, and they suggest that people invoke spatial analogies to reason about complex relational concepts.

    [Figure: Sample gestures showing (from left to right) a factor reference, a change in a factor, a causal relation, and a whole-system explanation.]
  • Environmental orientation affects emotional expression identification (Stephen Flusberg, Derek Shapiro, Kevin Collister, Paul Thibodeau): Past work has shown that we not only talk about emotions using spatial metaphors (for example, I’m feeling down today, or your call lifted me up), but also invoke these same aspects of space to think about emotions. In the first experiment, the researchers found that people were faster to say that a face was happy when it was oriented upward and that it was sad when it was oriented downward (both congruent with the metaphor) than in the incongruent cases. Then, to differentiate between an egocentric reference frame (facing up or down with respect to the viewer’s body) and an environmental one (facing up or down with respect to the world), people completed the same face classification task while lying on their sides. This time, the metaphor-consistent effect (faster to say happy for upward faces and sad for downward ones) appeared only when the face was oriented with respect to the world, not when it was oriented with respect to the person’s own position. This talk won my surprising-finding award for the day, since researchers often explain our association between emotion and vertical space as originating in bodily experience: we physically droop when we’re sad and rise taller when we’re happy. That explanation isn’t consistent with what these researchers found; instead, people’s association between vertical space and emotion seems to critically involve vertical space with respect to the environment, not their own bodies.
  • Context, but not proficiency, moderates the effects of metaphor framing: A case study in India (Paul Thibodeau, Daye Lee, Stephen Flusberg): People use the metaphors they encounter to reason about complex issues. For example, when a crime problem is framed as a beast, they think the town should take a more punitive approach to it than when the same problem is framed as a virus. What if you encounter this metaphor in English, but English isn’t your native language; does the frame influence your reasoning less than it would a native English speaker’s? Participants from India (none of whom were native English speakers) read the metaphor frames embedded in context and reasoned about the issues framed metaphorically. Overall, people reasoned in metaphor-consistent ways (e.g., saying that crime should be dealt with more punitively after it was framed as a beast than as a virus). Self-reported English proficiency did not affect how much people were influenced by the metaphor; more fluent speakers were not more swayed by the frames. However, the context in which they typically spoke English did play a role: those who reported using English mostly in informal contexts, such as with friends and family and through the media, were more influenced by the frames than those who reported using English more in formal contexts, like educational and professional settings. These experiments don’t explain why informal-context English users were more swayed by metaphorical frames, but they open the door to some cool future research.

Check back for highlights from days 2 and 3!

Ad hoc cognition

I heard about lots of cool research this past weekend at Psychonomics: how taking photos affects our memory (not positively), how we prefer musical meters (e.g., 3/4 or 6/8) that we’ve been exposed to earlier, and how we tend to use the same syntactic structure to talk about things as was used when we learned them (even when we’ve heard other structures between the learning and the talking).

But my favorite group of talks was a block on Ad Hoc Cognition – the idea that our mental categories, concepts, and word meanings are not stable, but are constructed each time we use them. Daniel Casasanto presented a great example of this theory. We might consider that everyone has a relatively similar concept of what constitutes furniture, or at least that one person’s mental category for furniture is always the same. It’s probably made up of things like tables, chairs, and couches. But what about when you’re camping and someone says, “We need some furniture for this bonfire. Let’s pull over that log.” Although logs aren’t typically included when we think of furniture, in that moment, it seems appropriate to call a log a piece of furniture. We have no problem constructing our idea of what belongs in the furniture category on the spot (ad hoc), and the larger argument is that we’re constantly doing this.

Jeff Elman also presented some cool evidence for the same point. He (and others) have measured the electrical activity along people’s scalps (using EEG) while they read sentences, and have found that people’s expectations for upcoming words depend on context. For example, in the context of an ice skater who has just won a championship, when people read “the crowd roared as she took her place on the podium,” they show no evidence of surprise at the word podium (surprise would show up as a spike in activity about 400 ms after the word is presented, known as the N400 component). When they read “the crowd roared as she took her place on the medal,” they show a medium N400 response, indicating that they were moderately surprised by the word medal, which doesn’t actually make sense but names an object that would likely be involved in the situation. When they read that the “crowd roared as she took her place on the beach,” which makes no sense and shouldn’t be associated with the situation at all, they show the largest N400 response, or the most surprise. The fact that the responses to beach and medal differ even though neither makes logical sense shows that people are constantly constructing expectations based on context.
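
One common way to quantify that kind of graded expectation is surprisal, the negative log probability of a word given its context; higher surprisal should roughly track a larger N400. Here’s a toy sketch with made-up probabilities (my own illustration, not Elman’s analysis):

```python
import math

# Hypothetical probabilities a comprehender might assign to the next word
# after "The crowd roared as she took her place on the ___" in the
# ice-skating context. The numbers are invented for illustration only.
next_word_probs = {"podium": 0.70, "medal": 0.02, "beach": 0.0001}

def surprisal(word):
    """Surprisal in bits: -log2 P(word | context).
    Low for expected words, high for unexpected ones."""
    return -math.log2(next_word_probs[word])

for word in ("podium", "medal", "beach"):
    print(f"{word}: {surprisal(word):.1f} bits")
# podium:  0.5 bits -> little surprise, small N400
# medal:   5.6 bits -> moderate surprise, medium N400
# beach:  13.3 bits -> big surprise, large N400
```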

Image: http://www.nydailynews.com/sports/olympics/wagner-olympics-finishing-4th-article-1.1576994

So if we’re constructing our concepts, categories, and word meanings (CC&Ms) on the spot every time we use them, is it futile to study our mental CC&Ms? Casasanto gave a helpful analogy to show that studying how we think about these things is still fruitful, even if they’re context-dependent and therefore unstable. Physicists know that Newtonian physics is not quite accurate and that the theory of general relativity is more scientifically sound, yet we rely on Newtonian physics all the time: when we go on a diet, we use a scale to measure our mass, since the Newtonian concept of mass is good enough for what we need. Returning to mental concepts: if a child asks you what a plethora is, it’s probably not a good idea to respond that it’s an ill-formed question just because our idea of plethora is constantly constructed from context. We can come up with a definition of plethora that’s close enough for many purposes, and the kid will not be living a lie if he doesn’t realize that the meaning might differ slightly from one context to the next. But people who research CC&Ms for a living should honor their context-sensitivity and bear it in mind as they define and explore these nebulous ideas.

The future of cog sci

As a final project in a class on theories and methods in cognitive science, we had to give a talk on what we believed to be the future of cog sci, drawing on what we had synthesized during the course and our own interests. I’ve started to think of this blog as a time capsule for my thoughts about the field, so I find it fitting to post my presentation (or at least the spoken part and a few screenshots from the visuals). I’ll probably look back on this someday and think, “oh, how much I had yet to learn…” but that won’t be a bad thing. Today, having completed only 10 weeks of grad school courses, here’s what I think about the present and future of cog sci:

As I was thinking about the future of cog sci, I started by rephrasing the question to myself. Where’s the field going? What questions and methods will push the field forward? I don’t need to specify that cog sci isn’t a physical thing, so it can’t actually go anywhere, or that questions and methods don’t have agency and therefore can’t push the abstract field anywhere. The fact that we produce and comprehend phrases like these naturally, and possibly even base our conceptualizations on them, is one of the many phenomena we’re still working to understand.

Because we’re all familiar with the cultural convention of blending time and space with a timeline, I’m going to use that as a jumping off point. In order to tackle the question of the future of cog sci, we have to consider the larger context of its historical roots.

Early research was materialistic in that it didn’t differentiate the mind from the brain. Neuroscience contributed the tenet that brain activity equals thought, while computer science gave rise to the conviction that thinking is equivalent to symbolic information processing. As new topics for investigation have emerged, many researchers have abandoned these traditional assumptions. The orderly “mind as a computer” metaphor just doesn’t mesh with the complexity and messiness of our cognition.

Traditional cog sci: the mind is inseparable from brain processes and the notion of the mind as a computer was widespread

Over time, more and more people have begun to recognize that every brain is situated in a unique body, and every body situated in a dynamic world. Spivey eloquently noted, “it might just be that your mind is bigger than your brain.” This is where we are now: context often reigns supreme. Most researchers accept that people behave differently inside a lab than outside it, and they strive to make their experiments as ecologically valid as possible, or in other words, as similar as they can to the real-world circumstances they want to generalize about.

A brain within a body within a dynamic, complex world

Gick and Holyoak provide one of many examples of the importance of context, specifically linguistic context, for reasoning. Half of their subjects were given the tumor problem: a patient needs radiation to kill a tumor. A weak ray will not be strong enough to kill the tumor, while a strong ray will kill much of the healthy surrounding tissue. What should the radiologist do? Few people answered correctly. The other participants received the fortress problem: there’s a fortress at the intersection of many roads. The general wants to capture the fortress, but if all his troops attack from the same road, they run the risk of being blown up by mines. What should he do? This question is much easier: the army should split up and attack from many angles. The answer to the tumor question is analogous: the radiologist should send converging weak rays from different directions so that they’ll be cumulatively strong enough to kill the tumor. The framing of the problem makes a significant difference in how people go about solving it and whether they succeed.

Tumor and fortress problems

Language is an undeniably huge part of our culture and day-to-day lives. We communicate so much directly, and possibly even more indirectly. When I say, “can you open the window?” we understand that I am in fact asking, “will you open the window?” The unspoken information we intend to convey is interesting, but what I find even more interesting is all the information we convey linguistically without intending to. When I talk about cognitive science moving forward, I’m blending the concepts of physical movement and time, with the assumption (thanks to my culture and language) that the future is forward. If I tell you that there’s a canyon separating the rich and the poor in America, I might be conveying different information than if I say, “the poor are lagging behind the rich.” A canyon is an impenetrable divide resulting from natural causes. Lagging behind, on the other hand, suggests a separation caused by the slower group’s incompetence or laziness, but one that can change over the course of a race. Do these different metaphoric instantiations of a single abstract concept have consequences for our thoughts and behaviors?

Are there implications of talking about a canyon between the rich and poor versus the poor lagging behind in the economic race?

We know that context is important. But context is a really broad term. What context is important for different aspects of cognition? How can contextual alterations shape our thoughts and behaviors? This is, I think, one of the many important directions in which cog sci is heading.

Context is everything

I just read a classic paper by Bransford and Johnson that I find really clever. Before going into the findings, here is the passage presented to subjects in Experiment 1:

[Passage from Experiment 1, presented as an image]

Even though the sentences in this passage follow normal rules of English grammar and the vocabulary is straightforward, you probably didn’t understand much of what you read and will have a hard time recalling it. Things might be clearer, however, if you see this picture:
[Picture providing the context for the passage]

Not surprisingly, the subjects who saw the picture before reading reported better comprehension and recall of the passage upon finishing it than those who saw the picture after. A third group was never shown the picture establishing context, and their comprehension and recall were lowest.

In another experiment described in this paper, the participants read a different passage:

[Passage from the second experiment, presented as an image]

Again, there’s nothing wrong with the English, but it doesn’t seem to say much. After learning that the paragraph is about doing laundry, however, it takes on a whole new meaning. As with the first study, subjects who were told that it was about laundry before reading it comprehended and recalled significantly more than those who were told the topic after reading. Again, those who were never told the topic did the worst.

It’s pretty awesome how just a small piece of context can give meaning to an entire paragraph. What I love is that it brings out the flexibility of words and the incredible difficulty of grasping abstract descriptions and concepts without mapping them onto concrete realities. It’s also something to keep in mind when we write things we want people to understand: readers need context!

Little-known benefits of a positive mood

I just read an interesting paper by Marta Kutas called “One lesson learned: frame language processing, literal and figurative, as a human brain function.” In it, she discusses common assumptions underlying much research on language processing and how they have evolved over time. Instead of treating language as a brain function that can be isolated from all others, she calls for a more open-minded approach to psycholinguistic research, one that incorporates the contributions of both hemispheres, the importance of timing for linguistic processing, personality traits and moods, and individual differences as a proxy for experience.

One part I found especially interesting was the section on an individual’s mood as an aspect of context that can influence perception. One study she mentions found that the mood induced by the researchers (positive, negative, or neutral) affected whether participants were more attuned to the global or local characteristics of an image. Those who were happy were more likely than those in a neutral or negative mood to use the overall shape of an image (a global characteristic) as a criterion for making decisions about novel stimuli, while the other participants seemed more attuned to an image’s local features.

Participants in positive moods have also been shown to be better at solving difficult problems by coming up with less obvious solutions. People in positive moods produce more associations (and more unusual ones) when given a word. Along these lines, they have also demonstrated greater sensitivity to more distant relations between words. For example, when given the words “bee,” “comb,” and “dew” and asked to come up with a fourth word that can be combined with all of them, participants in happy moods tended to outperform those in neutral moods.

The word “honey” occurred to participants in a positive mood more quickly than to those in a neutral or negative mood.
Image: http://jawadhutourism.org/honey.html

Finally, a positive mood may also modulate aspects of semantic analysis and contextual integration. In ERP studies, the N400 component is larger when participants encounter unexpected stimuli. When participants read sentences whose final word was unexpected (such as, “They wanted to make the hotel look more like a tropical resort. So, along the driveway they planted rows of tulips.”), those in a positive mood showed smaller N400 amplitudes than those in neutral moods. In other words, they were less surprised by the unpredictable endings, possibly because it was easier for them to arrive at a less obvious continuation than it was for other participants.

We already know our mood has many consequences for how we perceive situations and act in them, but the idea that it may also affect perception and cognition that we typically consider outside the realm of mood is pretty interesting. A positive mood seems to help us think flexibly and keep an open mind. Just one more complicating contextual factor to take into account when studying cognition…

A new way of looking at vision

Visual processing has always been one of the topics in cognitive science that interests me least, but after reading the paper “Constructing Meaning” by Seana Coulson, I’ve changed my mind (at least a tiny bit). Instead of subscribing to the folk psychological view of vision as a “passive registration process in which people record optical information that exists in the world,” she suggests that it’s an “active, constructive process in which people exploit optical information to interact with the world.”

Early accounts represented vision as a hierarchical, feed-forward process, but more recent studies have revealed a number of backward connections, in which information is passed from higher-level areas to lower ones, as well as a number of lateral information transfers. Vision isn’t as simple as was once thought.

As if this weren’t already complicated enough, vision is even more complex than this!
Image: http://www.webmd.com/eye-health/picture-of-the-eyes

Further demonstrating this point is vision’s context sensitivity. One example is the phenomenon of color constancy: even when lighting conditions change, the color we perceive objects to be remains constant. Another example is neural filling-in. We all have a blind spot, the region of the retina where the optic nerve attaches and there are no photoreceptors, but we don’t perceive the small hole in our visual field that we would if our brains weren’t somehow filling in the gap. This is a specific example of the more general fact that despite frequent blinks (about every 5 seconds), we don’t experience perceptual discontinuity. And a final example that I thought was earth-shattering the first time I read Noë’s account of it: when we look at an object, we’re actually only seeing it in two dimensions, yet we perceive the whole of it in its three-dimensional glory.

The short story: we don’t perceive what’s actually there, but instead construct a representation of what we’re seeing based on context and prior knowledge of the world around us.

Then Coulson likens the process of visual perception to language processing. Making meaning out of an utterance is not simply a decoding process based solely on the linguistic information, just as visual perception isn’t simply passively absorbing visual stimuli. Instead, perceiving linguistic meaning involves a complex interplay between linguistic and nonlinguistic knowledge. After reviewing quite a bit of cognitive neuroscience data, Coulson concludes that the studies “argue against a deterministic, algorithmic coding and decoding model of meaning, and for a dynamic, context sensitive one.” I thought this paper was a really cool way of saying that context is crucial for making meaning, whether out of visual or linguistic input, rather than an incidental property of the stimuli.

Embodied Language Conflict

Last night, I read a cool paper by Bergen and colleagues on the role of embodiment in understanding language. The idea is that portions of the brain that are used for perception and motor activity also play a role in understanding language via a process referred to as “simulation”.

Variations of the Perky effect can be used to study language understanding. For example, if a person is simulating while understanding language, it may be harder for him to use that same part of the brain in a visual or motor task. This is exactly what Bergen et al. found:

In Experiment 1, participants were presented with sentences whose verbs literally denoted upward or downward motion, such as “The cork rocketed,” an “UP” sentence. At the same time, they had to characterize pictures of objects located at either the top or the bottom of a screen. When an object appeared at the top of the screen after an “UP” sentence, participants were slower to respond, an interference effect that may have occurred because they were simulating the “UP” sentence with the same visual machinery. The same effect was observed for “DOWN” sentences and objects at the bottom of the screen.

When reading that “the cork rocketed,” you probably simulated something moving in the upward direction, like this.
Image: http://www.thewinectr.com

Experiment 2 was the same, except that up/down nouns were used instead of verbs. The experimenters again found an interference effect in the same direction, suggesting that it isn’t the specific lexical entry that causes the simulation, but understanding the sentence as a whole.

In Experiment 3, the sentences contained verbs expressing metaphorical motion (for example, “The prices climbed.”). There was no interference effect, nor was there one in Experiment 4, which used abstract, non-metaphorical verbs (such as “the percentage decreased”). Together, these results support the idea that the meaning of a sentence as a whole, rather than its individual words, triggers simulation.
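
To make the logic of these interference effects concrete, here’s a toy sketch of the kind of analysis involved, using simulated reaction times and made-up effect sizes rather than anything from the actual paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 24

# Simulated per-subject mean RTs (ms). "Match" trials pair an UP sentence
# with a top-of-screen object (or DOWN with bottom); "mismatch" trials
# cross them. The simulation account predicts SLOWER responses on match trials.
mismatch_rt = rng.normal(650, 40, n_subjects)
match_rt = mismatch_rt + rng.normal(25, 15, n_subjects)  # built-in ~25 ms cost

# Paired t-test: is the match-minus-mismatch cost reliably above zero?
t, p = stats.ttest_rel(match_rt, mismatch_rt)
print(f"interference effect: {np.mean(match_rt - mismatch_rt):+.1f} ms, "
      f"t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")
```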

Then this morning, I read a post about a paper that counters Bergen et al.’s findings. In the fMRI study reported, participants were shown nouns, verbs, noun-like nonwords, and verb-like nonwords (their endings were what signaled whether they were noun- or verb-like). The authors found that when viewing verbs and verb-like nonwords, participants’ premotor cortices were activated more than when viewing nouns and noun-like nonwords. They took this as an indication that the observed cortical responses to action words result from ortho-phonological probabilistic cues to grammar class, as opposed to embodied motor representations.

But what about context? We rarely encounter words in isolation; we encounter words embedded in sentences, and sentences in their own contexts. Since the methods in the anti-embodiment study don’t reflect the real-life situations in which we encounter language, are they meaningful? And how can we reconcile these two studies?

Imagining an apple vs. a banana

I’ve written previously about my skepticism regarding the use of fMRI to localize a function in the brain. Multivariate pattern analysis (MVPA) is commonly used to detect subtle differences in patterns of brain activity in fMRI data.

This post articulated the misleading potential of MVPA really clearly.

In the new paper, Todd et al make a very simple point: all MVPA really shows is that there are places where, in most people’s brains, activity differs when they’re doing one thing as opposed to another. But there are infinite reasons why that might be the case, many of them rather trivial.

The authors give an example of two similar tasks: imagining apples and imagining bananas. Although standard fMRI analysis might find no significant differences in activation, an MVPA might reveal one area in which the two tasks produce differing patterns of activity (differences in blood flow). At first, it might seem that the MVPA revealed the area of the brain responsible for encoding specific fruits, but this conclusion overlooks context. Every person has had different experiences with apples and bananas. Maybe a participant likes one and not the other. Maybe he ate a banana right before the study and therefore has bananas on his mind. The potential contextual differences in any individual’s experiences with the two fruits are endless, and they would lead to apparent differences in individual activation patterns for the two tasks.

Apples and bananas are linked to different experiences and thoughts for everyone.
Image: http://www.ireallylikefood.com/708661250/do-you-like-to-eat-eat-eat-apples-or-bananas/

Here’s another way of looking at the hypothetical task (image below). For Participant A (the dark line), task A (imagining the apple) requires more concentration than task B (imagining the banana). For Participant B, the difficulties are reversed. Normally, these individual differences would cancel each other out, showing no significant difference between brain activity when people imagine apples and when they imagine bananas, but MVPA is designed precisely so that individual differences don’t cancel out.

MVPA confound
Image from Todd et al.: http://www.sciencedirect.com/science/article/pii/S1053811913002887
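
To see why opposite effects cancel in a group average but not in within-subject pattern analysis, here’s a tiny simulation of my own (not Todd et al.’s code, and a deliberately crude stand-in for a real classifier):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_trials = 20, 50

# Each subject shows an idiosyncratic effect in one voxel: +1 if imagining
# apples takes more effort for them, -1 if bananas do. Signs split evenly.
signs = np.array([1, -1] * (n_subjects // 2))

group_diffs, accuracies = [], []
for s in signs:
    apple = rng.normal(s * 0.5, 1.0, n_trials)    # voxel activity, apple trials
    banana = rng.normal(-s * 0.5, 1.0, n_trials)  # voxel activity, banana trials
    group_diffs.append(apple.mean() - banana.mean())

    # Within-subject "decoder": label each trial by which condition mean
    # it falls closer to, a crude stand-in for a trained MVPA classifier.
    boundary = (apple.mean() + banana.mean()) / 2
    if apple.mean() > banana.mean():
        correct = np.sum(apple > boundary) + np.sum(banana < boundary)
    else:
        correct = np.sum(apple < boundary) + np.sum(banana > boundary)
    accuracies.append(correct / (2 * n_trials))

print(f"group mean apple-banana difference: {np.mean(group_diffs):+.3f}")  # ~0
print(f"mean within-subject decoding accuracy: {np.mean(accuracies):.2f}")  # ~0.7
```

The group contrast averages to roughly zero because the subjects’ effects point in opposite directions, yet every individual subject is decodable well above chance, exactly the asymmetry the figure illustrates.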

The authors of the original paper suggest that MVPA confounds can be controlled with classical General Linear Model (GLM) analysis or with post hoc linear regressions, so I don’t take this as a sign that MVPA or fMRI data aren’t meaningful, but more as a warning to be very cautious when interpreting them. Further, I take it as a reminder of the importance of context: we’re all such different creatures, and our vastly different experiences inevitably color our cognition and behavior.

Imagery Colors Perception: The Perky Effect

Ben Bergen’s book Louder than Words: The New Science of How the Mind Makes Meaning advocates for embodied simulation as the way we derive meaning from language. The argument is that we understand language by simulating: creating experiences of perception and action in our minds in the absence of their external manifestations.

[Image: Louder than Words book cover]

One piece of evidence that supports the embodiment hypothesis is the Perky effect. In this famous study, participants were asked to fixate on a screen and visualize various objects. While they did so, Perky projected a faint patch of color (just above the visibility threshold, and the same color as the object each participant was visualizing) onto the screen. Afterwards, every participant believed that what they “saw” on the screen was solely a product of their imagination rather than an actual, physically present stimulus.

Bergen points out that the Perky effect is common in everyday life: when you daydream, your eyes are open and you’re completely awake, yet you’re imagining something else that isn’t there. While doing so, you don’t process as much of the visual world around you as you might otherwise.

The results suggest that there isn’t a black-and-white difference between perceiving and imagining; rather, the difference between imagining a stimulus and actually perceiving it is a matter of degree. This makes simulation as a means of understanding language seem quite plausible.

The results also suggest that imagining can interfere with perception. This is really cool to me, because it demonstrates the importance of context for cognition. If the participants in Perky’s experiment hadn’t been visualizing objects, they most likely would have reported seeing the faint colors projected on the screen. Similarly, if I’m daydreaming while driving down the street, I’ll perceive my surroundings quite differently than if I’m fully focused on my driving. This is a reminder that context is inescapable and will always color our perceptions of reality.