Language shapes what we see

Language shapes the way we see the world. For example, the metaphors used to describe a concept like crime can shape the way people reason about it; native speakers of different languages tend to conceptualize time in ways consistent with their language; and when an object (say, a chair) is assigned the feminine grammatical gender in one language and the masculine gender in another, speakers of the former language actually think of that object as more feminine than speakers of the latter.

But new research (here’s the pdf) shows that the language we speak literally affects the way we see the world. By tracking people’s eye movements as they watched scenes unfold, researchers found that speakers fixated more on the parts of a scene that their language would require them to encode when communicating than speakers of another language did.

The experiment included German and Korean speakers. One way these two languages differ is in how they refer to spatial relationships between objects. German (like English) has a word for containment (in, which means the same as it does in English), which contrasts with the word used for one object supporting another (auf, analogous to on in English). Korean spatial terms aren’t dictated by whether one object contains or supports the other; instead, different terms are used depending on how tightly the two objects fit together. For example, putting a cap on a pen is a tight fit, which Korean describes with the word kkita. This contrasts with putting an apple in a bowl, a loose fit, described with netha instead (the authors note that while netha tends to be used for loose containment and notha for loose support, the line is blurrier than in English or German).

In German, then, the most relevant part of a spatial relationship (for communication purposes) is whether one object contains or supports the other. In Korean, the most relevant part is the tightness of fit. The researchers therefore predicted that German and Korean speakers would habitually pay closer attention to the parts of a scene their language requires them to encode than to other parts.

In the experiment, participants watched videos of objects coming in contact with each other (screenshots are below), while the researchers tracked their eye movements. Participants always saw a pair (one video followed by a second) and rated how similar the two videos were to each other. Importantly, participants were not told which dimension their similarity ratings should be based on — this was for them to decide on their own.

[Screenshot: stills from the four videos, one per row]

Consistent with their language’s practices, Korean speakers based their similarity ratings on tightness of fit: videos from the second and third rows above (both showing tight fits, and therefore both typically described with kkita) were rated as more similar than the first and second or the third and fourth were (pairs that each combine one kkita video with one netha or notha video). German speakers, on the other hand, based their ratings on containment vs. support. For them, the first and second videos (both described by auf) or the third and fourth (in) were more similar than the second and third (auf vs. in). Again, it’s especially relevant that participants were not told to use their language’s practices to determine similarity; they were simply asked to judge how similar the pairs were, and their language guided them in this task.

The really novel part of this study, though, is the eye-tracking. The researchers found that German speakers spent more time looking at the base object (the bowl, block, or tray that the second object would sit on or in) than Korean speakers did, probably because that object carries the most information for someone who needs to determine whether the relationship will be one of support (on) or containment (in), which is what German speakers habitually have to encode. Korean speakers, rather than looking at the base object as much, looked more at the object doing the resting, and particularly at the area where the two objects met, which in turn holds the most information for speakers of a language that requires communicating tightness of fit.

Even though participants were not watching these videos in order to communicate about them, their viewing patterns still reflected the tendencies of their languages. They have years of experience needing to pay attention to containment vs. support or tightness vs. looseness, so they now approach the world with a predisposition to look for those same characteristics that their language encodes.

This finding may not have huge practical consequences. People’s vision isn’t impaired by what their language encodes or doesn’t. But the study does show that our attention can be influenced by our language. Visual attention is a pretty low-level process, in the sense that it’s constant and so much of it happens without conscious awareness. That, I think, is why this study is so cool — even when people are watching simple videos of objects, their language shapes the way they approach the situation. Just imagine what our language does for us when we actually go out and navigate the world.


Cover photo by MabelAmber. CC.

CogSci 2016 Day 3 Personal Highlights

  • There is more to gesture than meets the eye: Visual attention to gesture’s referents cannot account for its facilitative effects during math instruction (Miriam Novack, Elizabeth Wakefield, Eliza Congdon, Steven Franconeri, Susan Goldin-Meadow): Earlier work has shown that gestures can help kids learn math concepts, and this work explores one possible explanation for why: that gestures attract and focus visual attention. To test this, kids watched a video in which someone explained how to do a mathematical equivalence problem (a problem like 5 + 6 + 3 = __ + 3, where the blank must make both sides total the same amount; here, 11). For some kids, the explainer gestured by pointing to relevant parts of the problem as she explained; for others, she just explained (using the exact same speech as for the gesture-receiving kids). The researchers tracked the kids’ eye movements while they watched the videos and found that those who watched the video with gestures looked more at the problem (and less at the speaker) than those who watched the video sans gesture. More importantly, those who watched the gesture video did better on a posttest than those who didn’t. The main caveat was that the kids’ eye patterns did not predict their posttest performance; in other words, looking more at the problem and less at the speaker while learning may have contributed to better understanding of the math principle, but it can’t be the whole explanation; other mechanisms must also underlie gesture’s effect on learning.

    But in case you started to think that gestures are a magic learning bullet:

  • Effects of Gesture on Analogical Problem Solving: When the Hands Lead You Astray (Autumn Hostetter, Mareike Wieth, Keith Moreno, Jeffrey Washington): There’s a pretty famous problem used in cognitive science to study people’s analogical abilities, referred to as Duncker’s radiation problem: A person has a tumor and needs radiation. A strong beam will kill the healthy tissue it passes through. A weak beam won’t be strong enough to kill the tumor. What to do? The reason this problem is used as a test of analogical reasoning is that participants are first presented with a different story: an army wants to attack a fortress that sits at the intersection of many roads, but mines are placed on the roads leading up to it, so the whole army can’t march down any single road. Yet if they send only a small portion of the army down a road, the attack will be too weak. They solve this by splitting up and converging on the fortress from different roads at the same time. Now can you solve the radiation problem? Even though the solution is analogous (target the tumor with weak rays coming from different directions), people (college undergrads) usually still struggle. It’s a testament to how hard analogical reasoning is.
    But that’s just background leading to the current study, where the researchers asked: if people gesture while retelling the fortress story, will they have more success on the radiation problem? To test this, they had one group of participants they explicitly told to gesture, one group they told not to gesture, and a final group they gave no instructions about gesturing at all. They found that the gesturers in fact did worse than the non-gesturers, and after analyzing what people actually talked about in the different conditions, they discovered that when people gestured, they tended to talk more about the concrete details of the situation (for example, the roads and the fortress), and this focus on the perceptual features of the fortress story inhibited their ability to apply its analogical relations to the radiation case.
    Considering this study alongside the previous one, it’s clear that gesture is neither all good nor all bad; there are lots of nuances of a situation that need to be taken into account, and lots of open questions ripe for research.
  • tDCS to premotor cortex changes action verb understanding: Complementary effects of inhibitory and excitatory stimulation (Tom Gijssels, Daniel Casasanto): We know the premotor cortex is involved when we execute actions, and there’s quite a bit of debate about the extent to which it’s involved in using language about actions. The researchers applied transcranial direct current stimulation, a method that delivers a small electrical current to a targeted area of the brain, over the premotor cortex (PMC) to test its involvement in processing action verbs (specifically, seeing a word or a non-word and indicating whether it’s a real English word). People who received inhibitory PMC stimulation (which decreases the likelihood of PMC neurons firing) were more accurate in their responses to action verbs, while those who received excitatory PMC stimulation (which increases the likelihood of PMC neurons firing) were less accurate. This at first seems paradoxical: inhibiting the motor area helps performance and exciting it hurts. But there are some potential explanations for the finding. One that seems intriguing to me is that since the PMC is also responsible for motor movements, inhibiting the area helped people suppress the inappropriate motor action (for example, actually grabbing upon reading the verb grab), and as a consequence facilitated their performance on the word task; excitatory stimulation over the same area had the opposite effect. Again, this study makes it clear that something cool is going on in the parts of our brain responsible for motor actions when we encounter language about actions… but as always, more research is needed.

  • Tacos for dinner. After three long, stimulating conference days, the veggie tacos at El Vez were so good that they made the conference highlight list.

For every cool project I heard about, there were undoubtedly many more that I didn’t get to see. Luckily, the proceedings are published online, giving us a written record of all the work presented at the conference. Already looking forward to next year’s event in London!