Metaphor shapes thought: When, why, and how?

Many of the concepts central to the human experience are abstract: things we can’t directly see or touch. Relationships, ideas, and time, for example, are concepts we think and talk about all the time. We commonly use concrete language to talk and think about these things — we use metaphors.

A recent review paper I worked on with Paul Thibodeau and Lera Boroditsky focuses on the role that metaphors in language play in shaping our thoughts. We summarize numerous studies that show the power of metaphor to guide the way we think, and discuss cases in which metaphors are most influential. Here are some of my favorite takeaway points from the paper.

A lot of studies show that metaphors shape the way we think

Climate change, illnesses, the stock market, crime… These are all important issues, and are among the many domains that have been investigated in “metaphor framing studies.” In these studies, researchers present information about a topic to their participants. The information usually includes one metaphor that the researchers intend to test. Other participants get identical information about the same issue, except that it includes a different metaphor. Afterward, the researchers ask everyone the same opinion questions and measure differences in belief that can be attributed to the metaphor people read.
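
To make the logic of these designs concrete, here is a minimal sketch of the key comparison (my illustration, with made-up numbers — not data from any study in the review): two groups read identical passages except for one metaphor, and the framing effect is the difference between their average ratings.

```python
# Hypothetical metaphor-framing analysis: two groups read the same passage
# except for one metaphor ("war" vs. "race" framing for climate change),
# then rate urgency on a 1-7 scale. All numbers are invented for illustration.
from statistics import mean

war_ratings = [6, 5, 7, 6, 5, 6, 7, 5]    # hypothetical urgency ratings
race_ratings = [4, 5, 3, 4, 5, 4, 3, 5]   # same questions, different metaphor

# Because the passages are otherwise identical, the difference in mean
# ratings between conditions is attributed to the metaphor itself.
framing_effect = mean(war_ratings) - mean(race_ratings)
print(f"war frame mean:  {mean(war_ratings):.2f}")
print(f"race frame mean: {mean(race_ratings):.2f}")
print(f"framing effect:  {framing_effect:.2f}")
```

A real analysis would of course use many more participants and a statistical test rather than a raw difference of means; the sketch just shows the between-subjects logic.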

This method has been used to reveal that referring to a war against global warming encourages people to feel more urgency about reducing emissions than referring to a race against global warming does (more on this study here).

Similar studies have revealed that ideas seem more exceptional when they’re referred to as light bulbs than as seeds, and that conflict hurts people’s perception of their relationship more when the relationship is described as a perfect union than as a journey. These studies and many others show that when we encounter a metaphor in natural language, we often reason about the abstract topic in ways that are consistent with the concrete concept used to describe it.

Metaphors are most influential when people have just the right amount of prior knowledge

In order for the phrase Crime is a beast to shape the way you think about crime, you have to know something about crime already, and you have to know something about beasts. We review studies that show that when people don’t know enough or care enough about one of the topics, metaphors don’t persuade them. For example, students who liked sports were more in favor of a senior thesis requirement when it was framed with sports metaphors than when it wasn’t, but students who didn’t like sports were not affected at all by the sports metaphors.

At the same time, metaphors are most persuasive when people don’t have too much knowledge or strong prior beliefs about the topic being described. For example, people who have deep-seated beliefs about crime are not as swayed by crime metaphors as those who don’t. In other work, when an experiment was designed to make people feel unconfident in their economic knowledge (by giving them a hard quiz), they were more likely to reason about an economic situation (a company’s bankruptcy) in metaphor-consistent ways than people who got an easy quiz, which inflated their confidence.

Metaphors are most effective, then, when people have neither too much nor too little knowledge of a topic — their knowledge level has to be just right.

Metaphors shape memory and attention

It’s useful for us to know that metaphor shapes thought, and when metaphor shapes thought, but it’s also important to work to understand how it does so. In many metaphor framing studies, participants receive a passage with a metaphor, and tend to reason in metaphor-consistent ways, but what’s going on in the space between those events? What is the mechanism through which metaphors exert their effects?

It seems that one way metaphors shape thought is by guiding what we pay attention to in a communication, and therefore what we remember about it. For example, an eye-tracking study revealed that people move their eyes in a path-like motion while they process metaphorical sentences like “The road goes through the desert” (remember, roads don’t “go” anywhere – they stay still!), but not while they process literal sentences like “The road is in the desert.” Eye movements are often used as an indicator of what people are paying attention to, suggesting that metaphors can shape how people attend to incoming information.

In addition to reviewing what cognitive science has revealed about the relationship between metaphor and thought, our paper also reviews what we don’t yet know. To me, one of the most important areas for future work is to understand how insights from these theoretically informative and tightly controlled lab studies can be applied to addressing real-world issues. We’re starting with a solid foundation that shows us that metaphor does shape thought, but we still have much to do to figure out how to apply that knowledge.

How We Learn: A Guest Review

I mentioned in a previous post that I have some stellar undergraduate Research Assistants. I neglected to mention that this summer I also have some stellar high school assistants. Juliette Hill is a rising senior whose main goal for her time in the lab was to learn what it’s like to be a cognitive science grad student. She worked on some open-ended and exploratory questions as well as some very detailed data collection. She also read and thought about cognitive science ideas beyond the specific ones we’re addressing in the lab. Here are her thoughts on How We Learn, a book by Benedict Carey:

Like most of us, Benedict Carey grew up with the belief that in order to learn best, one had to find a quiet, designated study space. Practice was the only path to perfection. The Internet and all other electronic devices should be turned off lest they disturb your concentration. Highlighting and rereading notes, if done frequently, will improve your test scores. Forgetting is the enemy of learning.

Yet most of these adages are far from the truth.

Distractions can actually aid learning in ways that remaining focused cannot. Studying in the same spot repeatedly may weaken your grasp on the subject. After an intense study session of revising notes, we feel confident we know our subject inside out, but we still barely manage a B on the test. Why?

Even with modern science, we have barely scratched the surface of the cognitive processes behind learning. In his book How We Learn, Benedict Carey walks the reader through a multitude of discoveries that may revolutionize the way we perceive the learning process. Here are some of the findings he explains:

Distraction can aid learning. While this is not an absolute (checking Facebook during a lecture does not help you learn what the teacher is presenting), it has much potential, especially in today’s society. When you’re stuck on a difficult math problem or a similar rut, taking a study break can boost your ability to solve the problem the second time around. Does this mean taking an hour-long nap will have similar effects? Absolutely! It can possibly help even more than a simple distraction.

Sleep is your friend. Most people know that sleep helps consolidate learned facts and motor skills, but few know when in the night those benefits occur. Each night consists of several cycles, alternating between deeper sleep and more wakeful sleep. The deepest sleep occurs during roughly the first two to three hours of the night, and this deep sleep has been found to reinforce the learning of rote facts. If you are preparing for a music recital (which involves motor learning), though, your most important sleep of the night comes somewhat later.

Highlighting and rereading your notes will not carry you far. In fact, you will feel as if you know the subject matter by heart, but will be disappointed when you see an unexpected score on your test. What happened? You knew the content so well, right? The danger of highlighting and rereading is that it gives you the impression that you know the material, when you are actually only familiar with it. The best way to review content is to maintain a “desirable difficulty” (a term coined by Dr. Robert Bjork) in your studying. This means that testing yourself (as opposed to just reading the content) will help you retain the material much better. So you can dig up those flashcards you never thought you’d use again. This applies to preparing a speech too, in that you will be better prepared if you practice reciting your speech instead of just rereading your notes.

Interleaving helps retain information best. If you are asked to memorize the styles of 12 different artists from different eras, do you think you would do best by studying all the works done by each artist one at a time (a method called “blocking”) or by mixing up the artists? If you are like most, you may choose to study by blocking. However, this has been shown to be significantly less effective than mixing up the artists (interleaving). Ever notice that when you do your math problems section by section, you understand right away and feel like you’ve mastered the skill, yet come test time you are confused about which equations to use? This can easily be avoided with interleaving, which in this case would mean including problems from previous sections along with the night’s homework.
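
To see the difference between the two study orders at a glance, here is a small sketch (hypothetical artist names, and my illustration rather than anything from the book) contrasting a blocked schedule with an interleaved one:

```python
# Hypothetical sketch of blocked vs. interleaved study orders.
artists = ["Monet", "Degas", "Seurat"]
works = {a: [f"{a} #{i}" for i in (1, 2, 3)] for a in artists}

# Blocking: exhaust one artist's works before moving to the next.
blocked = [w for a in artists for w in works[a]]

# Interleaving: rotate through the artists, one work at a time.
interleaved = [works[a][i] for i in range(3) for a in artists]

print(blocked[:3])      # ['Monet #1', 'Monet #2', 'Monet #3']
print(interleaved[:3])  # ['Monet #1', 'Degas #1', 'Seurat #1']
```

Both schedules cover exactly the same works; only the ordering differs, which is the point of the blocking-versus-interleaving comparison.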

Your study corner is a trap. Several studies have looked at the effect of location on retention and found that if you study certain information in a particular spot and are tested on it in that same location, you do better than if you study in one place and are tested in another. The same is true for body states (hunger, the influence of drugs, mood…) and for background music: you do best when these stay consistent. But you usually can’t study and test in the same location, and if you only study in one spot, the information becomes (subtly, unconsciously) tied to that place, so your recall will not be at its best when you move somewhere else. The answer is to vary your study locations. By altering your study spots, you can avoid this dependence on your surroundings and possibly increase your score on the next test.

These are just a few of the topics Carey explains in his book, and much more has been discovered since its publication. I highly recommend that you look into this book and share your findings with others. It’s a shame so few people know about the science of learning, despite the fact that their lives revolve around it.

In a relationship with Google

I recently read this Slate article about the effect that “digital tools” like Google and Evernote have on our memory. Many people suspect that the ability to instantly Google any fact they’ve forgotten might be taking a toll on their memory, based on the adage “use it or lose it.” However, the article argues that the effects are “much weirder than that.”

First the author writes about transactive memory: the use of people around us to store memories. For example, married couples often subconsciously divide up the memory tasks: “the husband knows the in-laws’ birthdays and where the spare light bulbs are kept; the wife knows the bank account numbers and how to program the TiVo… Together, they know a lot. Separately, less so.”

One clever study found that we’ve begun to treat technology as our memory spouse. Researcher Betsy Sparrow gave subjects sentences of random trivia, like “an ostrich’s eye is bigger than its brain,” and told them either that the facts would not be saved or that they would be saved (and specified which folder they’d be saved in). When tested a few days later, those who were told the computer would save the facts were less likely to remember them than those who were told the computer would not save them. However, when she asked participants whether a fact had been saved or erased, they were better at recalling the cases in which the facts had been saved in a particular folder. Thus, she argues, a different memory is strengthened when we know information is being saved: the knowledge of where we can re-find that information later.

While the argument doesn’t make the case that we should outsource all our memories as long as we know where they’ve been outsourced to, it does emphasize that technology is not ruining our memories. In fact, there are benefits to knowing that information is stored digitally, such as the completeness of the information these tools store (for example, a quick stop at Wikipedia is bound to turn up way more info than you were actually wondering about, for better or for worse).

This feels a lot like the extended mind hypothesis – the idea that the brain is not the home of all our knowledge. If I save some thoughts in a document on my computer and know exactly how to access them, but may not be able to reproduce them exactly without prompting, are they part of my mind?