CogSci 2016 Day 2 Personal Highlights

Cool stuff is happening at CogSci 2016 (for some evidence, see yesterday’s highlights; for more evidence, keep reading). Here are some of the things I thought were especially awesome during the second day of the conference:

  • Temporal horizons and decision-making: A big-data approach (Robert Thorstad, Phillip Wolff): We all think about the future, but for some of us, that future tends to be a few hours or days from now, and for others it’s more like months or years. These are our temporal horizons, and someone with a farther temporal horizon thinks (and talks) more about distant future events than someone with a closer temporal horizon. These researchers used over 8 million tweets to find differences in people’s temporal horizons across different states. They found that people in some states tweet more about near-future events than people in others – that temporal horizons vary from state to state (shown below, right panel). They then asked: if you see farther into the future (metaphorically), do you engage in more future-oriented behaviors (like saving money – either at the individual or state level – or doing fewer risky things, like smoking or driving without a seatbelt)? Indeed, the farther the temporal horizon revealed through a given state’s tweets, the more future-oriented behavior the state demonstrated on the whole (below, left panel).
    [Figure: state-level future-oriented behavior (left panel) and temporal horizons from tweets (right panel)]
    The researchers then recruited participants for a lab experiment, comparing the temporal horizons expressed in people’s tweets with their behavior in a lab task and asking whether those who wrote about events farther in the future displayed a greater willingness to delay gratification – for example, waiting for a larger sum of money later rather than taking a smaller sum today. They also compared the language in people’s tweets with their risk-taking behavior in an online game. They found that the language people generated on Twitter predicted both their willingness to delay gratification (more references to the more distant future were associated with more patience for rewards) and their risk-taking behaviors in the lab (more references to the more distant future were associated with less risk taking). While the findings aren’t earth-shattering – if you think and talk more about the future, you delay gratification more and take fewer risks – this big-data approach using tweets, census information, and lab tasks opens up possibilities for findings that could not have arisen from any of these sources in isolation.
  • Extended metaphors are very persuasive (Paul Thibodeau, Peace Iyiewuare, Matias Berretta): Anecdotally, an extended metaphor – especially one that an author carries throughout a paragraph, pointing out the various features that the literal concept and metaphorical idea have in common – persuades me. But this group quantitatively showed the added strength that an extended metaphor has over a reduced (or simple, one-time) or inconsistent metaphor. For example, a baseline metaphor that they used is crime is a beast (vs. crime is a virus). People are given two choices for dealing with the crime: they can increase punitive enforcement solutions (beast-consistent) or get to the root of the issue and heal the town (virus-consistent). In this baseline case, people tend to reason in metaphor-consistent ways. When the metaphor is extended into the options, though (for example, adding a metaphor-consistent verb like treat or enforce to the choices), the framing has an even stronger effect. When there are still metaphor-consistent responses but the verbs are reversed – so that the virus-consistent verb (treat) accompanies the beast-consistent solution (be harsher on enforcement) – the metaphor framing goes away. A really cool way to test, in a controlled lab setting, the intuition that extended metaphors can be powerful.
  • And, I have to admit, I had a lot of fun sharing my own work and discussing it with people who stopped by my poster – Emotional implications of metaphor: Consequences of metaphor framing for mindsets about hardship [for an abridged, more visual version, with added content – see the poster]. When people face hardships like cancer or depression, we often talk about them in terms of a metaphorical battle – fighting the disease, staying strong. Particularly in the domain of cancer, there’s pushback against that dominant metaphor: does it imply that if someone doesn’t get better, they’re not a good enough fighter? Should they pursue life-prolonging treatments no matter the cost to quality of life? We found that people who read about someone’s cancer or depression in terms of a battle felt that he’d feel more guilty if he didn’t recover than those who read about it as a journey (other than the metaphor, they read the exact same information). Those who read about the journey, on the other hand, felt he’d have a better chance of making peace with his situation than those who read about the battle. When people had a chance to write more about the person’s experience, they tended to perpetuate the metaphor they had read: repeating the same words they had encountered but also expanding on them, using metaphor-consistent words that hadn’t been present in the original passage. These findings show some of the ways that metaphor can affect our emotional inferences, and show how metaphorical language is perpetuated and expanded as people continue to communicate.
  • But the real treat of the conference was hearing Dedre Gentner’s Rumelhart Prize talk: Why we’re so smart: Analogical processing and relational representation. In the talk, Dedre offered snippets of work that she and her collaborators have carried out over the course of her productive career to better understand relational learning. Relational learning is anything involving relations – something as simple as Mary gave Fido to John or as complex as how global warming works. Her overarching message was that relational learning and reasoning are central in higher-order cognition, but it’s not easy to acquire relational insights. In order to achieve relational learning, people must engage in a structure-mapping process, connecting like features of the two concepts. For example, when learning about electrical circuits, students might use an analogy to water flowing through pipes, and would then map the similarities – the water is like the electricity, for example – to understand the relation. My favorite portion of the talk was about the relationship that language and structure-mapping have with each other: language (especially relational language) can support the structure-mapping process, which can in turn support language. The title of her talk promised we would learn about why humans are so smart, and she delivered on that promise with the claim that “Our exceptional cognitive powers stem from combining analogical ability with language.” Many studies of the human mind and behavior highlight the surprising ways that our brains fail, so it was fun to hear and think instead about the important ways that they don’t – about “why we’re so smart.”
  • And finally, the talk I wish I had seen because the paper is great: Reading shapes the mental timeline but not the mental number line (Benjamin Pitt, Daniel Casasanto). By having people read both backwards (mirror-reading) and normally, they found that the mental timeline was disrupted: people who read from right to left instead of the normal left to right showed an attenuated left-right mental timeline compared to those who read normally. This part replicates prior work, and they built on it by comparing the effects of these same reading conditions on people’s mental number lines. This time they found that backwards reading did not influence the mental number line the way it had decreased people’s tendency to think of time as flowing from left to right, suggesting that while reading direction plays a role in the development of mental timelines that flow from left to right, it does not have the same influence on our mental number lines; these must instead arise from other sources.
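For readers curious what the tweets-to-behavior pipeline in the first highlight might look like in practice, here’s a toy sketch – entirely hypothetical data and a deliberately naive future-reference scorer, not the authors’ actual methods, which involved millions of tweets and far richer language analysis:

```python
# Toy sketch: score each state's tweets for future orientation,
# then correlate state horizons with a future-oriented-behavior index.
# (Hypothetical word lists, data, and function names throughout.)
FUTURE_WORDS = {"tomorrow", "soon", "later"}
DISTANT_WORDS = {"year", "years", "someday", "decade"}

def temporal_horizon(tweet):
    """Crude horizon score: distant-future words count double."""
    words = tweet.lower().split()
    return (sum(w in FUTURE_WORDS for w in words)
            + 2 * sum(w in DISTANT_WORDS for w in words))

def state_horizon(tweets):
    """Average horizon score across one state's tweets."""
    return sum(temporal_horizon(t) for t in tweets) / len(tweets)

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up per-state data: (tweets, future-oriented-behavior index)
states = {
    "A": (["saving for next year", "someday I will retire"], 0.9),
    "B": (["lunch soon", "see you tomorrow"], 0.4),
    "C": (["nothing planned", "just vibes today"], 0.1),
}
horizons = [state_horizon(tweets) for tweets, _ in states.values()]
behavior = [idx for _, idx in states.values()]
print(round(pearson(horizons, behavior), 2))  # → 0.99 on this toy data
```

The sketch captures only the logic of the design – aggregate a linguistic measure per state, then correlate it with a behavioral index – which is exactly what made combining tweets, census data, and lab tasks so powerful.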

One more day to absorb and share exciting research in cognitive science – more highlights to be posted soon!


Speed reading might not be all it’s cracked up to be

Just read a great blog post by UCSD Cog Sci professor Ben Bergen for Psychology Today. There’s been a lot of hype lately about apps that allow you to read text more quickly than you normally do, and this post discusses why you shouldn’t buy into the hype. The apps work by rapidly presenting readers with one word at a time, forcing you to read faster than your natural rate. However, as with many things, quantity does not replace quality. Bergen points out that even world-champion speed readers comprehend only about half of what they read. He’s not alone in voicing concerns about comprehension with these apps, either.
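To see just how mechanical the technique is, here’s a minimal sketch of the pacing logic at the core of such an app – function names and numbers are mine, not any real app’s code:

```python
# Minimal RSVP pacing sketch: every word gets the identical display time,
# which is precisely what prevents the slowing-down and backtracking that
# natural reading relies on. (Hypothetical example, not a real app's API.)
def rsvp_schedule(text, wpm=500):
    """Return (word, seconds_on_screen) pairs at a fixed rate."""
    per_word = 60.0 / wpm          # e.g. 500 wpm -> 0.12 s per word
    return [(word, per_word) for word in text.split()]

for word, secs in rsvp_schedule("Quantity does not replace quality", wpm=300):
    print(f"{word:10s} {secs:.2f}s")   # each word shown for exactly 0.20 s
```

Note that the duration depends only on the chosen rate, never on the word itself – a long, rare, or surprising word gets the same flash as “the.”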

Image: http://theweek.com/article/index/258243/the-problem-with-that-speed-reading-app-everyones-talking-about

The reasons that Rapid Serial Visual Presentation (RSVP) is not a good technique mainly relate to the fact that reading is an active process. We don’t sit idly consuming words at an even rate, but instead move faster over words that adhere to our predictions, slow down when we encounter new or surprising words, and often backtrack to re-read words, though much of this is probably not even conscious. RSVP doesn’t allow a reader to do any of these crucial things, and therefore hinders comprehension.

I was also really glad to see that Bergen points out that RSVP assumes a reader’s goal is first and foremost to consume as many words as possible and just get the gist of them. That is not why I read at all, and I suspect it’s not why most people read most of what they do. I read because I enjoy it, because it encourages me to think, and in order to truly learn. After trying a quick demo of RSVP, I can’t imagine that anyone engages in it because it’s a pleasurable activity. I couldn’t help but hear the words in a sort of robotic voice as they flashed on the screen one at a time, and although it was amusing for a moment, it must keep readers from truly perceiving the author’s unique tone. Reading is rarely pleasurable merely because the gist of a piece is interesting. Gist is certainly one part of what makes reading great, but the details of the words, the ability to interact with the text, and certainly the ability to remember what we read should not be overlooked.

Image: http://oxfamblogs.org/fp2p/speed-reading-rocks-so-why-dont-we-all-learn-it/

The inseparability of writing and science

Whether you agree with Steven Pinker’s views on cognition or not, it’s hard to deny that he’s an eloquent writer. I recently found an interesting clip of Pinker discussing his new writing manual, The Sense of Style, which will be out in September.

I was first captivated by this quote: “There’s no aspect of life that cannot be illuminated by a better understanding of the mind from scientific psychology. And for me the most recent example is the process of writing itself.”

Throughout the video, Pinker explains why knowing more about the mind can help us to become better writers, which in turn will facilitate communication about scientific innovations like the mind. One reason Pinker makes this claim is because, in his view, “writing is cognitively unnatural.” In conversations, we can adjust what we’re saying based on feedback we receive from our audience, but we don’t have this privilege when writing. Instead, we must imagine our audience ahead of time in order to convey our message as clearly as possible.

Pinker points out that many writers write with an agenda of proving themselves to be a good scientist, lawyer, or other professional. This stance doesn’t give rise to good writing. A writer should instead try to show the reader something that’s cool about the world.

He also points out that to be a good writer, you must first be a good reader, specifically “having absorbed tens or hundreds of thousands of constructions and idioms and irregularities from the printed page.” He uses the verbs “savor” and “reverse-engineer” to describe the process of reading to become a better writer. This echoes a lot of advice I’ve encountered (often in written form) since I first decided to pursue a PhD: read as much as you can. (I have also learned that any amount of reading I do will never feel like enough).

Image: http://www.gatherings.info/Blog/readingisfun.aspx

Regarding his style manual, Pinker wants to avoid the prescriptivist (someone who prescribes what constitutes correct language) vs. descriptivist (someone who reports how language is used in practice, regardless of correctness) distinction. Another great quote:

The controversy between ‘prescriptivists’ and ‘descriptivists’ is like the choice in ‘America: Love it or leave it,’ or ‘Nature versus Nurture’—a euphonious dichotomy that prevents you from thinking.

His overall point is that the humanities and sciences should not be seen as mutually exclusive. Instead, science should be used to inform humanities (in this case, writing, but I think his argument generalizes beyond this), and a knowledge of the humanities should inform science as well. To me, this is what cognitive science must necessarily be – an understanding of the human mind and behavior requires rigorous science, no doubt, but I think we need to continue to look outside the three pounds of neural tissue inside our skulls for the most complete understanding.