CogSci 2016 Day 1 Personal Highlights

I stepped out of the airport Wednesday night and my glasses fogged up. Ah, what a reminder of the world that awaits outside southern California, where I’m immersed in my PhD work. I had arrived in Philadelphia for CogSci 2016 to be bombarded by fascinating new work on the mind and behavior and the clever researchers responsible for it.

With 9 simultaneous talks at any time and over 150 posters on display during each poster session, I of course only got to learn about a fraction of all that was there. Nonetheless, here are some projects that are still on my mind after day 1:

  • Cognitive biases and social coordination in the emergence of temporal language (Tessa Verhoef, Esther Walker, Tyler Marghetis): Across languages, people use spatial language to talk about time (e.g., looking forward to a meeting, or reflecting back on the past). How does this practice come about? To investigate language evolution on a much faster time scale than occurs in the wild, this team had pairs of participants use a vertical tool (I believe the official term was bubble bar; see below) to create a communication system for time concepts like yesterday and next year. The pairs were in separate rooms, so this new system was their only way of communicating. Each successive pair inherited the previous pair’s system, allowing the researchers to observe the evolution of the bubble bar language for temporal concepts. Over the generations, participants became more accurate at guessing the term their partner was communicating (as the bubble bar language was honed), and systematic mappings between space and time emerged. Although each chain ended up with a fairly different system, within a single chain people tended to use the top part of the bar for the same types of concepts (e.g., past or future) and to use systematic motions (for example, small rapid oscillations for relatively close times like tomorrow and yesterday, and larger, slower oscillations for more temporally distant concepts).

    [Image: the bubble bar]

  • Deconstructing “tomorrow”: How children learn the semantics of time (Katharine Tillman, Tyler Marghetis, David Barner, Mahesh Srinivasan): This team had children of varying ages place time points (like yesterday and last week) on a timeline. They analyzed different features of the kids’ timelines to investigate at what age kids come to understand three different concepts of time in the way adults do. The first was whether a time is in the past or future relative to now (did kids place it to the left or right of the now mark on the timeline?). The second was whether kids understand sequences of different times – for example, that last week comes before (to the left of, on a timeline) yesterday, regardless of where those events were placed relative to now. Finally, they compared how kids’ timelines showed remoteness – how temporally distant different events are from now – to how adults showed the same concept. Adults, for example, will place tomorrow quite close to the now mark and next year much farther away. They found that kids acquired an adult-like sense of remoteness much later than the first two concepts – deictic (past vs. future) and sequence. While those two concepts reliably emerged in kids by 4 years old, knowledge of remoteness wasn’t present until much later – after 7 years old. These data suggest that while kids can pick up a lot about what different time words mean from the language they encounter, they may need formal education to really grasp that tomorrow is much closer to today than last year was.
  • Gesture reveals spatial analogies during complex relational reasoning (Kensy Cooperrider, Dedre Gentner, Susan Goldin-Meadow): After reading about positive feedback systems (e.g., an increase in A leads to an increase in B, which leads to a further increase in A…) and negative feedback systems (an increase in A leads to an increase in B, which leads to a decrease in A), participants had to explain these complicated concepts. Even though the material people read contained almost no spatial language, spatial gestures were extremely common during their explanations (often occurring without any accompanying spatial language in speech). These gestures often built off each other, acting as a way to show relational information through space, and they suggest that people invoke spatial analogies in order to reason about complex relational concepts.

    [Image: sample gestures showing (from left to right) a factor reference, a change in a factor, a causal relation, and a whole-system explanation]

  • Environmental orientation affects emotional expression identification (Stephen Flusberg, Derek Shapiro, Kevin Collister, Paul Thibodeau): Past work has shown that we not only talk about emotions using spatial metaphors (for example, I’m feeling down today, or your call lifted me up), but also invoke these same aspects of space to think about emotions. In the first experiment, the researchers found that people were faster to say that a face was happy when it was oriented upwards and that it was sad when oriented downwards (both congruent with the metaphor) than in the incongruent cases. Then, to differentiate between an egocentric reference frame (facing up or down with respect to the viewer’s body) and an environmental one (facing up or down with respect to the world), people completed the same face classification task while lying on their sides. This time, participants only showed the metaphor-consistent effect (faster to say happy when faces were oriented up and sad when faces were oriented down) when the face was oriented with respect to the world – not when it was oriented with respect to the person’s own body. This talk won my surprising finding award for the day, since researchers often explain our association between emotion and vertical space as originating in bodily experience: we physically droop when we’re sad and stand taller when we’re happy. That explanation isn’t consistent with what these researchers found, though; their results suggest that the association between vertical space and emotion is anchored to the environment, not to our own bodies.
  • Context, but not proficiency, moderates the effects of metaphor framing: A case study in India (Paul Thibodeau, Daye Lee, Stephen Flusberg): People use metaphors they encounter to reason about complex issues. For example, when a crime problem is framed as a beast, they think the town should take a more punitive approach to dealing with it than when that same problem is framed as a virus. What if you encounter this metaphor in English, but English isn’t your native language – does the metaphor frame influence your reasoning less than it would a native English speaker’s? People from India (none of whom were native English speakers) read the metaphor frames embedded in contexts and reasoned about the issues that were framed metaphorically. Overall, people reasoned in metaphor-consistent ways (e.g., saying that crime should be dealt with more punitively after it was framed as a beast rather than a virus). Self-reported proficiency in English did not affect the degree to which people were influenced by the metaphor; people who were more fluent in English were not more swayed by the frames. However, the context in which they typically spoke English did play a role: those who reported using English mostly in informal contexts, such as with friends and family and through the media, were more influenced by the frames than those who reported using English more in formal contexts, like educational and professional settings. These experiments don’t explain why those who use English more in informal settings were more swayed by metaphorical frames, but they open the door to some cool future research possibilities.

Check back for highlights from days 2 and 3!
