Refugee for a day: A glimpse into the ugliness and the beauty humans are capable of

It was the best of times, it was the worst of times.

But mostly, it was the worst, as 300 exhausted passengers fended for themselves to find cots at 3am in Boston’s Logan Airport.

At first we were mildly frustrated as we waited 3 hours at the gate for our flight, which was repeatedly delayed for mechanical reasons. Our frustration grew when we then learned that our flight was canceled for the evening: “Sorry, folks, we’re not going to be able to fly out tonight. Please go to counter 36 for more information and hotel vouchers.”

We vacillated between hope and despair as we waited 5 more hours for hotel vouchers and new flight reservations that never materialized for most of us. We started seeing glimmers of hope in everything (A man in a suit! He’s coming to fix the problem!). By 3am even our mirages were put to rest when an announcement informed us that there were no more available hotels in the Boston area and that we would receive more information at 10am.

Based on the avoidant airline personnel we had encountered to that point, few people were surprised when 10am came and went, and we had received no update. By 2pm, we were receiving new tickets for a flight that would take off at 5. By 8pm, after another delay because the flight crew had been stuck in traffic on their way to the airport, we were out of emotions. A converted air tanker took off over the Atlantic Ocean 29 hours after its scheduled departure with over 300 zombies on board.

During those 29 hours, my fellow passengers and I witnessed some of the ugliness humans are capable of. Some people jogged and jostled each other each time an announcement directed us to form yet another line – for hotel vouchers, meal tickets, or new boarding passes. At random intervals, a new passenger would break down and start shouting, so the state police came to make sure things remained civil. When the airport employees brought “food” and drinks to the line of people waiting for nonexistent hotel vouchers, some people rushed to grab what they could from the stash of mini water bottles and bags of Cheez-Its that made you wonder if someone in the factory had snagged a handful before sealing the bag with 5 crackers in it.

Some people were hesitant to leave the ticketing area, so they brought their cots over to not lose their place in line.

Our mass sleepover in Logan airport was uncomfortable and degrading, but for every sneer there were many smiles. We were not happy to be stuck in an airport, but the fact was that we were there. We got to know each other, we commiserated and, somehow, we laughed. I learned that to a Brit, Cheez-Its taste like sweaty socks. We shared – iPhones for those who needed to make calls, sweatshirts (because damn, air conditioned airports are really cold when you don’t have a blanket), and the coveted cots and blankets once we got our hands on them.

A week after this debacle, I still look back and cringe at this experience. But the entire time, I knew I’d get to a comfortable bed eventually where I could sleep for 11 hours. I knew I’d have a good meal and a glass of wine at the end of the trip. When I stopped griping for a moment, I realized that knowing that those comforts were in my near future was a lot more than many people can say, as they find themselves wondering where they’ll sleep tonight, tomorrow night, and for the foreseeable future. We lived like refugees for one night, and it was a pain. But many people do it for years.


What’s going on in our minds when language shapes how we think?

This is the second post in a two-part series on a new paper my advisor Lera Boroditsky and I published that shows that learning a new way to talk about time creates new ways of thinking about it. You can check out the first post here.


A lot of psychology research measures behavior — what people do, often in a lab experiment — as a way of understanding what they’re thinking.

For example, in some of my favorite work by Paul Thibodeau and Lera Boroditsky, participants read about a crime problem in a fictional city. The problem was described metaphorically, either as a beast or as a virus. After reading about the problem, participants indicated what the city officials should do to solve the crime problem. Those who had read that crime was a beast were more likely to suggest punitive solutions, as one would likely suggest if a literal beast were ravaging the city, than those who read that crime was a virus. In this experiment, the researchers measured people’s behaviors — their suggestions for dealing with the crime — as a way of understanding how the crime metaphors shaped their thoughts.

Thought is a pretty tricky thing to measure, especially when it’s about high-level topics like crime, and our behaviors give researchers a useful glimpse into the mind. But behavioral evidence still leaves us asking what’s actually going on in people’s minds when metaphors shape the way we think.


In our recent work, participants learned new ways to talk about time. We then measured their subconscious associations between different parts of space and different aspects of time, and their behavior suggested that they now thought about time in ways that were consistent with how they learned to talk about it.

We wanted to learn more about how this was happening. We call this new way of thinking — associating earlier events with either higher or lower parts of space, depending on the metaphors a person learned — a new representation. We think of this new representation as a new timeline a person has in mind. We wondered whether this new representation, which people acquired through the metaphors they learned in our lab, was the kind that relies on language while they’re using it.

For example, we know that counting items is a kind of representation or cognitive routine that does require a person to engage their language capacities in their mind, even when they’re not talking out loud. Intuitively this makes sense — when you ask yourself what’s going on in your mind when you count an array of objects, it might make sense that you’re actually saying “one, two, three…” in your head as you count. Researchers have indeed been able to show that this is what’s going on in our minds when we count by having people memorize strings of letters (which requires them to rehearse the letters in their head, saying something like “F, J, D, C, P, R” silently to themselves) while also trying to count objects. Under this condition, often referred to as verbal interference, people (specifically MIT students) struggle to count even an array of dots accurately. This result is taken as evidence that counting objects relies on our ability to engage in a linguistic routine in our heads, since the verbal interference, which also relies on that ability, disrupted counting performance.

We also wanted to find out whether participants’ newly learned mental timelines required a linguistic routine in order for the metaphors to exert their effects. To that end, we had participants complete the typical task measuring their subconscious space-time associations while they were mentally rehearsing a string of letters, as the researchers in the counting work did. The mental rehearsal taxed their linguistic working memory, leaving them unable to engage those linguistic cognitive resources for other tasks. If people needed those resources for the new metaphors to shape how they think about time, then under verbal interference they should no longer associate the parts of space and time that their metaphors suggested.

We found that even under verbal interference, people showed mental timelines that were consistent with the new metaphors they learned in our lab. In other words, learning a new way to talk about time shaped how people thought about it, and it was not just because people were adopting a new routine in their minds, subvocalizing to themselves, “earlier is up, later is down” (or vice versa) while doing the main task. Language can shape non-linguistic thought patterns.

But what is going on in our minds when we learn new metaphors for time that shape how we think about it? Are we imagining earlier events (like breakfast) as being physically above or below later events (dinner)? We’re still not sure, but there’s no shortage of mysteries to work on to better understand how language shapes our thoughts about topics as ubiquitous as time.

Much more than a way of talking: Metaphors in language shape how we think

We gather a lot of knowledge through our physical experiences in the world: what a good steak tastes like, how to get from home to work, or how it feels to be caught in a downpour. But at the same time, many of the topics that are most central to our lives, like the concepts of love, justice, or time, aren’t things we can directly experience, for example by seeing, touching, or tasting them. How, then, do we make sense of them?

One way we develop these concepts in our minds is by thinking about them in terms of concepts we do have direct experiences with. We use metaphors like love is a journey to conceptualize love in terms of a more concrete idea, a journey. Research I conducted with my advisor Lera Boroditsky shows that linguistic metaphors can actually cause us to think about concepts like time in new ways.

In our lab, we taught participants new ways of talking about time that used vertical terms to talk about sequences of events. For example, some people learned that earlier events take place above later ones. They were told things like Tuesday is higher than Wednesday, and When we eat dinner, breakfast is above us. Other people learned the opposite system of metaphors, that earlier events take place below later ones.

After learning these metaphors, participants completed a task that measures how much they subconsciously associate different parts of space with different aspects of time. This task didn’t require language to complete (people saw pictures and pressed buttons to indicate the order that events happened in), so nothing encouraged people to connect this task with the earlier part of the experiment, in which they learned new ways to talk about time.
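To make the logic of this kind of reaction-time measure concrete, here’s a toy terminal sketch of an association task. Everything in it is illustrative: the words, the ‘i’/‘m’ response keys, and the trial counts are my own stand-ins, not the actual experiment (which used pictures and button presses).

```python
import random
import time

# Toy sketch of a space-time association task. Hypothetical design:
# "earlier" and "later" events are classified with an up key ('i') or a
# down key ('m'), and we compare reaction times under the two possible
# key mappings. All stimuli and keys here are made up for illustration.
TRIALS = [("breakfast", "earlier"), ("dinner", "later")] * 4
MAPPINGS = {
    "earlier_is_up": {"earlier": "i", "later": "m"},
    "earlier_is_down": {"earlier": "m", "later": "i"},
}

def run_block(mapping):
    """Run one block of trials and return mean RT on correct responses."""
    rts = []
    for word, truth in random.sample(TRIALS, len(TRIALS)):  # shuffled copy
        start = time.perf_counter()
        key = input(f"{word!r}: press 'i' (up) or 'm' (down): ").strip()
        rt = time.perf_counter() - start
        if key == mapping[truth]:
            rts.append(rt)
    return sum(rts) / len(rts) if rts else float("nan")

for name, mapping in MAPPINGS.items():
    print(name, "mean RT on correct trials:", run_block(mapping))
```

The idea is that if the learned metaphors shaped participants’ mental timelines, responses should be faster under the key mapping that matches those metaphors than under the mismatched one.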

We found that people associated space with time in ways that were consistent with the new metaphors they had just learned. Those who learned that earlier events happen above later ones tended to associate earlier events with higher space than later events and vice versa.

Learning a new way to talk about time creates new ways of thinking about it.

This work is based on a foundation of research pointing to similarities between the way we talk and think about time. Across languages, people often use spatial language to talk about time. For example in English, we can have a long meeting or time can fly by, we look forward to the future and back on the past, and it’s appropriate to say either that we’re approaching a deadline or that the deadline is approaching us. In all of these cases, the metaphors we use to talk about time suggest that the passage of time is akin to movement through space.

Using space to talk about time is not specific to English. Many languages include similar metaphors, though different aspects of space can be associated with different aspects of time. For example, the Aymara, a group in South America, refer to the past as ahead of them and the future as behind, a reversal of the English convention. Similarly, speakers of Mandarin Chinese can use vertical language — the same words that mean up and down — when talking about time.


We don’t just use spatial language to talk about time; we actually think about time in ways that are consistent with the specific spatial metaphors our language uses. For example, English speakers lean slightly forward when thinking about the future and back when thinking about the past, and in reaction time tasks they demonstrate subconscious associations between the space in front of the body and the future and the space behind the body and the past. These studies suggest that we’re often drawing on our knowledge of space when we think about time.

Does the language we use to talk about time (like the future ahead of us and past behind) cause us to actually think of time in consistent ways, or do we use these spatial metaphors because we naturally think about the future in front of us and the past behind? This is pretty much a textbook chicken-and-egg problem.

Both possibilities could be true, but existing research can’t shed light on causal relationships. Showing that Mandarin speakers think about time vertically (consistent with their metaphors in language) and English speakers do not doesn’t tell us that different metaphors cause differences in thought — there are many ways in which two groups of people who speak different languages will differ, and it’s impossible to know whether any of those factors actually leads to an observed difference in thinking about time between the two groups. In order to make the causal claim — to know whether metaphors in language can actually shape the way people think about time — we needed to randomly assign participants to conditions. By teaching all participants a new way of talking in the lab, that’s exactly what we did — we randomly assigned some to the group that learned that earlier events are above later ones and others to the group that learned that earlier events are below. This way we could be sure that it was the metaphors participants learned, and not some other uncontrolled difference between the two groups, that caused the groups to differ in the way they associated vertical space and time.

This work shows that metaphors in language can shape the way people think. In fact, learning a new way of talking about time can foster new ways of thinking about this topic that is central to our everyday lives.


In the second (and final) post in this series, I’ll dive more into what was actually going on in people’s minds when these new metaphors shaped how they think about time.

Here’s the link to the original article

Vaccinating, metaphorically and literally

There’s a lot of bad (either misleading or blatantly false) science information on the Internet. Science communicators often try to combat the bad content by dumping as much accurate information as they can into the world, but that strategy is not as effective as many would hope. One reason it’s not effective is that social circles on the Internet are echo chambers: people tend to follow like-minded others. Scientists and science communicators follow each other, and skeptics follow each other, so we rarely even hear what others outside our circle are talking about. Plus, when we do encounter evidence that contradicts our beliefs, we tend to discount it and keep believing what we already did.

A recent study by Sander van der Linden, Anthony Leiserowitz, Seth Rosenthal, & Edward Maibach (that I recently wrote about) offers a glimmer of hope for this science communication trap: communicators may be able to “vaccinate” their audiences against misinformation. They found that if people are cued in to the kinds of tactics that opponents of climate science deploy, they’re less likely to believe the misinformation. This finding offers some hope at a time when the proliferation of fake and misleading science information seems inevitable. Scientific facts, along with a heads up about anti-scientific strategies, can help people better evaluate the information they receive and form evidence-based beliefs and decisions.

Does this apply to other scientific issues? Can we vaccinate against anti-vaccination rhetoric?

I don’t know. But I’d like to find out. In order to design a communication that alerts people about anti-vaccine messages they might encounter, it’s important to understand anti-vaccine tactics. I explored some very passionate corners of the Internet (videos, discussion threads, and blog posts by anti-vaccine proponents) for a better understanding. Here are the anti-vaccine tactics I found, a lot of which are described in this SciShow video:

Ethos: Appeal to Authority

First, note that this immunologist isn’t explicitly saying that children shouldn’t be vaccinated. But the quote implies as much. I don’t know if that’s actually her belief, but regardless, as a consumer of this image, I do get the sense that she looks pretty smart (#educated, in fact), and maybe she knows what she’s talking about…

Jargon

[Screenshot: an anti-vaccine ad whose opening lines list several chemical names]

There are four chemical names in the first five lines of the ad above. It sounds like whoever wrote it must really know their science. The message implies that the author has deep scientific knowledge about the chemicals mentioned and wants to warn you of their presence in vaccines. Paired with our society’s tendency to believe that all things “natural” are good and all things “chemical” are enemies, this jargon-wielding author might come across as someone worth listening to. Most of us (and I am definitely included here) don’t know much, if anything, about those chemicals — how do they work? Are they actually dangerous in the doses found in vaccines? This jargon paves the way for persuasion through the naturalistic fallacy — the idea that all natural things are better than non-natural things.

Logos: Appeal to “Logic”

Logical fallacies

A logical fallacy is faulty logic disguised as real logic, and it’s another common tactic used by anti-vaccine proponents. In the Tweet above, the author presents two facts, implying that they’re connected (that an increase in mandatory vaccines led to America’s drop from 20th to 37th in the worldwide ranking of infant mortality rates). Just because America “lost ground” in this ranking doesn’t necessarily mean our mortality rate went up — it’s likely that many other nations’ rates went down. Plus, there are many factors beyond the number of mandatory vaccines that influence infant mortality rates, and the Tweet supplies no evidence that vaccines and mortality are related. They’re just two pieces of information, placed next to each other to give a sense of a causal relationship.

There are lots of ways logic can be distorted to suggest that vaccines are bad. One that really stands out to me is the suggestion that if vaccines work, why should we care if some children are not vaccinated? After all, they’ll be the ones who get sick… why does it concern the rest of us?

It does. For one, no child should end up with a paralyzing or fatal disease because their parent chose to disregard scientific consensus. Beyond that, one person’s choice not to vaccinate directly affects others — for example, people who CAN’T be vaccinated for health reasons. If everyone else receives vaccines, that one person who cannot is safe thanks to “community immunity.” But if others stop receiving those vaccines, the person who had no choice but to remain unvaccinated is susceptible. This person is unjustly in danger as a result of others’ choices.

Pathos: Appeal to Emotion

Fear

Fear is a powerful motivator. Appeals to ethos and logos can work together to have an emotional effect. Parents just want to do their best for their kids, so messages that stir up fears about the harms of vaccines have a good chance of swaying them.

One way of drumming up fear is to portray vaccine proponents as bullies, as this article demonstrates:

[Screenshot: an article describing vaccine proponents as bullies]

Yeah, that description sounds pretty scary to me… (Breakdown Radio; link to article)

Considerations for inoculation messages

Of course, I’ve just scratched the surface with these tactics that anti-vaccine proponents use (you can get an idea of some others in a post by Dr. Doom on how the anti-vax movement uses psychology to endanger us). Messages that vaccinate against misconceptions have to walk an extremely fine line. The goal of such a message is to foreshadow misleading messages a person may encounter, and to point out the reasons those messages should be reconsidered.

Inoculation messages might be useful when they introduce new information, but they also need to be proactive, anticipating anti-vaccine rhetoric and alerting people to its flaws. There are a few dangers in doing so, though. For one, it often requires repeating the misconception, and research shows that doing so can backfire and reinforce the inaccuracy instead. In addition, pointing out flaws in an argument that someone might be prone to believing can alienate that person. If the warning message isn’t constructed conscientiously (for example, if it suggests that seeing through the misleading information is a no-brainer), it can imply that anyone who might believe the misconceptions is an idiot. A message like this will make some members of the audience feel defensive (wow, am I an idiot? No, I can’t be an idiot. Maybe the author of this message is the idiot…).

That doesn’t mean that inoculation messages can’t be effective. We have some evidence to suggest they can, and I think there’s a lot of room to continue honing this strategy. The first step in a successful inoculation message is to uncover the tactics used by those who misrepresent the science. Then it’s important to raise awareness of those tactics without alienating the audience and while being careful not to repeat the misinformation in a way that can be construed as reinforcing it.

Communicators can keep in mind that anti-vaccine messages often attempt to establish authority, tap into emotions, and apply misleading logic in order to convince people of their message. By anticipating these strategies, we can have greater success in counteracting them and promoting vaccines as the life-saving technologies they are.


By teaching, we learn: A retrospective on teaching about blogging

On the surface, teaching and learning have a pretty straightforward relationship: we learn something, and then we teach it, so that others can learn it (and maybe even teach it themselves). This does happen, but the learning-teaching relationship is far less linear than this might imply.

First, teachers and professors learn a topic well enough that they decide they can teach it. Sometimes they’re an expert in the topic, and other times they know the gist of a topic and (more importantly) where they can learn more.

Then they plan the course. During this phase, they often realize how much they don’t know. So they learn more. As they continue planning, they’ll put together lectures. This is another crucial part of the learning-teaching relationship, since teachers start distilling information from other sources into their own words to fit their own course structure. Now they’re really learning.

Then comes the day of the lecture. The students might assume the professor knows all there is to know about the topic, and the professor hopefully feels prepared. During the lecture, hopefully students will ask questions. Some the professor will be able to answer — she’s already learned this stuff! But other questions might be more challenging. They might make apparent to the teacher what she doesn’t yet know. Hopefully she then tries to find the answer (if an answer exists). She learns again, and maybe communicates what she learned to the student who asked the question — so she teaches again.

This is a classroom example of how learning and teaching are inseparable — they often must happen simultaneously, since each supports the other.


Teaching BlogSci

This quarter, I was fortunate to experience this tangle of teaching and learning for science blogging. I co-taught a seminar with Prof. Seana Coulson to introduce students to science blogging and guide them toward creating their own blog posts about cognitive science research.

I’ve blogged for a few years and have paid some attention to other science blogs, implicitly gaining an understanding of the topics and strategies that make for the most engaging posts. But planning the class drove me to find and synthesize new science communication resources. Then I shared what I’d learned with the class, and they asked great questions. Often these questions sparked the realization that I didn’t know the answer — and until they asked, I didn’t know I didn’t know it.

Those moments can be unsettling (isn’t the instructor supposed to know the answer to topic-related questions?), or we can embrace them. For example, students wanted to know what makes for a good blog post title. For the final class, I asked around and looked up what other bloggers believe makes a good title, and we discussed their advice as a class. Then we just experimented: we listed potential titles, shared them with the group, and got input on which were most compelling.

Although I was one of the instructors, I didn’t know the answer to the post title question ahead of time. The seminar provided an opportunity for me to discover topics I didn’t know, and then work with the group to learn more. This is one example of many that show that I learned in order to teach the group, then learned while teaching the group, and in many cases, learned after formally teaching, once I realized how much was left to learn.

I’m grateful for the bright, curious students who fueled this process.

Seneca purportedly said Docendo discimus: By teaching, we learn. So my experience of learning while teaching is not novel. Instead, it’s an application of a timeless concept to a very modern medium — blogging about science.

To learn more about our seminar and read the students’ polished products, check out our class blog: UCSDBlogSci.

The most informative home video

My family has some home videos. Some are on actual cassettes, and others are on our iPhones or in the cloud. They’re mostly short, and like photographs, they’re somewhat staged. In many cases, they show premeditated or sugar-coated shots of our lives.

But MIT researcher Deb Roy has some home videos that break the mould I just described. Roy and his wife set up video cameras to record every room of their house for about 10 hours a day for the first three years of their son’s life. They have more than 200,000 hours of data. Analyzing it is a mammoth and ongoing task, but it has helped answer some highly-debated and longstanding questions about how humans develop language. For example, one paper describes how this data can be used to predict the “birth” of a spoken word.

What factors facilitate word learning?

Not surprisingly, the child produced shorter words before longer ones, words that tended to occur in shorter sentences before those that occurred in longer ones, and words he had heard often before rarer ones. To me, these are intuitive features of words that make them easier to learn.

But there were also some less intuitive features that predicted how early the child would produce certain words. These features were more contextual, taking into account when and where he heard words in the time leading up to his first production of them.

One feature that predicted a word’s birth was based on how often the boy heard the word in different rooms throughout the house. Some words were spatially distinct (for example, “breakfast” usually occurred in the kitchen), while others, like “toy,” were more spatially dispersed. Spatial distinctiveness tended to help him learn words faster.

The researchers also measured temporal distinctiveness, or when during the day the toddler was likely to hear the word. Again, “breakfast” was temporally distinct, occurring almost exclusively in the morning, while the word “beautiful” was much more dispersed throughout the day. As with spatially distinct words, the researchers found that more temporally distinct words (those most often said around a similar time of day) were learned sooner than those whose uses were spread across a typical day.

Finally, they looked at the contextual distinctiveness of each word. This is basically the variation in the language that the child tended to hear with the word of interest. The word “fish” was contextually distinct, for example, often occurring with other animal words or words related to stories. “Toy,” on the other hand, occurred with a much greater variety of words and topics, so it was less contextually distinct. As with spatial and temporal distinctiveness, contextual distinctiveness made a word easier to learn.
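To make these distinctiveness measures concrete, here’s a rough sketch of how such a score could be computed. I’m assuming distinctiveness is quantified as the divergence between a word’s distribution over contexts (rooms, times of day, surrounding topics) and the baseline distribution of all speech; the paper’s exact formulation may differ, and the counts below are made up.

```python
from collections import Counter
from math import log

def distinctiveness(word_contexts, baseline_contexts):
    """Score how concentrated a word's usage is relative to all speech.

    Computed as the KL divergence between the word's distribution over
    contexts and the baseline distribution -- one plausible way to
    formalize a distinctiveness measure. Assumes every context that the
    word appears in also appears in the baseline.
    """
    word_counts = Counter(word_contexts)
    base_counts = Counter(baseline_contexts)
    word_total = sum(word_counts.values())
    base_total = sum(base_counts.values())
    score = 0.0
    for context, count in word_counts.items():
        p = count / word_total                 # word's probability of this context
        q = base_counts[context] / base_total  # baseline probability of this context
        score += p * log(p / q)
    return score

# Hypothetical data: the room in which each utterance of a word was heard.
breakfast = ["kitchen"] * 18 + ["living_room"] * 2                    # concentrated
toy = ["kitchen"] * 5 + ["living_room"] * 8 + ["bedroom"] * 7         # dispersed
all_speech = ["kitchen"] * 100 + ["living_room"] * 100 + ["bedroom"] * 100

print(distinctiveness(breakfast, all_speech))  # higher score: spatially distinct
print(distinctiveness(toy, all_speech))        # near zero: matches the baseline
```

The same computation works for temporal distinctiveness (bin utterances by hour of day) and contextual distinctiveness (bin by surrounding topic words): a higher score means the word’s use is more concentrated, which predicted earlier learning.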

This TED talk blew my mind.

Why does distinctiveness affect word learning?

Children learn language through conversations that are inseparable from the everyday life contexts they occur in. Those contexts are not just incidental features of word learning, but crucial variables affecting how language is learned. This work is a reminder that language use and development are actually about much more than language, just as thinking requires much more than just a brain. We humans are inseparable from our environments, and those environments play a big role in shaping how we think and navigate the wonderfully messy world we live in.

The Pope’s #scicomm: Effects of Laudato si’ on beliefs about climate change

Climate change is an extremely polarizing issue: while many people firmly believe the scientific evidence that human-caused climate change is ruining the planet and our health, many others adamantly maintain that it is not a problem. Figuring out how to communicate the gravity of climate change has been an urgent puzzle for climate scientists and communicators (a topic I’ve written quite a bit about).

Collectively, we’re trying many different ways of communicating this issue. I especially love these videos by climate scientist Katharine Hayhoe and others by researcher M. Sanjayan with the University of California and Vox. Pope Francis also contributes to the scicomm effort — in 2015 he published an encyclical called Laudato si’: On Care for Our Common Home, which called for global action on climate change (he also gave a copy of this encyclical to Donald Trump when the two met recently).

Was Laudato si’ effective?

Did the document influence beliefs about the seriousness of climate change and its effects on the poor? Recent research by Asheley Landrum and colleagues took up this question.

The work is based on survey results from Americans — the same people reported their beliefs about climate change before and after the encyclical came out.

They found that the encyclical did not directly affect people’s beliefs about the seriousness of climate change or its disproportionate effects on the poor.

But… the encyclical did affect people’s views of the pope’s credibility on climate change, encouraging them to see him as more of an authority after the document was published than before. This was especially true for liberals, though, reflecting a sort of echo chamber effect: people who already believed climate change was a serious issue gave the pope more credit for his stances on it after he published the encyclical.

Importantly, these altered views of the pope’s credibility did in turn affect how much people agreed with the pope’s message on climate change. In other words, there wasn’t a direct effect from the publication of the encyclical to agreement with its message; instead, there was first an effect of the document on beliefs about the pope’s credibility, and then an effect of those credibility assessments on agreement with the pope’s message.
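To illustrate the logic of this kind of indirect effect, here’s a minimal mediation sketch on simulated data. The variable names and effect sizes are hypothetical and this is not the authors’ actual model; it just shows how an effect can flow entirely through a mediator (credibility) with no direct path.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data: reading the encyclical boosts perceived credibility
# (path a), and credibility boosts agreement (path b), with no direct
# encyclical -> agreement path. All names and numbers are made up.
encyclical = rng.integers(0, 2, n).astype(float)
credibility = 0.5 * encyclical + rng.normal(size=n)
agreement = 0.6 * credibility + rng.normal(size=n)

# Path a: regress credibility on the encyclical.
a = np.polyfit(encyclical, credibility, 1)[0]

# Paths b and c': regress agreement on credibility and encyclical jointly.
X = np.column_stack([np.ones(n), credibility, encyclical])
beta, *_ = np.linalg.lstsq(X, agreement, rcond=None)
b, c_prime = beta[1], beta[2]

print(f"indirect effect a*b = {a * b:.2f}")   # ~0.30: flows through credibility
print(f"direct effect c'   = {c_prime:.2f}")  # ~0.00: no direct path
```

The pattern the study describes matches the second case: the direct path was essentially null, while the paths through credibility were not.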


This work reminds us that science communication efforts can’t be considered in isolation. Whether people agree with a message is influenced by factors like their political beliefs and the credibility of the source. This point calls for two directions for future scicomm: for one, communicators should do their best to consider their message and audience holistically — what factors are likely to shape an audience’s receptiveness to a message, and how can those be influenced? This work also reminds us that we need more research on the science of science communication. We need to continue working to understand how people perceive scientific issues and communicators, and how they respond to the scicomm they encounter.


Featured Image: Korea.net / Korean Culture and Information Service (Jeon Han)

Depression & its metaphors

Depression is high on my Incredibly-important-topics-that-we-humans-struggle-to-make-sense-of list (with a list, at least, I can impose some order on things that are nearly impossible to grapple with otherwise). It’s prevalent — the Anxiety and Depression Association of America estimates that, within a single year, 6.7% of American adults over age 18 struggle with depression.

And depression is really hard to wrap our heads around, whether we’re experiencing it or not. For one, we can’t really see it — people with depression look just like people without. Depression continues to challenge scientists — why do some people experience depression while others don’t? Why do some treatments work for certain patients and not for others?

Even though it’s a challenge to really understand depression, we still find ways to make some sense of it. One of those ways is through metaphor. We describe depression in terms of things we have more concrete experience with — enemies in war, darkness, lowness, or burdens, for example. Metaphors can both reflect and shape our thoughts about concepts like depression. Research gives some glimpses into the relationship between the metaphors we use to describe depression and how we think about it or actually feel. But there’s still so much we don’t know about depression metaphors and their effects on cognition. Here I present some of what research does reveal, sprinkled with my own speculation.


Depression as down

Depressed by Sander van der Wel. CC BY-SA.

In English, it’s common to talk about sadness as feeling down. Depression is not the same as sadness, but it’s also often associated with being physically low. One reason might be that when we’re feeling sad or depressed, our bodies are often slouched or closer to the ground than when we’re feeling great. This idea that our natural bodily experiences, for example hanging our heads or looking down when we feel depressed, can give rise to conventional linguistic metaphors like “down in the dumps” is called Conceptual Metaphor Theory (CMT). CMT’s main claim is that linguistic metaphors reflect the way we already think about concepts. In other words, we say we need a “pick-me-up” because in our minds, depression is down in the same way that the ground is down. The literal and figurative meanings of depression are associated in the mind.

Depression as an enemy

Depression can be something that people fight or slay, or they may be beaten or attacked by it. On one hand, this might be a productive framing, since it implies that people can do something about their situation. They can choose the most appropriate weapons in their arsenal to strike back against their depression enemy.

But for the same reason, the enemy frame may be counterproductive. Many aspects of depression are outside a person’s control, which can be especially hard to understand if we haven’t personally experienced it. In this case, suggesting that a person should fight harder or better may backfire, implying that someone who doesn’t seem to be able to beat depression is too weak. There’s a fine line here.

Depression as darkness

Darkness is another common metaphor for talking about depression. The origins may be similar to those of talking about depression as being down. When it’s cloudy or rainy, we often feel more blah than when the sun is out (this association is especially relevant for people with Seasonal Affective Disorder).

Again, talking about cloudy and rainy days as a basis for understanding depression conflates very normal sadness with depression, and clinical depression is so much more than a cloudy day feeling. But our experiences with dark, cloudy days may create a foundation for thinking about both common sadness and depression. Even when we’re indoors, dark spaces are often associated with fatigue and negative feelings.

Depression as a physical burden

It’s also common to think of depression as a physical burden we carry around. Fortunately, burdens can be unloaded, and research has documented that positive experiences in psychotherapy can bring about a feeling of unloading a depression burden.

In fact, dynamicity seems to be a productive feature of many depression metaphors. Just as a burden can be unloaded, things that are dark can be brightened, low can be lifted, and enemies can be defeated (or become allies). Even though these metaphors are quite different on the surface, they also have similarities and seem to be compatible with each other.

Depression as multiple things at once

We mix metaphors naturally in speech. Referring to depression as an enemy, for example, doesn’t mean I won’t then refer to it as being down or dark (or that someone else I’m conversing with won’t, as you can see in the short Twitter exchange that follows):

This next tweet also shows a blend of multiple metaphors. The text refers to hitting “rock bottom,” while the image shows a dark cloud. The words and image “say” different things, but we naturally integrate them into one mental image of a low, dark depression.

And depression can be a dark enemy:

[Screenshot: a tweet describing depression as a dark enemy]

Or a dark physical burden:

These mixed metaphors aren’t confusing at all. We don’t feel that the simultaneous depiction of depression as both dark and a burden creates a conflict, because depression can be both things at once. Metaphors don’t create rigid structures that define our thoughts; instead, they create templates for thinking that can be overlapped and mixed and matched.

Are there ideal metaphors for depression?

On the whole, probably not. Each person will have their own preferred metaphors for making sense of depression. It’s key to be mindful of the metaphors you encounter and produce, and to evaluate what they imply about mental illness. And if no conventional metaphors seem fitting, you might try designing your own or considering Bruce Springsteen’s comparison of depression to a car:

I always picture it as a car. All your selves are in it. And a new self can get in, but the old selves can’t ever get out. The important thing is, who’s got their hands on the wheel at any given moment?



4 years later: Why write this blog?

My blog is 4 years old today!

My first post was called Why write this blog? I was a fourth year undergrad, less than a week from graduation. Now I’m a fourth year PhD student, and uh… more than a week from graduation. Are the reasons I began blogging the same as the reasons I continue to blog?

Why I started blogging 4 years ago

I love to write, but writing in an online forum, where anyone could be sitting at their computer reading my thoughts, has always made me feel too exposed and vulnerable. Time to get over that, especially since I hope to go into academia, where putting myself “out there” will be a key to success.

There’s a funny paradox: on the one hand I want people to read what I write, but on the other, it can be paralyzing to actually think about people reading it when I’m trying to get words out. I deal with this by imagining that only a few people will read. I imagine someone who’s my quintessential audience. Usually, that’s my mom (you’re reading, right, mom?!). She’s educated and curious, but she’s not a cognitive scientist. She’s my ideal reader. So I imagine my mom reading, and no one else, and I just go with it.

It’s still hard to express your thoughts when you think smart people are listening and might criticize them. It took me a long time to be able to do this in person — in group meetings and talks — and I have no idea if my blog helped me with that. Throughout grad school, my relationship with criticism has evolved. Criticism is almost always an opportunity to improve my work, and actually has very little to do with me as a person. When I think of it that way, criticism is something to seek out, not to avoid.

I want to keep learning, reading, and thinking about thinking, and I think the best way to do this is to collaborate as much as possible. I’ve loved having frequent opportunities during college for cog sci dialogues with so many people, and I don’t want to give those dialogues up.

Occasionally people engage with my posts in the comments or on social media, and it’s great to have those conversations. But realistically, this blog is mostly a monologue. It’s not the ideal platform for dialogue that I had hoped, but that’s ok.

I want to be a better reader, writer, and thinker, and this link convinced me that a blog is probably a good way to achieve that goal. In it, Maria Konnikova writes:

“What am I doing but honing my ability to think, research, analyze, and write—the core skills required to complete a dissertation? And I’m doing so, I would argue, in a far more effective fashion than I would ever be able to do were I to keep to a more traditional academia-only route.”

Spot on.

Why I still blog today

When I started blogging, I couldn’t entirely anticipate what my blogging experience would actually be like. Four years later, I may have even more reasons to blog than I did when I started.

My blog is somewhat of a lab. I can try things out – like a vlog, an infographic, or that megaphone graphic above. Do they make my posts more engaging? I don’t know, but I’m testing them out. If they flop, no harm done. I experiment with different topics, and I can use metrics that show me numbers of page views and how those readers got to my site for a rough idea of what resonates with people and how they’re finding my blog.

My blog also acts as an archive. It documents events like conferences and workshops I’ve attended, getting married, and the 2016 Presidential election, all through the lens of language and thought. My past posts help me recommend a book to a friend or find a paper I know I liked but can’t remember why. And it gives me ample opportunities for laughing at my past self. Like did it occur to me that acknowledging my college graduation by writing a post about euthenics at Vassar was maybe a bit perverse??

And I blog because it’s fun. It’s challenging, and it’s creative, and I make the rules. Some of my motivations might be unique, but it turns out I’m not alone in blogging “for the love of words.” In a recent post on her blog, From the Lab Bench, Paige Jarreau compares science bloggers’ reasons for blogging to Orwell’s “four great motives for writing.”

I’m a long-range thinker, but I don’t think I would have predicted when I clicked the “PUBLISH” button for the first time that I’d be clicking it for many of the same reasons four years later.

Questions you never knew you had about doing a PhD

A few times a month, I receive an email from Quora, a site where people ask questions and people with background on that topic weigh in. My Quora digest has questions they suspect I might be interested in. They’re almost always about doing a PhD. Here are some of the most intriguing ones. The responses are often thorough (long), so I’ve linked to them and included pieces from my favorites here.

Is pursuing a PhD as stressful as a full time job? Or more?

TLDR: It depends.

Ravi Santo noted that a PhD is likely a different kind of stress than a typical 9-to-5 job, and the stress varies based on which phase of the PhD you’re in. He describes a whirlwind phase (coursework), followed by a crunch (qualifying exam, or whatever the program requires to count as having achieved a Master’s degree), the plan (proposing the dissertation), and finally discovery (analyzing and writing the thesis).

Kyle Niemeyer pointed out that unlike many 9-to-5s, PhD students (or academics more generally) don’t usually leave work at work. They don’t stop what they’re doing because it’s 5:00 or Friday, and having your work follow you everywhere can be stressful. But on the flip side, academics often enjoy more flexibility in their schedules. The virtue is also the vice.

Some people weighed in saying a PhD is definitely more stressful, while others said they miss the glorious days of writing a PhD, when they had a single primary objective, as opposed to life in their post-PhD jobs with many responsibilities. We’ll agree to disagree and move on.

What is a depressing fact you’ve realized after/during your PhD?

TLDR: There are a lot. I’ll list some that seem to recur.

It’s been said that writing a dissertation is like giving birth — French feminist Helene Cixous even posited that men write as a way of replacing reproduction.

But there’s a big difference between the two. After you have a baby, people want to see the baby and ask about it, and think it’s cute; whereas after you’ve slaved over your dissertation and defended it, no one will ever want to see it or hear about it.
-Ken Eckert

Other responders mentioned competition, starting to hate the subject you once loved, and, maybe most commonly, that it’s incredibly hard to obtain a tenure-track job afterward. In some cases, hard work isn’t enough to achieve success, whether because you need to rely on other people (especially advisors), or because you’re not at a prestigious university, or simply because experiments and lines of research are just sometimes not fruitful.

This segues well into another question:

Why is it so difficult to do a PhD?

TLDR: Research

Leading up to grad school, education is based on a model where students are taught information and are subsequently given questions about that information to answer. Once you start a PhD, however, you have to first find the problem, then figure out the best way to address it, and then actually do it.

  • You may find a problem, but it may not be solvable, so you will need to iterate through multiple attempts to find a problem
  • The problem may be solved by someone else while you work on it! (so, you need to start from scratch)
  • There is a solution, but it is hard to find and you have to make a call: do I keep trying or do I give up?
    -Konstantinos Konstantinides

Others pointed out that successful PhD students need to be patient, courageous, focused, and persistent. Come on, that’s not that much to ask for…

How do top/successful PhD students lead their lives?

They drink coffee and write blog posts, obviously.

The responses to this question share a common theme — successful PhD students are thoughtful about their research. They don’t rush into a project, but carefully consider a topic first. And when they design studies, they focus on those that measure a lot of things (collect a lot of data), to increase the chances that they’ll have usable results, no matter how the data turn out.

What happens after a PhD?

TLDR: It depends.

“I think also, once you’ve seen the sausage being made, you see how arbitrary the point at which you get a Ph.D. is” -Ben Webster

It’s often anti-climactic. Some people report their minds going blank, or their parents celebrating more than they themselves did, or making sure the first thing they did was pick up a fiction book. Ultimately, Krishna Kumari Challa comments that what happens after a PhD is “Simple: What you decide would happen!”

I have some experience with the topics of all of these questions except this last one. I believe there might be such a thing as post-PhD life, but it’s hard to picture right now as I’m deep in my fourth year. For now, I’ll rely on these Quora contributors and will report back later.

What other important PhD questions do you have? Let’s ask the Internet!