Vaccinating, metaphorically and literally

There’s a lot of bad (either misleading or blatantly false) science information on the Internet. Science communicators often try to combat the bad content by dumping as much accurate information as they can into the world, but that strategy is not as effective as many would hope. One reason is that social circles on the Internet are echo chambers: people tend to follow like-minded others. Scientists and science communicators follow each other, and skeptics follow each other, so we rarely even hear what others outside our circle are talking about. Plus, when we do encounter evidence that contradicts our beliefs, we tend to discount it and keep believing what we already did.

A recent study by Sander van der Linden, Anthony Leiserowitz, Seth Rosenthal, and Edward Maibach (which I recently wrote about) offers a glimmer of hope for this science communication trap: communicators may be able to “vaccinate” their audiences against misinformation. The researchers found that when people are warned about the kinds of tactics that opponents of climate science deploy, they’re less likely to be swayed by the misinformation. This finding offers some hope in a time when the proliferation of fake and misleading science information seems inevitable. Scientific facts, along with a heads up about anti-scientific strategies, can help people better evaluate the information they receive and form evidence-based beliefs and decisions.

Does this apply to other scientific issues? Can we vaccinate against anti-vaccination rhetoric?

I don’t know. But I’d like to find out. In order to design a communication that alerts people to anti-vaccine messages they might encounter, it’s important to understand anti-vaccine tactics. I explored some very passionate corners of the Internet (videos, discussion threads, and blog posts by anti-vaccine proponents) for a better understanding. Here are the anti-vaccine tactics I found, many of which are described in this SciShow video:

Ethos: Appeal to Authority

First, note that this immunologist isn’t explicitly saying that children shouldn’t be vaccinated. But the quote implies as much. I don’t know if that’s actually her belief, but regardless, as a consumer of this image, I do get the sense that she looks pretty smart (#educated, in fact), and maybe she knows what she’s talking about…

Jargon

[Screenshot: anti-vaccine ad listing chemical ingredients]

There are four chemical names in the first five lines of the ad above. It sounds like whoever wrote it must really know their science. The message implies that the author has deep scientific knowledge about the chemicals mentioned and wants to warn you of their presence in vaccines. Paired with our society’s tendency to believe that all things “natural” are good and all things “chemical” are enemies, this jargon-wielding author can come across as someone worth listening to. Most of us (and I am definitely included here) don’t know much, if anything, about those chemicals — how do they work? Are they actually dangerous in the doses found in vaccines? This jargon paves the way for persuasion through the naturalistic fallacy — the idea that all natural things are better than non-natural things.

Logos: Appeal to “Logic”

Logical fallacies

A logical fallacy is faulty logic disguised as real logic, and it’s another common tactic used by anti-vaccine proponents. In the Tweet above, the author presents two facts and implies that they’re connected (that an increase in mandatory vaccines caused America to fall from 20th to 37th in the worldwide ranking of infant mortality rates). But just because America “lost ground” in this ranking doesn’t mean our mortality rate went up — it’s likely that many other nations’ mortality rates went down. Plus, there are many factors beyond the number of mandatory vaccines that influence infant mortality rate, and the Tweet supplies no evidence that vaccines and mortality are related. They’re just two pieces of information, placed next to each other to give a sense of a causal relationship.
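To make the ranking point concrete, here’s a toy sketch in Python with made-up numbers (not real mortality statistics): a country’s rate can improve while its rank gets worse, simply because other countries improved faster.

```python
# Toy numbers for illustration only -- not actual infant mortality data.
rates_then = {"Country A": 9.4, "Country B": 9.0, "USA": 9.2, "Country C": 10.1}
rates_now = {"Country A": 3.1, "Country B": 2.8, "USA": 5.8, "Country C": 4.0}

def rank(rates, country):
    """Rank 1 = lowest rate (deaths per 1,000 live births)."""
    return sorted(rates, key=rates.get).index(country) + 1

print(rank(rates_then, "USA"), rates_then["USA"])  # rank 2 at a rate of 9.2
print(rank(rates_now, "USA"), rates_now["USA"])    # rank 4 at a rate of 5.8 -- better rate, worse rank
```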

There are lots of ways logic can be distorted to suggest that vaccines are bad. One that really stands out to me is the question: if vaccines work, why should we care whether some children are not vaccinated? After all, they’ll be the ones who get sick… why does it concern the rest of us?

It does concern us. For one, no child should end up with a paralyzing or fatal disease because a parent chose to disregard scientific consensus. Beyond that, one person’s choice not to vaccinate directly affects others — for example, people who CAN’T be vaccinated for health reasons. If everyone else receives vaccines, the one person who cannot is safe thanks to “community immunity.” But if others stop receiving those vaccines, the person who had no choice but to remain unvaccinated becomes susceptible. That person is unjustly put in danger by others’ choices.
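As a rough illustration of how community immunity works, here’s a minimal Python sketch using the standard textbook approximation for the herd immunity threshold, 1 - 1/R0. The R0 values are rough, commonly cited figures I’m assuming for illustration, not numbers from this post.

```python
# Standard approximation: an outbreak can't sustain itself once the immune
# fraction of the population exceeds 1 - 1/R0, where R0 is the average number
# of people one case infects in a fully susceptible population.
def herd_immunity_threshold(r0: float) -> float:
    return 1 - 1 / r0

# Rough, commonly cited R0 values (assumed here for illustration).
for disease, r0 in [("measles", 15), ("polio", 6), ("seasonal flu", 1.5)]:
    print(f"{disease}: roughly {herd_immunity_threshold(r0):.0%} of people need to be immune")

# If coverage slips below these levels, people who can't be vaccinated for
# medical reasons lose the protection the rest of the community provides.
```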

Pathos: Appeal to Emotion

Fear

Fear is a powerful motivator. Appeals to ethos and logos can work together to have an emotional effect. Parents just want to do their best for their kids, so messages that strike up fears about the harms of vaccines have a good chance of swaying them.

One way of drumming up fear is to promote vaccine proponents as bullies, as this article demonstrates:

Screen Shot 2017-04-30 at 1.08.20 PM.png

Yeah, that description sounds pretty scary to me… (Breakdown Radio; link to article)

Considerations for inoculation messages

Of course, I’ve just scratched the surface of the tactics that anti-vaccine proponents use (you can get an idea of some others in Dr. Doom’s post on how the anti-vax movement uses psychology to endanger us). Messages that vaccinate against misconceptions have to walk an extremely fine line. The goal of such a message is to foreshadow misleading messages a person may encounter and to point out the reasons those messages should be reconsidered.

Inoculation messages might be useful when they introduce new information, but they also need to be proactive, anticipating anti-vaccine rhetoric and alerting people to its flaws. There are a few dangers in doing so, though. For one, it often requires repeating the misconception, and research shows that doing so can backfire and reinforce the inaccuracy instead. In addition, pointing out flaws in an argument that someone might be prone to believing can alienate that person. If the warning message isn’t constructed conscientiously (for example, if it suggests that seeing through the misleading information is a no-brainer), it can imply that anyone who might believe the misconceptions is an idiot. A message like this will make some members of the audience feel defensive (wow, am I an idiot? No, I can’t be an idiot. Maybe the author of this message is the idiot…).

That doesn’t mean that inoculation messages can’t be effective. We have some evidence to suggest they can, and I think there’s a lot of room to continue honing this strategy. The first step in a successful inoculation message is to uncover the tactics used by those who misrepresent the science. Then it’s important to raise awareness of those tactics without alienating the audience and while being careful not to repeat the misinformation in a way that can be construed as reinforcing it.

Communicators can keep in mind that anti-vaccine messages often attempt to establish authority, tap into emotions, and apply misleading logic in order to convince people of their message. By anticipating these strategies, we can have greater success in counteracting them and promoting vaccines as the life-saving technologies they are.


The Pope’s #scicomm: Effects of Laudato si’ on beliefs about climate change

Climate change is an extremely polarized issue: while many people firmly believe scientific evidence that human-caused climate change is ruining the planet and our health, many others adamantly maintain that it is not a problem. Figuring out how to communicate the gravity of climate change has been an urgent puzzle for climate change scientists and communicators (a topic I’ve written quite a bit about).

Collectively, we’re trying many different ways of communicating this issue. I especially love these videos by climate scientist Katharine Hayhoe and others by researcher M. Sanjayan with the University of California and Vox. Pope Francis also contributes to the scicomm effort — in 2015 he published an encyclical called Laudato si’: On Care for Our Common Home, which called for global action on climate change (he also gave a copy of this encyclical to Donald Trump recently when the two met).

Was Laudato si’ effective?

Did the document influence beliefs about the seriousness of climate change and its effects on the poor? Recent research by Asheley Landrum and colleagues took up this question.

The work is based on survey results from Americans — the same people reported their beliefs about climate change before and after the encyclical came out.

They found that the encyclical did not directly affect people’s beliefs about the seriousness of climate change or its disproportionate effects on the poor.

But… the encyclical did affect people’s views of the pope’s credibility on climate change, encouraging them to see him as more of an authority after the document was published than before. This was especially true for liberals, though, reflecting a sort of echo chamber effect: people who already found climate change to be an issue gave the pope more credit for his stances on climate change after he published the encyclical.

Importantly, these altered views of the pope’s credibility did in turn affect how much people agreed with the pope’s message on climate change. In other words, there wasn’t a direct effect from the publication of the encyclical to agreement with its message; instead, there was first an effect of the document on beliefs about the pope’s credibility, and then an effect of those credibility assessments on agreement with the pope’s message.


This work reminds us that science communication efforts can’t be considered in isolation. Whether people agree with a message is influenced by factors like their political beliefs and the credibility of the source. This point suggests two directions for future scicomm. For one, communicators should do their best to consider their message and audience holistically — what factors are likely to shape an audience’s receptiveness to a message, and how can those be influenced? For another, we need more research on the science of science communication. We need to continue working to understand how people perceive scientific issues and communicators, and how they respond to the scicomm they encounter.


Featured Image: Korea.net / Korean Culture and Information Service (Jeon Han)

Philip Guo & I talk about scicomm

The summer before I started grad school, I scoured the Internet for first-person accounts of what it’s really like to be a PhD student. I had just committed to doing a PhD in Cognitive Science at UCSD and figured that would be a good time to find out what I was in for.

Philip Guo‘s PhD memoir, The PhD Grind, was the most satisfying – check out my earlier post with reflections and favorite quotes to learn more about his free e-book. Just a couple of years later, Philip came to UCSD Cognitive Science as a professor, where he does research on human-computer interaction, online learning, and computing education.

He also creates some podcasts – “video interviews of interesting people [he] know[s].” I somehow fell into that category, and Philip and I had a fun conversation about science communication. We touched on the science of science communication, the blogging seminar I’m co-teaching, and how I discovered and pursued science communication.

You can read more and watch our conversation on Philip’s site.

The Language of Twitter

Technology is well-known (at least in linguistics circles) for giving rise to new language. New technologies require new words, but those words are often quickly repurposed across parts of speech. For example, we can receive an e-mail (noun), but we can also straight up e-mail (verb) someone, and I think I’ve heard people refer to e-mail (adjective) messages (those are probably people who grew up with some other kind of message for a while before they were introduced to e-mail, though). Similarly, we have text (a group of words), a text (noun – a book, or, more recently, a text (adjective) message), and we can definitely text (verb) people. Instead of creating separate nouns, adjectives, and verbs for new technology concepts, we often create one word and use it for whatever parts of speech we need.

Twitter language

Social media platforms tend to also have their own niche linguistic habits. Twitter and Twitter users have introduced lots of new terms – for example the verb tweet as a thing humans can do while at a computer (with its accompanying noun — the tweet). Tweet is “productive,” in the linguistic sense that it can be combined with other morphemes (meaningful word parts) to make new words: there are retweets, subtweets, and tweetups.

[Screenshot]
2010, seriously!?

Of course there’s also the expansion of the word hashtag (into something people now say verbally preceding pretty much anything they want). In fact, the primary definition of hashtag seems to be the Twitter sense now, with the actual symbol taking on the secondary definition.

[Screenshot: dictionary definition of hashtag]

Plus, Twitter’s strict character limit encourages lots of esoteric abbreviations, bringing about new elements of language. Sometimes, scrolling through my Twitter feed, I’m reminded of the experience of translating sentences from Latin — I’d figure out pieces one at a time, not necessarily in a logical order, and put them together, hoping to reveal something meaningful.

Lately I’ve noticed a few especially cool linguistic inventions on Twitter. I think they result partly from character restrictions and partly from the fact that, even though most people’s Tweets are public for anyone on the Internet to read, conversations often take place among people with a lot of common ground. They may not even know each other IRL, but they follow similar people, communicate about similar topics online, and maybe share some background experiences.

First, an important caveat: the people I follow on Twitter are not representative of the population of Twitter users. When I compare my Twitter followers to all Twitter users, there are some pretty striking differences. For example, a greater percentage of my followers are between the ages of 25 and 34 than in the Twitter population at large.

[Screenshot: follower age demographics compared to all Twitter users]

Similarly, my followers are much more interested in a handful of related topics than the whole Twitter population:

[Screenshot: follower interests compared to all Twitter users]

These demographics should provide some context for the linguistic innovations I experience on Twitter.

#NotAllMen

First, the nature of hashtags on Twitter has kind of coerced these three words into one, since the tag usually appears as #notallmen, without caps to distinguish the component words. #Notallmen means what it sounds like. When someone says something negative about men, someone else might reply with the reminder that not all men (#notallmen) are sexist (or whatever the original claim was — usually sexist). But I usually see #notallmen take on a more meta meaning, a way of pointing out that replying to some instance of sexism with “not all men” distracts from and avoids the problem (i.e., “Men who disguise their own hurt under #notallmen – into the bin with you”). Here, #notallmen is a noun.

But it can also be an adjective: “In my dream last night I was dating a #NotAllMen boy I went to high school with…”, “walk off your #notallmen instincts dude”, and “I wish guys put all of their angry ‘#NotAllMen!’ energy into just.. actually not being one of those men.” I know there must be verb uses of #notallmen out there, but I’ve yet to stumble upon one…

One other cool thing is that I see #notallmen in lots of foreign-language tweets — for example “Pero en este punto los hombres se vuelven víctimas y debemos dedicarnos al #notallmen para no herir a aquellos que “aman a las mujeres”.” To my eye, that looks like: “Spanish Spanish Spanish #notallmen Spanish.” (If you’re interested, Twitter translates it as: “But at this point the men become victims and we must dedicate ourselves to the #notallmen to not hurt those who “love women”.”)

#WellActually

#WellActually is #NotAllMen’s cousin. I admittedly don’t always understand how people are using it, but I often see it used to indicate that someone (most often a man) is correcting someone else (most often a woman). Sometimes it’s used to call out a man-splainer (as the man-splainer is likely to say “well, actually…” to a woman), but I’ve also seen it used to refer to correcting people in general: “I got to #wellActually one of the people interviewing me and it felt gooooooooodddddddddd” or “sorry to #wellactually.”

Like many of the other terms I’ve described, #WellActually can take on whatever part of speech its user needs. It’s often a verb (“Got a BALD MAN in my mentions trying to #WellActually me”), but can also be a noun (“Cue the glasses being pushed up and the ‘#WellActually'”) or an adjective (“Alright, #wellactually twitter. I see you never waste any time.” or “#WellActually twitter came really hard at the people trying to revel in the magnitude of this upset, huh?”). Well actually, I’m not completely convinced that #WellActually is describing Twitter in that second example. It might be an instance of using the hashtag for the actual words “well” and “actually,” which are… an interjection and an adverb? Someone can #WellActually me if that’s not right.

I love the content that I find on Twitter, but I can’t help paying attention to the way people package the content — which words they use and how they use them. The more I pay attention, the more I remember that people are clever, and language is one of the many ways they let that cleverness out.

Hurdles to Communicating Science & Strategies to Overcome Them

Communicating science is hard in part because doing and understanding science is hard, but there are also some unique hurdles that science communicators face — especially when communicating information that’s relevant for policies. James Druckman recently described some of the challenges that particularly face people communicating policy-relevant science, and ways those challenges can be minimized.

Value-laden diversity

We all have values, relatively unchanging beliefs that reflect the way we see the world. For example, some people are more “individualist,” while others are more “communitarian.” If scientific information seems to contradict a value, we’ll be hesitant to accept that information. Information about climate change, especially if it contains or implies suggestions for reducing the problem by increasing regulations on businesses, might contradict an individualist’s values, making it hard for that person to even consider the scientific information. For more on this hurdle, see Dan Kahan’s work on The Science of Science Communication, an earlier post on this blog about Kahan’s work, or a great post by Chris Mooney on Mother Jones.

What to do about it

First, communicators have to recognize that their audience will have a diverse set of values, some of which will conflict with the communicator’s values, and that these values will influence the way people receive scientific information.

Next, communicators should minimize the extent to which their message contains a value commentary. In other words, they should make sure the relevant science comes into play for policy decisions without dictating what counts as a “good” or “competent” decision.

Hurdles start. by robert voors. CC BY-NC-ND.

Motivated Reasoning

Motivated reasoning (or confirmation bias) is our drive to seek information that reinforces our prior beliefs and disregard information that does not. For example, in work by Druckman & Bolsen (2011), participants initially indicated their support for genetically modified (GM) foods. After 10 days, all participants received three types of information: positive information about how GM foods combat diseases, negative information about their possible long-term health consequences, and neutral information about their economic consequences.

People who initially supported GM foods dismissed the negative information and rated the positive information as valid, and perceived the neutral information as indicating benefits of GM foods. People who were initially opposed to GM foods did the exact opposite: dismissed the positive information, considered the negative information as valid, and interpreted the neutral information as indicating drawbacks of GM foods. Work on motivated reasoning shows we interpret information through a lens laden with our prior beliefs.

Last of the crop. By Mrs eNil. CC BY-NC-ND.

There have been lots of great articles highlighting motivated reasoning lately. These include Why Facts don’t Change our Minds, This Article won’t Change your Mind, and Why You Think You’re Right, Even When You’re Wrong.

What to do about it

When motivated reasoning occurs, people are motivated to understand information in a way that aligns with their previous beliefs. Instead, science communicators want to motivate their audience to understand new information in a way that will lead to maximum accuracy. There are a few things communicators can do to encourage people to seek accurate understandings:

  • Show that the issue and information matter for the individual’s life. Show relevance.
  • Present information that comes from a variety of sources, preferably ones with different goals (e.g., from both Democratic and Republican sources).
  • Encourage people to explain their position to others (or at least prepare to explain it). Elaborating on a position requires people to think it through more carefully and to provide explicit evidence for their claims that goes beyond “because I want to believe this.”

Politicization

This term does not mean what we might expect given its name. Politicization is the use of “the inevitable uncertainties about aspects of science to cast doubt on the science overall…thereby magnifying doubts in the public mind” (Steketee 2010, p. 2). It’s not exactly misinformation, since it doesn’t introduce false findings; instead, it magnifies doubt. It’s especially common in debates about global warming and vaccination. People who politicize these issues send the message that scientific evidence on these issues is not as conclusive as it’s been made out to be.

What to do about it

Politicization arises when people perceive scientists or other informants as lacking credibility and when they’re motivated to reason in ways consistent with their prior beliefs. Thus, politicization can be countered by addressing those hurdles – establishing credibility and encouraging an accuracy motivation. There are a couple of other things we can do to overcome this hurdle:

  • Warn people about politicization they’re likely to encounter before they encounter it. This is sometimes referred to as an inoculation message, and it points out the strategies politicizers use and why their message is not to be trusted.
  • Correct politicized messages after people have encountered them. Corrections are often not as effective as inoculation messages since people may have already had time to process and begin to believe the politicized message. However, corrections can be effective when people are motivated to reach an accurate understanding.

There’s more

Of course, these aren’t the only hurdles to communicating policy-relevant science. Other hurdles described by Druckman that I haven’t elaborated on include the effort required of scientists, getting and maintaining attention, establishing credibility, and overcoming government inaction.

More and more scientists are recognizing the value of communicating their science outside the Ivory Tower. At the same time, the science of science communication is advancing to help us all understand the hurdles we face and how to best overcome them.

Scientists Agree on Climate Change: A Gateway Belief

 

[Screenshot: https://climate.nasa.gov/scientific-consensus/]

It doesn’t get much clearer. The Earth’s climate is warming. Humans are the reason. But how many people are actually aware of the scientific consensus on this issue?

Research by Sander van der Linden and colleagues shows that when people believe that scientists overwhelmingly agree about climate change, they increase their (a) own belief that climate change is happening and (b) belief that humans are responsible. They also feel (c) more worried about climate change, and as a result of (a), (b), and (c), they become more supportive of public action to mitigate the effects of climate change.

[Figure: journal.pone.0118489.g001]

At the beginning of the study, participants indicated the percentage of scientists they thought agree on global warming, and they answered some questions about their own climate change beliefs. People then received a message about scientific consensus, which took the form of either (a) a simple description, (b) a pie chart, or (c) a metaphorical comparison to trusting engineers’ consensus about bridges (i.e., if 97% of engineers agreed that a bridge was unsafe, would you use it?) or doctors’ consensus about illness. All the messages included the information that “97% of climate scientists have concluded that human-caused climate change is happening.”

Then participants again indicated what percentage of scientists they thought agree on global warming and answered questions about their own beliefs. All the messages “worked,” in the sense that people perceived greater scientific agreement after reading that 97% of scientists agree than if they hadn’t read anything about the consensus at all. The simple description and the pie chart were more effective than the metaphor, though: people shifted their climate change beliefs more after encountering one of the more straightforward messages than after the more complex metaphor. Great food for thought, as many science communicators insert metaphors wherever they can.

Of course, having people believe that there’s strong scientific consensus about climate change is only one step toward the larger goal of having them endorse actions that mitigate its effects. But in follow-up analyses, the researchers found that perceiving scientific agreement is a gateway belief: believing that scientists agree about global warming led to other beliefs, ones that get us closer to the goal of supporting action to mitigate climate change. Specifically, it led to greater belief that climate change is real, human-caused, and worrisome. These beliefs, in turn, led to greater support for public action against climate change. It’s often hard to know what leads to what, especially when it comes to beliefs we keep hidden in our own heads, but with some semi-fancy math, these researchers quantified those relationships.
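That semi-fancy math is mediation analysis. As a rough sketch of the basic idea (with simulated data and illustrative variable names I made up, not the paper’s actual model or numbers), you can estimate the indirect path from the message to perceived consensus to support for action:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data for illustration only.
rng = np.random.default_rng(0)
n = 1000
message = rng.integers(0, 2, n).astype(float)                # 0 = control, 1 = consensus message
perceived_consensus = 70 + 15 * message + rng.normal(0, 10, n)
support_for_action = 2 + 0.03 * perceived_consensus + rng.normal(0, 0.5, n)

# Path a: effect of the message on perceived consensus.
a = sm.OLS(perceived_consensus, sm.add_constant(message)).fit().params[1]

# Path b: effect of perceived consensus on support, controlling for the message.
X = sm.add_constant(np.column_stack([message, perceived_consensus]))
b = sm.OLS(support_for_action, X).fit().params[2]

print(f"Estimated indirect (mediated) effect of the message on support: {a * b:.3f}")
```

The idea is that the message moves the mediator (perceived consensus) and the mediator moves the outcome; the product of the two paths quantifies the “gateway” effect.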

Climate 365 by NASA Goddard Space Flight Center. CC BY.

These studies have some clear takeaways for science communicators, especially when communicating about climate change (maybe these ideas apply to other topics too — we need more research!):

  • Emphasize scientific consensus, that an overwhelming percentage of scientists agree that climate change is a real problem caused by human activity.
  • Don’t worry so much about immediately pushing for public action against climate change. When people understand that scientists agree, they come to agree themselves that climate change is a problem that should be addressed, and THEN they come to support public action. Be careful about skipping steps.

At the same time, there’s not only one right way to communicate about climate change. There are truly effective ways, ineffective and potentially backfiring ways, and many in between. There aren’t cut-and-dried rules, because every audience is unique, and taking the audience into account — their beliefs, values, and past experiences, for example — is crucial. But this work sheds light on communication strategies that are probably pretty far toward the “truly effective” end of the ways-to-communicate-climate-change continuum.

Reframing the war on science

America’s kind of tense right now. Leading up to and following the November 2016 election, there’s been a lot of talk of “the two Americas” and “the Divided States of America.” Americans are divided on a lot of issues, including scientific topics like vaccine safety and global warming. To many, it’s surprising that we disagree about these things, because according to the scientists who research these topics, there are no debates at all: vaccines do not cause autism, and humans are responsible for global warming.

At the same time, the current administration in the US has sent numerous messages that it devalues science (for example, by censoring scientists at organizations like the USDA and EPA and establishing a Committee on Vaccine Safety). Actions like these seem to be only fueling the divide in science beliefs.

In response, many people have declared that we’re in a war on science. The idea is expressed in headlines like Facts are the reason science is losing the current war on reason and How the Anti-Vaxxers are Winning, and in documentary titles like The Vaccine War. (There are so many pieces that talk about the war on science.)

[Screenshot: Scientific American]

I’m a PhD student in Cognitive Science and a firm believer in the scientific method and in basing beliefs and actions on evidence. I highly value scientific funding, vaccinations, and measures that reduce the effects of climate change. As Americans, we have freedom of speech, and we should exercise that freedom to speak up when scientific knowledge and interests are being trampled on. I agree with the ideas expressed in blog posts like The War on Facts is a War on Democracy and I’m a Scientist. This is what I’ll Fight for, and with many of the ideas that continuously populate Twitter hashtags like #defendscience and #resist. But I’m much less enthusiastic about the widespread use of a war metaphor to get those ideas across.

Here’s why.

Metaphors shape thought

The metaphors we use to describe complex social problems actually shape the way we think about them. For example, when crime was described as a beast ravaging a town, people tended to suggest harsh law enforcement policies — similar to how they’d likely react to a literal beast ravaging their town. On the other hand, when that same crime was described as a virus, people suggested fewer harsh enforcement policies. Instead, they turned their focus to curing the town of problems that may underlie the crime, like improving education and welfare services.

People make inferences in line with the metaphors used to describe complex issues, so it’s important to reflect on what the war on science implies. It does have some helpful implications: wars are serious, and they often require urgent action. These are probably the messages that those who invoke the war on science want us to infer, even if not consciously.

But the war metaphor also suggests that there are enemies and casualties: two sides locked in combat, neither backing down until it wins (or is decimated). I like this quote from A Gentleman in Moscow, a novel I just happened to be reading while working on this post: “After all, in the midst of armed conflicts, facts are bound to be just as susceptible to injury as ships and men, if not more so.” In other words, we sometimes do stupid things in wars. We shirk thoughtfulness and conscientiousness, and instead we just fight. As I see it, our current political situation (for lack of a better word) needs all the thoughtfulness and conscientiousness we can give it.

I recently expressed this concern in a conversation on Twitter.

The war metaphor challenges those who are not already on the “side” of science. It tells them they’re the enemy. When people feel that they’re being attacked, even idealistically, they’re likely to strengthen their stance and gear up to fight back. No matter how many scientists tweet about science or participate in the March for Science on Earth Day, people who have found themselves on the “anti-science” side of this war are not going to decide all of a sudden that climate change must be real after all or that they should rush their kids to the pediatrician for overdue vaccines (especially if we tell them we’re marching to fight the war on science!). People who have already been labeled as the enemy of science may as well go out and buy a new gas guzzler and decide that their kids are just fine without vaccines.

Others have pointed out that actions like the science march already risk casting anti-science proponents as opponents (for example, see Daniel Engber’s piece for Slate and Robert Young’s in the New York Times). Using war metaphors only threatens to hammer that point home.

This just doesn’t seem productive. Image: Battle by Thomas Hawk. CC BY-NC 2.0

Alternative frames?

If we want to stop thinking about ourselves as engaged in a war on science, we need an alternative. Proponents of and believers in science are experiencing a sort of struggle, but it doesn’t have to be a fight between the left and right, Democrats and Republicans, Coastal Elite and Middle America. Maybe we can reframe the situation as a challenge that unites all humans. Science communicators want to share how important it is to address climate change and to have children vaccinated for the good of all people. We can all be on the same side, working to better the world we live in, and it’s important that we convey that message in our communications.

Referring to the movie Hidden Figures, NPR blogger Marcelo Gleiser points out that if there is a central lesson in the movie, it is that united we win; that what makes America great is not segregation and intolerance, but openness and inclusiveness.

I considered the possibility that guiding people to trust empirical evidence and the scientific process might be better framed as a puzzle — a challenge, no doubt, but at least everyone’s working toward a common goal.

Marisa makes a really important point. The peacekeeper in me would love a frame that emphasizes “hey, guys! We’re all in this together!”, but that ship may have already sailed. At this point, it’s important not to downplay the gravity of discrediting and distrusting science. This is not a game.

 

I’ve had quite a few conversations about the war on science, but I still don’t have a one-size-fits-all framing suggestion for talking about Americans’ divided beliefs about science. But when we’re considering talking about this issue as a war, it’s helpful to step back and assess our goals and the potential consequences of the words we use.

Right now, there are deep social and political divides in American society — and though it’s crucial to stand up for what we believe in (especially science and facts!), we should be careful about taking up arms in a war on science that might deepen those divides. 

I welcome other comments on the framing of the war on science. Do you find the war helpful? Why? Are there other frames we could use to avoid deepening ideological divides?


Featured image: United States USA Flag by Mike Mozart. CC BY 2.0

Communicating climate change: Focus on the framing, not just the facts

How you package the information matters.
Frame image via www.shutterstock.com.

Rose Hendricks, University of California, San Diego

Humans are currently in a war against global warming. Or is it a race against global warming? Or maybe it’s just a problem we have to deal with?

If you already consider climate change a pressing issue, you might not think carefully about the way you talk about it – regardless of how you discuss it, you already think of global warming as a problem. But the way we talk about climate change affects the way people think about it.

For scientific evidence to shape people’s actions – both personal behaviors like recycling and choices on policies to vote for – it’s crucial that science be communicated to the public effectively. Social scientists have been increasingly studying the science of science communication, to better understand what does and does not work for discussing different scientific topics. It turns out the language you use and how you frame the discussion can make a big difference.

The paradox of science communication

“Never have human societies known so much about mitigating the dangers they faced but agreed so little about what they collectively know,” writes Yale law professor Dan Kahan, a leading researcher in the science of science communication.

Kahan’s work shows that just because someone has scientific knowledge, he or she won’t necessarily hold science-supported beliefs about controversial topics like global warming, private gun possession or fracking.

Instead, beliefs are shaped by the social groups people consider themselves to be a part of. We’re all simultaneously members of many social groups – based, for example, on political or religious affiliation, occupation or sexuality. If people are confronted with scientific evidence that seems to attack their group’s values, they’re likely to become defensive. They may consider the evidence they’ve encountered to be flawed, and strengthen their conviction in their prior beliefs.

Unfortunately, scientific evidence does sometimes contradict some groups’ values. For example, some religious people trust a strict reading of the Bible: God said there would be four seasons, and hot and cold, so they don’t worry about the patterns in climate that alarm scientists. In cases like this one, how can communicators get their message across?

A growing body of research suggests that instead of bombarding people with piles of evidence, science communicators can focus more on how they present it. The problem isn’t that people haven’t been given enough facts. It’s that they haven’t been given facts in the right ways. Researchers often refer to this packaging as framing. Just as picture frames enhance and draw attention to parts of an image inside, linguistic frames can do the same with ideas.

One framing technique Kahan encourages is disentangling facts from people’s identities. Biologist Andrew Thaler describes one way of doing so in a post called “When I talk about climate change, I don’t talk about science.” Instead, he talks about things that are important to his audiences, such as fishing, flooding, farming, faith and the future. These issues that matter to the people with whom he’s communicating become an entry into discussing global warming. Now they can see scientific evidence as important to their social group identity, not contradictory to it.

Let me rephrase that

Metaphors also provide frames for talking about climate change. Recent work by psychologists Stephen Flusberg, Paul Thibodeau and Teenie Matlock suggests that the metaphors we use to describe global warming can influence people’s beliefs and actions.

Ready for combat?
Thomas Hawk, CC BY-NC

The researchers asked 3,000 Americans on an online platform to read a short fictional news article about climate change. The articles were exactly the same, but they used different metaphors: One referred to the “war against” and another to the “race against” climate change. For example, each article included phrases about the U.S. seeking to either “combat” (war) or “go after” (race) excessive energy use.

After reading just one of these passages, participants answered questions about their global warming beliefs, like how serious global warming is and whether they would be willing to engage in more pro-environmental behaviors.

Metaphors mattered. Reading about the “war” against global warming led to greater agreement with scientific evidence showing it is real and human-caused. This group of participants indicated more urgency for reducing emissions, believed global warming poses a greater risk and responded that they were more willing to change their behaviors to reduce their carbon footprint than people who read about the “race” against global warming.

The only difference between the articles that participants read was the metaphors they included. Why would reading about a war rather than a race affect people’s beliefs about climate change in such important ways?

The researchers suggest that when we encounter war metaphors, we are reminded (though not always consciously) of other war-related concepts like death, destruction, opposition and struggle. These concepts affect our emotions and remind us of the negative feelings and consequences of defeat. With those war-related thoughts in mind, we may be motivated to avoid losing. If we have these war thoughts swimming around in our minds when we think about global warming, we’re more likely to believe it’s important to defeat the opponent, which, in this case, is global warming.

There are other analogies that are good at conveying the causes and consequences of global warming. Work by psychologists Kaitlin Raimi, Paul Stern and Alexander Maki suggests it helps to point out how global warming is similar to many medical diseases. For both, risks are often caused or aggravated by human behaviors, the processes are often progressive, they produce symptoms outside the normal range of past experiences, there are uncertainties in the prognosis of future events, treatment often involves trade-offs or side effects, it’s usually most effective to treat the underlying problem instead of just alleviating symptoms, and they’re hard to reverse.

People who read the medical disease analogy for climate change were more likely to agree with the science-backed explanations for global warming causes and consequences than those who read a different analogy or no analogy at all.

Golden past or rosy future?

Climate change messages can also be framed by focusing on different time periods. Social psychologists Matthew Baldwin and Joris Lammers asked people to read either a past-focused climate change message (like “Looking back to our nation’s past… there was less traffic on the road”) or a similar future-focused message (“Looking forward to our nation’s future… there is increasing traffic on the road”).

The researchers found that self-identified conservatives, who tend to resist climate change messages more than liberals, agreed that we should change how we interact with the planet more after reading the past-focused passage. Liberals, on the other hand, reported liking the future-focused frame better, but the frames had no influence on their environmental attitudes.

Example of a past-focused image (top) and a future-focused image (bottom) of a reservoir.
Image courtesy of NASA. Used in Baldwin and Lammers, PNAS December 27, 2016 vol. 113 no. 52 14953-14957.

And the frames didn’t have to be words. Conservatives also shifted their beliefs to be more pro-environmental after seeing past-focused images (satellite images that progressed from the past to today) more than after seeing future-focused ones (satellite images that progressed from today into the future). Liberals showed no differences in their attitudes after seeing the two frames.

Many climate change messages focus on the potential future consequences of not addressing climate change now. This research on time-framing suggests that such a forward-looking message may in fact be unproductive for those who already tend to resist the idea.

There’s no one-size-fits-all frame for motivating people to care about climate change. Communicators need to know their audience and anticipate their reactions to different messages. When in doubt, though, these studies suggest science communicators might want to bring out the big guns and encourage people to fire away in this war on climate change, while reminding them how wonderful the Earth used to be before our universal opponent began attacking full force.

Rose Hendricks, Ph.D. Candidate in Cognitive Science, University of California, San Diego

This article was originally published on The Conversation. Read the original article.

Past vs. Future Frames for Communicating Climate Change

Climate change (is it happening? how problematic is it? and are humans responsible?) is a partisan issue. Work by Dan Kahan (which I’ve written about before) shows that conservatives are more likely than liberals to believe that climate change is not a result of human activity and that, left unaddressed, it will not be as destructive as many people claim. Researchers Matthew Baldwin & Joris Lammers explore the possibility that partisan differences in beliefs about climate change might result from differences in the way conservatives and liberals tend to think about time (their temporal focus).

Their starting point was previous research showing that conservatives focus more on the past than liberals do. They then tested two competing frames: one was future-focused (“Looking forward to our nation’s future… there is increasing traffic on the road”) and the other was past-focused (“Looking back to our nation’s past… there was less traffic on the road”). Each participant read just one of these and then reported their attitudes about climate change and the environment. Conservatives reported liking the past-focused message better than the future-focused one and also reported more pro-environmental attitudes after the past-focused frame than after the future-focused one.

[Figure from Baldwin & Lammers (2016)]

They replicated these findings in additional experiments with variations. For example, in one test, instead of using linguistic frames to draw attention to either the past or the future, they used satellite images, either showing a progression from the past to today or a forecasted progression from today to the future. Again, conservatives reported more pro-environmental attitudes after viewing past-focused images than future-focused ones.

Next they investigated the temporal focus that real environmental charities tend to use. Not surprisingly, they found that the charities’ messages disproportionately emphasize future consequences, with less focus on the past. Following up on this, they gave participants money that they could divide between two (fake) charities (one whose message was strongly past-focused and one whose message was strongly future-focused), or they could keep some or all of it. Participants saw each charity’s logo and mission statement (the past-focused one stated: “Restoring the planet to its original state” and the future-focused one: “Creating a new Earth for the future”).

[Figure from Baldwin & Lammers (2016)]

Conservatives donated more to the past-oriented charity than to the future-oriented one. Liberals did the opposite. Further, looking at just the past-oriented charity, conservatives donated more than liberals did; looking at just the future-oriented one, the opposite pattern emerged. This is a very beautiful interaction (plus the researchers did a few other experiments with slightly varied methods and a meta-analysis, all of which add weight to these findings).
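For readers who want to see what makes a crossover interaction like this one “beautiful,” here’s a toy illustration in Python with made-up donation amounts (not the paper’s data): the framing effect flips sign depending on the donor’s ideology.

```python
# Made-up average donations (in dollars), for illustration only.
donations = {
    ("conservative", "past-focused charity"): 4.50,
    ("conservative", "future-focused charity"): 2.50,
    ("liberal", "past-focused charity"): 2.75,
    ("liberal", "future-focused charity"): 4.25,
}

# The "framing effect" for each group: past-focused minus future-focused donations.
framing_effect = {
    group: donations[(group, "past-focused charity")] - donations[(group, "future-focused charity")]
    for group in ("conservative", "liberal")
}

print(framing_effect)  # {'conservative': 2.0, 'liberal': -1.5}
# The effect is positive for one group and negative for the other -- a crossover
# interaction: neither ideology nor framing alone predicts donations; the combination does.
```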

Given that climate change communications rely heavily on future-focused appeals, these findings should really make us pause. Is it possible that climate change issues themselves aren’t what divides conservatives and liberals so much as the way those issues are communicated? My intuition is that framing is not entirely to blame for conservatives’ and liberals’ divergent beliefs about climate change, but this work shows that it may be a big part of the story. It certainly won’t hurt for communicators to start diversifying the temporal frames we use for discussing climate change.


For more consideration on this topic, see earlier posts: Climate change is a big problem and we need to find better ways of talking about it; Narratives for Communicating Climate Change; and The paradox of science communication and the new science to resolve it.

All figures from Baldwin, M., & Lammers, J. (2016). Past-focused environmental comparisons promote proenvironmental outcomes for conservatives. PNAS, 113(52), 14953-14957.

For a discussion of why the framing described in this paper might not be enough to change conservatives’ minds about climate change, see This one weird trick will not convince conservatives to fight climate change, by David Roberts for Vox.