Metaphors are everywhere — in our classrooms, hospitals, homes… and in Trump’s tweets.
In 1980, George Lakoff and Mark Johnson published a book, Metaphors We Live By, that catalyzed extensive research on the relationship between metaphor and thought. That book and much of the research since have argued that the metaphors we use in language reflect much deeper patterns in thought.
For example, we talk about arguments in terms of war — you can fight, defend, win, and lose both arguments and physical wars. Researchers like Lakoff and Johnson suggest that we’re not merely talking about arguments in terms of wars, but actually thinking of them that way too.
Trump loves these metaphors.
DACA has been made increasingly difficult by the fact that Cryin’ Chuck Schumer took such a beating over the shutdown that he is unable to act on immigration!
Another pervasive metaphor is the idea that good things are up (when you cheer someone up you lift their spirits, and in times of extreme happiness you’re on top of the world, for example). Relatedly, metaphors commonly express the idea that important things are large (like when we have big ideas or grandiose plans). Trump likes to rally his audiences by talking about how big America is (metaphorically), and the ways in which we are on top.
AMERICA will once again be a NATION that thinks big, dreams bigger, and always reaches for the stars. YOU are the ones who will shape America’s destiny. YOU are the ones who will restore our prosperity. And YOU are the ones who are MAKING AMERICA GREAT AGAIN! #MAGA pic.twitter.com/f2abNK47II
We dream big and reach high. And on the flip side, Trump’s enemies occupy low positions:
….Actually, throughout my life, my two greatest assets have been mental stability and being, like, really smart. Crooked Hillary Clinton also played these cards very hard and, as everyone knows, went down in flames. I went from VERY successful businessman, to top T.V. Star…..
Another topic that we almost can’t talk about without invoking metaphors is time. There are many ways we use spatial metaphors to talk about time, but referring to the future as ahead of us and the past as behind is one of the most common. Trump is well aware that forward is the direction of the future and of progress.
The so-called bipartisan DACA deal presented yesterday to myself and a group of Republican Senators and Congressmen was a big step backwards. Wall was not properly funded, Chain & Lottery were made worse and USA would be forced to take large numbers of people from high crime…..
Then there’s the thing that, for Trump, is usually literal, but possibly for a short time was understood as metaphorical, which led to Trump’s assertion that it is MOST CERTAINLY LITERAL:
The Wall is the Wall, it has never changed or evolved from the first day I conceived of it. Parts will be, of necessity, see through and it was never intended to be built in areas where there is natural protection such as mountains, wastelands or tough rivers or water…..
Technology is well-known (at least in linguist circles) for giving rise to new language. New innovations require new words, but those words are often quickly repurposed into other parts of speech. For example, we can receive an e-mail (noun), but we can also straight up e-mail (verb) someone, and I think I’ve heard people refer to e-mail (adjective) messages (probably people who grew up with some other kind of message before they were introduced to e-mail). Similarly, we have text (a group of words), a text (noun – a book, or, more recently, a text (adjective) message), and we can definitely text (verb) people. Instead of creating separate nouns, adjectives, and verbs for new technology concepts, we often create one word and use it for whatever part of speech we need.
Social media platforms tend to also have their own niche linguistic habits. Twitter and Twitter users have introduced lots of new terms – for example the verb tweet as a thing humans can do while at a computer (with its accompanying noun — the tweet). Tweet is “productive,” in the linguistic sense that it can be combined with other morphemes (meaningful word parts) to make new words: there are retweets, subtweets, and tweetups.
Of course there’s also the expansion of the word hashtag (into something people now say verbally preceding pretty much anything they want). In fact, the primary definition of hashtag seems to be the Twitter sense now, with the actual symbol taking on the secondary definition.
Plus, Twitter’s strict character limit encourages lots of esoteric abbreviations, bringing about lots of new elements of language. Sometimes, scrolling through my Twitter feed, I’m reminded of the experience of translating sentences from Latin — I’d figure out pieces one at a time, not necessarily in a logical order, and put them together, hopefully revealing something meaningful.
Lately I’ve noticed a few especially cool linguistic inventions on Twitter that I think result in part from character restrictions, and also because even though most people’s Tweets are public for anyone on the Internet to read, conversations often include people with a lot of common ground. They may not even know each other IRL, but they follow similar people, communicate about similar topics online, and maybe share some background experiences.
First, an important mention: The people I follow on Twitter are not representative of the population of Twitter users. When I compare my Twitter followers to all Twitter users, there are some pretty striking differences. For example, a greater percentage of my followers are between ages 25 and 34 than the Twitter population at large.
Similarly, my followers are much more interested in a handful of related topics than the whole Twitter population:
These demographics should provide some context for the linguistic innovations I experience on Twitter.
First, the nature of hashtags on Twitter has kind of coerced the three words not all men into one, since the phrase often appears as #notallmen, without caps to distinguish the component words. #Notallmen means what it sounds like. When someone says something negative about men, someone might reply with the reminder that not all men (#notallmen) are sexist (or whatever the original claim was). But I usually see #notallmen take on a more meta meaning, a way of pointing out that replying to some instance of sexism with “not all men” distracts from and avoids the problem (i.e., “Men who disguise their own hurt under #notallmen – into the bin with you”). Here, #notallmen is a noun.
But it can also be an adjective: “In my dream last night I was dating a #NotAllMen boy I went to high school with…”, “walk off your #notallmen instincts dude”, and “I wish guys put all of their angry ‘#NotAllMen!’ energy into just.. actually not being one of those men.” I know there must be verb uses of #notallmen out there, but I’ve yet to stumble upon one…
One other cool thing is that I see #notallmen in lots of foreign-language tweets — for example “Pero en este punto los hombres se vuelven víctimas y debemos dedicarnos al #notallmen para no herir a aquellos que “aman a las mujeres”.” To my eye, that looks like: “Spanish Spanish Spanish #notallmen Spanish.” (If you’re interested, Twitter translates it as: “But at this point the men become victims and we must dedicate ourselves to the #notallmen to not hurt those who “love women”.”)
#WellActually is #NotAllMen’s cousin. I admittedly don’t always understand how people are using it, but I do often see it to indicate that someone (most often a man) is correcting someone else (most often a woman). Sometimes it’s used to call out a man-splainer (as the man-splainer is likely to say “well, actually…” to a woman), but I’ve also seen it used to refer to correcting people in general: “I got to #wellActually one of the people interviewing me and it felt gooooooooodddddddddd” or “sorry to #wellactually.”
Like many of the other terms I’ve described, #WellActually can take on whatever part of speech its user needs. It’s often a verb (“Got a BALD MAN in my mentions trying to #WellActually me”), but can also be a noun (“Cue the glasses being pushed up and the ‘#WellActually'”) or an adjective (“Alright, #wellactually twitter. I see you never waste any time.” or “#WellActually twitter came really hard at the people trying to revel in the magnitude of this upset, huh?”). Well actually, I’m not completely convinced that #WellActually is describing Twitter in that second example. It might be an instance of using the hashtag for the actual words “well” and “actually,” which are… an interjection and an adverb? Someone can #WellActually me if that’s not right.
I love the content that I find on Twitter, but I can’t help paying attention to the way people package the content — which words they use and how they use them. The more I pay attention, the more I remember that people are clever, and language is one of the many ways they let that cleverness out.
Cool stuff is happening at CogSci 2016 (for some evidence, see yesterday’s highlights; for more evidence, keep reading). Here are some of the things I thought were especially awesome during the second day of the conference:
Temporal horizons and decision-making: A big-data approach (Robert Thorstad, Phillip Wolff): We all think about the future, but for some of us that future tends to be a few hours or days from now, and for others it’s more like months or years. These are our temporal horizons, and someone with a farther temporal horizon thinks (and talks) more about distant future events than someone with a closer temporal horizon. These researchers used over 8 million tweets to find differences in people’s temporal horizons across different states. They found that people in some states tweet more about near-future events than people in others – that temporal horizons vary from state to state (shown below, right panel). They then asked: if you see farther into the future (metaphorically), do you engage in more future-oriented behaviors (like saving money – either at the individual or state level – or doing fewer risky things, like smoking or driving without a seatbelt)? Indeed, the farther the temporal horizon revealed in a given state’s tweets, the more future-oriented behavior the state demonstrated on the whole (below, left panel).
The researchers then recruited participants for a lab experiment and compared the temporal horizons expressed in people’s tweets with their behavior in a lab task, asking whether those who wrote about events farther in the future displayed a greater willingness to delay gratification – for example, waiting for a larger monetary reward rather than taking a smaller amount today. They also compared the language in people’s tweets with their risk-taking behavior in an online game. They found that the language people generated on Twitter predicted both their willingness to delay gratification (more references to the more distant future were associated with more patience for rewards) and their risk-taking behaviors in the lab (more references to the more distant future were associated with less risk taking). While the findings aren’t earth-shattering – if you think and talk more about the future, you delay gratification more and take fewer risks – this big-data approach combining tweets, census information, and lab tasks opens up possibilities for findings that could not have arisen from any of these in isolation.
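To make the idea of a tweet-derived temporal horizon concrete, here’s a toy sketch of tweet-level temporal scoring. The keyword lists, scoring scheme, and function names are entirely hypothetical illustrations; the actual study used far more sophisticated language analysis over its 8 million tweets.

```python
# Toy lexicons (hypothetical -- the real study parsed temporal
# expressions with NLP tools, not hand-written keyword lists).
NEAR_FUTURE = ["tonight", "tomorrow", "this weekend", "next week"]
DISTANT_FUTURE = ["next year", "in ten years", "someday", "by 2030"]

def horizon_score(tweet: str) -> int:
    """Score one tweet: +1 for a distant-future reference,
    -1 for a near-future one, 0 if it mentions neither."""
    text = tweet.lower()
    if any(phrase in text for phrase in DISTANT_FUTURE):
        return 1
    if any(phrase in text for phrase in NEAR_FUTURE):
        return -1
    return 0

def mean_horizon(tweets):
    """Average score over the tweets that mention the future at all;
    higher means the author (or state) talks more about the distant future."""
    scores = [s for s in map(horizon_score, tweets) if s != 0]
    return sum(scores) / len(scores) if scores else 0.0

tweets = [
    "Can't wait for the concert tomorrow!",
    "Saving up so I can buy a house next year.",
    "Someday I'll finally learn to play piano.",
]
print(mean_horizon(tweets))  # two distant, one near: (1 + 1 - 1) / 3
```

Aggregating scores like this by state, and then correlating the state-level averages with savings rates or risky behaviors, is the shape of the analysis, even though the real measures were much richer.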
Extended metaphors are very persuasive (Paul Thibodeau, Peace Iyiewuare, Matias Berretta): Anecdotally, an extended metaphor – especially one that an author carries throughout a paragraph, pointing out the various features that the literal concept and metaphorical idea have in common – persuades me. But this group quantitatively showed the added strength that an extended metaphor has over a reduced (or simple, one-time) or inconsistent metaphor. For example, a baseline metaphor they used is crime is a beast (vs. crime is a virus). People are given two choices for dealing with the crime: they can increase punitive enforcement solutions (beast-consistent) or get to the root of the issue and heal the town (virus-consistent). In this baseline case, people tend to reason in metaphor-consistent ways. When the metaphor is extended into the options, though (for example, adding a metaphor-consistent verb like treat or enforce to the choices), the framing has an even stronger effect. When the responses are still metaphor-consistent but the verbs are reversed – so that the virus-consistent verb (treat) goes with the beast-consistent solution (be harsher on enforcement) – the metaphor framing goes away. It’s a really cool way to test, in a controlled lab setting, the intuition that extended metaphors can be powerful.
And, I have to admit, I had a lot of fun sharing my own work and discussing it with people who stopped by my poster – Emotional implications of metaphor: Consequences of metaphor framing for mindsets about hardship [for an abridged, more visual version, with added content – see the poster]. When people face hardships like cancer or depression, we often talk about them in terms of a metaphorical battle – fighting the disease, staying strong. Particularly in the domain of cancer, there’s pushback against that dominant metaphor: does it imply that if someone doesn’t get better, they’re not a good enough fighter? Should they pursue life-prolonging treatments no matter the cost to quality of life? We found that people who read about someone’s cancer or depression in terms of a battle felt that he’d feel more guilty if he didn’t recover than those who read about it as a journey (other than the metaphor, they read the exact same information). Those who read about the journey, on the other hand, felt he’d have a better chance of making peace with his situation than those who read about the battle. When people had a chance to write more about the person’s experience, they tended to perpetuate the metaphor they had read: repeating the same words they had encountered but also expanding on them, using metaphor-consistent words that hadn’t been present in the original passage. These findings show some examples of the way that metaphor can affect our emotional inferences and show us how that metaphorical language is perpetuated and expanded as people continue to communicate.
But the real treat of the conference was hearing Dedre Gentner’s Rumelhart Prize talk: Why we’re so smart: Analogical processing and relational representation. In the talk, Dedre offered snippets of the work that she and her collaborators have pursued over the course of her productive career to better understand relational learning. Relational learning is anything involving relations – something as simple as Mary gave Fido to John or as complex as how global warming works. Her overarching message was that relational learning and reasoning are central in higher-order cognition, but relational insights aren’t easy to acquire. To achieve relational learning, people must engage in a structure-mapping process, connecting like features of the two concepts. For example, when learning about electrical circuits, students might use an analogy to water flowing through pipes, and would then map the similarities – the water is like the electricity, for example – to understand the relation. My favorite portion of the talk was about the relationship between language and structure-mapping: language (especially relational language) can support the structure-mapping process, which can in turn support language. The title of her talk promised we would learn why humans are so smart, and she delivered on that promise with the claim that “Our exceptional cognitive powers stem from combining analogical ability with language.” Many studies of the human mind and behavior highlight the surprising ways that our brains fail, so it was fun to hear instead about the important ways they don’t fail; to hear about “why we’re so smart.”
And finally, the talk I wish I had seen because the paper is great: Reading shapes the mental timeline but not the mental number line (Benjamin Pitt, Daniel Casasanto). By having people read backwards (mirror-reading) as well as normally, they found that the mental timeline was disrupted: people who read from right to left showed an attenuated left-right mental timeline compared to those who read normally from left to right. This part replicates prior work, and they built on it by comparing the effects of the same reading conditions on people’s mental number lines. This time they found that backwards reading did not influence the mental number line the way it had decreased people’s tendency to think of time as flowing from left to right, suggesting that while reading direction plays a role in the development of mental timelines that flow from left to right, it does not have the same influence on mental number lines; those must arise from other sources.
One more day to absorb and share exciting research in cognitive science – more highlights to be posted soon!
It’s no secret that the information we share on social media can get us in trouble. You can embarrass yourself, ruin your reputation, and even get arrested using fewer than 140 characters.
Tweets are also reflections of a person’s current state – they shed light on things we find interesting, the events in our lives, and our opinions. In these cases, we’re conscious of the states our tweets reflect. However, our tweets may also be able to predict aspects of our lives that we’re not conscious of at the time of tweet composition, like the rate of heart attacks in the communities we live in.
If you think about it, it’s not that surprising that negative tweets come from places with higher incidences of cardiac events. The authors crucially point out that it was not the tweeters themselves who were dying, though. One person’s angry tweets did not predict that same person’s later risk of heart attack (though to me this doesn’t seem like too far-fetched a possibility). Instead, the counties that the most negative tweets were coming from were the same ones that had the highest incidence of cardiac events. I don’t think anyone would argue that the angry tweets (coming primarily from young people) were causing high rates of heart attack (primarily in older people). Instead, the correlation probably reflects that good physical and mental health often go together – both in individuals and at a larger geographical level. So what should we do with this knowledge? Is there anything we can do beyond existing efforts to improve health and wellness in the communities that need it most? What other warning signs are evident in corpora containing millions of tweets and other social media behaviors?
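The county-level relationship described above is, at its core, a correlation between two per-county measures. Here is a minimal sketch of that computation; every number in it is invented for illustration, and the real study’s sentiment measures and health data were far richer.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-county figures: share of tweets flagged as negative,
# and heart-attack rate per 100k residents (numbers made up).
negativity = [0.12, 0.18, 0.25, 0.31, 0.40]
ahd_rate = [140, 155, 180, 210, 230]
print(round(pearson_r(negativity, ahd_rate), 2))  # → 0.99
```

A strong correlation like this one says nothing about mechanism, of course, which is exactly the point the authors make: the tweeters and the heart-attack victims are largely different people, and the link is about community-level conditions.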
I don’t know. I’m about to go tweet about rainbows and daisies though, just in case.
In all aspects of life, we’re often forced to accept that things change, and language is no exception. The only reason the English language even exists today is because languages change.
Last week, The Oxford Dictionaries Online announced the newest additions to their database, including “buzzworthy” (likely to arouse the interest and attention of the public), “food baby” (a protruding stomach caused by eating a large quantity of food and supposedly resembling that of a woman in the early stages of pregnancy), and “selfie” (a photograph that one has taken of oneself, typically with a smartphone or webcam and uploaded to a social media website).
In fact, language evolution is so natural that we don’t even realize how many things we say are products of recent changes. The American Heritage Dictionary surveys about 200 writers each year about what is acceptable in the English language. In the 1960s, 53% answered “no” to the question: “The construction sick at one’s stomach is defined by most dictionaries and usage manuals. Can ‘at’ be replaced by ‘to’?” Even more recently, in the 1990s, 80% of writers deemed this sentence to be an inappropriate use of the word “grow”: “One of our strategies is to grow our business by increasing the number of clients.” Srsly?!?
One major way that a language changes is by coming in contact with other languages, such as when people learn a second language, but native speakers are behind many of the changes causing the current drama. The digital age has brought about a vast number of new concepts and therefore a vast number of new names to describe them, but technology also serves as a platform for the proliferation of new linguistic trends. This article, “We the Tweeple,” highlights Twitter in particular as a “fusion muse,” the inspiration for words like Twitterati, Twittersphere, and twirting. One linguist, Ben Zimmer, suggests that the distinctiveness and playfulness of the prefix “tw-” may be a main reason that Twitter is a venue for so many portmanteau words. Twitter is also the ideal forum for coining new words because communicative space is limited and new words can catch on and spread thanks to the practice of hashtagging.
As with pretty much any other sign of change in society, there are always dissenters. In the debate over language change, those people are the prescriptive linguists, who try to dictate what proper language should be, and people often referred to as “Grammar Nazis,” who are enraged by signs like this one. The use of the word “irregardless” is a common Grammar Nazi gripe, but in keeping with language change and a practice of descriptive linguistics, Merriam-Webster assures us that it is, in fact, a word.
However, plenty of people embrace the evolution that language continually undergoes. I especially love this Atlantic article by Derek Thompson in which he cleverly incorporates all 44 words that were recently added to the Oxford Dictionary Online. Courtrooms seem to be another place in which language change is not only accepted, but even embraced. The New York Times reports that because conventional dictionaries exclude slang by design, courtrooms have begun looking to other sources for these definitions, namely, Urban Dictionary, a site with an extensive crowdsourced slang database.
Undeniably (and maybe fortunately), many new words are fads. This Atlantic article looks back on some of the new words of the ‘90s. While some, like geek and LOL, stuck, many others, like cowabunga and infobahn (information highway) either never really caught on or have disappeared almost entirely.
My thoughts on the new additions to the dictionary: many may sound silly, but the fact of the matter is that they’ve disseminated at least to some extent and are being used by English speakers. Maybe twerk will disappear from our lexicons before the end of the year, or maybe in a few generations, children won’t be able to believe there ever was an English-speaking world without the word twerk. Either way, today it’s a word, so I guess it belongs in our dictionaries… for now.