Getting a scientific message across means taking human nature into account

I really enjoyed thinking about, researching, and writing this piece for The Conversation, where this work was originally published.

Rose Hendricks, University of California, San Diego

We humans have collectively accumulated a lot of science knowledge. We’ve developed vaccines that can eradicate some of the most devastating diseases. We’ve engineered bridges and cities and the internet. We’ve created massive metal vehicles that rise tens of thousands of feet and then safely set down on the other side of the globe. And this is just the tip of the iceberg (which, by the way, we’ve discovered is melting). While this shared knowledge is impressive, it’s not distributed evenly. Not even close. There are too many important issues that science has reached a consensus on that the public has not.

Scientists and the media need to communicate more science and communicate it better. Good communication ensures that scientific progress benefits society, bolsters democracy, weakens the potency of fake news and misinformation and fulfills researchers’ responsibility to engage with the public. Such beliefs have motivated training programs, workshops and a research agenda from the National Academies of Sciences, Engineering, and Medicine on learning more about science communication. A resounding question remains for science communicators: What can we do better?

A common intuition is that the main goal of science communication is to present facts; once people encounter those facts, they will think and behave accordingly. The National Academies’ recent report refers to this as the “deficit model.”

But in reality, just knowing facts doesn’t necessarily guarantee that one’s opinions and behaviors will be consistent with them. For example, many people “know” that recycling is beneficial but still throw plastic bottles in the trash. Or they read an online article by a scientist about the necessity of vaccines, but leave comments expressing outrage that doctors are trying to further a pro-vaccine agenda. Convincing people that scientific evidence has merit and should guide behavior may be the greatest science communication challenge, particularly in our “post-truth” era.

Luckily, we know a lot about human psychology – how people perceive, reason and learn about the world – and many lessons from psychology can be applied to science communication endeavors.

Consider human nature

Regardless of your religious affiliation, imagine that you’ve always learned that God created human beings just as we are today. Your parents, teachers and books all told you so. You’ve also noticed throughout your life that science is pretty useful – you especially love heating up a frozen dinner in the microwave while browsing Snapchat on your iPhone.

One day you read that scientists have evidence for human evolution. You feel uncomfortable: Were your parents, teachers and books wrong about where people originally came from? Are these scientists wrong? You experience cognitive dissonance – the uneasiness that results from entertaining two conflicting ideas.

It’s uncomfortable to hold two conflicting ideas at the same time. Man image via www.shutterstock.com.

Psychologist Leon Festinger first articulated the theory of cognitive dissonance in 1957, noting that it’s human nature to be uncomfortable with maintaining two conflicting beliefs at the same time. That discomfort leads us to try to reconcile the competing ideas we come across. Regardless of political leaning, we’re hesitant to accept new information that contradicts our existing worldviews.

One way we subconsciously avoid cognitive dissonance is through confirmation bias – a tendency to seek information that confirms what we already believe and discard information that doesn’t.

This human tendency was first exposed by psychologist Peter Wason in the 1960s in a simple logic experiment. He found that people tend to seek confirmatory information and avoid information that would potentially disprove their beliefs.

The concept of confirmation bias scales up to larger issues, too. For example, psychologists John Cook and Stephan Lewandowsky asked people about their beliefs concerning global warming and then gave them information stating that 97 percent of scientists agree that human activity causes climate change. The researchers measured whether the information about the scientific consensus influenced people’s beliefs about global warming.

Those who initially opposed the idea of human-caused global warming became even less accepting after reading about the scientific consensus on the issue. People who had already believed that human actions cause global warming supported their position even more strongly after learning about the scientific consensus. Presenting these participants with factual information ended up further polarizing their views, strengthening everyone’s resolve in their initial positions. It was a case of confirmation bias at work: New information consistent with prior beliefs strengthened those beliefs; new information conflicting with existing beliefs led people to discredit the message as a way to hold on to their original position.

Just shouting louder isn’t going to help. Megaphone image via www.shutterstock.com.

Overcoming cognitive biases

How can science communicators share their messages in a way that leads people to change their beliefs and actions about important science issues, given our natural cognitive biases?

The first step is to acknowledge that every audience has preexisting beliefs about the world. Expect those beliefs to color the way they receive your message. Anticipate that people will accept information that is consistent with their prior beliefs and discredit information that is not.

Then, focus on framing. No message can contain all the information available on a topic, so any communication will emphasize some aspects while downplaying others. While it’s unhelpful to cherry-pick and present only evidence in your favor – which can backfire anyway – it is helpful to focus on what an audience cares about.

For example, these University of California researchers point out that the idea of climate change causing rising sea levels may not alarm an inland farmer dealing with drought as much as it does someone living on the coast. Referring to the impact our actions today may have for our grandchildren might be more compelling to those who actually have grandchildren than to those who don’t. By anticipating what an audience believes and what’s important to them, communicators can choose more effective frames for their messages – focusing on the most compelling aspects of the issue for their audience and presenting it in a way the audience can identify with.

In addition to the ideas expressed in a frame, the specific words used matter. Psychologists Amos Tversky and Daniel Kahneman first showed that when numerical information is presented in different ways, people think about it differently. Here’s an example from their 1981 study:

Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: If Program A is adopted, 200 people will be saved. If Program B is adopted, there is ⅓ probability that 600 people will be saved, and ⅔ probability that no people will be saved.

Both programs have an expected value of 200 lives saved. But 72 percent of participants chose Program A. We reason about mathematically equivalent options differently when they’re framed differently: Our intuitions are often not consistent with probabilities and other math concepts.
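The equivalence is quick to verify. Here’s a back-of-the-envelope check of the two expected values, using exact fractions so no rounding sneaks in (the numbers come straight from the scenario above):

```python
from fractions import Fraction

# Expected lives saved under each program, using the numbers
# from Tversky and Kahneman's 1981 scenario.
ev_a = Fraction(200)                              # Program A: 200 saved for certain
ev_b = Fraction(1, 3) * 600 + Fraction(2, 3) * 0  # Program B: all-or-nothing gamble

print(ev_a, ev_b)  # 200 200
```

Mathematically identical, yet the sure thing in Program A feels very different from the gamble in Program B.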

Metaphors can also act as linguistic frames. Psychologists Paul Thibodeau and Lera Boroditsky found that people who read that crime is a beast proposed different solutions than those who read that crime is a virus – even if they had no memory of reading the metaphor. The metaphors guided people’s reasoning, encouraging them to transfer solutions they’d propose for real beasts (cage them) or viruses (find the source) to dealing with crime (harsher law enforcement or more social programs).

The words we use to package our ideas can drastically influence how people think about those ideas.

What’s next?

We have a lot to learn. Quantitative research on the efficacy of science communication strategies is in its infancy but becoming an increasing priority. As we continue to untangle more about what works and why, it’s important for science communicators to be conscious of the biases they and their audiences bring to their exchanges and the frames they select to share their messages.

A linguistically inclined cognitive scientist’s take on Arrival

Note: This post doesn’t just contain spoilers. The whole thing is pretty much a spoiler. Read it now only if you have seen the movie, don’t plan to see the movie, or don’t mind knowing the end of the movie. Read it later if none of those previous conditions apply to you. Either way, read it at some point. 

This weekend I saw Arrival. The movie finished around 9:30pm, which is about bedtime for me, but I was wired. A few times during the movie, I squeezed my husband’s hand. He passed over his sweatshirt for me to rest on my lap, assuming the squeezes were my way of telling him I was cold (they often are). I clarified, I’m just excited.

Why was I so excited? Because Arrival nailed some of the intellectual issues that make me tick.

Wikipedia has a solid overview of the plot, so mine will be brief. In the movie, aliens land in 12 different locations across the Earth. One of those locations is in the U.S., and Louise, a linguistics professor, is called to help make sense of their language so humans can communicate with the aliens (referred to as heptapods) and ask them why they’re here.

Lessons Learned

Early on, the colonel asks Louise why she has such a lengthy list of terms she needs to learn to communicate with the heptapods. The military only wants the answer to the question: “What is your purpose here?” Louise briefly points out the layers of complexity underlying such a seemingly simple question. First, it’s a question, so you have to make sure the heptapods know what a question is: that it’s a request for information. Then there’s the pronoun your, which is ambiguous in English in a way it’s not in other human languages. Your can refer to Joe alien or it can refer to the aliens collectively, an important specification that needs to be clear to effectively ask the heptapods why they’re here. Understanding the word purpose assumes an agreed-upon sense of intentionality. These are just a few of the reasons that Louise needs to be able to communicate words like human and Louise and many other seemingly unrelated terms before diving into the meaty why are you here? question. Lesson #1: Communication is not simple.


Eventually, Louise gets to the point where she can ask the heptapods why they’re on Earth. They write their response, which Louise translates as Offer weapon. Teams of linguists at the other 11 locations with heptapod shells have gotten to a similar point in their communication with the heptapods and translate the responses similarly: Use weapon. Not surprisingly, people freak out. China declares that it will open fire on the shell if the heptapods don’t leave within 24 hours. Pakistan and Sudan follow suit. Nations start disconnecting from each other. Everyone is afraid that the heptapods are going to attack, and the U.S. military starts evacuating from the site.

Louise is not so ready to accept this message as a warning of attack. Maybe the weapon the heptapods were talking about is what English speakers refer to as a tool (which is a really ambiguous term, covering so many different objects: of course a screwdriver is a tool, a knife is a tool, a pen is a tool, but so are cars and iPhones and… language). Lesson #2: Translating is messy (this version of the Fresh Prince of Bel-Air, translated many times over, hilariously reminds us of this fact).

Despite the military’s disapproval, Louise takes it upon herself to clarify the heptapods’ message. Why are they here? They are here to help humanity because in 3,000 years they will need humans’ help. Louise asks how they can possibly know that they’ll need our help in 3,000 years. They know because they have an ability to perceive time in a way we don’t: they can see the future. And, they point out to Louise, so can she. It is at this point that we realize that the visions Louise has been having throughout the movie, which we assumed to have been flashbacks to her daughter’s life and death from a rare form of cancer, are actually flash-forwards. As Louise has learned the heptapods’ language, she has acquired the ability to perceive time as they do.


The heptapods’ written language is not linear, as every known human language is. It’s written simultaneously from left to right and right to left. It’s cyclical. They have come to help humanity by offering up an incredibly valuable tool — their language. Once someone knows their language, they will be able to perceive time as the heptapods do, in a new way. And that is a gift. Lesson #3: Language is a gift. Lesson #3a: It can shape the way you see the world.

As I left the movie, I looked around at the other people in the theater and tried to imagine the conversations they’d have on the way home. I imagined someone commenting, Imagine if the language you spoke and the way you wrote actually affected the way you perceive time? That would be wild.

You know what would be even more wild? If people spent all day every day thinking about and working on that very topic. If they earned government and university funding, conducted academic research experiments, talked and wrote incessantly about it, and at the end of it, they were granted a PhD. So wild. That’s my life, so I guess I’m wild — there’s a first time for everything.

Language Shapes Thought about Time

As far as we know, there are no human speakers of any language who can see the future as a result of their language’s way of talking about time. But there are other cool connections between the way different groups of people talk about time and the way we think about it. Across many languages, we tend to use features of space to talk about time, and cognitive science research shows that we tend to invoke space when we think about time as well.

In English, for example, we often talk about looking forward to the future and putting the past behind us. Beyond just a way of talking, we’re faster to think about the future when doing so involves some kind of forward component (like moving our arms or bodies forward) and faster to think about the past when it involves backward movement. Speakers of the Aymara language actually reverse this convention: since they know what happened in the past, it’s in front of them, in visible space, while the future, unknown, is behind. Their spontaneous gestures reveal that they also think about the past as ahead and future as behind. And Mandarin Chinese speakers can talk about time using vertical space. The same words that mean above and below can be combined with temporal words like month to produce the phrases last month and next month. Compared to English speakers, who don’t talk about time using vertical metaphors, Mandarin speakers have more robust vertical mental timelines.

Linguistic metaphors matter for the way speakers of a language think about time, but so does their writing direction. As left-to-right readers and writers, English speakers think of time as left-to-right. Right-to-left readers and writers, like speakers of Hebrew and Arabic, think of time as flowing from right-to-left. And Mandarin speakers with more experience with top-to-bottom text think of time even more vertically than those who speak the same language but don’t read vertically (whether Mandarin is written vertically varies from one location to another). When you read and write, you are continually experiencing the flow of time in one direction. Your eyes and hand move in a consistent direction as time unfolds, which seems to instill a consistent mental timeline. (See the list of resources at the bottom of this post for more info on all of these studies and more.)

Back to Arrival

The movie was a 5/5 in my book because it was captivating. It was a 5/5 because a linguist saved the day, and because the military recognized that they needed someone with a PhD in linguistics for this crucial job. And, to boot, the linguist was a female, which is not at all far-fetched in the real world, but is not to be taken for granted in a Hollywood portrayal of an academic. As a bonus, Arrival spread the concept of my research much farther than my dissertation will, and it proved — even to me — that there are so many reasons for us to continue methodically investigating the world’s languages and their impact on cognition. Because you just never know when the heptapods will arrive.


You can also find this post published on moviepilot.com.

More Information

Bergen, B. K., & Chan Lau, T. T. (2012). Writing direction affects how people map space onto time. Frontiers in Psychology, 3, 109.

Boroditsky, L., Fuhrman, O., & McCormick, K. (2010). Do English and Mandarin speakers think about time differently? Cognition, 118(1), 123–129. http://doi.org/10.1016/j.cognition.2010.09.010

Casasanto, D. (2008). Who’s afraid of the big bad Whorf? Crosslinguistic differences in temporal language and thought. Language Learning, 58(s1), 63–79.

Casasanto, D., & Jasmin, K. (2012). The hands of time: Temporal gestures in English speakers. Cognitive Linguistics, 23(4). Retrieved from http://www.degruyter.com/view/j/cog.2012.23.issue-4/cog-2012-0020/cog-2012-0020.xml

Fuhrman, O., & Boroditsky, L. (2010). Cross-Cultural Differences in Mental Representations of Time: Evidence From an Implicit Nonlinguistic Task. Cognitive Science, 34(8), 1430–1451. http://doi.org/10.1111/j.1551-6709.2010.01105.x

Fuhrman, O., McCormick, K., Chen, E., Jiang, H., Shu, D., Mao, S., & Boroditsky, L. (2011). How Linguistic and Cultural Forces Shape Conceptions of Time: English and Mandarin Time in 3D. Cognitive Science, 35(7), 1305–1328. http://doi.org/10.1111/j.1551-6709.2011.01193.x

Miles, L. K., Tan, L., Noble, G. D., Lumsden, J., & Macrae, C. N. (2011). Can a mind have two time lines? Exploring space–time mapping in Mandarin and English speakers. Psychonomic Bulletin & Review, 18(3), 598–604. http://doi.org/10.3758/s13423-011-0068-y

Núñez, R. E., & Sweetser, E. (2006). With the future behind them: Convergent evidence from Aymara language and gesture in the crosslinguistic comparison of spatial construals of time. Cognitive Science, 30(3), 401–450.

Ouellet, M., Santiago, J., Israeli, Z., & Gabay, S. (2010). Is the future the right time? Experimental Psychology, 57(4), 307-314.


Are memories just pasta?

I just read a really fun description of memories in a Nautilus post: The pasta theory of memory & your personal beginning of time. It’s a post on childhood amnesia, the frustrating phenomenon that we just don’t remember much from the earliest part of our lives.

The piece is written by Dana Mackenzie, but the rich title inspiration comes from an Emory University psychologist that he interviewed, Patricia Bauer. Here’s how Bauer describes children’s memory:

“I compare memory to a colander,” Bauer says. “If you’re cooking fettucine, the pasta stays in. But if you’re cooking orzo, it goes right through the holes. The immature brain is a lot like a colander with big holes, and the little memories are like the orzo. As you get older, you’re either getting bigger pasta or a net with smaller holes.”

Why do I like this metaphor? It paints a nice picture of what happens. Kids still make memories, but those memories tend to escape. Older people’s memories are more likely to be contained by the colander brain.

This metaphor is compelling, but is it the best thing since sliced bread? Pasta easily trumps bread on my carbs hierarchy, but what about in the context of describing memory? Importantly, it demonstrates that children retain fewer memories than adults (which we probably don’t need much convincing of), but it doesn’t tell us why this is so. Why are children’s memories orzo-like, and how do they become fettucine-like over time? There’s a lot about this process that scientists still don’t know, but the metaphor also can’t capture the things they actually do know. For example, as Mackenzie acknowledges in the piece, when we retell a memory, we increase our chances of remembering that event later (though retelling memories also introduces inaccuracies that seem to increase the more we retell…). A similar issue with the metaphor is that our brains are constantly changing, and a large part of the reason that kids don’t remember as much as adults do results from that dynamic property. But colanders don’t change as they age, so the pasta metaphor might obscure the fact that the massive changes taking place in our brains underlie many of the memory differences throughout our lives.

Metaphors highlight some things – they play up certain features of the two things they’re comparing, and they downplay others. It’s probably not possible to accurately capture every important aspect of a phenomenon like childhood amnesia in one metaphor. And that’s ok, because metaphors can be supplemented by other information. But metaphors don’t only leave out relevant details. They can also mislead, as I think the static colander has the potential to do. Maybe the best way, then, to communicate the complexity of childhood amnesia is to remind ourselves (and those we’re communicating with) that although some features of children’s forgetting and orzo pasta do map onto each other well, other features, like the colander, fall short – at least until we design one that develops in a brain-like way over the course of its lifespan.

Grad school is like…

Now that I’ve survived my first full week of classes in grad school, I am clearly a grad school expert.


But I have been spending quite a lot of mental energy trying to figure it out – noticing how it’s similar to, and especially different from, undergrad; working to figure out what’s expected of me, by others and myself; and trying to articulate what exactly my goal(s) is/are.

This look is pretty consistently on my face.
Image: http://janiebryant.com/blog/265/

I’ve also been a bit preoccupied with metaphors, as I’m working on a metaphor-based research proposal for a fellowship application. I guess the two have become intertwined in my subconscious, because my first (coherent) thought upon waking up this morning was, “grad school isn’t a sprint; it’s a marathon!” Not long after I began giving myself credit for this clever analogy, I was racking my brain for more. As a firm believer that concrete metaphors help us make sense of complicated abstract concepts, I was determined to uncover more metaphors for grad school as a means of better understanding what exactly it is I’m doing with my life.

Naturally, I turned to Google, querying, “Grad school is like “. Here’s what I found:

According to Ronald Azuma:

“Being a graduate student is like becoming all of the Seven Dwarves. In the beginning you’re Dopey and Bashful. In the middle, you are usually sick (Sneezy), tired (Sleepy), and irritable (Grumpy). But at the end, they call you Doc, and then you’re Happy.”

Another way to think about it might be Dorothy’s saga in The Wizard of Oz. So far this aspect of her story feels most parallel to mine:

A huge weather event occurs, dramatic enough to lift the whole house and deposit it in a parallel universe, bursting with plastic flowers and a phalanx of Little People dressed in outfits vaguely reminiscent of lederhosen.  

This is true: perplexing undergraduate creatures are everywhere!

Another metaphor I found intriguing is that grad school is like kindergarten:


After this exercise in metaphor collection, I feel much more confident that I’ve got a solid mental conceptualization of grad school. I’ve got it dialed in now, and I’m ready for week 2 🙂

What does “American” mean?

There aren’t many words whose meanings I haven’t immediately grasped upon being introduced to them. But as I was reflecting on the anniversary of the events that happened 12 years ago, I realized that my understanding of the word “American” has been evolving and deepening for my whole life, and I suspect that it’s still far from complete. I also suspect that this one word has an extremely powerful impact on everyone who identifies as an American, and that that meaning is probably a little different for each one of us.

When I was little, being American meant saying the pledge of allegiance every day in school, even if the phrase “…and to the republic, for which it stands…” never really made any sense. It also meant memorizing the 50 states song, which listed all the states in alphabetical order, as well as the preamble song. Being American meant celebrating the 4th of July by covering our bikes in gaudy red, white, and blue streamers and joining all the other neighborhood kids in a bike parade. It also meant wearing an Old Navy flag t-shirt and probably those flip flops that had stars on the left foot and stripes on the right. Fireworks and sparklers were also imperative.



By the time I went to live in France for 4 months, I was almost 20, and my patriotism was in need of a boost. Luckily, my séjour in Paris provided me with just that. I began to better appreciate American luxuries like having eggs for breakfast, running errands on Sundays, and public transportation that actually runs when it’s supposed to (to be fair, the Parisian Metro is great, except when they decide they need to spice life up a little and strike… which is much too often). I also started to fully grasp the concept of the self-made American man, and why it’s one that sets America apart from other nations. I started to truly understand why America prides itself as the land of the free and the home of the brave.

Even recently, I’ve had the opportunity to explore a great variety of places in America, and these have all shaped my understanding of what “American” means. The essence of the word was captured this past 4th of July, listening to the Boston Pops perform at the Hatch Shell, followed by fireworks over the Charles River. It was clear in the billboards, neon lights, and smells of grilling hot dogs as I approached Times Square late one night. The meaning of “American” was pretty clear as I rode down the Pacific Coast Highway, the ocean immediately to my right, mountains to the left.

Tonight, my working definition of “American” is still a little hazy. It has something to do with picking apples and turning them into pies right around the time of year when the leaves turn vibrant shades of yellow and orange. Being American also has something to do with riding down the highway, blasting country songs in which men sing about their women, their pickup trucks, and their beer. My understanding of “American” also includes Thanksgiving, a day in which we watch football (the football that requires helmets and shoulder pads), spend all day preparing and eating the food we imagine the Pilgrims ate, and proceed to stimulate our economy the following day. America isn’t perfect – for example, that first Thanksgiving likely marked the beginning of a long history of mistreating Native Americans – but any nation that can come out of a tragedy such as the attacks of 9/11 stronger and more unified is pretty awesome in my book.


Euthenics at Vassar

In honor of my graduation today from Vassar College, I wanted to write about the Cognitive Science program’s home, Blodgett Hall, and the unique Euthenics program that it once housed.


Blodgett Hall
Image: Van Lengren, K. & Reilly, L. (2004). Vassar College: An architectural tour. Princeton Architectural Press: New York.

An article in the Vassar Encyclopedia, The Disappointing First Thrusts of Euthenics, details Vassar’s unique and short-lived Euthenics program. The program was inspired by Vassar alumna Ellen Swallow Richards (1870), who was the first woman to be accepted to MIT. She coined the word “euthenics,” the science of controllable environment, from the Greek stems eu (well) and tithemi (to cause). Put another way, euthenics was the development of human well-being through the improvement of living conditions, so it concentrated on the application of scientific principles in protecting air, water, and food. Much emphasis was placed on parents’ roles in assuring a quality life for their children in the future. Courses included nutrition, food chemistry, child psychology, sanitation, horticulture, sociological and statistical studies, and economic geography.

The college’s President at the time, Henry MacCracken, was very excited to offer this new multidisciplinary subject, and saw euthenics as a progressive movement, a way for women to link their coursework at Vassar with professions afterward. His hope was for sciences and arts to enhance each other, rather than compete, which would be done by teaching them together as one multidisciplinary field.

The faculty were not as enthusiastic about the idea of euthenics as MacCracken had been. Many believed it would limit women’s development by pushing traditionally feminine fields on them. However, the program was narrowly accepted in 1924 because Minnie Cunnock Blodgett offered to fund a building to house it. In addition to being equipped with classrooms and labs for the euthenics program, Blodgett Hall contained a model apartment for the study of interior design and efficient housekeeping, which was also intended to serve as Blodgett’s residence when she returned to campus. The building included a social museum as well, displaying exhibits on topics such as tenement housing, racism, and children’s health.

Minnie Cunnock Blodgett and President Henry Noble MacCracken
Image: Van Lengren, K. & Reilly, L. (2004). Vassar College: An architectural tour. Princeton Architectural Press: New York.


In 1925, Euthenics courses were officially part of the curriculum, but the program was not as popular with the students as MacCracken and Blodgett had hoped, possibly because they were aware of the faculty’s general opposition to the program.

During the Depression, part of Blodgett was repurposed to create a lower cost coop housing opportunity, and after WWII, the Euthenics program was officially removed. One of the biggest problems with the program was that MacCracken presented it to a traditional faculty just getting used to having autonomy over developing their own single-discipline programs, making them resistant to the progressive multidisciplinary approach that he envisioned. However, the transient program did set the precedent for a number of multidisciplinary fields that exist at Vassar today, such as Asian Studies, Environmental Studies, and of course, Cognitive Science. Perhaps Euthenics is partly to thank for the Cognitive Science program that has continued to intrigue, excite, and push me throughout 4 awesome years at Vassar.

Plaque that remains under the archway of Blodgett today. Reads: “This building dedicated to the study of Euthenics is given to Vassar College… to encourage the application of the arts and sciences to the betterment of human living.”
Image: Van Lengen, K., & Reilly, L. (2004). Vassar College: An architectural tour. New York: Princeton Architectural Press.


Van Lengen, K., & Reilly, L. (2004). Vassar College: An architectural tour. New York: Princeton Architectural Press.

Vassar Encyclopedia: The Disappointing First Thrusts of Euthenics

A working definition of cog sci

With my college graduation just days away, it’s only natural that I’ve been doing quite a bit of introspecting: In what ways am I different from the 17-year-old my parents dropped off at Vassar in 2009? How do my current beliefs and thoughts differ from those I had as I began my freshman year, and what aspects of my education have contributed to those developments? I think back to many of the classes I’ve taken over the four years: French, Latin, and Chinese; computer science; physiological psychology; the history of the English language; and anthropological linguistics come to mind. Cumulatively, regardless of whether they counted toward the Cognitive Science major in the eyes of the Registrar, they have all contributed to my current understanding of the human mind.

In the fall, I’ll begin working on a PhD in cognitive science, so it seems fair to expect myself to have a clear definition of the field. “It’s like psychology, right?” asks almost every curious relative, family friend, and dental hygienist I’ve encountered. Others with more understanding of what cognitive science entails may see it as a lofty field, thinking about thinking, without practical applications. The conventional understanding of cognitive science, as articulated by Wikipedia, the hub of collective intelligence, is “the interdisciplinary scientific study of the mind and its processes.” While I certainly can’t disagree with this, such a pithy statement falls short for me.

The world is messy. I’ve always been tempted to impose order on it, applying logic to circumstances in which it may not belong, and I feel confident that I’m not alone in the propensity to reduce the world around me to causes and effects. However, causes and effects are meaningless in the absence of context, the world in which anything (and everything) occurs. Because this world is dynamic and constantly changing, explanatory reductions may be misguided; instead, context may be the only acceptable explanation for the perceptions and actions we seek to understand. Cognitive science is, to me, the study of the mind (of any agent that perceives and acts in its world) that takes context as its starting point. To truly take context into account, the discipline necessarily draws from a number of fields, including psychology, philosophy, linguistics, anthropology, computer science and artificial intelligence, and neuroscience. Each field is simply one piece of a larger puzzle: alone, it has awkward edges and indiscernible shapes, but the amalgamation reveals a whole image greater than the sum of its parts.

On the first day of Introduction to Cognitive Science my freshman year, I had no idea what cog sci was, except that “cognitive” meant something along the lines of “brain.” I created a Turing machine that could determine whether any string of x’s and y’s was a palindrome. All it needed was a set of rules, and the machine was infallible. But as soon as I added a z to the input string, it broke down completely: No Rule Defined, it told me. Because my human brain does not break down and halt in the middle of problem solving, it was evident to me that there aren’t Turing machines in our heads; instead, something else, something more complex than states and rules, must shape how we think, sense, and act in the world.
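That brittleness is easy to reproduce. Below is a minimal sketch, in Python, of a Turing-machine-style palindrome checker over {x, y}. It is a hypothetical reconstruction, not the original class assignment: a transition table maps (state, symbol) pairs to actions, erasing matching symbols from each end of the tape, and any symbol with no matching rule, like z, halts the machine with a “No Rule Defined” error.

```python
# Transition-table Turing machine for palindromes over {x, y}.
# Strategy: erase the first symbol, run right to find the last symbol,
# check that it matches, erase it, rewind, and repeat.
BLANK = "_"

# (state, symbol) -> (new_state, symbol_to_write, head_move)
RULES = {
    ("start", "x"): ("seek_x", BLANK, +1),
    ("start", "y"): ("seek_y", BLANK, +1),
    ("start", BLANK): ("accept", BLANK, 0),   # nothing left: palindrome
    ("seek_x", "x"): ("seek_x", "x", +1),
    ("seek_x", "y"): ("seek_x", "y", +1),
    ("seek_x", BLANK): ("check_x", BLANK, -1),
    ("seek_y", "x"): ("seek_y", "x", +1),
    ("seek_y", "y"): ("seek_y", "y", +1),
    ("seek_y", BLANK): ("check_y", BLANK, -1),
    ("check_x", "x"): ("rewind", BLANK, -1),  # ends match: erase and rewind
    ("check_x", "y"): ("reject", "y", 0),
    ("check_x", BLANK): ("accept", BLANK, 0),  # lone middle symbol
    ("check_y", "y"): ("rewind", BLANK, -1),
    ("check_y", "x"): ("reject", "x", 0),
    ("check_y", BLANK): ("accept", BLANK, 0),
    ("rewind", "x"): ("rewind", "x", -1),
    ("rewind", "y"): ("rewind", "y", -1),
    ("rewind", BLANK): ("start", BLANK, +1),
}

def is_palindrome(s):
    """Run the machine on s; raise if an undefined (state, symbol) arises."""
    tape = [BLANK] + list(s) + [BLANK]  # blank sentinels on both ends
    state, head = "start", 1
    while state not in ("accept", "reject"):
        key = (state, tape[head])
        if key not in RULES:
            raise RuntimeError(f"No Rule Defined for {key}")
        state, write, move = RULES[key]
        tape[head] = write
        head += move
    return state == "accept"
```

Calling `is_palindrome("xyx")` returns `True` and `is_palindrome("xy")` returns `False`, but `is_palindrome("xzx")` raises immediately: the table has no entry for ("start", "z"), so the machine halts rather than improvising, which is exactly the rigidity the anecdote above describes.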

Lessons on Chinese grammar, the cultures of South American tribes, and programming a for-loop also triggered mind-related thoughts and curiosities in my foreign language, anthropology, and computer science classes. In Perception & Action, I learned more about ants than almost any human would desire to know. An ant colony is a miraculously intelligent system, another example of a product much greater than the sum of its parts. Context alone determines an individual ant’s role and how and when that role is carried out. The ant lives in a constantly changing world, but instead of causing a breakdown, as such a world would for a Turing machine, it encourages the varied behaviors that contribute to the colony’s overall success.

What does this mean for the study of human minds? It means that our perceptions, thoughts, and actions are inseparable from the contexts in which they occur. We are situated in the world, and numerous aspects of our world, like prior experiences, culture, and other people, play a prominent role in shaping what we may intuitively believe occurs only or primarily in our heads.

As I prepare to begin a new chapter in my cognitive science career, I expect (and hope) that my appreciation of context will color the ways in which I move forward. My devotion to the importance of context has taught me to question everything. It is important to question whether studies done under different circumstances (e.g., outside a lab) and with different subpopulations (e.g., not Westernized college undergrads) might have yielded different conclusions. It is important to question whether there may be ways of viewing the world that differ from my own (e.g., cyclical as opposed to linear, or correlational as opposed to causal) and that may shape the research questions posed, the methods employed, and the findings extracted. I hope that by doing this, my mind will remain open to new possibilities, continually working toward the most comprehensive understanding of the mind possible.