4 years later: Why write this blog?

My blog is 4 years old today!

My first post was called Why write this blog? I was a fourth year undergrad, less than a week from graduation. Now I’m a fourth year PhD student, and uh… more than a week from graduation. Are the reasons I began blogging the same as the reasons I continue to blog?

Why I started blogging 4 years ago

I love to write, but writing in an online forum, where anyone could be sitting at their computer reading my thoughts, has always made me feel too exposed and vulnerable. Time to get over that, especially since I hope to go into academia, where putting myself “out there” will be a key to success.

There’s a funny paradox: on the one hand I want people to read what I write, but on the other, it can be paralyzing to actually think about people reading it when I’m trying to get words out. I deal with this by imagining that only a few people will read. I imagine someone who’s my quintessential audience. Usually, that’s my mom (you’re reading, right, mom?!). She’s educated and curious, but she’s not a cognitive scientist. She’s my ideal reader. So I imagine my mom reading, and no one else, and I just go with it.

It’s still hard to express your thoughts when you think smart people are listening and might criticize them. It took me a long time to be able to do this in person — in group meetings and talks — and I have no idea if my blog helped me with that. Throughout grad school, my relationship with criticism has evolved. Criticism is almost always an opportunity to improve your work, and it actually has very little to do with you as a person. When I think of it that way, criticism is something to seek out, not something to avoid.

I want to keep learning, reading, and thinking about thinking, and I think the best way to do this is to collaborate as much as possible. I’ve loved having frequent opportunities during college for cog sci dialogues with so many people, and I don’t want to give those dialogues up.

Occasionally people engage with my posts in the comments or on social media, and it’s great to have those conversations. But realistically, this blog isn’t the ideal platform for dialogue that I had hoped it would be, and that’s ok.

I want to be a better reader, writer, and thinker, and this link convinced me that a blog is probably a good way to achieve that goal. In it, Maria Konnikova writes:

“What am I doing but honing my ability to think, research, analyze, and write—the core skills required to complete a dissertation? And I’m doing so, I would argue, in a far more effective fashion than I would ever be able to do were I to keep to a more traditional academia-only route.”

Spot on.

Why I still blog today

When I started blogging, I couldn’t entirely anticipate what my blogging experience would actually be like. Four years later, I may have even more reasons to blog than I did when I started.

My blog is somewhat of a lab. I can try things out – like a vlog, an infographic, or that megaphone graphic above. Do they make my posts more engaging? I don’t know, but I’m testing them out. If they flop, no harm done. I experiment with different topics, and I can use metrics – page views and the referrers that brought readers to my site – to get a rough idea of what resonates with people and how they’re finding my blog.
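As a rough sketch of what that kind of bookkeeping could look like (the file name and column names here are hypothetical, since analytics exports vary by platform), here’s a minimal Python example:

```python
# Minimal sketch, assuming a hypothetical analytics export with columns
# "date", "referrer", and "views" (real exports differ by platform).
import pandas as pd

views = pd.read_csv("pageviews.csv")

# Total views per referrer, largest first: a rough proxy for how readers find the blog.
by_referrer = (
    views.groupby("referrer")["views"]
    .sum()
    .sort_values(ascending=False)
)
print(by_referrer.head(10))
```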

My blog also acts as an archive. It documents events like conferences and workshops I’ve attended, getting married, and the 2016 Presidential election, all through the lens of language and thought. My past posts help me recommend a book to a friend or find a paper I know I liked but can’t remember why. And it gives me ample opportunities for laughing at my past self. Like did it occur to me that acknowledging my college graduation by writing a post about euthenics at Vassar was maybe a bit perverse??

And I blog because it’s fun. It’s challenging, and it’s creative, and I make the rules. Some of my motivations might be unique, but it turns out I’m not alone in blogging “for the love of words.” In a recent post on her blog, From the Lab Bench, Paige Jarreau compares science bloggers’ reasons for blogging to Orwell’s “four great motives for writing.”

I’m a long-range thinker, but I don’t think I would have predicted when I clicked the “PUBLISH” button for the first time that I’d be clicking it for many of the same reasons four years later.

Questions you never knew you had about doing a PhD

A few times a month, I receive an email from Quora, a site where people ask questions and people with background on that topic weigh in. My Quora digest has questions they suspect I might be interested in. They’re almost always about doing a PhD. Here are some of the most intriguing ones. The responses are often thorough (long), so I’ve linked to them and included pieces from my favorites here.

Is pursuing a PhD as stressful as a full time job? Or more?

TLDR: It depends.

Ravi Santo noted that a PhD is likely a different kind of stress than a typical 9-to-5 job, and the stress varies based on which phase of the PhD you’re in. He describes a whirlwind phase (coursework), followed by a crunch (qualifying exam, or whatever the program requires to count as having achieved a Master’s degree), the plan (proposing the dissertation), and finally discovery (analyzing and writing the thesis).

Kyle Niemeyer pointed out that unlike many 9-to-5s, PhD students (or academics more generally) don’t usually leave work at work. They don’t stop what they’re doing because it’s 5:00 or Friday, and having your work follow you everywhere can be stressful. But on the flip side, academics often enjoy more flexibility in their schedules. The virtue is also the vice.

Some people weighed in saying a PhD is definitely more stressful, while others said they miss the glorious days of writing a PhD, when they had a single primary objective, as opposed to life in their post-PhD jobs with many responsibilities. We’ll agree to disagree and move on.

What is a depressing fact you’ve realized after/during your PhD?

TLDR: There are a lot. I’ll list some that seem to recur.

It’s been said that writing a dissertation is like giving birth — French feminist Helene Cixous even posited that men write as a way of replacing reproduction.

But there’s a big difference between the two. After you have a baby, people want to see the baby and ask about it, and think it’s cute; whereas after you’ve slaved over your dissertation and defended it, no one will ever want to see it or hear about it.
-Ken Eckert

Other responders mentioned competition, starting to hate the subject you once loved, and, maybe most commonly, that it’s incredibly hard to obtain a tenure-track job afterward. In some cases, hard work isn’t enough to achieve success, whether because you need to rely on other people (especially advisors), or because you’re not at a prestigious university, or simply because experiments and lines of research are just sometimes not fruitful.

This segues well into another question:

Why is it so difficult to do a PhD?

TLDR: Research

Leading up to grad school, education is based on a model where students are taught information and are then given questions about that information to answer. Once you start a PhD, however, you have to first find the problem, then figure out the best way to address it, and then actually do it.

  • You may find a problem, but it may not be solvable, so you will need to iterate through multiple attempts to find a problem
  • The problem may be solved by someone else while you work on it! (so, you need to start from scratch)
  • There is a solution, but it is hard to find and you have to make a call: do I keep trying or do I give up?
    -Konstantinos Konstantinides

Others pointed out that successful PhD students need to be patient, courageous, focused, and persistent. Come on, that’s not that much to ask for…

How do top/successful PhD students lead their lives?

[Image: They drink coffee and write blog posts, obviously.]

The responses to this question share a common theme — successful PhD students are thoughtful about their research. They don’t rush into a project, but carefully consider a topic first. And when they design studies, they focus on those that measure a lot of things (collect a lot of data), to increase the chances that they’ll have usable results, no matter how the data turn out.

What happens after a PhD?

TLDR: It depends.

“I think also, once you’ve seen the sausage being made, you see how arbitrary the point at which you get a Ph.D. is” -Ben Webster

It’s often anti-climactic. Some people report their minds going blank, or their parents celebrating more than they themselves did, or making sure the first thing they did was pick up a fiction book. Ultimately, Krishna KumariChalla comments that what happens after a PhD is “Simple: What you decide would happen!”

I have some experience with the topics of all of these questions except this last one. I believe there might be such a thing as post-PhD life, but it’s hard to picture right now as I’m deep in my fourth year. For now, I’ll rely on these Quora contributors and will report back later.

What other important PhD questions do you have? Let’s ask the Internet!

Metaphors in science: How should we talk about genes?

As genetic research produces new insights every day, the mass media continue to discuss genetic information. This is a good thing, and it means that members of the public are developing mental models of genes — their own internal conceptual frameworks for how genes work. These mental models are not always accurate, though. A group of researchers set out to characterize how members of the public actually talk about genes and their relationship to diseases (specifically heart disease, lung cancer, diabetes, and depression).

The group set out with the concern, supported by prior research, that public health messages about the contribution of genes to diseases can increase fatalism. If people believe a gene variant they have definitively causes some disease, they may feel that condition is inevitable and not actually make health-promoting decisions.

With that concern in mind, they identified the metaphors people use to talk about genetics, analyzed how those metaphors might affect fatalistic beliefs, and suggested more productive alternatives.

They conducted many interviews, and although they never asked specifically about metaphors associated with genetics, the people they interviewed used many metaphors in their descriptions.

The most common ones included:

Gene as a disease or problem

Many people described genes as an already existing disease that might be dormant. For example, one participant, when asked to describe a gene, said: “Gene means disease.” Another commented that it’s something you can “have a high chance of catching,” and still another commented that it runs through the bloodstream. The authors consider these comments to reflect metaphors for genes, but I wonder if these participants are not being metaphorical at all — if these comments just reflect misperceptions of genes. Either way, if people are to consider the influence of gene-environment interactions for different diseases, it’s not productive for them to think of a gene as a disease.

Gene as a fire or bomb

When people talked about genes as a fire or a bomb, they suggested that genes are something already explosive (or exploding). In this case, genes for certain diseases can be activated by an unhealthy environment. For example, one person commented that if someone has a genetic predisposition for a disease, every unhealthy thing they do is “like adding fuel to the fire. It’s like pouring gasoline on the fire.” Another referred to genes as a “ticking time bomb.”

The fire metaphor at least suggests that people have control over environmental influences on genes — they can pour fuel on the fire (and get the disease) or not. But both the fire and bomb metaphors seriously underestimate the complexity of the interactions between genes and the environment, in particular disregarding the fact that environment can have a cumulative effect on health — more than a one-shot case of pouring fuel on a fire or not. A genetic predisposition for a disease is not just a fire waiting to flare up, since the gene will not necessarily cause a person to get the disease.

Genes as a gamble

Some people also talked about their genes as a game of chance like Russian roulette. If a person has a genetic predisposition, participants expressed that whether that person actually gets the disease is a “crapshoot” or a roll of the dice.

Again, there’s something helpful in this metaphor since it doesn’t imply that everyone with a disposition will also get a disease. But by suggesting that whether the disease manifests is random, people underestimate their own ability to influence their outcomes by creating healthy environments. Plus, participants tended to see genetic gambling as similar to literal gambling at a casino, where overall the house always wins.

None of the metaphors that people spontaneously produced emphasizes the complex role of gene-environment interactions in determining whether someone with a genetic predisposition for a disease actually gets the disease. The authors propose two similar alternatives to decrease the fatalistic beliefs that people form about genes.

Genes as a dance or a band

For one, people generally think of dancing and bands as positive, which seems to be a good start for engineering a metaphor that will not make people feel that a predisposition for a disease means that a person will necessarily end up with the disease.

In both the dance and the band, there are at least two components (gene and environmental factors) that come together in coordination. Further, over time they become more coordinated, just as environmental effects on genes accumulate over time. Finally, both metaphors emphasize that humans have agency — they can actively shape their health through their behaviors.

The group pilot tested these metaphors and found a decrease in fatalistic beliefs about genes. People who were exposed to these metaphors were less likely than participants who hadn’t encountered them to feel that having a predisposition meant they would definitely end up with the disease. If these metaphors can help people understand that they can influence their health, they’ll hopefully be more likely to make health-promoting decisions (though that’s another assumption that needs to be tested!). Overall, public health messages that are based in evidence — research that reveals how people actually respond to different methods — can go a long way toward improving our health.
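To make the logic of that pilot comparison concrete, here’s a toy sketch (the ratings and the 1-to-7 fatalism scale are invented for illustration; this is not the authors’ data or analysis):

```python
# Toy sketch: compare fatalism ratings (assumed 1-7 scale) between people who read
# the gene-as-dance/band metaphors and people who did not. All numbers are invented.
from scipy import stats

metaphor_group = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]  # hypothetical ratings after the new metaphors
control_group = [5, 4, 6, 5, 4, 5, 6, 4, 5, 5]   # hypothetical ratings without them

t_stat, p_value = stats.ttest_ind(metaphor_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A lower mean in the metaphor group would match "a decrease in fatalistic beliefs."
```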

Philip Guo & I talk about scicomm

The summer before I started grad school, I scoured the Internet for first-person accounts of what it’s really like to be a PhD student. I had just committed to doing a PhD in Cognitive Science at UCSD and figured that would be a good time to find out what I was in for.

Philip Guo‘s PhD memoir, the PhD Grind, was the most satisfying – check out my earlier post with reflections and favorite quotes to learn more about his free e-book. Just a couple years later, Philip came to UCSD Cognitive Science as a professor where he does research on human-computer interaction, online learning, and computing education.

He also creates some podcasts – “video interviews of interesting people [he] know[s].” I somehow fell into that category, and Philip and I had a fun conversation about science communication. We touched on the science of science communication, the blogging seminar I’m co-teaching, and how I discovered and pursued science communication.

You can read more and watch our conversation on Philip’s site.

Is my research me-search?

I recently listened to the inaugural episode of a new academic psychology podcast called The Black Goat (the podcast is great, by the way!). During the show, the hosts (Sanjay Srivastava, Alexa Tullett, and Simine Vazire) answered a question from an anonymous letter-writer who commented: I’ve been wondering how important it is to feel personally invested in what you study, like if it needs to be related to a part of you or your life that you care deeply about. The question was: should my research be me-search?

Me-search… I’d never heard that term before. From the question and the hosts’ discussion, it seems that me-search is research driven by the researcher’s identity. It’s research that reflects something about the person doing it. Maybe it’s something you feel personally invested in for some reason beyond the typical reasons people are invested in their work: intellectual curiosity and the fact that research progress means career progress.

On the one hand, it seems like it would be beneficial to be deeply passionate about the topic you research. That passion is more likely to lead to long-term motivation, which is crucial for academic research. Plus one for me-search.

But on the other hand, being personally invested in your research can be dangerous. If you really want a specific outcome, there’s a good chance you’ll get that outcome – whether that outcome actually reflects the state of the world or not. This doesn’t need to be intentional misconduct, either. For example, confirmation bias (which has been enjoying quite a bit of media spotlight lately) takes place when we (unintentionally) discount evidence that contradicts our prior belief, trust evidence that supports it, and even interpret neutral evidence as supporting that initial belief we held. (For more info, I have some past posts (1) (2)  that describe confirmation bias in more detail.) Minus one for me-search?

The hosts didn’t come to a conclusion, but instead weighed pros and cons of me-search, suggesting to me that moderate me-search (something you feel connected to, but maybe not on a life-or-death level) might be a happy medium.

So I asked myself: is my work me-search? It does incorporate things I love: I’ve always been fascinated with humans — observing them, describing them, analyzing how they work. I grew up with younger sisters who were identical twins, and they made great study subjects — I took ample advantage of this situation.

[Photo: This is me with the world’s most fascinating study subjects.]

I’ve also always been interested in language. My hobbies have always been reading and writing, and I was practically salivating when I got to take my first foreign language class (French) in high school. Humans and language are my jamz.

I study how language, especially metaphor, shapes the way we think. This work definitely incorporates, and probably stems from, some things I love, but is it my identity? Not really. More obvious examples of me-search might be bench science that could contribute to a cure for a disease someone I love has. Or researching the personality traits of people who grew up with younger siblings who were identical twins (because that would be a fascinating line of research — at least to me!). But I don’t do those things. Even though I do believe that language shapes the way we think and perceive the world, and of course I want my findings to be interesting because research careers require interesting findings, I don’t have any identity-driven motivation to find any particular outcomes.

But after working on a line of work for years (only 4, in my case), how can it not be a part of you?

I’ve created a variety of different me-search definitions for myself, and the one I use at any given moment influences whether I think my current work is me-search.

I’m curious about other researchers’ ideas of me-search: do you think your work should be classified this way, and do you think it’s more of an advantage or a disadvantage to think of your work as me-search?

We marched for science

I’m a homebody. Most Saturday mornings are complete with tea, reading blogs or fiction, and getting in a good workout. This Saturday was different, and I’m glad it was.

The March for Science has been in the works for a few months. During that time, people have debated whether it will exacerbate political divides and whether scientists should be political activists; plans grew for satellite marches in over 600 cities around the world; and so many people mobilized to attend a march or support scientific causes.

I prepared by reading and thinking a lot about the role of science in society and potential consequences of marches and science-driven activism. My tentative conclusion from that reflection period was I’m in. I crocheted hats, designed a poster, and chose my wardrobe.

It was a cool experience to be in a place with tons of people all driven there by a common belief that science is crucial and a common priority to express that belief. It was fun to see creative signs, to come together with others in my department, and to hear some of the organized talks (especially those given by the middle school science fair winners – their projects sounded amazing!). But what blew my mind the most was the fact that people all around the world, from so many backgrounds, with different beliefs, opinions, and ideas, united around a cause that’s so much bigger than any individual person or country.

Let’s keep this going.

I wish I could carry 20 posters at the March for Science

This Saturday, science-lovers in over 400 cities around the globe will be marching for science. I’ll be marching in San Diego with friends and colleagues (and many strangers). This march takes a lot of planning — of course at the macro level, orchestrating an international (or even local) event is massive — but also at a much more micro level, for the people involved.

My first stage of planning was to read and think a lot about the march — the goals of marchers, the message it might send, and its downstream consequences. I worry that it will be perceived by some as a coastal elite and liberal rally against Trump — and for some marchers, it probably will be that, but I see it as an opportunity for us to celebrate science and affirm that it’s important in our lives. I also know the March for Science (DC) organization has experienced a lot of internal mayhem, and many of the original organizers are no longer with the group because of disagreements with the way the organization has proceeded. This march is not a cure-all. It will probably offend people (unintentionally, I hope), and we should actively work to avoid offense, but I am optimistic that the benefits of coming together for science can outweigh the inevitable negative aspects.

So I’ve decided it’s an event I want to be a part of. Next step: planning logistics.

I have an important wardrobe decision to make. I own so many great science t-shirts, but I have to choose one for the march. I’ll also wear one of the science hats I’ve crocheted (I’ve made 38 so far, so hopefully I come across lots of hatless marchers). I haven’t yet hammered out these wardrobe details.

I also have to decide my primary message for the march: What will I put on the poster that I carry? I’ve organized an event for people in my department to make posters together tomorrow afternoon, and I decided I should do some research to provide people with inspiration. What kinds of messages will be most productive? The San Diego march team created a helpful guide for poster messages. A quick Google search provided so many clever and seemingly effective poster possibilities that I’m nearly overwhelmed. Here are a few of my favorite messages (I’ve remade my own visuals with the help of Canva but borrowed the messages from around the Internet).

Stay tuned to find out my eventual wardrobe and sign decisions. There are so many great possibilities, and I’m looking forward to seeing the many ways that marchers express their love and commitment to science.

The Language of Twitter

Technology is well-known (at least in linguist circles) for giving rise to new language. New innovations require new words, but those words are often quickly repurposed from their original parts of speech. For example, we can receive an e-mail (noun), but we can also straight up e-mail (verb) someone, and I think I’ve heard people refer to e-mail (adjective) messages (those are probably people who grew up with the idea of some other kind of messages for a while before they were introduced to the e-mail, though). Similarly, we have text (a group of words), a text (noun – a book, or, more recently, a text (adjective) message), and we can definitely text (verb) people. Instead of creating nouns, adjectives, and verbs for new technology concepts, we often create one word and use it for whatever parts of speech we need.

Twitter language

Social media platforms tend to also have their own niche linguistic habits. Twitter and Twitter users have introduced lots of new terms – for example the verb tweet as a thing humans can do while at a computer (with its accompanying noun — the tweet). Tweet is “productive,” in the linguistic sense that it can be combined with other morphemes (meaningful word parts) to make new words: there are retweets, subtweets, and tweetups.
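Just for fun, here’s a tiny sketch of what “productive” means in this sense: a base morpheme that combines freely with other pieces to form new words (the affix list is illustrative, not exhaustive).

```python
# Illustrative only: "tweet" as a productive base that combines with other morphemes.
base = "tweet"
prefixes = ["re", "sub"]           # retweet, subtweet
compound_parts = ["up", "storm"]   # tweetup, tweetstorm

derived = [p + base for p in prefixes] + [base + c for c in compound_parts]
print(derived)  # ['retweet', 'subtweet', 'tweetup', 'tweetstorm']
```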

[Screenshot: 2010, seriously!?]

Of course there’s also the expansion of the word hashtag (into something people now say verbally preceding pretty much anything they want). In fact, the primary definition of hashtag seems to be the Twitter sense now, with the actual symbol taking on the secondary definition.

Plus, Twitter’s strict character limit encourages lots of esoteric abbreviations, bringing about lots of new elements of language. Sometimes, scrolling through my Twitter feed, I’m reminded of the experience of translating sentences from Latin — I’d figure out the pieces one at a time, not necessarily in a logical order, and put them together, hoping to reveal something meaningful.

Lately I’ve noticed a few especially cool linguistic inventions on Twitter that I think result in part from character restrictions, and in part from the fact that even though most people’s tweets are public for anyone on the Internet to read, conversations often happen among people with a lot of common ground. They may not even know each other IRL, but they follow similar people, communicate about similar topics online, and maybe share some background experiences.

First, an important caveat: the people I follow on Twitter are not representative of the population of Twitter users. When I compare my Twitter followers to all Twitter users, there are some pretty striking differences. For example, a greater percentage of my followers are between ages 25 and 34 than in the Twitter population at large.

[Screenshot: Twitter analytics comparing my followers’ ages to those of all Twitter users]

Similarly, my followers are much more interested in a handful of related topics than the whole Twitter population:

[Screenshot: Twitter analytics comparing my followers’ topic interests to those of all Twitter users]

These demographics should provide some context for the linguistic innovations I experience on Twitter.
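For the curious, the comparison behind those analytics screenshots boils down to something like this (every percentage below is made up purely for illustration):

```python
# Hypothetical percentages, not my real analytics numbers.
my_followers = {"18-24": 20, "25-34": 55, "35-44": 15, "45+": 10}
all_twitter = {"18-24": 30, "25-34": 35, "35-44": 20, "45+": 15}

for bucket in my_followers:
    diff = my_followers[bucket] - all_twitter[bucket]
    print(f"{bucket}: {diff:+d} percentage points vs. Twitter overall")
```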

#NotAllMen

First, the nature of hashtags on Twitter has kind of coerced these 3 words into one, as it often appears as #notallmen without caps to distinguish the component words. #Notallmen means what it sounds like. When someone says something negative about men, someone might reply with the reminder that not all men (#notallmen) are sexist (or whatever the original claim accused them of being — usually sexist). But I usually see #notallmen take on a more meta meaning, a way of pointing out that replying to some instance of sexism with “not all men” distracts from and avoids the problem (i.e., “Men who disguise their own hurt under #notallmen – into the bin with you”). Here, #notallmen is a noun.

But it can also be an adjective: “In my dream last night I was dating a #NotAllMen boy I went to high school with…”, “walk off your #notallmen instincts dude”, and “I wish guys put all of their angry ‘#NotAllMen!’ energy into just.. actually not being one of those men.” I know there must be verb uses of #notallmen out there, but I’ve yet to stumble upon one…

One other cool thing is that I see #notallmen in lots of foreign-language tweets — for example “Pero en este punto los hombres se vuelven víctimas y debemos dedicarnos al #notallmen para no herir a aquellos que “aman a las mujeres”.” To my eye, that looks like: “Spanish Spanish Spanish #notallmen Spanish.” (If you’re interested, Twitter translates it as: “But at this point the men become victims and we must dedicate ourselves to the #notallmen to not hurt those who “love women”.”)

#WellActually

#WellActually is #NotAllMen’s cousin. I admittedly don’t always understand how people are using it, but I do often see it to indicate that someone (most often a man) is correcting someone else (most often a woman). Sometimes it’s used to call out a man-splainer (as the man-splainer is likely to say “well, actually…” to a woman), but I’ve also seen it used to refer to correcting people in general: “I got to #wellActually one of the people interviewing me and it felt gooooooooodddddddddd” or “sorry to #wellactually.”

Like many of the other terms I’ve described, #WellActually can take on whatever part of speech its user needs. It’s often a verb (“Got a BALD MAN in my mentions trying to #WellActually me”), but can also be a noun (“Cue the glasses being pushed up and the ‘#WellActually'”) or an adjective (“Alright, #wellactually twitter. I see you never waste any time.” or “#WellActually twitter came really hard at the people trying to revel in the magnitude of this upset, huh?”). Well actually, I’m not completely convinced that #WellActually is describing Twitter in that second example. It might be an instance of using the hashtag for the actual words “well” and “actually,” which are… an interjection and an adverb? Someone can #WellActually me if that’s not right.

I love the content that I find on Twitter, but I can’t help paying attention to the way people package the content — which words they use and how they use them. The more I pay attention, the more I remember that people are clever, and language is one of the many ways they let that cleverness out.

Cognition at Work: A Celebration of CogSci Designed & Executed by Undergrads

This past weekend I was invited to present at UCSD’s Cognitive Science Student Association‘s annual conference. The undergraduate CSSA leaders pulled off a polished and fascinating conference, focusing on the role of cognitive science in all kinds of work, from design, to mental health, to academic research.

In the first half of the workshop I gave, they asked me to talk about my journey to cognitive science: how did I discover I wanted to pursue CogSci, how did I end up at UCSD, and what might lie ahead? This is a fun story to tell. It includes growing up in a tiny Massachusetts town with fascinating identical twin sisters and supportive parents. It also includes my undergraduate years at Vassar College, where I accidentally found Cognitive Science and took classes that truly nurtured my intellectual side and inspired me to learn more. I discovered UCSD’s unique Cognitive Science program and was dead set on getting in — and somehow I did. I’ve been having a blast researching the relationship between language and the mind, working with brilliant people, and exploring other intellectual interests. I talked about the essential skills for doing a PhD, and in response to the question: “what next?” I was honest: I don’t know! But I expect it’ll be exciting. Here are the slides from that portion of the workshop.

The second half of the workshop was focused on my Cognitive Science research. The two Research Assistants who have helped me collect data on the projects I wanted to share (David and Yahan) also helped me give the talk. I’m SO proud of the work they put into this project and the presentation, and I’m confident they inspired other undergrads in the audience. David and Yahan showed them that undergraduates can do great research AND communicate about it (which can be just as hard as the research itself!).

[Photo: David, Yahan, and I showing our work.]

Here’s a more legible version of our poster.

I left the conference feeling energized, and I hope many of the attendees did as well. It was a unique conference since most attendees were not there to promote their own work (since they were mainly undergrads). Of course there’s nothing wrong with academic conferences where promoting one’s work is a goal, but at this conference, attendees’ primary objectives were to learn, be inspired, and think about CogSci outside their classes. To me, it was a celebration of CogSci, and a great reminder of why I work in this really cool field at this really cool university.