Is there a link between intelligence and worrying?

A Slate post I read this week – Scary smart: do intelligent people worry more? – left me feeling uneasy. It’s not because I like to think of myself as smart, and therefore discovered that I might worry more than the average person (figured that one out in second grade, I think). The uneasiness came from feeling that the author’s overall story was not the one the data he reported actually support.

The post started by discussing a study by a group at Lakehead University in Ontario, in which students completed a survey with items like “I am always worrying about something” as well as a verbal intelligence test. The researchers found that the two scores were positively correlated: people who reported worrying more also scored higher on the verbal test.

Next, the author reports an amusing study conducted by researchers at the Interdisciplinary Center Herzliya in Tel Aviv, in which participants thought their task was to assess artwork on a computer. While they were doing this, the computer informed them that they had just activated a virus, and the research assistant running the experiment frantically asked them to go get help. On the way, more stress was thrown at them: someone in the hall asked them to answer a survey, another person dropped a stack of papers at their feet… The researchers found that the people who scored highest on an anxiety measure were the least distracted from their mission to find computer help by the additional stressors they encountered along the way. The Slate article reports, “Nervous Nellies proved more alert and effective.” I’m not sure I would come to the same conclusion, since in some cases, staying fixated on one goal when other important things arise might not actually be a good trait. Regardless, it’s hardly a sign of intelligence, so I’m still not sure why that research was included in this piece. The same researchers have also shown that people higher in anxiety detect threats like smoke more quickly than others. Again, this might not always be a good thing (is it really beneficial to smell your neighbor’s BBQ and get distracted from the task at hand?), and even in cases where hypervigilance is helpful, it’s not how most of us define intelligence.

Image from original Slate article

There are a few other examples that seem consistent with the idea that higher intelligence (defined in a variety of ways) is associated with more worrying (also defined in a variety of ways). But there’s also evidence suggesting that the positive relationship between intelligence and worrying might not be so clear-cut. For example, although higher IQ and anxiety seem to be positively correlated among people diagnosed with generalized anxiety disorder, the reverse was true in a control group: there, higher IQs were associated with less worrying.

But not much attention is paid to the contradictory evidence. The author writes “Still, the suspicion persists that a tendency to be twitchy just might bequeath a mental advantage.” He lists famous people who have been considered intelligent and have had anxiety – Nikola Tesla, Charles Darwin, Kurt Gödel, and Abraham Lincoln. While this is interesting, it’s not evidence – he conveniently forgot to list the many geniuses who didn’t have excessive anxiety and the many anxious people who don’t have exceptional intelligence.

Kudos to the article for presenting two sides, one step in the direction of showing that the question is not cut-and-dried (so few are, especially in psychology). But what should a reader think after this? That psychologists are wasting time and money running experiments that contradict each other, and that we might never know which ones to believe? (If I weren’t a grad student in a related field, that’s what I’d take away.) Instead, readers should understand that “worrying” is complicated. Maybe, just maybe… “worrying” is not all one thing. We might use the same word to describe it, but that doesn’t mean it’s really just one concept. There’s rumination about things you’ve done; worry about far-off future events; fear of public speaking, being mugged, or spiders; jitteriness; pessimism; and many other flavors of worrying. While a person who engages in one type might often engage in others, that’s not necessarily the case. And since there’s no absolute way to measure “worry,” researchers have to operationalize: they have to create a working definition for worrying, something measurable that they take to reflect worrying. We shouldn’t expect that a finding based, for example, on a generalized anxiety questionnaire will apply to all types of people. Further, these studies involve testing people in different contexts and places (many cultural characteristics can affect performance on the measures researchers use to reflect worry) and by different researchers (even subtle differences in mannerisms, experiment design, or environmental controls could affect the results).

We need to be careful about how much we generalize. Instead of concluding from the study correlating verbal test scores with anxiety inventory scores that intelligent people worry more, it might be more accurate to say something like: college students in Ontario who score high on one particular verbal test also tend to score high on one particular anxiety inventory.

Granted, if all these caveats were heaped on readers, they’d probably be really disillusioned with research, and maybe stop reading catchy articles in the popular press like this one, and that’s not the goal either. It’s just important to point out that not all dependent variables (DVs) are created equal, and that single experiments, especially on such abstract traits as “worrying,” shouldn’t be recklessly generalized.

Overgeneralization is a problem in psychology, probably because flashy conclusions are much more interesting to non-psychologists, and popular press writers’ goal is to engage their audience. I think our goal should be to engage people in a way that doesn’t overgeneralize, though. Is society really becoming more scientifically literate if people are reading articles about science but misunderstanding the implications of that science? I have higher hopes for improving scientific literacy. I think we can engage people, tell them about exciting and controversial findings, and help them think critically so that they generalize when it’s appropriate and take things with a grain of salt when that’s appropriate. We can have our scientific cake and eat it too, as long as we remember that that’s the goal of science communication.

 


The power of Ngrams

For me, one sign of a really good book is that I learn things I wasn’t expecting to learn. I had that experience while reading almost every chapter of Uncharted: Big Data as a Lens on Human Culture. The book is written by the creators of Google’s Ngram Viewer, a tool that shows the frequency of any word or phrase (single words are 1-grams, 2-word phrases are 2-grams…) in the massive and continually growing corpus of books in the Google Books database. The most informative feature of Ngram Viewer is that you can compare frequencies of different phrases to each other and see changes in their use over time (here’s a holiday phrase comparison that I made).
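If you’d rather pull those frequency series programmatically than eyeball the chart, here’s a minimal sketch against the unofficial JSON endpoint the web viewer itself uses. It’s undocumented, so the parameters and response shape here are assumptions that could break without warning:

```python
# A minimal sketch of fetching Ngram Viewer data programmatically.
# NOTE: this hits the *unofficial* JSON endpoint behind the web viewer
# (https://books.google.com/ngrams/json); it is undocumented and may
# change or disappear without notice.
import requests

def ngram_frequencies(phrase, year_start=1900, year_end=2000, smoothing=3):
    """Fetch the yearly relative frequency of a phrase in Google Books."""
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": phrase,
            "year_start": year_start,
            "year_end": year_end,
            "smoothing": smoothing,
        },
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()  # assumed: a list with one entry per matched ngram
    return data[0]["timeseries"] if data else []

# Compare two holiday phrases, in the spirit of the comparison above
merry = ngram_frequencies("Merry Christmas")
happy = ngram_frequencies("Happy Holidays")
print(len(merry), len(happy))  # one relative frequency per year
```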


The book includes many ngram comparisons that are much more informative than mine. It tells the story of the Ngram Viewer’s birth, shows lots of interesting ngram comparisons, and goes into more depth on a variety of uses. Maybe the most surprising use is that ngrams can reflect censorship efforts. By looking at the slopes of the changes in frequency for different people’s names during the Nazi regime, it becomes clear that some names were being censored (those ngrams have negative slopes for that time period) and others were rising in prominence (those have positive slopes). When compared with historical records, the ngram-based conclusions are strikingly accurate.
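To make the slope idea concrete, here’s a toy sketch (with invented numbers, not the book’s actual data) of how one might flag suppressed versus promoted names from their frequency series:

```python
# A toy illustration of the slope idea: fit a line to each name's
# frequency series over a window of years and check the sign of the
# slope. The frequencies below are invented, for illustration only.
import numpy as np

years = np.arange(1933, 1946)

# Hypothetical relative frequencies (share of all 1-grams per year)
frequencies = {
    "suppressed_name": np.linspace(2e-7, 0.2e-7, len(years)),  # falling
    "promoted_name": np.linspace(0.5e-7, 3e-7, len(years)),    # rising
}

for name, freq in frequencies.items():
    slope = np.polyfit(years, freq, 1)[0]  # degree-1 fit; [0] is the slope
    label = "possible suppression" if slope < 0 else "rising prominence"
    print(f"{name}: slope = {slope:.2e} -> {label}")
```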

The book shows only a tiny slice of what can be learned with the Ngram Viewer. It’s the epitome of cognitive science, piecing together wisdom from many disciplines. The Ngram Viewer is a great tool, whether you’re at home on the couch wondering when the phrase “Merry Christmas” became popular or doing paid research, and this book was a cool way to learn more about it.

I’m partial to this comparison (found on the About page for Ngram Viewer: https://books.google.com/ngrams/info)

Mars v. Venus: are men and women really different?

Last year, researchers discovered that women taking Ambien, the sleep drug, were taking double the dose they should have been taking. This mistake was attributed to the fact that most drugs are tested only on males (first on male animals and then on male humans). The logic is that female hormones are complicated and males and females are basically the same, so it’s better to test the drug on the simpler subpopulation and generalize the findings. Oops. It turns out women metabolize the drug differently, so the effective dose for women is much lower than for men, but because the prescribed doses were the same, women were consistently overdosing on Ambien. Here’s the 60 Minutes episode exploring this blunder. To correct it, the FDA ordered all sleep medications to be retested on both women and men. But what about all sorts of other medications that might deserve different recommended doses for women and men? Retesting them all would be too expensive, so it’s not going to happen.

Image: http://www.recapo.com/dr-oz/dr-oz-news/dr-oz-ambien-safe-ambien-defense-zombie-state-female-dosage/

This incident sheds light on a difference between male and female bodies that can’t be ignored. Whether there are other differences, particularly between male and female brains, remains a contentious issue. This article by Larry Cahill, Equal ≠ The Same: Sex Differences in the Human Brain, presents some compelling evidence for nontrivial differences between male and female brains.

Cahill cites one study by Ingalhalikar and colleagues that used diffusion tensor imaging (a form of MRI) to measure connectivity patterns in the brain (how strongly different regions are wired to one another, as reflected in the brain’s white matter) in 428 males and 521 females between the ages of 8 and 22. This huge sample led them to conclude that “male brains are optimized for intrahemispheric and female brains for interhemispheric communication.” In other words, male brains, on the whole, demonstrate more connectivity within each hemisphere, while female brains demonstrate more connectivity between the two hemispheres. According to the authors, these findings “suggest that male brains are structured to facilitate connectivity between perception and coordinated action, whereas female brains are designed to facilitate communication between analytical and intuitive processing modes.” Maybe this is a post hoc explanation, but it seems pretty consistent with the bulk of the males and females I know. (It’s also worth noting that, as is to be expected with such a heated topic, there are disagreements about these researchers’ interpretation of the data.)

Another informative study that Cahill writes about was conducted by Cribbs and colleagues. These researchers examined the expression patterns of immune-system-related genes implicated in Alzheimer’s disease and aging. It was already known that, in general, these genes are expressed in the hippocampus and the superior frontal gyrus (a part of the frontal cortex), but the researchers found that the superior frontal gyrus was more prone to immune-type gene expression in males than in females, while the hippocampus was more prone in females than in males. The manifestation of Alzheimer’s disease and aging may look similar in males and females, but the underlying mechanisms may be different. And if these researchers found this in one case, what other medical conditions might arise differently in males and females?

Cahill summed the issue up eloquently:

At the root of the resistance to sex-influences research, especially regarding the human brain, is a deeply ingrained, implicit, false assumption that if men and women are equal, then men and women must be the same. This is false. The truth is that of course men and women are equal (all human beings are equal), but this does not mean that they are, on average, the same. 2 + 3 = 10 – 5, but these expressions are not the same. And, in fact, if two groups really are different on average in some respect, but they are being treated the same, then they are not being treated equally on average.

It’s a tough issue to resolve: how do we treat the two groups equally, but not the same? Parents of multiple children seem overall to solve this problem at least to a “good enough” extent, and I have faith that we can do the same on a larger scale in science.

Faking science

In many scientific fields, and especially in psychology, there’s currently a lot of concern about the integrity of published work. A lot of the concern stems from “sloppy science”: practices like massaging data by excluding participants who detract from a significant effect, or peeking at the data over the course of running participants and ending data collection as soon as the results confirm the hypothesis. Then there’s the “file drawer problem,” which grows out of the widespread acceptance that a p-value of less than 0.05 indicates significant results. What that threshold actually means is that even when no real effect exists, about 5% of studies will still come up “significant”: a false positive, basically a false alarm detecting an effect that doesn’t exist. So in theory, if the same (wrong) study is run 20 times, it will on average yield null findings 19 times and a false alarm once. The null findings tend to stay unpublished in the file drawer, but if the false alarm gets published (as truth) and scientists base future research on it, it can result in a lot of wasted time and money.
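To see how the “peeking” practice in particular inflates that nominal 5%, here’s a small simulation of my own (not from any of the articles discussed): both groups are drawn from the same distribution, so there is no real effect, yet testing after every batch of participants and stopping at the first p < 0.05 produces far more than 5% false alarms.

```python
# A toy simulation of optional stopping ("peeking"), my own sketch.
# The null hypothesis is true by construction: both groups come from
# the same distribution, so every "significant" result is a false alarm.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_max, batch = 5000, 100, 10
false_alarms = 0

for _ in range(n_studies):
    a = rng.normal(size=n_max)  # group 1: no true effect
    b = rng.normal(size=n_max)  # group 2: same distribution
    # Test after every `batch` participants; stop at the first p < .05
    for n in range(batch, n_max + 1, batch):
        if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
            false_alarms += 1
            break

print(f"False-alarm rate with peeking: {false_alarms / n_studies:.1%}")
# A single test at a fixed sample size would come in near 5%;
# repeated peeking pushes the rate well above the nominal level.
```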

Most cases of questionable scientific integrity lie in a blurry area between honest mistake and outright misconduct, but not all of them. I recently discovered a captivating and lengthy article in the New York Times called “The Mind of a Con Man,” spotlighting Diederik Stapel, a prominent Dutch social psychologist whose fraudulent scientific practices were discovered in 2011. Fabricating any data is a serious enough offense, but investigations have revealed that Stapel fabricated studies for at least 55 papers. Stapel also allowed the fabricated data to be used in 10 different students’ PhD dissertations, though none of these students are being held liable because they had no idea that any of the data had been faked.

In most cases, Stapel completely made up the data. He didn’t even run many of the experiments he claimed to have run, and in some cases the experiments wouldn’t have been possible to run. For example, he claimed to have conducted one study in the Utrecht train station, but the station’s layout would have made the arrangement Stapel described impossible. In another, he was supposed to measure whether participants given M&Ms in a mug that read “Capitalism” consumed more than those whose M&Ms were in a different mug. Instead of running the experiment, he brought the M&Ms home and became the sole participant in his own study.

How to get a scientific publication, step 1: start with some M&Ms in a mug…
Image: http://www.flickr.com/photos/donut4147/3500197318/

Why did he go to such elaborate lengths to fabricate data over and over? His advisor claims he was a brilliant researcher, so it seems likely that he could have produced genuinely high-quality work. But in many cases he never ran the experiment at all, so he didn’t even try to investigate his hypothesis. Stapel claims that a “lifelong obsession with clarity and order… led him to concoct sexy results that journals found attractive.” According to the NYT, “he described his behavior as an addiction that drove him to carry out acts of increasingly daring fraud, like a junkie seeking a bigger and better high.”

There are serious consequences, undoubtedly for Stapel, whose academic career is over, but also for science more broadly. For one, science is a collaborative process. Although Stapel was technically collaborating with many other researchers, he was able to report that he had run many experiments and analyzed a lot of data without actually proving this to anyone. Increased transparency would eliminate the ability to conceal so much. Additionally, we should evaluate whether current practices may be contributing to the motivation for fraud like Stapel’s. In an earlier post, I wrote about mounting concerns that prestigious journals are actually encouraging bad science by accepting only extremely clean and flashy results… Stapel may be a case in point for this argument. What he did was indisputably wrong, but there may be steps we can take to avoid recurrences.

Sushi vs. Hamburger Science

I was introduced to this article, Sushi Science and Hamburger Science, last semester in one of my favorite classes, and it’s still making me think. It’s written by Tatsuo Motokawa, a Japanese biologist visiting America, who says:

I had always regarded science as a universal and believed there are no differences in science at all between countries. But I was wrong. People with different cultures think in different ways, and therefore their science also may well be different.

He first compares the cuisines of the East and West. According to Motokawa, in the West we overcook food, but at the same time have some dishes in which the genius of the chef is truly apparent. In the East, on the other hand, many meals are not cooked at all, and although it takes plenty of skill to prepare even sushi and sashimi well, the materials speak more than the cook does.


His religion panel also sums up differences that he says are evident in many cultural practices. In general, Westerners are more focused on one God, intentionality, the individual, and rational facts. While dichotomies are pervasive in Western culture, they are absent from Eastern culture. Motokawa sums up the difference by equating Western thought with the concept of “one” and Eastern thought with the concept of “many.”

Image: Motokawa’s religion panel

This “one-many” distinction is very clear in the two cultures’ beliefs about science. In the West, we assume that nature is uniform and rational, and the goal is to discover universal rules. Instead of seeking universality, Easterners focus on finding differences and specificity (which Motokawa attributes at least in part to their belief in many gods). He writes that according to Eastern philosophy, “To interpret is to create your personal world, which always closes the way to the truth.”

Image: Motokawa’s science panel

If this is right, it explains why 70% of psychology citations come from the West. In psychology, the goal is to discover how the human brain/mind works. This implies that every human brain (and primate and rat brains too, since those animals are frequently used as subjects in studies meant to teach us about humans) behaves the same way under the same conditions. Are our mentalities so different across cultures that we don’t even see value in studying the same things? I wonder whether there’s any way to reconcile Eastern and Western cultural beliefs in order to study the mind. Maybe that’s an important step toward a true understanding, as opposed to an understanding based on constricted cultural beliefs.

If cheeseburger sushi is a culinary possibility, maybe there’s hope for reconciling these two very different cultures of scientific investigation… Photo: http://aht.seriouseats.com/archives/2010/07/cheeseburger-themed-sushi-available-at-yatta--truck-in-los-angeles.html

Imagining an apple vs. a banana

I’ve written previously about my skepticism regarding the use of fMRI to localize a function in the brain. Multivariate pattern analysis (MVPA) is commonly used to detect subtle differences in patterns of brain activity in fMRI data.

This post articulated the misleading potential of MVPA really clearly.

In the new paper, Todd et al. make a very simple point: all MVPA really shows is that there are places where, in most people’s brain, activity differs when they’re doing one thing as opposed to another. But there are infinite reasons why that might be the case, many of them rather trivial.

The authors give an example of two similar tasks: imagining apples and imagining bananas. Although a standard fMRI analysis might find no significant differences in activation, an MVPA might reveal one area in which the two tasks produce differing patterns of activity (differences in blood flow). At first, it might seem that the MVPA has revealed the area of the brain responsible for encoding specific fruits, but this conclusion overlooks context. Every person has had different experiences with apples and bananas. Maybe a participant likes one and not the other. Maybe he ate a banana right before participating in the study and therefore has bananas on his mind. The potential contextual differences in any individual’s experiences with the two fruits are endless, and they would lead to apparent differences in individual activation patterns for the two tasks.

Apples and bananas are linked to different experiences and thoughts for everyone.
Image: http://www.ireallylikefood.com/708661250/do-you-like-to-eat-eat-eat-apples-or-bananas/

Here’s another way of looking at the hypothetical task (image below). For Participant A (the dark line), task A (imagining the apple) requires more concentration than task B (imagining the banana). For Participant B, the difficulties are reversed. Normally, these individual differences would cancel each other out, demonstrating no significant difference between brain activity when people imagine apples and when they imagine bananas, but MVPA is designed precisely not to let individual differences cancel each other out.

Image illustrating the MVPA confound, from Todd et al.: http://www.sciencedirect.com/science/article/pii/S1053811913002887
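To make that cancellation concrete, here’s a toy simulation of the figure’s logic (my own sketch, not Todd et al.’s analysis): each hypothetical participant shows a large, reliable difference between the two imagery tasks, but in opposite directions, so the group average lands near zero even though the tasks are easy to tell apart within each individual, which is exactly the kind of structure a pattern classifier can pick up.

```python
# Toy version of the Todd et al. point: opposite-signed effects in two
# participants cancel in the group average, but each participant's
# trials remain individually separable. Invented numbers, for
# illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 50

def voxel_activity(effect, n):
    """Simulated single-voxel activity: a task effect plus noise."""
    return effect + rng.normal(scale=0.5, size=n)

# Participant A: apples (task A) demand more effort -> higher activity
a_taskA, a_taskB = voxel_activity(+1.0, n_trials), voxel_activity(-1.0, n_trials)
# Participant B: the difficulty is reversed
b_taskA, b_taskB = voxel_activity(-1.0, n_trials), voxel_activity(+1.0, n_trials)

# Group-level contrast: the opposite effects cancel to roughly zero
group_diff = np.mean([a_taskA.mean() - a_taskB.mean(),
                      b_taskA.mean() - b_taskB.mean()])
print(f"Group-average task difference: {group_diff:+.3f}  (~0)")

# Within each participant, the tasks remain easy to tell apart
print(f"Participant A difference: {a_taskA.mean() - a_taskB.mean():+.3f}")
print(f"Participant B difference: {b_taskA.mean() - b_taskB.mean():+.3f}")
```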

The authors of the original paper suggest that MVPA confounds can be controlled with classical General Linear Model (GLM) analysis or with post hoc linear regressions, so I don’t take this as a sign that MVPA or fMRI data aren’t meaningful, but more as a warning to be very cautious when interpreting them. Further, I take it as a reminder of the importance of context: we’re all such different creatures, and our vastly different experiences inevitably color our cognition and behavior.