Curious: The Desire to Know and Why Your Future Depends on It (Review)

One day as I was clicking through Amazon, the site recommended a book with the word Curious across a black cover with an owl beneath. Naturally, I was curious: A whole book on curiosity? How much is there to say? About 45 seconds later, I was reading it. It was a fun read, peppered with stories, descriptions of research, and historical anecdotes. It was filled with rich quotes, from the author and from the many others who have written about the topic over the centuries, and I’ll let those quotes drive this review.


A Taxonomy of Curiosity

Curiosity is not just one thing. Ian Leslie describes three types of curiosity, distinguished by the contexts in which they arise and the behaviors they encourage us to seek out.

Diversive curiosity is an attraction to things that are novel. I imagine a dog on a walk, pausing to inspect every seemingly new patch of dirt, trash, or fire hydrant. Humans show a lot of diversive curiosity too, like when we scroll through a Twitter feed or flip the TV channels 30 times in a minute. It’s not just a low-level type of curiosity, but instead is a starting point that drives us to seek out new experiences and people and paves the way for two deeper types of curiosity.

Epistemic curiosity manifests when diversive curiosity is honed as a quest for knowledge or understanding. It is “deeper, more disciplined, and effortful” than diversive curiosity, a desire to understand how the world works. Psychologists use the term Need For Cognition (NFC) as a measure of intellectual curiosity. People with a high NFC thrive on and enjoy intellectual challenges, while those low in NFC prefer their mental lives to be as straightforward as possible.

Empathic curiosity is the drive to understand the thoughts and feelings of others, which we can attain by learning to put ourselves in others’ shoes.

A History of Curiosity

Leslie takes us through curiosity’s ups and downs over the past centuries: in some eras it was looked down upon, and little innovation took place during those times; in others, such as the Renaissance, empathic and epistemic curiosity flourished and culture exploded. Cities, too, fueled explosions of curiosity: “The city was a serendipity generator.”

Even now, public opinion of curiosity is a mixed bag: we still repeat warnings of Adam and Eve’s curiosity, we parrot the phrase, “curiosity killed the cat,” use the word curious when we actually mean that someone is weird, and emphasize practical job skills in education over all else. At the same time, there’s a market for books like this one, lauding the trait and going so far as to claim that “your life depends on it.”

How does the Internet fit into society’s curiosity? On the one hand, we have an incredible amount of information literally at our fingertips. Naturally curious people can have a field day, and many do. But people who are lower in NFC can use the internet to stunt the development of their curiosity… which many also do. Who/what/when/where questions can usually be answered by typing a pithy phrase into Google, clicking on the first search result without reading about it, and scanning a sentence or two of the web page. This type of information-seeking is not effortful, and therefore doesn’t engage the processes at work when we truly exercise curiosity. Leslie comes back to this theme often: while the Internet has amazing potential for expanding our horizons and allowing us to share ideas faster than ever, if we’re not careful, it can also squash our curiosity, much to society’s detriment.

Metaphors for curiosity

Puzzle vs. Mystery: Leslie attributes this distinction to security and intelligence expert Gregory Treverton. Some problems are puzzles:

they have definite answers… are orderly; they have a beginning and an end. Once the missing information is found, it’s not a puzzle anymore. The frustration you felt when you were searching for the answer is replaced by satisfaction. Mysteries are murkier, less neat. They pose questions that can’t be answered definitively, because the answers often depend on a highly complex and interrelated set of factors, both known and unknown… Puzzles tend to be how many or where questions; mysteries are more likely to be why or how.

He uses the question “where is Osama bin Laden?” as an example of a puzzle. Its mystery equivalent might be “how does Osama bin Laden think?” By the same logic, reading a mystery novel is really solving a puzzle, because once you get to the end, you know who did what, and the problem is solved. Reading a novel like The Great Gatsby, on the other hand, is a mystery, because it leaves you thinking about questions that don’t have definite answers, like the true nature of the American dream.

Leslie encourages people to “forage like a foxhog.” This idea builds on a line credited to the Greek poet Archilochus: “The fox knows many things, but the hedgehog knows one big thing.” The hybrid foxhog is the compromise to the question of whether we should strive to become generally knowledgeable people or aim to become experts in very specific areas. The foxhog does both, resulting in knowledge that can be considered “T-shaped”: the top of the T is surface knowledge, and foxhogs have a lot of it. The other part of the T is its slender, lengthy spine, and foxhogs possess tall Ts because they have deep knowledge of at least one area. In other words, “curious learners go deep, and they go wide.” As a side note, robust, healthy Ts are precisely the goal of a PhD program, designed to make you smart in a way that will be conducive to having happy hour drinks with many people (academics) while becoming so knowledgeable about your own field (or subfield, or sub-subfield…) that sometimes you have to teach your advisor what you’re doing.

The Malleability of Curiosity

Leslie emphasizes that “a person’s curiosity is more state than trait.” That means that although we are born with varying degrees of innate NFC, curiosity is highly influenced by our surroundings.

Questions are crucial. They’re tools through which we learn incredible amounts of information about the world. While asking questions may seem like a very basic ability, it actually requires a few important skills: you have to know that there are things you don’t know, you have to be able to imagine different possibilities for the things you don’t know, and you have to recognize that other people are sources of information. A kid between the ages of 2 and 5 will ask roughly 40,000 explanatory questions. And when kids are spoken to by adults who ask questions themselves, the kids begin to ask more. The moral of that story is that asking kids questions gets them to ask questions too, which helps them not only to learn about the world, but also to learn that inquiring about the world is a fruitful behavior.

The Importance of Curiosity

Curiosity fosters innovation. Computers are now smarter than humans at many tasks, but computers aren’t curious. For this reason, Leslie writes:

The truly curious will be increasingly in demand. Employers are looking for people who can do more than follow procedures competently or respond to requests, who have a strong, intrinsic desire to learn, solve problems, and ask penetrating questions. They may be difficult to manage at times, these individuals, for their interests and enthusiasms can take them along unpredictable paths, and they don’t respond well to being told what to think. But for the most part, they are worth the difficulty.

Why can curious people innovate better than non-curious ones or better than computers? Curious people are “the ones most likely to make creative connections between different fields, of the kind that lead to new ideas.”

Angela Duckworth is well-known for popularizing the concept of grit: “the ability to deal with failure, overcome setbacks, and focus on long-term goals.” Grit has been shown to be a remarkable predictor of success in many areas of life. I once heard two professors describe the most successful grad students as those who have grit, and their conversation plays through my head on a weekly basis, if not more often. Grit and curiosity go hand in hand. If you’re curious, you just keep learning and exploring, even once you’ve learned what you set out to know. If you’re gritty, you just keep going, even when obstacles arise and the goal you’re pursuing becomes more difficult.

To be curious, you have to know things. One way of thinking about curiosity, attributed to George Loewenstein, is that there’s an information gap: you know some things about a topic, and then realize that you don’t know everything, but that you can learn more. This creates an awesome cycle: the more you learn, the more you want to learn.


An exercise in wrongness

Wrongness isn’t a word, you say? Then I’m off to a great start. (It is, though).

My department makes a pretty big deal of our second-year projects. We don’t have any qualifying exams, just an oral presentation and paper. We’re still 4 long weeks away from presenting these projects, but there have already been plenty of eye-opening moments for me to write about. For many of us, this is the first time we’ve done a project of this nature and magnitude from start to “finish” (are these projects really ever over?) largely independently. This means there are a lot of surprise opportunities for making mistakes.

Going back to last summer when I started running the experiments that will be included in my project, I screwed up plenty of things. My sloppy programming meant that the experiment crashed sometimes. Other times, I failed to communicate important details to the research assistants running the experiment, and we had to trash the data. It turned out that the data collection was actually the phase of the project in which I made the fewest mistakes, though. The process of analyzing the data was a cycle of mistakes and inefficiencies that were usually followed up by more mistakes and inefficiencies. Every once in a while, I’d do something useful, and that was enough to keep me going.

Sometimes, I’ve gotten annoyed at myself for making these mistakes, especially when deadlines are approaching or when my advisor has to be the one to point them out to me. I’ve been frustrated by the messiness of the data (though logically I know that I should probably be skeptical if my data weren’t messy), and all those things I should have done differently continue to come to mind and nag at me.

"Piled Higher and Deeper" by Jorge Cham www.phdcomics.com
“Piled Higher and Deeper” by Jorge Cham
http://www.phdcomics.com

Luckily, I’m pretty sure I’m not alone. A handful of older grad students have told me about their second year project mistakes, and mine start to look like par for the course.

And then I discovered a Nautilus interview with physicist David Deutsch. It’s a pretty philosophical interview on the importance of fallibility, but the takeaway is that we should embrace our capacity to be wrong, because the very fact that we’re error-prone means that it’s also possible to be right. He points out that so often in science, people disprove things that had been assumed for many years to be truths.

What makes progress possible is not whether one is right or wrong, but how one deals with ideas. And it doesn’t matter how wrong one is. Because there’s unlimited progress possible, it must mean that our state of knowledge at any one time hasn’t even scratched the surface yet. [As the philosopher Karl Popper said], “We’re all alike in our infinite ignorance.”

This interview lifted a lot of weight off my second-year grad student shoulders. I’ve made lots of mistakes throughout the process of putting together this project (and I’m not finished making them, I feel pretty confident), and therefore there is such a thing as doing the work correctly. In the end, the p-values that I find when I analyze my data aren’t really the important part (though, unfortunately, they’re what will determine if and where the work gets published…). Instead, it’s a reminder to focus on the ideas – the ones the work was based on and the ones it opens up – and embrace the wrongness.

PhD Smoothie

I woke up yesterday morning feeling reluctant to tackle the day. The class I TA had a final exam at 11:30, which meant that my office would likely be overflowing all morning with panicked students wanting to cram at the last minute. Then I’d get to watch them take a test for 3 hours, followed by a few more hours of fighting with copiers and grading software so I could turn the grades in. At some point during the day, I’d start analyzing a recent experiment that my gut tells me is a heap of noise, and turn in a seminar paper, thinking “if only I had one more day to polish this up…”

Right away, I knew that this day was going to call for a PhD smoothie. Here’s the recipe:

Image: http://pixabay.com/en/smoothie-turmix-raspberry-fitness-463331/

Start with a base of mystery fruit. Pick something whose name you don’t know and whose taste you can’t predict. The important part is that you’re curious about its taste, and you’re going to find out.

Next, add in a generous cup of grits. There are going to be lots of setbacks, and grit is the best predictor of success.

Then add in two handfuls of kale – because it’s good for you!

If the color is brownish, you’re doing it right. This is the time to add a couple tablespoons of Trader Joe’s Soyaki sauce because it’s anything but conventional and keeps things interesting.

Continuing with the ethnic theme, add in some wasabi. You’ll have to titrate the amount to your own threshold. The goal is to have just enough to make you cry a little, but not on every sip – just on a few. After opening your tear ducts, the wasabi will help you feel fresh and ready for the next taste.

Then, add a scoop of protein powder. The importance of physical strength is not to be underestimated, and your gym time will likely be limited.

And for the final touch, three spoonfuls of sugar, because sugar is satisfying. You should know that it’ll make you addicted, though, and may even contribute to high blood pressure (more research is needed).

Put all this in a blender and mix it at high speed – like, as fast as your blender will go. Keep at it for anywhere between 4 and 8 years – you can’t put a timer on masterpieces like this one. When your gut tells you it’s done (there will be no other indicator) or whenever you get too impatient – whichever comes first – gulp it down and head to the lab before you have any second thoughts. Bon appétit!

A movable academic feast

This weekend I’m at Psychonomics, and except for the fact that there’s no food at the conference, it is very much a movable feast. Just like at many restaurants, before the experience even starts you can go online to get a menu in the form of a 300+ page program. Both menus are broken up in a logical sequence – either by appetizers, mains, and desserts or by morning, afternoon, and evening sessions. Instead of separate sections for meat and fish, though, the conference menu has sections for poster sessions and talks, and subsections for posters and talks that are on related topics.

The menu is only a small part of the experience, though. It’s a guide. The feast starts when you get to the conference center, choose an item off the menu, and seek it out. You float from one conference room to another, each time getting a small taste of something new. It’s not the Olive Garden with whopping portion sizes and bottomless breadsticks, though by the end you’ll probably feel like you ingested the mental equivalent of a huge bowl of penne alla vodka (which is delicious, but also induces lethargy, a coma-like feeling). It’s rich and filling, an experience you know you want to have again… but first, a recovery period.


Body spills into the brain

This quarter I’m TAing for a class called Distributed Cognition, which explores a bunch of ways that cognition might not be something that happens exclusively in the brain. This week we looked at different flavors of embodiment, the idea that the body is crucial for cognition. For example, we talked about one study showing that people who were unknowingly leaning to the left made numerical estimates that were too small (consistent with the location of smaller numbers on our number line), while those leaning to the right made overestimations (Eerland, Guadalupe, & Zwaan, 2011). The overarching theme was that the state of our body can affect thoughts that we typically attribute only to our brain.

One study I was reminded of while talking to a student has gotten a good amount of popular press attention. It’s called Extraneous factors in judicial decisions (Danziger, Levav, & Avnaim-Pesso, 2011), but the message usually taken from it is that judges have no mercy when they’re hungry. The authors split judges’ work days into three chunks, separated by their food breaks. They found that at the beginning of each segment, judges made favorable decisions about 65% of the time, and that their favorable decision rate declined steadily throughout each segment, dropping to nearly 0%. As someone whose brain and body shut down without a relatively consistent stream of food, this finding is not too shocking, though the magnitude of the change in favorable decisions is dramatic. I think it’s a great example of “body spills into the brain.”

It’s also an example of what many researchers refer to as “ego depletion,” the idea that we have a limited pool of mental resources, and cognition suffers once they’re used up. We get mentally fatigued, and then make poor decisions or have poor performance on some task as a result. Ego depletion underlies claims that working fewer hours increases productivity. I read this sort of advice often, each time thinking to myself, yes! I should do that. I feel this way especially on days like today, a Saturday morning in which ego depletion is fresh on my mind. I’m in recovery mode. Then, inspired to change my work habits, I’ll open my calendar to decide which work hours I’ll shave off the week, and I just stare at it. My trusty, color-coded calendar feels non-negotiable, so I close it and decide that working fewer hours maybe isn’t that crucial. I convince myself of this by reading reminders that some researchers claim that ego depletion is all in our heads. There’s probably some truth to this too – I often don’t start to feel drained until I acknowledge how busy I’ve been.

Image: http://www.artofmanliness.com/2012/01/08/willpower-part-ii/

I do a lot of meta-cognition about work. By that I mean that I think about my work patterns and other people’s, and I try to evaluate what’s good and bad about those patterns. My conclusion, for this morning at least, is that there’s probably not a one-size-fits-all solution to this issue. Some people might suffer from major ego depletion, while others might be more Energizer-bunny-like. Some weeks a person might get tons done while putting in many hours, and other weeks might be more efficient with a leaner schedule. For me, the goal is to work deliberately and mindfully, taking each week, day, or project as it comes, and adapting work habits as necessary. I will probably never discover the secret recipe for 100% efficient work, but that’s ok – it’s kind of fun trying to figure it out anyway.

Quitting the 9 to 5 before starting it

I recently stumbled upon a blog post at raptitude titled “The frightening thing you learn when you quit the 9 to 5.” I’m not sure why I was so drawn to it, since I’ve never actually worked a traditional 9 to 5 job. Maybe I was trying to mentally prepare for the day I quit a job I will most likely never have. Regardless, I was curious.

David Cain, the author, is 32 years old and recently left an unfulfilling 9-5 job to pursue writing. Although bizarre curiosity might have led me to click the link in the first place, I was soon captivated by the parallels between his situation and the one I’ve found myself in after beginning work on my PhD, and especially this summer, a time when much of the structure I was used to has temporarily died down.

Cain writes, “before I quit my job at 32, I had never really experienced a self-directed period of my life in which I was actually trying to accomplish something.” Oddly enough, this is probably true for most of us. We might have side projects that are self-directed and goal-oriented, but how rare is it for your everyday life to be this way? It sounds a little fantastical, the sort of thing we might wish for: no boss, doing work we love, when and how we want to do it. Cain’s reflections suggest that it’s not the walk in the park it might seem to be at first. It’s great in a lot of ways, but it’s far from intuitive. Although the post has nothing to do with academia, I recognize that thriving in this situation is what needs to be done to earn a PhD.


A few other quotes that really hit the nail on the head for me:

“If I chose not to work, it was my loss and only mine. When you’re self-employed, every day is Wednesday.”

“Each day is a blank page with no outline indicating where the crayons go. I have to decide what to draw, how ambitious or humble it’s going to be, and what it’s all going to add up to over time.”

Is The Office what 9 to 5 jobs are like?!

Cain came face-to-face with the sudden need to be his own boss and define his own career path at age 32, after roughly 10 post-college years characterized by the having-a-boss experience. I wonder if it’s more jarring at that point in life than at 22, when you’re inexperienced and naive but haven’t had the 9-5 routine drilled into you yet. In some ways, college seems like an intermediate step between school years, when children are micromanaged, and this self-directed state that Cain writes about. The traditional 9-5 path seems like a step in the opposite direction, though, so maybe the freedom is less dumbfounding for me than it might be if I had become accustomed to a more traditional work scenario.

The goal of Cain’s post is to urge all people, from those currently employed in 9-5 jobs to children still in school, to think about escaping the resignation to trudge through 5/7 of their lives just to earn a paycheck. “Much better than resignation is to make a long-term plan to find work that is valuable enough to you that your typical day is a fulfilling one, and valuable enough to others that people will pay you for doing it.” It’s a pretty romantic prospect, but a pretty cool one to aim for nonetheless.

Exponential Learning

We toss around the phrase “learn something new every day” jokingly, but in reality, we learn so much more than one thing per day. Many of these things are implicit, so we don’t realize we’re learning, but each experience we have is making its mark on our cognition. Many other things we learn, though, are explicit – we’re consciously learning in an effort to get better at something. Before we can master a skill or knowledge set, we often have to learn how to learn that thing. What strategies facilitate optimal learning? Which are ineffective? A recent NYT column by David Brooks highlights some overarching differences in the learning processes in different domains.

In some domains, progress is logarithmic. This means that for every small increase in x (input, or effort), there is a disproportionately large increase in y (output, or skill) early on. Over time, the same increases in x will no longer yield the same return, and progress will slow. Running and learning a language are two examples of skills that show logarithmic learning processes.

Image: a logarithmic learning curve

Other domains have exponential learning processes. Early on, large increases in effort are needed to see even minimal progress. Eventually, though, progress accelerates and might continue to do so without substantial additional effort.

Mastering an academic discipline is an exponential domain. You have to learn the basics over years of graduate school before you internalize the structures of the field and can begin to play creatively with the concepts.

My advisor has also told me a version of this story. She’s said that working hard in grad school (specifically I think she phrased it as “tipping the work-life balance in favor of work”) is an investment in my career. Just as monetary investments become exponentially more valuable over time, intense work early in my career will be exponentially more valuable in the long run than trying to compensate by working extra later on.

Image: an exponential learning curve

Even in my first year of grad school, I developed a clear sense that just learning how the field works and which questions are worth asking takes time. When I wrote my progress report for my first year, I concluded that most of what I learned this year has been implicit. I can’t point to much technical knowledge that I’ve acquired, but I can say that I’ve gained a much better idea of what cognitive science is about as a field. I’ve gained this by talking with others about their ideas (and especially by listening), by attending talks, and by reading as much as I could. This implicit knowledge doesn’t necessarily advance my “PhD Progress Meter” (a meter that exists only in my mind), but it’s knowledge I need to at least start acquiring before I’ll see any real progress on that meter. Once the PhD meter is complete, I will merely have built the foundation for my career, and will probably still have much learning to do before I reach the steepest and most gratifying part of the learning curve.

Brooks points out that many people quit the exponential domains early on. He describes being “bullheaded” as a requirement for someone who wants to stick with one of these domains, since you must be able to continually put in work while receiving no glory. I think that understanding where you are on the curve at any given time is crucial for sticking with one of these fields, so that you can recognize that eventually the return on effort will accelerate, and that the many hours (tears, complaints, whatever) that went into mastering the domain early on were not in vain. Where I stand right now, progress is pretty flat… so I must be doing something right.

A homunculus for time

Last week, Chris Fry, Twitter’s Senior VP for Engineering and a UCSD Cog Sci alum, came to talk at our department’s open house. The theme of his talk was why a PhD (specifically one from our department) was a good investment and helps him to be successful even in a career that’s seemingly far from academia.

There’s a metaphor he used that I can’t get out of my head this week. He talked about the homunculus, which, in the brain sciences, refers to the disproportionate mapping of different body parts in the motor and somatosensory cortices. In other words, the portion of the motor cortex devoted to hand movement is much larger than the hand’s share of the physical body, which helps explain why we can make far more dexterous movements with our hands than with our toes, for example.

Image: the cortical homunculus

Fry commented that he imagines a homunculus for time in his memory. Each unit of time (usually we measure time in years) is not necessarily represented in his memory in proportion to the share of a person’s life it actually occupies. Grad school, he noted, was a disproportionately large chunk of his time homunculus. He seemed to suggest that it was a time of freedom and intensity, a time in which he learned extensively and made many memories.

This metaphor takes an aspect of time that we’re all aware of – equivalent units of time often don’t feel equal – and makes it concrete. His articulation was certainly effective in inspiring me to make this chunk of my temporal homunculus as disproportionately large as possible.

Rethinking a metaphor for grad school

I’ve suggested that grad school is a marathon: a long and demanding process requiring endurance, determination, and discipline to reach the end. Recently it occurred to me that the last few words of that marathon definition, “the end,” detract from the parallelism between the two processes (marathon and grad school). Successfully defending a dissertation and adding the letters “Ph.D.” to the end of your name mark the end of a process, but one not nearly as conclusive as a finish line. Really, the end of grad school marks the start of the true marathon – a career.

Image: http://www.myrtle-beach.com/2013/08/19/registration-now-open-for-myrtle-beach-marathon/
Image: http://www.myrtle-beach.com/2013/08/19/registration-now-open-for-myrtle-beach-marathon/

If that’s the case, then a Ph.D. program is maybe more accurately described as training for a marathon. Just as a serious runner trains rigorously to learn and practice the skills needed to complete the marathon, a Ph.D. program provides a serious student with an opportunity to learn and practice the necessary skills.

A little self-promotion

I’ve written two posts for different blogs this week that I’d like to share, in part because the other blogs themselves are really great and worth checking out.

The first post was for my department’s blog: UCSD Cognitive Science. In two weeks, a handful of applicants who have already impressed the faculty members will arrive for our open-house weekend. There will be lots of catered food and talk about the mind, and it will be so different for me to experience it as a current student rather than as an applicant. I wrote the post with open-house weekend in mind, with the goals of introducing a handful of the labs in the department, parodying the process of getting a Ph.D., and offering a little bit of “what-to-expect” for applicants, all through the lens of my favorite topic, metaphor.

The second post was for GradHacker, a blog that provides tidbits of wisdom (hacks) from grad students for grad students. I wrote about time management, a skill that had always come naturally to me until I started my Ph.D. program. I wasn’t failing at managing my time, but I also wasn’t content. I did quite a bit of my own research and soul-searching (it sounds dramatic, but it really was that important to me) to figure out what might help me meet my new demands. One technique I discovered was the Pomodoro method, which structures time by breaking it into 25-minute chunks of intense focus separated by short breaks.

Hoping to get more writing opportunities like these in the near future!