Exponential Learning

We toss around the phrase “learn something new every day” jokingly, but in reality, we learn far more than one thing per day. Many of these things are implicit, so we don’t realize we’re learning, but each experience we have makes its mark on our cognition. Many other things we learn, though, are explicit: we’re consciously learning in an effort to get better at something. Before we can master a skill or knowledge set, we often have to learn how to learn that thing. What strategies facilitate optimal learning? Which are ineffective? A recent NYT column by David Brooks highlights some overarching differences in the learning processes across different domains.

In some domains, progress is logarithmic. This means that for every small increase in x (input, or effort), there is a disproportionately large increase in y (output, or skill) early on. Over time, the same increases in x will no longer yield the same return, and progress will slow. Running and learning a language are two examples of skills that show logarithmic learning processes.

Image: logarithmic learning curve

Other domains have exponential learning processes. Early on, large increases in effort are needed to see even minimal progress. Eventually, though, progress accelerates and might continue to do so without substantial additional effort.
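To make the contrast concrete, here is a minimal Python sketch of the two curves. The functional forms and constants are illustrative choices of my own (nothing from Brooks’s column); the point is just that one extra unit of effort buys a lot early and little late on a logarithmic curve, and the reverse on an exponential one.

```python
import math

def logarithmic_skill(effort):
    # Illustrative logarithmic curve: big early gains, diminishing returns later.
    return math.log(effort + 1)

def exponential_skill(effort):
    # Illustrative exponential curve: slow start, accelerating returns later.
    return math.exp(0.5 * effort) - 1

# Compare the payoff of one extra unit of effort early vs. late in each domain.
for label, curve in [("logarithmic", logarithmic_skill), ("exponential", exponential_skill)]:
    early_gain = curve(2) - curve(1)
    late_gain = curve(10) - curve(9)
    print(f"{label}: gain from effort 1->2 = {early_gain:.2f}, gain from effort 9->10 = {late_gain:.2f}")
```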

Mastering an academic discipline is an exponential domain. You have to learn the basics over years of graduate school before you internalize the structures of the field and can begin to play creatively with the concepts.

My advisor has also told me a version of this story. She’s said that working hard in grad school (specifically I think she phrased it as “tipping the work-life balance in favor of work”) is an investment in my career. Just as monetary investments become exponentially more valuable over time, intense work early in my career will be exponentially more valuable in the long run than trying to compensate by working extra later on.

Image: exponential learning curve

Even in my first year of grad school, I developed a clear sense that just learning how the field works and which questions are worth asking takes time. When I wrote my progress report for my first year, I concluded that most of what I learned this year was implicit. I can’t point to much technical knowledge that I’ve acquired, but I can say that I’ve gained a much better idea of what cognitive science is about as a field. I’ve gained this by talking with others about their ideas (and especially by listening), by attending talks, and by reading as much as I could. This implicit knowledge doesn’t necessarily advance my “PhD Progress Meter” (a meter that exists only in my mind), but I need to at least start acquiring it before I’ll see any real progress on that meter. And once the PhD meter is complete, I will merely have built the foundation for my career; I will probably still have much learning to do before I reach the steepest and most gratifying part of the learning curve.

Brooks points out that many people quit the exponential domains early on. He uses the word “bullheaded” to describe what it takes to stick with one of these domains, since you must be able to keep putting in work while receiving no glory. I think that understanding where you are on the curve at any given time is crucial for sticking with one of these fields: it lets you recognize that the return on effort will eventually accelerate, and that the many hours (tears, complaints, whatever) that went into mastering the domain early on were not in vain. Where I stand right now, progress is pretty flat… so I must be doing something right.

Rethinking a metaphor for grad school

I’ve suggested that grad school is a marathon: a long and demanding process requiring endurance, determination, and discipline to reach the end. Recently it occurred to me that the last few words of that marathon definition, “the end,” detract from the parallelism between the two processes (marathon and grad school). Successfully defending a dissertation and adding the letters “Ph.D.” to the end of your name mark the end of a process, but one not nearly as conclusive as a marathon’s finish line. Really, the end of grad school marks the start of the true marathon: a career.

Image: http://www.myrtle-beach.com/2013/08/19/registration-now-open-for-myrtle-beach-marathon/

If that’s the case, then a Ph.D. program is maybe more accurately described as training for a marathon. Just as a serious runner trains rigorously to learn and practice the skills needed to complete the marathon, a Ph.D. program provides a serious student with an opportunity to learn and practice the skills needed for the real race: a career.

Post fellowship app reflections

Yesterday I submitted my first application for external funding. It was for the NSF Graduate Research Fellowship Program, and it’s safe to say that almost every first- and second-year PhD student in any STEM field (as well as many fourth-year undergrads) in America applied for the award. When I began to think seriously about my application two months ago, it seemed like a relatively small, reasonable task. Last winter I applied to about eight grad schools, each with different essays and application requirements, so this seemed small in comparison. As the deadline drew closer, though, the project seemed to grow. The main components of the application are a personal statement (your past, current interests, and future goals) and a research proposal for a project you’ve designed yourself. Now that I’ve hit the Submit button, I have a moment to reflect on just how valuable this process was, regardless of how the application fares.

I had been working on the essays with my advisor throughout the process, but the day before the deadline, I met with her to work out the concerns I still had. We scrutinized every word in my application. We questioned the value of every sentence. We simplified, clarified, and, sometimes, we agonized. Four hours later, I could confidently say that the application as a whole accurately reflected the applicant I believe I am. 

I’ve recently started deciding how long a task should take and setting a stopwatch for that time. When it’s up, I’m done. While that works pretty well for tasks like readings, where absorbing every word isn’t crucial, this final editing session was a perfect example of the stopwatch method’s main pitfall. I’m incredibly thankful that my advisor didn’t set a stopwatch when I walked in. She was fully on board with the idea that we’d work together until the task was truly complete. By the time we had shaved the last bit of a sentence, bringing my essay to the required length, it was dark and cold (by San Diego standards). But those facts were irrelevant in light of the finished products. They’re a great mix of art and science: creative and melodious in places, while simultaneously logical, thorough, and articulate.

And now we wait.

The Ph.D. Grind

I recently read Philip Guo’s memoir, The Ph.D. Grind, detailing his experience working toward a Ph.D. in Computer Science at Stanford. Even though elements of his experience were unique and wouldn’t apply to many people doing Ph.D.s in other fields, or even to others in computer science (and he acknowledges this), it was still incredibly interesting.

One real strength of this book is that Guo wrote it immediately after finishing his Ph.D., and he argues that this was the best time to do so: current Ph.D. students aren’t able to reflect on the entirety of their experience, while people who completed their degrees years ago might have “selective hindsight.” Guo also details both the good and the bad, unlike bitter Ph.D. dropouts who dwell on the futility of working on a doctoral degree or successful researchers who extol the wonderful journey that earning a Ph.D. entails.

Comic: http://www.phdcomics.com/comics/archive.php?comicid=282

An example of a less-than-triumphant time:

At the time, I had absolutely no idea that my first year of Ph.D. would be the most demoralizing and emotionally distressing period of my life thus far.

Another quote that really hit home for me:

I found it almost impossible to shut off my brain and relax in the evenings, which I later discovered was a common ailment afflicting Ph.D. students.

And, during a 10-week period of stagnation in which he hardly spoke to anyone:

There was no point in complaining, since nobody could understand what I was going through at the time. My friends who were not in Ph.D. programs thought that I was merely “in school” and taking classes like a regular student. And the few friends I had made in my department were equally depressed with their own first-year Ph.D. struggles – most notably, the shock of being thrown head-first into challenging, open-ended research problems without the power to affect the high-level direction of their assigned projects.

Related, and a difficulty I’ve already begun encountering:

Unlike our peers with regular nine-to-five jobs, there was no immediate pressure for grad students to produce anything tangible – no short-term deadlines to meet or middle managers to please.

I like this one too:

Contrary to romanticized notions of a lone scholar sitting outside sipping a latte and doodling on blank sheets of notebook paper, real research is never done in a vacuum.

Image: http://www.dreamstime.com/

A blunt but honest comment from the epilogue:

There simply aren’t enough available faculty positions, so most Ph.D. students are directly training for a job that they will never get.

But luckily, that thought is followed shortly after by this one:

So why would anyone spend six or more years doing a Ph.D. when they aren’t going to become professors? Everyone has different motivations, but one possible answer is that a PhD program provides a safe environment for certain types of people to push themselves far beyond their mental limits and then emerge stronger as a result… Here is an imperfect analogy: Why would anyone spend years training to excel in a sport such as the Ironman Triathlon – a grueling race consisting of a 2.4-mile swim, 112-mile bike ride, and a 26.2-mile run – when they aren’t going to become professional athletes? In short, this experience pushes people far beyond their physical limits and enables them to emerge stronger as a result. In some ways, doing a Ph.D. is the intellectual equivalent of intense athletic training.

Overall, Guo’s memoir recounts many more struggles than triumphs, but in the end, he still accomplished what he set out to do and claims to have made tremendous gains in the process. Maybe I’m in denial (thinking, “oh, but he was in Computer Science, so that won’t happen to me”), but I didn’t finish the book discouraged. Instead, I think I finished with some realistic expectations and even more determination: while an Ironman Triathlon may never be in the cards for me, I’m up for the challenge of the “Ph.D. grind.”

What game are you playing?

In anticipation of the start of my PhD program, I’ve been devouring everything I can find that claims to give advice to grad students or shares an anecdote from someone who’s already done what I’m about to start. One post that I love is called “Playing the Wrong Game,” in which the author challenges the metaphor that the path to professorship in science is like a traditional video game, with obstacles, occasional cheat codes, level-up bonuses, and an end goal at which point you win. Instead of thinking of her time as a video game, she compares it to a pinball game – “where a player is thrust into the field with great energy and must navigate a field of obstacles trying along the way to rack up points.” The most important part of this metaphor, to me, is that there’s no “winning.” There’s just a constant effort to keep the ball from falling into the abyss and a persistent desire to get more points than you did last time.

Image: http://www.winbeta.org/news/microsoft-explains-why-pinball-game-never-made-it-past-windows-xp

Stemming from the pinball metaphor, “Dr. Johnna” advises:

Don’t take formulaic advice based on the video game metaphor, which insinuates that all players must follow a common path to a singularly defined ‘success’.

At the end of the post, she asks: “What’s your game?” There are so many games in existence, each with nuances that separate it from the rest. And we all have different paths (an interesting metaphor in itself), goals, and perceptions of the world around us, regardless of whether we’re aiming for a professorship, a political position, or just to be the best parent/friend/sibling/child/significant other we can be. Maybe we’re all playing multiple games at once… Monopoly? Darts? Bananagrams?

A working definition of cog sci

With my college graduation just days away, it’s only natural that I’ve been doing quite a bit of introspecting: In what ways am I different from the 17-year-old my parents dropped off at Vassar in 2009? How do my current beliefs and thoughts differ from those I had as I began my freshman year, and what aspects of my education have contributed to those developments? I think back to many of the classes I’ve taken over the four years: French, Latin, and Chinese; computer science; physiological psychology; the history of the English language; and anthropological linguistics come to mind. I feel that cumulatively, regardless of whether they counted towards the Cognitive Science major in the eyes of the Registrar, they have all contributed to my current understanding of the human mind.

In the fall, I’ll begin working on a PhD in cognitive science, so it seems fair to expect myself to have a clear definition of the field. “It’s like psychology, right?” asks almost every curious relative, family friend, and dental hygienist I’ve encountered. Others with more understanding of what cognitive science entails may see it as a lofty field, thinking about thinking, without practical applications. The conventional understanding of cognitive science, as articulated by Wikipedia, the hub of collective intelligence, is “the interdisciplinary scientific study of the mind and its processes.” While I certainly can’t disagree with this, such a pithy statement falls short for me.

The world is messy. I’ve always been tempted to impose order on it, applying logic to circumstances in which it may not belong, and I feel confident that I’m not alone in the propensity to reduce the world around me to causes and effects. However, causes and effects are meaningless in the absence of context, the world in which anything (and everything) occurs. Because this world is dynamic and constantly changing, explanatory reductions may be misguided; instead, context may be the only acceptable explanation for the perceptions and actions that we seek to understand. Cognitive science is, to me, the study of the mind (of any agent that perceives and acts in its world) that takes context as its starting point. In order to truly take context into account, the discipline necessarily draws from a number of fields, including psychology, philosophy, linguistics, anthropology, computer science and artificial intelligence, and neuroscience. Each field is simply one piece of the larger puzzle: alone, it has awkward edges and indiscernible shapes, but the amalgamation reveals a whole image that’s greater than the sum of all its parts.

On the first day of Introduction to Cognitive Science freshman year, I had no idea what cog sci was, except that “cognitive” meant something along the lines of “brain.” I created a Turing machine that could determine whether any string of x’s and y’s was a palindrome. All it needed was a set of rules, and the machine was infallible. But as soon as I added a z into the input string, it broke down completely: No Rule Defined, it told me. Because my human brain does not break down and halt in the middle of problem solving, it was evident to me that there aren’t Turing machines in our heads; instead, something else, something more complex than states and rules, must shape how we think, sense, and act in the world.
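I no longer have that original machine, but here is a minimal Python sketch of the idea; the rule table below is a hypothetical reconstruction of mine, not the actual class assignment. The machine is nothing but state-and-symbol rules for checking palindromes over x and y, and the moment it reads a symbol its table doesn’t cover, it halts with exactly that complaint.

```python
BLANK = "_"

# (state, symbol) -> (symbol to write, head move, next state)
RULES = {
    # Erase the leftmost symbol and remember it.
    ("start", "x"): (BLANK, +1, "have_x"),
    ("start", "y"): (BLANK, +1, "have_y"),
    ("start", BLANK): (BLANK, 0, "accept"),    # nothing left to match
    # Run right to the end of the remaining input.
    ("have_x", "x"): ("x", +1, "have_x"),
    ("have_x", "y"): ("y", +1, "have_x"),
    ("have_x", BLANK): (BLANK, -1, "check_x"),
    ("have_y", "x"): ("x", +1, "have_y"),
    ("have_y", "y"): ("y", +1, "have_y"),
    ("have_y", BLANK): (BLANK, -1, "check_y"),
    # Does the last symbol match the remembered one?
    ("check_x", "x"): (BLANK, -1, "rewind"),
    ("check_x", "y"): ("y", 0, "reject"),
    ("check_x", BLANK): (BLANK, 0, "accept"),  # odd-length middle symbol
    ("check_y", "y"): (BLANK, -1, "rewind"),
    ("check_y", "x"): ("x", 0, "reject"),
    ("check_y", BLANK): (BLANK, 0, "accept"),
    # Walk back to the left edge and start over.
    ("rewind", "x"): ("x", -1, "rewind"),
    ("rewind", "y"): ("y", -1, "rewind"),
    ("rewind", BLANK): (BLANK, +1, "start"),
}

def run(input_string):
    tape = dict(enumerate(input_string))
    state, head = "start", 0
    while state not in ("accept", "reject"):
        symbol = tape.get(head, BLANK)
        if (state, symbol) not in RULES:
            return f"No Rule Defined (state={state}, symbol={symbol!r})"
        write, move, state = RULES[(state, symbol)]
        tape[head] = write
        head += move
    return "palindrome" if state == "accept" else "not a palindrome"

print(run("xyx"))   # palindrome
print(run("xy"))    # not a palindrome
print(run("xzx"))   # No Rule Defined (state=have_x, symbol='z')
```

Everything the machine “knows” lives in that table; nothing in it lets the machine improvise when the world hands it a z.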

Lessons on Chinese grammar, cultures of South American tribes, and programming a for-loop also triggered mind-related thoughts and curiosities in my foreign language, anthropology, and computer science classes. In Perception & Action, I learned more about ants than almost any human would desire to know. An ant colony is a miraculously intelligent system, another example of a product much greater than the sum of its parts. Context alone determines an individual ant’s role and how and when that role is carried out. The ant lives in a constantly changing world, but instead of causing a breakdown, as such a world would for a Turing machine, it encourages varied behaviors that contribute to the colony’s overall success.

What does this mean for the study of human minds? It means that our perceptions, thoughts, and actions are inseparable from the contexts in which they occur. We are situated in the world, and numerous aspects of our world, like prior experiences, culture, and other people, play a prominent role in shaping what we may intuitively believe occurs only or primarily in our heads.

As I prepare to begin a new chapter in my Cognitive Science career, I expect (and hope) that my appreciation of context will color the ways in which I move forward. My devotion to the importance of context has taught me to question everything. It is important to question whether studies done under different circumstances (e.g., outside a lab) and with different subpopulations (e.g., not westernized college undergrads) might have resulted in different conclusions. It is important to question whether there may be ways of viewing the world that differ from my own (e.g., as cyclical as opposed to linear, or correlational as opposed to causal) that may shape the research questions posed, methods employed, and findings extracted. I hope that by doing this, my mind will remain open to new possibilities, continually working toward the most comprehensive understanding of the mind possible.

Why write this blog?

One day in the middle of class, I decided that I would start a cog blog. I’m not sure exactly why I made that decision, but as I let the idea marinate for a while, I came up with a bunch of reasons why this was a necessary project.

  1. I love to write, but writing in an online forum, where anyone could be sitting at their computer reading my thoughts, has always made me feel too exposed and vulnerable. Time to get over that, especially since I hope to go into academia, where putting myself “out there” will be a key to success.
  2. I want to keep learning, reading, and thinking about thinking, and I think the best way to do this is to collaborate as much as possible. I’ve loved having frequent opportunities during college for cog sci dialogues with so many people, and I don’t want to give those dialogues up.
  3. I want to be a better reader, writer, and thinker, and this link convinced me that a blog is probably a good way to achieve that goal. In it, Maria Konnikova writes:

    “What am I doing but honing my ability to think, research, analyze, and write—the core skills required to complete a dissertation? And I’m doing so, I would argue, in a far more effective fashion than I would ever be able to do were I to keep to a more traditional academia-only route.”

Sounds like a good deal!