Becoming a better teacher: Fish is Fish

This summer, I’ll be the Instructor of Record (real teacher, not Teaching Assistant) for the first time. I’m teaching Research Methods, which is a “lower level” (mainly first- and second-year undergrads) course that I’ve TAed for twice, and I really enjoy its content. Because I’m participating in UCSD’s Summer Graduate Teaching Scholar Program, I have to complete a course called Teaching + Learning at the College Level. We’re two weeks in, and I’ve picked up some interesting nuggets from the readings and class discussions, but one analogy in particular is still on my mind.

We talked about the children’s story Fish is Fish by Leo Lionni. I’m kind of glad I never encountered this story when I was a kid, because its novelty had a great impact on 25-year-old me. The story is about a fish and a tadpole who wonder what life on land is like. Eventually the tadpole becomes a frog who can leave the water to learn about the land. He reports back to the fish, listing off features of things on land. Cows, he says, have black spots, four legs, and udders. The frog describes birds and people too, and here’s what the fish imagines: cows, birds, and people that are all essentially fish, just with spots, udders, wings, or legs tacked on.

The fish and the frog are talking about the same things, and they assume that they have common concepts of cows and birds and people, but their actual mental representations — what they see in their mind’s eye — of these things are quite different. If the fish had been given a traditional paper-and-pencil test that asked him to define a cow, he’d be correct in writing that it has four legs, black spots, and udders. He’d ace the test, fooling not only the teacher but also himself into thinking he actually knows what a cow is.

The takeaway, of course, is to try to make sure your students aren’t fish. Find ways to lead them beyond their fishy cow concepts, which can be especially hard when they’ve never been on land and they come to class knowing only what fish are like. Students almost always need foundational knowledge in order to understand a new concept, and there’s a good chance that at least some of the students in any class are missing that foundation. Instructors need to be mindful that there will be times when they have to step back to assess and teach prerequisite knowledge before launching into an hour-long lecture about cows (or cognition). And then, once they think the students actually know what cows are, it’s important to provide assessments that truly test understanding, not just memorization.

There are plenty of things I might not pull off perfectly when I teach for the first time this summer, but I do feel confident that I’ll at least be on a quest to help the fish in the class become frogs so they can see what cows are really like.

Teachers of all levels and subjects: I invite you to share how you make sure your students are truly understanding and not simply parroting. How do you make sure their concepts of the cow are really cow-like, and not just fish with spots and udders?

Facebook and the Chinese Room

A recent assignment had me revisit Searle’s thought experiment known as “the Chinese Room” and the debate over whether machines can understand as humans do. Searle puts forward this scenario: he doesn’t know any Chinese, but he is sitting in a room with reference books that allow him to produce coherent written responses to any message he’s given in Chinese. The person outside the room receiving his responses will believe that the person inside understands Chinese, when in reality no understanding is going on. Searle uses this as an analogy for computer programs: even if a program can produce logical, correct, or sophisticated outputs, it doesn’t truly understand what it’s computing. Computers simulate intelligence, but simulation is not sufficient for true consciousness or for being considered a “mind.”

Searle first wrote about the Chinese Room in 1980, but when I recently read about deep learning, one of the most current advances in artificial intelligence, I couldn’t help thinking that the Chinese Room is still quite pertinent. Deep learning is accomplished by a network of layered connections among concepts. The smallest concepts form one layer, and above them are slightly larger ones, and so on, so that to retrieve information the network doesn’t have to search a massive pool of data, but instead has to find the right associations between data (a much less onerous computational task). The name “deep learning” alone is telling to me, because typically only humans, or agents with human cognitive capacities, can truly learn. If it helps make my Facebook newsfeed more interesting (i.e., excludes status updates from those random people from elementary school I’ve just never gotten around to unfriending), that’s great. But have technological innovations like this one brought us closer to a feeling that true artificial understanding is achievable (or already achieved)?
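To make the layered idea concrete, here is a toy sketch, purely my own illustration with made-up features and weights (no real deep learning library, and no learning at all): small features at the bottom layer combine into mid-level concepts, which in turn combine into a larger concept, fittingly, “cow.”

```python
def layer(inputs, weights):
    """Each output unit is a weighted combination of the layer below,
    passed through a simple threshold (a stand-in for an activation)."""
    outputs = []
    for unit_weights in weights:
        total = sum(w * x for w, x in zip(unit_weights, inputs))
        outputs.append(1.0 if total > 0.5 else 0.0)
    return outputs

# Layer 1: small, raw features (hypothetical values).
features = [1.0, 0.0, 1.0, 1.0]

# Layer 2: slightly larger concepts built from the small features.
mid = layer(features, [
    [0.5, 0.0, 0.5, 0.0],   # e.g., "has spots"
    [0.0, 0.0, 0.5, 0.5],   # e.g., "has four legs"
])

# Layer 3: a still-larger concept built from the mid-level ones.
cow = layer(mid, [[0.5, 0.5]])  # "cow" fires only if both parts fire

print(mid, cow)  # → [1.0, 1.0] [1.0]
```

Whether a network like this, however many layers deep, *understands* what a cow is, or is just a very fast Chinese Room, is exactly the question Searle would press.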