Uniquely Human

A Column by Heather Macomber

About The Column


This column will use recent discoveries in psychology, neuroscience, and sociology to tackle modern issues, unravel common misconceptions, and search for scientific solutions to uniquely human problems.

About The Author


Heather Macomber

Columnist

Heather is a senior in Columbia College, majoring in Neuroscience and Behavior. In addition to writing for the Lion, she is the Vice President Emeritus and Research Coordinator of the Columbia Neuroscience Society, researches perception using mouse models in the Bruno Lab, edits for the Columbia Science Review, and serves as a peer advisor for the Neuroscience major. She is passionate about making neuroscience accessible and mentoring the next generation of neuroscientists, and she hopes to spend her life unraveling the mysteries of the brain.

Posts


Although I had intended to continue the series on the neuroscience of education, sitting down to write a column the day before the United States votes for a new president, dozens of new senators, and hundreds of ballot measures, I found that this election has truly consumed us all. So instead, today’s column is dedicated to the young realm of neuropolitics – and to what ramifications neuroscience may have for tomorrow’s vote.

Although contentious elections are nothing new, this cycle certainly feels more polarizing than those past. Many people on both sides are in disbelief as to how supporters of the opposing candidate could possibly overlook the horrible things that candidate has said or done. Both sides are utterly confident not only that they are correct, but that all the facts support their position. Here is where fMRI has an answer.

In one of the first studies of its kind, conducted right before the 2004 election, 30 self-identified ‘strong’ Democrats and 30 ‘strong’ Republicans reviewed self-contradictory statements by John Kerry and George W. Bush while having their brains imaged. In an experience familiar to anyone who has tried this tactic on a member of the opposing party, the participants were harshly critical of the opposing candidate’s hypocrisy while letting their own candidate off easy. While that result is predictable, the fMRI results were anything but.

The participants achieved this feat of mental gymnastics by quieting down the parts of the brain necessary for impartial reasoning, like the dorsolateral prefrontal cortex, and instead lighting up emotional circuitry such as the amygdala, the anterior cingulate cortex, and the insula, all of which will be important later. One area in particular, the basal ganglia, lit up; among its other tasks, it is responsible for rewarding select behaviors with dopamine. Effectively, partisan brains were triggering dopamine rushes for ignoring the problems in their own candidate’s statements and criticizing their opponent’s. Once these patterns are entrenched, it seems very difficult to combat confirmation bias with rational argument when the ‘rational argument’ part of the brain is offline during these discussions.

The differences that divide us seem to run deeper than confirmation bias. A growing body of research points to fundamental wiring differences between the brains of liberals and conservatives. One study was able to use fMRI activity in a handful of brain regions of interest to determine political affiliation with 83% accuracy, more than ten percentage points better than the next-best predictor, parental ideology. In general, a conservative brain reacts more strongly to disgust and responds with more emotionality to uncertain concepts or events, thanks to a larger and more active insula and right amygdala.
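
To make the classification claim concrete, here is a minimal sketch of the kind of analysis such a study might run: a cross-validated linear classifier over per-participant region-of-interest activations. The data, feature choices, and model below are hypothetical placeholders on my part, not the study’s actual pipeline.

```python
# A minimal sketch of ROI-based classification of political affiliation.
# Everything here is a hypothetical placeholder: the cited study's real
# pipeline, features, and 83% figure came from actual fMRI data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per participant, one column per region of interest
# (e.g., mean activation in the insula, right amygdala, and ACC).
n_participants = 60
X = rng.normal(size=(n_participants, 3))      # ROI activations (stand-in data)
y = rng.integers(0, 2, size=n_participants)   # 0 = liberal, 1 = conservative

# Cross-validated accuracy of a simple linear classifier.
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.0%}")
```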

Liberals, on the other hand, are less fearful of new stimuli, less reactive to negative events, and more likely to adapt to changes in established patterns. Some of these effects can be attributed to a larger and more reactive anterior cingulate cortex, which has long been known to monitor and mediate conflicting information. From the psychology side of things, personality data show that conservatives value loyalty and stability and are both risk- and change-averse.

Meanwhile, liberals are more likely to change their opinions and to base decisions on new information, specifically the kind of fact-heavy information that activates the dorsolateral prefrontal cortex. Without placing a value judgment on either ideology, biological differences in how people process and respond to information seem to align with ideological differences.

Of course, it’s important to keep in mind that the brain is a highly plastic structure, so there’s a classic chicken-and-egg problem in play here. Twin studies, long the gold standard for measuring genetic influence, attribute somewhere between 40 and 60% of political preference to genetics, as manifested in differences in brain structure. It’s also possible, even likely, that slight anatomical differences snowball into bigger ones as those neural pathways are strengthened by continued exposure to politically charged information.
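
For readers curious how twin studies arrive at numbers like these, the classic approach is Falconer’s formula, which compares how strongly identical (MZ) versus fraternal (DZ) twins correlate on a trait. The sketch below uses hypothetical stand-in correlations chosen only to land in the cited 40-60% range, not figures from the political-preference literature.

```python
# Falconer's formula: heritability h^2 = 2 * (r_MZ - r_DZ).
# The twin correlations below are hypothetical stand-ins, chosen only
# to land in the 40-60% range cited above, not values from a study.
def falconer_h2(r_mz, r_dz):
    """Estimate heritability from identical- and fraternal-twin correlations."""
    return 2.0 * (r_mz - r_dz)

print(f"h^2 = {falconer_h2(r_mz=0.65, r_dz=0.40):.0%}")  # -> 50%
```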

As with much of neuroscience, it’s sometimes unnerving to think about how frequently our decisions are based on the activation of subcortical structures rather than conscious thought. While we may find it difficult to understand how someone could possibly vote for the other candidate, perhaps political neuroscience can contribute some understanding of the underlying motivations that determine political choices. So as we decide on a new president this Tuesday, give a thought to those scientists trying to figure out what’s going on in your brain while you’re making that oh-so-important choice.

Uniquely Human runs alternate Mondays. To submit a comment or a piece of your own, email submissions@columbialion.com.


As I discussed in the last column in this series, Columbia’s heavy reliance on the lecture is a disservice to its students – the ‘learning’ happening in a traditional lecture isn’t translating into long-term memory. Evidence going back over a hundred years tells us that the typical memorize-and-regurgitate approach most students employ to get through a lecture course is an astonishingly bad way to learn – when tested six months after completing a typical lecture course, students have reliably forgotten roughly 95% of the information they learned [1].
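
To see what that figure implies, here is a toy Ebbinghaus-style exponential forgetting curve. The functional form and the decay constant are illustrative assumptions on my part, tuned only so that about 5% of material survives at six months; they are not taken from the cited study.

```python
# A toy Ebbinghaus-style forgetting curve: R(t) = exp(-t / s).
# The stability constant s is a hypothetical value chosen so that
# roughly 5% of material remains at six months (~95% forgotten).
import math

def retention(days, stability=60.0):
    """Fraction of material still retained after `days` days."""
    return math.exp(-days / stability)

for days in (1, 7, 30, 180):
    print(f"Day {days:>3}: {retention(days):.0%} retained")
```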

While completely replacing lectures with Core-sized classes is the obvious suggestion, it’s likely too expensive to execute, even for a well-endowed school like Columbia. Instead, I’m going to focus on easy, relatively cheap, and scientifically effective ways to improve the lecture-based classroom using what we know about how humans form memories.

While there are a few different kinds of memory, the type most relevant to higher education is declarative memory – that which can be consciously accessed. The long-lasting memory we’re after involves four steps: encoding new information, storage, retrieval, and forgetting. Over the next four columns, we’ll explore each of these areas in detail, starting with how we initially process new information.

The standard Columbia lecture requires you to pay attention to a lecturer speaking for 75 minutes straight, often followed by a short break and yet another 75-minute information deluge if you, like me, have the misfortune of back-to-back lectures. Empirical research into attention during lecture courses suggests that students pay attention for less and less time in ever-shortening cycles: the longer a lecture goes on, the less students pay attention, and the bigger each lapse in attention gets [2].

Here’s a common story that plays out in lectures across Columbia. You walk into a lecture ready to learn, pay attention for fifteen minutes…and then spend a minute checking Facebook. You tune back in, maybe for only ten minutes this time, only to be distracted for a three-minute stretch by your group chat. By the end of the lecture, you’re spending only two or three of every ten minutes actually listening, and the rest distracted and hoping the lecture ends.
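
As a back-of-the-envelope illustration of that pattern, you can simulate focused stretches that shrink and lapses that grow over a 75-minute lecture. This is my own toy model echoing the story above, not the design or data of the clicker study cited earlier.

```python
# A toy model of attention during a 75-minute lecture: focused stretches
# shrink and lapses grow as the lecture wears on. All decay rates and
# bounds are hypothetical, chosen only to echo the story above.
def simulate_lecture(total=75.0, focus=15.0, lapse=1.0):
    elapsed, attended = 0.0, 0.0
    while elapsed < total:
        attended += min(focus, total - elapsed)  # focused stretch
        elapsed += focus + lapse                 # then a lapse
        focus = max(2.0, focus * 0.7)            # next stretch is shorter
        lapse = min(7.0, lapse * 1.8)            # next lapse is longer
    return attended / total

print(f"Fraction of lecture actually attended: {simulate_lecture():.0%}")
```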

The neurological reason for these lapses lies in the ‘top-down’ way your conscious brain focuses on a single thing for an extended period of time. Your prefrontal cortex, which physically sits on top of the rest of your brain, tells the lower, more primitive parts of your brain to quiet down and let you focus on a specific task. That’s what lets you listen to your professor while tuning out all the irrelevant stimuli: your phone buzzing in your pocket, your stomach rumbling, or that siren wailing past on Broadway.

This kind of conscious selection is necessary for even hearing new information in the first place – if you’re not paying attention, you won’t be able to recall the information later. But forcing your brain to do this for an extended period comes at a steep neurological cost. Overuse of these suppression mechanisms leads to mental fatigue, effectively preventing your brain from focusing any further. Any additional attempt to focus only makes it worse, and you’re prone to tuning out completely and giving up on paying attention at all [3]. The 75-minute lecture is excellent at causing just this sort of dangerous mental fatigue [4], and far from being the best way of introducing information, it’s possibly one of the worst.

Instead of using in-class time to relay new information, students would benefit most from having control over their initial information encoding. Students could choose the type of input they prefer, whether that be pre-recorded lectures, readings, compellingly explained visuals, interactive formats, or a combination of methods. Imagine being able to pause when your attention slips, go back over difficult concepts a few times, and skim quickly through those you already understand. Doing this sort of learning as ‘homework’ has a number of other benefits, too.

Doing the initial learning before class is called flipping the classroom, and it’s one of the most scientifically supported ideas for improving lecture courses [5]. To solve our lecture-attention problem, the best idea may be to trust intelligent and motivated Columbia students to learn at their own pace and think about the material first, before even walking into a classroom.

By flipping the classroom, we’ll be able to pay better attention to new information, and therefore be better prepared for the next stage of memory formation. Just as importantly, it frees up valuable in-class time for more interactive teaching techniques, which are necessary if we want to improve the storage and recall phases of memory.

Stay tuned for the next column, where we’ll talk about how to most effectively use time spent physically in the classroom to help Columbia students actually learn from their lecture classes.

Uniquely Human runs alternate Mondays. To submit a comment or a piece of your own, email submissions@columbialion.com.

References:

  1. Deslauriers, L., & Wieman, C. (2011). Learning and retention of quantum concepts with different teaching methods. Physical Review Special Topics – Physics Education Research, 7.
  2. Bunce, D., Flens, E., & Neiles, K. (2010). How long can students pay attention in class? A study of student attention decline using clickers. Journal of Chemical Education, 87(12), 1438-1443.
  3. Ishii, A., Tanaka, M., & Watanabe, Y. (2014). Neural mechanisms of mental fatigue. Reviews in the Neurosciences.
  4. Aron, A. (2007). The neural basis of inhibition in cognitive control. The Neuroscientist, 13(3), 214-228.
  5. Roehl, A., Reddy, S., & Shannon, G. (2013). The flipped classroom: An opportunity to engage millennial students through active learning strategies. Journal of Family & Consumer Sciences, 105(2), 44-49. http://dx.doi.org/10.14307/jfcs105.2.12


While Columbia courses are advertised as mostly intimate and discussion-based, walking into your second (or even third, or fourth) lecture of the day is disturbingly common. Some courses, such as Introductory Biology, consistently enroll over 200 students per section. Personally, 54% of my courses (by credit value) over my first two years have been large lectures.

In the engineering school, the percentage of time spent in Havemeyer 309 or Pupin 301 only increases: a close friend with a typical Biomedical Engineering courseload spent a whopping 81% of her initial coursework stuck in a lecture hall. While humanities students may admittedly take fewer lecture courses, a significant number of Columbia STEM students spend the majority of their first few years here in lecture halls.

The central role that lectures play in today’s system of higher education cannot be overstated. Ever since parchment was precious and reading was a skill reserved for an exclusive elite, any hope of educating the populace relied on the lecture for information transfer. The core format of the lecture would still be recognizable to a medieval instructor, even as the dramatically changed world outside would be entirely unrecognizable.

The reality of the 21st century makes information overwhelmingly available not only in written form, but also in new, innovative, and interactive formats. As creative ways of learning proliferate at an exponential pace, it is well past time for this Ivy League, world-renowned institution of higher education to seriously confront the ineffectiveness of its most overused workhorse.

Columbia owes it to its students, and to itself as a leader, to take into account the growing consensus in neuroeducation research that there is a better way to teach than through lectures. When considering how best to teach students, we should be thinking about how people actually learn, especially when implementing neuroscience-based changes would hardly cost more and would simultaneously increase both professor and student satisfaction with our Columbia-brand education.

Over forty years of scientific research shows that students can hardly pay attention to a lecture past its first twenty minutes, while Columbia teaches in 75-minute blocks; that interactive learning is over twice as effective as passive listening; and that our Nobel laureates would be better put to use actually interacting with the students they teach than being kept at arm’s length. Lectures are simply incompatible with the way we’re wired to understand our world.

There is a better use for Columbia’s highly-esteemed professors than wasting time repeating the same information semester after semester to half-empty classrooms of bored and distracted students. There are better uses for its bright and energetic graduate students than re-explaining the material to confused undergraduates.

It’s ironic that some of the best research on learning, the very research that shows how ineffective lectures are, is coming from the labs of Columbia professors who then have to turn around and continue teaching in this outdated style. Our diplomas cannot be valuable on the merits of Columbia’s history alone; there must be true learning behind our degrees.

Isn’t that why we came here, to learn from the best and brightest, to learn for the rest of our lives and not just for the next exam? In the next few columns, we will be exploring how recent research on attention, learning, memory encoding, and recall can redesign the Columbia classroom. Columbia has always been at the forefront of societal change; it only makes sense that we should be leading the revolution in higher education as well.

Uniquely Human runs alternate Mondays. To submit a comment or a piece of your own, email submissions@columbialion.com.

I am far from the first person to wonder what the answer might be. The idea that we are special certainly isn’t new; the Core Curriculum gives us plenty of arguments to back it up. From Aristotle to Kant to, yes, even Darwin, our greatest thinkers have long believed that human cognition is unparalleled in the universe. So here we stand, at the top of the food chain, looking down at the rest of the animal kingdom and wondering: are we actually unique, or just egocentric?

Neuroscience might finally give us the answer we crave. For a field younger than some of our parents, it has managed to begin the daunting process of untangling the web of neurons in our brains, giving us Buzzfeed-worthy headlines along the way. Neuroscience has a way of getting up in every other field’s business, with a reach that far exceeds standard academic discourse – perhaps that’s why I’m so hopelessly fascinated by it.

Like an angsty teenager, this young field has a tendency to argue and frequently change its mind. Unlike with a teenager, though, when new research dethrones one theory and crowns another, the public often loses faith in our credibility. After all, we once believed the heart was the seat of all intelligence and the brain nothing more than a simple regulator. How can we be expected to really know anything at all?

As scientists, we learn to accept this inherent instability, the sobering truth that we’re wrong far more often than we’re right. But it’s also on us to explain why our work, even in its failures, is important. Perhaps more importantly, we do not work in a vacuum, and this field is poised to reveal how we think and how we live. For all of humanity’s success in conquering the world, our species now stares down threats primarily of our own making. What better way to approach these issues than by understanding who we are and what makes us tick? After all, most of our salaries come from you, the taxpayer – ultimately it is up to the public to see the value in what we study.

This column is an attempt to use powerful discoveries about our brains to propose science-backed solutions to wider social issues. Neuroscience is an ever-evolving, consistently contradictory, frequently flawed, and ultimately, beautifully human pursuit of the kind of knowledge we like best: knowledge about ourselves. While it’s not perfect, it’s hard to deny that studying our brains might provide some valuable insight into our uniquely human problems.

Uniquely Human runs alternate Mondays. Questions, comments, concerns, and thoughtful dialogue are always welcome.
