Columnists



The hippocampus is one of those brain regions that pops up again and again in popular science literature, and for good reason. Most people associate the hippocampus with memory, mainly thanks to Henry Molaison, better known as H.M. More than sixty years ago, a hotshot neurosurgeon named William Scoville removed most of H.M.’s hippocampus in an attempt to cure his severe epilepsy. The treatment worked, but at a severe cost: H.M. lost the ability to form new memories.

This curious case kicked off modern memory research as we know it. Decades of follow-up research have connected activity in the hippocampus to a variety of functions, most famously the formation of episodic memories. Inspired by this human case, researchers peered into the brains of awake mice in an attempt to learn more.

One of the reasons we can investigate this brain region across species is just how similar the hippocampus of a mouse is to that of a human. It is an ancient structure, millions of years old, yet it is arguably the first of the most ‘advanced’ brain regions to develop. While there are obvious differences in size between the species, the underlying organizational principles are nearly identical. What makes the hippocampus so special that we and our rodent cousins have one, but frogs don’t?

During one of these mouse experiments, a scientist named John O’Keefe made a curious finding. As the animal ran around its environment, a certain kind of cell in the hippocampus would consistently fire only when the mouse passed through a particular location. This finding later earned him a share of the Nobel Prize in Physiology or Medicine and spurred another avenue of research into how these ‘place cells’ (as they have since been dubbed) form a sophisticated ‘cognitive map’ of space.

Meanwhile, the development of fMRI enabled researchers to study learning, memory, attention, curiosity, and many other cognitive functions of the human hippocampus. Far beyond memory, this enigmatic part of the brain is necessary for imagination, planning, and many other processes we consider essential to our human existence.

Given the similarities between mice and men, it’s reasonable to expect that the mouse and human hippocampus are doing similar things. So why are their scopes of research so radically different? How exactly do cells that respond to a rodent’s current location create memories? While the two fields have long existed in different spheres, new research aims to bridge the gap.

From the mouse side, the non-place features of place cells increasingly provide evidence that hippocampal pyramidal neurons play a broader, more integrative role than simply recording place. Recent findings presented at the Society for Neuroscience 2017 Annual Meeting, some still unpublished, demonstrate many of these newly discovered, more diverse functions.

In highly social bats, ‘place’ cells can record the location of a fellow bat just as well as the animal’s own. In rats, ‘place’ cells can ‘map out’ a representation of sound. In monkeys, ‘place’ cells can fire without any movement at all, simply as the animal looks around its environment. Most convincingly, a number of studies have shown that ‘place’ cells can also record a detailed representation of time.

Increasingly, it seems that these special hippocampal cells fire not only at locations but in response to a number of other things too. Some, if not most, of these cells respond to multiple things at once, like place and time, or sound and place. That feature, crucially, is indispensable in creating a memory. These cells aren’t just recording places; they’re combining different aspects of an experience together. Put another way, a ‘place cell’ isn’t simply mapping space, it’s making a memory.

While neither I nor neuroscience more generally has an answer to the question I posed at the beginning of this column, combining decades of research in mice and humans will help guide the way forward.

 

Citations and further reading:

  1. Scientific reviews are a great way to delve deeper than articles like mine without wading too deep into the terminology of primary articles. For an overview of the importance of H.M. to the field, I recommend: Squire, L. R. (2009). The legacy of patient H.M. for neuroscience. Neuron, 61(1), 6–9.
  2. To read the seminal place-cell study by O’Keefe: O’Keefe, J., & Dostrovsky, J. (1971). The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Research, 34(1), 171–175.
  3. For a broader review of place cells by Nobel laureates in the field: Moser, M.-B., Rowland, D. C., & Moser, E. I. (2015). Place cells, grid cells, and memory. Cold Spring Harbor Perspectives in Biology, 7(2), a021808.
  4. Bats encoding in 3D, from the same lab as the preliminary unpublished social findings (primary paper): Sarel, A., Finkelstein, A., Las, L., & Ulanovsky, N. (2017). Vectorial representation of spatial goals in the hippocampus of bats. Science, 355(6321), 176–180.
  5. Rats encoding non-spatial ‘sound map’ (primary paper): Aronov, D., Nevers, R., & Tank, D. W. (2017). Mapping of a non-spatial dimension by the hippocampal–entorhinal circuit. Nature, 543, 719.
  6. Monkeys encoding a non-movement based ‘visual map’ (primary paper): Killian, N. J., Jutras, M. J., & Buffalo, E. A. (2012). A map of visual space in the primate entorhinal cortex. Nature, 491(7426), 761–764.
  7. Review of time cells by a giant in the field: Eichenbaum, H. (2014). Time cells in the hippocampus: a new dimension for mapping memories. Nature Reviews Neuroscience, 15, 732.
  8. To read more about a fascinating brand-new big-picture theory about the hippocampus: Stachenfeld, K. L., Botvinick, M. M., & Gershman, S. J. (2017). The hippocampus as a predictive map. Nature Neuroscience, 20(11), 1643–1653.

The new Star Wars: The Last Jedi trailer has been out for months now, and fans—old and new alike—are still raving about it, once more submerging themselves in that paroxysm of fervent fan-boy anticipation that comes pre-packaged with every preview of an upcoming chapter and spreads like wildfire the moment it hits YouTube. “What this trailer did,” said Jeremy Jahns, a popular YouTube movie reviewer, “is what Star Wars trailers do, and that’s put Star Wars at the forefront—like yeah, this is happening.”

One person who’s probably less excited about the upcoming film is Star Wars creator George Lucas himself, who gave up creative rights to the Star Wars universe after selling the franchise to Disney in 2012 for a whopping $4.05 billion. In a 2015 interview with Charlie Rose, when asked how he felt about Episode VII: The Force Awakens (the first installment of the reboot trilogy), Lucas said: “We call it space opera but it’s actually a soap opera. And it’s all about family problems—it’s not about spaceships…They decided they were gonna go do their own thing…They wanted to make a retro movie—I don’t like that. I like…Every movie I make I work very hard to make them different. I make them completely different: with different planets, different spaceships—yenno, to make it new.”

I disagree with Lucas’s judgment of Disney’s “nostalgia” approach. For the reboot to have had the same awe-inspiring first impression on the new generation that A New Hope (’77) had on the old, it had to retain as much of the saga’s mythic dimensions as possible, and the nostalgia approach was clearly the most surefire way to accomplish that. Whatever backlash The Force Awakens (2015) might have received for its “uninteresting” and “boring” resemblance to the original fails to recognize what makes Star Wars so compelling a cultural force: its function as myth, which, by its very nature, must remain as little changed as possible if it is to remain relevant.

Here it is important to distinguish between myth and narrative, for the latter is merely the particular (and always varying) mediation of the former (which is always the same). Put another way, a narrative, or an individual story, is simply a representation of a kind of “master story” that pre-exists in the audience’s mind long before they sit down to watch The Force Awakens for the first time—assuming, of course, the audience has lived long enough to have acquired a fairly confident intuition about what constitutes this so-called “master story” that is myth.

“Myth” comes from the Greek word “mythos,” meaning “story.” It is from this definition that our understanding of myth must necessarily arise, for most theories of myth begin from the accepted idea of myth as a kind of “canon of story.” It is noteworthy here that the medium of the story is not specified, for it would be erroneous to confine myth to a single art form (i.e., myth as the literary canon). Consider, for example, how ancient cave paintings are fraught with narrative imagery, from the dancing scenes of Serra da Capivara, Piauí, Brazil (28,000 to 6,000 BC) to the enigmatic beings and animals of Kakadu, Northern Territory, Australia (26,000 BC); after all, the story “I saw a kangaroo” is still a story, though, to us, not a particularly interesting one (insofar as it is not all that sophisticated).

What is interesting is that such geographically disparate populations, with no physical means of contact with one another, should engage in the same activity (one not necessary for biological survival) with the same behavioral predictability as birds from separate continents, all of which seem to instinctively grasp the concept of “nest-building” as pivotal for their offspring’s protection. What is it, then, that prompts what appears to be a primordially entrenched instinct of human nature? What is the point of saying, “I saw a kangaroo”?

The answer can be arrived at by emphasizing each of the sentence’s two parts in turn and studying the resulting truth-values. If the emphasis is placed on “a kangaroo,” one extracts an empirical value tantamount to the scientist’s collected data. Here, the sentence derives significance from its illumination of some perceived aspect (in this case, the “kangaroo”) of the world, that is, of reality. If, on the other hand, one places the emphasis on “I saw,” a second meaning is discovered, this time signifying the presence of “I,” that is, the storyteller. This too can be read as empirical but, more notably, as humanistic, for the manifested will to record one’s own existence at a given time is a behavior unique to the human species.

What results from this innocuously curious act of paint-on-wall, then, is the radical evolutionary leap toward self-reflexivity, whereby an innate curiosity is cognitively mastered through creativity. Of course, humans had long practiced this process, but early on it was strictly in the material sense, and motivated by survival at that. With the emergence of art, however, the human’s cognitive faculties began to operate within a more fundamentally psychological dimension, one motivated not by survival but by the acquisition of knowledge, especially as this knowledge relates to the human being. In other words, cave painting illustrates a primordial desire to understand reality–that is, the universe–and humanity’s place in it.

The primary questions which myth asks, then, are: What is the nature of reality, and why am I a part of it?

The narrative patterns that emerge from humanity’s collective efforts to answer these questions are myth. These patterns can be found not only in paintings (depictions of animals, hunting scenes) but also, more complexly, in the literary tradition. Herein lies my earlier need to distinguish the “storytelling” canon from the “literary” one, since the literary, by its very nature, allows for a more immediate and elaborate representation of stories. Among these patterns we can count creation stories, Campbell’s “monomyth,” earth/water mothers, and so on. Most of us brought up with a classical education that included a relatively similar rubric of books are no longer surprised to find that the narrative elements of the Bible can be found in the Epic of Gilgamesh, in the Popol Vuh, in Homer, Shakespeare, Faulkner—you get the idea.

The last author mentioned beautifully described this intrinsic human need for myth in the banquet speech he delivered in 1950 upon accepting the 1949 Nobel Prize in Literature. Having discussed the paranoia bred by the Cold War, and the consequent nihilism of that milieu, he insisted that Man must remind Himself of “the old verities and truths of the heart, the old universal truths lacking which any story is ephemeral and doomed—love and honor and pity and pride and compassion and sacrifice…[otherwise] His griefs grieve on no universal bones.”

All the “universal truths” Faulkner mentioned are major narrative forces in George Lucas’s epic saga: Anakin’s pride leading up to his metamorphosis into Darth Vader (Revenge of the Sith, 2005), only for him to express compassion and pity in his final moments (Return of the Jedi, 1983); the honor and love between friends that keeps the pack together through all manner of adversities (as in, say, Leia’s rescue of Luke in The Empire Strikes Back, 1980); and, more recently, the sacrificial deaths of all of Rogue One’s (2016) major characters. Thus, The Last Jedi will be the latest installment of what can safely be called one of modernity’s greatest myths, for its treatment of these perennial themes has given it a universal appeal and, consequently, a formidable staying power worthy of mythic status.

In light of all this, the Reader (especially one who does not consider themselves a fan on any level) may begin to appreciate the magnitude of cultural significance The Last Jedi is bound to have come this Christmas. Its arrival in cinemas this December will call upon (as the best mythic tales often do) a mass gathering of people who will expect to be awed and moved and shocked and, on top of all these things, reminded of these universal truths, thereby fostering, if only for a moment, a sense of solidarity among the masses that the cynical media eye would have us believe is practically nonexistent in modern times.

Too sentimental? Perhaps. Let’s just hope the film isn’t (i.e., don’t kill Rey yet—by far my favorite Star Wars character ever!).

P.S. You can watch the trailer here, for those of you who (for whatever reason) haven’t seen it yet.


 

Some of our loyal readers may have noticed that this column has had an irregular publication schedule lately. That’s because I wanted to give everyone a fresh update from the Society for Neuroscience’s 2017 annual meeting, where more than 30,000 neuroscientists gather each year to discuss the most fascinating and cutting-edge research.

Unfortunately, that update will have to wait another week, because today I feel compelled to use my platform to talk about the tax bill currently making its way through Congress. This bill, if passed, would effectively make graduate school impossible for all but the independently wealthy, and would decimate the structure of science as we know it.

I typically keep this column apolitical, as my goal is to spread interesting neuroscience knowledge to everyone rather than wade into the political thicket. Had this bill been proposed by the other side of the aisle, I would take equal issue. This overarching legislation aims, in part, to simplify taxes to fit, as its proponents so often state, ‘the back of a postcard.’

One such ‘simplification’ is the repeal of Section 117(d)(5), a tiny piece of the tax code that makes a huge difference to graduate students. In most STEM graduate programs, students have their tuition waived and are awarded a modest stipend of approximately $20,000–$30,000 per year to focus on their research. Under the current tax code, graduate students are taxed only on their stipends, which makes sense, as this is the only money they actually take home.

In the tax bill just approved by the House, this exemption is removed. That means a catastrophic increase in the tax burden of every STEM graduate student. Take an average student in Columbia’s Neurobiology PhD program: their take-home income is just under $30,000 from their stipend, but Columbia’s tuition (which, again, a graduate student never sees or pays) is nearly $50,000. If the Senate passes the current version of this bill, such students would see their tax burden at least triple, an increase of over $10,000.
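
For those curious where that “over $10,000” figure comes from, here’s a back-of-envelope sketch. To be clear, this is a minimal illustration, not a real tax calculation: the brackets and the combined deduction below are rounded approximations of 2017 single-filer rules that I’ve assumed for the example, and an actual return would vary with credits and deductions.

```python
# Back-of-envelope estimate of a graduate student's federal tax burden
# before and after the repeal of Section 117(d)(5). Bracket thresholds
# and the deduction are rounded approximations of 2017 single-filer
# rules -- illustrative assumptions only, not tax advice.

BRACKETS = [           # (upper bound of bracket, marginal rate)
    (9_325, 0.10),
    (37_950, 0.15),
    (91_900, 0.25),
]
DEDUCTION = 10_400     # approx. standard deduction + personal exemption

def federal_tax(income: float) -> float:
    """Apply the simplified marginal brackets to income."""
    taxable = max(income - DEDUCTION, 0)
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable <= lower:
            break
        tax += (min(taxable, upper) - lower) * rate
        lower = upper
    return tax

stipend, tuition = 30_000, 50_000
current = federal_tax(stipend)             # taxed on the stipend alone
proposed = federal_tax(stipend + tuition)  # tuition waiver counted as income
print(f"Current law:  ${current:,.0f}")
print(f"Under repeal: ${proposed:,.0f} (an increase of ${proposed - current:,.0f})")
```

Run with these rough numbers, the sketch puts the increase at over $10,000 per year, taken out of a take-home income of less than $30,000; the exact multiple depends on the deductions assumed, but the gap stays enormous under any reasonable set of assumptions.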

Essentially, in trying to simplify the tax code, this bill would prevent all but the wealthiest graduate students from pursuing higher education. While some universities may be able to increase stipends to compensate, most cannot afford to. Graduate students are the backbone of labs, and their projects make up the bulk of research happening in the US; without them, there is no science as we know it.

Without this tiny line of tax code, programs will slash acceptances, US scientific productivity will plummet, and the hundreds of innovations that have made us a superpower will grind to a halt. Like all of STEM, neuroscience relies on the productive output of graduate students. We are on the cusp of incredible breakthroughs in understanding the brain — many of which could lead to cures for heartbreaking diseases — but none of that is possible if this tax bill passes in its current form.

This is bigger than politics, and this is bigger than just science. This is about ensuring that the United States continues to be the world’s leader in innovative scientific and technological breakthroughs. If you enjoy the tiny computer in your pocket, have been helped by modern medicine or know someone who has, or believe in the necessity of scientific progress, please take the time to speak out against this bill and ensure that if it progresses, it does so without this provision. You can find your representative’s information here; ask them to oppose the repeal of Section 117(d)(5) within the Tax Cuts and Jobs Act.

Next week, I promise we’ll be back to our regularly scheduled programming with some fun, new neuroscience findings.

In a society as fast-paced and demanding as ours, it’s no wonder that, given the opportunity to unwind, the average person would opt for a film pre-packaged with all those qualities the viewer knows will fulfill their expectations without demanding much “mental exertion” on their part: archetypal characters, traditional narrative structures, impressive special effects, maybe a few laughs. A good story, a good time. One might read a good novel instead and be subjected to much the same artistic treatment, but the movie has the added bonus of passive viewing—compared to the arduous demand of reading—within a radically condensed span of time (roughly two hours or so). Indeed, there is a reason Aristotle’s Poetics has become standardized reading for many an aspiring filmmaker: today, cinema has become the equivalent of the “condensed visual novel.”

This is a gross underuse of a medium that, as we shall see, can offer us so much more.

To begin with, any art is most compelling (that is, most likely to emotionally impact its receiver) when it prioritizes those potentialities unique to its particular form. In other words, art is most affecting when these potentialities come to the forefront of the artistic expression, as the principal driving mechanisms by which the artist aims to achieve their goals.

This is the presupposition that drives the cinematic theories of avant-garde filmmakers Jean Epstein (1897-1953) and Germaine Dulac (1882-1942), both of whom are invaluable resources in the search for an “essence of cinema.”

That both of these theorists are avant-garde is key because, as Dulac teaches us, the avant-garde filmmaker is characterized by an “in tune-ness” with this so-called “essence of cinema”–a cinematic approach that dawned after all previous major forms (realism, narrative, psychological realism) had been exhausted. Dulac stresses the importance of the avant-garde scene, for the continued evolution of the cinematic form depends upon that scene’s ongoing survival.

It may seem as if Dulac is interested in cinema’s evolution in and of itself—that is, in the hackneyed postmodern “art for art’s sake” sense—but one mustn’t be fooled by the formal intellectualization of her language. Beneath all the technicalities, the reader senses an authentic desire to affect the viewer through a kind of crystallized beauty, which, for Dulac, film can only accomplish through the formation of a “visual poem made up of life instincts, playing with matter and the imponderable. A symphonic poem, where emotion bursts forth not in facts, not in actions, but in visual sonorities” (655). Such impassioned—almost sentimental—statements prove Dulac is completely on board with Epstein’s search for a cinema that “arouses an aesthetic emotion, a sense of infallible wonderment and pleasure” (257).

For both theorists, said search is characterized by the filmmaker’s quest to pierce through that elusive, truth-veiling something, which both of them term “the imponderable.” But what is the imponderable? The filmmaker is aspiring to unveil the truth about what?

This question is not particular to the cinematic form, and its answer is virtually the same for all modes of artistic expression: truth about the nature of reality itself. This has been the role assigned to the artist since time immemorial, dating back to the tragedies of the Classical era. Even today, the cinema-goer is most contented when they can confidently say of a film that it “told it how it is” (with the bonus fantastical embellishments here and there, of course).

Following the premises of Dulac and Epstein, the question then becomes: “How is the filmmaker uniquely positioned to approach this task, and what are the artistic utilities at their disposal?” To the first question, both theorists would answer the same way: the filmmaker is uniquely positioned insofar as they deal—by the very nature of the medium—with visual movement. This answer explains Dulac’s emphasis on rhythm as the vital technique in fulfilling the artist’s expression. After all, visual movement exists within a “frozen” space-time continuum (a kind of filmmaker’s “canvas”), and it is only by deriving a contrived cadence from this canvas that the filmmaker achieves personal expression; in other words, the filmmaker concerns themselves with the manipulation of time in order to achieve their creative expression.

Although Epstein’s “Photogénie and the Imponderable” (1935) is far less specific than Dulac’s “The Avant-Garde Filmmaker” in answering the second question, his text nevertheless proves to be a rich resource for a better understanding of this “filmmaker’s canvas,” this “frozen space-time continuum,” especially as it pertains to the viewer’s emotional needs—needs which, by the way, the viewer may be unaware of possessing. We may arrive at these affective ramifications using “Photogénie” in a rather indirect manner.

Epstein points out man’s “physiological inability to master the notion of space-time and to escape this atemporal section of the world, which we call the present” (254). He describes this eternal “atemporal section,” this present, as “psychological time,” as it is borne of our “egocentric [that is, automatic, subconscious] habit” (255) of accepting this flow as an absolute in our lives. Despite Einstein’s illuminating discovery that space-time is a malleable fabric permeating the entirety of the universe, capable of being stretched into myriad ebbs and flows, we on Earth experience only one of these flows and have learned to accept it as inherent to reality, when it is in fact (as Einstein shows us) a very limited perspective on physical reality.

Here I want to take what will feel like a digression, but I assure you it’s not (please just bear with me for a second): I want to consider the teachings of the twentieth-century German philosopher Martin Heidegger.

According to Heidegger, people tend to stay out of touch with the sheer mystery of existence, the mystery he termed “das Sein,” meaning “Being.” One of the main culprits, he notes, is the rapidity of the modern world—always keeping us on the move, overwhelming us with work and information so that we live in a state of perpetual distraction from the mystery of Being, unable to step back and see the strange in the familiar. That act of stepping back, Heidegger admits, has its downside: fear, or “angst,” may take hold of us as we realize the primordial chaos from which we come, and in which we in fact constantly live. In this way, we come face to face with the meaninglessness of all things.

Epstein alludes to this “angst” in his own—and more colorful—way: “Not without some anxiety, man finds himself before that chaos which he has covered up, denied, forgotten, or thought was tamed. Cinematography apprises him of a monster” (255).

The thing is, once the initial shock has passed, what follows is a kind of out-of-body, existentialist sensation that is nonetheless therapeutic in its own way. Epstein uses the example of watching footage of oneself from long ago: though we acknowledge the ontological link, the link feels disconcertingly severed by the fact that that former self no longer lives in psychological time–that is, in the present. This gives us the impression of a phantom-like projection of ourselves that is simultaneously there and not there. But herein lies the secret of cinema’s unique “medicinal” capabilities.

Both Epstein and Dulac wrote about the rhythmic grace emanating from time-manipulated footage. Dulac mentions the “formation of crystals,” “the bursting of a bubble,” and the “evolutions of microbes” (656), while Epstein points out how a plant “bends its stalk and turns its leaves toward the light,” as elegant as “the horse and rider in slow motion” (254-255). It is clear that, for both of these theorists (and I am completely on board with this), the key to freeing the viewer from the mentally draining chains of “psychological time,” which keep us from experiencing the wonder of “das Sein,” is to show them the fragility of their cage. Cinema accomplishes this through its “trappings” of space-time: absorbing it like a bubble and freezing it into crystal balls through which the viewer looks into the past and realizes the obvious anew: that our time here is short, and every instant is filled with boundless grace and beauty. By playing God, the filmmaker may thus bestow upon the viewer their moment of affective transcendence.

*  *  *

As the single cinephile in my (admittedly small) social group, I have never been inclined to suggest to my friends such “lofty” films as Tarkovsky’s, Bergman’s, or Antonioni’s—all of whom play with time (or call attention to the strangeness of psychological time, especially through the use of long takes; think of Steve McQueen’s heart-wrenching eighty-six-second shot of Solomon’s quasi-lynching in 12 Years a Slave, though I will note that the “balance” between narrative and avant-garde is a topic this essay has not touched on—all in good time) and have, for me, produced that aforementioned effect of affective transcendence. After all, as Dulac mentions on more than one occasion, the avant-garde “does not appeal to the mere pleasure of the crowd” (653).

But perhaps the fault lies with those of us who understand cinema’s greatest power. Perhaps we ought to take a cue from Dulac, who wrote and lectured widely on film aesthetics, and be less apologetic about cinema’s “purer” dimensions. After all, academic institutions deem it worthwhile for students to learn the language of literature, the visual arts, and music, not only to gain an appreciation for the Arts but to derive momentous personal value from them as well.

Why shouldn’t cinema be any different? Is it because, as the Seventh Art, it is still relatively new?

Consider last year’s “top-grossing films” list. These films are not bad, nor are their narratives utterly irrelevant (a claim I, and both theorists we have discussed, would dispute), but there’s just so much more to be gained by learning the cinematic language.

And so, I’ve changed my mind—watch Bergman, like, right now!

*  *  *

Below, a recommended list of more “purely cinematic” works, from which the budding cinephile may “branch” out of their own accord (in order of “difficulty,” 1 being the most accessible):

  1. Breathless (1960), Jean-Luc Godard.
  2. The Revenant (2015), Alejandro González Iñárritu.
  3. Elephant (2003), Gus Van Sant.
  4. Come and See (1985), Elem Klimov.
  5. The Tree of Life (2011), Terrence Malick.
  6. Melancholia (2011), Lars von Trier.
  7. Red Desert (1964), Michelangelo Antonioni.
  8. Persona (1966), Ingmar Bergman.
  9. Stalker (1979), Andrei Tarkovsky.
  10. 2001: A Space Odyssey (1968), Stanley Kubrick.

Quotes from: Critical Visions in Film Theory: Classic and Contemporary Readings (First Ed., 2011).

It’s hard to see it right now, but this time next week, we’ll all be on break. It’ll be the morning of Thanksgiving, and you’ll wake to the sounds of the Macy’s Thanksgiving Day Parade or the smell of turkey in the oven. The leaves outside will be colorful and the weather will be beautiful (don’t ask me how, but I’m telling you the weather will shape up come Thanksgiving), and whether you’re a football fan or not, you’ll feel compelled to participate in the age-old American tradition of watching the game.

But when the football game is over and you come inside for those few hours in between the morning festivities and dinner time, all you’ll want to do is curl up on your couch and watch some feel-good family television. And lucky for you, there’s plenty out there.

Here’s a definitive ranking of the best Thanksgiving TV episodes of all time.

  1. “Blair Waldorf Must Pie,” Gossip Girl (Season 1, Episode 9)
    Say what you will about Gossip Girl’s later seasons, but it’s hard to deny that the show’s first season came out swinging. Accurately portraying the zeitgeist of 2007 teenage life, it made the drama and glamour of the Upper East Side more deliciously intriguing than ever. And it all came to an emotional tipping point with the show’s first Thanksgiving episode, which features rich-girl Serena’s family uncomfortably dining at her new Brooklyn beau’s family loft. Back on the Upper East Side, Serena’s entitled friends are reeling from the aftermath of family disentanglement and dangerous secrets. It’s oh-so-wonderfully juicy.
  2. “Happy Thanksgiving,” Parenthood (Season 2, Episode 10)
    There’s nothing like watching a feel-good family TV show on a chilly Thanksgiving morning, and Parenthood’s distinct ability to make you laugh, cry, and totally relate makes it one of the best family-driven dramas of recent television. This episode features patriarch Adam struggling with his career, his outspoken sister Sarah insisting on bringing her boyfriend (and son’s teacher) to Thanksgiving dinner, and their younger brother Crosby desperately trying to impress his fiancée’s mother. The episode has a heartwarming resolution–but it’s the classic Parenthood moments of sincerity and family devotion that make this a Thanksgiving must.
  3. “A Deep Fried Korean Thanksgiving,” Gilmore Girls (Season 3, Episode 9)
    Despite its seven-year run, Gilmore Girls only aired one Thanksgiving episode–and it’s definitely worth the watch. The episode features mother-daughter duo Lorelai and Rory trying to navigate four different Thanksgiving feasts, culminating in their annual (and dreaded) trip to the grandparents’ house. The episode ends with a revelation that fuels the rest of the season, but (save for the last five minutes) it’s an episode that you can watch on its own if you’re looking to vicariously join the rituals of a small town’s favorite holiday.
  4. “Thespis,” Sports Night (Season 1, Episode 8)
    Aaron Sorkin’s first show ran for only two seasons, but it marked the beginning of fame not only for Sorkin but for actors like Joshua Malina, Josh Charles, and Peter Krause (who would later go on to star in The West Wing, The Good Wife, and Parenthood, respectively). This particular episode highlights their unique talents. Malina’s character insists that a Greek ghost is haunting the sports-news studio and the other characters shoot him down–all while trying to prepare for a Thanksgiving dinner later that night that does indeed seem haunted by some ghostly presence. The episode is cute and fresh, and provides a nice comic relief from the more serious shows above.
  5. “The One With The Rumor,” Friends (Season 8, Episode 9)
    Speaking of comedic Thanksgiving episodes, no show did it better than Friends. Known for its plethora of Thanksgiving specials, Friends has become a staple of my Thanksgiving weekend (as it should be of yours). If you’re wondering which one to watch first, start with “The One With The Rumor,” which features Brad Pitt as an ex-enemy of Rachel’s arriving just in time to shake up her relationship with Ross. Meanwhile, Joey promises to eat an entire turkey, and everyone just has a ball of a time.
  6. “My First Thanksgiving with Josh,” Crazy Ex-Girlfriend (Season 1, Episode 6)
    Back when Crazy Ex-Girlfriend still served up delightfully concocted spoofy musical numbers every episode and we were still rooting for protagonist Rebecca to win over her ex Josh, creator Rachel Bloom gave us a gem of a Thanksgiving episode. In it, Rebecca meets Josh’s parents (much to his dismay) and goes on to imagine herself becoming a part of the family. Their friend Greg, meanwhile, sings a clichéd song about his future, and Rebecca really has to pee. Don’t ask; just watch it.
  7. “Thanksgiving Orphans,” Cheers (Season 5, Episode 9)
    Still one of the greatest sitcoms of all time, Cheers aired its fair share of Thanksgiving episodes, but only one featured an elaborate food fight and Diane in a pilgrim costume. In this episode, the gang gathers at the ever-grumpy Carla’s for Thanksgiving dinner, and of course everything goes wrong. Suffice it to say, womanizer Sam ends up with a much-deserved pie in his face. Oh, and one of the show’s best running jokes reaches its height when we get the only glimpse of couch potato Norm’s infamous wife we’ll see throughout all eleven seasons.
  8. “Slapsgiving,” How I Met Your Mother (Season 3, Episode 9)
    “Slapsgiving” is arguably the best How I Met Your Mother episode of all time, probably because it became the impetus for so many of the jokes that would resurface throughout the series. Marshall and Barney’s “slap bet” (a bet Marshall won that gives him the power to slap Barney as hard as he wants) comes to a head in this episode, and it isn’t addressed again until two seasons later, in an episode aptly titled “Slapsgiving 2: Revenge of the Slap.” Robin and Ted introduce the “Major” joke that remains one of the most quoted from the show, and the episode’s heartfelt ending helps catapult the season forward. It’s a masterpiece.
  9. “Shibboleth,” The West Wing (Season 2, Episode 8)
    So you already know that every Thanksgiving, the President pardons a turkey. But did you know that the Press Secretary has to decide between two turkeys, essentially condemning one to die and setting the other free? Well, at least that’s what happens in this wonderfully delightful episode of The West Wing, in which Press Secretary CJ Cregg has to decide the fate of two turkeys as they run amok in the White House. Meanwhile, the President himself hazes the newest staffer into finding him an appropriate carving knife, and the senior staff gathers to watch football. Add in some crises over immigration and education policy, some nepotism, and a hell of a lot of political maneuvering, and you’ve got one of the greatest episodes of Aaron Sorkin’s masterful show.
  10. “The One With All The Thanksgivings,” Friends (Season 5, Episode 8)
    Like I said before, Friends did Thanksgiving right, and it’s earned itself two episodes on this list. Although it’s hard for me to relegate any episode of The West Wing to the second slot, the clear winner of Thanksgiving episodes is this flashback-driven episode of Friends. Framed by cuts to Thanksgivings of the past, when the gang was awkward and stupid, this episode has everything. It will make you laugh, cry, long for sweet potato pie, and dream of a 2020 Friends reunion. The flashback focus makes it an easy episode to watch even if you’ve never seen the show (although who’s never seen Friends?), and the image of Joey’s head stuck in a turkey will definitely make it worth your while. In fact, I love this episode so much, I named my column after it.

Happy Binge-giving!