Eighty years ago, in the fall of 1941, the musical Best Foot Forward opened on Broadway. Written by Hugh Martin and Ralph Blane, who would go on to write the songs for the classic film Meet Me in St. Louis a few years later, Best Foot Forward is not a classic—it’s a piece of fluff about a prep-school boy who invites a Hollywood actress to be his prom date, to the annoyance of his actual girlfriend. But it ran on Broadway for almost a year and then was turned into a movie, with Lucille Ball as the starlet, that’s still worth watching.

A standout number is “The Three Bs,” in which three high-school girls tell Harry James’s big band what not to play: “I never wanna hear Johann Sebastian Bach/ And I don’t wanna listen to Ludwig Beethoven/ And I don’t give a hoot for old Johannes Brahms.” Instead, the three Bs they demand are “the barrelhouse, the boogie-woogie, and the blues,” popular styles of the day that are each sampled in the song.

Aside from being fun, the song also captures an interesting transitional moment in American culture. By 1941, pop had already taken over from classical as the musical lingua franca, the sound everybody wanted to hear. No actual teenager at the time would have needed to tell the band to play popular music at the prom rather than, say, Brahms’s Hungarian Dances. But for Martin and Blane, and for the Broadway audience they were aiming to entertain, the great German composers still represented a kind of cultural superego. They were what you were supposed to like, even if what you actually liked was big-band music you could dance to. So when the teenagers at Winsocki Military Academy thumb their noses at the three Bs, the gesture is at least notionally naughty.

From the 1920s to the 1950s, from jazz and blues to rock and roll, tweaking the canon was part of the appeal of pop music—and a favorite device of lyricists. Ella Fitzgerald had a signature hit with Sam Coslow’s “(If You Can’t Sing It) You’ll Have to Swing It (Mr. Paganini).” Betty Comden and Adolph Green wrote the lyrics to “It’s a Simple Little System,” from the musical Bells Are Ringing, in which a bookie uses composers’ names as code to refer to racetracks: “Beethoven is Belmont Park/ Tchaikovsky is Churchill Downs.” Chuck Berry hit the same targets in “Roll Over Beethoven”: “My heart’s beating rhythm/ And my soul keeps singing the blues/ Roll over Beethoven/ Tell Tchaikovsky the news.”

In recent decades, however, this type of indirect homage to the authority of classical music has completely disappeared from popular music. The last example may be “Rock Me Amadeus,” the Austrian singer Falco’s German-language hit from 1985 that was inspired less by Mozart himself than by the 1984 movie Amadeus, in which the composer is portrayed as, in the song’s words, “ein Punker” and “ein Rockidol.” Today’s pop lyricists don’t poke fun at Beethoven and Tchaikovsky, because young listeners no longer recognize those names as possessing any cultural authority or prestige, if they recognize them at all. It would make as much sense to write a pop song called “Roll Over Palestrina” or “Rock Me, Hildegard von Bingen,” since all composers are equally unfamiliar to a mass audience.

Like the disappearance of a certain species of frog or insect, this is a small change that signals a profound transformation of the climate—in this case, the cultural climate. It’s a truism that high culture, as it used to be known, has been steadily losing its authority since the rise of mass culture in the early twentieth century. In 1939, the art critic Clement Greenberg observed in his essay “Avant-Garde and Kitsch” that the audience for stringent modernist works by Joyce or Picasso was dwarfed by the appetite for “ersatz culture, kitsch, destined for those who, insensible to the values of genuine culture, are hungry nevertheless for the diversion that only culture of some sort can provide.” Kitsch, Greenberg wrote, “has gone on a triumphal tour of the world, crowding out and defacing native cultures in one colonial country after another, so that it is now by way of becoming a universal culture, the first universal culture ever beheld.”

A few years later, in their 1947 book Dialectic of Enlightenment, the Marxist cultural critics Theodor Adorno and Max Horkheimer offered an influential analysis of “the culture industry,” arguing that while genuine art allowed for free, individual response, Hollywood movies and pop songs turned their audiences into passive and docile consumers. In 1960, Dwight Macdonald popularized that idea in his essay “Masscult and Midcult,” writing that mass culture was “anti-art,” “a uniform product whose humble aim is not even entertainment, for this too implies life and hence effort, but merely distraction.”

It’s no coincidence that these famous attacks on kitsch and masscult appeared in just the same midcentury moment as “The Three Bs” and “Roll Over Beethoven.” High culture remained the superego of a society that still nominally believed in artistic values like genius, originality, beauty, and complexity. Similarly, for Greenberg, Adorno, and Macdonald, in their different ways, high culture remained the standard by which the inferior products of the culture industry could be judged and found wanting. Though they write as analysts, they are really making an appeal: read Joyce, not James Gould Cozzens; look at Picasso, not Norman Rockwell; listen to Beethoven, not Chuck Berry.

There would be no point in making such an appeal if these critics didn’t believe that the general public is capable of preferring the better to the worse, if only it is given the chance. “It is precisely because I do believe in the potentialities of ordinary people that I criticize Masscult,” Macdonald writes. In his 1963 essay “Culture Industry Reconsidered,” Adorno insists that the consumers of mass culture actually despise and resent it: “They force their eyes shut and voice approval, in a kind of self-loathing, for what is meted out to them, knowing fully the purpose for which it is manufactured.”

Some such assumption is necessary for any thinker who wants to reconcile a commitment to artistic excellence with a commitment to democracy. It is parallel to the Marxist concept of false consciousness, which does the same thing in the political realm. If the working class ought to be revolutionary, why does it cling to its reactionary attachments to nation and religion? Because it is deceived by the ideology of the ruling classes. Why does the public prefer masscult to high culture? Because it is the victim of indoctrination by the culture industry. For Adorno that industry was a branch of capitalism, but Macdonald notes that the Soviet culture industry operated in much the same way: “like ours, it is imposed from above and it exploits rather than satisfies the needs of the masses.”

The idea of cultural false consciousness put a twentieth-century spin on a quintessentially Victorian idea: that the ills of modern society can be remedied by the right kind of culture. In his 1829 book On the Constitution of the Church and State, Samuel Taylor Coleridge proposed that this task be assigned to an intellectual class he named the clerisy, which would be paid by the state through a national endowment as a secular counterpart to the clergy of the Church of England. The clerisy would be “planted throughout the realm, each in his appointed place, as the immediate agents and instruments in the great and indispensable work of perpetuating, promoting, and increasing the civilization of the nation.”

As Raymond Williams showed in his classic 1958 book, Culture and Society, the idea of culture as a social panacea appealed to Victorian thinkers troubled by the approach of democracy. Social critics like Thomas Carlyle and John Stuart Mill believed that the diffusion of culture would elevate the mind of the people, making it a wise sovereign rather than a violent and vengeful one. In the title of his 1869 book, Culture and Anarchy, Matthew Arnold named two possible futures for democratic society, leaving no doubt which was to be preferred.

It wasn’t just the working class that Arnold saw as in need of culture. “Sweetness and light” was also lacking among the aristocracy, whom Arnold nicknamed Barbarians, and the middle class, whom he called Philistines. Every class suffered from the tendency toward “anarchy,” since all of them believed in “an Englishman’s right to do what he likes; his right to march where he likes, meet where he likes, enter where he likes, hoot as he likes, threaten as he likes, smash as he likes.”

To counter this libertarian nihilism, Arnold looked to the influence of culture, which he famously defined as “the best which has been thought and said in the world.” Exposure to the best would teach the Englishman to embrace “the pursuit of our total perfection,” which Arnold saw as the goal of culture and the key to a better society. He served this cause in his own working life as a school inspector, part of the new bureaucracy created to extend education to the working class.

In his 1988 book, Highbrow/Lowbrow: The Emergence of Cultural Hierarchy in America, the historian Lawrence Levine shows that Arnold had “an enormous influence in the United States,” where he inspired a generation of the intelligentsia to work for higher cultural standards. As Henry James wrote in 1884, “I shall not go so far as to say of Mr. Arnold that he invented” the concept of culture, “but he made it more definite than it had been before—he vivified and lighted it up.”

Arnold looked forward to a future in which democracy became cultured. By the time Greenberg and Adorno wrote in the 1930s and 1940s, however, it was clear that the opposite had happened: under the pressure of mass society and mass media, culture had been democratized. In the mid-twentieth century, it was no longer possible to argue that most people in Britain and America didn’t have access to “the best which has been thought and said.” Thanks to public education, public libraries, and public museums—and even more to new technologies like the phonograph and rotogravure—the treasures of the past had never been available to more people, regardless of birth. Someone like Clement Greenberg, the Bronx-born son of Jewish immigrants, could never have become a paladin of Western culture before the twentieth century. He simply wouldn’t have had access to the pictures and texts (to say nothing of the religious and class prejudice that would have barred him from his authoritative position).

Yet it turned out that the problem with culture had more to do with demand than supply. The same forces that made high culture accessible also created a mass culture that was a thousand times more popular, profitable, and influential. As Greenberg observed, “Because it can be turned out mechanically, kitsch has become an integral part of our productive system in a way in which true culture could never be, except accidentally.” This fact has never been as unavoidable as it is today, in our age of digital quantification. Kindle and Spotify give us a degree of access to “the best which has been thought and said” that a Medici or a Rockefeller couldn’t have bought at any price, while simultaneously reminding us that almost no one cares.

For instance, if you search for Beethoven’s Fifth Symphony on Spotify, the most popular recording of the most popular piece in the classical repertoire is the one made in 1984 by Herbert von Karajan with the Berlin Philharmonic. The first movement has been streamed about 1.5 million times, the third about half a million (which tells a story in itself). By contrast, the hit song “Driver’s License,” by the teen pop star Olivia Rodrigo, was released in January 2021 and by the end of May it had been streamed 800 million times. These numbers are hard to reconcile with Adorno’s theory that pop music fans “force their eyes shut and voice approval, in a kind of self-loathing.”

As for books, a recent visit to Amazon’s “Literature and Fiction” section found that the top sellers were a romance novel by Nicholas Sparks and a science-fiction novel by Andy Weir. By comparison, Ulysses, which served Greenberg as an example of the best of the twentieth-century avant-garde, sits at a sales rank of around 81,000 in the long-standard Gabler edition (though, since it’s now out of copyright, there are several editions to choose from).

Of course, Spotify and Kindle are imperfect measures of the true currency of any work. But they confirm the impression that people devoted to high culture must already have: that they are members of a very small minority. Just how small is impossible to say with any confidence. How many Americans pay attention to serious contemporary literature, art, or music? An estimate of one-half of one percent of the population—1.6 million people—would surely be on the high side.

The fact that most people aren’t interested in “the best which has been thought and said” isn’t really new. The same was true in 1869—that’s why Arnold wrote his book. What is new is the rejection of high culture in theory, as well as in practice. Since the 1960s, traditionally “high” forms and values have lost the power to command even notional respect from the public at large—even the inverted homage of satire, as in “Roll Over Beethoven,” has disappeared.

Over the same period, high culture has also been losing authority among its traditional custodians. This development was heralded by Susan Sontag’s 1966 essay collection, Against Interpretation, which concludes with Sontag’s praise of what she calls “the new sensibility,” for which “the distinction between ‘high’ and ‘low’ culture seems less and less meaningful.” She welcomes “a new attitude toward pleasure,” “a new, more open way of looking at the world and at things in the world”—in particular, at the products of pop culture, such as “the personalities and music of the Beatles.”

In calling for an end to the snobbish rejection of pop culture, Sontag believed that she was being daringly progressive. But in the rueful and revealing afterword she wrote for a reissue of the book in 1996, she realized that she had been kicking down a door already dangling from its hinges. “In writing about what I was discovering, I assumed the preeminence of the canonical treasures of the past,” Sontag admitted. “No hierarchy, then? Certainly there’s a hierarchy. If I had to choose between the Doors and Dostoevsky, then—of course—I’d choose Dostoevsky. But do I have to choose?”

After the 1960s, no one had to choose. John Berryman quoted T. S. Eliot in “Dream Song #53”—“I seldom go to films. They are too exciting,/ said the Honorable Possum”—but that kind of purism died long before Eliot himself did, and no one misses it today. Sontag was right that popular culture offers legitimate pleasures of its own that we’d be worse off without. “What I didn’t understand,” she admitted in 1996, “was that seriousness itself was in the early stages of losing credibility in the culture at large and that some of the more transgressive art I was enjoying would reinforce frivolous, merely consumerist transgressions.”

But by then it was too late. People like humanities professors, arts administrators, and museum curators, whose identity and livelihood depended on the prestige of “seriousness,” quickly saw that it was now possible to give up the thankless mission of telling the public to like things it didn’t like. Instead, they could tell the public why it was right not to like them.

Levine’s Highbrow/Lowbrow is a good example. The historian argues that when some Americans in the nineteenth century started to want to see Wagner operas at full length and in an atmosphere of concentration, rather than a potpourri of Verdi arias interspersed with patriotic songs and pantomimes, it was because they were consolidating their class privilege. The Arnoldian idea of culture, Levine writes, appealed to “segments of the new professional and middle classes who lacked any bedrock of security and needed to distance themselves, culturally at least, from those below them on the socioeconomic scale. The cloak of culture—approved, sanctified, conspicuous culture—promised to become a carapace impervious to assault from above or below.”

Such a populist interpretation of culture, which became irresistible in the late twentieth century, essentially agrees with Arnold’s description of the nineteenth-century American public in Culture and Anarchy: “This leaves the Philistines for the great bulk of the nation;—a livelier sort of Philistine than ours, and with the pressure and false ideal of our Barbarians taken away, but left all the more to himself and to have his full swing!” It just reverses the plus and minus signs: the Philistines are praised for their rude cultural health while the apostles of sweetness and light are denounced as bullying snobs.

Ironically, today’s custodians of culture still believe in the Arnoldian mission of culture to improve civilization. The difference is that, for most of them, the definition of culture as “the best which has been thought and said” now appears to stand in the way of that mission rather than furthering it. Instead, the watchword is inclusion. As the Denver Art Museum put it last summer in a statement of its “Commitment to Action,” “the museum will strive to be an inclusive space where all are recognized and heard.”

Many cultural institutions made similar statements last year in response to the Black Lives Matter protests. But while the political context is new, the imperatives of inclusion and anti-elitism aren’t. They were already at work in the entirely apolitical “The Art of the Motorcycle” show at the Guggenheim back in 1998, which the museum’s director, Thomas Krens, defended on the grounds that “We can’t focus on Monet and minimalism too much.” Then Monet (of all artists!) was too elitist; now he may be too Eurocentric. But the core of the problem is the same: cultural institutions that depend on the public for support are loath to tell the public what it’s supposed to like.

The idea that high culture could challenge the values of democratic society and come out the victor was wishful from the beginning. Shelley implicitly acknowledged as much two hundred years ago in “A Defense of Poetry,” when he called poets “the unacknowledged legislators of the world.” The Victorian sages hoped to turn the poets—and the novelists, philosophers, painters, and composers—into acknowledged legislators, and for a time parts of society paid lip service to the idea. But the reality of overwhelming public indifference to high culture was always plain to see, and in time the partisans of culture lost their appetite for fighting it.

Today Arnold’s dream is turning itself inside out: those he would have considered cultured increasingly have to justify themselves to the uncultured, rather than vice versa. Another way of putting it is that high culture now functions like a counterculture, entailing a conscious act of dissent from the mainstream. Popular culture—television shows, pop songs, memes—is every American’s first language, the one we acquire whether we want to or not. Learning to understand and appreciate high culture is like learning a second language, which requires deliberate effort (something Americans are famously reluctant to do).

When high culture was officially prized, joining the counterculture meant rejecting its values. In the 1950s and ’60s, Beats and hippies rejected modernist ideals like irony, complexity, and awareness of tradition in favor of sincerity and immediacy—Allen Ginsberg’s motto was “first thought, best thought.” Today the situation is reversed, and things like aestheticism, pessimism, and the embrace of difficulty are highly countercultural. Indeed, they are more subversive than the Sixties counterculture ever was, since the latter—as Sontag belatedly realized—promoted a hedonism that was very much in the American grain and proved easy to assimilate to plain old consumerism. By contrast, there has never been much of a constituency for the obstinate and the exigent—just look at the worldly careers of Herman Melville and Emily Dickinson.

Finally, culture is countercultural in the sense that it carries more social risk than reward. Preferring things that are old, distant, and difficult to those that are immediate and ubiquitous means alienating oneself from one’s community, in some cases from one’s own family. It is at best an inexplicable quirk, at worst a form of antisocial arrogance. Villains in American movies are notoriously fond of classical music—like Lt. Col. Kilgore in Apocalypse Now, who massacres Vietnamese civilians with the “Ride of the Valkyries” blasting, or Hannibal Lecter in The Silence of the Lambs, who listens to the Goldberg Variations while eating a prison guard’s flesh.

In the twenty-first century, culture isn’t an asset even in the narrow precincts where it might seem most at home. The academy is the closest thing we have to a culture preserve, like the nature preserves where endangered species roam free. But humanities enrollments are plummeting even at elite colleges, and students who do major in subjects like English or art history generally emerge with limited, idiosyncratic knowledge and a developed antagonism to the idea of high culture. The poet Randall Jarrell once joked that English departments produce literary criticism only in the sense that a penitentiary produces counterfeit money—because some of the inmates are pursuing a private vocation. The same is true when it comes to culture. The point of academic study, especially at the graduate level, is to develop skills for professional advancement in the academy. Becoming a cultured person happens on your own time, if at all.

All this may sound like a lament. But if acknowledging that culture is a counterculture means letting go of an old humanistic dream, it also puts an end to the evasions and compromises that are inevitable when culture is conceived of as an edifying force. The idea that engagement with classic works of art and thought is productive of sweetness and light was always at best a half-truth. There’s at least as good a case to be made that high culture is an antisocial force, encouraging inwardness and withdrawal, perplexity and disturbance. Arnold’s own poetry is anything but sweet or light; it echoes with what he calls, in “Dover Beach,” “the eternal note of sadness.”

In twenty-first-century America, certainly, high culture appears deeply subversive. Plato’s Republic teaches contempt for democracy as surely as King Lear teaches contempt for humanity. The Goldberg Variations are useless in the strict sense—they can be put to no use; they do nothing to make the listener more effective or a better citizen. Indeed, the most unsettling thing about high culture is that it is not a means to an end but an end in itself—which makes it the exact opposite of money, our usual standard for measuring worth.

That’s one reason why people who become culture heroes after their death are often seen as useless or worse while they’re alive. As the historian Ernest Renan wrote, “Opposition always makes the glory of a country.” If high culture must go into opposition in twenty-first-century America, at least it should exercise a privilege that is unavailable to a clerisy, but has always accrued to countercultures—the enjoyment of a certain stylish defiance.
