What lies beneath the Chapel Hill murders? More than a ‘parking dispute’

—Nadine Naber

We may never know exactly what Craig Stephen Hicks was thinking when he killed Syrian American medical student Deah Barakat, his Palestinian American wife Yusor Abu-Salha, and her sister Razan Abu-Salha. But we do know that U.S.-led war in Arab and Muslim majority countries has normalized the killing of Arabs and Muslims. It is more crucial than ever before to understand the institutionalization of racism against persons perceived to be Arab or Muslim in terms of the structures of imperial war that normalize killing and death and hold no one (other than victims themselves) accountable.

Photo: Molly Riley/UPI.

The Obama Administration may have dropped the language of the “war on terror,” but it has continued its fundamental strategy of endless war and killing in the Arab region and Muslim majority countries such as Afghanistan and Pakistan (without evidence of criminal activity). The unconstitutional “kill list,” for instance, allows the president to authorize murders every week, waging a private war on individuals outside the authorization of Congress. Strategies like the “kill list” resolve the guilt or innocence of list members in secret and replace the judicial process (including cases involving U.S. citizens abroad) with quick and expedited killing. These and related practices, and their accompanying impunity, look something like this:

Al Ishaqi massacre, Iraq 2006: The U.S. Army rounded up and shot at least 10 civilians, including 4 women and 5 children. The Iraqis were handcuffed and shot in the head execution-style. The U.S. spokesperson’s response? “Military personnel followed proper procedures and rules of engagement and did nothing wrong.”

Drone attack, Yemen 2015: A drone killed 13-year-old Mohammad Tuaiman (whose father was killed in a 2011 drone strike), his brother, and a third man. Questioned about the incident, the CIA stated that “the 3 men were believed to be Al Qaeda,” even though it refused to confirm that Mohammad was an Al Qaeda militant.

The U.S.-backed Israeli killing of Palestinians reinforces the acceptability of Arab and Muslim death. In July 2014, the Israel Defense Forces killed at least 2,000 Palestinians, including 500 children. It is well established that IDF soldiers deliberately targeted civilians. The Obama Administration’s response? Explicit support for Israel.

And those left behind are forced to watch their loved ones’ bodies fall to the ground or burn like charcoal and can only conclude that, “In [the U.S. government’s] eyes, we don’t deserve to live like people in the rest of the world and we don’t have feelings or emotions or cry or feel pain like all the other humans around the world.”

Since the 1970s (when the U.S. consolidated its alliance with Israel), the corporate news media has reinforced the acceptability of Arab and Muslim death—from one-sided reporting to fostering fear of Arabs and Muslims. From Black Sunday (1977) to American Sniper (2015), Hollywood has sent one uninterrupted message: Arabs and Muslims are savage, misogynist terrorists; their lives have no value; and they deserve to die.

This interplay between the U.S. war agenda abroad and the U.S. corporate media extends directly into the lives of persons perceived to be Arab and/or Muslim in the United States. Hate crimes, firebomb attacks, bomb threats, vandalism, detention and deportation without evidence of criminal activity, and more have all been well documented. Of course, such incidents escalated in the aftermath of the horrific attacks of 9/11. As the U.S. state and media beat the drums of war, anyone perceived to be Arab and/or Muslim (including Sikhs, Christian Arabs, and Arab Jews) became suspect. Muslim women who wore the headscarf became walking emblems of the state and media discourse of Islamic terrorism. Across the United States, at school, on the bus, at work, and on the streets, women wearing the headscarf have been bullied, have had their scarves torn off, and have been asked over and over why they support Al Qaeda, Saddam Hussein, terrorism, and the oppression of women.

Despite this, the corporate media (replicating the words of the police) and government officials have either reduced the North Carolina killings to a parking dispute or expressed grave confusion over why an angry white man would kill three Arab Muslim students in North Carolina execution-style. Yet the father of one of the women students stated that his son-in-law did not have any trouble with Hicks when he lived there alone. The trouble, he said, started only after Yusor, who wore a headscarf identifying her as a Muslim, moved in. Even so, Chapel Hill Mayor Mark Kleinschmidt told CNN that the community is still “struggling to understand what could have motivated Mr. Hicks to commit this crime,” adding, “It just baffles us.”

The “parking dispute” defense individualizes and exceptionalizes Hicks’ crime—in this case, through a logic that obscures the connection between whiteness, Islamophobia, and racism. And the bafflement rhetoric constructs a reality in which there are no conceivable conditions that could have potentially provoked Hicks. Both approaches deny the possible links between the killings, U.S. and Israeli military killings, the media that supports them, and the U.S. culture of militarized violence. They will also assist Hicks in attempting to avoid the more serious hate crime charge that would come with a heavy additional sentence.

Alternatively, discussions insisting on the significance of Islamophobia in this case must go beyond the call for religious tolerance and account for the projects of U.S. empire building and war that underlie Islamophobia. Contemporary Islamophobia is a form of racism and an extension of U.S.-led war abroad. As I wrote in Arab America, immigrant communities from the regions of U.S.-led war engage with U.S. racial structures, specifically anti-Arab and anti-Muslim racism, as diasporas of empire—subjects of the U.S. empire living in the empire itself. Perhaps then, we should also avoid applying the same analysis of racism across the board—as if all racisms are the same or as if the framework #blacklivesmatter can simply be transposed onto the killing of Arab Muslim Americans. Otherwise, we risk disremembering the distinct conditions of black struggle (and black Muslims), including the systematic state-sanctioned extrajudicial killing of black people by police and vigilantes, as well as black poverty and histories of slavery and mass incarceration. It is also important to remember the distinct conditions of the war on terror whereby anyone and everyone perceived to be Muslim (including Arab Christians and Sikhs) is a potential target.

Rest in peace and power Deah Barakat, Yusor Abu-Salha, and Razan Abu-Salha. May your loved ones find strength and support. My heart is shattered.

Nadine Naber is Associate Professor in the Gender and Women’s Studies Program at the University of Illinois at Chicago. She is the author of Arab America: Gender, Cultural Politics, and Activism (NYU Press, 2012). 

Beyond intent: Why we need a new paradigm to think about racialized violence

—Evelyn Alsultany

Three Muslim Americans – Deah Shaddy Barakat, 23; his wife, Yusor Mohammad, 21; and her sister, Razan Mohammad Abu-Salha, 19 – were murdered last week in Chapel Hill, North Carolina, by 46-year-old resident Craig Stephen Hicks. The tragedy has sparked a debate over whether these deaths were the result of a hate crime or a parking dispute.

Women take part in a vigil for three young Muslims killed in Chapel Hill, North Carolina. Photo: Mandel Ngan/AFP/Getty Images.

Muslim Americans who claimed that this was surely a hate crime were presented with evidence to the contrary. Hicks’s Facebook and other online posts revealed that he is an atheist who is against all religion, be it Islam, Christianity, or Judaism; a gun enthusiast; and an advocate for gay rights. His online posts show that he is passionate about the protection of constitutional rights, especially freedom of speech and freedom of religion. His archived posts even include commentary on the “Ground Zero mosque” controversy, in which he writes in support of Muslim rights and notes the important distinction between Muslims and Muslim extremists. His wife has insisted that the murders were the result of a parking dispute, and not a hate crime. As a result, Hicks has been portrayed as not hating Muslims.

This profile of Hicks is indeed complex. He does not fit the conventional profile of a “racist” – i.e., someone who believes that all Muslims are a threat to America; who clings to essentialist and binary notions of identities; who espouses that certain groups of people do not deserve human rights; who practices intentional bigotry; who is firmly rooted in a logic that justifies inequality. I am reluctant to use the term “racist” since it conjures an image of someone who participates in blatant and intentional forms of hate. However, what this case shows us is that we need a new paradigm to understand racialized violence today. Yes, this case is complex, but that does not mean it is not a hate crime. It is complex because it does not fit the narrow way in which we have defined a hate crime.

Posing an either/or option – either this is or is not a hate crime – does not help us understand what transpired. Racism is not an either/or phenomenon. It is deeply embedded in our society and, when left unchecked, has the potential to inform our perceptions and actions in ways not captured by a caricatured understanding of its diverse forms. Racism is not always conscious or intentional. It exists across a vast spectrum of consciousness and intentionality.  As part of our subconscious, racism can manifest in the form of microaggressions that are often unintentional and sometimes even well-meaning. On the more dangerous side of the spectrum, it manifests in violence. We need to break the association of racism with intent because racism endures without it.

Our current cultural paradigm often makes a simplistic equation: Good people are well-intentioned and are therefore not racist; bad people are ill-intentioned and are therefore racist. Consequently, if the white police officers who killed Michael Brown and Eric Garner are considered “good people” by their friends, families, and colleagues, their actions cannot be deemed racist. Such a conclusion focuses solely on intent and overlooks how members of the police – like all of us – have been shaped and influenced by notions of black men as threatening and how such cultural imagery has, in turn, structured racialized violence.

The point is not that Craig Hicks is any more or any less racist than the white police officers who murdered Michael Brown, Eric Garner, and other black men. Indeed, the question of their individual, consciously expressed or felt racism does not help us to understand what happened or how to prevent it in the future; it just provokes denial and defensiveness. Conversely, claiming that we are “post-race” and/or denying that a particular incident has anything to do with race does not help us solve the problem of racialized violence.

The point is not whether Craig Hicks is any more or less racist than any of us; the point is that Craig Hicks lives and his victims died in a society that is structured by deeply institutionalized and culturally pervasive racism that exists regardless of whether any individual “wants” it to or not, and regardless of whether we as a society want to acknowledge it or not. We need a new paradigm, a new language and framework, to understand racialized violence today. Hicks’ profile provides an opportunity to challenge ourselves to rethink our understanding of racism and hate crimes in order to prevent murder.

Evelyn Alsultany is Associate Professor in the Program in American Culture at the University of Michigan. She is the author of Arabs and Muslims in the Media: Race and Representation after 9/11 (NYU Press, 2012).

#MuslimLivesMatter, #BlackLivesMatter, and the fight against violent extremism

—Zareena Grewal

On Tuesday, February 10, 2015, Craig Stephen Hicks, 46, was charged with the first-degree murder of three Arab Muslim college students in Chapel Hill, North Carolina.

Photo: http://twitter.com/samahahmeed.

Hicks’ neighbors, Deah Shaddy Barakat, 23, and Yusor Mohammad, 21, were newlyweds—and Razan Mohammad Abu-Salha, 19, was visiting her older sister and brother-in-law at the time of the execution-style killing. After the mainstream US media’s initial silence, the homicide is now referred to as “a shooting,” sparking worldwide Twitter hashtag campaigns such as #CallItTerrorism and #MuslimLivesMatter, with many speculating on how the crime might have been framed had the perpetrator been Muslim and the victims white.

The motives of Hicks, who turned himself in to police, are the source of heated debate and speculation. According to his Facebook profile, Hicks describes himself as an anti-theist, a fan of the controversial film American Sniper and atheist polemicist Richard Dawkins, and a proud gun-owner. The Chapel Hill Police Department described the crime as motivated by an on-going dispute between the neighbors over parking, while the father of the two young women insists it was a “hate-crime.” Chief Chris Blue recognizes and continues to investigate “the possibility that this was hate-motivated.”

Such language suggests that while Hicks’ violence is exceptional and excessive, his motivations could have been ordinary and benign: maybe he was there first, maybe he had dibs on that parking spot, maybe he had a bad day or a bad life and so he had a mental breakdown with a gun in hand. After all, while this murder is devastating to the family and friends of the victims, for many of us, it is not shocking. We know and expect “lone shooters” to be white, heterosexual men; we know and expect their victims to be men of color, women, youth.

But it is American Muslim leaders who will gather in DC for the Obama administration’s “Countering Violent Extremism Summit” in a few days.

Individualizing the violence of white American men into “lone wolves” conceals the regularity of such violence and the state’s inability to prevent it, to make us “secure,” even to name it. This is one of the searing lessons of the #BlackLivesMatter movement: George Zimmerman’s sense of insecurity was used to justify his murder of an unarmed black teenager, Trayvon Martin. As the #BlackLivesMatter movement demonstrates, Zimmerman was part and parcel of a larger phenomenon of racial, homicidal violence against unarmed blacks enacted in tandem by ordinary white citizens “standing their ground” and militarized police forces.

A significant number of blacks in the US are also Muslim and, therefore, vulnerable to being brutalized and murdered simply because they are black. Despite the fact that black youth are more than four times as likely as any other group to be gunned down by police, critics of #BlackLivesMatter continue to ignore this harsh reality, insisting that #AllLivesMatter.

Clearly, all lives do not matter to everyone. The #BlackLivesMatter movement brings our attention to the fact that violence in the name of white supremacy only horrifies and terrifies some of us.

Disingenuous claims about how all lives matter or how parking is frustrating hide the insidious influence of racism. In my book, Islam is a Foreign Country, I explore how American Muslim communities grapple with the pervasive, racial hatred of their religion. This morning a Pakistani friend asked whether she will now have to explain to her young children that some people hate them just for being Muslim. African American Muslims know all too well that the question is not whether but when to teach their children that they are vulnerable. Hicks’ victim knew it too; she saw it in his eyes, telling her father, “He hates us for what we are and how we look.”

Zareena Grewal is Associate Professor of American Studies and Religious Studies at Yale University. She is the author of Islam is a Foreign Country: American Muslims and the Global Crisis of Authority (NYU Press, 2015).

No scrubbing away America’s racist past

—Carl A. Zimring

Last week, @deray tweeted an image of a century-old soap advertisement showing a young white boy using soap to wash the pigment off of a young African-American boy’s body. He captioned it “Ads. Bleaching. History. America.”

Had he wished to, @deray could have sent out dozens of such tweets, each with a different image. The tweeted image was but one of dozens printed between 1880 and 1915 displaying claims that soaps could literally wash dark pigment off of skin. My forthcoming book, Clean and White, reproduces similar examples from Lautz Brothers, Kirkman and Sons, and Pearline. The latter featured an illustration of an African-American woman scrubbing a young child and exclaiming “Golly! I B’leve PEARLINE Make Dat Chile White.”

These racist caricatures focused primarily but not exclusively on African-Americans. Kirkman and Sons released an advertisement sometime after 1906 that referenced that year’s Pure Food and Drug Act. The ad showed three white women washing three wealthy Turkish men’s skin from brown to white. The accompanying poem tells the story of how the women were the Turkish men’s maids. They convinced the men to let them wash them with the soap, transforming their features to milky white. The story ended happily, with the now-white men marrying each of the maids. Racial and class lines were transcended, all through the miracle of a pure, cleansing soap.

Such a message was consistent with the trope that skin darker than white was somehow impure and dirty. Products boasting of absolute purity claimed to be so powerful that they could literally wash away the stain of race.

Why do these images matter as anything beyond century-old relics of America’s racist past? These images proliferated at a time when the rhetoric and imagery of hygiene became conflated with a racial order that made white people pure, and anyone who was not considered white was somehow dirty. The order extended from caricatures to labor markets. Analysis of census data indicates the work of handling waste (be it garbage, scrap metal, laundry, or domestic cleaning) was disproportionately done by people who were not native-born white Americans.

Through World War II, this involved work by African Americans and first- and second-generation immigrants from Asia, Latin America, and Southern and Eastern Europe. In the second half of the twentieth century, the burdens of this dirty and dangerous work fell more heavily on Hispanic and African-American workers, creating environmental inequalities that endure to this day. They are evident in the conditions that led to the Memphis sanitation workers’ strike in 1968, as well as in the residents of Warren County, North Carolina, lying down in the street to block bulldozers from developing a hazardous waste landfill in 1982. Environmental inequalities are evident still in environmental justice movements active across the United States in 2015.

Since the end of the Civil War, American sanitation systems, zoning boards, real estate practices, federal, state, and municipal governments, and makers and marketers of cleaning products have all worked with an understanding of hygiene that assumes “white people” are clean, and “nonwhite people” are less than clean. This assumption is fundamental to racist claims of white supremacy, a rhetoric that involves “race pollution,” white purity, and the dangers of nonwhite sexuality as miscegenation. It is also fundamental to broad social and environmental inequalities that emerged after the Civil War and that remain in place in the early twenty-first century. Learning the history of racist attitudes towards hygiene allows us to better understand the roots of present-day inequalities, for the attitudes that shaped those racist soap advertisements remain embedded in our culture.

Carl A. Zimring is Associate Professor of Sustainability Studies at Pratt Institute. He is the author of Clean and White: A History of Environmental Racism in the United States from Monticello to Memphis (forthcoming from NYU Press).

‘Left Behind,’ again? The re-emergence of a political phenomenon

—Glenn W. Shuck

Critics just don’t get Left Behind, a new movie adaptation of the best-selling book series. Sure, it’s predictably awful. The acting is bad, the production is terrible, and the plot is thinner than Soviet toilet paper. But the stakes are far higher than with a typical, first-order howler. Left Behind preaches to the choir, sure, but this is no ordinary choir! The film, like the novels, doesn’t cater to Hollywood styles; it’s all about motivating people to “spread the word,” and that word is just as political as it is otherworldly.

Ten years have passed since the original Left Behind novels by Tim LaHaye and Jerry B. Jenkins concluded. In a world where news cycles grow ever shorter, ten years is several lifetimes. Sure, the Christian suspense novels helped unleash powerful political forces then, but what about now?  The series and the values it champions may have found a way to return with the debut of the new Left Behind film.

Why Left Behind?  Why now?  Financial motives, as always, play a powerful role. Just look at recent films and television serials. Apocalypse sells. God sells. Fear sells. But another motive is at hand: apocalyptic narratives are also multi-stranded; they carry, after all, a revelation. They proclaim a new way of being in the world. In short, apocalyptic narratives often motivate action.

Dr. LaHaye and Mr. Jenkins helped politically conservative evangelicals in the 1990s move beyond “single issue” and “values voters” labels to empower their political imaginations beyond narrow and predictable categories. As the series progressed and the politics behind them came together, they helped an upstart and then highly disliked presidential candidate, George W. Bush, to two unlikely victories, selling millions of political primers along the way. The Left Behind phenomenon helped embolden a hyper-energized religious right.

But something went wrong with doomsayers’ forecasts of evangelical political dominance: evangelicals stopped voting at the high rates that had boosted President Bush. It wasn’t just that the original Left Behind film and its sequels were big-screen busts. “New” Republican standard-bearers Senator John McCain and former Massachusetts governor Mitt Romney never held much appeal for evangelicals. Yes, it hurt that the Left Behind series and its spin-offs and spokespersons were no longer so influential. Moreover, Bush’s unpopularity became toxic. More “moderate” Republican candidates, however, without the full support of a key voting bloc, found a different kind of apocalypse.

Fast-forward to 2014. A president is not on the ballot, but President Obama’s policies certainly are as Democrats fight to retain the Senate. Polls and pundits raise concerns for Democrats. But Democrats ought to also consider a voting bloc that has been under-engaged for a decade. Some experts have assumed evangelicals and the Tea Party are one and the same (or similar enough), hence one can already account for these potential voters. But it is simplistic to equate the Tea Party with the religious right. It takes more than faux filibusters to push high percentages of mercurial evangelical conservatives to the polls, especially in a midterm election, albeit one as critical as this year’s.

Re-enter the Left Behind phenomenon. Left Behind, another adaptation of the novels, is earning the Left Behind phenomenon, and the values it champions, a closer look. The film does not have the highest budget, but this re-boot has fared much better than the 2001 original, grossing almost twice as much in its first weekend (roughly $7 million) as the ill-fated original did all told. Whether the film grosses $20 million or $50 million matters less, however, than the fact that it has brought conservative evangelicals back into the news cycle.

Thus in the days leading up to the 2014 midterms, Republicans have a wild card in Left Behind that just may become an ace. It is absurd to suggest a low-budget film will change the balance of power in the U.S.A., but it has resurrected the dynamics of the novels, and conservative evangelicals finally have a powerful reminder to vote.  And as any pundit will admit, it won’t take much to tip the scales in Washington.

Finally, the timely release of Left Behind one month before the crucial midterms may owe to coincidence. Evangelicals do not believe in coincidence, however, nor should campaigners in evangelical-filled battleground states such as North Carolina, Kansas, and Iowa, to name a few. Left Behind is playing in the heartland, playing for the hearts and minds of conservative evangelical voters. Critics who dismiss Left Behind as simply an awful film and fire dull hip-shots with dismissive derision and canned clichés miss the point. Left Behind is not about film prizes or outstanding cinematography or even good taste. It’s all about “spreading the word.” Who will be left behind if the film re-energizes its core audience and steers them into action just weeks before the crucial elections next month?

Glenn W. Shuck is Assistant Professor of Religion at Williams College and author of Marks of the Beast: The Left Behind Novels and the Struggle for Evangelical Identity (NYU Press, 2004).

“Sounds familiar”: The revolution in #Ferguson

—Shana L. Redmond

Seven days after the murder of Black youth Michael Brown by police officer Darren Wilson in Ferguson, Missouri, state governor Jay Nixon instituted a citywide curfew between the hours of midnight and 5am. This effort to tame and remove from view those who continue to protest and rebel against the injustices suffered there and around the country was part of a larger blackout designed to conceal the escalation of the militarized police state in Ferguson. This strategy on the part of the city’s police department included disabling streetlights and attacking and arresting journalists covering the story.

My first effort to see the real-time, on-the-ground happenings in Ferguson came on that day, the same day that alternative media streams temporarily went black. I visited Activist World News Now online for the live stream of Ferguson, but I did not get a visual of that embattled community’s resolve; instead I heard it. From beyond the black screen I heard a voice leading a chant:

Solo voice: “Won’t be no police brutality…”

All: “…when the revolution come.”

Solo voice: “Won’t be mass incarceration…”

All: “…when the revolution come.”

This performance, in the step-worn and gas canister-ridden streets of Ferguson, was the sound of protest and all the evidence I needed to document this war zone. The sound showed me that police antagonized protestors. The sound showed me that there were critical and politically diverse numbers of people there, demanding change. The sound showed me that neither voices nor spirits were broken in that city under siege. And the sound showed me that the peoples’ determination to imagine and claim different futures, free from police brutality and mass incarceration (amongst other violences), is alive even when haunted and pursued by death.

Unfortunately, the sounds emanating from Ferguson also showed me that times haven’t changed as much as some insist they have—at least not for African descended people in the U.S. The demands by protestors for alternatives to the frightening present are not new. Marcus Garvey, the inimitable leader of the Universal Negro Improvement Association (UNIA), argued a century ago for these futures that we still march for today. This truth is particularly devastating when one considers that one of his most famous speeches on Black violability was compelled by events that occurred 15 miles southeast of Ferguson.

In 1917, Garvey delivered a speech entitled “The Conspiracy of the East St. Louis Riots.” The bloody events of that massacre, which ensued just over the state line in western Illinois, began with a white mob that attacked the Black working-class section of the city over perceived competition for employment. Their murderous nativism was so explosive that the Illinois National Guard was deployed. Garvey’s speech on the incident followed a large silent protest staged by the National Association for the Advancement of Colored People (NAACP) in New York City. Far from condoning this approach, Garvey argued that it was “no time for fine words, but a time to lift one’s voice against the savagery of a people who claim to be the dispensers of democracy.” He continued,

For three hundred years the Negroes of America have given their life blood to make the Republic the first among the nations of the world, and all along this time there has never been even one year of justice but on the contrary a continuous round of oppression. At one time it was slavery, at another time lynching and burning, and up to date it is wholesale butchering. This is a crime against the laws of humanity; it is a crime against the laws of the nation, it is a crime against Nature, and a crime against the God of all mankind.

The litany of brutalities described here provides a genealogy of state-sanctioned violence that continues to its logical end in contemporary Ferguson, New York City, Los Angeles, Milwaukee, Atlanta, and countless other locations across the country. As in 1917, the National Guard has again been called to greater St. Louis. Black communities continue to be the laboratories for warfare, testing the efficacy of technology (Ferguson police have employed tanks, snipers, tear gas, and rubber bullets, to name but a few weapons) and enemy narratives that turn murder victims into easily disposable criminals. This is the status of our democracy.

I do not know what special meaning the people who slaughtered the Negroes of East. St. Louis have for democracy of which they are the custodians, but I do know that it has no literal meaning for me as used and applied by these same lawless people. America, that has been ringing the bells of the world, proclaiming to the nations and the peoples thereof that she has democracy to give to all … has herself no satisfaction to give 12,000,000 of her own citizens except the satisfaction of a farcical inquiry that will end where it begun, over the brutal murder of men, women and children for no other reason than that they are black people seeking an industrial chance in a country that they have labored for three hundred years to make great.[1] 


Shana L. Redmond is Associate Professor of American Studies and Ethnicity at the University of Southern California. She is a former musician and labor organizer. Her book, Anthem: Social Movements and the Sound of Solidarity in the African Diaspora, is available now from NYU Press. Follow her on Twitter: @ShanaRedmond.


[1] Marcus Garvey, “The Conspiracy of the East St. Louis Riots” (1917), reprinted on the “American Experience” website, Public Broadcasting Service (PBS), http://www.pbs.org/wgbh/amex/garvey/filmmore/ps_riots.html (accessed August 17, 2014).

Maleficent: A feminist fairy tale?

—Jessie Klein and Meredith Finnerty

Maleficent makes us want to stand up and cheer—and then sit down stunned. The film distinguishes itself as the third in a trend of major studio releases that seem determined to reverse the damage of the common fairy tale motif: “Wealthy princes save skinny damsels for love ever after.” Yet, as research reveals high U.S. social isolation, the reinvented princess plots portend ominous new troubles while embracing old snares; together these phenomena suggest that human love in the U.S. may be endangered.

In the wake of Brave (2012) and Frozen (2013), Maleficent suggests that true love at best won’t be found in some random prince you meet one day, and at worst, said prince may well be seeking to destroy you to realize his own ambitions.

“You got engaged to someone you met the same day?” howls Kristoff to Anna in Frozen. These messages are a partial triumph, advising young people to work to find a forever partner, among other priorities.

The other themes, though, are foreboding: In addition to pressure to look like ever more unattainable Photoshopped images (still contributing to eating disorders at ever younger ages), young people are told to look for intimacy from parents and siblings—and consider romantic love from a spouse (or anyone else) a distant, and perhaps unachievable, goal.

Maleficent’s former love, Stefan, steals her power to fly when he absconds with her wings in order to become king. In Frozen, Anna’s fiancé, Prince Hans, tries to kill Anna and destroy the ice powers endowed to her older sister, Queen Elsa, in order to mount their throne. And Princess Merida’s suitors in Brave, chosen by her parents, are arrogant and incompetent.

In Frozen, it is Anna’s sister, Elsa, who accidentally ices Anna’s heart, and then frees her from this fate with her own true love sibling kiss. In Maleficent, the evil witch-turned-doting mother figure embodies such love; and in Brave, Merida herself liberates her mother from life as a bear, with the heart only a daughter can bestow.

What a departure from the historic themes where evil stepsisters, stepmothers, and girls generally are so competitive that they achieve each other’s demise. Such parables characterizing sisters as envious and hateful are present in, among others, Oz the Great and Powerful (2013), expected in Cinderella (2015), and a constant in contemporary film renditions of classics such as King Lear.

The depiction of sisters and “stepmothers” as devoted to one another in Frozen and Maleficent is new; and the portrayal of true love found in familial bonds reflects startling statistics. Family intimacy has remained constant while relationships of other kinds are disintegrating, as revealed by comparing the 2004 General Social Survey with the 1985 GSS. The U.S. marriage rate has reached its lowest point in the past century: in 1920, 92.3 of every 1,000 unmarried women married; now it is 31.1, according to a 2013 study by Bowling Green State University’s National Center for Family and Marriage Research; and 40 to 50 percent of those unions end in divorce. Not least, people have fewer friends, and connect with neighbors and other community members less.

Today’s fairy tale heroines are also turning to non-human companions for support (note Maleficent’s bird and Anna’s snowman). Princess Merida and her mother see each other’s wisdom only when the mom becomes a bear. Could this be a reference to real-world declining rates of social connections outside the family? Almost 25 percent of women won’t marry unless their pets approve (per JDate and ChristianMingle’s State of Dating in America report, 2014), suggesting that animals are replacing humans for family support. Another trend is for women to adopt dogs instead of children.

Young people watch these films at a time when social isolation has tripled and empathy and trust have decreased. Other than with Mom and Dad, a trusted sibling, and perhaps a dog, people in the U.S. have less love in their lives than past generations.

We celebrate the victories in these reimagined legends. When before have children’s movies warned against blindly following the call to marry above any other goal—and encouraged girls to look for intimacy elsewhere, not least within the family? We appreciate the themes encouraging girls to know and use their inner power. These are among the memos we wish we and our peers had received in our formative years.

We hope, though, that future scripts will also describe, and prescribe, more hope for social relationships in America among intimate partners (gay, straight and other) and male and female human friends. We look forward to heroines who defy the still frozen frames whereby women must be blonde and stick-thin to be loved.

These standards are destructive and cruel, and have even expanded to torment men. New, impossibly high-definition muscle-man images have contributed to increasing rates of eating disorders among men, who now face life-threatening conditions such as the recently dubbed “bigorexia.”

Each of these tales shifts hope for the marriage in question from the classic “happily ever after” to “perhaps.” Will we see such a “maybe” embrace heroes and heroines with different body types, in future films? Could friends and neighbors be the source of an expanded depiction of the many shapes of true love? Let us know.

Jessie Klein is the author of The Bully Society: School Shootings and the Crisis of Bullying in America’s Schools (NYU Press, 2012). She is Associate Professor of Sociology and Criminal Justice at Adelphi University. Meredith Finnerty is a birth doula and certified HypnoBirthing Childbirth Educator (HBCE).

[Note: This article originally appeared on Psychology Today.]

‘The Fault’ in our memories

—Jodi Eichler-Levine

One fine morning in Amsterdam, Hazel Grace Lancaster, the protagonist of The Fault in Our Stars, sports a tee shirt emblazoned with Magritte’s most famous painting. It reads, “Ceci n’est pas une pipe” (This is not a pipe) under a painting of… a pipe. The point of the painting is that it is not a pipe, but rather, a representation of a pipe. A signifier. A treacherous fake.

Yet sometimes we insist that we see a pipe. In the same way, The Fault in Our Stars is not a group of teenagers with cancer; it is a representation of teenagers with cancer. We are enraptured by it because it signifies suffering but it is not the real thing, giving us a vicarious “fantasy of witnessing” tragedy. We insist that we are seeing heartbreak.

The film’s blockbuster success stems from many sources: the popularity of the novel; the rising power of teenage girls at the box office; our cultural fascination with death; and the fact that it is genuinely a strong film. However, except for a significant kerfuffle over a kiss in the Anne Frank house, the role of religion in the film has gone unremarked—particularly when it is religion on the fuzzy line between what we call “religious” and “secular.”

John Green, the author of the book on which the film is based, was a religion and English major at Kenyon College. Before becoming a writer, he served as a hospital chaplain and considered a career in ministry. Perhaps this is one reason why his luminescent book is filled with existential fear and a refusal to meet the terror of theodicy with empty platitudes. Here, teens with cancer meet in the “literal heart of Jesus” for a support group at a local church. Hazel is not comforted by this 12-step two-step, but she also recognizes the Sisyphean task of the group’s peppy leader, Patrick. Elsewhere, Hazel’s father asks who we are to deny an elegant universe its desire to be noticed.

This is what I find so profound about the book, its inspirations, and its afterlife. Religion no longer happens only in formal institutional spaces (and it probably never did). In the hallways of hospitals, in our visceral reaction as characters high on a movie screen ponder ultimate questions—in the act of sitting in that dark theater itself—religion is happening. So is memory.

Augustus Waters wants to be noticed before he dies. At first, by the universe: to live an exceptional life. He and Hazel know this cannot be. They know they are finite; they never declare “always,” as some other lovers do, but rather, “okay.”

We all want to be noticed by the universe. This is why we yelp into our virtual superaddressee: the echoing expanse of Facebook and Twitter. We are all writing our own eulogies and those of our friends, day by day, good words and bad words and sublime and despairing logics (and the Kardashians, alas) all spun together. And it is here that we address the dead in plaintive tones. In the book, a grieving Hazel reads the memorial posts on Augustus’ “wall page.” She is both horrified by and empathetic towards the endless tributes. Giving in to temptation, she replies to one post, but is never answered, “lost in the blizzard of new posts.”

Hazel finds the term “forever in our hearts” especially galling.  Skeptical of memory, she mimics the poster’s intentions: “‘You will live forever in my memory, because I will live forever! I AM YOUR GOD NOW, DEAD BOY! I OWN YOU!’ Thinking you won’t die is yet another side effect of dying.” Hazel sees through memory’s ruse: we think our power to remember and to recover memories is how we resurrect those who are lost—and that has theological implications. To possess one’s own fellow creature through memory is godlike… but we are mortals.

What happens to our memories of love and of suffering, here in the twenty-first century?

Green answers us with both dark infinitude and a leap of faith. He became a parent while writing the book, and says this changed it. When Hazel is eight, her mother fears that she will not be a mother anymore without her daughter. Years later, she moves past that into a brazen, stark resilience. She tells Hazel that she will always be her mother. Green has said, “I just could think of no other way to lay bare the absolute hideousness of living in a world where parents have to bury their children … Humans have always lived in that world, and always will.”

And yet, he also writes: “I couldn’t write the book until I understood that the love between a parent and child (like many other kinds of love) is literally stronger than death: As long as either person survives, the relationship survives.”

John Green wants to have his existential cake, and eat it, too. Maybe that’s not the worst idea ever.

Okay.

Jodi Eichler-Levine is Associate Professor of Religious Studies at the University of Wisconsin, Oshkosh. She holds a joint appointment with the Women’s Studies Program. She is the author of Suffer the Little Children: Uses of the Past in Jewish and African American Children’s Literature (NYU Press, 2013).

Depictions of masculinity on television

—Amanda D. Lotz

It is revealing that so little has been written about men on television. Men have embodied such an undeniable presence and composed a significant percentage of the actors upon the small screen—be they real or fictional—since the dawn of this central cultural medium and yet rarely have been considered as a particularly gendered group. In some ways a parallel exists with the situation of men in history that Michael Kimmel notes in his cultural history, Manhood in America. Kimmel opens his book by noting that “American men have no history” because although the dominant and widely known version of American history is full of men, it never considers the key figures as men. Similarly to Kimmel’s assertion, then, we can claim that we have no history of men, masculinity, and manhood on television—or at best, a very limited one—despite the fact that male characters have been central in all aspects of the sixty-some years of US television history. It is the peculiar situation that nearly all assessments of gender and television have examined the place and nature of women, femininity, and feminism on television while we have no typologies of archetypes or thematic analyses of stories about men or masculinities.

For much of television studies’ brief history, this attention to women made considerable sense given prevailing frameworks for understanding the significance of gender representation in the media. Analyses of women on television largely emerged out of concern about women’s historical absence in central roles and the lack of diversity in their portrayals. Exhaustive surveys of characters revealed that women were underrepresented on television relative to their composition of the general populace and that those onscreen tended to be relegated to roles as wives, love interests, or sex objects. In many cases, this analysis was linked with the feminist project of illustrating how television contributed to the social construction of beliefs about gender roles and abilities, and given the considerable gender-based inequity onscreen and off, attention to the situation of men seemed less pressing. As a result, far less research has considered representations of men on television and the norms or changes in the stories the medium has told about being a man.

Transitioning the frameworks used for analyzing women on television is not as simple as changing the focus of which characters or series one examines. Analyzing men and masculinity also requires a different theoretical framework, as the task of the analysis is not a matter of identifying underrepresentation or problematic stereotypes in the manner that has dominated considerations of female characters. The historic diversity of stories about and depictions of straight white men has seemed to prevent the development of “stereotypes” that have plagued depictions of women and has lessened the perceived need to interrogate straight white men’s depictions and the stories predominantly told about their lives. Any single story about a straight white man has seemed insignificant relative to the many others circulating simultaneously, so no one worried that the populace would begin to assume all men were babbling incompetents when Darrin bumbled through episodes of Bewitched, that all men were bigoted louts because of Archie Bunker, or even that all men were conflicted yet homicidal thugs in the wake of Tony Soprano. Further, given men’s dominance in society, concern about their representation lacked the activist motivation compelling the study of women that tied women’s subordinated place in society to the way they appeared—or didn’t appear—in popular media.

So why explore men now? First, it was arguably shortsighted to ignore analysis of men and changing patterns in the dominant masculinities offered by television to the degree that has occurred. Images of and stories about straight white men have been just as important in fostering perceptions of gender roles, but they have done their work by prioritizing some attributes of masculinity—supported some ways of being a man—more than others. Although men’s roles might not have been limited to the narrow opportunities available to women for much of television history, characteristics consistent with a preferred masculinity have pervaded—always specific to the era of production—that might generally be described as the attributes consistent with what is meant when a male is told to “be a man.” In the past, traits such as the stoicism and controlled emotionality of not being moved to tears, of proving oneself capable of physical feats, and of aggressive leadership in the workplace and home have been common. Men’s roles have been more varied than women’s, but television storytelling has nevertheless performed significant ideological work by consistently supporting some behaviors, traits, and beliefs among the male characters it constructs as heroic or admirable, while denigrating others. So although television series may have displayed a range of men and masculinities, they also circumscribed a “preferred” or “best” masculinity through attributes that were consistently idealized.

The lack of comprehensive attention to men in any era of television’s sixty-some-year history makes the task of beginning difficult because there are so few historical benchmarks or established histories or typologies against which newer developments can be gauged. Perhaps few have considered the history of male portrayal because so many characteristics seemed unexceptional due to their consistency with expectations and because no activist movement has pushed a societal reexamination of men’s gender identity in the manner that occurred for women as a component of second-wave feminism. Male characters performed their identity in expected ways that were perceived as “natural” and drew little attention, indicating the strength of these constructs. Indeed, television’s network-era operational norms of seeking broad, heterogeneous audiences of men and women, young and old, led to representations that were fairly mundane and unlikely to shock or challenge audience expectations of gender roles.

One notable aspect of men’s depictions has been the manner through which narratives have defined them primarily as workers in public spaces or through roles as fathers or husbands—even though most male characters have been afforded access to both spaces. A key distinction between the general characterizations of men versus women has been that shows in which men functioned primarily as fathers (Father Knows Best, The Cosby Show) also allowed for them to leave the domestic sphere and have professional duties that were part of their central identity—even if actually performing these duties was rarely given significant screen time. So in addition to being fathers and husbands, with few exceptions, television’s men also have been workers. Similarly, the performance of professional duties has primarily defined the roles of another set of male characters, as for much of television history, stories about doctors, lawyers, and detectives were necessarily stories about male doctors, lawyers, and detectives. Such shows may have noted the familial status of these men but rarely have incorporated family life or issues into storytelling in a regular or consistent manner.

This split probably occurs primarily for reasons of storytelling convention rather than any concerted effort to fragment men’s identity. I belabor this point here because a gradual breakdown in this separate-spheres approach occurs in many dramatic depictions of men beginning in the 1980s and becomes common enough to characterize a sub-genre by the twenty-first century. Whether allowing a male character an inner life that is revealed through first-person voice-over—as in series such as Magnum, P.I., Dexter, or Hung—or gradually connecting men’s private and professional lives even when the narrative primarily depicts only one of these spheres—as in Hill Street Blues or ER—such cases in which the whole lives of men contribute to characterization can be seen as antecedents to the narratives that emphasize the multifaceted approach to male characters that occurs in the male-centered serial in the early 2000s. Though these series offer intricately drawn and complex protagonists, their narrative framing does not propose them as “role models” or as men who have figured out the challenges of contemporary life. The series and their characters provide not so much a blueprint of how to be a man in contemporary society as a constellation of case studies exposing, but not resolving, the challenges faced.

The scholarly inattention to men on television is oddly somewhat particular to the study of television. The field of film studies features a fairly extensive range of scholarship attending to changing patterns of men’s portrayals and masculinities. While these accounts are fascinating, the specificity of film as a medium very different from television in its storytelling norms (a two-hour contained story as opposed to television’s prevailing use of continuing characters over years of narrative), industrial characteristics (the economic model of film was built on audiences paying for a one-time engagement with the story while television relies on advertisers that seek a mass audience on an ongoing basis), and reception environment (one chooses to go out and see films as opposed to television’s flow into the home) prevents these studies of men on film from telling us much about men on television. Further, gender studies and sociology have developed extensive theories of masculinity and have been more equitable in extending beyond the study of women. Although theories developed in these fields provide a crucial starting point—such as breaking open the simple binary of masculinity and femininity to provide a language of masculinities—it is the case that the world of television does not mirror the “real world” and that the tools useful for exploring how societies police gender performance aren’t always the most helpful for analyzing fictional narratives. Sociological concepts about men aid assessments of men and masculinity on television, but it is clearly the case that the particularities of television’s dominant cultural, industrial, and textual features require focused and specific examination.

Why Cable Guys?

One of the motivations that instigated my 2006 book Redesigning Women: Television after the Network Era was frustration with how increasingly outdated frameworks for understanding the political significance of emerging gender representations were inspiring mis-, or at least incomplete, readings of shows and characters that indicated a rupture from previous norms. Tools established to make sense of a milieu lacking central female protagonists disregarded key contextual adjustments—such as the gradual incorporation of aspects of second-wave feminism into many aspects of public and private life—and were inadequate in a society profoundly different from that of the late 1960s. For example, it seemed that some aspects of gender scripts had changed enough to make the old models outdated, or that there was something more to Ally McBeal than the length of her skirts, her visions of dancing babies, and her longing for lost love that had led to scorn and dismissal from those applying conventional feminist analytics. Given generational and sociohistorical transitions apparent by the mid-1990s, it seemed that this series and its stories might be trying to voice and engage with adjustments in gender politics rather than be the same old effort to contain women through domesticity and conventional femininity, as was frequently asserted.

I’m struck with a similar impulse in reflecting on how stories about men, their lives, and their relationships have become increasingly complicated in the fictional narratives of the last decade. Indeed, this evolution in depictions of male identities has not received the kind of attention levied on the arrival of the sexy, career-driven singles of Sex and the City and Ally McBeal or the physically empowered tough women of Buffy the Vampire Slayer or Xena: Warrior Princess. Assessments of men in popular culture, and particularly television, haven’t been plentiful in the last decade. Most of the discussion of men on television merely acknowledges new trends in depiction—whether they be the sensitivity and everymanness of broadcast characters or the dastardly antiheroism of cable protagonists. Such trend pieces have offered little deeper engagement with the cultural and industrial features contributing to these shifts or analysis of what their consequences might be for the cultures consuming them.

While these curiosities might motivate any scholar, I suspect the motivations of a female feminist scholar embarking on an analysis of men and masculinity also deserve some explanation. In addition to curiosity about shifting depictions and stories on my television screen, for well over a decade I’ve also had the sense that “something is going on” with men of the post–Baby Boomer generation, who, like me, were born into a world already responding to the critiques and activism of second-wave feminism. Yet nothing I’ve read has adequately captured the perplexing negotiations I’ve observed. For example, on a sunny Tuesday morning just after the end of winter semester classes, I took a weekday to enjoy the arrival of spring with my toddler. We found ourselves in the sandpit at the neighborhood park, and shared it that day with two sisters—one a bit older, the other a bit younger than my nearly two-year-old son—who were being watched over by their father. He was about my age and was similarly clad in the parental uniform of exercise pants and a fleece jacket. With some curiosity I unobtrusively watched him interact with his daughters. Dads providing childcare aren’t uncommon in my neighborhood—overrun as it is with academics and medical professionals with odd hours that allow for unconventional childcare arrangements—but something in his demeanor, his willingness to go all in to the tea party of sandcakes his oldest was engaging him with, grabbed my attention for its play with gender roles. It reminded me of the many male friends with whom I share a history back to our teen years who have similarly transformed into engaged and involved dads; they’ve seemingly eradicated much of the juvenile, but also sexist, perspectives they once presented, and also have become men very different from their fathers. Then his phone rang. Immediately, his body language and intonation shifted as he became a much more conventional “guy.” Was it a brother? It was definitely another man. An entirely different performance overtook his speech and demeanor as he strolled away from the sandpit, yet, suggesting that all was not reversed, he proceeded to discuss attending a baby shower, whether he and his wife would get a sitter, and the etiquette of gift giving for second babies. When the call ended he shifted back to the self I had first observed.

Watching this made me reflect on how the gender-based complaints I might register regarding balancing work and family—such as the exhausting demands, the still-tricky negotiations of relationships that cross the working mom/stay-at-home mom divide, and the ever-ratcheting pressure to be the Best Mom Ever while maintaining pre-mom employment productivity—have been well documented by others and are problems with a name. My male peers, in contrast, must feel as though they are out at sea with no land or comrades in sight. Esteemed gender historian Stephanie Coontz has gone so far as to propose the term and reality of a “masculine mystique” as an important component of contemporary gender issues.

This wasn’t the first time I’d been left thinking about the contradictory messages offered to men these days. The uncertain embodiment of contemporary manhood appears in many places. For years now I’ve wondered, even worried, about the men in my classes. In general, they seem to decrease in number each year, perhaps being eaten by the ball caps pulled ever lower on their foreheads. As a hopefully enlightened feminist scholar, I try to stay attuned to the gender dynamics of my classroom—but what I’ve commonly found is not at all what I was prepared for or expected. Consistent with the Atlantic cover story in the summer of 2010 that declared “The End of Men” and touted that women had become the majority of the workforce, that the majority of managers were women, and that three women earned college degrees for every two men, the young women in my classes consistently outperform their male peers on every measure of performance—tests, papers, class participation, attendance. I haven’t been able to explain why, but it has seemed that most—although certainly not all—of the young men have no idea why they find themselves seated in a college classroom or what they are meant to do there. I must acknowledge, though, that despite evidence of female advancement in sectors of the academy like mine, men still dominate many of the most prestigious and financially well-rewarded fields, including engineering, business, and computer science.

I brought my pondering about classroom gender dynamics home at night as I negotiated the beginning of a heterosexual cohabitation in the late 1990s and thought a lot about what it meant to become a “wife” and eventually a “mother.” There were also conversations about what it meant to be the husband of a feminist and how being a dad has changed since our parents started out, although the grounds for these talks were more uncertain, and role models and gender scripts were harder to find. Both in charting our early years of marriage and still in facing parenthood, my husband and I have often felt adrift and without models. Although we had little to quibble with in regard to our own upbringing, neither of us was raised in a household in which both parents had full-time careers, which seemed quite a game changer and has proved the source of our most contentious dilemmas. While a wide range of feminist scholarship and perspectives has offered insight into the challenges of being a mom and professor, my husband and his compatriots seem to be divining paths without a map or a trail guide. As the mother of both a son and a daughter, I feel somewhat more prepared to help my daughter find her way among culturally imposed gender norms than my son; at least for her the threats and perils are known and named.

Amanda D. Lotz is Associate Professor of Communication Studies at the University of Michigan. She is the author of Cable Guys: Television and Masculinities in the 21st Century (NYU Press, 2014).

[Read a fuller version of this excerpt from Amanda D. Lotz’s new book, Cable Guys, on Salon.com.]

No April Fool: Q&A with author Kembrew McLeod

To celebrate April 1 and the release of our new book, Pranksters: Making Mischief in the Modern World, today we have a Q&A with the author—and self-proclaimed prankster—Kembrew McLeod. McLeod discusses pranks, hoaxes and cons (and what makes them different), the origins of secret societies, and how pranks and humor have been used throughout history to spark debate and inspire change.

Interviewer: What are the differences between pranks, hoaxes and cons?

Kembrew McLeod: When media outlets report that a person has been “pranked,” they are often discussing what I consider a hoax. A hoax is a kissing cousin of a prank, but its primary purpose is to fool people and attract attention. A prank, for me, is a staged provocation that uses media to enlighten or stir up a debate. I use cons as an all-purpose term for scams that are meant to defraud or gain an advantage—like an email phishing scam. Although it seems like the Internet Age has created a hurricane of pranking, hoaxing and conning, this tricky tradition has thrived for centuries.

You mention that one of America’s “founding fathers” was a merry prankster.

Ben Franklin was an O.P.—Original Prankster. In fact, Franklin’s very first print publication was a pseudonymously penned hoax (he wrote more than 100 satires, pranks and hoaxes under fake names over the course of his lifetime). Just before he died, Franklin penned an op-ed under the name “Historicus,” which trolled the anti-abolitionists by arguing that Muslims should enslave Christians. You won’t find that story in any Fox News-produced documentary on Ben Franklin!

What does media have to do with pranks?

If reduced to a mathematical formula, the art and science of pranking can be expressed as Performance Art + Satire x Media = Pranks. Put simply, pranks are playful critiques performed within the public sphere, and amplified by media. They allow ordinary people to reach large audiences despite constraints (like a lack of wealth or connections) that would normally mute their voices.

What are the prank origins of the urban legend that smoking banana peels can get you high?

Members of the hippie band Country Joe & the Fish started this rumor, which first spread through word of mouth and was quickly picked up by the national news media. Soon, lots of people joined in on the fun. For instance, Rep. Frank Thompson drafted the Banana Labeling Act of 1967 after a “high official in the FDA,” the Congressman claimed, urged him to introduce the bill. “From bananas,” Thompson stated in the halls of Congress, “it is a short but shocking step to other fruits.”

The past year has seen many pranks and hoaxes. Does the wired age lend itself to these events, or are we just more aware of them?

The Internet has changed the ways that pranks, hoaxes and cons can circulate, but trickery has been a pronounced part of the modern age since Jonathan Swift’s time. Pranks went viral much more slowly back then, but the dynamic is still the same.

Your book pays homage to women involved in important pranks. Many readers are probably familiar with Yoko Ono, but fewer know WITCH. What was WITCH?

The Women’s International Terrorist Conspiracy from Hell (WITCH) was an unruly group of ’60s feminists who pulled many a political prank. For instance, they crashed a large bridal fair and performed an Un-Wedding Ceremony: “We promise to smash the alienated family unit,” they said in unison. “We promise not to obey.”

Some people have heard of the Illuminati from hip-hop, or they may have encountered the Rosicrucians in a book or movie. What are the prank origins of these so-called “secret societies”?

The Rosicrucian Brotherhood was invented in 1614 by Protestant pranksters. Their anonymously published “Rosicrucian Manifestos” were intended to stir up a public debate about scientific and theological ideas that the Catholic Church opposed. The Rosicrucian myth created the template for virtually every occult conspiracy theory that followed: an elite body of initiates—a satanic secret society within a secret society, sometimes known as the Illuminati—that wants to overthrow the established religious-political authority and create a New World Order.

Why do people put so much credence in ideas that a simple Google search can debunk?

Belief systems are powerful. People fall for pranks, hoaxes, cons and conspiracy theories when they confirm their deep-seated worldviews. Conspiracy theories are inherently non-falsifiable, and any attempt to disprove them is considered suspect.

What sparked your interest in pranks?

When I was a twenty-year-old college student, I created a fictitious movement to change my school mascot to a three-eyed pig with antlers. It snowballed from the campus newspaper to regional news media, eventually landing on CNN. Reflecting back on that mascot-changing prank, I came to understand how trickery can shape mass media and, to a certain extent, how we perceive the world. It was my first dive into the prankster pond, and I was never the same.

Finally: Is Andy Kaufman still alive?

You’ll have to ask him yourself.

Embracing spreadability in academic publishing

—Sam Ford

The world of academic publishing was built on a model of scarcity. The specialist knowledge of an academic discipline was considered too narrow for general commercial publication, so a niche industry was built to support the development and publication of essay- and book-length academic works. Academic presses played a vital role in this model and built their infrastructure to protect academic scholarship and make it available to university libraries and specialists in a particular field. And, in return, the system for evaluating success among academics has been built in tandem with this publishing model—so that publishing milestones have become the logic on which tenure processes are built.

I had the pleasure of being invited to speak to the American Association of University Presses last summer on a panel about “reaching the world.” There, I argued that university presses have to rethink their raison d’être in the 21st century.

In a world where information is now overabundant rather than scarce, might it make sense that publishers have to change their logic dramatically in order to stay relevant? Rather than protecting information and circulating it inside academia, as the old model did, might not the role of the press be to curate and further cultivate the most important content in that vast field—and, equally important, to focus on bringing that content to new audiences outside university libraries and the professionals within one discipline?

I cited, as an example, my experiences with Spreadable Media, the book I published this year (co-authored with Henry Jenkins and Joshua Green) with New York University Press. There were few arguments or examples in this book that weren’t, in some form, published or presented somewhere previously: various white papers, blog posts, online articles, academic essays, keynote speeches, and so on. And we have published excerpts and examples from the book in a variety of places since it came out. Further, the overall project included more than 30 essays, available freely online, in addition to the book we co-authored.

As far as I can tell, the availability of all that material hasn’t hindered interest in our book. For the few people who might have bought the book but were instead sated by the information available online, there were many more who discovered the book through these various materials and purchased it.

Writing more than a decade ago about piracy, Tim O’Reilly said, “Obscurity is a far greater threat to authors and creative artists than piracy.” The same can be said for concerns about “self-cannibalization.” And the logic of at least some presses’ acquisition editors underscores this. Consider this statement from Harvard University Press: “prior availability doesn’t have a clear relationship to market viability.”

An early version of a piece that Peter Froehlich (of Indiana University Press) published in Learned Publishing in October highlights the model now employed by Harvard Business Review Press as a potential way forward: the press embraces multiple-platform publishing, treating its blog, its magazine, and its books as varying tiers of publication and welcoming authors who share their ideas elsewhere—in the process developing a reputation as a catalyst for thinking and then curating the best of that thinking in increasingly formal ways.

In this model, the book acts as a thoroughly edited articulation of an idea at a moment in time: the culmination of work up to that point, the launching point of work to come. And the press helps take that idea and make it accessible, in reasonable fullness, to those who haven’t been following the development of the argument all along the way. In other words, the press’ role is about curating the information that most needs to be preserved and then making that information more visible to people outside the narrow field from which it came.

A similar model can be seen at publications like Fast Company. Authors like me write online pieces, with Fast Company receiving 24-hour exclusivity before the writing is shared elsewhere. The magazine may pull together and curate its deepest, most considered pieces. Meanwhile, thoughts I initiated at Fast Company may eventually show up elsewhere (properly attributed and sourced, of course). It’s a publishing model that still provides windows for a viable business without being focused on locking content down.

This is a vital problem to figure out, not just for the current and next generation of academics but, crucially, for the next generation of college students and for all of us who benefit when ideas from within the academy spread throughout the culture and our professional worlds. It’s not just an issue for niche university presses to solve but a crucial question for us all.

Sam Ford is Director of Audience Engagement with Peppercomm, an affiliate with both MIT Comparative Media Studies/Writing and the Western Kentucky University Popular Culture Studies Program, and co-author of Spreadable Media: Creating Value and Meaning in a Networked Culture (NYU Press, 2013). He is also a contributor to Harvard Business Review and Fast Company.

Breaking Bad breakdown: Deserving denouement

—Jason Mittell

[A longer version of this article originally appeared on the media and cultural studies blog, Antenna.]

What do we want from a finale? Should it be a spectacular episode that serves as the dramatic peak of the series? Should it be like any other episode of the series, only more so? Should it be surprising, shocking, or transformative? Or should it offer closure?

For me, the main thing I’m looking for in any finale is for a series to be true to itself, concluding in a way that is consistent with what came before and offering the type of ending that its beginning and middle demand. That changes based on the series, obviously, but it’s what makes Six Feet Under’s emphasis on mortality and The Wire’s portrayal of the endless cycle of urban crime and decay so effective, and why Battlestar Galactica’s final-act turn toward mysticism (and that goofy robot epilogue) felt like such a betrayal to many fans. And it’s why finales like The Sopranos and Lost divide viewers, as they cater to one aspect of each series at the expense of other facets in which some fans were much more invested.

“Felina” delivered the ending that Breaking Bad needed by emphasizing closure over surprise. In many ways, it was predictable, with fans guessing many of the plot developments—read through the suggestions on Linda Holmes’s site for claiming cockamamie theories and you’ll see pretty much everything that happened on “Felina” listed there (alongside many more predictions that did not come to pass). For a series that often thrived on delivering “holy shit” moments of narrative spectacle, the finale was quite straightforward and direct.

The big shocks and surprises were to be found in episodes leading up to this one, especially the brilliant “Ozymandias”; since then, we’ve gotten the denouement to Walt’s story, his last attempt to make his journey mean something. It’s strange to think that an episode that concludes with a robot machine gun taking down half a dozen Nazis feels like a mellow epilogue, but emotionally it was this season’s least tense and intense episode. Instead, Walt returned home a beaten-down man, lacking the emotional intensity that drove him up the criminal ladder, but driven by a plan that he had just enough energy to complete. Given that the series premise was built on the necessity of a character arc building toward finality, and that it began with that character receiving a death sentence, we always knew that closure was likely to come in the form of Walt’s death, and this episode simply showed us how his final moments played out in satisfying fashion.

While Walt’s mission to destroy the remnants of his business occupies the bulk of the episode’s plot, its emotional centerpiece is his meeting with Skyler. As always, [Bryan] Cranston and Anna Gunn make the scene crackle, conveying both the bonds and the fissures between the two characters that make their final goodbye neither reconciliation nor retribution. His visit is one of the more selfless acts we have seen from him. He has no illusions that he’ll resolve things or get her back on his side; he simply wants to give her two things. First, the coordinates for Hank and Gomie’s grave, offered to provide closure to Marie and others, as well as to assuage Walt’s guilt over this one act of violence he caused but could not stop. Second, the closest he’ll ever come to an apology—after starting with what sounds like a typical rationalization about “the things I’ve done,” which Skyler rightly attacks as another deceptive appeal to family, Walt finally admits the truth. “I did it for me. I liked it. I was good at it. And I was alive.” This is not easy for Walt to say; it is his most brutal penance, having to admit his own selfishness to both his wife and himself. But in the end, Skyler returns the favor with the gift of a final moment with Holly, the child that Walt used as a bargaining chip the last time they spoke, as she remembers the part of him that still loved his children despite his abusive treatment of them. And Walt takes his own moment to observe Flynn from afar, looking at a child who rightly despises him but whom he still loves. When I look back on this finale, this will be the scene I replay in my mind.

Of course the episode and series climax is the final confrontation. I fully believe that Walt intends to kill Jesse alongside the Nazis, convinced that his protégé has both betrayed him and stolen his formula—and, based on Badger’s testimony, that the student has surpassed the teacher. Many fans were speculating that Walt sought to “save Jesse,” but up until he sees his former partner in chained servitude, Jesse is an equal target of his wrath. Once again, Cranston conveys Walt’s emotional shifts wordlessly, as he devises a plan to spare Jesse from his robo-gun once he sees that Jesse is yet again a victim of men with larger egos and more malice than him. While this final confrontation was a satisfying moment of Walt putting the monsters that he had unleashed back in the box, it was almost entirely suspense-free. I never doubted that Walt would successfully kill the Nazis and spare Jesse, that he had poisoned Lydia, and that Jesse would not pull the trigger on Walt. These were the moral necessities of a well-crafted tale; Breaking Bad was done playing games with twists and surprises, and ready to allow Walt to sacrifice himself to put those monsters down. Yet the scene was constructed to create suspense around the possibility that Walt might not reach the remote control in time, creating a rare moment of failed tension in the series—I awaited the emotional confrontation between Walt and Jesse without ever doubting the outcome or feeling any real tension about what might happen.

The “how it happened” was quite satisfying, however. I saw the robo-gun as an homage to one of my favorite Breaking Bad scenes: in “Four Days Out,” Jesse thinks Walt is building a robot to engineer their rescue. This time he does, and it works in an appropriately macabre and darkly funny payoff: the excessive gunfire mirrors Walt’s frequent insistence on maximizing his inventions (as with the overpowered magnets, the drive to capture every last drop of methylamine, etc.), and it keeps firing blanks as Kenny’s body receives an endless massage. Although Jesse is no cold-blooded killer, killing Todd was a line he was happy to cross in payback for months of torture and for Todd’s own heartless killings of Drew Sharp and Andrea. However, when given a chance to kill Walt, Jesse takes a pass; instead he forces Walt to admit that Jesse killing him is what he wants, and then denies him that pleasure. When Jesse sees that Walt has been shot, he decides that leaving him to die alone is what Walt deserves, especially given what happened with Jane.

What Walt deserves matters in Breaking Bad. I’m reminded of an important scene in the penultimate episode of The Wire, when one character wonders why another has plotted to kill him, asking what he’s done to deserve it (keeping names vague if you haven’t seen it). The would-be killer’s reply quotes the film Unforgiven: “Deserve’s got nothing to do with it.” But on Breaking Bad, deserve’s got everything to do with it, as it has always been a tale of morality and consequences. Jesse deserves his freedom, even though he is a broken-down shell of who he was—and while we want to know what’s next for him, I’m content with the openness that allows me to imagine him driving to Alaska and becoming a carpenter, perhaps after rescuing Brock and Lydia’s daughter from orphanhood.

Walt deserves to die, and we deserve to see it. The final musical cue, in a series that excelled at using them, was Badfinger’s “Baby Blue,” another classic like “Crystal Blue Persuasion” that the producers have probably been hanging onto for years. The opening line of the song is as essential as the color-specific romance: “Guess I got what I deserve.” In this final glorious sequence, Walt gets to die in the lab, as the music sings a love song to chemistry—which, in this context, serves as an ode to his own talents in perfecting the Baby Blue. His tour around the lab has prompted some debate as to what Walt is doing: is he strategically leaving his bloody fingerprints to claim ownership, a sort of turf-claiming mark of Heisenberg Was Here? I think not; rather, Walt is admiring the precision and craft of the lab, both as a testament to the pedagogical prowess that yielded Jesse’s talents and as the natural habitat where he “felt alive,” as he told Skyler earlier. With the soundtrack romanticizing Walt’s own greatness, it’s a final moment of pride and arrogance that he seizes to overshadow all the carnage he has caused, an acceptance that, more than for his family, he did it for the chemistry.

“Felina” is far from Breaking Bad’s best episode, but it is the conclusion that the series and its viewers deserve. I think it will play even better for viewers bingeing the season in quick succession and upon rewatch, freed from the trappings of anticipation, hype, and suspense. Jesse escapes, Skyler and her family survive, and Walt and his one-time minions die. It all happens with less emotion and drama than we’ve come to expect from the series, but given the strain of the journey up to this point, we’re as emotionally drained as the characters. So a low-key bloodbath is an appropriate way to exit this wonderful trip.

Jason Mittell is Professor of Film & Media Culture and American Studies at Middlebury College. He is the co-editor (with Ethan Thompson) of How to Watch Television (NYU Press, 2013).