How email ruined my life

—Catherine Zimmer

I got my first email account the fall I started graduate school, in 1995. Even then I had an inkling of the pressures that would come to be associated with this miracle of communication. My entry into grad school coincided with a relationship transitioning into a long-distance one, and what at first was a welcome mode of remaining tethered soon enough became a source, and an outlet, of demand, anxiety, guilt, and recrimination.

This early experience of the pressure and affective weight of email faded into memory alongside that relationship, and certainly at the time it did not occur to me to hold the medium responsible for the messages. But over the past couple of years, now that I am lucky enough to be firmly cemented in an academic job and stupid enough to have taken on an administrative role, that experience has reemerged for me as something of a retroactive portent of the place email would come to hold in my life. Because as anyone in an even remotely bureaucratic environment will tell you, “email” is no longer simply a way of communicating with others, though periodically a message gets through that is significant enough that the medium becomes momentarily transparent. Email is now an entity in and of itself—a gargantuan, self-perpetuating and purely self-referential monstrosity that I do not “use” but barely “manage,” a time-sucking and soul-crushing mass that I can only chip away at in an attempt to climb over or around to get to an actual task.

From epidemic-level ADHD and eye fatigue to degenerative spinal conditions at younger and younger ages—not to mention my self-diagnosed early-onset thumb arthritis—constant interaction with digital devices has arguably had widespread health consequences. It is also fraught with an expansion, intensification, and perversion of the emotions associated with that first email account. But while then I attached those affective elements to a romantic relationship, they are now purely indicative of my relationship to email “itself”: the phenomenon that makes constant and growing demands on my time, attention, and energy, requiring that I devote at least a modicum of substantive thought to each individual expression of its enormous, garbage-filled maw. Time spent on email has grown to hours every day. This is not a measure of increased “productivity.” In fact it is just the opposite, as answering email has become the forest I have to machete my way through just to get to the things that actually constitute my job. And while I do get angry at the jaw-dropping idiocy of certain student emails (Hi Mrs. Zimmer can u send me the syllabus because the final is tomorrow and i missed the last eleven weeks of class) and irritated at the endlessly accumulating details of academic work (Dear Dr. Zimmer, this is a friendly reminder that the due date for the Learning Outcomes Rubrics Departmental Self-Assessment Model is March 23rd), ultimately each one of these individually maddening iterations is just a sign of the incomprehensible sprawl of the medium. And when factored in with texting, messaging, social media, streaming television, and any number of other incoming and outgoing information flows, the sense of being “overwhelmed” seems unsurprisingly ubiquitous.

Email is of course inseparable from the character of any digital labor and the economy of which it is a part: it thus becomes a useful metonymic device to understand how convenience has become so profoundly debilitating. Though no one explicitly states it (because it would sound insane), the demand that we keep up with and process this level of information, and communicate coherently in return, is a demand that the human brain function closer to the speed of digital communications. Needless to say, it does not. Thus the unparalleled levels of prescription of amphetamines and pain medications are not merely the triumph of the pharmaceutical industry, but an attempt to make the human brain and body function alongside and through digital mediation. The relative ease of communications, the instantaneity of information exchange, does not make our lives simpler: it means that we are asked to attend to every goddamn thing that occurs to the countless people we know, institutions we are a part of, and every other organization whose mailing list we have been automatically placed on simply by having a single interaction with them. It’s like being able to hear the internal mutterings of both individual people and cultural constructs: a litany of the needs of others and the expectations of the social sphere (not to mention my own neurotic meanderings when I have to construct a careful response to someone, or an email I have sent goes unanswered). Finding it increasingly impossible to recognize and affectively react only to the articulations of each missive, I respond instead to the cacophonous noise of the whole damn thing. That noise is now constant, while its volume ebbs and flows with the rhythms of the work year. As the only constant, email becomes an end in itself. Email never goes away. Email is an asshole.

It is not surprising that this self-perpetuating mode of interaction comes alongside a proliferation of (self-)assessment and (self-)documentation—talking about what you will, have, or are doing instead of just doing it. Thus the ability to communicate about everything, at all times, seems to have come with the attendant requirements that we accompany every action with a qualitative and quantitative discourse about that action. Inside and in addition to this vast circularity are all those things that one’s job actually entails on a practical, daily basis: all the small questions, all the little tasks that need to be accomplished to make sure a class gets scheduled, a course description is revised, or a grade gets changed. Given how few academic organizations have well-functioning automatic systems that might allow these elements to be managed simply, and that my own university seems especially committed to individually hand-cranking every single gear involved in its operation on an ad hoc basis, most elements of my job mean that emails need to be sent to other people.

Once I send an email, I can do nothing further until someone sends an email back, and thus in a sense, sending that email becomes a task in itself, a task now completed. More and more it is just a game of hot potato, with everyone supposedly moving the task forward by getting it off their desk and onto someone else’s, via email. Every node in this network is itself fighting to keep up with its own emails, in the back and forth required before anything can actually be done. The irony of the incredible speed of digital mediation is thus that it often results in an intractable slowness in accomplishing simple tasks. (My solution has been to return to the telephone, which easily reduces any 10-email exchange to a 2-minute conversation. Sidenote: I never answer my own phone.)

In case it isn’t already clear, such an onslaught of emails, and the pressure of immediacy exerted sometimes explicitly but mostly by the character of the media, means that we no longer get to leave work (or school, or our friends or our partners). We are always at work, even during time off. The joy of turning on our vacation auto-reply messages is cursory, for even as we cite the “limited access” we will have to email (in, like, Vancouver), we know that we can and will check it. And of course we know that everyone else knows that it’s a lie. Even if we really do take time away from email, making ourselves unavailable (not looking at email, not answering our texts) does not mean email has not been sent to us and is not waiting for us. And we know it, with virtually every fiber of our being. Our practical unavailability does not mitigate our affective understanding that if we ignore email too long, not only will work pile up, but there will be emotional consequences. I can feel the brewing hostility of the email senders: irritated, anxious, angry, disappointed.

Even if I start to relax on one level, on another my own anxiety, irritation, and guilt begin to grow. Email doesn’t go away. It’s never over. It’s the fucking digital Babadook, a relentless, reflexive reminder of the unfathomable mass underlying every small transaction of information.

The nonstop stream of communication and its affective vortex are in part what philosopher Gilles Deleuze (and now many others) has described as “societies of control,” distinguished not by discipline but by the constant modulation and management of the flow of information. Ultimately we are exhausted by the endless negotiation of this unmappable terrain, and our personal and professional labors increasingly have the character of merely keeping ourselves afloat. Which is not to say that discipline no longer functions: those excluded from the privilege of control will often find themselves subject to the sharper baton of policing and incarceration.

There does appear to be increasingly widespread recognition that email is having a significant effect on both the amount of work one does and the increasing threat of that work to health and well-being. A widely and enthusiastically misreported news story that France had instituted a ban on sending work email after 6:00pm provided a much-needed salve for the idea that there is no outside to the onslaught. Never mind that this was a wishful, apocryphal version of a French labor agreement that in reality didn’t cut off email at any hour—the story still allowed France to perform its ongoing service as the English-speaking world’s fetish of a superior, butter-drenched, bicycle-riding quality of life, a life in which steak frites is now accompanied by possible escape from a particularly maddening incarnation of digital labor. That life is apparently now the stuff of internet rumor and media fancy.

The range of feelings I associate with the era of my first email account roll on through now and then as I check my inbox, and I could probably name them, though perhaps they were never discrete. And I understand that it is my job as a participant in digital culture to respond to email, and text, and instant messaging—in writing and in sentiment. But the truth is that I am just really tired. Perhaps the vacuum in affect attested to by the accumulation of emoticons and emojis has little to do with the flattening effect of digital communication. Maybe feelings are simply exhausted.

Catherine Zimmer is Associate Professor of Film and Screen Studies and English at Pace University in New York City. She is the author of Surveillance Cinema (NYU Press, 2015).

[This piece was originally posted on Avidly, a Los Angeles Review of Books channel.]

‘Fun Home’ and Pride

—Amber Jamilla Musser

On June 7th, 2015, the musical Fun Home emerged triumphant. It won 5 Tony Awards, including Best Musical, Best Original Score, Best Book of a Musical, Best Lead Actor in a Musical, and Best Direction of a Musical. The significance of these wins cannot be overstated. A musical based on a graphic memoir featuring a lesbian, her gay father, and the rest of the family has been thrust into the purview of mainstream America—and really, who can resist having ALL of the feelings when Sydney Lucas sings “Ring of Keys”? Moreover, Jeanine Tesori and Lisa Kron have made history as the first women to win a Tony for best songwriting team.

It is clear that Fun Home gives people many reasons to be proud, especially in a month when we traditionally celebrate LGBT pride. One of the things that I find most moving about the musical (and the original graphic memoir by Alison Bechdel) is the way it actually subverts traditional narratives of pride and shame based on particular understandings of identity and masochism.

One of the conventional understandings of Pride is that it exists to celebrate triumph over homophobia and prejudice against LGBT people. That this narrative privileges a particular form of progress and has been easier for particular segments of the LGBT population is something that has been written about extensively by other queer studies scholars. In this post, I’m more interested in mentioning the ways that this conventional version of identity politics shores up a particular vision of masochism. One of the main arguments in my book Sensational Flesh: Race, Power, and Masochism is that the framework that we’ve been using to understand the relationship between individuals and power is masochism. In the book that means various things, but in the context of Pride, it has meant reveling in the wounds that produce LGBT identity—triumph would not be possible if there were no obstacle to overcome and the more wounds that are available, the more visible the triumph and the more celebrated the identity/person.

While I am not the first to describe this relationship between identity, woundedness, and masochism, I argue that this narrative frames our understanding of what it is to be an individual, so that those with the privilege of appearing wounded are able to do so because they are already part of an assumed arc of redemption and celebration, while those whose wounds are less affective and more structural in terms of access to resources cannot access this arc in the same way (see last year’s post on Kara Walker as an example).

On the surface, it would appear as though Fun Home could fall easily into this particular trope, but it smartly sidesteps the arc of progress. In her retrospective gaze at her family life and its relationship to her father’s gayness, Alison (the oldest version of the character that we see) doesn’t pity her father or frame his suicide as the effect of a bygone prejudice that she has been fortunate to avoid. The question is not what would have happened to Bruce Bechdel had he lived in an era when he could live freely as a gay man. Neither is the focus on Alison’s ability to come out as a college student and live as a butch because things are better now. The universe of the musical understands these characters as inhabiting different modes of queerness, but it doesn’t ask us to do a comparison (despite the fact that Bruce commits suicide, which would seem to be the ultimate masochistic act).

Instead, the character whose life we imagine might have been different is Bechdel’s mother, Helen, played achingly by Judy Kuhn, whose song near the end of the show, “Days and Days,” is a tearjerker, not because she is self-pitying but because she is resigned. This is structural difference at work. She knows that her suffering does not connect to later progress or triumph, but it does not diminish her work or her pain.

Where does this lacuna of feeling lie in a world structured by suffering or triumph, a world where the individual is a masochist in order to receive redemption through pity? Throughout the musical, we see so many moments when the semi-closeted world that Bruce inhabits, and that his daughter so desperately wants to remember and connect to, is not uniformly sad; there is fun—a dance with a casket, a furtive sighting of a kindred spirit (the butch that Lucas sings so movingly about). In all, it is not a play about moving through masochism to find identity, but about recognizing the many different notes being played at the same time. The arc of identity need not be neat or masochistic (so as to end in triumph), but it makes one feel, and gives reason for finding different narratives of individuality.

Amber Jamilla Musser is Assistant Professor of Women, Gender, and Sexuality Studies at Washington University in St. Louis. She is the author of Sensational Flesh: Race, Power, and Masochism (NYU Press, 2014).

Mad Men, Esalen, and spiritual privilege

—Marion Goldman

The online community is still pulsing with speculation about the final close-up of Don Draper meditating on the edge of the Pacific at Esalen Institute—where he found bliss or maybe just an idea for another blockbuster ad campaign.

The writers and set decorators of Mad Men got 1970s Esalen spot on: from the lone outside pay phone at the run-down central Lodge to the dozens of long-haired hippies, former beatniks and spiritual seekers revealing themselves to each other in encounter groups. The images are so accurate that an alternative cyber universe of old Esalen hands has been speculating about how the writers were able to depict the old days so well—and whether the morning meditation leader was supposed to be Zen trailblazer Alan Watts or edgy encounter group leader Will Schutz.

None of these debates matter much to the entrepreneurs who have transformed Esalen from a rustic spiritual retreat to a polished destination resort that serves gourmet meals and offers workshops with themes like ‘capitalism and higher consciousness.’ Soon after the last episode of Mad Men aired, Yahoo Travel published an article promoting a “Don Draper Weekend Getaway” for fortunate consumers who could foot the tab. The rates vary, but on a weekend, a premium single room at Esalen costs $450 per night and the prices go way up for luxurious accommodations overlooking the sea. In a throwback to the old days, there is a ‘hardship policy’—making it possible for up to a dozen people who take weekend workshops to spend ‘only’ about $200 a night to spread out their sleeping bags in meeting rooms that they must vacate between 9:00 in the morning and 11:00 at night.

When Esalen opened its gates in the 1960s, visitors and residents traded work for housing or paid what they could afford. The founding generation believed that everyone was entitled to personal expansion and spiritual awakening through the growing Human Potential Movement. My book, The American Soul Rush, chronicles how Esalen changed from a mystical think tank, sacred retreat, and therapeutic community into a wellness spa dedicated to de-stressing affluent customers with challenges at work or in their relationships.

In the late 1960s and early 1970s very different kinds of people drove along Highway 1 to Esalen, hoping to create better lives for themselves and often hoping to repair the world as well. They were spiritually privileged, with the time and resources to select, combine and revise their religious beliefs and personal practices. However, many of them were far from wealthy, because Esalen opened at a time of economic abundance that extended far down into the white middle class and there was widespread faith in unlimited possibilities for every American.

People in small towns and distant cities read long articles about Esalen and human possibilities in Life Magazine, Newsweek and other popular periodicals. Its key encounter group leader briefly became a celebrity when he appeared regularly on the Tonight Show Starring Johnny Carson. And during Esalen’s glory days, movie stars like Natalie Wood, Cary Grant and Steve McQueen regularly drove north from Hollywood to discover more about themselves and to soak in the famous hot springs baths. But once they arrived, they stayed in simple rooms and were called only by their first names, and other workshop participants tried to honor their humanity by treating the stars as if they were just like them.

Esalen was dedicated to opening the gates to personal and spiritual expansion to everyone and it fueled a Soul Rush. It popularized many things that contemporary Americans have added to their lives and can practice almost anywhere: yoga, mindful meditation, holistic health, humanistic psychology and therapeutic massage.

But most people can no longer afford to visit Esalen itself. A leader who left Big Sur to counsel clients in disadvantaged neighborhoods summed up how much the Institute has changed over the decades: “Damn,” she said, “I guess we got gentrified just like everybody else.”

Marion Goldman is Professor of Sociology and Religious Studies at the University of Oregon, and author of The American Soul Rush: Esalen and the Rise of Spiritual Privilege (NYU Press, 2012).

Why has TV storytelling become so complex?

—Jason Mittell

[This post originally appeared at The Conversation.]

If you watch the season finale of The Walking Dead this Sunday, the story will likely evoke events from previous episodes, while making references to an array of minor and major characters. Such storytelling devices belie the show’s simplistic scenario of zombie survival, but are consistent with a major trend in television narrative.

Prime time television’s storytelling palette is broader than ever before, and today, a serialized show like The Walking Dead is more the norm than the exception. We can see the heavy use of serialization in other dramas (The Good Wife and Fargo) and comedies (Girls and New Girl). And some series have used self-conscious narrative devices like dual time frames (True Detective), voice-over narration (Jane the Virgin) and direct address of the viewer (House of Cards). Meanwhile, shows like Louie blur the line between fantasy and reality.

 

Many have praised contemporary television using cross-media accolades like “novelistic” or “cinematic.” But I believe we should recognize the medium’s aesthetic accomplishments on its own terms. For this reason, the name I’ve given to this shift in television storytelling is “complex TV.”

There is a wealth of facets to explore about such developments (enough to fill a book), but there’s one core question that seems to go unasked: “Why has American television suddenly embraced complex storytelling in recent years?”

To answer, we need to consider major shifts in the television industry, new forms of television technology, and the growth of active, engaged viewing communities.

A business model transformed

We can quibble about the precise chronology, but programs that were exceptionally innovative in their storytelling in the 1990s (Seinfeld, The X-Files, Buffy the Vampire Slayer) appear more in line with narrative norms of the 2000s. And many of their innovations – season-long narrative arcs or single episodes that feature markedly unusual storytelling devices – seem almost formulaic today.

What changed to allow this rapid shift to happen?

As with all facets of American television, the economic goals of the industry are a primary motivation for all programming decisions.

For most of their existence, television networks sought to reach the broadest possible audiences. Typically, this meant pursuing a strategy of mass appeal featuring what some derisively call “least objectionable programming.” To appeal to as many viewers as possible, these shows avoided controversial content or confusing structures.

But with the advent of cable television channels in the 1980s and 1990s, audiences became more diffuse. Suddenly, it was more feasible to craft a successful program by appealing to a smaller, more demographically uniform subset of viewers – a trend that accelerated into the 2000s.

In one telling example, FOX’s 1996 series Profit, which possessed many of contemporary television’s narrative complexities, was quickly canceled after four episodes for weak ratings (roughly 5.3 million households). These numbers placed it 83rd among 87 prime time series.

Yet today, such ratings would likely rank the show in the top 20 most-watched broadcast programs in a given week.

This era of complex television has benefited not only from more niche audiences, but also from the emergence of channels beyond the traditional broadcast networks. Certainly HBO’s growth into an original programming powerhouse is a crucial catalyst, with landmarks such as The Sopranos and The Wire.

But other cable channels have followed suit, crafting original programming that wouldn’t fly on the traditional “Big Four” networks of ABC, CBS, NBC and FOX.

A well-made, narratively complex series can be used to rebrand a channel as a more prestigious, desirable destination. The Shield and It’s Always Sunny in Philadelphia transformed FX into a channel known for nuanced drama and comedy. Mad Men and Breaking Bad similarly bolstered AMC’s reputation.

The success of these networks has led upstart viewing services like Netflix and Amazon to champion complex, original content of their own – while charging a subscription fee.

The effect of this shift has been to make complex television a desirable business strategy. It’s no longer the risky proposition it was for most of the 20th century.

Miss something? Hit rewind

Technological changes have also played an important role.

Many new series reduce the internal storytelling redundancy typical of traditional television programs (where dialogue was regularly employed to remind viewers what had previously occurred).

Instead, these series subtly refer to previous episodes, insert more characters without worrying about confusing viewers, and present long-simmering mysteries and enigmas that span multiple seasons. Think of examples such as Lost, Arrested Development and Game of Thrones. Such series embrace complexity to an extent that they almost require multiple viewings simply to be understood.

In the 20th century, rewatching a program meant either relying on almost random reruns or being savvy enough to tape the show on your VCR. But viewing technologies such as DVR, on-demand services like HBO GO, and DVD box sets have given producers more leeway to fashion programs that benefit from sequential viewing and planned rewatching.

Serialized novels, like Charles Dickens’ The Mystery of Edwin Drood, were commonplace in the 19th century. Wikimedia Commons.

Like 19th century serial literature, 21st century serial television releases its episodes in separate installments. Then, at the end of a season or series, it “binds” them together into larger units via physical boxed sets, or makes them viewable in their entirety through virtual, on-demand streaming. Both encourage binge watching.

Giving viewers the technology to easily watch and rewatch a series at their own pace has freed television storytellers to craft complex narratives that are not dependent on being understood by erratic or distracted viewers. Today’s television assumes that viewers can pay close attention because the technology allows them to easily do so.

Forensic fandom

Shifts in both technology and industry practices point toward the third major factor leading to the rise in complex television: the growth of online communities of fans.

Today there are a number of robust platforms for television viewers to congregate and discuss their favorite series. This could mean partaking in vibrant discussions on general forums on Reddit or contributing to dedicated, program-specific wikis.

As shows craft ongoing mysteries, convoluted chronologies or elaborate webs of references, viewers embrace practices that I’ve termed “forensic fandom.” Working as a virtual team, dedicated fans embrace the complexities of the narrative – where not all answers are explicit – and seek to decode a program’s mysteries, analyze its story arc and make predictions.

The presence of such discussion and documentation allows producers to stretch their storytelling complexity even further. They can assume that confused viewers can always reference the web to bolster their understanding.

Other factors certainly matter. For example, the creative contributions of innovative writer-producers like Joss Whedon, J.J. Abrams and David Simon have harnessed their unique visions to craft wildly popular shows. But without the contextual shifts that I’ve described, such innovations would have likely been relegated to the trash bin, joining older series like Profit, Freaks and Geeks and My So-Called Life in the “brilliant but canceled” category.

Furthermore, the success of complex television has led to shifts in how the medium conceptualizes characters, embraces melodrama, re-frames authorship and engages with other media. But those are all broader topics for another chapter – or, as television frequently promises, to be continued.

Jason Mittell is Professor of Film & Media Culture and American Studies at Middlebury College. He is the author of Complex TV: The Poetics of Contemporary Television Storytelling (NYU Press, 2015), and co-editor of How to Watch Television (NYU Press, 2013).

What lies beneath the Chapel Hill murders? More than a ‘parking dispute’

—Nadine Naber

We may never know exactly what Craig Stephen Hicks was thinking when he killed Syrian American medical student Deah Barakat, his Palestinian American wife Yusor Abu-Salha, and her sister Razan Abu-Salha. But we do know that U.S.-led war in Arab and Muslim majority countries has normalized the killing of Arabs and Muslims. It is more crucial than ever before to understand the institutionalization of racism against persons perceived to be Arab or Muslim in terms of the structures of imperial war that normalize killing and death and hold no one (other than victims themselves) accountable.

Photo: Molly Riley/UPI.

The Obama Administration may have dropped the language of the “war on terror,” but it has continued its fundamental strategy of endless war and killing in the Arab region and Muslim majority countries such as Afghanistan and Pakistan (without evidence of criminal activity). The unconstitutional “kill list,” for instance, allows the president to authorize murders every week, waging a private war on individuals outside the authorization of Congress. Strategies like the “kill list” resolve the guilt or innocence of list members in secret and replace the judicial process (including cases involving U.S. citizens abroad) with quick and expedited killing. These and related practices, and their accompanying impunity, look something like this:

Al Ishaqi massacre, Iraq 2006: The U.S. Army rounded up and shot at least 10 civilians, including 4 women and 5 children. The Iraqis were handcuffed and shot in the head execution-style. The U.S. spokesperson’s response? “Military personnel followed proper procedures and rules of engagement and did nothing wrong.”

Drone attack, Yemen 2015: A drone killed 13-year-old Mohammad Tuaiman (whose father was killed in a 2011 drone strike), his brother, and a third man. Questioned about the incident, the CIA stated that “the 3 men were believed to be Al Qaeda” even though it refused to confirm that Tuaiman was an Al Qaeda militant.

The U.S.-backed Israeli killing of Palestinians reinforces the acceptability of Arab and Muslim death. In July 2014, the Israel Defense Forces killed at least 2,000 Palestinians, including 500 children. It is well established that IDF soldiers deliberately targeted civilians. The Obama Administration’s response? Explicit support for Israel.

And those left behind are forced to watch their loved ones’ bodies fall to the ground or burn like charcoal and can only conclude that, “In [the U.S. government’s] eyes, we don’t deserve to live like people in the rest of the world and we don’t have feelings or emotions or cry or feel pain like all the other humans around the world.”

Since the 1970s (when the U.S. consolidated its alliance with Israel), the corporate news media has reinforced the acceptability of Arab and Muslim death—from one-sided reporting to fostering fear of Arabs and Muslims. From Black Sunday (1977) to American Sniper (2015), Hollywood has sent one uninterrupted message: Arabs and Muslims are savage, misogynist terrorists; their lives have no value; and they deserve to die.

This interplay between the U.S. war agenda abroad and the U.S. corporate media extends directly into the lives of persons perceived to be Arab and/or Muslim in the United States. Hate crimes, firebomb attacks, bomb threats, vandalism, detention and deportation without evidence of criminal activity and more have all been well documented. Of course, such incidents escalated in the aftermath of the horrific attacks of 9/11. As the U.S. state and media beat the drums of war, anyone perceived to be Arab and/or Muslim (including Sikhs, Christian Arabs, and Arab Jews) became suspect. Muslim women who wore the headscarf became walking emblems of the state and media discourse of Islamic terrorism. Across the United States, at school, on the bus, at work, and on the streets, women wearing the headscarf have been bullied, have had their scarves torn off, and have been asked over and over why they support Al Qaeda, Saddam Hussein, terrorism, and the oppression of women.

Despite this, the corporate media (replicating the words of the police) and government officials have either reduced the North Carolina killings to a parking dispute or expressed grave confusion over why an angry white man would kill three Arab Muslim students in North Carolina execution-style. Yet the father of one of the women students stated that his son-in-law did not have any trouble with Hicks when he lived there alone. The trouble, he said, started only after Yusor, who wore a headscarf identifying her as a Muslim, moved in. Even so, Chapel Hill Mayor Mark Kleinschmidt told CNN that the community is still “struggling to understand what could have motivated Mr. Hicks to commit this crime,” adding, “It just baffles us.”

The “parking dispute” defense individualizes and exceptionalizes Hicks’ crime—in this case, through a logic that obscures the connection between whiteness, Islamophobia, and racism. And the bafflement rhetoric constructs a reality in which there are no conceivable conditions that could have potentially provoked Hicks. Both approaches deny the possible links between the killings, U.S. and Israeli military killings, the media that supports them, and the U.S. culture of militarized violence. They will also assist Hicks in attempting to avoid the more serious hate crime charge that would come with a heavy additional sentence.

Alternatively, discussions insisting on the significance of Islamophobia in this case must go beyond the call for religious tolerance and account for the projects of U.S. empire building and war that underlie Islamophobia. Contemporary Islamophobia is a form of racism and an extension of U.S.-led war abroad. As I wrote in Arab America, immigrant communities from the regions of U.S.-led war engage with U.S. racial structures, specifically anti-Arab and anti-Muslim racism, as diasporas of empire—subjects of the U.S. empire living in the empire itself. Perhaps then, we should also avoid applying the same analysis of racism across the board—as if all racisms are the same or as if the framework #blacklivesmatter can simply be transposed onto the killing of Arab Muslim Americans. Otherwise, we risk disremembering the distinct conditions of black struggle (and black Muslims), including the systematic state-sanctioned extrajudicial killing of black people by police and vigilantes, as well as black poverty and histories of slavery and mass incarceration. It is also important to remember the distinct conditions of the war on terror, whereby anyone and everyone perceived to be Muslim (including Arab Christians and Sikhs) is a potential target.

Rest in peace and power Deah Barakat, Yusor Abu-Salha, and Razan Abu-Salha. May your loved ones find strength and support. My heart is shattered.

Nadine Naber is Associate Professor in the Gender and Women’s Studies Program at the University of Illinois at Chicago. She is the author of Arab America: Gender, Cultural Politics, and Activism (NYU Press, 2012). 

Beyond intent: Why we need a new paradigm to think about racialized violence

—Evelyn Alsultany

Three Muslim Americans – Deah Shaddy Barakat, 23; his wife, Yusor Mohammad, 21; and her sister, Razan Mohammad Abu-Salha, 19 – were murdered last week in Chapel Hill, North Carolina, by 46-year-old resident Craig Stephen Hicks. The tragedy has sparked a debate over whether these deaths were the result of a hate crime or a parking dispute.

Women take part in a vigil for three young Muslims killed in Chapel Hill, North Carolina. Photo: Mandel Ngan/AFP/Getty Images.

Muslim Americans who claimed that this was surely a hate crime were presented with evidence to the contrary. Hicks’s Facebook and other online posts revealed that he is an atheist who is against all religion, regardless of whether it is Islam, Christianity, or Judaism, a gun enthusiast, and an advocate for gay rights. His online posts show that he is passionate about the protection of constitutional rights, especially freedom of speech and freedom of religion. His archived posts even include commentary on the “Ground Zero mosque” controversy, in which he writes in support of Muslim rights and notes the important distinction between Muslims and Muslim extremists. His wife has insisted that the murders were the result of a parking dispute, and not a hate crime. As a result, Hicks has been portrayed as not hating Muslims.

This profile of Hicks is indeed complex. He does not fit the conventional profile of a “racist” – i.e., someone who believes that all Muslims are a threat to America; who clings to essentialist and binary notions of identities; who espouses that certain groups of people do not deserve human rights; who practices intentional bigotry; who is firmly rooted in a logic that justifies inequality. I am reluctant to use the term “racist” since it conjures an image of someone who participates in blatant and intentional forms of hate. However, what this case shows us is that we need a new paradigm to understand racialized violence today. Yes, this case is complex, but that does not mean it is not a hate crime. It is complex because it does not fit the narrow way in which we have defined a hate crime.

Posing an either/or option – either this is or is not a hate crime – does not help us understand what transpired. Racism is not an either/or phenomenon. It is deeply embedded in our society and, when left unchecked, has the potential to inform our perceptions and actions in ways not captured by a caricatured understanding of its diverse forms. Racism is not always conscious or intentional. It exists across a vast spectrum of consciousness and intentionality.  As part of our subconscious, racism can manifest in the form of microaggressions that are often unintentional and sometimes even well-meaning. On the more dangerous side of the spectrum, it manifests in violence. We need to break the association of racism with intent because racism endures without it.

Our current cultural paradigm often makes a simplistic equation: Good people are well-intentioned and are therefore not racist; bad people are ill-intentioned and are therefore racist. Consequently, if the white police officers who killed Michael Brown and Eric Garner are considered “good people” by their friends, families, and colleagues, their actions cannot be deemed racist. Such a conclusion focuses solely on intent and overlooks how members of the police – like all of us – have been shaped and influenced by notions of black men as threatening and how such cultural imagery has, in turn, structured racialized violence.

The point is not that Craig Hicks is any more or any less racist than the white police officers who murdered Michael Brown, Eric Garner, and other black men. Indeed, the question of their individual, consciously expressed or felt racism does not help us to understand what happened or how to prevent it in the future; it just provokes denial and defensiveness. Conversely, claiming that we are “post-race” and/or denying that a particular incident has anything to do with race does not help us solve the problem of racialized violence.

The point is not whether Craig Hicks is any more or less racist than any of us; the point is that Craig Hicks lives and his victims died in a society that is structured by deeply institutionalized and culturally pervasive racism that exists regardless of whether any individual “wants” it to or not, and regardless of whether we as a society want to acknowledge it or not. We need a new paradigm, a new language and framework, to understand racialized violence today. Hicks’ profile provides an opportunity to challenge ourselves to rethink our understanding of racism and hate crimes in order to prevent murder.

Evelyn Alsultany is Associate Professor in the Program in American Culture at the University of Michigan. She is the author of Arabs and Muslims in the Media: Race and Representation after 9/11 (NYU Press, 2012).

#MuslimLivesMatter, #BlackLivesMatter, and the fight against violent extremism

—Zareena Grewal

On Tuesday, February 10, 2015, Craig Stephen Hicks, 46, was charged with the first-degree murder of three Arab Muslim college students in Chapel Hill, North Carolina.

Photo: http://twitter.com/samahahmeed.

Hicks’ neighbors, Deah Shaddy Barakat, 23, and Yusor Mohammad, 21, were newlyweds—and Razan Mohammad Abu-Salha, 19, was visiting her older sister and brother-in-law at the time of the execution-style killing. After the mainstream US media’s initial silence, the homicide is now referred to as “a shooting,” sparking worldwide Twitter hashtag campaigns such as #CallItTerrorism and #MuslimLivesMatter with many speculating on how the crime might have been framed had the perpetrator been Muslim and the victims white.

The motives of Hicks, who turned himself in to police, are the source of heated debate and speculation. According to his Facebook profile, Hicks describes himself as an anti-theist, a fan of the controversial film American Sniper and atheist polemicist Richard Dawkins, and a proud gun-owner. The Chapel Hill Police Department described the crime as motivated by an on-going dispute between the neighbors over parking, while the father of the two young women insists it was a “hate-crime.” Chief Chris Blue recognizes and continues to investigate “the possibility that this was hate-motivated.”

Such language suggests that while Hicks’ violence is exceptional and excessive, his motivations could have been ordinary and benign: maybe he was there first, maybe he had dibs on that parking spot, maybe he had a bad day or a bad life and so he had a mental breakdown with a gun in hand. After all, while this murder is devastating to the family and friends of the victims, for many of us, it is not shocking. We know and expect “lone shooters” to be white, heterosexual men; we know and expect their victims to be men of color, women, youth.

But it is American Muslim leaders who will gather in DC for the Obama administration’s “Countering Violent Extremism Summit” in a few days.

Individualizing the violence of white American men into “lone wolves” conceals the regularity of such violence and the state’s inability to prevent it, to make us “secure,” even to name it. This is one of the searing lessons of the #BlackLivesMatter movement; George Zimmerman’s sense of insecurity was used to justify his murder of an unarmed, black teenager, Trayvon Martin. As the #BlackLivesMatter movement demonstrates, Zimmerman was part and parcel of a larger phenomenon of racial, homicidal violence against unarmed blacks enacted in tandem by ordinary white citizens “standing their ground” and militarized police forces.

A significant number of blacks in the US are also Muslim and, therefore, vulnerable to being brutalized and murdered simply because they are black. Despite the fact that black youth are more than four times as likely as any other group to be gunned down by police, critics of #BlackLivesMatter continue to ignore this harsh reality, insisting that #AllLivesMatter.

Clearly, all lives do not matter to everyone. The #BlackLivesMatter movement brings our attention to the fact that violence in the name of white supremacy only horrifies and terrifies some of us.

Disingenuous claims about how all lives matter or how parking is frustrating hide the insidious influence of racism. In my book, Islam is a Foreign Country, I explore how American Muslim communities grapple with the pervasive, racial hatred of their religion. This morning a Pakistani friend asked whether she will now have to explain to her young children that some people hate them just for being Muslim. African American Muslims know all too well that the question is not whether but when to teach their children that they are vulnerable. Hicks’ victim knew it too; she saw it in his eyes, telling her father, “He hates us for what we are and how we look.”

Zareena Grewal is Associate Professor of American Studies and Religious Studies at Yale University. She is the author of Islam is a Foreign Country: American Muslims and the Global Crisis of Authority (NYU Press, 2015).

No scrubbing away America’s racist past

—Carl A. Zimring

Last week, @deray tweeted an image of a century-old soap advertisement showing a young white boy using soap to wash the pigment off of a young African-American boy’s body. He captioned it “Ads. Bleaching. History. America.”

Had he wished to, @deray could have sent out dozens of such tweets, each with a different image. The tweeted image was but one of dozens printed between 1880 and 1915 displaying claims that soaps could literally wash dark pigment off of skin. My forthcoming book, Clean and White, reproduces similar examples from Lautz Brothers, Kirkman and Sons, and Pearline. The latter featured an illustration of an African-American woman scrubbing a young child and exclaiming “Golly! I B’leve PEARLINE Make Dat Chile White.”

These racist caricatures focused primarily but not exclusively on African-Americans. Kirkman and Sons released an advertisement sometime after 1906 that referenced the year’s Pure Food and Drug Act. The ad showed three white women washing three wealthy Turkish men’s skin from brown to white. The accompanying poem tells the story of how the women were the Turkish men’s maids. They convinced the men to let them wash them with the soap, transforming their features to milky white. The story ended happily, with the now-white men marrying each of the maids. Cross-racial and cross-class lines were transcended, all through the miracle of a pure, cleansing soap.

Such a message was consistent with the trope that skin darker than white was somehow impure and dirty. Products boasting of absolute purity claimed to be so powerful that they could literally wash away the stain of race.

Why do these images matter as anything beyond century-old relics of America’s racist past? These images proliferated at a time when the rhetoric and imagery of hygiene became conflated with a racial order that made white people pure, and anyone who was not considered white was somehow dirty. The order extended from caricatures to labor markets. Analysis of census data indicates the work of handling waste (be it garbage, scrap metal, laundry, or domestic cleaning) was disproportionately done by people who were not native-born white Americans.

Through World War II, this involved work by African Americans and first- and second-generation immigrants from Asia, Latin America, and Southern and Eastern Europe. In the second half of the twentieth century, the burdens of this dirty and dangerous work fell more heavily on Hispanic and African-American workers, creating environmental inequalities that endure to this day. They are evident in the conditions that led to the Memphis sanitation workers’ strike in 1968, as well as in the residents of Warren County, North Carolina, lying down in the street to block bulldozers from developing a hazardous waste landfill in 1982. Environmental inequalities are evident still in environmental justice movements active across the United States in 2015.

Since the end of the Civil War, American sanitation systems, zoning boards, real estate practices, federal, state, and municipal governments, and makers and marketers of cleaning products have all worked with an understanding of hygiene that assumes “white people” are clean, and “nonwhite people” are less than clean. This assumption is fundamental to racist claims of white supremacy, a rhetoric that involves “race pollution,” white purity, and the dangers of nonwhite sexuality as miscegenation. It is also fundamental to broad social and environmental inequalities that emerged after the Civil War and that remain in place in the early twenty-first century. Learning the history of racist attitudes towards hygiene allows us to better understand the roots of present-day inequalities, for the attitudes that shaped those racist soap advertisements remain embedded in our culture.

Carl A. Zimring is Associate Professor of Sustainability Studies at Pratt Institute. He is the author of Clean and White: A History of Environmental Racism in the United States from Monticello to Memphis (forthcoming from NYU Press).