Q&A with Ralph Young, author of Dissent

How did you come to write an entire book on the concept of dissent?

Ralph Young: The idea came to me while I was compiling and editing Dissent in America: The Voices That Shaped a Nation. This was a massive 800-page compilation of 400 years of dissenting speeches, sermons, petitions, songs, poems, polemics, etc. I let the voices of dissenters speak in this book and then I thought I should write my own narrative history of the United States from the standpoint of dissenters and protest movements. From the standpoint of those outsiders who sought more equality, more opportunity, more empowerment.

Can you succinctly define what dissent is—or perhaps it’s easier to say what dissent is not?

There are many ways to define dissent, and I go into this at great length in the introduction to the book. On the broadest level, it is going against the grain: dissenting against what is, whatever that “is” happens to be.

The act of dissent covers a lot of ground ranging from intellectual skepticism to radical violence. Is there ‘good’ dissent and ‘bad’ dissent?

I believe there is. Dissent is dissent, regardless of the motives behind it. But for me, dissent that seeks to empower the disempowered without infringing on anyone’s natural rights, that seeks to broaden rights rather than limit them, is “good” dissent. Dissent that seeks to disempower other individuals or groups, that seeks to maintain white supremacy, or that in other ways limits the rights of others, is “bad” dissent. But ultimately I don’t like to use the terms good dissent and bad dissent, because things have a way of working out in unexpected ways.

What separates violent dissent from terrorism?

Violence is a somewhat blind reaction against what is, and resorting to violence reveals the frustration of those who have been fighting for a cause without success. Terrorism is on a different level. It is more strategic, and it is the ultimate weapon of groups that wish to destroy a government or a ruling paradigm and set up something entirely different. Violent dissent expresses frustration and is perhaps the last gasp of a group that still wants to reform the system. Terrorism is an attempt to utterly destroy the system.

Is dissent a uniquely American construct, sort of like jazz?

No, it is not uniquely American. Dissent has existed from time immemorial and throughout the world. But it is a concept that Americans valued so much that we put it in our Constitution as a right, and we have been dissenting and refining dissent ever since. (In fact, dissent itself was one of the forces that brought about the creation of the United States, the Constitution, and the Bill of Rights.)

The book you’ve written is in many ways an alternate history of America. How would you compare Dissent to Howard Zinn’s A People’s History of the United States?

I admire Zinn’s People’s History, but it is clearly a one-sided point of view. And while my book admittedly has a point of view, and any reader will know where I stand about the subject under discussion, I do try to give voice to those I don’t agree with. Also Zinn’s book looks at America through the prism of class, of class consciousness, of the age-old class struggle, while Dissent: The History of an American Idea looks at the scope of American history through the lens of dissent, which is a broader perspective. True, some dissent is a manifestation of the class struggle, but it is not limited to class. There are thousands of middle-class, even upper-class, dissenters. So dissent can be a manifestation of far more divergent points of view.

What would you say are some of the biggest triumphs of dissent in America and the biggest disappointments or failures?

Certainly the abolitionist, women’s, and civil rights movements achieved a great deal of success, although at the time progress seemed maddeningly slow to the participants. The movements that have protested income inequality, like Coxey’s Army and the recent Occupy movement, have not achieved success, although they might simply have been the early rounds in an ongoing struggle. Some movements had a great deal of success, like the labor movement in the 1930s, but much of that success has been rolled back since the 1950s.

Have the active protest aspects of dissent, such as rallies and marches, permanently given way to more passive activities such as legal action and armchair clicktivism (hitting a like button to support a cause), or are we just going through a phase?

I don’t think active protests and marches will ever come to an end. In some ways the Internet and social media have diminished attendance at protest rallies and marches, but in other ways they have increased it. I would say the jury is still out on the impact of social media on protest movements. Throughout history, dissenters have always employed the latest technology to get out their message: radio, television, song, poetry, mass-printed sources like posters and flyers, etc.

Some would argue that modern dissent is less about life and death struggles and more of a lament against first world problems. Would you agree or disagree and why?

Dissent has never had to be about purely life and death struggles. In some cases, like with the abolitionist movement, yes. But in other cases it has primarily been about forcing the United States to live up to its part of the bargain. The Declaration of Independence and the Constitution established highly idealistic principles that have not always applied to all people. Those who have felt left out of the “American Dream” have viewed these documents as a contract that the United States government must honor.

During the roughly 400-year history you examine in the book, many of the root causes of dissent—race, gender and economic inequality, religious differences, whether or not to fight wars and even police violence—are recurring themes. Does this repetition mean that we are not learning from history and are therefore doomed to repeat past mistakes?

Do we ever learn from history? There are lessons, to be sure, that history can teach us. But these issues are central to human nature. One of my favorite protest signs I saw recently was “I can’t believe we still have to protest this shit!”

Does the passage of time make it easier to judge the motivational integrity and the results of dissent? In other words, is it easier to draw conclusions about the Revolutionary War than the Occupy Wall Street or Tea Party movements?

Yes. The Revolutionary War resulted in the foundation of the United States and so we can interpret and evaluate its motivational integrity far better than we can Occupy or Tea Party since these are still unfolding. The irony, though, is that interpretations of the past are fluid, they are still changing. Historians have analyzed and critiqued the motivations of the Founding Fathers ever since the creation of the United States. Were they motivated by truly democratic principles, or were they economic elites who created a Constitution that would protect the interests of economic elites? This is still a debatable interpretation.

By studying the past, might you be able to predict the future face of dissent or perhaps see the next wave of dissent in America on the horizon?

I’ve always viewed history as the study of the past, the present, and the future! History is organic. And we are part of that organism. We cannot actually predict the future, but we can see the patterns and come up with some reasonable expectations of what they might lead to.

Do you think it’s only a matter of time before some form of violent revolution revisits America or have we progressed beyond that in the 21st century?

I can’t see it happening in the near future, but I wouldn’t count out anything. It depends on how bad things get. If economic inequality continues to grow and fester, it’s like putting a cap on a volcano. Pressures will continue to build unless there is some effort at reform to act as a safety valve. Theodore Roosevelt always believed that reform is essential and that if the powers that be ignore reform they are stoking the fires of revolution.

What do you hope people will take away from reading Dissent?

That dissent is patriotic. It is one of the central attributes of being an American. And that no single individual can change the world, but if thousands, millions of individuals work together toward a goal, together they can make a difference. And making a difference in small incremental steps is the way we do change the world.

Ralph Young is Professor of History at Temple University. He is the author of Dissent in America: The Voices That Shaped a Nation, a compilation of primary documents of 400 years of American dissenters.

Book giveaway: Dissent

Dissent (NYU Press, 2015)

“Temple University historian Young delivers a doorstopper that few readers will ever want to misuse in such a manner; his clear and elegant style and a keen eye for good stories make it a page-turner…Young convincingly demonstrates that the history of the United States is inextricably linked to dissent and shows how ‘protest is one of the consummate expressions of Americanness.’”
—Publishers Weekly (starred review)

“A broad-ranging, evenhanded view of a tradition honed into an art form in America: the use of dissent as ‘a critique of governance’…Young has a knack for finding obscure but thoroughly revealing moments of history to illustrate his points; learning about Fries’ Rebellion and the Quasi-War with France is worth the price of admission alone, though his narrative offers much more besides…Refreshingly democratic—solid supplemental reading to the likes of Terkel and Alinsky, insistent on upholding the rights of political minorities even when they’re wrong.”
—Kirkus Reviews

To celebrate the stellar reviews rolling in for our forthcoming book, Dissent: The History of an American Idea, we are giving away a free copy to two lucky winners!

Dissent: The History of an American Idea examines the key role dissent has played in shaping the United States. It focuses on those who, from colonial days to the present, dissented against the ruling paradigm of their time: from the Puritan Anne Hutchinson and Native American chief Powhatan in the seventeenth century, to the Occupy and Tea Party movements in the twenty-first century. The emphasis is on the way Americans, celebrated figures and anonymous ordinary citizens, responded to what they saw as the injustices that prevented them from fully experiencing their vision of America.

To enter our book giveaway, simply fill out the form below with your name and preferred e-mail address. We will randomly select our winners on Friday, May 1st, 2015 at 1:00 pm EDT.

Why has TV storytelling become so complex?

—Jason Mittell

[This post originally appeared at The Conversation.]

If you watch the season finale of The Walking Dead this Sunday, the story will likely evoke events from previous episodes, while making references to an array of minor and major characters. Such storytelling devices belie the show’s simplistic scenario of zombie survival, but are consistent with a major trend in television narrative.

Prime time television’s storytelling palette is broader than ever before, and today, a serialized show like The Walking Dead is more the norm than the exception. We can see the heavy use of serialization in other dramas (The Good Wife and Fargo) and comedies (Girls and New Girl). And some series have used self-conscious narrative devices like dual time frames (True Detective), voice-over narration (Jane the Virgin) and direct address of the viewer (House of Cards). Meanwhile, shows like Louie blur the line between fantasy and reality.

Many have praised contemporary television using cross-media accolades like “novelistic” or “cinematic.” But I believe we should recognize the medium’s aesthetic accomplishments on its own terms. For this reason, the name I’ve given to this shift in television storytelling is “complex TV.”

There is a wealth of facets to explore about such developments (enough to fill a book), but one core question seems to go unasked: why has American television suddenly embraced complex storytelling in recent years?

To answer, we need to consider major shifts in the television industry, new forms of television technology, and the growth of active, engaged viewing communities.

A business model transformed

We can quibble about the precise chronology, but programs that were exceptionally innovative in their storytelling in the 1990s (Seinfeld, The X-Files, Buffy the Vampire Slayer) appear more in line with narrative norms of the 2000s. And many of their innovations – season-long narrative arcs or single episodes that feature markedly unusual storytelling devices – seem almost formulaic today.

What changed to allow this rapid shift to happen?

As with all facets of American television, the economic goals of the industry are a primary motivation for programming decisions.

For most of their existence, television networks sought to reach the broadest possible audiences. Typically, this meant pursuing a strategy of mass appeal featuring what some derisively call “least objectionable programming.” To appeal to as many viewers as possible, these shows avoided controversial content or confusing structures.

But with the advent of cable television channels in the 1980s and 1990s, audiences became more diffuse. Suddenly, it was more feasible to craft a successful program by appealing to a smaller, more demographically uniform subset of viewers – a trend that accelerated into the 2000s.

In one telling example, FOX’s 1996 series Profit, which possessed many of contemporary television’s narrative complexities, was quickly canceled after four episodes for weak ratings (roughly 5.3 million households). These numbers placed it 83rd among 87 prime time series.

Yet today, such ratings would likely rank the show in the top 20 most-watched broadcast programs in a given week.

This era of complex television has benefited not only from more niche audiences, but also from the emergence of channels beyond the traditional broadcast networks. Certainly HBO’s growth into an original programming powerhouse is a crucial catalyst, with landmarks such as The Sopranos and The Wire.

But other cable channels have followed suit, crafting original programming that wouldn’t fly on the traditional “Big Four” networks of ABC, CBS, NBC and FOX.

A well-made, narratively complex series can be used to rebrand a channel as a more prestigious, desirable destination. The Shield and It’s Always Sunny in Philadelphia transformed FX into a channel known for nuanced drama and comedy. Mad Men and Breaking Bad similarly bolstered AMC’s reputation.

The success of these networks has led upstart viewing services like Netflix and Amazon to champion complex, original content of their own – while charging a subscription fee.

The effect of this shift has been to make complex television a desirable business strategy. It’s no longer the risky proposition it was for most of the 20th century.

Miss something? Hit rewind

Technological changes have also played an important role.

Many new series reduce the internal storytelling redundancy typical of traditional television programs (where dialogue was regularly employed to remind viewers what had previously occurred).

Instead, these series subtly refer to previous episodes, insert more characters without worrying about confusing viewers, and present long-simmering mysteries and enigmas that span multiple seasons. Think of examples such as Lost, Arrested Development and Game of Thrones. Such series embrace complexity to such an extent that they almost require multiple viewings simply to be understood.

In the 20th century, rewatching a program meant either relying on almost random reruns or being savvy enough to tape the show on your VCR. But viewing technologies such as DVR, on-demand services like HBO GO, and DVD box sets have given producers more leeway to fashion programs that benefit from sequential viewing and planned rewatching.

Serialized novels, like Charles Dickens’ The Mystery of Edwin Drood, were commonplace in the 19th century. (Image: Wikimedia Commons)

Like 19th century serial literature, 21st century serial television releases its episodes in separate installments. Then, at the end of a season or series, it “binds” them together into larger units via physical boxed sets, or makes them viewable in their entirety through virtual, on-demand streaming. Both encourage binge watching.

Giving viewers the technology to easily watch and rewatch a series at their own pace has freed television storytellers to craft complex narratives that are not dependent on being understood by erratic or distracted viewers. Today’s television assumes that viewers can pay close attention because the technology allows them to easily do so.

Forensic fandom

Shifts in both technology and industry practices point toward the third major factor leading to the rise in complex television: the growth of online communities of fans.

Today there are a number of robust platforms for television viewers to congregate and discuss their favorite series. This could mean partaking in vibrant discussions on general forums on Reddit or contributing to dedicated, program-specific wikis.

As shows craft ongoing mysteries, convoluted chronologies or elaborate webs of references, viewers embrace practices that I’ve termed “forensic fandom.” Working as a virtual team, dedicated fans embrace the complexities of the narrative – where not all answers are explicit – and seek to decode a program’s mysteries, analyze its story arc and make predictions.

The presence of such discussion and documentation allows producers to stretch their storytelling complexity even further. They can assume that confused viewers can always reference the web to bolster their understanding.

Other factors certainly matter. For example, innovative writer-producers like Joss Whedon, J.J. Abrams and David Simon have harnessed their unique visions to craft wildly popular shows. But without the contextual shifts that I’ve described, such innovations would likely have been relegated to the trash bin, joining older series like Profit, Freaks and Geeks and My So-Called Life in the “brilliant but canceled” category.

Furthermore, the success of complex television has led to shifts in how the medium conceptualizes characters, embraces melodrama, re-frames authorship and engages with other media. But those are all broader topics for another chapter – or, as television frequently promises, to be continued.

Jason Mittell is Professor of Film & Media Culture and American Studies at Middlebury College. He is the author of Complex TV: The Poetics of Contemporary Television Storytelling (NYU Press, 2015), and co-editor of How to Watch Television (NYU Press, 2013).

Maddening pleasures, subsequent silence

—Stanley I. Thangaraj

March Madness is just kicking off, and ESPN has already predicted that this year’s tournament will see over $9 billion in betting.

From offices to college campuses, March Madness continues to attract more and more constituencies in ways that other sporting events, even the Super Bowl, cannot. This stretch of March and April is often marked by explicit displays of collegiate allegiance and by intense, passionate rivalries within institutions of higher learning, a kind of ‘madness’ unmatched in any collegiate setting across the globe.

Yet there is something particularly American about this event; it is very much a United States phenomenon. In many ways, March Madness tells us about ourselves and the values we inject into collegiate sports. It is this matter of values, and how sport reflects American society, that I am interested in. Specifically, I want to focus on student-athletes in two respects: male collegiate basketball players and the women’s NCAA tournament.

While March Madness is a time to celebrate alma maters, there is a way in which the iconicity of the athletes, the power and recognition of coaches, and the transcendental nature of sport intersect to create quite a venomous concoction. American studies scholar Nicole Fleetwood, in her elegant and sophisticated analysis of the visual plane of black iconicity in Troubling Vision: Performance, Visuality, and Blackness, asks that we critically evaluate how iconicity and icons fail to address either the messiness of social life or its major contradictions. Likewise, the iconicity of coaches such as Jim Boeheim, John Calipari, Roy Williams, and Mike Krzyzewski, along with a number of collegiate players, hides significant problems within the realm of sport. In the midst of sheer athletic movements, creative plays, intense and intimate camaraderie, and shows of sportsmanship, many other questions and points will remain, at best, minimally discussed and, at worst, completely glossed over.

With so much of the focus on the athleticism of the young men in the men’s national basketball tournament, there is little time to reflect on their lives outside of sport: in classes, in the collegiate physical environment, and in the larger social landscape. Having taught in large and small institutions of higher learning, in Division I and Division III schools, and having played and coached at the collegiate level myself, I recognize that the student-athlete has become a source of capital in ways that extract the “student” from the “athlete.” The hyphen connecting the two, demanding a peaceful, synchronized, holistic existence, does not exist in present-day sport. Rather, our collegiate (as well as other amateur) sports put on a show of ideals while the reality is much more troublesome. “Student” is often treated as an adjective to “athlete.” With that, as the games proceed through March Madness, I ask this first question: What are the ways to create a fantastic learning experience for student-athletes? What type of support is there, and where is it, for the student-athlete? The athlete has become a pariah within the realm of students, as if his or her natural place were only on the court, the field, the pool, or the mat. My encounters with student-athletes have shown the precarity of their lives and the various forms of alienation they face within institutions of higher learning.

There has been a trend to let out a big sigh of frustration upon hearing of a student-athlete in one’s class, especially if he is a basketball or football player (read as African American). Although instructors might take great joy in the feats of athletes on the playing field, the same energy does not surface in face-to-face interactions with student-athletes. As a result, some student-athletes I have met expressed their alienation in the classroom setting. They felt disregarded, like scrap metal. Especially for working-class African American student-athletes, as I discuss in my book, there was the everyday experience and dilemma of already being overdetermined as athletes and sporting bodies. Scott Brooks, in Black Men Can’t Shoot, and Reuben May, in Living Through the Hoop, attest to the difficulties of poor young black basketball players. This overdetermination meant that their excessive bodies were seen as lacking mind and other key elements of the academic experience.

Into this crevice in the college experience for athletes step academic counselors, advisors, and tutors. This seems like a good substitute. But would anyone replace John Calipari with a non-sport professional who has none of his training, experience, and strong basketball pedigree? Why, then, is it acceptable to insert advisors and tutors who lack the training and expertise of the professors teaching the courses? Providing such tutors, counselors, and advisors is important, but it is a double-edged sword, cutting deeply on both ends. For one, it fails to foster a positive learning experience and relationship with faculty in the classroom. What we need instead are fewer classes, so that student-athletes can enjoy their courses like the rest of the student population. Taking only two courses a semester would free up time for student-athletes to engage the material fully. They would not feel overwhelmed, and studying would not feel like a losing battle against the demands of the sporting field.

The athletic academic counseling structure justifies an entire cottage industry of sport services professionals within higher learning without offering a larger critique of collegiate and amateur sport. Several football players I knew would be so worn out that staying awake in class took more energy than an already exhausted body could muster. When I asked them why they were tired, a few spoke candidly: their coaches take every bit of energy out of them. Each coach, assistant coach, and trainer is there to take out all that is left in the athletic body. The student-athletes secure jobs and income for a wide assortment of sport professionals, yet their own lives reveal deep insecurity. An injury could derail the entire collegiate experience. Yet these students are pushed to the limits, and when demands are made for large stipends or for paying student-athletes, the response is always, “They are lucky to be here,” “The scholarship is their payment,” or “The scholarship is the greatest gift.” Really? Is it this simple? Or have we become blinded to the corporate regime of college sports?

The student-athletes barely have time for anything other than sports. Yet they have to manage their work day (sports training) alongside a full-time class schedule. How many students, other than student-athletes, have to travel long distances for work, miss classes (and holidays and family events), and train throughout the entire year? While we watch March Madness and take in all the joys that come with it, we have to ask whether a traditional student would put in the same hours and labor without pay. Student-athletes cannot even enjoy other parts of the college environment, such as partying, studying abroad, holding part-time jobs, and securing important professional internships. With each round of March Madness that goes by, we should feel obligated to ask how to equip and support all of our students, including our student-athletes. As jobs multiply around sports, for coaches, assistant coaches, trainers, medical professionals, and even scholars of sport like me, we owe it to fair play in sport to give our student-athletes fair play in academia: stipends and an unlimited commitment to fund their scholarships, even many years after their playing days.

While we talk about guaranteeing college futures for male student-athletes, we also need to interrogate why men’s collegiate basketball appears in sports media as just “basketball,” while women’s basketball foregrounds the gender category of woman as an adjective, appendage, and addition to basketball. Basketball, as my research has shown, was already taken for granted as “masculine”—a sport to be practiced by men. As such, March Madness stands ubiquitously for men’s basketball. While sporting communities eagerly fill out the men’s bracket, there is little comparable engagement with the women’s bracket. Accordingly, the iconicity of men’s basketball reduces sport to a male arena and celebrates male sporting accomplishments. In the process, female athletes, like female basketball players, are relegated to a realm outside the language of everyday basketball talk. There will be little to no discussion of how Title IX does not guarantee equity in the field of play. (See Deborah Brake’s brilliant book, Getting in the Game: Title IX and the Women’s Sport Revolution.) One sees equal numbers of men and women playing collegiate sports—but this metric does not translate into equal access to resources, nor does it mean that the voices of women players are heard as loudly as men’s.

This disparity is also prevalent in sponsorship opportunities and the minimal funding for women’s teams. There is frequent talk about the greatest collegiate basketball coaches, but rarely do coaches of the women’s game like Pat Summitt, Dawn Staley, and Geno Auriemma enter that conversation. Likewise, there are many men coaching the women’s game, but no women serving as head coaches in the men’s game. Furthermore, as the case of transgender athlete Kye Allums shows us, there are few spaces in either the men’s or women’s game for gender-nonconforming or trans athletes. Another disturbing fact is the gendered and sexual violence within women’s collegiate sports. Little to none of this will be the subject of conversation during March Madness. The sexual violence that is normalized on college campuses seeps into and destroys women’s athletics as well. As basketball is rendered a game for men, violence against female basketball players is not always fully investigated. This is also because the women’s tournament becomes a sideshow, not the main attraction. As a result, the storylines and issues within women’s sports are not legitimated and made visible.

There has to be a national discussion about sexual violence and it must also take place within the confines of collegiate sport. We need that discussion to begin now. In the late 1960s, sociologist Dr. Harry Edwards played a critical role in organizing African American student-athletes against racism locally, and within the larger Olympic Games. We know of the 1968 Mexico City Olympics protest. There is a foundation, although we cannot always see it, to use sport as one of the key arenas for creating livable, fair, just, and equitable worlds. Sport, as the great scholar C.L.R. James has argued in Beyond a Boundary, is not apart from the real world but intricately connected to it. Sport provides various forms of reprieve from the outside world but that does not mean that we can forget about how power operates in sporting cultures. Through sport, we can harness new social arrangements and social justice principles that then truly make sport the most utopian social site.

Stanley I. Thangaraj is Assistant Professor of Anthropology at City College of New York and the author of Desi Hoop Dreams: Pickup Basketball and the Making of Asian American Masculinity (NYU Press, June 2015).

Picture us free

—Jasmine Nichole Cobb

I have always been enamored of U.S. illustrations of black struggles for freedom. Typical depictions feature African descendants insisting on respect and white state officials denying privileges. Cross-generational portrayals mediate these conflicts by construing blackness as spectacularly distinct within U.S. race relations. Although the specific “rights” in question change over time, these characteristics appear in images from the nineteenth to the twenty-first century. They are the stakes that underpin racist caricatures, early photography, and black screen cultures; in many ways, these themes define the pictorial history of the United States.

The year 2015 marks several watershed moments in the long arc of strivings for black freedom, including the 50th anniversary of the Voting Rights Act, signed by President Lyndon Johnson in 1965, as well as the 150th anniversary of the 13th amendment, ratified in 1865 to abolish the legal practice of slavery. On this continuum, 2015 will contribute its own pictures to the timeline of race in America, with illustrations of the first black president presiding over many important commemorations.

Last Saturday, March 7th, President Barack Obama gave a speech to memorialize the marches from Selma to Montgomery, Alabama. While his words were politically salient—charging congress to restore the Voting Rights Act—it is in pictures that the first black president marked an important contribution to the history of black people in the United States. Joined hand-in-hand with survivors of the civil rights struggle, the First Family stood on the frontline of a procession to cross the Edmund Pettus Bridge—the site of Bloody Sunday. Captured on the verge of marching, the Obamas were camera ready, smiling, their sense of motion stilled for the photographic opportunity. Hand-in-hand with Senator John Lewis and foot soldier Amelia Boynton Robinson, the Obamas were at the forefront, with Martin Luther King III and Rev. Al Sharpton among those notable figures that receded into back rows of the image.


Staged to reminisce on the marches to Montgomery, this illustration suggests the fulfillment of black citizenship by the existence of a black president. It honors the activists murdered and brutalized in 1965, but draws on the sober and respectable depictions of triumph taken from days like March 25th, when Martin Luther King Jr., Coretta Scott King, and other activists safely arrived in the state capital. In living color, Obama implies the completion of black freedom struggles. Read against the precursor images—black-and-white photos of sixties activists marching forward, lips parted in song, tired but full of conviction—the affirmation of black citizenship pictured in 2015 connects to illustrations that exclude contention.


But like every attempt to picture freedom, threats against black life persist at the margins. While President Obama described the abuses of billy clubs and tear gas during the 1960s, he denied a connection to Ferguson, Missouri, where many of these tactics endure. While we were all invited to picture freedom through illustrations like Selma 2015, we were to look away from the mediation of murder, this time of Wisconsin teen Tony Robinson.

I cannot deny the beauty of seeing a black president address civil rights activists at the site of transgression. I especially enjoy seeing the First Lady, her mother and growing daughters. But seeing the Obama family is different from viewing their existence as a representation of black freedom. National commemorations of the black freedom struggle, which ask us to hail the Commander-in-Chief as the fulfillment of our strivings, refuse the ways in which that struggle remains ever present. The tension between these motives animates every illustration of this kind.

The most popular depictions of free black people serve to bolster national narratives of U.S. race relations. In my book, Picture Freedom, I explore how people of African descent envisioned black autonomy in the context of slavery and among popular representations that were hostile to the idea of free black people. I consider complicated images that reveal the power and permanence of nineteenth-century approaches to blackness. After studying them, I still enjoy pictures of black freedom, but now I wonder what they obscure.

Jasmine Nichole Cobb is an Assistant Professor of Communication Studies at Northwestern University and an American Fellow of the AAUW. She is the author of Picture Freedom: Remaking Black Visuality in the Early Nineteenth Century (NYU Press, 2015).

St. Patrick, St. Joseph and Irish-Italian harmony

—Paul Moses

[This post originally appeared in The Wall Street Journal.]

Right after Valentine’s Day, the front window of my Brooklyn home sprouts a field of cardboard shamrocks each year. A statue of St. Patrick appears on the bookshelf and a sign is posted on the back door: “If you’re lucky enough to be Irish, you’re lucky enough.”

This is the work of my Irish-American wife in preparation for St. Patrick’s Day. As the Italian-American husband, I have in past years suggested equal attention to St. Joseph, a favorite saint of Italians. Nothing doing.

The proximity of St. Patrick’s Day on March 17 and the Feast of St. Joseph two days later leads to a good deal of teasing and ribbing every year between Catholics of Irish and Italian ancestry.

There is nothing extraordinary about this little bit of fun, unless one considers the bitterness that once marked relations between these two peoples. As impoverished Italians poured into New York and other major cities in the late 19th and early 20th centuries, the already established Irish became their mentors and tormentors—more so the latter, at first.

Much of the rivalry concerned jobs: Italian laborers were willing to work for less pay and longer hours than the Irish, and sometimes they were used to break strikes. Fights were so common between crews of Irish and Italian construction workers that the Brooklyn Eagle headlined an 1894 editorial: “Can’t They Be Separated?”

This bitterness spilled over into the Catholic parishes where the two peoples mingled with their very different forms of practicing the same religion.

The Italians “are so despised for their filth and beggary that in New York the Irish granted them free use of the basement of the Church of Transfiguration, so that they could gather for their religious practices, since the Irish did not want to have them in the upstairs church,” a Vatican agency noted in an 1887 report that singled out an Irish parish on Mott Street in what is now Manhattan’s Chinatown for maltreatment of Italian immigrants.

The pastor of Transfiguration Church responded through an article his brother wrote in a Catholic journal that said the Italian immigrants didn’t know even elementary Catholic doctrines. Nor were they so concerned about having to hold services in the church basement, it added, because “the Italians as a body are not humiliated by humiliation.”

These were, in turn, fighting words for a prominent Italian priest who wrote to his bishop in Italy: “I have proofs at hand—it would make your blood boil—to see how Italian priests have been treated by American pastors.”

Such exchanges continued for decades, with Irish churchmen trying to cope with the “Italian problem” and Italians complaining angrily to their bishops and the Vatican.

The Italian brand of Catholicism—with processions and raucous street celebrations in honor of patron saints—didn’t sit well with Irish-American prelates. They knew their Protestant opponents looked down on these customs as pagan-like superstitions. Michael Corrigan, a son of Irish immigrants who served as New York’s archbishop in the late 19th century, tried to bar the processions. The Italians ignored him, and took note of the fact that the Irish celebrated their own feast on St. Patrick’s Day.

This battle within the Catholic Church was fought in many big-city parishes well into the 20th century. No Italian-American headed a diocese in New York state until 1968, when Francis J. Mugavero was appointed bishop of Brooklyn.

And yet, as a diverse group of marchers steps up Fifth Avenue led by Cardinal Timothy Dolan in this year’s New York City St. Patrick’s Day Parade, it is worth noting that the Catholic parish played an important role in reconciling the Irish and Italians. In the years after World War II, people got to mingle and know each other in their parishes, especially in the suburbs and residential sections of the city.

Scholarly studies have shown that Italian-Americans who attended Catholic schools became more like the Irish in their practice of the Catholic faith.

As a result, as one 1960s study of New York Catholics found, Italian-Americans who went to Catholic schools and attended Mass regularly almost always wed spouses of Irish origin if they did not marry another Italian. That’s especially so for third-generation Italian-Americans, as I am on my mother’s side, a fact to which my Irish-American wife Maureen can attest.

In the early years of the 20th century, those who predicted large-scale Irish-Italian friendship and intermarriage were dismissed as impossibly optimistic. But the story of the Irish and Italians in America demonstrates that it is possible over time for serious divisions to be transformed into a matter of gentle teasing and ribbing between friends—if not husbands and wives.

Paul Moses teaches journalism at Brooklyn College/CUNY. His book An Unlikely Union: The Love-Hate Story of New York’s Irish and Italians will be published by NYU Press in July.

Ayahuasca and the spiritual natives

—Brett Hendrickson 

What do Lindsay Lohan, Sting, and hundreds of Brooklyn hipsters have in common besides their glowing personalities? They all sing the praises of ayahuasca, a hallucinogenic and psychedelic brew that has long been used by indigenous Amazonian groups. Ayahuasca sends its consumers into throes of reverie and feelings of spiritual connectedness. It also causes bouts of vomiting, which users lift up as part of the cathartic experience—the “ayahuasca cleanse.”

North American and European spiritual tourists being treated by a Peruvian shaman.

In its original Amazonian context, ayahuasca use is an integral part of the trances that shamans enter to carry out powerful transactions between waking life and other levels of their reality. The impetus for most of these trance journeys and transactions is healing of one sort or another, whether this be physical recovery from illness or the restoration of ruptured social norms. Shaman specialists take the ayahuasca in order to enter the visionary realm wherein they can do the important work of re-establishing balance, harmony, and health for their patients and communities.

By the mid-twentieth century, anthropologists who studied ayahuasca-using South American tribes were trying the drug for themselves and bringing back stories of its psychedelic properties. Soon, the growing counter-culture was experimenting with ayahuasca and other psychotropic plants common in Central and South America like peyote cactus and psilocybin mushrooms. Adding significantly to these plants’ inherent hallucinogenic properties was the ostensible authenticity and simplicity of indigenous people’s wisdom and spirituality.

The last few years have witnessed a rise in the popularity of ayahuasca use both on ethno-tourist jaunts to Peru, Ecuador, and Brazil, and in spiritual salons dedicated to the drug in the United States. It has become especially trendy among creative types like musicians and writers and also with young urbanites who might self-identify as spiritual seekers. Like-minded people have taken advantage of online social networking to gather with shaman/entrepreneurs who provide not only the ayahuasca but also a guided tour into a commodified form of indigenous spirituality.

A recent story in the New York Times describes such a meet-up in Brooklyn that featured a Colombian shaman, cups of ayahuasca, barf buckets, candlelight, chanting, drumming, and a $150 price tag. Others are not content with this kind of dabbling and have taken the plunge to remote South America to learn to have even more authentic experiences and perhaps become shamans themselves. A recent profile of one such individual describes a young Jewish man from Williamsburg who made various trips to the Amazon and the Caribbean where he received a new name from indigenous masters: Turey Tekina (allegedly “Sky Singer” in Quechua). After many spiritual adventures and self-discoveries, he “returned to Brooklyn, and turned his apartment into a temple for [ayahuasca] ceremonies. He has a steady flow of regular and new clients, all who learn of him through word of mouth.”

The history of Anglo-Americans who have dabbled in—or even appropriated—the religious and traditional medicines of indigenous people is long but remarkably constant. In almost every case, the white seekers are looking for healing and wholeness, but almost always in such a way that critiques the complications and coldness of “Western” life and/or its “institutional religion”; utterly romanticizes indigenous people as simple and pure sources of unadulterated ancient wisdom; and can be easily commodified and thus sold in packages with other alternative medicines or therapies.

The latest craze for ayahuasca’s visions and vomiting is one more item in what sociologist of religion Wade Clark Roof has called America’s “spiritual marketplace.” When this particular trend passes, no doubt another will take its place in this unique form of American religiosity that privileges the sacred wisdom of the natives, as long as we can have it when—and how—we want it.

Brett Hendrickson is Assistant Professor of Religious Studies at Lafayette College (PA). He is the author of Border Medicine: A Transcultural History of Mexican American Curanderismo (NYU Press, 2014).

What lies beneath the Chapel Hill murders? More than a ‘parking dispute’

—Nadine Naber

We may never know exactly what Craig Stephen Hicks was thinking when he killed Syrian American medical student Deah Barakat, his Palestinian American wife Yusor Abu-Salha, and her sister Razan Abu-Salha. But we do know that U.S.-led war in Arab and Muslim majority countries has normalized the killing of Arabs and Muslims. It is more crucial than ever before to understand the institutionalization of racism against persons perceived to be Arab or Muslim in terms of the structures of imperial war that normalize killing and death and hold no one (other than victims themselves) accountable.

Photo: Molly Riley/UPI.

The Obama Administration may have dropped the language of the “war on terror,” but it has continued its fundamental strategy of endless war and killing in the Arab region and Muslim-majority countries such as Afghanistan and Pakistan (without evidence of criminal activity). The unconstitutional “kill list,” for instance, allows the president to authorize murders every week, waging a private war on individuals outside the authorization of Congress. Strategies like the “kill list” resolve the guilt or innocence of list members in secret and replace the judicial process (including cases involving U.S. citizens abroad) with quick and expedited killing. These and related practices, and their accompanying impunity, look something like this:

Al Ishaqi massacre, Iraq 2006: The U.S. Army rounded up and shot at least 10 civilians, including 4 women and 5 children. The Iraqis were handcuffed and shot in the head execution-style. The U.S. spokesperson’s response? “Military personnel followed proper procedures and rules of engagement and did nothing wrong.”

Drone attack, Yemen 2015: A drone killed 13-year-old Mohammad Tuaiman (whose father was killed in a 2011 drone strike), his brother, and a third man. Questioned about the incident, the CIA stated that “the 3 men were believed to be Al Qaeda,” even though it refused to confirm that the boy was an Al Qaeda militant.

The U.S.-backed Israeli killing of Palestinians reinforces the acceptability of Arab and Muslim death. In July 2014, the Israeli Defense Force killed at least 2,000 Palestinians including 500 children. It is well established that the IDF soldiers deliberately targeted civilians. The Obama Administration’s response? Explicit support for Israel.

And those left behind are forced to watch their loved ones’ bodies fall to the ground or burn like charcoal and can only conclude that, “In [the U.S. government’s] eyes, we don’t deserve to live like people in the rest of the world and we don’t have feelings or emotions or cry or feel pain like all the other humans around the world.”

Since the 1970s (when the U.S. consolidated its alliance with Israel), the corporate news media has reinforced the acceptability of Arab and Muslim death—from one-sided reporting to fostering fear of Arabs and Muslims. From Black Sunday (1977) to American Sniper (2015), Hollywood has sent one uninterrupted message: Arabs and Muslims are savage, misogynist terrorists; their lives have no value; and they deserve to die.

This interplay between the U.S. war agenda abroad and the U.S. corporate media extends directly into the lives of persons perceived to be Arab and/or Muslim in the United States. Hate crimes, firebomb attacks, bomb threats, vandalism, detention and deportation without evidence of criminal activity and more have all been well documented. Of course, such incidents escalated in the aftermath of the horrific attacks of 9/11. As the U.S. state and media beat the drums of war, anyone perceived to be Arab and/or Muslim (including Sikhs, Christian Arabs, and Arab Jews) became suspect. Muslim women who wore the headscarf became walking emblems of the state and media discourse of Islamic terrorism. Across the United States, at school, on the bus, at work, and on the streets, women wearing the headscarf have been bullied, have had their scarves torn off, and have been asked over and over why they support Al Qaeda, Saddam Hussein, terrorism, and the oppression of women.

Despite this, the corporate media (replicating the words of the police) and government officials have either reduced the North Carolina killings to a parking dispute or expressed grave confusion over why an angry white man would kill three Arab Muslim students in North Carolina execution-style. Yet the father of one of the women students stated that his son-in-law did not have any trouble with Hicks when he lived there alone. The trouble, he said, started only after Yusor, who wore a headscarf identifying her as a Muslim, moved in. Even so, Chapel Hill Mayor Mark Kleinschmidt told CNN that the community is still “struggling to understand what could have motivated Mr. Hicks to commit this crime,” adding, “It just baffles us.”

The “parking dispute” defense individualizes and exceptionalizes Hicks’ crime—in this case, through a logic that obscures the connection between whiteness, Islamophobia, and racism. And the bafflement rhetoric constructs a reality in which there are no conceivable conditions that could have potentially provoked Hicks. Both approaches deny the possible links between the killings, U.S. and Israeli military killings, the media that supports them, and the U.S. culture of militarized violence. They will also assist Hicks in attempting to avoid the more serious hate crime charge that would come with a heavy additional sentence.

Alternatively, discussions insisting on the significance of Islamophobia in this case must go beyond the call for religious tolerance and account for the projects of U.S. empire building and war that underlie Islamophobia. Contemporary Islamophobia is a form of racism and an extension of U.S.-led war abroad. As I wrote in Arab America, immigrant communities from the regions of U.S.-led war engage with U.S. racial structures, specifically anti-Arab and anti-Muslim racism, as diasporas of empire—subjects of the U.S. empire living in the empire itself. Perhaps then, we should also avoid applying the same analysis of racism across the board—as if all racisms are the same, or as if the framework #blacklivesmatter can simply be transposed onto the killing of Arab Muslim Americans. Otherwise, we risk disremembering the distinct conditions of black struggle (and black Muslims), including the systematic state-sanctioned extrajudicial killing of black people by police and vigilantes, as well as black poverty and histories of slavery and mass incarceration. It is also important to remember the distinct conditions of the war on terror, whereby anyone and everyone perceived to be Muslim (including Arab Christians and Sikhs) is a potential target.

Rest in peace and power Deah Barakat, Yusor Abu-Salha, and Razan Abu-Salha. May your loved ones find strength and support. My heart is shattered.

Nadine Naber is Associate Professor in the Gender and Women’s Studies Program at the University of Illinois at Chicago. She is the author of Arab America: Gender, Cultural Politics, and Activism (NYU Press, 2012).