The New Southern Strategy: GOP plays the race card (again)

—Matthew W. Hughey and Gregory S. Parks

At a recent Tea Party meeting in Hood County, Texas, Rafael Cruz, the father of Sen. Ted Cruz (R-Texas), made bold statements reeking of white supremacy and Christian nativism, suggesting that the U.S. is a “Christian nation,” in which the Declaration of Independence and the Constitution were a “divine revelation from God […] yet our president has the gall to tell us that this is not a Christian nation […] The United States of America was formed to honor the word of God.” And just months prior to that event, speaking to the North Texas Tea Party on behalf of his son Ted (who was then running for Senate), Rafael urged the crowd to send Obama “back to Kenya.”

In the wake of political dysfunction, gridlock, and right-wing obstruction, these statements gesture toward an important question: is Rafael Cruz merely a “bad apple” who threatens to spoil the GOP, or are his comments indicative of a larger strategic rhetoric that resonates with the lesser angels of our nature?

We suggest the latter.

Ever since the “Southern Strategy,” whereby the GOP turned its back on civil rights and began courting white Southerners who believed they were victims of a new social order of “reverse racism,” the Republican Party has employed subtle (and, as witnessed above, not so subtle) rhetoric that turns on implicit anti-Black and anti-immigrant messages. No one laid the strategy barer than former Republican Party strategist Lee Atwater in his 1981 remarks on the GOP’s racialized appeals:

You start out in 1954 by saying, “Nigger, nigger, nigger.”  By 1968 you can’t say “nigger”—that hurts you.  Backfires.  So you say stuff like forced busing, states’ rights and all that stuff.  You’re getting so abstract now [that] you’re talking about cutting taxes, and all these things you’re talking about are totally economic things and a byproduct of them is [that] blacks get hurt worse than whites.  And subconsciously maybe that is part of it.  I’m not saying that.  But I’m saying that if it is getting that abstract, and that coded, that we are doing away with the racial problem one way or the other.  You follow me—because obviously sitting around saying, “We want to cut this,” is much more abstract than even the busing thing, and a hell of a lot more abstract than “Nigger, nigger.”

Hence, Ronald Reagan’s evocation of “welfare queens” gaming the system sent a clear yet implicit message: Your taxes are high because Lyndon Johnson’s programs are funneling your money to undeserving and lazy black women. When a group supporting George H. W. Bush’s presidential campaign ran a television advertisement blaming Michael Dukakis for the violent crimes of Willie Horton (a black convicted murderer who broke into a white couple’s home while out on a weekend furlough), the message baited white fears about young black male violence. When Jesse Helms, a white senator from North Carolina, faced black challenger Harvey Gantt in 1990, Helms’s camp ran a television advertisement showing the hands of a white person crumpling a rejection letter as a voiceover intoned: “You needed that job, and you were the best-qualified. But they had to give it to a minority because of a racial quota. Is that really fair?” The ad was broadcast just a few days shy of the election and boosted Helms to victory in what had previously been a dead heat.

And in relation to Obama, we have a new Southern Strategy: Cruz’s comments fall in lockstep with GOP and Tea Party elements that have attempted to frame Obama as culturally out of place, if not legally ineligible to hold the presidency. Conservative conspiracy theorists, a small cadre of politicians (e.g., Missouri Republican Sen. Roy Blunt, or Nathan Deal, the Democrat-turned-Republican governor of Georgia), and long-established cultural tropes conflating whiteness and Christianity with Americanness all helped ensure that, by the 2008 election season, studies found U.S. residents more likely to associate American symbols with white politicians (e.g., Hillary Clinton), or even white European politicians (e.g., Tony Blair), than with Obama. In one study, American citizens who viewed an American flag subsequently showed greater implicit and explicit prejudice toward African Americans in general, and greater reluctance to vote for Obama, than those not exposed to the flag.

Simply put, Barack Obama did not fit most Americans’ implicit idea of an authentic American, and the GOP has seized upon that opportunity to engage in the latest stage of the New Southern Strategy. Playing the race card, even in implicit fashion, remains an efficacious political strategy for those on the Right.

Matthew W. Hughey is associate professor of sociology at the University of Connecticut. Gregory S. Parks is assistant professor of law at Wake Forest University School of Law.  They are co-authors of The Wrongs of the Right: Language, Race, and the Republican Party in the Age of Obama (forthcoming in 2014 from New York University Press).

Vaginal birth for twins as safe as c-section delivery

—Theresa Morris

A study published in the New England Journal of Medicine (NEJM) on October 3rd examines which is safer for the delivery of twins: planned vaginal delivery or planned cesarean section. A summary of the study, written by the Associated Press, was distributed widely through many news outlets, including National Public Radio. The title of the AP article—“Most Twins Can Be Born Without a C-Section”—gets at the major finding of the study. The authors randomly assigned women with twin pregnancies, between 32 weeks 0 days and 38 weeks 6 days gestation and with the first twin in a head-down (cephalic) position, to planned cesarean section (1,398 women) or planned vaginal delivery (1,406 women). There was no significant difference in outcomes between the two groups. That is, a planned vaginal twin delivery posed no greater risk to women and babies than a planned cesarean twin delivery.

This study is notable. As the authors of the NEJM study observe, the rate of vaginal twin delivery has plummeted in recent years. There is no doubt that part of this decrease is due to the publication of findings from the Term Breech Trial, which found worse outcomes for babies presenting in breech (head-up) position who were delivered vaginally. Although a follow-up study published in 2004 found that outcomes at age two for babies born vaginally were no different from those for babies born by c-section, planned vaginal breech delivery had already been greatly curtailed worldwide.

The Term Breech Trial affected twin deliveries because many second twins (i.e., the twin delivered second) present in a breech position. Although the Term Breech Trial included only singleton pregnancies, maternity clinicians I interviewed for the research in my book indicated that many obstetricians stopped offering vaginal twin deliveries when the second twin was presenting breech because, after the publication of the trial’s findings, they believed they would be sued for malpractice if there were a bad outcome. The NEJM study published on October 3rd, whose authors include five authors of the Term Breech Trial publications, is a welcome corrective to obstetricians’ defensive practice of delivering most twins by c-section.

This study is good news for women pregnant with twins. If vaginal delivery is not being presented to them as an option, they should bring this publicly available article to their next prenatal appointment.

Theresa Morris is Professor of Sociology at Trinity College in Hartford, Connecticut. She is the author of Cut It Out: The C-Section Epidemic in America (NYU Press, October 2013).

NYU Press authors sweep top prizes in American studies

We are thrilled to announce that Lisa Cacho’s Social Death: Racialized Rightlessness and the Criminalization of the Unprotected has been selected by the American Studies Association (ASA) to receive the 2013 John Hope Franklin Publication Prize, which celebrates the best published book in American studies. This book is part of the Nation of Newcomers series.

We’re also honored to share that Kyla Wazana Tompkins’ Racial Indigestion: Eating Bodies in the 19th Century has won the other top prize awarded by the ASA, the 2013 Lora Romero First Book Publication Prize, which identifies the best first book in American studies that highlights intersections of race with gender, class, sexuality, and/or the nation. Racial Indigestion—part of the America and the Long 19th Century series and the American Literatures Initiative—was also awarded the 2013 Association for the Study of Food and Society Book Award earlier this year.

Congratulations to the authors, editors, and everyone who has worked on these books!

Breaking Bad breakdown: Deserving denouement

—Jason Mittell

[A longer version of this article originally appeared on the media and cultural studies blog, Antenna.]

What do we want from a finale? Should it be a spectacular episode that serves as the dramatic peak of the series? Should it be like any other episode of the series, only more so? Should it be surprising, shocking, or transformative? Or should it offer closure?

For me, the main thing that I’m looking for in any finale is for a series to be true to itself, ending in the way it needs to conclude to deliver consistency, offering the type of ending that its beginning and middle demand. That changes based on the series, obviously, but it’s what makes Six Feet Under’s emphasis on mortality and The Wire’s portrayal of the endless cycle of urban crime and decay so effective, and why Battlestar Galactica’s final act turn toward mysticism (and that goofy robot epilogue) felt like such a betrayal to many fans. And it’s why finales like The Sopranos and Lost divide viewers, as the finales cater to one aspect of each series at the expense of other facets that some fans were much more invested in.

“Felina” delivered the ending that Breaking Bad needed by emphasizing closure over surprise. In many ways, it was predictable, with fans guessing many of the plot developments—read through the cockamamie theories fans claimed on Linda Holmes’s site and you’ll see pretty much everything that happened in “Felina” listed there (alongside many more predictions that did not come to pass). For a series that often thrived on delivering “holy shit” moments of narrative spectacle, the finale was quite straightforward and direct.

The big shocks and surprises were to be found in episodes leading up to this one, especially the brilliant “Ozymandias”; since then, we’ve gotten the denouement to Walt’s story, his last attempt to make his journey mean something. It’s strange to think that an episode that concludes with a robot machine gun taking down half a dozen Nazis feels like a mellow epilogue, but emotionally it was this season’s least tense and intense episode. Instead, Walt returned home a beaten-down man, lacking the emotional intensity that drove him up the criminal ladder, but driven by a plan that he had just enough energy to complete. Given that the series premise was built on the necessity of a character arc building toward finality, and that it began with that character receiving a death sentence, we always knew that closure was likely to come in the form of Walt’s death, and this episode simply showed us how his final moments played out in satisfying fashion.

While Walt’s mission to destroy the remnants of his business occupies the bulk of the episode’s plot, its emotional centerpiece is his meeting with Skyler. As always, [Bryan] Cranston and Anna Gunn make the scene crackle, conveying both the bonds and the fissures between the two characters that make their final goodbye neither reconciliation nor retribution. His visit is one of the more selfless acts we have seen from him. He has no illusions that he’ll resolve things or get her back on his side; he simply wants to give her two things. First, the coordinates of Hank and Gomie’s grave, offered to provide closure to Marie and others, as well as to assuage Walt’s guilt over this one act of violence he caused but could not stop. Second, the closest he’ll ever come to an apology—after starting with what sounds like a typical rationalization about “the things I’ve done,” which Skyler rightly attacks as another deceptive excuse about family, Walt finally admits the truth: “I did it for me. I liked it. I was good at it. And I was alive.” This is not easy for Walt to say; it is his most brutal penance, having to admit his own selfishness to both his wife and himself. But in the end, Skyler returns the favor with the gift of a final moment with Holly, the child Walt used as a bargaining chip the last time they spoke, as she remembers the part of him that still loved his children despite his abusive treatment of them. And Walt takes his own moment to observe Flynn from afar, looking at a child who rightly despises him but whom he still loves. When I look back on this finale, this will be the scene I replay in my mind.

Of course the episode and series climax is the final confrontation. I fully believe that Walt intends to kill Jesse alongside the Nazis, as he fully believes that his protégé has both betrayed him and stolen his formula—and based on Badger’s testimony, the student has surpassed the teacher. Many fans speculated that Walt sought to “save Jesse,” but up until he sees his former partner in chained servitude, Jesse is an equal target of his wrath. Yet again, Cranston conveys Walt’s emotional shifts wordlessly, as Walt devises a plan to spare Jesse from his robo-gun once he sees that Jesse is yet again a victim of men with larger egos and more malice than he has. While this final confrontation was a satisfying moment of Walt putting the monsters he had unleashed back in the box, it was almost entirely suspense-free. I never doubted that Walt would successfully kill the Nazis and spare Jesse, that he had poisoned Lydia, or that Jesse would refuse to pull the trigger on Walt. These were the moral necessities of a well-crafted tale; Breaking Bad was done playing games with twists and surprises, ready instead to allow Walt to sacrifice himself to put down the monsters he had unleashed. Yet the scene was constructed to create suspense over whether Walt would reach the remote control in time, creating a rare moment of failed tension in the series: I awaited the emotional confrontation between Walt and Jesse without ever doubting the outcome or feeling real tension about what might happen.

The “how it happened” was quite satisfying, however. I saw the robo-gun as an homage to one of my favorite Breaking Bad scenes: in “Four Days Out,” when Jesse thinks Walt is building a robot to engineer their rescue. This time he does, and it works in an appropriately macabre and darkly funny payoff: the excessive gunfire mirrors Walt’s frequent insistence on maximizing his inventions (the overpowered magnets, capturing every last drop of methylamine, etc.), and it keeps firing blanks as Kenny’s body receives an endless massage. Although Jesse is no cold-blooded killer, killing Todd was a line he was happy to cross in payback for months of torture and for Todd’s own heartless killings of Drew Sharp and Andrea. However, when given the chance to kill Walt, Jesse takes a pass; instead he forces Walt to admit that Jesse killing him is what Walt wants, and then denies him that pleasure. When Jesse sees that Walt has been shot, he decides that leaving him to die alone is what Walt deserves, especially given what happened with Jane.

What Walt deserves matters in Breaking Bad. I’m reminded of an important scene in the penultimate episode of The Wire, when one character wonders why another has plotted to kill him, asking what he’s done to deserve it (keeping names vague if you haven’t seen it). The would-be killer’s reply quotes the film Unforgiven: “Deserve’s got nothing to do with it.” But on Breaking Bad, deserve’s got everything to do with it, as it has always been a tale of morality and consequences. Jesse deserves his freedom, even though he is a broken-down shell of who he was—and while we want to know what’s next for him, I’m content with the openness that allows me to imagine him driving to Alaska and becoming a carpenter, perhaps after rescuing Brock and Lydia’s daughter from orphanhood.

Walt deserves to die, and we deserve to see it. The final musical cue, in a series that excelled at using them, was Badfinger’s “Baby Blue,” another classic like “Crystal Blue Persuasion” that the producers had probably been hanging onto for years. The opening line of the song is as essential as its color-specific romance: “Guess I got what I deserve.” In this final glorious sequence, Walt gets to die in the lab as the music sings a love song to chemistry—which, in this context, serves as an ode to his own talents in perfecting the Baby Blue. His tour around the lab has prompted some debate as to what Walt is doing: is he strategically leaving his bloody fingerprints to claim ownership, a sort of turf-claiming mark of Heisenberg Was Here? I think not; rather, Walt is admiring the precision and craft of the lab, both as a testament to the pedagogical prowess that yielded Jesse’s talents and as his natural habitat, where he “felt alive,” as he told Skyler earlier. Set to a soundtrack romanticizing his own greatness, it’s a final moment of pride and arrogance that Walt seizes to overshadow all the carnage he has caused, an acceptance that, more than for his family, he did it for the chemistry.

“Felina” is far from Breaking Bad’s best episode, but it is the conclusion that the series and its viewers deserve. I think it will play even better both for viewers bingeing the season in quick succession and upon rewatch without the trappings of anticipation, hype, and suspense. Jesse escapes, Skyler and her family survive, and Walt and his one-time minions die. It all happens with less emotion and drama than what we’ve come to expect from the series, but given the strain of the journey up to this point, we’re as emotionally drained as the characters. So a low-key bloodbath is an appropriate way to exit this wonderful trip.

Jason Mittell is Professor of Film & Media Culture and American Studies at Middlebury College. He is the co-editor (with Ethan Thompson) of How to Watch Television (NYU Press, 2013).

Is the Miss America pageant good or bad for women?

Earlier this month, the NYT’s “Room for Debate” blog featured a thoughtful discussion on the Miss America pageant and its role in today’s society. Now that this year’s pageant is over, we asked Megan Seely, author of Fight Like a Girl: How to Be a Fearless Feminist, to weigh in on the controversy in light of recent racial backlash faced by its first Indian-American winner. Read her piece below.

Miss America Colleen Kay Hutchins (R) looking at her trophy, September 1952.

I often hear the Miss America pageant defended as a great source of scholarship funds. Indeed, this year’s winner will reportedly receive about $50,000 in scholarship money. But given that women receive fewer academic and merit scholarships than their male counterparts, despite overall higher grades; that there are fewer role models for women and girls in education, particularly within science, technology, engineering, and math (STEM); and that women continue to confront pay inequity once they are in their jobs and careers, it is offensive that we defend and celebrate a ‘scholarship program’ whose main requirement is that women meet a specific and narrow definition of physical beauty.

Some have recently argued that there have been gains for women of color, citing the very small handful of women of color who have been crowned. It should be noted that we have never had a Latina Miss America or a transgender Miss America. And until the 1930s, the pageant’s official rulebook required contestants to be “of the white race.” That rule may be officially gone today, but the expectation clearly remains, judging by the blatantly racist response to the 2014 winner, Nina Davuluri.

There are women who love the pageant. There are participants who defend and celebrate what Miss America has meant to their lives. They argue that participating in, watching, or believing in the pageant is their choice. I don’t mean to dismiss these perspectives. But I do question what choice means in a culture that so deeply holds and reinforces a beauty standard that is then required for participation in the pageant. Miss America deviates little from the thin, tall, heteronormative, and, more often than not, white ideal of beauty. Though a few have successfully challenged the whiteness of the pageant, very little has changed. If we teach women and girls that their value lies in their physical appearance, then it is no wonder that many turn to Miss America for validation.

I would hope that the winners would use their public platform to create change and impact the world. But changing the pageant and the culture in which it exists is a far greater challenge. While women of color who become Miss America certainly defy the stereotypes of American beauty, they often do so while reinforcing expectations of body size and appearance. There is still little, if any, diversity in regard to race, ethnicity, cultural identity, body size, gender, sexuality, age, and ability. As long as this remains true, and as long as all women do not see themselves routinely represented and valued in every aspect of society, we cannot justify the existence of this overtly misogynistic institution, even if we celebrate the few women who have managed to stand out within it. We cannot ignore the negative and harmful impacts this event has on the thousands upon thousands of women of all ages who struggle to find their worth in a culture that emphasizes and rewards women’s physical appearance above all else. Girls are watching; we owe them more.

Megan Seely is a third wave feminist and activist, and author of Fight Like a Girl: How to Be a Fearless Feminist (NYU Press, 2007). She lives and teaches in northern California.

Fall books available on NetGalley

We’ve got quite a few gems in our NetGalley catalog this fall, all available for advance review now. Book reviewers, journalists, bloggers, librarians, professors, and booksellers: we welcome you to submit a request!

Not familiar with NetGalley? Learn more about how it works.

Buzz: Urban Beekeeping and the Power of the Bee by Lisa Jean Moore and Mary Kosut (September 27, 2013)

We think Booklist said it best: “In this fascinating blend of sociology, ecology, ethnographic research, and personal memoir, the authors range through all of the aspects of the human relationship with the honeybee.”

Ever thought of honeybees as sexy? You might after watching Mary Kosut discuss the sensual nature of beekeeping.


Cut It Out: The C-Section Epidemic in America by Theresa Morris (October 7, 2013)

In Cut It Out, Theresa Morris offers a riveting and comprehensive look at this little-known epidemic, as well as concrete solutions “that deserve the attention of policymakers” (Publishers Weekly starred review).

C-sections are just as safe as vaginal births, right? Not true, says Theresa Morris. Watch her discuss this and other misconceptions on our YouTube channel.


Hanukkah in America: A History by Dianne Ashton (October 14, 2013)

Hanukkah will fall on Thanksgiving this year for the first time since 1888—and the last time for another 70,000 years. Brush up on your knowledge of the holiday in time to celebrate the once-in-an-eternity event. Publishers Weekly, in another starred review, promises a “scholarly but accessible guide to the evolution of the Festival of Lights in America.”

Stay tuned for our interview with the author!

Browse all of our e-galleys available for review on NetGalley.

Constitution Day: 5 books to read now

September 17th is Constitution Day – a federally recognized day to celebrate and teach about the United States Constitution. But what are the proper “texts” for this day of teaching?

To start, we’ve selected a short list of recent NYU Press books we think every citizen should read this year. But, there are certainly others. What’s on your list? Let us know in the comments section!

5 books for Constitution Day

Why Jury Duty Matters: A Citizen’s Guide to Constitutional Action by Andrew Guthrie Ferguson

Jury duty is a constitutional duty—and a core responsibility of citizenship! The first book written for jurors, Why Jury Duty Matters provides readers with an understanding of the constitutional value of jury duty. (Also, be sure to read the author’s excellent piece in The Atlantic on ways to make the Constitution relevant to our daily lives.)


The Embattled Constitution
Edited by Norman Dorsen, with Catharine DeJulio

The book presents a collection of the James Madison lectures delivered at the NYU School of Law. The result is a fascinating look into the minds of the judges who interpret, apply, and give meaning to our “embattled Constitution.”


American Founding Son: John Bingham and the Invention of the Fourteenth Amendment by Gerard N. Magliocca

This book sheds light on John Bingham, the father of the Fourteenth Amendment, who helped put a guarantee of fundamental rights and equality to all Americans into the U.S. Constitution.


Government by Dissent: Protest, Resistance, and Radical Democratic Thought in the Early American Republic by Robert W.T. Martin

Democracy is the rule of the people. But what exactly does it mean for a people to rule? The American political radicals of the 1790s understood, articulated, and defended the crucial necessity of dissent to democracy. This is their story.


Bonds of Citizenship: Law and the Labors of Emancipation by Hoang Gia Phan

In this study of literature and law from the Constitutional founding through the Civil War, Hoang Gia Phan demonstrates how citizenship and civic culture were transformed by antebellum debates over slavery, free labor, and national Union.


The secret history of gay marriage

Excerpted from As Long as We Both Shall Love: The White Wedding in Postwar America by Karen M. Dunak.

On October 10, 1987, nearly 7,000 people witnessed a wedding on the National Mall in Washington, DC. Men and women cheered and threw rice and confetti as family, friends, and community members took part in the largest mass wedding in American history. After the celebrants exchanged rings and were pronounced newlywed, guests released hundreds of balloons into the air. Brides and grooms, dressed in formal wedding attire, cried and embraced after an “emotional and festive” ceremony. Like so many brides and grooms, participants identified the wedding day as one of the happiest, most meaningful days of their lives.

But this was no ordinary wedding. And these were not typical brides and grooms. This wedding held special significance for its participants. Beyond the “mass” nature of the celebration, something else was unique. The newlyweds that fall Saturday paired off as brides and brides, grooms and grooms. “The Wedding,” as it came to be known, marked the symbolic beginning of nearly 2,000 same-sex marriages. Rejecting the idea that a wedding—and by implication, a marriage—should have one male and one female participant, the grooms and their grooms, the brides and their brides presented a striking picture. A wedding, a fairly conventional affair, became a site of radical protest. Layered in meaning, “The Wedding” celebrated the personal commitments of those being wed. At the same time, it was a direct political act that challenged the legal, religious, and social barriers against same-sex relationships. Like couples before them, gay men and lesbians found they could use their weddings to make a statement about the world and the place of their relationship in it.

Designed to reflect an alternative approach to love and marriage, “The Wedding,” part of the 1987 March on Washington for Gay and Lesbian Rights, rejected the narrow definition of marriage that limited the relationship to members of the opposite sex. “The Wedding” likewise rejected a narrow view of the standard wedding celebration. Dina Bachelor, metaphysical minister, hypnotherapist, and “Wedding” officiant, designed a new-age style ceremony. Bachelor recognized the uniqueness of the celebration and chose her words and actions carefully. Standing under a swaying arch of silver, white, and black balloons, Bachelor omitted any mention of the customary “honor and obey, till death do us part.” Including observers in the celebration, she asked witnesses to join hands and encircle the celebrants. For participants, “The Wedding” was not about fitting into a pre-arranged style. Instead, it was about expanding the celebration to include various approaches to marriage and family. Like alternative wedding celebrants of the 1960s and 1970s, same-sex partners recognized the flexibility of the wedding and used the celebration to express their views about life and love. Bachelor likewise noted the celebration’s significance and concluded the event by stating, “It matters not who we love, only that we love.”

Gay community leaders emphasized the political component of the celebration. Drawing on the activist view that the personal was political, the public pronouncement and celebration of a long-ridiculed personal lifestyle served as the ultimate political statement. Those present rejected the shame associated with their relationships and proved that many same-sex couples shared long-term, committed relationships. Courageously displaying their individual love and their membership in a community of likeminded gay men and lesbians, “Wedding” participants did not demand a social inclusion marked by assimilation or guarded emotions. Rather, they demanded full acceptance of their lifestyle and relationship choices. Reverend Troy Perry, a minister evicted from the Pentecostal Church of God for his own homosexuality and founder of the Universal Fellowship of Metropolitan Community Churches, spoke to this desire for openness and acceptance as he rallied his congregants with a shout of “Out of the closets and into the chapels!”

Hosting “The Wedding” in front of the Internal Revenue Service’s building was a symbolic choice meant to protest the tax office’s refusal to accept taxes jointly filed by same-sex couples. As activist Sue Hyde recalled, couples participated in “The Wedding” “both to protest discrimination against them and to celebrate their love and commitment to each other.” Challenging conventional views of family and marriage, groom and “Wedding” organizer Carey Junkin of Los Angeles echoed “The Wedding’s” official slogan when he said, “Love makes a family, nothing else.” Adding his own sentiment, he stated, “We won’t go back.” Marriages celebrated that day held no legal standing, but that did not diminish the emotional impact of the event. The community of couples who wed accomplished their political objective by making their private relationships part of the political discourse. The very public, very political event demanded recognition of the legitimacy of the relationship between two brides or two grooms.

As for “The Wedding” participants (composed of more male than female couples, suggesting an ongoing discomfort with weddings and marriage among politically active feminists), they expressed warm praise for the celebration, as well as a sense of anger that any members of the gay or lesbian community would criticize their decision to wed. Dressed in suits, tuxedos, and wedding gowns, albeit with little regard for normative notions of gender, the celebrants saw the day as an important turning point in their lives and relationships. Despite their unorthodox appearances, many participants noted that they would have been comfortable with an even more “traditional” ceremony. The only registered disappointment pertained to the desire that the ceremony might have been more explicit in regard to monogamy or couples’ exclusivity. The mass “Wedding” was not intended to replicate heterosexual marital relationships or wedding celebrations, but the importance given the celebration and the desire for expression of personal preference—be it for a more or even less traditional form than the ceremony before the IRS—hinted at possible similarities between same-sex weddings and their opposite-sex counterparts.

While “The Wedding” looked unlike the individual white weddings celebrated by heterosexual couples, the event incorporated familiar elements of the wedding ceremony. Most participants wore some sort of special dress; an authority figure presided over the celebration; and guests bore witness to the event. The relationships may have seemed atypical or strange in the eyes of the mainstream observer, but there could be no question as to what had transpired that October day. The familiarity of the wedding served as a valuable political tool even as it fulfilled the personal desires of same-sex couples who wished to share their lives together. For a population who had the option—admittedly the very unpleasant option—of invisibility, the choice to make public the intimacies of private life was a political statement in and of itself.

Same-sex weddings transcended the “difference vs. accommodation” debates often raised in subcultural groups and hotly contested within the queer community. In the years following the celebration of “The Wedding,” gay men and lesbians expressed a blend of intentions and motivations with their celebrations. The flexibility of the wedding, continually tested by the heterosexual marrying population in the decades since World War II, likewise served the personal as well as the political objectives of queer couples. Moving from the mass to the individual, weddings legitimated and celebrated relationships that had long been deemed wrong or strange and had thus been cloaked in secrecy. Such celebrations allowed men and women to celebrate their private lives in a public style and with the sanction of chosen and accepting family and community members. By publicly celebrating their relationships, queers challenged a political system that refused to recognize their right to wed.

Like the weddings of those before them, the white weddings hosted by same-sex couples in the 1990s and in the early years of the new century seemingly adhered to a standardized form of celebration. The similarity between opposite-sex and same-sex events, of course, was noticeable in the continued reliance on a wedding industry and adherence to wedding norms: formal dress, recitation of vows, and elaborate receptions. On the surface, this suggested a kind of queer accommodation to the standard form. Even though a gay couple might purchase a cake topper that featured two grooms, the couple still purchased a cake topper. The prerequisites of a wedding had tremendous staying power. But same-sex couples shaped their weddings in ways specific to their relationships and cultural identifications. Ceremonial alteration and amendment, whether slight or pronounced, reflected the beliefs and desires of same-sex couples.

Queer couples, like other brides and grooms, negotiated tensions created by family, cost, and the overall wedding planning procedure. Unlike heterosexual couples, same-sex brides and grooms challenged existing authority in the very act of celebrating a wedding. Couples celebrated the communities from which they came, to which they currently belonged, and those they created, if only for their weddings. They exerted individual authority over their ceremonies not only in their selection of music, dress, and wedding style, but also in their very direct rejection of a legal system that denied them access to the rights and privileges of marriage. They publicly celebrated relationships long denied public recognition. Weddings could be and could say whatever the celebrating couples wished. As various states began to recognize same-sex marriages, acceptance of same-sex unions extended even beyond the queer community. Weddings both affirmed the political victory achieved by those who had long advocated on behalf of equal rights and marked the triumph of personalization in American wedding culture.

Read the rest of this entry at Salon.com.

Pinning dreams, perpetuating stereotypes

—Karen M. Dunak

[This piece originally appeared on the author's blog here.]

I recently read an article about the seemingly widespread practice of creating wedding-related Pinterest boards before a wedding is planned, an engagement proposed, or a partner even identified. I’ve seen some of this impulse toward “When I…” boards on the social media site. Sometimes the speculation is “When I have a baby,” or “When I buy a home,” and so naturally “When I get married” fits as the kind of category for which one might plan. But for some reason, the wedding seems a more problematic hypothetical, and I do think the process for planning without any sort of end date in mind (or end mate, for that matter) contributes to that. When people critique American wedding culture, this is what they’re looking at. Too many women – and the suggestion is that this is primarily a female phenomenon – focus more on what they want their wedding to look like than on what they want their partner or their marriage to be like. What’s more, they don’t care what that partner might desire for his/her wedding day. The bride’s day will be the bride’s day.

As a whole, these “When I…” boards give me pause, but I worried that I might be too knee-jerk in my critique. Trying to think about the process of “pinning” a dream wedding in a historical context, I wondered if this is in some way the 21st century equivalent of the hope chest. During the 19th century and well into the post-World War II period, many young women collected goods for marriage in such chests. From girlhood, a woman stockpiled linens, towels, flatware, and various other domestic goods for her future home. Year by year, she added things to her collection. The expectation was that she would one day marry and thus would need to be prepared. For most women, that expectation was right on. Unless well-educated or raised in material privilege, the best means of support for a woman was to be found through a union with a man. And of course, social and cultural expectations pointed directly to marriage, home, and family life as the culmination of success for American women.

1947 Hope Chest Advertisement

Ultimately, though, I have to conclude that preparing for a home – and particularly in the historical context – was a different thing than preparing for a wedding. The circumstances under which young women filled their hope chests veered far more toward the practical than the aesthetic (and, in fact, the emerging domestic aesthetic that tended toward the trendy or the store-bought – a particularly popular look in the newly developing postwar suburbs – helped make the keeping of a hope chest an increasingly outdated process from the 1950s on). In a time when brides and grooms couldn’t depend on a string of showers or the presentation of elaborate wedding gifts – or cash, as many prefer now – to mark the start of their union, they had to take responsibility for material and financial support during the early years of marriage before they entered into that relationship. For men, that often meant securing steady employment and the start of a nest egg. For women, that meant preparation of the necessities required of a home (and often steady employment and nest egg contribution until at least the birth of the first child, if not beyond).

In my research, I’ve read about many women who dreamed about their weddings since childhood. And clearly this is a popular trope in contemporary wedding culture. In one personal essay I read, a woman admitted to keeping a wedding binder during her 1980s girlhood, in which she included advertisements and articles from bridal magazines, all in anticipation of the wedding she would one day celebrate. So the practices found on Pinterest aren’t brand new. They’re just more public. I suppose so it goes in this increasingly public age – but this, I think, is where my discomfort lies. One woman’s willingness to make public her private wedding dreams allows too easily for the perpetuation of the stereotype that this is what all women are doing (or want to be doing).

Aside from the tried and true critiques we might make about overeager wedding pinners (they validate the power of what many critics call the “wedding-industrial complex”; they reveal the material undercurrent that marks so many elements of American life and culture; they contribute to the normalization and acceptance of narcissism; etc.), my biggest problem with the pinning going on here is how it further standardizes and entrenches the gendered division of unpaid labor in American life and romantic relationships for all women – even those without the time or inclination to imagine a fictive celebration.

Planning a wedding (a real wedding, not a Pinterest dream wedding) takes time – which can manifest as time away from work, family, friends, fitness, hobbies, you name it. And it is work. It falls into that category of unpaid labor that is often celebrated for continuing rituals, maintaining tradition, fostering family ties, and by which women are often judged, but is work that is virtually never rewarded or respected in the way any kind of paid labor very clearly is (see “paid” descriptor). What’s more, when it’s a labor assumed to be universally enjoyed by women, women can find themselves alone in completing it or condemned for not being enthralled with it. If Pinners are willing to see their visions through and take on labor of this kind (and, I suppose, are “lucky” enough to find partners who stay out of their way), that’s fine. But the possibility that all women might be expected to do the same – and might be viewed as a single monolithic bloc – is more troubling.

Karen M. Dunak is Assistant Professor of History at Muskingum University in New Concord, Ohio. She is the author of As Long as We Both Shall Love: The White Wedding in Postwar America (NYU Press, 2013).

We are all George Zimmerman: Trayvon Martin and the youth control complex

—Victor Rios

In ten years of studying inner-city boys labeled at-risk by law enforcement and schools, I have found that poor Black Americans and Latinos are often deemed culprits, lost causes, and menaces to society. One Black American boy in my study reported that on the day he was born, nurses at the hospital commented in front of his mother, “Poor boy. He’s destined to become a dope dealer or drug addict just like his mother.”

In my observations at schools, I witnessed a teacher tell a truant Latino eighth-grade boy, “You have a prison cell waiting for when you turn eighteen.” On the streets, I witnessed a police officer tell a seventeen-year-old Black American boy, “We want you to kill each other off, that way we don’t have to deal with locking you up.” Examples like these are countless in the lives of the more than 200 boys I have interviewed and shadowed.

The reality is that poor urban boys grow up surrounded by a system of punitive social control that sees them as deficient students and criminal suspects that must be controlled and contained from young ages. School officials, law enforcement personnel, neighborhood watch volunteers, store clerks, jurors, and everyday citizens perceive and interact with these young people with fear, disdain, and circumspection. This youth control complex, a collective system of negative treatment based on racialized fears of young people of color, is responsible for the criminalization and systematic stripping of dignity that many young people like Trayvon Martin encounter on a day-to-day basis.

George Zimmerman is not just an outlying, overzealous rogue vigilante who hunted down an innocent Black American boy. He very much represents mainstream America. We–schools, law enforcement, the media, intellectuals, politicians, and everyday citizens–are all involved in a system that creates and perpetuates fear of, and casts out, a vulnerable, marginalized segment of our population. These young people grow up feeling hopeless, undignified, and failed by the system.

As Ronny, a seventeen-year-old Black American boy I followed for three years, puts it: “It’s like I’m invisible, like I don’t exist, like people see me as good for nothing but to be in jail.” This youth control complex produces social death among many young people of color; they are alive but are not recognized as fellow human beings with the right to live productive lives. Instead we rely on surveillance, policing, prison bars, and stand-your-ground laws to control, contain, incapacitate, and eliminate them.

The difference between George Zimmerman and the rest of us is that he pulled the trigger. The rest of us simply continue, mundanely, to mete out punitive treatment, stigma, and the systematic stripping of dignity to young people of color, slowly killing their souls and their right to pursue happiness. By the time we sit in a courtroom to determine whether Trayvon Martin’s life warrants imposing a sanction on George Zimmerman, five white jurors have already been socialized and acculturated to criminalize young racialized bodies and to view the victim as a culprit.

Politicians and school and law enforcement administrators (including those that supervise neighborhood watch programs) must demand that individuals who interact with a diverse population be trained in understanding their cognitive biases and how these inform the treatment they impose on others. We must train ourselves to recognize and eliminate our inclinations to perceive and treat young people of color as suspects and instead treat them with the dignity they deserve. Listening to the voices of young people themselves who have lived a lifetime of encounters with the youth control complex might be a good first step.

Dr. Victor Rios is Professor of Sociology at the University of California, Santa Barbara. He is the author of Punished: Policing the Lives of Black and Latino Boys (NYU Press, 2011) and Street Life: Poverty, Gangs, and a Ph.D.

The meaning of July 4th

In celebration of the July Fourth holiday, we asked a few of our authors to discuss what Independence Day means to them (a la Beacon Broadside’s awesome blog post for the holiday last year). Three of our authors responded with mixed emotions, musings of its unsung heroes, and stories of forgotten holidays. Read their pieces below.

Beverly C. Tomek is author of Colonization and Its Discontents: Emancipation, Emigration, and Antislavery in Antebellum Pennsylvania (NYU Press, 2011).

July 4th brings mixed feelings for those of us who focus on issues of social justice in our scholarship and activism. On the one hand, it is a day of celebration, but on the other, it is a stark reminder of the contrast between the ideal and the real. Given the amount of time I spend reading and writing about antislavery, any mention of Independence Day takes me back to the antebellum years – to a time when a small group of dedicated reformers pushed this nation to broaden the scope of liberty.

July 4th was an important and highly contested date for the antislavery movement. It began when the American Colonization Society (ACS) tried to claim the patriotic holiday. Starting in the 1820s, the ACS encouraged ministers of various denominations across the young nation to take up special Independence Day collections to be used to send blacks to Liberia, where they were to help spread Christianity and American liberty to the “Dark Continent.” Most African Americans argued that the ACS’s scheme stood for anything but liberty, but most whites who opposed slavery in the 1820s and beyond believed in the Society’s mission.

On July 4, 1829, however, a white abolitionist named William Lloyd Garrison sparked an antislavery revolution by reclaiming the holiday for advocates of freedom. By that point, July 4th colonization collections had become widespread, and Garrison had been asked to deliver a speech in support of the movement. When he rose to speak, to the shock of his listeners, instead of celebrating American freedom and independence and supporting colonization, he chastised a nation that kept human beings in bondage. Slavery made a mockery of American ideals, he argued, and he set out to show his listeners that human bondage must end, and that its end must not depend upon colonization.

Though Garrison’s speech sparked what would become known as the “immediate abolition” movement, the most famous July 4 speech of all came from black abolitionist Frederick Douglass. An escaped slave who had gained international celebrity status, Douglass was asked to speak at an Independence Day commemoration in Rochester, New York, in 1852. He electrified the audience when he asked “What have I, or those I represent, to do with your national independence? Are the great principles of political freedom and of natural justice, embodied in that Declaration of Independence, extended to us?” The answer was no:

“I say it with a sad sense of the disparity between us. I am not included within the pale of this glorious anniversary! Your high independence only reveals the immeasurable distance between us. The blessings in which you, this day, rejoice, are not enjoyed in common. The rich inheritance of justice, liberty, prosperity and independence, bequeathed by your fathers, is shared by you, not by me. The sunlight that brought light and healing to you, has brought stripes and death to me. This Fourth July is yours, not mine. You may rejoice, I must mourn. . . . I can to-day take up the plaintive lament of a peeled and woe-smitten people!”

Frederick Douglass’s address, “The Meaning of July Fourth for the Negro” (5 July 1852), is one of the most famous speeches in U.S. history.  It highlighted the disparity between the white Americans who celebrated “freedom” and “independence” on that date and the black Americans who lived their lives in chains, forced to work for the benefit of others.

Throughout much of the early nineteenth century, African Americans chose to celebrate West Indies Emancipation Day on August 1 instead of Independence Day on July 4th. That began to change when the Civil War became a war for freedom after the Emancipation Proclamation. Even so, especially after the regression that accompanied the end of Reconstruction, July 4th retained some of its tarnish for many disenfranchised Americans.

With recent Supreme Court rulings, this year the 4th of July will seem a little more legitimate to some who continue to labor for social justice, though true liberty remains elusive for far too many in the U.S. even today.  Perhaps someday all Americans will be able to share equally in the festivities and enjoy the day without reservation, but the fight is far from over.

Donna T. Haverty-Stacke is author of America’s Forgotten Holiday: May Day and Nationalism, 1867-1960 ​(NYU Press, 2008).

Every July 4th, Americans celebrate the moment in 1776 when the colonists declared their independence from Britain. Throughout America’s history different individuals and groups, including abolitionists, women’s rights activists, and civil rights crusaders, have claimed the promise of that founding moment voiced so eloquently in the Declaration of Independence: the promise of equality and freedom.

The same was true for American workers and political radicals. For several decades beginning in 1886, they made their demand for freedom not just on July 4th, but also on May 1st. From the beginning this new May Day holiday included many concerns, ranging from shorter work hours to basic union recognition to the broader goals of its socialist adherents for the overthrow of the capitalist order of the late-nineteenth and early-twentieth centuries. This annual event was thus complex, multifaceted and controversial.

Anarchists and many socialists refused to embrace nationalist references in their May 1st celebrations, pointedly referring to May Day as International Labor Day instead. But many other workers, and socialists, continued to claim the day’s American roots. They proudly marched in parades carrying both the red flag and the Stars and Stripes and demanded that the democratic promises of 1776 be extended into industry. Such expressions of a hybrid radical-American identity were not well received. Bans on the use of red flags complicated the actions of May Day’s supporters for decades. And then there were the various attempts to replace May Day demonstrations altogether with events like National Child Health Day during the 1920s and Loyalty Day during the 1950s.

May Day never became a national holiday. It is now virtually unknown in the land of its birth in large part because of the Cold War. America’s Forgotten Holiday tells the story of this celebration of dissent and the process of popular amnesia through which it has been displaced from the American calendar. It reminds us how many Americans at the turn of the twentieth century and beyond believed that the promise of 1776 was yet to be realized for the hundreds of thousands of industrial workers who remained fettered by the invisible chains of capitalism. This book recovers their struggle to effect that liberation through the politics of protest and popular celebration.

Gerard N. Magliocca is author of American Founding Son: John Bingham and the Invention of the Fourteenth Amendment (forthcoming from NYU Press, 2013).

This week when we celebrate the Fourth of July, we celebrate Jefferson’s statement that “all men are created equal.” But the Constitution did not include that ideal until John Bingham penned the Equal Protection Clause of the Fourteenth Amendment.

Congressman Bingham was, like Abraham Lincoln, inspired by what he called “our sublime Declaration of Independence” and frequently invoked that sacred text during the struggle against slavery. In 1859, he told the House of Representatives that the United States was not founded to further oppression, and that:

“It was not to this end that the fathers of the Republic put forth their great Declaration, and in defense of it walked through the fire and storm and darkness of a seven years’ war…It was not to this end that, after the victory was thus achieved, those brave old men, with the dust of Yorktown yet fresh upon their brows, and the blood of Yorktown yet fresh upon their garments, proclaimed to the world, and asked it to be held in everlasting remembrance that the rights for which America had contended were the rights of human nature.”

Seven years later, Bingham took the lead in revising the Constitution after the devastation of the Civil War, and in that process he wrote into our law the guarantee that no state shall “deny to any person the equal protection of the laws.”  The struggle to make equality a reality goes on, of course, but John Bingham is one of the unheralded heroes of that story.

Should Superman kill?

—Nickie D. Phillips and Staci Strobl

[This article originally appeared in The Wall Street Journal online. Read it here.]

Warning: this essay contains spoilers from the film “Man of Steel.”

Man of Steel

The reviews of the latest superhero blockbuster, “Man of Steel,” are in, and while they are generally positive, the movie has ignited a firestorm of debate among fans about the appropriateness of Superman’s actions—most notably, at the end of the climactic battle between Superman and the villainous Zod, when Superman kills Zod with a quick snap of his neck. On his blog Thrillbent, prominent comic book writer Mark Waid (“Daredevil”) expresses his dismay at Superman’s actions, describing them as a “total-fail moment.”

Waid refers to the widespread sentiment that what traditionally has made Superman heroic is that he “does not kill.” His post provoked a slew of comments, including disagreement from fans who justify Superman’s behavior as proportionate and necessary to restore social order in the wake of Zod’s attempted genocide. Judging from the reaction of some moviegoers, in our theater at least, the death scene was satisfying and elicited approving cheers.

How superheroes determine whether to kill villains and other characters along their paths to justice is a concept we call “deathworthiness” in our book Comic Book Crime: Truth, Justice, and the American Way (NYU Press, 2013). Deathworthiness, a term borrowed from legal discourse about the death penalty, refers to a comic hero’s self-described death penalty policy.

Comic book deathworthiness operates on a continuum, from heroes who kill only the explicitly guilty to more extreme positions in which heroes will kill anyone, even innocents, encountered along the way. Notably, most contemporary comic books, though chock-full of bloodthirsty rhetoric and flashy fight scenes, end in villain incapacitation that falls short of death. And when superheroes kill unexpectedly, as with the snapping of villain Maxwell Lord’s neck at the hands of Wonder Woman in “Wonder Woman #219,” fan controversy often ensues.

Our focus group research found that fan acceptance of a hero’s depicted notion of deathworthiness hinges primarily on two factors: the seriousness of the threat (which, incidentally, in mainstream comic books nearly always reaches apocalyptic proportions, rendering every situation dire and demanding drastic action) and the hero’s own personal moral code.

At the extreme end of the deathworthiness continuum is the “Man of Steel” incarnation of Superman. Throughout his celebrated history, Superman almost never kills, with few exceptions. One of our focus group participants put it this way: “Superman is super-boring,” because he has all the superhero powers imaginable and almost never engages in lethal violence.

Indeed, much of the discussion around Superman’s actions in “Man of Steel” points to his negligence toward innocents in Metropolis who presumably die as a result of his clash with Zod. For example, couldn’t Superman have easily taken the fight out to a desolate cornfield to avoid all that collateral damage?

Importantly, the “Man of Steel” debates reflect larger notions of justice in contemporary American society. Whether the threat is communism during the Cold War or terrorism in a post-9/11 context, discussions about deathworthiness prompt a re-examination of absolute power, the rule of law, and the difficulty of balancing public safety with individual rights. How far should a hero go to protect public safety? And what is the significance when one of the most recognized cultural signifiers of the “American way” lethally revises his way of maintaining public safety?

The killing of Zod complicates the messianic nature of the Superman origin story—a story that Warner Bros. markets to Christian congregations through its Man of Steel Ministry Resource Site. If fans are rankled by the idea of Superman killing, devout Christians may similarly find it odd to equate Jesus, who preached peace and non-violence, with this Man of Steel who deems Zod deathworthy. Taken further, the aesthetic motif of “Man of Steel” borrows more from the dark, metallurgical violence of the “God of War” videogame than from typical “no-kill” Superman comic books with their primary colors and emotive expressions.

The decision to kill Zod was recognized as controversial by the creators themselves, who were reportedly uncomfortable with the killing at first but were convinced by the larger creative team at Warner Bros. If there was any question about whether Superman is still relevant in today’s world (and among fans, there most certainly was), the discussions around the film answer with a resounding yes: fans really do want to debate heroes and their depicted moral standards.

For us, the significance of “Man of Steel” is not whether it has earned its blockbuster status by being a “good” movie, but in how Superman’s actions spark debate about deathworthiness and in the process challenge our notions of when—if ever—vigilante violence is appropriate.

Nickie D. Phillips and Staci Strobl are the authors of Comic Book Crime: Truth, Justice, and the American Way (NYU Press, 2013).