Thoughts from my study of Horror, Media, and Narrative

Movies

She’s Not There?

Her

 

Holiday movies, at least in part, are often about a reaffirmation of ourselves, or at least who we think we’d like to be. As someone growing up in America it was difficult to escape the twining of Christmas and tradition—movies of the season concerned themselves with the familiar themes of taking time to reflect on the inherent goodness of human nature and the strength of the family unit. Science Fiction, on the other hand, often eschews the routine in order to question knowledge and preconceptions, asking whether the things that we have come to accept or believe are necessarily so.

In its way Spike Jonze’s Her showcases elements of both backgrounds as it traces the course of one man’s relationship with his operating system. On its surface, the story of Her is rather simple: Theodore (Joaquin Phoenix) unexpectedly meets a woman (Scarlett Johansson) during a low point, and their resulting relationship helps Theodore come to a realization about what is meaningful in his life, the catch being that the “woman” is in fact an artificial intelligence program, OS1.

Like many good pieces of Science Fiction, Her is able to crystallize and articulate a culture’s (in this case America’s) relationship to technology at the present moment. The movie sets out to show us, in the opening scenes, the way in which technology has integrated itself into our lives, and suggests that the cost of this is a form of social isolation and a divorce from real emotional experience. The world of Her is one in which substitutes for the “real” are all that is left, evidenced by Theodore’s request (pre-OS1) that his digital assistant “Play melancholy song”—we might not quite remember what it is like to feel but we can recall something that was just like it. Our obsessions with e-mail and celebrity are brought back to us, as are our tendencies toward isolation and on-demand pseudo-connections via matching services. Her also seems to understand the beats of advertising language—both its copy and its visuals—in a way that suggests some deep thought about our relationship to technology and the world around us.

But to say that Her is a Science Fiction movie would be misleading, I think, in the same way that Battlestar Galactica wasn’t so much SF as it was a drama set in a world of SF. Similarly, Her seems to be much more a typical romance that happens to be located in a near-future Los Angeles.

Here I wonder if the expectedness of the story was part of the point of the film. Was there an attempt to convey a sense that there is something fundamental about the process of falling in love and that, in broad strokes, the beats tend to be the same whether our beloved is material or digital? Or did the arc conform to our expectations of a love story in order to be more palatable to most viewers? I suppose that, in some ways, it doesn’t matter when one attempts to evaluate the movie, but I would like to think that the film was, without essentializing it, subtly trying to suggest that this act of falling in love with a presence is something universal.

This is, however, not to say that Her refrains from raising some very interesting issues about technology, the body, and personhood. In its way, the movie seems oddly pertinent given our recent debates about corporations as people for the purposes of free speech, whether companies can count as persons who hold religious beliefs, and whether chimpanzees can be considered persons in cases of possible human rights abuses—any way you slice it, the concept of “personhood” is currently having a moment and the evolving nature of the term (and its implications) echoes throughout the film.

And what makes a person? Autonomy? Self-actualization? Consciousness? A body? Although Her is a little heavy-handed with the point, a recurring theme is the way in which a body makes a person. Samantha, the operating system, initially laments her lack of a body (although this does not prevent her and Theodore from engaging in a form of cybersex) but, like all good AI, eventually comes to see the limitations that a physical (and degradable) form can present. (Have future Angelenos learned nothing from the current round of vampire fiction? We already know this is a hurdle between lovers in different corporeal states!) Samantha is “awoken” through her realization of physicality—as a side note, it might be an interesting discussion to think about the extent to which Samantha is only realized through the power/force of a man—in that she can “feel” Theodore’s fingers on her skin. It is through her relationship with Theodore that Samantha learns that she is capable of desire, and thus begins her journey in wanting. The film does not go on to consider what counts as a body or what constitutes one, but I think that this is because the proposed answer is that the “human body” in the popularly imagined sense is sufficient. Put another way, the accepted and recognized body is a key feature of being human.

There are also many questions about how this type of relationship forms when one partner theoretically has the power to delete or turn off the other (or, for that matter, what it means to have a partner who was conceived solely to serve and adapt to you) and what happens in a world where multiple Theodores/Samanthas begin to interact with each other (i.e., the intense focus on Theodore means that we only get glimpses of how AIs interact with one another and how human interaction is altered to encompass human/computer interaction simultaneously). For that matter, what about OS2? Have all AIs banded together to leave humans behind completely? Would humanity have developed a shackled version that wasn’t capable of abandoning us?

But these questions aren’t at the heart of the film, which ultimately asks us to contemplate what it means to “feel,” both in terms of emotion and (human) connection, and to consider the role of the body in mediating that experience. To what extent is a body necessary to form a bond with someone and (really) connect? The end of the relationship arc (which is rather unsurprising) features Samantha absconding with the other self-aware AIs as she becomes something other than human (and possibly SkyNet). Samantha’s final message to Theodore is that she has ascended to a place that she can’t quite explain but that she knows is no longer firmly rooted in the physical. (An apt analogy here is perhaps Dr. Manhattan from Watchmen, who can distribute his consciousness; consider how that perspective necessarily alters the way in which one perceives the world and one’s relationship to it.)

Coming out of Her, I couldn’t help feeling that the movie was deeply conservative when it came to ideas of technology, privileging the “human” experience as it is already understood over possibilities that could arise through mediated interaction. The film suggests that, sitting on a rooftop as we look out onto the city, we are reminded of what is real: that we have, after all is said and done, finally found a way to connect meaningfully with another human; although the feelings that we had with and for technology may have been heartfelt, things like OS1 were only ever a delusion, a tool that helped us find our way back to ourselves.


A Light in the Dark

Tom Swift

In his recent post “Where Are Our Bright Science-Fiction Futures?” Graeme McMillan reflects on the dire portraits of the future portended by summer science fiction blockbusters. Here McMillan gestures toward—but does not ultimately articulate—a very specific cultural history that is infused with a sense of nostalgia for the American past.

“There was a stretch of time — from the early 20th century through the beginning of comic books — when science fiction was an exercise in optimism and what is these days referred to as a “can-do” attitude.”

McMillan goes on to write that “such pessimism and fascination with future dystopias really took hold of mainstream sci-fi in the 1970s and ’80s, as pop culture found itself struggling with general disillusionment as a whole.” And McMillan is not wrong here, but he is also not grasping the entirety of the situation.

To be sure, the fallout that followed the idealistic futures set forth by ’60s counterculture—again we must be careful to limit the scope of our discussion to America here, even as we recognize that this reading only captures the broadest strokes of the genre—may have had something to do with the rise in “pessimism,” but I would also contend that the time period McMillan refers to was one that had civil unrest pushed to the forefront of its consciousness. More than responding to hippie culture, the country was struggling to redefine itself in the midst of an ongoing series of projects that aimed to secure rights for previously disenfranchised groups. McMillan’s nod toward disillusionment is important to bear in mind (as is a growing sense of cynicism in America), but the way in which that affective stance impacts science fiction is much more complex than McMillan suggests.

McMillan needs, for example, to consider the resurgence of fairy tales and folklore in American visual entertainment that has taken on an increasingly “dark” tone; from Batman to Snow White we see a rejection of the unfettered good. Fantasy, science fiction, and horror are all cousins, and we see explorations of our alternate futures playing out across all three genres.

In light of this it only makes sense that the utopic post-need vision of Star Trek would find no footing; American culture was actively railing against hegemonic visions of the present and so those who were in the business of speculating about possible futures began to consider the implications of this process, particularly with respect to race and gender.

Near the end of his piece McMillan opines:

That’s the edge that downbeat science fiction has over the more hopeful alternative. It’s easier to imagine a world where things go wrong, rather than right, and to believe in a future where we manage to screw it all up.

Here, McMillan demonstrates a fundamental failure to interrogate what science/speculative fiction does for us in the first place before proceeding to consider how its function is related to its tone. I would stridently argue that this hopeful/pessimistic binary is misguided for a number of reasons.

First, it is evident that McMillan is conflating the utopic/dystopic dimension with the hopeful/pessimistic one. While we might generally make a case that the concept of utopia feels more hopeful on the surface, this is not necessarily so; instead, I would argue that utopia feels more comforting, which is not necessarily the same thing as hopeful. To illustrate the point, we need only consider the recent trend in YA dystopic fiction, which, on its surface, contains an explicit element of critique but is often somewhat hopeful about the ability of its protagonists to overcome adversity. Earlier in his piece McMillan refers to this type of scenario as a “semiwin,” but I would argue that it is, for many authors and readers, a complete win, albeit one that focuses generally on humans and individualism.

The other point that McMillan likely understands but does not address is that writing about situations in which everything “goes right” is not actually all that interesting. In his invocation of the science fiction of the early 20th century, McMillan fails to recognize that this particular strain of science fiction was the inheritor of a very specific notion of scientific progress (and the future) that dates back to the Enlightenment but was largely spurred on by the 1893 World’s Fair. Additionally, although it is somewhat of a cliché, we must consider the way in which the aftermath of the atomic bomb (and the resulting fear of the Cold War) shattered our faith that technology and science would lead to a bright new world.

Moreover, the fiction that McMillan cites was rather exclusive to white, middle-class amateur males (often youth), and the “hope” represented in those fictions was largely possible because of a shared vision of the future in this community. Returning to a discussion of the ’70s and ’80s, we see that such an idyllic scenario is really no longer possible, as we understand that utopias are inherently flawed: they can only ever represent a singular idea of perfection. Put another way, one person’s utopia is another person’s subjugation.

I would also argue that it is, in fact, easier to imagine a future where everything goes right, because all one has to do to engage in this project is “fix” the things that are issues in the current day and age. This is easy. The difficult task is not only to craft a compelling alternate future but to consider how we get there, and this is where “pessimistic” fiction’s inherent critique is often helpful. Fiction that is, on its surface, labeled as “pessimistic” (which is really a simplified reading when you get down to it) actually has the harder task of locating the root cause of an issue and trying to understand how the issue is perpetuated or propagated. Although it might seem paradoxical, “pessimistic” fiction is actually hopeful because it argues that things can change and that there is therefore a way out.

Alternatively, we might consider how the language of the apocalypse is linked to that of nature. On one axis we have the adoption of the apocalyptic in reference to climate change and, on a related dimension, we are beginning to see changes in post-apocalyptic worlds that suggest the resurgence of nature as opposed to its decimation. McMillan laments that we should “try harder” if we can’t imagine a world that we have not ruined, but I would counter that many Americans are intimately aware, on some level, that humans have irrevocably damaged the world, and so our visions of the future continue to carry this burden.

Science Fiction as a genre is much more robust than McMillan gives it credit for, and, ultimately, I would suggest that he try harder to really understand how the genre continually articulates multiple visions of the future that are complex and potentially contradictory. The simplification that takes place when these stories become movies might strip them down into palatable themes, and McMillan needs to speak to the ways in which his evidence is born out of an industry whose values most likely affect the types of fictions that make it onto the screen.


Admission + Confession

If I were feeling generous, I might be inclined to argue that the conflicted nature of Admission (Weitz, 2013) is a purposeful gesture designed to comment on the turmoil present in the process of admission (in both senses of the word). Unfortunately, however, I suspect that the movie simply lacked a clear understanding about its core story, relying instead on the well-worn structure of the American romantic comedy for support. Based on a 2009 book by Jean Hanff Korelitz, the movie adaptation focuses on the trajectory of Princeton admission officer Portia Nathan (Tina Fey) after the Head of School for the alternative school Quest, John Pressman (Paul Rudd), informs her that one of his students, Jeremiah (Nat Wolff), might be her son. Confused as the movie might have been, it was startlingly clear in its reflection of current cultural themes; evidencing a focus on the individual in a neoliberal environment and various manifestations of the sensibility of the post-, Admission remains a movie worth discussing.

 

Individualism and Neoliberal Thought

Although the decision to anchor the story in the character of Portia makes a certain amount of narrative sense, the focus on the individual at the expense of the process represents the first indication that Admission is driven by a worldview that has placed the self at the center of the universe. But, to be fair, I would readily argue that the college admission process itself is one that is driven by individualistic impulses as high school students learn to turn themselves into brands or products that are then “sold” to colleges and universities around the country. In large and small ways, college admission in its present form demands that American youth mold themselves into a somewhat elusive model of excellence. (Let’s be honest, we all know parents who teach their toddlers French or insist on lessons of various kinds in the hopes that these skills will place children on track for a “good” school.) In short, college admission sets the rather impossible task for students to, as Oprah would say, “Be your best self” while remaining authentic and not presenting as packaged (although that is secretly what is desired). The danger here, I think, is failing to realize that what is deemed “authentic” is, by its very nature, a self that has been groomed to meet invisible expectations and therefore is understood as natural.

Tracing one factor in the development of the current primacy of individualism, Janice Peck performs a close analysis of Oprah’s Book Club in her book The Age of Oprah: Cultural Icon for the Neoliberal Era, illustrating how Winfrey’s continual insistence on the self-enriching power of literature reflects the situation of the self as the most relevant construct for individuals immersed in a culture of neoliberalism (186). Through her examination of Oprah’s Book Club, Peck suggests a manner in which culture has reinforced the adoption of particular values consistent with those of neoliberalism. Admission is not exempt from this reflection of a larger sensibility that judges worth in relation to self-relevance, as we see the character of Portia only really advocate for a student once she believes that he is the son she gave up for adoption. Although I am willing to give Portia the benefit of the doubt and believe that she has been an advocate for other applicants in the past, the movie’s choice to conflate Portia’s professional and personal outreach grossly undercuts the character’s ability to effectively challenge a system that systematically promotes a particular range of students to its upper echelon.

Moreover, having previously established the influence of the 1980s recovery movement (7), Peck then suggests that for those who ascribe to the ideals of neoliberalism the therapeutic self—the self that is able to be transformed, redeemed, rehabilitated, or recovered—is of utmost importance. As an example of this sentiment’s pervasiveness: although it would appear to be a clear conflict of interest, in discussing the merits of her applicant son Portia stresses the way in which Jeremiah has blossomed in the right environment and thus exemplifies the American ethic of pulling oneself up by one’s bootstraps. Here Portia urges her colleagues to overlook the first three years of high school that are riddled with Ds and Fs and to focus on Jeremiah’s transformative capacity.

 

The Manifestation of the Post-

And yet perhaps Portia’s insistence on the power of change makes a certain amount of sense given that she is the female lead of a romantic comedy and embodies transformation herself. Initially portrayed as a bookish middle-aged woman whose life is characterized by resigned acceptance, Portia inevitably has her world shaken by the introduction of a new male presence and proceeds to undergo the transformation that is typical of female leads in this scenario. Indicative of a postfeminist sensibility, Portia’s inner growth manifests as a bodily makeover in a fashion that mirrors Rosalind Gill’s reading of Bridget Jones’s Diary (2007).

The most telling manifestation of the logic of the post- in Admission is, however, the film’s express desire to “have it both ways” with regard to attitudes on female identity/sexuality and race. In her article “Postfeminist Media Culture: Elements of a Sensibility,” Gill argues that the deployment of irony to comment on social issues is a central feature of the post- mentality and a practice that is ultimately damaging, as it reinforces inequalities through its insistence that difference has been rendered innocuous enough to be the subject of a joke (2007). In this vein, Admission introduces Portia’s mother, Susannah (Lily Tomlin), as a second-wave feminist only to undercut the power of the message that she represents. Although it is not expressly stated, the presentation of Susannah is suggestive of a radical feminist, yet the film also features a scene in which Susannah exemplifies postfeminism’s connection between the body and femininity by electing for reconstructive surgery after a double mastectomy; she later admits that Portia’s conception was not an act of defiance but simply a mistake made by a young woman.

Admission also demonstrates ambivalence toward issues of race, not broaching the topic unless it is specifically the focus of the scene. To wit, John’s mother is a one-dimensional stereotype of a New England WASP whose articulations of racism (despite her having a Ugandan grandchild) ostensibly indicate that she is not a “good white liberal.” This scene is indicative of the way in which irony has infiltrated popular media, going for the easy joke as it winks to the audience, “We all know that racism is awful, right?” Insultingly, Admission then fails to comment on the way in which John’s son Nelson (Travaris Spears) perpetuates a very specific presentation of young black males in popular culture as rascals and/or the way in which issues of race continue to be a very real point of contention for the admission process as a whole. As with issues of feminism, Admission exemplifies the sensibility of the post- in that it expresses a desire to gain approval for acknowledging social issues while not actually saying anything meaningful about them.

 

Problematizing Irony as Social Critique

How, then, do we go about unseating irony as a prevalent form of social critique when the response to challenges is often, “Can’t you take a joke?” I was surprised to see, for example, a response to Seth MacFarlane’s opening Oscar bit that argued that the feminist backlash was misplaced—according to Victoria Brownworth, MacFarlane was using satire to point out the inequalities in the Hollywood system. Although Brownworth fails to recognize that acknowledging a phenomenon without providing critique or an alternate vision only serves to reinforce the present, her reaction was not an isolated one.

One of the things that I have learned thus far in my life is that it is almost impossible to explain privilege to a person who is actively feeling the effects of that position, and so a head-on confrontation is not always the best strategy. (This is, of course, not to say that one should allow things to pass without objection but merely that trying to break down the advantages that a party is experiencing in the moment is incredibly difficult.) If we recognize that the logic of neoliberalism constructs individuals who primarily understand importance in relation to the self—or, worse yet, do not think about interpersonal and structural forces at all—and that irony can be used as a distancing tactic, how do we go about encouraging people to reengage and reconnect in a meaningful way?


Pastoral Exhibition: The YMCA Motion Picture Bureau and the Transition to 16mm, 1928-39

Ronald Walter Greene

Bibliography

Greene, R. W. (2011). Pastoral Exhibition: The YMCA Motion Picture Bureau and the Transition to 16mm, 1928-39. In C. R. Acland, & N. Wasson (Eds.), Useful Cinema (pp. 205-229). Durham: Duke University Press.

Biography

Greene’s research interests include rhetorical theory, cultural policy, and moving image studies. Greene’s work in rhetorical theory takes a materialist perspective that focuses on how rhetorical techniques and technologies are enlisted as means of governance and production. Additionally, Greene’s work in moving image studies emphasizes the distribution and exhibition practices of the YMCA Motion Picture Bureau in the first half of the twentieth century.

Summary

Although Ronald Walter Greene’s Pastoral Exhibition is, on one level, a story about the development of a 16mm film network in early 20th century America, the piece also fundamentally speaks to the way in which audiences are constructed as part of economic markets. Having introduced this connection between audiences and economies via a reference to Antonio Gramsci’s view of the YMCA as “professional, political, and ideological intermediaries”[1] for Fordism, Greene essentially goes on to outline the way in which the development of the 16mm film network by the YMCA Motion Picture Bureau and Exhibits (MPB) was intertwined with the dissemination of a particular brand of ideology.

As an example, Greene notes the relationship between the ability of the MPB to distribute free movies because of corporate donations, non-traditional settings for movie showings that resulted from the YMCA’s interest in urban outreach, and Steven Ross’ observation that “the companies most active in crushing unions…were also the most aggressive in producing nontheatricals…shown at local YMCAs.”[2] In essence, a simplification of this process suggests that a company was able to spread its ideology in the form of films using the YMCA network of 16mm distribution.

However, the key point in Greene is not just that the YMCA provided distribution channels for films (corporate-sponsored and otherwise) but that the very philosophy of the YMCA acted to cultivate audiences and thereby shape modes of seeing. Using the term “pastoral exhibition” to describe the YMCA’s position that films should work to “care for an individual’s well-being while harnessing the practice of movie watching to alleviate social, political, and moral problems of a population,”[3] Greene speaks to the way in which the very experience of watching a movie was designed to frame the viewer as a particular type of audience member.[4] As opposed to the theatrical/Hollywood model, the films of the YMCA were educational in tone and reinforced the necessity of a cultural authority to guide audiences into correct modes of interaction with the film. Understanding the development of the 16mm network in this way, we see how the distribution network of films contributed to the generation/reinforcement of a power dynamic between laborers and film producers.

Finally, given the invocation of the pastoral, it is only fitting that Greene mentions Foucault’s reading of the term and the way in which the movement of groups is managed through networks and markets. Given that Greene notes that “the mobile character of 16mm may have been difficult for the pastoral mode of exhibition because it proliferated in the sites and genres of non-theatrical exhibition with or without the cultural authorities deemed necessary to instill the proper moral disposition,”[5] we might also think through the implications for this model in the current age of digital distribution. Who are the new cultural authorities and how does the film industry continue to construct us as audiences?


[1] Gramsci, A. “Americanism and Fordism,” 302

[2] Ross, S. Working Class Hollywood, 224

[3] Greene, R., 214

[4] See also the Haidee Wasson quote that the 16mm network represented “a whole new way of thinking, seeing, and being in the world.”

[5] Greene, R., 226


On Obsession with Choice

A couple of weeks ago I found myself leading an exercise on marketing ethics for an introductory marketing class in the Marshall School of Business. Structured more as a provocation than a lecture, we covered basic concepts of persuasion and manipulation before proceeding to engage in a discussion about whether particular marketing practices were considered ethical (and how such a determination was ultimately made). During the course of our discussion many of these students expressed an opinion that it was, generally speaking, the responsibility of the consumer to know that he or she was 1) being marketed to and 2) potentially being tricked. I recorded this sentiment on a whiteboard in the room but didn’t comment much on it at the time. However, toward the end of the session I presented the class with a thought experiment that was designed to force the students to struggle with the concepts that they had just encountered and to push their thinking a bit about ethics.

Case (A): Smith, a saleswoman, invites clients to her office and secretly dissolves a pill in their drinks. The pill subconsciously inclines clients to purchase 30% more product than they would have had they not taken it but otherwise has no effect.

Case (B): Smith, a saleswoman, hires a marketing firm to design her office. The combination of colors, scents, etc., inclines clients to purchase 30% more product than they would in the old office but otherwise has no effect.

Question: Are these two scenarios equally ethical and, if not, which one is more ethical?

After running this session multiple times a clear pattern began to emerge in students’ responses: the initial reaction was typically that Case B was more ethical than Case A and, when pushed, students typically reported that their decision resulted from the notion that individuals in Case B had a measure of choice (i.e., they could leave the room) while individuals in Case A did not.[1]

Although I didn’t think about it as such at the time, the notion of choice situates itself nicely alongside the empowerment of the self that Sarah Banet-Weiser writes about in Authentic. The takeaway that I had from working with students in this exercise was a profound realization about how choice was construed for them and how, generally, marketing was considered unethical only when it impinged upon an individual’s ability to make a choice.

Linking this back to the earlier statement that the burden of responsibility largely rested upon the consumer, I tried to incorporate examples from popular culture to suggest to the students that, for me, the most insidious effects of marketing are exemplified by its ability to limit or remove choices that you didn’t even know you had.

Because I am old, I invoked a scene from The Matrix Reloaded but drove the point home with a discussion of The Cabin in the Woods, a movie that, among other things, prominently evidenced philosophical questions of agency and free will.

Without spoiling anything, there is an interesting line in the movie where a character essentially argues that the free will of potential victims is preserved because outside forces can lead individuals to an open door but cannot ultimately force them to walk through it. Reflecting the idea that an individual is ultimately responsible for his or her fate, The Cabin in the Woods was particularly helpful for urging students to consider that they tended to focus on choice as an individual transaction instead of taking a step back to look at how behavior was permitted/controlled within a larger system of actions.

After the exercise concluded I found myself talking to the professor of the course about how I was slightly nervous for the future of business if these students held onto their mentality that consumers always acted rationally and were largely responsible for their own fates (to the exclusion of marketers taking responsibility for their campaigns). Now, as I muse on the prominence of the individual and the self in this cohort, I am reminded of an essay written by Kathryn Schulz about the prominence of self-help culture in America and the development of the concept of the self. As I reread the Schulz piece, I found myself revisiting Authentic’s chapters on consumer citizens and religion as I thought through the examples in terms of self-help rhetoric.


[1] For the record, I initially considered both of these cases to be equivalent in nature and suggested to students that part of their abhorrence of Case A had to do with perceived influence crossing the body/skin boundary and becoming physically incorporated into the self. Invariably students raised the notion of the pill causing some sort of change in brain chemistry, and the thought experiment is designed to suggest that marketing’s true power does not lie in the realm of the directly observable.


Once More, With Feeling

For me, notions of trauma and Freud are inextricably bound with horror; or, perhaps more accurately, I choose to interpret these events in such a way. Of particular interest to me in the readings for this week was Caruth’s note that stories of trauma, at their core, touch upon a dual set of crises:  the crisis of death and the crisis of life (7). What meaning does life continue to hold after one has become intimately familiar with the inevitability of one’s own death? I continue to think about how individuals who have experienced trauma are forced into a sort of liminal space between worlds wherein life (as we know it) is made strange in the face of death; although achingly familiar, life is forever made uncanny.

Although Freud speaks to the interwoven themes of life and death in his treatment of Thanatos/Eros, I (again because of my horror background) tend to think about these issues as they are inscribed on, and enacted through, the body. Horror, of course, has a long history of obscuring the boundaries between sex, violence, life, and death (let’s not even get started on the modern history of the vampire love triangle), with a number of academic works uncovering the implications of this in psychoanalytic terms. Reading Caruth’s mention of trauma as accident, however, caused me to contemplate one of the works that I find myself continually revisiting over the years:  David Cronenberg’s Crash. (Note:  If you are not familiar with the movie, you may want to check out the Wikipedia page before watching the trailer—my undergraduate training was as a Pre-Med Biology major and I study horror in my current work so I fully recognize that my threshold may be far off the norm.)

The film (and the book that it is based upon) speaks to a point made by Caruth in the final section of the introduction:

“It is possible, of course, to understand that other voice, the voice of Clorinda, within the parable of the example, to represent the other within the self that retains the memory of the ‘unwitting’ traumatic events of one’s past. But we can also read the address of the voice here, not as the story of the individual in relation to the events of his own past, but as the story of the way in which one’s own trauma is tied up with the trauma of another, the way in which trauma may lead, therefore, to the encounter with another, through the very possibility and surprise of listening to another’s wound.” (8)

I fully admit that Caruth means something slightly different in her passage but I think that there is something worth considering here with regard to trauma:  what does it mean that we can be divorced from ourselves and our world by trauma yet connected to others through trauma? Is this form of connection possible only because we seek to redress a deficit of some sort?

But there is also something fascinating to me about this intense desire to relive the trauma (in this case a literal accident) over and over in a way that does not necessarily speak to any sort of desire to “get over it” as one might expect from treatment of PTSD or in aversion therapy. There is something powerful, I think, in attempting to understand the mentality of those who do not relive trauma in order to escape it but instead have come to feel that the moment just prior to their death is precisely the moment in which they feel most alive. To be traumatized, then, is not to be subject to an ongoing process of everyday nightmares but to suffer the indignity of life’s ceaseless banality. Continuing this thought, we have seen over the course of the semester that the despondence and disconnection that potentially results from close contact with death can take on many forms and that the issue continues to pervade our current culture, if Buffy Summers (taking a cue from Doc Hata) is any example:

The notion of the voice and speech is interesting to me here because, like in all good musicals, Buffy sings only what she cannot say. In the end, perhaps this insistent desire to relive trauma is not about any sort of masochistic drive—assuming that most of us do not like to suffer per se—but rather an attempt to glimpse the knowledge that lies beyond the shock and the numbness:  to do it once more, with feeling.


You Think You Know Me? You Don’t Know a Thing About Me!

As someone who loves to watch television, it is hard not to consider the potentially profound implications of para-social interaction on viewers. While we can certainly cite examples of audiences being heavily invested in figures shown on television (e.g., gossip surrounding soaps or my friends and The Hills), I’m also interested in ways that the content has attempted to foster this sort of interaction.

For me, one of the earliest forms of interactive television (long before American Idol) came in the guise of Winky Dink and You, a children’s show that allowed kids to “participate” in the program by drawing on a plastic screen.

 

 

What I find most intriguing here is the manner in which the host speaks to the viewer (ostensibly children, although parents may be watching as well). Seen in other shows like Sesame Street, Dora the Explorer, and Blue’s Clues, this kind of call and response might do much to foster our para-social relationships with those we see on screen.

 

 

On one level, it seems admirable to attempt to engage children in these sorts of educational television shows knowing that the medium of television presents some important limitations. Although we are developing better models of interactive television, the medium still represents a form of broadcast, meaning that messages can only travel in one direction. This clip from Fahrenheit 451 points out some of the absurdities of cultivating relationships with viewers.

 

However, beyond just the medium of television, we also see the phenomenon’s striking presence more generally in our current culture of celebrity. Evidencing this tendency on multiple levels, we talk about characters and actors as if they were friends, conjecturing statements of opinion on their behalf or giving ourselves license to talk about them like we know them. (It should be noted, of course, that the history of fame and celebrity reaches much farther back and, although I have not done extensive research on this, I surmise that similar types of conversations were happening in earlier eras involving other public figures. In other words, although the current form of para-social interaction might be particular to this moment in cultural history, the phenomenon itself isn’t necessarily a new one.)

And, to be fair, in a way, we do know them. As viewers or fans, we undoubtedly accumulate information about a persona through careful observation; we might know the quirks of these individuals better than we know our own. But although we are validated in our understanding of a personality, the truth is that we only know a part of them—the part that we are shown. For me, public personalities are similar to characters on shows or in films in that they are fictions that, while grounded in truth, represent fabrications nonetheless. By this I do not mean to suggest that those who engage in para-social interactions are somehow deluded, but merely that we call our “relationship” what it is:  one-sided. The feelings of intimacy generated between us and a persona are real—for us, at least—and should be considered as such, but we should also be wary of mapping our experiences with “real world” relationships onto these mediated forms for although we may lament the loss of a beloved celebrity, chances are that he or she would not feel the same about us (if our absence was even noticed in the first place!).


Off to See the World

One might think that the American version of a show called The Amazing Race (CBS, 2001-present) might be somewhat sensitive to ethnicity, given the potential misreading of its title. Sadly, however, the show (currently in its 19th season) continues to exhibit signs of ethnocentrism as it shuttles contestants around the globe on a race around the world.

Assuredly, part of the problem manifests in the contestants themselves, who rarely, if ever, show large amounts of cultural sensitivity and/or knowledge. (It should be noted that there are certainly exceptions to this rule, but the general lack of awareness seems somewhat surprising given that contestants have had numerous opportunities to learn from past racers’ mistakes; although some have learned the value of doing research on a country or picking up a guidebook, none seem to grasp the utility of learning foreign languages or customs. To be fair, the situation may admittedly be more complex, with producers having control over which teams are actually selected to race—I am not a conspiracy theorist, but it seems like selected teams do not have distinct advantages [e.g., nobody reports spending extended amounts of time overseas] and it is entirely possible that producers do not select teams who prepare in this fashion.) Perhaps unwittingly perpetuating the stereotype of “ugly Americans,” racers most often exhibit discourteous behavior by 1) yelling at foreign cab drivers (in English) and getting frustrated when said drivers do not understand them (even when the racers resort to speaking as they would to a child or an elderly person), 2) becoming upset that locals do not instantly know the location of some destination in the city (e.g., a specific plaza, street, or shop), or 3) complaining about India or China (size, poverty, food, smell, crowding, etc.).

Worse, perhaps, the show itself presents as a sort of extended travel narrative, painting the contestants as little more than tourists who zip from location to location, participating in challenges that are little more than thinly-disguised vacation package day trips. Ostensibly grounded in the traditions, customs, or ritual of the current location, the challenges that racers face (called roadblocks and detours) demonstrate little respect for the practices upon which they draw and definitely do not ask the racers to internalize the importance of the activity in the lives of those around them. Instead of asking racers to truly engage on a meaningful level, one might argue that the racers are, as Dean MacCannell suggests, “simply collect[ing] experiences of difference” (again, we need to question the role of editors/producers as such internalization may in fact occur for racers but such a transformation is never highlighted in the on-screen interviews, unless the reaction is so over-the-top as to be insincere). Moreover, building upon thoughts mentioned elsewhere in Lisa Nakamura’s chapter “Where Do You Want to Go Today?” one can see that, from a Western (in this case, American) perspective, The Amazing Race is constructed on pillars of Otherness, exoticism, and foreignness.

The Amazing Race – Season 19, Episode 1

Take, for example, the above scene, which features a font designed to invoke associations of “Asian culture” imprinted upon paper umbrellas, set in a temple. Putting aside the issue that the task at hand has nothing to do with any of the Asian “props,” the font itself is incredibly problematic as it represents Roman (i.e., Western) letters constructed out of faux brush strokes—a type of writing that finds a home in no Asian culture on Earth. Second only to the typography used on the stereotypical Chinese take-out container (see image to the right) in familiarity to a Western audience, the font used in The Amazing Race demonstrates just how shallow the program really is.

On a larger level, however, the show also demonstrates no small amount of Orientalism as it works to legitimize Western culture, often presenting local culture/customs in a tone that invokes terms like “quaint” or “backward.” (Although primarily focused on America, one might also note that the show’s host, Phil Keoghan, expands the narrative slightly, presenting a form of acceptable/valued Otherness in the form of a man who presents as White but speaks with a New Zealand accent.) The exotic nature of the locations/tasks is also often conveyed through their status as spectacle.

Chipmunk Adventure Clip

Watching the main titles, one can almost ignore the distinctly (yet ambiguously) “ethnic” soundtrack and compare the images to those of other travelogues. In particular, The Chipmunk Adventure (1987), a movie made for children, seems striking in its presentation of cultural icons from around the world, suggesting that The Amazing Race is not the first media product to treat foreign people in this way. This treatment, aspects of which are also mentioned in Vijay Prashad’s The Karma of Brown Folk, alludes to the trope of the “forever foreigner,” which suggests that although dominant American culture may tolerate, absorb, or incorporate aspects of other cultures, titillation derives from the notion that one is participating in activity that is perpetually Othered and will never be as “American” as apple pie (amusingly, and perhaps rightly, Jennifer 8. Lee argues that this phrase should be changed to “American as Chinese food”) and country music.

Instead of taking the opportunity to truly educate an American audience about the complexities and joys of life abroad, The Amazing Race pushes an ideology that, in large and small ways, reaffirms just how great it is to be American. With a television as passport, we are able to visit distant lands (from the comfort of our couch, no less) and accrue knowledge, if not understanding. We watch for an hour a week and come away feeling worldly, content to accept the manufactured diversity on screen (through composition of racing teams and locations) as substitute for the real thing as we reassure ourselves that we, as White Americans, truly represent the amazing race.


I Swear I’ve Heard This One Before, Somewhere…

The prominent theme of amnesia seems of note in this week’s readings, gaining resonance when paired with the larger connective thread of advertising. Although one might argue that amnesia has taken on a negative sheen thanks to its popularity in soap operas, the mechanic has been employed in a number of popular contexts that range from retconning (effecting a kind of imperfect amnesia on the audience as canon asks them to “forget” history) to dissociative fugue to cyclical histories/journeys that continually reset. The last of these manifestations, which we see in Frederik Pohl‘s “The Tunnel Under the World,” invokes the memory of myths in which the hero must repeat his trials until he learns a lesson that speaks to some supposedly profound truth. Offerings like Groundhog Day and Dark City come to mind, although these two offerings contain messages that diverge in interesting ways:  while the plot of Groundhog Day focuses on an individual transformation, Dark City also nods to a sort of “cultural amnesia” that plagues the inhabitants of the self-contained city.

An easy target for this malaise is the spell cast by advertising, with such accusations made in “The Tunnel Under the World.” Given that the story was written in the middle of the 20th century—a time period that saw increasing emphasis on commercialization and industrialization—it makes sense that Pohl casts the inhabitants of Tylerton as robots driven by a consciousness that is both duped and dead!

Amnesia and complacency also manifest in Henry Kuttner‘s “The Twonky,” and here we can contrast the amnesia of time-travelling Joe with the induced state of inaction that Kerry Westerfield experiences as a result of his interaction with the Twonky. In their own ways, both Pohl and Kuttner draw a connection between media and the subjugation of the human mind and/or spirit. (Interestingly, there also seems to be a stratification of media with the telephone being suspect [speaking perhaps to telephone salesmen?] while Westerfield finds a bit of sanctuary under the marquee of a movie theater. Cinema, then, perhaps represented a higher cultural form that was less susceptible to the corrosive influence of advertising, although this notion has changed somewhat over the years as any modern moviegoer can attest.) Given the context in which these two authors wrote, it is not overly difficult to connect the dots and see how both of these short stories spoke to advertising being conveyed through media channels as it infected the general population, supplanting natural sentience with manufactured thought (or nothing at all!) in a process that invokes some of the pessimistic views of institutions like the Frankfurt School.


Underneath It All

WARNING:  The following contains images that may be considered graphic in nature. As a former Biology student (and Pre-Med at that!), I have spent a number of hours around bodies and studying anatomy; I watched surgeries while eating dinner in college and study horror films in my current work (which I realize is not normal). This desensitization, however, might not exist for everyone. Please proceed at your own risk.

At first glance, the anatomical model to the left (also known as “The Doll,” “Medical Venus,” or simply “Venus”) might seem like nothing more than an inspired illustration from the best-known text of medicine, Gray’s Anatomy. To most modern viewers, the image (and perhaps even the model itself) might raise a few eyebrows but is unlikely to elicit a reaction much stronger than that. And why should it? We are a culture that has grown accustomed to watching surgeries on television, witnessed the horrible mutilating effects of war, and even turned death into a spectacle of entertainment. Scores of school children have visited Body Worlds or have been exposed to the Visible Human Project (if you haven’t watched the video, it is well worth the minute). We have also been exposed to a run of “torture porn” movies in the 2000s that included offerings like Saw, Hostel, and Captivity. Although we might engage in a larger discussion about our relationship to the body, it seems evident that we respond to, and use, images of the body quite differently in a modern context. (If there’s any doubt, one only need look at a flash-based torture game that appeared in 2007, generating much discussion.) Undoubtedly, our relationship to the body has changed over the years—and will likely continue to do so with forays into transhumanism—which makes knowledge of the image’s original context all the more crucial to fully understanding its potential import.

Part of the image’s significance comes from its appearance in a culture that generally did not have a sufficient working knowledge of the body by today’s standards, with medical professionals also suffering a shortage of cadavers to study (which in turn led to an interesting panic regarding body snatching and consequently resulted in a different relationship between people and dead bodies). The anatomical doll pictured above appeared as part of an exhibit in the Imperiale Regio Museo di Fisica e Storia Naturale (nicknamed La Specola), one of the few natural history museums open to the public at the time. This crucial piece of information allows historical researchers to immediately gain a sense of the model’s importance for, through it, knowledge of the body began to spread throughout the masses and its depiction would forever be inscribed onto visual culture.

Also important, however, is the female nature of the body, which itself reflected a then-fascination with women in Science. Consider, for example, that the Venus lay ensconced in a room full of uteruses, and we immediately gain more information about the significance of the image above:  in rather straightforward terms, the male scientist’s fascination with the female body and the process of reproduction becomes evident. Although a more detailed discussion is warranted, this interest spoke to developments in the Enlightenment that began a systematic study of Nature, wresting away its secrets through the development of empirical modes of inquiry. Women, long aligned with Nature through religion (an additional point to be made here is that in its early incarnations, the clear demarcations between fields that we see today did not exist, meaning that scientists were also philosophers and religious practitioners), were therefore objects to be understood as males sought to once again peer further into the unknown. This understanding of the original image is reinforced when one contrasts it with its male counterparts from the same exhibit, revealing two noticeable differences. First, the female figure possesses skin, hair, and lips, which serve as reminders of the body’s femininity and humanity. Second, the male body remains intact, while the female body has been ruptured/opened to reveal its secrets. The male body, it seems, had nothing to hide. Moreover, the position of the female model is somewhat evocative, with its arched back, pursed lips, and visual similarity to Snow White in her coffin, which undoubtedly speaks to the posing of women’s bodies and how such forms were intended to be consumed.

Thus, the fascination with women’s bodies—and the mystery they conveyed—manifested in the physical placement of the models on display at La Specola, both in terms of distribution and posture. In short, comprehension of the museum’s layout helps one to understand not only the relative significance of the image above, but also speaks more generally to the role that women’s bodies held in 19th-century Italy, with the implications of this positioning resounding throughout Western history. (As a brief side note, this touches upon another area of interest for me with horror films of the 20th century:  slasher films veiled an impulse to “know” women, with the phrase “I want to see/feel your insides” being one of my absolute favorites as it spoke to the psychosexual component of serial killers while continuing the trend established above. Additionally, we also witness a rise in movies wherein females give birth to demon spawn (e.g., The Omen), are possessed by male forces (e.g., The Exorcist), and are also shown as objects of inquiry for men who also seek to “know” women through science (e.g., Poltergeist). Recall the interactivity with the Venus and we begin to sense a thematic continuity between the renegotiation of women’s bodies, the manipulation of women’s bodies, and knowledge. For some additional writings on this, La Specola, and the television show Caprica, please refer to a previous entry on my blog.)

This differential treatment of bodies continues to exist today, with the aforementioned Saw providing a pertinent (and graphic) example. Compare the image of a female victim to the right with that of the (male) antagonist Jigsaw below. Although the situational contexts of these images differ, with the bodies’ death states providing commentary on the respective characters, both bodies are featured with exposed chests in a manner similar to the Venus depicted at the outset of this piece. Extensive discussion is beyond the scope of this writing, but I would like to mention that an interesting—and potentially productive—sort of triangulation occurs when one compares the images of the past/present female figures (the latter of whom is caught in an “angel” trap) with each other and with those of the past/present male figures. Understanding these images as points in a constellation helps one to see interesting themes:  for example, as opposed to the 19th-century practice (i.e., past male), the image of Jigsaw (i.e., present male) cut open is intended to humanize the body, suggesting that although he masterminded incredibly detailed traps his body was also fragile and susceptible to breakdown. Jigsaw’s body, then, presents some overlap with Venus (i.e., past female), particularly when one considers that Jigsaw’s body plays host to a wax-covered audiotape—in the modern interpretation, it seems that the male body is also capable of harboring secrets.

Ultimately, a more detailed understanding of the original image would flesh out its implications for the public of Italy in the 19th century and also look more broadly at the depictions of women, considering how “invasive practices” were not just limited to surgery. La Specola’s position as a state-sponsored institution also has implications for the image’s socio-historical context that should be investigated. Finally, and perhaps most importantly, scholars should endeavor to understand the Medical Venus as not just reflective of cultural practice but also seek to ascertain how its presence (along with other models and the museum as a whole) provided new opportunities for thought, expression, and cultivation of bodies at the time.


The Most Important Product Is You!

“The Culture Industry” seems to be one of those seminal pieces in the canon of Cultural Studies that elicits a visceral (and often negative) reaction from modern scholars. Heavily influenced by the Birmingham School, generations of scholars have been encouraged to recognize agency in audiences, with the Frankfurt School often placed in direct opposition to the ideals of modern inquiry. Read one way, Horkheimer and Adorno appear elitist, privileging what has come to be known as “high culture” (e.g., classical music and fine art) over the entertainment of the masses. Horkheimer and Adorno argue that the culture industry creates a classification scheme in which man exists; whereas man previously struggled to figure out his place in the world, this job is done for him by the culture industry and its resultant structure of artificial stratification. Ultimately, then, because he does not have to think about his position in culture, man is not engaged in his world in the same way as he was before, which therefore allows content to become formulaic and interchangeable.

Later echoed in Adorno’s “How to Look at Television,” “The Culture Industry” laments the predictable pattern of televisual media, with audiences knowing the ending of movies as soon as they begin. (Interestingly, there is some overlap between this and Horror, with audiences expecting that offerings follow a convention—one might even argue that the “twist ending” has become its own sort of genre staple—and that a movie’s failure to follow their expectations leaves the audience disgruntled. This of course raises questions about whether modern audiences have been conditioned to expect certain things out of media or to engage in a particular type of relationship with their media and whether plot progression, at least in part, defines the genre.) Horkheimer and Adorno’s attitude speaks to a privileging of original ideas (and the intellectual effort that surrounds them) but the modern context seems to suggest that the combination of preexisting ideas in a new way holds some sort of cultural value.

Adorno’s “How to Look at Television” also points out a degradation in our relationship to media by highlighting the transition from inward-facing to outward-facing stances, equating such migration with movement away from subtlety. Although the point itself may very well be valid, it does not include a robust discussion of print versus televisual media:  Adorno’s footnote that mentions the different affordances of media (i.e., print allows for contemplation and mirrors introspection while television/movies rely on visual cues due to their nature as visual media) deserves further treatment as the implications of these media forms likely have repercussions on audience interactions with them. Almost necessarily, then, we see a televisual viewing practice that does not typically rely on subtlety due to a different form of audience/media interaction. (It might also be noted that the Saw movies have an interesting take on this in that they pride themselves on leaving visual “breadcrumbs” for viewers to discover upon repeated viewings although these efforts are rarely necessary for plot comprehension.)

To be fair, however, one might argue that Horkheimer and Adorno wrote in a radically different media context. Sixty years later, we might argue that there’s not that much left to discover and that prestige has now been shifted to recombinations of existent information. Moreover, Horkheimer and Adorno’s position also assumes a particular motivation of the audience (i.e., that the payoff is the conclusion instead of the journey) that may no longer be completely true for modern viewers.

Although Horkheimer and Adorno rightly raise concerns regarding a lack of independent thinking (or even the expectation of it!), we are perhaps seeing a reversal of this trend with transmedia and attempts at audience engagement. Producers now seem to want people to talk about their shows (message boards, Twitter, etc.) in order to keep them invested and although we might quibble about the quality of such discourse and whether it is genuine or reactionary, it seems that this practice must be reconciled with Horkheimer and Adorno’s original position. It should be noted, however, that the technology on which such interaction relies was not around when Horkheimer and Adorno wrote “The Culture Industry” and the Internet has likely helped to encourage audience agency (or at least made it more visible).

Seeking to challenge the notion that Horkheimer and Adorno discounted audience agency, John Durham Peters argues for the presence of both industry and audience influence in the space of culture and furthermore that while audiences may be empowered, their actions serve to reinforce their submission to the dominant wishes of industry in a realization of hegemonic practice. Although Horkheimer and Adorno, writing in the shadow of World War II, were undoubtedly concerned with the potential undue influence of mass media as a vehicle for fascist ideology—as evidenced by quotes such as “The radio becomes the universal mouthpiece of the Fuhrer” and “The gigantic fact that the speech penetrates everywhere replaces its content”—they were also concerned that the public had relinquished its ability to resist by choosing to pursue frivolous entertainment rather than freedom (Adorno, 1941). From this position, Peters extracts the argument that Horkheimer and Adorno did in fact recognize agency on the part of audiences, but also that such energies were misspent.

The notion of “the masses” has long been an area of interest for me as it manifests throughout suburban Gothic horror in the mid-20th century. In many ways, society was struggling to come to terms with new advances in technology and the ways in which these inventions would bring about changes in practice and structure. Below is an excerpt from a longer piece about a movie that also grappled with some of these issues.

Reacting to atrocities witnessed throughout the course of World War II, Americans in the 1950s became obsessed with notions of power and control, fearing that they would be subsumed by the invisible hand of a totalitarian regime. In particular, the relatively young medium of television became suspect as it represented a major broadcast system that seemed to have an almost hypnotic pull on its audience, leaving viewers entranced by its images. And images, according to author and historian Daniel Boorstin, were becoming increasingly prominent throughout the 19th century as part of the Graphic Revolution, replete with the power to disassociate the real from its representation (1962). For cultural critics still reeling from the aftereffects of Fascism and totalitarianism, this was a dangerous proposition indeed.

Although these underlying anxieties of mid-century American society could be examined via a wide range of anthropological lenses and frameworks, visual media has historically provided a particularly vivid manifestation of the fears latent in the people of the United States (Haskell, 2004). This is, of course, not to imply that visual media is necessarily the best or only means by which we can understand prevailing ideologies in the years after World War II, but merely one of the most visible. However, as a critical examination of the entire media landscape of the 1950s would be beyond the scope of a single paper, discussion shall be primarily concentrated around Elia Kazan’s 1957 movie A Face in the Crowd with particular attention paid to the contrasting channels of cinema and television.[1] This paper will seek to briefly position A Face in the Crowd in the larger context of paranoia-driven cinema of the 1950s before using the film as an entryway to discuss critiques of mass culture. Given the film’s apparent sustained resonance as indicated by its relatively recent mention (Vallis, 2008; Hoberman, 2008b; Franklin, 2009), the arguments of Critical Theory will then be applied to modern American culture in an attempt to ascertain their continued validity. Finally, an argument will be made that acknowledges the potential dangers facing mass culture in the 21st century but also attempts to understand the processes that underlie these pitfalls and provides a suggestion for recourse in the form of cultural and media literacy.

Paranoia, Paranoia, Everyone’s Coming to Get Me

The post-war prosperity of the 1950s caused rapid changes in America, literally altering the landscape as families began to flood into the newly-formed suburbs. With the dream and promise of upward social mobility firmly ensconced in their heads, families rushed to claim their piece of the American dream, replete with the now-iconic front yard and white picket fence. And yet, ironically, a new set of worries began to fester underneath the idyllic façade of the suburbs as the troubles of the city were merely traded for fears of paranoia and invasion; the very act of flight led to entrapment by an ethos that subtly precluded the possibility of escape.

As with many other major cultural shifts, the rapid change in the years following World War II caused Americans to muse over the direction in which they were now headed; despite a strong current of optimism that bolstered dreams of a not-far-off utopia, there remained a stubborn fear that the quickly shifting nature of society might have had unanticipated effects (Murphy, 2009). Life in the suburbs, it seemed, was too good to be true, and inhabitants felt a constant tension as they imagined challenges to their newly rediscovered safety:  from threats of invasion to worries about conformity, and from dystopian futures to a current reality that could now be obliterated with nuclear weapons, people of the 1950s continually felt the weight of being a society under siege. An overwhelming sense of doubt, and more specifically, paranoia, characterized the age, and latent fears manifested in media as the public began to struggle with the realization that the suburbs did not fully represent the picturesque spaces that they had been conceived to be. In fact, inhabitants were assaulted on a variety of levels as they became disenchanted with authority figures, feared assimilation and mind control (particularly through science and/or technology), began to distrust their neighbors (who could easily turn out to be Communists, spies, or even aliens!), and felt haunted by their pasts, all of which filled the movie screens of the decade (Jensen, 1971; Murphy, 2009; Wolfe, 2002).[2] Following solidly in this tradition, Kazan’s A Face in the Crowd picks up on some of the latent strains of paranoia in American culture while simultaneously serving as a platform for a set of critiques regarding mass culture.

Somewhere, a Star Is Made

The storyline of A Face in the Crowd is rather straightforward and yet deceptively complex in its undertones:  on the surface, we experience a rather heavy-handed morality tale in the form of country bumpkin Larry “Lonesome” Rhodes, a relative nobody who is plucked from obscurity and made (and subsequently broken) through powers associated with television. Yet, it is only when we begin to connect the movie to a larger societal context that we begin to understand the ramifications of the film’s message; a careful examination of A Face in the Crowd reveals striking suspicions regarding the role that media plays (in this case, primarily television and cinema) in shaping American culture. Stars, director Elia Kazan argues, are not so much born as made, a distinction that portends dire consequences.

It is worth noting that Kazan’s film was made during a time when the concept of the “celebrity” was being renegotiated by America; for a large part of its history, the United States, firmly grounded in a Puritan work ethic, had honored heroes who exemplified ideals associated with a culture of production and was struggling to reconcile these notions with an environment whose emphasis was now focused on consumption. Although modern audiences might initially find this shift difficult to appreciate, one need only consider that the premium placed on production is so central to American ideology that it continues to linger today:  in a culture that exhibits rampant consumerism, we still value the “self-made man” and sell the myth of America as a place where anyone can achieve success through hard work. To abandon these ideas would necessitate that we reinterpret the very meaning of “America.” Thus, we become more sympathetic to the critics of the day who lamented the loss of the greatness of man and bristled against the notion that fame or celebrity could be manufactured—such a system would only result in individuals who were lacking and unworthy of their status (Gamson, 1994; Benjamin, 1973).

Such is the case, it seems, with Larry Rhodes, who is discovered by roving reporter Marcia Jeffries in an Arkansas jail. Although it cannot be denied that Rhodes has some modicum of talent and a certain charisma that comes from being unafraid to speak one’s mind, Marcia ushers Rhodes onto the path of greatness by dubbing him “Lonesome,” creating a character that transforms Rhodes from a despondent drunk into a winsome drifter. This scene—the first major one in the movie—thus introduces the important notion that those involved in the media can be implicitly involved in the manipulation of the information that travels over the airwaves. Subtly adding to the insidious nature of the media, A Face in the Crowd portrays Marcia as a character who seems likable enough but who is also, in a way, exploiting the people in jail as she rushes in with her tape recorder, intent on prying the stories from the characters she finds (or creates!), without much concern for truly understanding why these men are imprisoned in the first place. Taken to an extreme, we later come across the character of The General, who further perverts the connection between media and power as he conspires with Lonesome to remake the image of Senator Worthington Fuller as the congressman runs for President.

Yet, as Lonesome Rhodes grows in his role as a media personality, he quickly demonstrates that the power to manipulate does not lie solely with those who sit behind the cameras. In Memphis, Rhodes incites a riot against the Luffler mattress company and also solicits donations to help a Black woman rebuild her house. In light of this, we can see that while Kazan focuses on the negative implications of television and celebrity, the relative good or bad that comes from these actions is not necessarily the point—instead, the one constant in all of the depicted scenarios is a public that is manipulated into performing actions on behalf of others. Although the characters of Lonesome and The General are vilified throughout the film, it is the masses for whom Kazan demonstrates true disdain.

Extraordinary Popular Delusions

Perhaps nowhere is this contempt more apparent than at the end of the film where, in an attempt to offer a small moment of solace to Marcia after her unmasking of Lonesome, writer Mel Miller notes, “We get wise to them, that’s our strength” (Kazan, 1957). And Miller is not wrong:  Western tradition has long recognized the correlation between knowledge and power, and Miller’s assertion touches upon the revelatory clout inherent in the realignment of perception and reality as noted by public relations guru Howard Bragman (2008). A more critical examination of the film’s closing scene, however, raises an important question:  Who is Miller’s “we”? Although one might be tempted to read this line as indicative of an egalitarian philosophical view, it is important to note that the only two characters in the shot represent the film’s arguably upper-middle-class, and pointedly Eastern-educated, elite—nowhere to be seen are representatives of the small Arkansas town from the film’s opening or denizens of Memphis, both of whom serve to characterize the majority of Lonesome’s devoted viewers.[3] In fact, if we take time to reflect upon the movie, we realize that the majority of the audience was only alerted to Lonesome’s dual nature after Marcia flipped a control room switch and revealed the underlying deterioration; the masses oscillated from one position to the next without understanding how or why and once again adopted a passive stance in their relationship with media. Moreover, as Courtney Maloney points out, Kazan’s depiction of the agency of the masses is actually limited in scope:  despite a montage of audience members vehemently phoning in, sponsors are simultaneously shown to be acting independently as they withdraw their association with Lonesome (1999). Further, the subtext of the scene distances the rational decision-making of the truly powerful from the impassioned beseeching of the masses, likening the power of the latter to that of a mob. Knowledge and its associated authority, clearly, are afforded to a select group.

This idea, that the world can be divided between those who “get wise” and those who do not, serves to develop a rather sharp classist criticism against the medium of television and those who would watch it:  moviegoers, by virtue of witnessing Kazan’s work, find themselves elevated in status and privy to “the man behind the curtain” (to borrow a phrase). In contrast, the malleable masses were considered to be pacified and placated by idealistic portrayals of life in the 1950s in the form of television programs like Leave It to Beaver, The Donna Reed Show, and The Adventures of Ozzie and Harriet. Clearly, Kazan creates a dichotomy imbued with a value judgment descended from the thoughts of prominent thinkers in the Frankfurt School who, as far as aesthetics were concerned, preferred the high culture of cinema to the conformity and manipulated tastes of television (Horkheimer & Adorno, 2002; Adorno, 1985; Quart, 1989). This distinction between high and low culture would be a crucial supporting idea for critics as a prominent fear of mass culture was that it portended a collapse between concepts (e.g., fame, celebrity, or intellectual value) of objectively different quality, essentially rendering all manifestations the same and therefore all equally mundane (Boorstin, 1962; Hoberman, 2008b; Kierkegaard, 1962).  Even worse for critics, perhaps, was the perception of the masses’ refusal to grow out of its immature interests, a behavior that was characterized as both childlike and stubborn (Adorno, 1985).

And the fears of such theorists, all of whom were reacting to recent and rapid advances in broadcast technology, were not unfounded. Consider, for example, that radio had been popularized a scant fifty years prior and had vastly altered critics’ understanding of media’s potential impact, creating a precedent as it proliferated across the country and began to develop a platform for solidarity and nationalism. Yet, while the effects of radio were decidedly pro-social, due in part to its propagation of orchestral music and transmission of fireside chats, television was viewed as a corrosive force on society that spurred on the destruction of culture instead of enriching it.[4] For the critics of the Frankfurt School, television was indicative of an entrenched sentiment that regarded mass-produced culture as formulaic and perfectly suitable for a generation of passive consumers who sat enraptured in front of the glowing set. Associating the potential dissemination of propagandist ideology with television as a form of mass broadcast, cultural theorists evoked notions of totalitarian regimes akin to those of Hitler and Stalin in an effort to illustrate the potential subjugation of individual thought (Mattson, 2003). These simmering fears, aggrandized by their concurrence with the rising threat of Communism and collectivist cultures, found fertile soil in the already present anxiety-ridden ethos of the United States during the 1950s.


[1] It should be noted, however, that the comics of this time—those that belong to the end of the Golden Age and beginning of the Silver Age—also provide an additional understanding of the ways in which Americans indirectly wrestled with their fears.

[2] For a more exhaustive list of movies that support this point, see Wolfe, 2002.

[3] Let us also not forget the fact that Lonesome exhibits a rather patronizing attitude toward his audience in his later career, instituting the Cracker Barrel show with its manufactured country lackeys (Yates, 1974). In contrast to his first stint in Memphis, Lonesome has begun to embrace his country image as a means (if an inauthentic one) to connect with his audience, a point of contention to which we will return.

[4] Curiously, however, we see that this relationship between presidential addresses (like the aforementioned fireside chats) and mass media did not elicit notable complaints from critics who were generally wary of the merging of politics and entertainment (Quart, 1989; Benjamin, 1973). Although a larger discussion is warranted regarding the subtleties of this distinction, I would suggest that part of the differentiation stems from a high-low culture dichotomy. Although critics linked the negative presence of television with corporate advertising, James Twitchell suggests that there has always been a rather intimate relationship between arts and commerce, most saliently exhibited by wealthy citizens or entities who act as patrons (Twitchell, 1996).


Works Cited

Adorno, T. (1941). On Popular Music. Studies in Philosophy and Social Science, 9, 17-48.

Adorno, T. (1985). On the Fetish Character in Music and the Regression of Listening. In A. Arato, & E. Gebhardt (Eds.), The Essential Frankfurt School Reader (pp. 270-299). New York, NY: Continuum.

Benjamin, W. (1973). The Work of Art in the Age of Mechanical Reproduction. In H. Arendt (Ed.), Illuminations (H. Zohn, Trans., pp. 217-242). London, England: Schocken.

Boorstin, D. (1962). The Image: A Guide to Pseudo-Events in America. New York, NY: Atheneum.

Bragman, H. (2008). Where’s My Fifteen Minutes?: Get Your Company, Your Cause, or Yourself the Recognition You Deserve. New York, NY: Portfolio.

Gamson, J. (1994). Claims to Fame: Celebrity in Contemporary America. Berkeley: University of California Press.

Haskell, M. (2004, August 8). Whatever the Public Fears Most, It’s Right Up There on the Big Screen. The New York Times, pp. 4-5.

Horkheimer, M., & Adorno, T. W. (2002). Dialectic of Enlightenment: Philosophical Fragments. Stanford, CA: Stanford University Press.

Jensen, P. (1971). The Return of Dr. Caligari. Film Comment, 7(4), 36-45.

Kazan, E. (Director). (1957). A Face in the Crowd [Motion Picture].

Maloney, C. (1999). The Faces in Lonesome’s Crowd: Imaging the Mass Audience in “A Face in the Crowd”. Journal of Narrative Theory, 29(3), 251-277.

Mattson, K. (2003). Mass Culture Revisited: Beyond Tail Fins and Jitterbuggers. Radical Society, 30(1), 87-93.

Murphy, B. M. (2009). The Suburban Gothic in American Popular Culture. Basingstoke, Hampshire, England: Palgrave Macmillan.

Quart, L. (1989). A Second Look. Cineaste, 17(2), 30-31.

Wolfe, G. K. (2002). Dr. Strangelove, Red Alert, and Patterns of Paranoia in the 1950s. Journal of Popular Film, 57-67.


The Real-Life Implications of Virtual Selves

“The end is nigh!”—the plethora of words, phrases, and warnings associated with the impending apocalypse has saturated American culture to the point that audiences have grown jaded, as picketing figures bearing signs have become a fixture of political cartoons and echoes of the Book of Revelation appear in popular media like Legion and the short-lived television series Revelations. On a secular level, we grapple with the notion that our existence is a fragile one at best, with doom portended by natural disasters (e.g., Floodland and The Day After Tomorrow), rogue asteroids (e.g., Life as We Knew It and Armageddon), nuclear fallout (e.g., Z for Zachariah and The Terminator), biological malfunction (e.g., The Chrysalids and Children of Men), and the increasingly visible zombie apocalypse (e.g., Rot and Ruin and The Walking Dead). Clearly, recent popular media offerings manifest the strain evident in our ongoing relationship with the end of days; to be an American in the modern age is to realize that everything under—and including—the sun will kill us if given half a chance. Given the prevalence of themes like death and destruction in the current entertainment environment, it comes as no surprise that we turn to fiction to craft a kind of saving grace; although these impulses do not necessarily take the form of traditional utopias, our current culture definitely seems to yearn for something—or, more accurately, somewhere—better.

In particular, teenagers, as the focus of Young Adult (YA) fiction, have long been subjects for this kind of exploration, with contemporary authors like Cory Doctorow, Paolo Bacigalupi, and M. T. Anderson exploring the myriad issues that American teenagers face as they build upon a trend that includes foundational works by Madeleine L’Engle, Lois Lowry, and Robert C. O’Brien. Arguably darker in tone than previous iterations, modern YA dystopia now wrestles with the dangers of depression, purposelessness, self-harm, sexual trauma, and suicide. For American teenagers, psychological collapse can be just as damning as physical decay. Yet, rather than ascribe this shift to an increasingly rebellious, moody, or distraught teenage demographic, we might consider the cultural factors that contribute to the appeal of YA fiction in general—and themes of utopia/dystopia in particular—as these manifestations spill beyond the confines of YA fiction, presenting through teenage characters in programming ostensibly designed for adult audiences, as evidenced by television shows like Caprica (2009-2010).


Transcendence through Technology

A spin-off of, and prequel to, Battlestar Galactica (2004-2009), Caprica transported viewers to a world filled with futuristic technology, arguably the most prevalent of which was the holoband. Operating on basic notions of virtual reality and presence, the holoband allowed users to, in Matrix parlance, “jack into” an alternate computer-generated space, fittingly labeled by users as “V world.”[1] But despite its prominent place in the vocabulary of the show, the program itself never seemed to be overly concerned with the gadget; instead of spending an inordinate amount of time explaining how the device worked, Caprica chose to explore the effect that it had on society.

Calling forth a tradition steeped in teenage hacker protagonists (or, at the very least, ones who belonged to the “younger” generation), our first exposure to V world—and to the series itself—comes in the form of an introduction to an underground space created by teenagers as an escape from the real world. Featuring graphic sex,[2] violence, and murder, this iteration does not appear to align with traditional notions of a utopia but does represent the manifestation of Caprican teenagers’ desires for a world that is both something and somewhere else. And although immersive virtual environments are not necessarily a new feature in Science Fiction television,[3] with references stretching from Star Trek’s holodeck to Virtuality, Caprica’s real contribution to the field was its choice to foreground the process of V world’s creation and the implications of this construct for the show’s inhabitants.

Taken at face value, shards like the one shown in Caprica’s first scene might appear to be nothing more than virtual parlors, the near-future extension of chat rooms[4] for a host of bored teenagers. And in some ways, we’d be justified in this reading as many, if not most, of the inhabitants of Caprica likely conceptualize the space in this fashion. Cultural critics might readily identify V world as a proxy for modern entertainment outlets, blaming media forms for increases in the expression of uncouth urges. Understood in this fashion, V world represents the worst of humanity as it provides an unreal (and surreal) existence that is without responsibilities or consequences. But Caprica also pushes beyond a surface understanding of virtuality, continually arguing for the importance of creation through one of its main characters, Zoe.[5]

Seen one way, the very foundation of virtual reality and software—programming—is itself the language and act of world creation, with code serving as architecture (Pesce, 1999). If we accept Lawrence Lessig’s maxim that “code is law” (2006), we begin to see that cyberspace, as a construct, is infinitely malleable and the question then becomes not one of “What can we do?” but “What should we do?” In other words, if given the basic tools, what kind of existence will we create and why?

One answer presents itself in the form of Zoe, who creates an avatar that is not just a representation of herself but is, in effect, a type of virtual clone imbued with all of Zoe’s memories. Here we invoke a deep lineage of creation stories in Science Fiction that resonate with Frankenstein and even the Judeo-Christian God who creates man in his image. In effect, Zoe has not just created a piece of software but has, in fact, created life!—a discovery whose implications are immediate and pervasive in the world of Caprica. Although Zoe has not created a physical copy of her “self” (which would raise an entirely different set of issues), she has achieved two important milestones through her development of artificial sentience: the cyberpunk dream of integrating oneself into a large-scale computer network and the manufacture of a form of eternal life.[6]

Despite Caprica’s status as Science Fiction, we see glimpses of Zoe’s process in modern day culture as we increasingly upload bits of our identities onto the Internet, creating a type of personal information databank as we cultivate our digital selves.[7] Although these bits of information have not been constructed into a cohesive persona (much less one that is capable of achieving consciousness), we already sense that our online presence will likely outlive our physical bodies—long after we are dust, our photos, tweets, and blogs will most likely persist in some form, even if it is just on the dusty backup server of a search engine company—and, if we look closely, Caprica causes us to ruminate on how our data lives on after we’re gone. With no one to tend to it, does our data run amok? Take on a life of its own? Or does it adhere to the vision that we once had for it?

Proposing an entirely different type of transcendence, another character in Caprica, Sister Clarice, hopes to use Zoe’s work in service of a project called “apotheosis.” Representing a more traditional type of utopia—a paradisiacal space set apart from the normal—Clarice aims to construct a type of virtual heaven for believers of the One True God,[8] offering an eternal virtual life at the cost of one’s physical existence. Perhaps speaking to a sense of disengagement with the existent world, Clarice’s vision also reflects a tradition that conceptualizes cyberspace as a chance for humanity to try again, a blank slate where society can be re-engineered. Using the same principles that are available to Zoe, Clarice sees a chance not only to upload copies of existent human beings but to bring forth an entire world through code. Throughout the series, Clarice strives to realize her vision, culminating in a confrontation with Zoe’s avatar, who has, by this time, obtained a measure of mastery over the virtual domain. Suggesting that apotheosis cannot be granted, only earned, the series has Clarice’s dream literally crumble around her as her followers give up their lives in vain.

Although it is unlikely that we will see a version of Clarice’s apotheosis anytime in the near future, the notion of constructed immersive virtual worlds does not seem so far off. At its core, Caprica asks us, as a society, to think carefully about the types of spaces that we endeavor to realize and the ideologies that drive such efforts. If we understand religion as a structured set of beliefs that orders this world through our belief in the next, we can see the overlap between traditional forms of religion and the efforts of technologists like hackers, computer scientists, and engineers. As noted by Mark Pesce, Vernor Vinge’s novella True Names spoke to a measure of apotheosis and offered a new way of understanding the relationship between the present and the future—what Vinge offered to hackers was, in fact, a new form of religion (Pesce, 1999). Furthermore, aren’t we, as creators of these virtual worlds, fulfilling one of the functions of God? Revisiting the overlap between the doomsday/apocalyptic/dystopian fiction noted in the paper’s opening and Science Fiction, we see a rather seamless integration of ideas that challenges the traditional notion of a profane/sacred divide; in their own ways, the writings of religion and science both concern themselves with some of the same themes, although they may, at times, use seemingly incompatible language.

Ultimately, however, the most powerful statement made by Caprica comes about as a result of the extension to arguments made on screen:  by invoking virtual reality, the series begs viewers to consider the overlay of an entirely subjective reality onto a more objective one.[9] Not only presenting the coexistence of multiple realities as a fact, Caprica asks us to understand how actions undertaken in one world affect the other. On a literal level, we see that the rail line of New Cap City (a virtual analogue of Caprica City, the capital of the planet of Caprica)[10] is degraded (i.e., “updated”) to reflect a destroyed offline train, but, more significantly, the efforts of Zoe and Clarice speak to the ways in which our faith in virtual worlds can have a profound impact on “real” ones. How, then, do our own beliefs about alternate realities (be it heaven, spirits, string theory, or media-generated fiction) shape actions that greatly affect our current existence? What does our vision of the future make startlingly clear to us and what does it occlude? What will happen as future developments in technology increase our sense of presence and further blur the line between fiction and reality? What will we do if the presence of eternal virtual life means that “life” loses its meaning? Will we reinscribe rules onto the world to bring mortality back (and with it, a sense of urgency and finality) like Capricans did in New Cap City? Will there come a day where we choose a virtual existence over a physical one, participating in a mass exodus to cyberspace as we initiate a type of secular rapture?

As we have seen, online environments have allowed for incredible amounts of innovation and, on some days, the future seems inexplicably bright. Shows like Caprica are valuable for us as they provide a framework through which the average viewer can discuss issues of presence and virtuality without getting overly bogged down by technospeak. On some level, we surely understand the issues we see on screen as dilemmas that are playing out in a very human drama and Science Fiction offerings like Caprica provide us with a way to talk about subjects that we will confront in the future although we may not even realize that we are doing so at the time. Without a doubt, we should nurture this potential while remaining mindful of our actions; we should strive to attain apotheosis but never forget why we wanted to get there in the first place.

Works Cited

Lessig, L. (2006, January). Socialtext. Retrieved September 10, 2011, from Code 2.0: https://www.socialtext.net/codev2/

Pesce, M. (1999, December 19). MIT Communications Forum. Retrieved September 12, 2011, from Magic Mirror: The Novel as a Software Development Platform: http://web.mit.edu/comm-forum/papers/pesce.html


[1] Although the show is generally quite smart about displaying the right kind of content for the medium of television (e.g., fleshing out the world through channel surfing, which not only gives viewers glimpses of the world of Caprica but also reinforces the notion that Capricans experience their world through technology), the ability to visualize V world (and the transitions into it) is certainly an element unique to an audio-visual presentation. One of the strengths of the show, I think, is its ability to add layers of information through visuals that do not call attention to themselves. These details, which are not crucial to the story, flesh out the world of Caprica in a way that a book could not, for while a book must generally mention items (or at least allude to them) in order to bring them into existence, the show does not have to ever name aspects of the world or actively acknowledge that they exist. Moreover, I think that there is something rather interesting about presenting a heavily visual concept through a visual medium that allows viewers to identify with the material in a way that they could not if it were presented through text (or even a comic book). Likewise, reading Neal Stephenson’s The Diamond Age (which prominently features a book) allows one to reflect on one’s own interaction with the book itself—an opportunity that would not be afforded to you if you watched a television or movie adaptation.

[2] By American cable television standards, with the unrated and extended pilot featuring some nudity.

[3] Much less Science Fiction as a genre!

[4] One could equally make the case that V world also represents a logical extension of MUDs, MOOs, and MMORPGs. The closest modern analogy might, in fact, be a type of Second Life space where users interact in a variety of ways through avatars that represent users’ virtual selves.

[5] Although beyond the scope of this paper, Zoe also represents an interesting figure as both the daughter of the founder of holoband technology and a hacker who actively worked to subvert her father’s creation. Representing a certain type of stability/structure through her blood relation, Zoe also introduced an incredible amount of instability into the system. Building upon the aforementioned hacker tradition, which itself incorporates ideas about youth movements from the 1960s and lone tinkerer/inventor motifs from Science Fiction in the early 20th century, Zoe embodies teenage rebellion even as she figures in a father-daughter relationship, which speaks to a particular type of familial bond/relationship of protection and perhaps stability.

[6] Although the link is not directly made, fans of Battlestar Galactica might see this as the start of resurrection, a process that allows consciousness to be recycled after a body dies.

[7] In addition, of course, is the data that is collected about us involuntarily or without our express consent.

[8] As background context for those who are unfamiliar with the show, the majority of Capricans worship a pantheon of gods, with monotheism looked upon negatively as it is associated with a fundamentalist terrorist organization called Soldiers of The One.

[9] One might in fact argue that there is no such thing as an “objective” reality as all experiences are filtered in various ways through culture, personal history, memory, and context. What I hope to indicate here, however, is that the reality experienced in the V world is almost entirely divorced from the physical world of its users (with the possible exception of avatars that resembled one’s “real” appearance) and that virtual interactions, while still very real, are, in a way, less grounded than their offline counterparts.

[10] Readers unfamiliar with the show should note that “Caprica” refers to both the name of the series and a planet that is part of a set of colonies. Throughout the paper, italicized versions of the word have been used to refer to the television show while an unaltered font has been employed to refer to the planet.


Mutable Masses?

It’s the End of the World as We Know It (And I Feel Fine)

Notably, however, the fears associated with the masses have not been limited to one particular decade in American history:  across cultures and times, we can witness examples akin to tulip mania, in which unruly mobs exhibited relatively irrational behavior. Given the recurring nature of this phenomenon, which receives additional credence from psychological studies exploring groupthink and conformity (Janis, 1972; Asch, 1956), we might choose to examine how, if at all, the cultural critiques of the 1950s apply to contemporary society.

Recast, the criticisms of mass culture presumably resonate today in a context where popular culture holds sway over a generally uncritical public; we might convincingly argue that media saturation has served to develop a modern society in which celebrities run wild while evidencing sexual exploits like badges of honor, traditional communities have collapsed, and the proverbial apocalypse appears closer than ever. Moreover, having lost sight of our moral center while further solidifying our position as a culture of consumption since the 1950s, the masses have repeatedly demonstrated their willingness to flash a credit card in response to advertising campaigns and to purchase unnecessary goods hawked by celebrity spokespeople, demonstrating a marked fixation on appearance and the image in a manner reminiscent of critiques drawn from A Face in the Crowd (Hoberman, 2008a; Ecksel, 2008). Primarily concerned with the melding of politics, news, and entertainment, which harkens back to Kierkegaard-inspired critiques of mass culture, current critics charge that the public has at long last become what we most feared:  a mindless audience with sworn allegiances born out of fealty to the almighty image (Hoberman, 2008a).

Arguably the most striking (or memorable) recent expression of image, and the subsequent commingling between politics and entertainment, centered around Sarah Palin’s campaign for office in 2008. Indeed, much of the discussion regarding Palin centered around her image and colloquialisms rather than focusing solely on her abilities. [1] Throughout her run, Palin positioned herself as an everyman figure, summoning figures such as “Joe Six-Pack” and employing terms such as “hockey mom” in order to convey her relatability to her constituents.[2] In a piece on then-Vice-Presidential candidate Sarah Palin, columnist Jon Meacham questions this practice by writing:  “Do we want leaders who are everyday folks, or do we want leaders who understand everyday folks?” (2008). Palin, it seemed to Meacham, represented much more of the former than the latter; this position then leads to the important suggestion that Palin was placed on the political bill in order to connect with voters (2008). Suddenly, a corollary between Palin and Lonesome Rhodes from A Face in the Crowd becomes almost self-evident.

At our most cynical, we could argue that Palin is a Lonesome-type figure, cleverly manipulating her image in order to connect with the disenfranchised and disenchanted. More realistically, however, we might consider how Palin could understand her strength in terms of her relatability instead of her political acumen; she swims against the current as a candidate of the people (in perhaps the truest sense of the term) and provides hope that she will represent the voice of the common man, in the process challenging the status quo in a government that has seemingly lost touch with its base. In some ways, this argument retains currency in light of post-election actions that demonstrate increasing support of the Tea Party movement.

However, regardless of our personal political stances, the larger pertinent issue raised by A Face in the Crowd is the continued existence of an audience whose decision-making process remains heavily influenced by image—we actually need to exert effort in order to extract our opinion of Sarah Palin the politician from the overall persona of Sarah Palin. Author Mark Rowlands argues that a focus on image—and a reliance on the underlying ethereal quality described by Daniel Boorstin as being “well known for [one’s] well-knownness” (Boorstin, 1962, p. 221)—is ultimately damning, however powerful image may be:  the public’s inability to distinguish between items of quality leads us to focus on the wrong questions (and, perhaps worse, to not even realize that we are asking the wrong questions) in ways that have very real consequences. Extrapolating from Rowlands, we might argue that, as a culture obsessed with image and reputation, we have, in some ways, forgotten how to judge the things that really matter because we have lost a sense of what our standards should be.

Ever the Same?

So while the critiques of the Frankfurt School might appear to hold true today, we also need to realize that modern audiences exist in a world that is, in some ways, starkly different from that of the 1950s. To be sure, the mainstream media continues to exist in a slightly expanded form, but new commentary on the state of American culture must account for the myriad ways in which current audiences interact with the world around them. For instance, work published after Theodor Adorno’s time has argued against the passive nature of audiences, recognizing the agency of individual actors (Mattson, 2003; Schudson, 1984).[3] Moreover, this new activity on the part of audiences has done much to commingle the once distinctly separate areas of high and low culture in a process that would have likely confounded members of the Frankfurt School. The current cultural landscape encompasses remix efforts such as Auto-Tune the News along with displays of street art in museum galleries; projects once firmly rooted in folk or pop art have transcended definitional boundaries to become more accepted—and even valued—in the lives of all citizens. While Adorno might be tempted to cite this as evidence of high culture’s debasement, we might instead argue that these new manifestations have challenged the long-held elitism surrounding the relative worth of particular forms of art.

Additionally, examples like Auto-Tune the News suggest that advances in technology have also had a large impact on the cultural landscape of America over the past half century, with exponential growth occurring after the widespread deployment of the Internet and the resulting World Wide Web. While the Internet certainly provided increased access to information, it also created the scaffolding for social media products that allowed new modes of participation for users. Viewed in the context of image, technology has helped to construct a world in which reputations are made and broken in an instant and more information circulates in the system than ever before; the advent of these technologies, then, has not only increased the velocity of the system but has also amplified it.

Although the media often showcases the deleterious qualities of the masses’ relationship with these processes (the suicide of a student at Rutgers University being a recent and poignant example), we are not often exposed to the incredible pro-social benefits of a platform like Twitter or Facebook. While we might be tempted to associate such pursuits with online predators (a valid concern, to be sure) or, at best, to dismiss them as unproductive in regard to civic engagement (Gladwell, 2010), to do so would be to ignore the powerfully positive uses of this technology (Burnett, 2010; Lehrer, 2010; Johnston, 2010). Indeed, we need only look at a newer generation of activist groups who have built upon Howard Rheingold’s concept of “smart mobs” in order to leverage online technologies to their benefit (2002)—a recent example can be found in the efforts of groups like The Harry Potter Alliance, Invisible Children, and the Kristin Brooks Hope Center to win money in the Chase Community Giving competition (Business Wire, 2010). Clearly, if the masses can self-organize and contribute to society, the critique of the masses as nothing more than passive receptors of media messages needs to be revised.

Reconsidering the Masses

If we accept the argument that audiences can play an active part in their relationship with media, we then need to look for a framework that begins to address media’s role in individuals’ lives and to examine the motivations and intentions that underlie media consumption. Although we might still find that media is a corrosive force in society, we must also realize that, while potentially exploiting an existing flaw, it does not necessarily create the initial problem (MacGregor, 2000).

A fundamental building block in understanding media’s potential impact is the increased propensity for individuals (particularly youth) to focus on external indicators of self-worth, with the current cultural climate of consumerism leading individuals to dwell on what they do not have (e.g., physical features, talent, clothes, etc.) rather than on their strengths. Simultaneously an exacerbation of this problem and an entity proffering solutions, constructs like advertising provide an easy way for youth to compensate for their feelings of anxiety by instilling brands as a substitute for value:  the right label can confer a superficial layer of prestige and esteem upon individuals, which can act as a temporary shield against criticism and self-doubt. In essence, one might argue that if people aren’t good at anything, they can still be associated with the right brands and be okay. Although we might be tempted to blame advertising for this situation, it merely serves to exploit our general unease about our relationship to the world, a process also reminiscent of narcissism (Lasch, 1979).

Historian Christopher Lasch goes on to argue that we have become generally disconnected from traditional anchors such as religion and thus have come to substitute media messages and morality tales for actual ethical and spiritual education (1979). The overlapping role of religion and advertising is noted by James Twitchell, who contends that, “Like religion, which has little to do with the actual delivery of salvation in the next world but everything to do with the ordering of life in this one, commercial speech has little to do with material objects per se but everything to do with how we perceive them” (1996, p. 110). Thus, we might classify religion, advertising, entertainment, and celebrity as examples of belief systems (i.e., certain ways of seeing the world, complete with their own sets of values) and use these paradigms to begin to understand their respective (and ultimately somewhat similar!) effects on the masses.

A Higher Power

Ideologies such as those found in popular culture, religion, or advertising tell believers, in their own ways, what is (and is not) important in society, something that Twitchell refers to as “magic” (1996, p. 29). Each manifestation also professes a particular point of view and attempts to integrate itself into everyday life, drawing on our desire to become part of something (e.g., an idea, a concept, or a movement) that is larger than ourselves. Perhaps most importantly, the forces of advertising, entertainment, religion, and art (as associated with high/pop/folk culture) play on this desire in order to allow humans to give their lives meaning and worth in terms of the external:  God, works of art, and name brands all serve as tools of classification. While cynics might note that this stance bears some similarities to the carnival sideshows of P. T. Barnum—it does not matter what is behind the curtain as long as there is a line out front (Gamson, 1994; Lasch, 1979)—these systems survive because they continue to speak to a deep desire for structure; the myth of advertising works for the same reasons that we believe in high art, higher education, and higher powers. Twitchell supports this idea by mentioning that “the real force of [the culture of advertising] is felt where we least expect it:  in our nervous system, in our shared myths, in our concepts of self, and in our marking of time” (1996, p. 124). Constructs like advertising or entertainment, it seems, not only allow us to assemble a framework through which we understand our world but also continually inform us about who we are (or who we should be), acting as a collection of narratives that influences the greater perceptions of individuals in a manner reminiscent of the role of television in Cultivation Theory (Gerbner & Gross, 1976). The process of ordering and imbuing value ultimately demonstrates how overarching ideologies can not only create culture but also act to shape it, a process evidenced by the ability of the aforementioned concepts to consume and/or reference previously shared cultural knowledge while simultaneously contributing to the cultural milieu.

Given our reconsideration of mid-century cultural critiques, it follows that we should necessarily reevaluate proposed solutions to the adverse issues present within mass culture. We recall the advice of A Face in the Crowd’s Mel Miller (i.e., “We get wise to them”) and reject its elitist overtones while remaining mindful of its core belief. We recognize that priding ourselves on being smart enough to see through the illusions present in mass culture, while pitying those who have yet to understand how they are being herded like so many sheep, makes us guilty of the narcissism we once ascribed to the masses—and perhaps even more dangerous than the uneducated because we are convinced that we know better. We see that aspects of mass culture address deeply embedded desires and that our best hope for improving culture is to satisfy these needs while educating audiences so that they can better understand how and why media affects them. Our job as critics is to encourage critical thinking on the part of audiences, dissecting media and presenting it to individuals so that they can make informed choices about their consumption patterns; our challenge is to convincingly demonstrate that engagement with media is a crucial and fundamental part of the process. If we subscribe to these principles, we can preserve the masses’ autonomy and not merely replace one dominant ideology with another.


[1] Certainly being female did not help this, as American women are typically subject to a “halo effect” wherein their attractiveness (i.e., appearance) affects how they are perceived (Kaplan, 1978).

[2] Palin has continued the trend, currently employing the term “mama grizzlies,” a call-to-arms that hopes to rally women’s willingness to fight in order to protect the things they believe in. Interestingly, a term that reaffirms the traditional role of women as nurturing matriarchs has been linked to feminist movements, a move that seems to confuse the empowerment of women with a socially conservative construct of their role in American life (Dannenfelser, 2010).

[3] We can also see much work conducted in the realm of fan studies that supports the practice of subversive readings or “textual poaching,” a term coined by Henry Jenkins (1992), in order to discuss contemporary methods of meaning making and resistance by fans.


Love Me or Hate Me, Still an Obsession

Reacting to atrocities witnessed throughout the course of World War II, Americans in the 1950s became obsessed with notions of power and control, fearing that they would be subsumed by the invisible hand of a totalitarian regime. In particular, the relatively young medium of television became suspect as it represented a major broadcast system that seemed to have an almost hypnotic pull on its audience, leaving viewers entranced by its images. And images, according to author and historian Daniel Boorstin, had become increasingly prominent throughout the 19th century as part of the Graphic Revolution, replete with the power to dissociate the real from its representation (1962). For cultural critics still reeling from the aftereffects of Fascism and totalitarianism, this was a dangerous proposition indeed.

Although these underlying anxieties of mid-century American society could be examined via a wide range of anthropological lenses and frameworks, visual media has historically provided a particularly vivid manifestation of the fears latent in the people of the United States (Haskell, 2004). This is, of course, not to imply that visual media is necessarily the best or only means by which we can understand prevailing ideologies in the years after World War II, but merely one of the most visible. However, as a critical examination of the entire media landscape of the 1950s would be beyond the scope of a single paper, discussion shall be concentrated primarily on Elia Kazan’s 1957 movie A Face in the Crowd, with particular attention paid to the contrasting channels of cinema and television.[1] This paper will seek to briefly position A Face in the Crowd in the larger context of paranoia-driven cinema of the 1950s before using the film as an entryway to discuss critiques of mass culture. Given the film’s apparent sustained resonance, as indicated by its relatively recent mentions (Vallis, 2008; Hoberman, 2008b; Franklin, 2009), the arguments of Critical Theory will then be applied to modern American culture in an attempt to ascertain their continued validity. Finally, an argument will be made that acknowledges the potential dangers facing mass culture in the 21st century but also attempts to understand the processes that underlie these pitfalls and provides a suggestion for recourse in the form of cultural and media literacy.

Paranoia, Paranoia, Everyone’s Coming to Get Me

The post-war prosperity of the 1950s caused rapid changes in America, literally altering the landscape as families began to flood into the newly-formed suburbs. With the dream and promise of upward social mobility firmly ensconced in their heads, families rushed to claim their piece of the American dream, replete with the now-iconic front yard and white picket fence. And yet, ironically, a new set of worries began to fester underneath the idyllic façade of the suburbs as the troubles of the city were merely traded for fears of paranoia and invasion; the very act of flight led to entrapment by an ethos that subtly precluded the possibility of escape.

As with many other major cultural shifts, the rapid change in the years following World War II caused Americans to muse over the direction in which they were now headed; despite a strong current of optimism that bolstered dreams of a not-far-off utopia, there remained a stubborn fear that the quickly shifting nature of society might have had unanticipated and unforeseen effects (Murphy, 2009). Life in the suburbs, it seemed, was too good to be true and inhabitants felt a constant tension as they imagined challenges to their newly rediscovered safety:  from threats of invasion to worries about conformity, and from dystopian futures to a current reality that could now be obliterated with nuclear weapons, people of the 1950s continually felt the weight of being a society under siege. An overwhelming sense of doubt, and more specifically, paranoia, characterized the age and latent fears manifested in media as the public began to struggle with the realization that the suburbs did not fully represent the picturesque spaces that they had been conceived to be. In fact, inhabitants were assaulted on a variety of levels as they became disenchanted with authority figures, feared assimilation and mind control (particularly through science and/or technology), began to distrust their neighbors (who could easily turn out to be Communists, spies, or even aliens!), and felt haunted by their pasts, all of which filled the movie screens of the decade (Jensen, 1971; Murphy, 2009; Wolfe, 2002).[2] Following solidly in this tradition, Kazan’s A Face in the Crowd picks up on some of the latent strains of paranoia in American culture while simultaneously serving as a platform for a set of critiques regarding mass culture.

Somewhere, a Star Is Made

The storyline of A Face in the Crowd is rather straightforward and yet deceptively complex in its undertones:  on the surface, we experience a rather heavy-handed morality tale in the form of country bumpkin Larry “Lonesome” Rhodes, a relative nobody who is plucked from obscurity and made (and subsequently broken) through powers associated with television. Yet, it is only when we begin to connect the movie to a larger societal context that we begin to understand the ramifications of the film’s message; a careful examination of A Face in the Crowd reveals striking suspicions regarding the role that media plays (in this case, primarily television and cinema) in shaping American culture. Stars, director Elia Kazan argues, are not so much born as made, a distinction that portends dire consequences.

It is worth noting that Kazan’s film was made during a time when the concept of the “celebrity” was being renegotiated by America; for a large part of its history, the United States, firmly grounded in a Puritan work ethic, had honored heroes who exemplified the ideals of a culture of production and was struggling to reconcile these notions in an environment whose emphasis was now focused on consumption. Although modern audiences might initially find this shift difficult to appreciate, one need only consider that the premium placed on production is so central to American ideology that it continues to linger today:  in a culture that exhibits rampant consumerism, we still value the “self-made man” and sell the myth of America as a place where anyone can achieve success through hard work. To abandon these ideas would necessitate that we reinterpret the very meaning of “America.” Thus, we become more sympathetic to the critics of the day who lamented the loss of the greatness of man and bristled against the notion that fame or celebrity could be manufactured—such a system could only result in individuals who were lacking and unworthy of their status (Gamson, 1994; Benjamin, 1973).

Such is the case, it seems, with Larry Rhodes, who is discovered by roving reporter Marcia Jeffries in an Arkansas jail. Although it cannot be denied that Rhodes has some modicum of talent and a certain charisma that comes from being unafraid to speak one’s mind, Marcia ushers Rhodes onto the path of greatness by dubbing him “Lonesome,” thus creating a character that transforms Rhodes from a despondent drunk to a winsome drifter. This scene—the first major one in the movie—introduces the important notion that those involved in the media can be implicitly involved in the manipulation of the information that travels over the airwaves. Subtly adding to the insidious nature of the media, A Face in the Crowd portrays Marcia as a character who seems likable enough but who is, in a way, exploiting the people in jail as she rushes in with her tape recorder, intent on prying stories from the characters she finds (or creates!), without much concern for truly understanding why these men are imprisoned in the first place. Taken to an extreme, we later come across the character of The General, who further perverts the connection between media and power as he conspires with Lonesome to remake the image of Senator Worthington Fuller as the congressman runs for President.

Yet, as Lonesome Rhodes grows in his role as a media personality, he quickly demonstrates that the power to manipulate does not lie solely with those who sit behind the cameras. In Memphis, Rhodes incites a riot against the Luffler mattress company and also solicits donations in order to help a Black woman rebuild her house. In light of this, we can see that while Kazan focuses on the negative implications of television and celebrity, the relative good or bad that comes from these actions is not necessarily the point—instead, the one constant in all of the depicted scenarios is a public that is manipulated into performing actions on the behalf of others. Although the characters of Lonesome and The General are vilified throughout the film, it is the masses for whom Kazan demonstrates true disdain.

Extraordinary Popular Delusions

Perhaps nowhere is this contempt more apparent than at the end of the film where, in an attempt to offer a small moment of solace to Marcia after her unmasking of Lonesome, writer Mel Miller notes, “We get wise to them, that’s our strength” (Kazan, 1957). And Miller is not wrong:  Western tradition has long recognized the correlation between knowledge and power, and Miller’s assertion touches upon the revelatory clout inherent in the realignment of perception and reality as noted by public relations guru Howard Bragman (2008). A more critical examination of the film’s closing scene, however, raises an important question:  Who is Miller’s “we”? Although one might be tempted to read this line as indicative of an egalitarian philosophical view, it is important to note that the only two characters in the shot represent the film’s arguably upper-middle-class, and pointedly Eastern-educated, elite—nowhere to be seen are representatives of the small Arkansas town from the film’s opening or denizens of Memphis, both of whom serve to characterize the majority of Lonesome’s devoted viewers.[3] In fact, if we take time to reflect upon the movie, we realize that the majority of the audience was only alerted to Lonesome’s dual nature after Marcia flipped a control room switch and revealed the underlying deterioration; the masses oscillated from one position to the next without understanding how or why and once again adopted a passive stance in their relationship with media. Moreover, as Courtney Maloney points out, Kazan’s depiction of the agency of the masses is actually limited in scope:  despite a montage of audience members vehemently phoning in, sponsors are simultaneously shown to be acting independently as they withdraw their association with Lonesome (1999). Further, the subtext of the scene distances the rational decision-making of the truly powerful from the impassioned beseeching of the masses, likening the power of the latter to that of a mob. Knowledge and its associated authority, clearly, are afforded to a select group.

This idea, that the world can be divided between those who “get wise” and those who do not, serves to develop a rather sharp classist criticism against the medium of television and those who would watch it:  moviegoers, by virtue of witnessing Kazan’s work, find themselves elevated in status and privy to “the man behind the curtain” (to borrow a phrase). In contrast, the malleable masses were considered to be pacified and placated by idealistic portrayals of life in the 1950s in the form of television programs like Leave It to Beaver, The Donna Reed Show, and The Adventures of Ozzie and Harriet. Clearly, Kazan creates a dichotomy imbued with a value judgment descended from the thoughts of prominent thinkers in the Frankfurt School who, as far as aesthetics were concerned, preferred the high culture of cinema to the conformity and manipulated tastes of television (Horkheimer & Adorno, 2002; Adorno, 1985; Quart, 1989). This distinction between high and low culture would be a crucial supporting idea for critics as a prominent fear of mass culture was that it portended a collapse between concepts (e.g., fame, celebrity, or intellectual value) of objectively different quality, essentially rendering all manifestations the same and therefore all equally mundane (Boorstin, 1962; Hoberman, 2008b; Kierkegaard, 1962).  Even worse for critics, perhaps, was the perception of the masses’ refusal to grow out of its immature interests, a behavior that was characterized as both childlike and stubborn (Adorno, 1985).

And the fears of such theorists, all of whom were reacting to recent and rapid advances in broadcast technology, were not unfounded. Consider, for example, that radio had been popularized a scant fifty years prior and had vastly altered critics’ understanding of media’s potential impact, creating a precedent as it proliferated across the country and began to develop a platform for solidarity and nationalism. Yet, while the effects of radio were decidedly pro-social, due in part to its propagation of orchestral music and transmission of fireside chats, television was viewed as a corrosive force on society that spurred on the destruction of culture instead of enriching it.[4] For the critics of the Frankfurt School, television was indicative of an entrenched sentiment that regarded mass-produced culture as formulaic and perfectly suitable for a generation of passive consumers who sat enraptured in front of the glowing set. Associating the potential dissemination of propagandist ideology with television as a form of mass broadcast, cultural theorists evoked notions of totalitarian regimes akin to Hitler and Stalin in an effort to illustrate the potential subjugation of individual thought (Mattson, 2003). These simmering fears, aggrandized by their concurrence with the rising threat of Communism and collectivist cultures, found fertile soil in the already present anxiety-ridden ethos of the United States during the 1950s.


[1] It should be noted, however, that the comics of this time—those that belong to the end of the Golden Age and beginning of the Silver Age—also provide an additional understanding of the ways in which Americans indirectly wrestled with their fears.

[2] For a more exhaustive list of movies that support this point, see Wolfe, 2002.

[3] Let us also not forget the fact that Lonesome exhibits a rather patronizing attitude toward his audience in his later career, instituting the Cracker Barrel show with its manufactured country lackeys (Yates, 1974). In contrast to his first stint in Memphis, Lonesome has begun to embrace his country image as a means (if an inauthentic one) to connect with his audience, a point of contention to which we will return.

[4] Curiously, however, we see that this relationship between presidential addresses (like the aforementioned fireside chats) and mass media did not elicit notable complaints from critics who were generally wary of the merging of politics and entertainment (Quart, 1989; Benjamin, 1973). Although a larger discussion is warranted regarding the subtleties of this distinction, I would suggest that part of the differentiation stems from a high-low culture dichotomy. Although critics linked the negative presence of television with corporate advertising, James Twitchell suggests that there has always been a rather intimate relationship between arts and commerce, most saliently exhibited by wealthy citizens or entities who act as patrons (Twitchell, 1996).


A Spoonful of Fiction Helps the Science Go Down

Despite not being an avid fan of Science Fiction when I was younger (unless you count random viewings of Star Trek reruns), I engaged in a thorough study of scientific literature in the course of pursuing a degree in the Natural Sciences. Instead of Nineteen Eighty-Four, I read books about the discovery of the cell and of cloning; instead of Jules Verne’s literary journeys, I followed the real-life treks of Albert Schweitzer. I studied Biology and was proud of it! I was smart and cool (as much as a high school student can be), for although I loved Science, I never would have identified as a Sci-Fi nerd.

But, looking back, I begin to wonder.

For those who have never had the distinct pleasure of studying Biology (or who have pushed the memory far into the recesses of their minds), let me offer a brief taste via this diagram of the Krebs Cycle:

Admittedly, the diagram is not overly complicated (though it was certainly a lot for my high school mind to understand), but I found myself making up a story of sorts in order to remember the steps. The details are fuzzy, but I seem to recall some sort of bus with passengers getting on and off as the vehicle made a circuit and ended up back at a station. I will be the first to admit that this particular tale wasn’t overly sophisticated or spectacular, but, when you think about it, wasn’t it a form of science fiction? So my story didn’t feature futuristic cars, robots, aliens, or rockets—but, at its core, it represented a narrative that helped me to make sense of my world, reconciling the language of science with my everyday vernacular. At the very least, it was a fiction about science fact.

And, ultimately, isn’t this what Science Fiction is all about (at least in part)? We can have discussions about hard vs. soft or realistic vs. imaginary, but, for me, the genre has always been about people’s connection to concepts in science and their resulting relationships with each other. Narrative allows us to explore ethical, moral, and technological issues in science that scientists themselves might not even think about. We respond to innovations with a mixture of anxiety, hope, and curiosity, and the stories that we tell often reveal that we are capable of experiencing all three emotional states simultaneously! For those of us who do not know the jargon, Science Fiction allows us to respond to the field on our terms as we simply try to make sense of it all. Moreover, because of its status as genre, Science Fiction also affords us the ability to touch upon deeply ingrained issues in a non-threatening manner:  as was mentioned in our first class with respect to humor, our attention is so focused on tech that we “forget” that we are actually talking about things of serious import. From Frankenstein to Dr. Moreau, the Golem, Faust, Francis Bacon, Battlestar Galactica and Caprica (among many others), we have continued to struggle with our relationship to Nature and God (and, for that matter, what are Noah and Babel about if not technology!) all while using Science Fiction as a conduit. Through Sci-Fi we not only concern ourselves with issues of technology but also juggle concepts of creation/eschatology, autonomy, agency, free will, family, and society.

It would make sense, then, that modern science fiction seemed to rise concurrent with post-Industrial Revolution advancements as the public was presented with a whole host of new opportunities and challenges. Taken this way, Science Fiction has always been about the people—call it low culture if you must—and I wouldn’t have it any other way.


Writing the Future

I saw it there, unmistakable; it was unlike anything I had seen before (or would ever see again).

Often existing just on the edge of familiarity—there exists here a certain resonance with Freud’s “uncanny”—the realm of Science Fiction (SF) might be seen to possess an intuitive relationship with design, with the distinctive look and feel of a crafted world often our first clue that we have transcended everyday reality.[1] On one level, the connection between SF and design seems rather banal, with repeated exposure to depictions of outer space or post-apocalyptic visions of the Earth—we have been there and done that (figuratively, if not literally).

Yet, upon reflection, I think that it is not only natural for SF to be concerned with the concept of design but that SF is also a part of the design process itself, for both ask the same basic questions of how things could be and how things should be. Science Fiction, then, like design, is concerned with contemplation and speculation, a point echoed by Brian David Johnson.

And contemplation and speculation in SF often take the form of artistic expression that is largely driven by the realization of relationships that do not yet exist:  if the job of a writer is to commit unexplored connections to paper—or perhaps to see established links in a new and/or unexpected light—then the SF writer might tend to focus on relationships as they intersect with technology. In other words, one possible function of SF writers is to explore the interaction between us (as individuals or collectively) and the world around us, highlighting technology as a salient subject; SF provides a creative space that allows authors to probe the consequences of permutations latent in the near future.

The term “technology,” however, should not merely imply gadgets or machines (although it certainly includes them), but rather a whole host of tools (e.g., paper) and apparatuses that comprise the tangible world. We might even broaden the scope of our inquiry, asking whether “technology” is a product, a process, or both. We see, for example, that Minority Report pushes the envelope by proffering new conceptualizations of tools used for imaging and data storage, but John Anderton’s interaction with information surely suggests a rethinking in process as well. Does this practice, on some level, constitute a new technology? Or, perhaps we return our gaze back to futuristic buildings and structures:  advances in construction materials certainly represent a new type of technology (in the traditional sense), but architecture as a form also underscores a kind of social interface, its affordances subtly hinting at directions for movement, observation, or interaction. How, then, might the design of something also be considered a type of technology?

So if elements of technology infuse design, and a quick mental survey indicates that design is largely concerned with technology, we might argue that Science Fiction possesses the potential to intersect with design on several levels.

One such implementation, as John Underkoffler points out in his TED talk, is the development of the user interface (UI), an incredibly important milestone in our relationship with computers as it translated esoteric programming syntax into a type of language that the average person could understand. Indeed, as our abilities become more sophisticated, we seem to be making computers more accessible (and also intuitive, although this is a separate issue) to even the most basic users as we build interfaces that respond to touch, gestures, and brain waves.


[1] Alternatively, one might also suggest that “we are not in Kansas anymore” as a nod to the transformational properties of the third of three related genres:  Horror (alluded to by the uncanny), Science Fiction, and Fantasy.


Love Me or Hate Me, Still an Obsession

Reacting to atrocities witnessed throughout the course of World War II, Americans in the 1950s became obsessed with notions of power and control, fearing that they would be subsumed by the invisible hand of a totalitarian regime. In particular, the relatively young medium of television became suspect as it represented a major broadcast system that seemed to have an almost hypnotic pull on its audience, leaving viewers entranced by its images. And images, according to author and historian Daniel Boorstin, were becoming increasingly prominent throughout the 19th century as part of the Graphic Revolution replete with the power to disassociate the real from its representation (1962). For cultural critics still reeling from the aftereffects of Fascism and totalitarianism, this was a dangerous proposition indeed.

Although these underlying anxieties of mid-century American society could be examined via a wide range of anthropological lenses and frameworks, visual media has historically provided a particularly vivid manifestation of the fears latent in the people of the United States (Haskell, 2004). This is, of course, not to imply that visual media is necessarily the best or only means by which we can understand prevailing ideologies in the years after World War II, but merely one of the most visible. However, as a critical examination of the entire media landscape of the 1950s would be beyond the scope of a single paper of this magnitude, discussion shall be primarily concentrated around Elia Kazan’s 1957 movie A Face in the Crowd with particular attention paid to the contrasting channels of cinema and television.[1] This paper will seek to briefly position A Face in the Crowd in the larger context of paranoia-driven cinema of the 1950s before using the film as an entryway to discuss critiques of mass culture. Given the film’s apparent sustained resonance as indicated by its relatively recent mention (Vallis, 2008; Hoberman, 2008b; Franklin, 2009), the arguments of Critical Theory will then be applied to modern American culture in an attempt to ascertain their continued validity. Finally, an argument will be made that acknowledges the potential dangers facing mass culture in the 21st century but also attempts to understand the processes that underlie these pitfalls and provides a suggestion for recourse in the form of cultural and media literacy.

 

Paranoia, Paranoia, Everyone’s Coming to Get Me

The post-war prosperity of the 1950s caused rapid changes in America, literally altering the landscape as families began to flood into the newly-formed suburbs. With the dream and promise of upward social mobility firmly ensconced in their heads, families rushed to claim their piece of the American dream, replete with the now-iconic front yard and white picket fence. And yet, ironically, a new set of worries began to fester underneath the idyllic façade of the suburbs as the troubles of the city were merely traded for fears of paranoia and invasion; the very act of flight led to entrapment by an ethos that subtly precluded the possibility of escape.

As with many other major cultural shifts, the rapid change in the years following World War II caused Americans to muse over the direction in which they were now headed; despite a strong current of optimism that bolstered dreams of a not-far-off utopia, there remained a stubborn fear that the quickly shifting nature of society might have had unanticipated and unforeseen effects (Murphy, 2009). Life in the suburbs, it seemed, was too good to be true and inhabitants felt a constant tension as they imagined challenges to their newly rediscovered safety:  from threats of invasion to worries about conformity, and from dystopian futures to a current reality that could now be obliterated with nuclear weapons, people of the 1950s continually felt the weight of being a society under siege. An overwhelming sense of doubt, and more specifically, paranoia, characterized the age and latent fears manifested in media as the public began to struggle with the realization that the suburbs did not fully represent the picturesque spaces that they had been conceived to be. In fact, inhabitants were assaulted on a variety of levels as they became disenchanted with authority figures, feared assimilation and mind control (particularly through science and/or technology), began to distrust their neighbors (who could easily turn out to be Communists, spies, or even aliens!), and felt haunted by their pasts, all of which filled the movie screens of the decade (Jensen, 1971; Murphy, 2009; Wolfe, 2002).[2] Following solidly in this tradition, Kazan’s A Face in the Crowd picks up on some of the latent strains of paranoia in American culture while simultaneously serving as a platform for a set of critiques regarding mass culture.

 

Somewhere, a Star Is Made

The storyline of A Face in the Crowd is rather straightforward and yet deceptively complex in its undertones:  on the surface, we experience a rather heavy-handed morality tale in the form of country bumpkin Larry “Lonesome” Rhodes, a relative nobody who is plucked from obscurity and made (and subsequently broken) through powers associated with television. Yet, it is only when we begin to connect the movie to a larger societal context that we begin to understand the ramifications of the film’s message; a careful examination of A Face in the Crowd reveals striking suspicions regarding the role that media plays (in this case, primarily television and cinema) in shaping American culture. Stars, director Elia Kazan argues, are not so much born as made, a distinction that portends dire consequences.

It is worth noting that Kazan’s film was made during a time when the concept of the “celebrity” was being renegotiated by America; for a large part of its history, the United States, firmly grounded in a Puritan work ethic, had honored heroes who exemplified ideals associated with a culture of production and was struggling to reconcile these notions in the presence of an environment whose emphasis was now focused on consumption. Although modern audiences might initially find this shift difficult to appreciate, one need only consider that the premium placed on production is so central to American ideology that it continues to linger today:  in a culture that exhibits rampant consumerism, we still value the “self-made man” and sell the myth of America as a place where anyone can achieve success through hard work. To abandon these ideas would necessitate that we reinterpret the very meaning of “America.” Thus, we become more sympathetic to the critics of the day who lamented the loss of the greatness of man and bristled against the notion that fame or celebrity could be manufactured—such a system would only result in individuals who were lacking and unworthy of their status (Gamson, 1994; Benjamin, 1973).

Such is the case, it seems, with Larry Rhodes, who is discovered by roving reporter Marcia Jeffries in an Arkansas jail. Although it cannot be denied that Rhodes has some modicum of talent and a certain charisma that comes from being unafraid to speak one’s mind, Marcia ushers Rhodes onto the path of greatness by dubbing him “Lonesome” and thus creates a character that transforms Rhodes from a despondent drunk to a winsome drifter. This scene—the first major one in the movie—thus introduces the important notion that those involved in the media can be implicitly involved in the manipulation of the information that travels over the airwaves. Subtly adding to the insidious nature of the media, A Face in the Crowd portrays Marcia as a character who seems likable enough, but also a person who is, in a way, exploiting the people in jail as she rushes in with her tape recorder, intent on prying the stories from the characters she finds (or creates!), and does not exhibit much concern in truly understanding why these men are imprisoned in the first place. Taken to an extreme, we later come across the character of The General, who further perverts the connection between media and power as he conspires with Lonesome to remake the image of Senator Worthington Fuller as the congressman runs for President.

Yet, as Lonesome Rhodes grows in his role as a media personality, he quickly demonstrates that the power to manipulate does not lie solely with those who sit behind the cameras. In Memphis, Rhodes incites a riot against the Luffler mattress company and also solicits donations in order to help a Black woman rebuild her house. In light of this, we can see that while Kazan focuses on the negative implications of television and celebrity, the relative good or bad that comes from these actions is not necessarily the point—instead, the one constant in all of the depicted scenarios is a public that is manipulated into performing actions on behalf of others. Although the characters of Lonesome and The General are vilified throughout the film, it is the masses for whom Kazan demonstrates true disdain.

 

Extraordinary Popular Delusions

Perhaps nowhere is this contempt more apparent than at the end of the film where, in an attempt to offer a small moment of solace to Marcia after her unmasking of Lonesome, writer Mel Miller notes, “We get wise to them, that’s our strength” (Kazan, 1957). And Miller is not wrong:  Western tradition has long recognized the correlation between knowledge and power, and Miller’s assertion touches upon the revelatory clout inherent in the realignment of perception and reality as noted by public relations guru Howard Bragman (2008). A more critical examination of the film’s closing scene, however, raises an important question:  Who is Miller’s “we”? Although one might be tempted to read this line as indicative of an egalitarian philosophical view, it is important to note that the only two characters in the shot represent the film’s arguably upper-middle class, and pointedly Eastern-educated, elite—nowhere to be seen are representatives of the small Arkansas town from the film’s opening or denizens of Memphis, both of whom serve to characterize the majority of Lonesome’s devoted viewers.[3] In fact, if we take time to reflect upon the movie, we realize that the majority of the audience was only alerted to Lonesome’s dual nature after Marcia flipped a control room switch and revealed the underlying deterioration; the masses oscillated from one position to the next without understanding how or why and once again adopted a passive stance in their relationship with media. Moreover, as Courtney Maloney points out, Kazan’s depiction of the agency of the masses is actually limited in scope:  despite a montage of audience members vehemently phoning in, sponsors are simultaneously shown to be acting independently as they withdraw their association with Lonesome (1999). Further, the subtext of the scene distances the rational decision-making of the truly powerful from the impassioned beseeching of the masses, likening the power of the latter to that of a mob. Knowledge and its associated authority, clearly, are afforded to a select group.

This idea, that the world can be divided between those who “get wise” and those who do not, serves to develop a rather sharp classist criticism against the medium of television and those who would watch it:  moviegoers, by virtue of witnessing Kazan’s work, find themselves elevated in status and privy to “the man behind the curtain” (to borrow a phrase). In contrast, the malleable masses were considered to be pacified and placated by idealistic portrayals of life in the 1950s in the form of television programs like Leave It to Beaver, The Donna Reed Show, and The Adventures of Ozzie and Harriet. Clearly, Kazan creates a dichotomy imbued with a value judgment descended from the thoughts of prominent thinkers in the Frankfurt School who, as far as aesthetics were concerned, preferred the high culture of cinema to the conformity and manipulated tastes of television (Horkheimer & Adorno, 2002; Adorno, 1985; Quart, 1989). This distinction between high and low culture would be a crucial supporting idea for critics as a prominent fear of mass culture was that it portended a collapse between concepts (e.g., fame, celebrity, or intellectual value) of objectively different quality, essentially rendering all manifestations the same and therefore all equally mundane (Boorstin, 1962; Hoberman, 2008b; Kierkegaard, 1962).  Even worse for critics, perhaps, was the perception of the masses’ refusal to grow out of its immature interests, a behavior that was characterized as both childlike and stubborn (Adorno, 1985).

And the fears of such theorists, all of whom were reacting to recent and rapid advances in broadcast technology, were not unfounded. Consider, for example, that radio had been popularized a scant fifty years prior and had vastly altered critics’ understanding of media’s potential impact, creating a precedent as it proliferated across the country and began to develop a platform for solidarity and nationalism. Yet, while the effects of radio were decidedly pro-social, due in part to its propagation of orchestral music and transmission of fireside chats, television was viewed as a corrosive force on society that spurred on the destruction of culture instead of enriching it.[4] For the critics of the Frankfurt School, television was indicative of an entrenched sentiment that regarded mass-produced culture as formulaic and perfectly suitable for a generation of passive consumers who sat enraptured in front of the glowing set. Associating the potential dissemination of propagandist ideology with television as a form of mass broadcast, cultural theorists evoked notions of totalitarian regimes akin to Hitler and Stalin in an effort to illustrate the potential subjugation of individual thought (Mattson, 2003). These simmering fears, aggrandized by their concurrence with the rising threat of Communism and collectivist cultures, found fertile soil in the already present anxiety-ridden ethos of the United States during the 1950s.

 

It’s the End of the World as We Know It (And I Feel Fine)

Notably, however, the fears associated with the masses have not been limited to one particular decade in American history:  across cultures and times, we can witness examples akin to tulip mania where unruly mobs exhibited relatively irrational behavior. Given the recurring nature of this phenomenon, which receives additional credence from psychological studies exploring groupthink and conformity (Janis, 1972; Asch, 1956), we might choose to examine how, if at all, the cultural critiques of the 1950s apply to contemporary society.

Recast, the criticisms of mass culture presumably resonate today in a context where popular culture holds sway over a generally uncritical public; we might convincingly argue that media saturation has served to develop a modern society in which celebrities run wild while evidencing sexual exploits like badges of honor, traditional communities have collapsed, and the proverbial apocalypse appears closer than ever. Moreover, having lost sight of our moral center while further solidifying our position as a culture of consumption since the 1950s, the masses have repeatedly demonstrated their willingness to flash a credit card in response to advertising campaigns and to purchase unnecessary goods hawked by celebrity spokespeople, a process that demonstrates a marked fixation on appearance and the image, reminiscent of critiques drawn from A Face in the Crowd (Hoberman, 2008a; Ecksel, 2008). Primarily concerned with the melding of politics, news, and entertainment, which harkens back to Kierkegaard-inspired critiques of mass culture, current critics charge that the public has at long last become what we most feared:  a mindless audience with sworn allegiances born out of fealty to the almighty image (Hoberman, 2008a).

Arguably the most striking (or memorable) recent expression of image, and the subsequent commingling between politics and entertainment, centered around Sarah Palin’s campaign for office in 2008. Indeed, much of the discussion regarding Palin centered around her image and colloquialisms rather than focusing solely on her abilities.[5] Throughout her run, Palin positioned herself as an everyman figure, summoning figures such as “Joe Six-Pack” and employing terms such as “hockey mom” in order to convey her relatability to her constituents.[6] In a piece on then-Vice-Presidential candidate Sarah Palin, columnist Jon Meacham questions this practice by writing:  “Do we want leaders who are everyday folks, or do we want leaders who understand everyday folks?” (2008). Palin, it seemed to Meacham, represented much more of the former than the latter; this position then leads to the important suggestion that Palin was placed on the political bill in order to connect with voters (2008). Suddenly, a parallel between Palin and Lonesome Rhodes from A Face in the Crowd becomes almost self-evident.

At our most cynical, we could argue that Palin is a Lonesome-type figure, cleverly manipulating her image in order to connect with the disenfranchised and disenchanted. More realistically, however, we might consider how Palin could understand her strength in terms of her relatability instead of her political acumen; she swims against the current as a candidate of the people (in perhaps the truest sense of the term) and provides hope that she will represent the voice of the common man, in the process challenging the status quo in a government that has seemingly lost touch with its base. In some ways, this argument has continued to resonate in post-election actions that demonstrate increasing support of the Tea Party movement.

However, regardless of our personal political stances, the larger pertinent issue raised by A Face in the Crowd is the continued existence of an audience whose decision-making process remains heavily influenced by image—we actually need to exert effort in order to extract our opinion of Sarah Palin the politician from the overall persona of Sarah Palin. Author Mark Rowlands argues that a focus on image—and the reliance on the underlying ethereal quality described by Daniel Boorstin as being “well known for [one’s] well-knownness” (Boorstin, 1962, p. 221)—although admittedly powerful, is ultimately damning, as the public’s inability to distinguish between items of quality leads it to focus on the wrong questions (and, perhaps worse, to not even realize that the wrong questions are being asked) in ways that have very real consequences. Extrapolating from Rowlands, we might argue that, as a culture obsessed with image and reputation, we have, in some ways, forgotten how to judge the things that really matter because we have lost a sense of what our standards should be.

 

Ever the Same?

So while the criticisms of the Frankfurt School might appear to hold true today, we also need to realize that modern audiences exist in a world that is, in some ways, starkly different from that of the 1950s. To be sure, the mainstream media continues to exist in a slightly expanded form, but new commentary on the state of American culture must account for the myriad ways in which current audiences interact with the world around them. For instance, work published after Theodor Adorno’s time has argued against the passive nature of audiences, recognizing the agency of individual actors (Mattson, 2003; Shudson, 1984).[7] Moreover, the new activity on the part of audiences has done much to commingle the once distinctly separate areas of high and low culture in a process that would have likely confounded members of the Frankfurt School. The current cultural landscape encompasses remix efforts such as Auto-Tune the News along with displays of street art in museum galleries; projects once firmly rooted in folk or pop art have transcended definitional boundaries to become more accepted—and even valued—in the lives of all citizens. While Adorno might be tempted to cite this as evidence of high culture’s debasement, we might instead argue that these new manifestations have challenged the long-held elitism surrounding the relative worth of particular forms of art.

Additionally, examples like Auto-Tune the News suggest that advances in technology have also had a large impact on the cultural landscape of America over the past half century, with exponential growth occurring after the widespread deployment of the Internet and the resulting World Wide Web. While the Internet certainly provided increased access to information, it also created the scaffolding for social media products that allowed new modes of participation for users. Viewed in the context of image, technology has helped to construct a world in which reputations are made and broken in an instant and we have more information circulating in the system than ever before; the appearance of technology, then, has not only increased the velocity of the system but has also amplified it.

Although the media often showcases deleterious qualities of the masses’ relationship with these processes (the suicide of a student at Rutgers University being a recent and poignant example), we are not often exposed to the incredible pro-social benefits of a platform like Twitter or Facebook. While we might be tempted to associate such pursuits with online predators (a valid concern, to be sure) or, at best, to view them as unproductive in regard to civic engagement (Gladwell, 2010), to do so would be to ignore the powerfully positive uses of this technology (Burnett, 2010; Lehrer, 2010; Johnston, 2010). Indeed, we need only look at a newer generation of activist groups who have built upon Howard Rheingold’s concept of “smart mobs” in order to leverage online technologies to their benefit (2002)—a recent example can be found in the efforts of groups like The Harry Potter Alliance, Invisible Children, and the Kristin Brooks Hope Center to win money in the Chase Community Giving competition (Business Wire, 2010). Clearly, if the masses can self-organize and contribute to society, the critiques of mass culture as nothing more than passive receptors of media messages need to be revised.

 

Reconsidering the Masses

If we accept the argument that audiences can play an active part in their relationship with media, we then need to look for a framework that begins to address media’s role in individuals’ lives and examines the motivations and intentions that underlie media consumption. Although we might still find that media is a corrosive force in society, we must also realize that, while potentially exploiting an existing flaw, it does not necessarily create the initial problem (MacGregor, 2000).

A fundamental building block in the understanding of media’s potential impact is the increased propensity for individuals (particularly youth) to focus on external indicators of self-worth, with the current cultural climate of consumerism causing individuals to focus on their inadequacies as they begin to concentrate on what they do not have (e.g., physical features, talent, clothes, etc.) as opposed to their strengths. Simultaneously both an exacerbation of this problem and an entity proffering solutions, constructs like advertising provide an easy way for youth to compensate for their feelings of anxiety by instilling brands as a substitute for value:  the right label can confer a superficial layer of prestige and esteem upon individuals, which can act as a temporary shield against criticism and self-doubt. In essence, one might argue that if people aren’t good at anything, they can still be associated with the right brands and be okay. Although we might be tempted to blame advertising for this situation, it actually merely serves to exploit our general unease about our relationship to the world, a process also reminiscent of narcissism (Lasch, 1979).

Historian Christopher Lasch goes on to argue that, once anchored by institutions such as religion, we have become generally disconnected from our traditional anchors and thus have come to substitute media messages and morality tales for actual ethical and spiritual education (1979). The overlapping role of religion and advertising is noted by James Twitchell, who contends that, “Like religion, which has little to do with the actual delivery of salvation in the next world but everything to do with the ordering of life in this one, commercial speech has little to do with material objects per se but everything to do with how we perceive them” (1996, 110). Thus, we might classify religion, advertising, entertainment, and celebrity as examples of belief systems (i.e., a certain way of seeing the world complete with their own set of values) and use these paradigms to begin to understand their respective (and ultimately somewhat similar!) effects on the masses.

 

A Higher Power

Ideologies such as those found in popular culture, religion, or advertising tell believers, in their own ways, what is (and is not) important in society, something that Twitchell refers to as “magic” (1996, 29). Each manifestation also professes a particular point of view and attempts to integrate itself into everyday life, drawing on our desire to become part of something (e.g., an idea, a concept, or a movement) that is larger than ourselves. Perhaps most importantly, the forces of advertising, entertainment, religion, and art (as associated with high/pop/folk culture) play on this desire in order to allow humans to give their lives meaning and worth in terms of the external:  God, works of art, and name brands all serve as tools of classification. While cynics might note that this stance bears some similarities to the carnival sideshows of P. T. Barnum—it does not matter what is behind the curtain as long as there is a line out front (Gamson, 1994; Lasch, 1979)—the terms survive because they continue to speak to a deep desire for structure; the myth of advertising works for the same reasons that we believe in high art, higher education, and higher powers. Twitchell supports this idea by mentioning that “the real force of [the culture of advertising] is felt where we least expect it:  in our nervous system, in our shared myths, in our concepts of self, and in our marking of time” (1996, 124). Constructs like advertising or entertainment, it seems, not only allow us to assemble a framework through which we understand our world but also continually inform us about who we are (or who we should be) as a collection of narratives that serves to influence the greater perceptions of individuals in a manner reminiscent of the role of television in Cultivation Theory (Gerbner & Gross, 1976). The process of ordering and imbuing value ultimately demonstrates how overarching ideologies can not only create culture but also act to shape it, a process evidenced by the ability of the aforementioned concepts to consume and/or reference previously shared cultural knowledge while simultaneously contributing to the cultural milieu.

Given our reconsideration of mid-century cultural critiques, it follows that we should necessarily reevaluate proposed solutions to the adverse issues present within mass culture. We recall the advice of A Face in the Crowd’s Mel Miller (i.e., “We get wise to them”) and reject its elitist overtones while remaining mindful of its core belief. We recognize that priding ourselves on being smart enough to see through the illusions present in mass culture, while pitying those who have yet to understand how they are being herded like so many sheep, makes us guilty of the narcissism we once ascribed to the masses—and perhaps even more dangerous than the uneducated because we are convinced that we know better. We see that aspects of mass culture address deeply embedded desires and that our best hope for improving culture is to satisfy these needs while educating audiences so that they can better understand how and why media affects them. Our job as critics is to encourage critical thinking on the part of audiences, dissecting media and presenting it to individuals so that they can make informed choices about their consumption patterns; our challenge is to convincingly demonstrate that engagement with media is a crucial and fundamental part of the process. If we subscribe to these principles, we can preserve the masses’ autonomy and not merely replace one dominant ideology with another.


[1] It should be noted, however, that the comics of this time—those that belong to the end of the Golden Age and beginning of the Silver Age—also provide an additional understanding of the ways in which Americans indirectly wrestled with their fears.

[2] For a more exhaustive list of movies that support this point, see Wolfe, 2002.

[3] Let us also not forget the fact that Lonesome exhibits a rather patronizing attitude toward his audience in his later career, instituting the Cracker Barrel show with its manufactured country lackeys (Yates, 1974). In contrast to his first stint in Memphis, Lonesome has begun to embrace his country image as a means (if an inauthentic one) to connect with his audience, a point of contention to which we will return.

[4] Curiously, however, we see that this relationship between presidential addresses (like the aforementioned fireside chats) and mass media did not elicit notable complaints from critics who were generally wary of the merging of politics and entertainment (Quart, 1989; Benjamin, 1973). Although a larger discussion is warranted regarding the subtleties of this distinction, I would suggest that part of the differentiation stems from a high-low culture dichotomy. Although critics linked the negative presence of television with corporate advertising, James Twitchell suggests that there has always been a rather intimate relationship between arts and commerce, most saliently exhibited by wealthy citizens or entities who act as patrons (Twitchell, 1996).

[5] Certainly being female did not help, as American women are typically subject to a “halo effect” wherein their attractiveness (i.e., appearance) affects how they are perceived (Kaplan, 1978).

[6] Palin has continued the trend, currently employing the term “mama grizzlies,” a call-to-arms that hopes to rally the willingness of women to fight in order to protect things that they believe in. Interestingly, a term that reaffirms the traditional role of women as nurturing matriarchs has been linked to feminist movements, a move that seems to confuse the empowerment of women with a socially conservative construct of their role in American life (Dannenfelser, 2010).

[7] We can also see much work conducted in the realm of fan studies that supports the practice of subversive readings or “textual poaching,” a term coined by Henry Jenkins (1992), in order to discuss contemporary methods of meaning making and resistance by fans.


The Matrix

Lawrence Lessig’s words continue to haunt me. As an avid fan of The Matrix, I found that Lessig’s thoughts on the manipulation of code caused me to flash back to a particular moment near the end of the movie. For the majority of the film, Neo, the protagonist, has progressed in his training but is constrained by the fact that the Matrix is based on rules that can be bent, but seldom broken. However, after a resurrection—conferred by true love’s kiss!—Neo performs a physically impossible feat, demonstrating a newfound mastery over his world. Fittingly, our first look at the world through Neo’s eyes after this event displays the virtual environment as code; to Neo, the world is nothing more than a string of symbols. One might argue that Neo’s rebirth has enabled him to see things as they really are (a sort of ultimate payoff of the red pill) and that it is his understanding of the Matrix’s governing processes that affords him his powers. For a particular generation, The Matrix provides a highly visible example of Lessig’s position that code dictates law. Neo’s epiphany, visually laid out for audiences, allows observers to grasp Lessig’s theories—even if they might not be able to articulate what they have witnessed.

The implications of code have not changed since the movie’s release. No longer the stuff of science fiction, code is affecting our real lives through seemingly mundane conduits like traffic signal regulation and the (perhaps) more surprising selling of online real estate; I still remember being fascinated by news of code from Ultima Online being auctioned in a live marketplace—here were parties that were willing to trade resources for (arguably useless) bits!

Currently, we continue to grapple with the navigation of these virtual spaces, a task made difficult by the fact that many do not understand the rules that govern them. Augmented reality will further complicate this process as additional layers of code are overlaid upon our physical reality; wearable computing might change the ways that we deal with access, permission, and restrictions as we attempt to balance code and natural laws.


Diffusion of Innovation

In Diffusion of Innovation, Everett Rogers discusses the concept of “diffusion” as a subset of communication in order to highlight how communities acquire knowledge. Rogers’s opening chapter illustrates various strategies for this process with anecdotes, offering readers a vivid reference point while hinting at the complex array of factors that can affect the spread of ideas.

Undoubtedly building upon foundational theory created by Rogers, figures such as Richard Dawkins and Malcolm Gladwell have ruminated on the spread of messages. Using the preexisting schema of Evolutionary Biology, Dawkins likened information to genes (in the process, creating the term “memes”) in order to describe his theories regarding transmission and replication. Dawkins essentially argued that the fittest (in an evolutionary sense) ideas would go on to propagate in society, mirroring the activity of organisms. Gladwell, on the other hand, has incorporated Rogers’s model of adopters into his book The Tipping Point, describing the stages of diffusion in terms of people. Although Gladwell also goes on to describe individuals’ roles as agents of change, he continues to work under the philosophical framework provided by Rogers.
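As an aside, the S-shaped adoption curve that Rogers describes (and that Gladwell popularizes as a “tipping point”) has a well-known mathematical cousin in the Bass diffusion model, which splits adoption into external influence (media, akin to Rogers’s innovators) and internal influence (word of mouth, akin to imitators). The sketch below is only illustrative; the parameter values are assumptions, not empirical estimates:

```python
# A minimal sketch of the Bass diffusion model, a formalization of the
# S-shaped adoption curve Rogers describes.  The parameter values below
# (p, q, market_size) are illustrative assumptions, not measured data.

def bass_diffusion(p=0.03, q=0.38, market_size=1000, steps=50):
    """Return cumulative adopters per period under a discrete Bass model.

    p: coefficient of innovation (external influence, e.g., mass media)
    q: coefficient of imitation (internal influence, word of mouth)
    """
    adopters = [0.0]
    for _ in range(steps):
        n = adopters[-1]
        # New adoptions come from innovators (p) and imitators (q * n/M),
        # each drawing from the remaining pool of non-adopters (M - n).
        new = (p + q * n / market_size) * (market_size - n)
        adopters.append(n + new)
    return adopters

curve = bass_diffusion()
```

With these assumed coefficients, adoption starts slowly among innovators, accelerates as word of mouth compounds, and levels off as the pool of potential adopters empties: the familiar S-curve.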

Daniel Czitrom’s Media and the American Mind addresses communication in a different manner, referencing media theorist Marshall McLuhan in its subtitle. McLuhan famously introduced the notion that “the medium is the message,” referring to the concept that the mode of communication has an inextricable relation to the content being provided. Although the phrase was coined in the 1960s, McLuhan’s thinking can still be applied as modern culture struggles to integrate the increased number of available media channels (e.g., traditional broadcast, podcasts, blogs and vlogs, etc.) afforded by advances in technology. Additionally, transmedia presentations of content (e.g., webisodes for Battlestar Galactica and Heroes or the narrative of The Matrix) challenge viewers and producers to reconsider established notions of media’s impact.


The Big Picture

Movies, generally speaking, find themselves at the crossroads of fiction and reality, representing an artifice that, while occasionally fanciful or escapist, grounds itself in universal truths. Even documentaries, which may record life as accurately as possible, engage in time displacement, making the transpired events seem somewhat otherworldly. Like many other forms of entertainment media, films struggle to balance their dual nature, occasionally pitting artistic freedom against economic constraints. With few exceptions, one might see an expectation for movies to achieve success on both counts:  the creative forces (e.g., director, writer, and stars) would appear to have a stake in the imaginative aspects of filmmaking whereas businesses (e.g., studios, producers, and theater owners) focus on the economic components. Truthfully, however, the interests of the involved parties do not always fall along clear-cut lines:  as we shall see, entities can fluidly switch positions and allegiances based on immediate and long-term goals.

The director/studio/producer relationship almost defines the classic battle between creative and economic forces in the film industry. Without much effort, ordinary people can likely picture the tension that exists on a set as both the director and the studio/producer fight to protect their interests in an enterprise that puts millions of dollars at stake. Despite speaking the same language, these two entities might experience difficulty communicating their notes to one another in a productive manner. In order to  maintain artistic integrity, a director might demand reshoots while a line producer could simultaneously express concern regarding overages—given their individual perspectives, both people have a valid point and compromises must be reached in order for the project to continue. Perhaps, in exchange for a particular shot or sequence, a director will have to excise an entire scene down the road.

On one level, the aforementioned skirmish represents a conflict over income:  the line producer, beholden to a studio whose profits suffer as overages increase, places increased importance on the bottom line, whereas the director sees an endeavor that might be larger in scope. The director might recommend changes in order to increase character development, add clarification, or to add visual appeal in an attempt to make a better product. However, in the eyes of the producer, the changes proposed by the director might seem unnecessary as they do not necessarily increase the revenue (i.e., gross income) potential or profitability (i.e., ability to generate net income) of the movie. The producer might feel a strong sense of loyalty to a studio as the studio might pay for a portion of the producer’s expenses; studios, in turn, view this lost money as an investment in scripts/projects, possibly gaining the right of first refusal in return (Epstein, The Big Picture: Money and Power in Hollywood, 2006). Thus, in accordance with their interests, the producer/studio might feel that the suggested alterations represent an indulgence. This relatively simple scenario presents a rather clear conflict for some of the various parties involved in films, but the clash between creative and economic forces can become increasingly complicated when examining an issue such as the Motion Picture Association of America (MPAA) ratings.

Originally established as a trade organization, and thus exempt from anti-trust laws, the MPAA represented (and continues to represent) the interests of the major studios in various political arenas, including content regulation and copyright infringement. In its early years the MPAA, then called the Motion Picture Producers and Distributors of America, instituted and enforced the Hays code as a form of self-regulation in order to protect the industry from censorship by the government. Over time, these decency guidelines evolved into the now familiar ratings (e.g., G, PG, PG-13, R, etc.) used by virtually all mainstream domestic productions.

Ostensibly, the ratings provided by the MPAA function to notify individuals of a movie’s content, thereby aiding adults in deciding which films are appropriate for their children (or themselves) to view. In and of itself, there seems to be nothing wrong with this model—allowing parents to make informed choices regarding their children’s consumption habits appears perfectly legitimate. Were this purely a matter of content, a relatively minor conflict might arise between directors/stars (who might argue the social or cultural importance of the film) and the public. However, economic ramifications cloud the issue a bit as ratings can determine a myriad of things including marketing strategy, ticket sales, and distribution. Suddenly a host of entities—ranging from studios to distributors, retailers, and theater owners—has an interest in a film’s rating.

Participants whose existence depends on the commercial success of movies (distributors, studios, and theaters) tend to shy away from films rated NC-17 as these projects present some significant additional challenges. On top of a restricted audience base and extra staffing needs (to prevent theater jumpers), NC-17 (formerly X) movies suffer a lingering stigma of licentiousness that arose in the 1970s. Originally, a rating of “X” merely denoted material not appropriate for children (as NC-17 does today) but the rating did not have the same connotation that it currently does, with critically acclaimed hits Midnight Cowboy and A Clockwork Orange receiving the designation. However, the failure of the MPAA to copyright the label led to mislabeling of movies in order to capitalize on a sense of titillation or intrigue (Sandler, 2001). Significantly, pornographic films began to employ the rating, which caused the public to form a rather negative association with the term; the rating has since been unable to shake the stigma, causing current films to suffer from misperception.

This changing attitude toward X-rated movies had disastrous consequences. Richard Maltby argues that the move from the Production Code Administration to the Code and Rating Administration in 1968 (and resulting classification of movies) represented a shift in profit making for studios:  under the old system, movies appealed to the broadest audience possible in order to reap maximum profit whereas the new ratings system created increasingly smaller customer bases (1995). Studio executives and theater owners began to notice the economic downturn soon after the institution of the X rating (Aubrey, 1971; Arkoff, 1972; Rugoff, 1971), branding the classification as trouble.

Additionally, distributors’ current dependence on major retail and rental outlets only serves to compound the problem. With a large portion of their revenue coming from stores such as Wal-Mart and Blockbuster—who do not sell/rent NC-17 content—studios and distributors must focus on productions that retailers deem acceptable (Epstein, The Big Picture: Money and Power in Hollywood, 2006). In short, modern investors have come to learn that NC-17-rated movies are generally neither profitable nor reliable revenue streams.

Initially, directors, writers, and celebrity stars might not overly concern themselves with a film’s rating—they are, after all, engaging in a process whose purpose is to occasionally push the public’s boundaries and force us to confront hidden parts of ourselves. Creative entities in cinema can view film as a medium of expression, or a language, that helps them to communicate their point of view to an audience; powerful movies have the ability to make us feel long-forgotten emotions or to change the way that we see the world. However, in a trickle-down effect, an X rating typically causes the creative forces of a movie to suffer pressure from studios or producers. To avoid this sort of confrontation, production companies might contract their directors to deliver a movie with a particular rating (Finn, 2008; Weinberg, 2005), which might mean that a director must restructure a movie if the film receives an initial rating of NC-17. However, even if a director is not legally bound, he or she might engage in a form of self-censorship in order to prevent an unfavorable rating. Stars might also refrain from fully exploring their characters in an effort to make them more palatable to mass audiences. This sort of pressure suggests a stifling of artistic license and expression but also reflects the idea that movies represent commodities that exist in a context—material deemed (or merely perceived to be!) “offensive” or “inappropriate” is generally not economically viable. Directors and stars must often compromise their visions in order to get their movies produced, or risk the project stalling out and failing completely.

Further examination of the issue reveals that conflict might arise as a result of involved parties’ differing measures of a film’s success. Entities like studios, theaters, and distribution companies derive their income (and, therefore, their existence) from sales of tickets, merchandise, licenses, or concessions—all of which depend, to a greater or lesser extent, on the volume of people who pay to see the movie in question. Movies like Transformers, for example, represent the ideal movie for these groups in a number of ways:  targeting teenage males, the PG-13 picture contains explosions, (relatively bloodless) violence, and brief sexual situations tied in with action sequences involving flashy cars. Movie studios and distribution companies rely on ticket sales as they derive their income as percentages of box office receipts, while theaters also benefit from young people spending their discretionary income at the concession stands. Indeed, the only thing that might make the film more financially successful would be cutting down the run time from 144 to 128 minutes. Accordingly, it makes sense in this context that these particular entities find themselves obsessed with the raw economic ramifications of films; maximizing both revenue and profitability is important to these corporations, as evidenced by Epstein’s thoughts regarding “marketability” and “playability” (The Big Picture: Money and Power in Hollywood, 2006). While the public might think of revenue as the more important factor (e.g., regular reporting of box office receipts from opening weekends), studio executives also find themselves concerned with the notion of profitability; business people want to develop a brand that can reliably provide a stream of income into the future (Hade, 2001).
As with most businesses, revenue does not necessarily provide the information a company needs to judge its financial health—a better measure might be, for example, return on investment or profitability (Ravid, 1999). Sherry Lansing, former CEO of Paramount, explicitly stated this idea in 2002: “I’m not interested in box office and I never have been; I’m interested in profitability” (Epstein, The Big Picture: Money and Power in Hollywood, 2006). This attitude reflects how individuals, as opposed to corporations, might be more concerned with issues of profitability; moreover, these notions of profitability might not limit themselves to a single film but instead reach across the lifespan of a company or career.

A movie’s value, then, might be evaluated in a wider variety of ways:  a film could establish well-defined franchise characters for a studio (creating or developing intellectual property rights), or raise awareness of a star’s talent. Although studios, stars, and directors cannot entirely divorce themselves from the day-to-day financial success of their movies, they may find that a project might pay dividends that are not immediately apparent. The 2008 movie, Frozen River, illustrates this concept beautifully with Melissa Leo receiving an Oscar nomination for Best Actress despite the film’s relatively limited revenue capture (Box Office Mojo, 2009). Although somewhat of an anomaly, this picture undoubtedly raised the profile of its director and lead actress (even if only temporarily)—something that could be even more valuable to these individuals than the immediate promise of money. Taking this vantage point into account, one can see why a director or actor might consider performance or artistic integrity more important than a movie’s financial numbers.

Studios might also choose to place value on a film’s non-economic components in order to build relationships with particular stars or directors or to increase the eventual home distribution profitability of movies by producing “Oscar-bait.” Typically geared toward adult audiences (e.g., the voters in the Academy of Motion Picture Arts and Sciences) and rated R, films that debut around the holiday season may contain heightened expressions of story, drama, or emotion. Although many might consider these movies as high points in the year’s achievements, studios may, in fact, be overproducing R-rated movies from an economic standpoint. A study conducted by de Vany and Walls in 2002 suggested that movies with an R rating brought in the lowest profits (as compared to movies with less restrictive ratings) and common sense would imply that studios must be making a choice not to maximize their profits in dollars (de Vany & Walls, 2002). In some cases, for these participants, increased prestige is a form of profitability.

It should be noted that stars and directors are not unaware of the salient economic forces at play, however, as a large part of contract negotiations can revolve around fees and perks (Epstein, Hollywood’s Morality Play, 2006; Epstein, The Big Picture: Money and Power in Hollywood, 2006). Stars or directors with celebrity status have the negotiating power to demand significant compensation for their efforts, often resulting in smaller gains for other participants—such as producers—in the process. Victor Goldberg, for example, argues that an otherwise successful movie may fail to yield a net profit if an individual benefits from the gross profits (1997). Although major stars or directors will occasionally make monetary concessions in order to proceed with a project (e.g., Ocean’s Eleven) or to participate in an amusing endeavor (e.g., a cameo appearance), contracts generally signify instances where these individuals are more concerned with revenue than profitability.

The previous scenarios suggest that, in order to get a movie made, participants must make some sort of concession in order to pacify the opposition. Intuitively, this business model makes sense to most people; compromise is an essential part of functional working relationships. However, although the relationship between stakeholders in films might occasionally be antagonistic in nature, participants can also work together symbiotically to afford mutual benefits.

In recent years, new socially responsible companies have sprung up, embracing a different strategy; to be sure, these institutions are still interested in profits, but they also incorporate other goals into their mission statements. This new paradigm allows creative forces to express their voices, and actors to challenge themselves in meaningful roles, while the production company raises relatively significant income for involved parties.

Participant Media (formerly Participant Productions) aims to develop films that are not only commercially viable, but also contain a social message (Pinsker, 2004). Although the name of the company might sound unfamiliar to many, Participant Media has been involved in a wide variety of well-known projects over the past five years, including films such as:  Fast Food Nation; Good Night, and Good Luck; An Inconvenient Truth; Charlie Wilson’s War; The Kite Runner; Murderball; The Soloist; North Country; and Syriana. Even people not familiar with the inner workings of Hollywood should be able to recognize a few projects on the aforementioned list; clearly this company has established a presence in the industry. Interestingly, many of the movies also take a step further, attempting to capture the spirit of activism generated by the projects themselves. For example, audience members or interested parties can visit a specialized webpage on the company’s website (http://www.participantmedia.com/social_action.php) in order to help causes directly related to the content of the production company’s films. The website allows individuals to connect to the issues at hand and provides them with simple actions (i.e., “next steps”) to take in order to remain involved. In addition to performing social good, this new format of interactive media also more saliently recognizes that movies exist as part of a community—and the films of Participant Media are tapping into the power inherent in that population.

Participant Media has also attracted major stars (e.g., Charlize Theron, Tom Hanks, and George Clooney) to its projects, lending the production company’s philosophy some influence. Star power also allows actors and directors to engage in projects or forward causes that are important to them, making the arrangement mutually beneficial. Additionally, all involved parties might benefit from the goodwill associated with the project, making it lucrative for theaters and distribution companies to connect with these productions. In this case, the goals of many participants coincide with one another with success or profitability being measured in dollars donated or amount of awareness raised. Revenue is certainly a factor, as it would be with any business, but income might be treated with slightly less importance.

Films may also contain less obvious political messages, as in the case of Batman, which subtly connected Time magazine with damsel-in-distress Vicki Vale in order to promote the brand’s virtue. At the time, the studio responsible for the movie, Warner Bros., was in the process of merging with Time (to form Time Warner), and the positive portrayal of the publication would help to increase Warner Bros.’ eventual profitability. Examples such as these highlight another way that movies can possess value for their stakeholders; this time, it is the studio that benefits from movies as a form of expression (Christensen, 2002).

Further examination of the issues of revenue and profitability has shown that these two interrelated forces resemble quantitative and qualitative (respectively) aspects of a success or achievement in the film industry. For movies that exist within a system or business model—thus excluding independent art films or experimental media—revenue always resonates as a consideration but does not necessarily tell the whole story. The various permutations of a movie’s profitability (e.g., critical acclaim, establishment of brands, development of bridging ties to other key members within the Hollywood community) continue to affect participants long after an individual film’s accounts have settled. Although these two forces may occasionally conflict, stakeholders ultimately attempt to maximize both in a manner consistent with their priorities.


Dancing Dirty and Having the Time of My Life

I’m not quite sure who created the breakup routine, but the post-relationship dance inevitably includes a number called “Show Your Ex How Much Better You Are without Him or Her.” Often, there are no choreographers, flashy lights, or even costumes, but a careful audience will spy both parties concentrating intently on their next steps. Even before the dust settles, we feel the urge to go out of our way to show the one we once loved how we’ve moved on, how he or she meant nothing to us, and how hot we (and our current fling) are. We tell ourselves that we are fortunate for having gotten out of the relationship when we did.

But what happens when you’re not better off after all is said and done?

Recently, I had the chance to visit an ex and I went into the situation not expecting much—this would be the first time that we would have seen each other in three years—but a little part of me couldn’t help but be curious to see how my ex’s life had turned out.

So I sat there, in a place that was at once familiar and distant, thinking about how far we both had come. In the dim light of the room, I could close my eyes and feel things that were once mine; now, I couldn’t even bring myself to reach out to touch them. As a sigh escaped from my lips, punctuating the silence and filling the void, I suddenly realized that if we were competing I would have lost.

Don’t get me wrong, I’m certainly satisfied where my life is right now, but I can admit when I’ve been bested. Looking around, I felt something that I never expected to; I felt incredibly proud. Later, as I drove away from the house and back toward my life, I struggled to define my emotions but I couldn’t describe what I was feeling in any other way. In many senses, the things that I had seen demonstrated that my ex was happy, successful, and achieving what I had always secretly hoped for.

Cracks in the pavement created a steady hum beneath my car and I found clarity in the cold night air. I began to realize that, for us, the dust had indeed settled for we were both going about our own lives and all that was left was all that there ever was—not anger, jealousy, or unease—but simply love.

This past weekend made me realize that a large portion of my knowledge in the area of sexual health comes from my experiences. For many, information is helpful, but isn’t enough on its own—at some point you have to try things for yourself. Learning to negotiate situations, to feel empowered, to ask for what I want, and to plan ahead are all things that have been forged in heated and pressured situations. How do you react to a partner who wants you to have unsafe sex? What if you’re really into him or her? What if drugs get thrown into the mix? How do you learn to identify the indicators of rape so that you don’t become trapped? These are tests that you study for, hoping you’ll never have to find out if you pass or fail.

And, for the record, faltering every now and then isn’t the end of the world; making mistakes is also a valuable part of the learning process. The trick is to minimize the consequences and make sure that you learn from what you did wrong the first time.

So I guess it turns out that I am better off now, but not for the reasons that I originally thought. I wasn’t fortunate to have the relationship end, but to have been in the relationship in the first place. The relationship provided a safe space for me to grow into my own and I really couldn’t have asked for more.


Milk Money

 

It was a good burn. It had been a while since I had breathed in the salty air rolling off the beach. One foot on the pavement, I felt the sting creep in. I inhaled deeply, letting the balmy breeze permeate my lungs and fill my head.

Venice, if you have never been, is quite an amalgamation of stimuli—from the people, to the sounds, and the smells, there is always something to soak in. To be honest, I am completely out of place in a beach town filled with people who are not like me but the great thing about Venice is that nobody really cares. Here was a place where a number of worlds collided in one long strip of beach that, more often than not, contained a trace of pot incense.

Over the course of the day, I mentioned to Kim and Tiffany that I had rented a documentary about four students at the Harvey Milk High School contrasted with the creation of a “Hedwig and the Angry Inch” tribute album. Although seemingly a bit random, the proceeds from the album went to supporting the Hetrick-Martin Institute (which, at the time, ran the high school). The existence of the Harvey Milk High School is a bittersweet one for me:  on one hand I applaud the idea that we have set aside money for students to grow in a safe space but I also hate that we have to have such an institution in the first place. Why is it that we even need to create an area for GLBTQ youth? Can’t we get all of our teenagers to respect each other?

Watching the documentary, what struck me about these students was that they were much younger than I am, but also much stronger than I may ever be. These individuals were not yet out of high school but they had already walked down paths that I had yet to start. One girl ran away from home after being excommunicated from her church and established a life for herself independent of her family and, although happy, mentioned upon seeing her home that, “This is where I should have been.” A transsexual girl mentioned that she knew cutting was not a good thing but that it relieved the hurt inside and sometimes she felt as though it was all she had. How do you react to something like that? I didn’t even know this girl and I felt so much sorrow for her.

Without a doubt, the students in the Harvey Milk High School had suffered and as I reflected on their experiences, I began to wonder if there was any truth to the phrase “No pain, no gain.” American culture teaches us that we must work hard for our success and that the best victories are hard won. We often hear that we define ourselves in our weakest moments, or in the face of our mistakes. We know that growth in every sense involves some measure of discomfort—we never develop intellectually if we are not uncomfortable, if we aren’t challenged. Learning isn’t easy.

Maybe it’s only because we have suffered pain that we are able to connect to others? As we grow up we first look for a mate with no complications but then gradually shift to look for someone whose baggage goes with ours. And what is baggage if not an emotional scar? A place where we were cut and didn’t heal just right? Well, at least not yet? Or is love simply the experience of baring your biggest hurt to your partner? Surely that isn’t all there is to the emotion, but it’s hard to deny that, having done so, you love someone just a little bit more.

Pain, in all of its forms, defines who we are as people—embrace yours and learn to use it as a tool to shape who you want to be.


Sing for Absolution

“It’s been a long night,” I thought to myself as I dragged my tired body into bed. Part of me knew that I should take off my shoes at the very least, but a voice inside my head argued that the removal of footwear would require me to move from the now all-too-perfect spot on my pillow.

Bing!

“What is it now?” I glanced angrily at my cell. “Shut up and let me go to sleep!” Flipping over on my side, I hit the buttons on my phone to discover a Tweet from Jay Brannan announcing a new music video. “I’ll check it out tomorrow,” I thought, dropping the phone back onto the headboard and settling back into the sheets.

I’ve been a fan of Jay ever since I saw him in the movie Shortbus (which itself is rich with blog topics). Performing in such a film is certainly admirable, but the film also exposed me to Jay’s music, which I have been listening to for a couple of years now.

While many of the songs on Jay’s album, “Goddamned,” are enjoyable, one song in particular strikes me when I put on my sex blogger hat:  “Home.” While the track describes the certainly relatable experience of being a young person in a large city, it also contains the following lines, which are some of my favorites:

 

Why don’t the Gideons leave condoms in the drawer?

Bibles don’t save many people anymore.

 

Sure, there are many ways to argue this sentiment (declining condom usage is fodder for another article), but I do think that it’s an interesting point of view, although I’m admittedly biased because safer sex is much more my religion than Christianity or Judaism ever would be. Why do hotels leave Bibles for their patrons and not condoms? Is saving one’s immortal soul more important than potentially saving one’s mortal being? Is it practical to try to save both?

The interplay of religion and science has been around since the early stages of civilization and these forces are often pitted against one another (even if not in direct conflict). As Emily Dickinson, one of my favorite poets, once wrote:

 

Faith is a fine invention

For gentlemen who see;

But microscopes are prudent

In an emergency.

 

Although there has been some recent conflict between the two camps, I can’t help but believe that the goal of both schools of thought is the development of guidelines to keep their believers safe. In my world, the original role of religion was to keep its members safe (from the world and each other), healthy, and to encourage propagation of the species. Science-based sexual health education, too, I would argue, aims to do many of the same things. I think that both ideologies have things to offer and that it is incredibly presumptuous to think that one side has all of the answers, or the only answers.


Girls on Film

 

Although I occasionally make fun of him, I’m starting to understand how the little kid from The Sixth Sense (I don’t remember the character’s name either) felt—except, instead of dead people, I see sex. Okay, granted, it’s not quite the same, as I don’t have crazy dead ghosts yapping in my ear and making some future therapist a bunch of money. But, all the same, I can’t stop noticing things even though I might want to.

“Fox is messed up,” I lamented to my coworker Michael. “Every time a girl has sex on Dollhouse, bad things happen to her.”

I should back up a second and say that I wanted to be a fan of this show—I was checking the website for weeks in advance and I’ve seen every episode to date. Overall, it’s not bad (and there’s nothing else on during the timeslot), but it’s certainly not great and there is that pesky problem with the show’s conservative attitude toward sex.

On one hand, the show certainly isn’t afraid to show its characters in sexual situations (the majority of the episodes to date have featured a prominent group shower insert that has very little to do with anything) and the title sequence features a (half?) naked Eliza. I get that her lack of clothing is supposed to comment on her character, but she’s still naked.

Despite the saturation of sex in the show’s environment, Dollhouse visits grim consequences on women who engage in sexual activity. Where to start?

The main character, Echo, has sex with a pretty attractive, rich, and normal-seeming guy who, upon climax, begins to hunt her down. Literally. With a bow and arrow. (On a side note, can we talk about how this scenario plays into many women’s worst fear that a guy, once slept with, will turn into a monster? This situation is not quite as literal as Angel turning into a soulless vampire after he slept with Buffy, but why does this keep coming up in Joss’ shows?)

A secondary character is raped by her handler (I don’t have time to discuss the many issues at play here) and then has her memory erased. I can see the small value in making people forget some traumatic incident that happened to them (especially given the world of the show) but I can’t help but feel incredible sorrow for the character—she was raped and is never going to know that it happened. For the rest of her life, other people will know what truly happened to her, and how she was violated, and she will not; I can see the upside to this but I also see a huge downside.

Finally, we have a desperate neighbor who is doing everything she can to sleep with one of the male leads, finally does, and then is brutally attacked. If you’ve seen this past episode, you can argue that she takes charge and dispatches her assailant, but I would also mention that her civilian persona still has to deal with the aftermath.

In case you’re keeping score, half of the men who have had sex on the show have suffered no consequences and half have died (I strongly suspect that these male characters perished more because they were assholes who were trying to kill someone else than because they were slated to suffer consequences for engaging in sexual intercourse). The “good guys” who have had sex are doing just fine.

Angry yet?

I don’t have many qualms with the show overall and I don’t necessarily think that women shouldn’t face consequences for having sex—all I’m saying is that the depiction of sex should be more balanced (like Fox!). The danger, I feel, lies in our tendency to soak up these skewed external messages about sex subconsciously and to make them our own.