Thoughts from my study of Horror, Media, and Narrative

Archive for September, 2011

Off to See the World

One might expect the American version of a show called The Amazing Race (CBS, 2001–present) to be somewhat sensitive to ethnicity, given the potential misreading of its title. Sadly, however, the show (currently in its 19th season) continues to exhibit signs of ethnocentrism as it shuttles contestants around the globe.

Assuredly, part of the problem manifests in the contestants themselves, who rarely, if ever, display much cultural sensitivity or knowledge. (There are certainly exceptions to this rule, but the general lack of awareness is somewhat surprising given that contestants have had numerous opportunities to learn from past racers’ mistakes; although some have learned the value of researching a country or picking up a guidebook, none seem to grasp the utility of learning foreign languages or customs. To be fair, the situation is admittedly more complex, with producers controlling which teams are actually selected to race—I am not a conspiracy theorist, but selected teams do not appear to have distinct advantages [e.g., nobody reports spending extended amounts of time overseas], and it is entirely possible that producers simply do not select teams who prepare in this fashion.) Perhaps unwittingly perpetuating the stereotype of the “ugly American,” racers most often exhibit discourteous behavior by 1) yelling at foreign cab drivers (in English) and getting frustrated when said drivers do not understand them (even when the racers resort to speaking as they would to a child or an elderly person), 2) becoming upset that locals do not instantly know the location of some destination in the city (e.g., a specific plaza, street, or shop), or 3) complaining about India or China (size, poverty, food, smell, crowding, etc.).

Worse, perhaps, the show itself presents as a sort of extended travel narrative, painting the contestants as little more than tourists who zip from location to location, participating in challenges that amount to thinly-disguised vacation-package day trips. Ostensibly grounded in the traditions, customs, or rituals of the current location, the challenges that racers face (called roadblocks and detours) demonstrate little respect for the practices upon which they draw and certainly do not ask the racers to internalize the importance of the activity in the lives of those around them. Instead of truly engaging on a meaningful level, one might argue that the racers are, as Dean MacCannell suggests, “simply collect[ing] experiences of difference” (again, we need to question the role of editors/producers, as such internalization may in fact occur for racers, but such a transformation is never highlighted in the on-screen interviews unless the reaction is so over-the-top as to be insincere). Moreover, building upon thoughts from Lisa Nakamura’s chapter “Where Do You Want to Go Today?” one can see that, from a Western (in this case, American) perspective, The Amazing Race is constructed on pillars of Otherness, exoticism, and foreignness.

The Amazing Race – Season 19, Episode 1

Take the above scene, for example, which features a font designed to evoke associations of “Asian culture” imprinted upon paper umbrellas, set in a temple. Putting aside the issue that the task at hand has nothing to do with any of the Asian “props,” the font itself is incredibly problematic: it renders Roman (i.e., Western) letters in faux brush strokes—a style of writing that finds a home in no Asian culture on Earth. Second only to the typography of the stereotypical Chinese take-out container (see image to the right) in its familiarity to a Western audience, the font used in The Amazing Race demonstrates just how shallow the program really is.

On a larger level, however, the show also demonstrates no small amount of Orientalism as it works to legitimize Western culture, often presenting local culture/customs in a tone that evokes terms like “quaint” or “backward.” (Although primarily focused on America, one might also note that the show’s host, Phil Keoghan, expands the narrative slightly, presenting a form of acceptable/valued Otherness: a man who presents as White but speaks with a New Zealand accent.) The exotic nature of the locations/tasks is also often conveyed through their status as spectacle.

Chipmunk Adventure Clip

Watching the main titles, one can almost ignore the distinct (yet ambiguous) “ethnic” soundtrack and compare the images to those of other travelogues. In particular, The Chipmunk Adventure (1987), a movie made for children, is striking in its presentation of cultural icons from around the world, suggesting that The Amazing Race is not the first media product to treat foreign people in this way. This treatment, aspects of which are also mentioned in Vijay Prashad’s The Karma of Brown Folk, alludes to the trope of the “forever foreigner,” which suggests that although dominant American culture may tolerate, absorb, or incorporate aspects of other cultures, titillation derives from the notion that one is participating in activity that is perpetually Othered and will never be as “American” as apple pie (amusingly, and perhaps rightly, Jennifer 8. Lee argues that this phrase should be changed to “American as Chinese food”) and country music.

Instead of taking the opportunity to truly educate an American audience about the complexities and joys of life abroad, The Amazing Race pushes an ideology that, in large and small ways, reaffirms just how great it is to be American. With television as passport, we are able to visit distant lands (from the comfort of our couches, no less) and accrue knowledge, if not understanding. We watch for an hour a week and come away feeling worldly, content to accept the manufactured diversity on screen (through the composition of racing teams and locations) as a substitute for the real thing as we reassure ourselves that we, as White Americans, truly represent the amazing race.


Legends of the Fall

When reading the fiction of Cordwainer Smith, I found myself making connections to Richard Matheson’s I Am Legend. Although I would classify I Am Legend as more of a horror story than a work of Science Fiction—that being said, the genres have a tendency to overlap, and a strict distinction is not necessary for this article—both pieces were published in the 1950s, a time assuredly rife with psychological stress. Although we certainly witness an environment coming to terms with the potential impact of mass media and advertising (see discussion from last week’s class), I also associate the time period with the incredible mental reorganization that resulted for many from the increased migration to the suburbs—a move that would cause many to grapple with issues of competition, conformity, routine, and paranoia. In a way, just as 1950s society feared, the threat really did come from within.

And “within,” in this case, did not just mean that one’s former neighbors could, one day, wake up and want to eat you (one of the underlying themes in zombie apocalypse films set in suburbia) but also that one’s mental state was subject to bouts of dissatisfaction, depression, and isolation. Neville (the last human in Matheson’s book, who must fight off waves of vampires) and the protagonists of Smith’s stories are each othered in their own ways, and although Smith overtly points to themes of empowerment/disenfranchisement, I could not help but wonder about the psychological stress that each character endured as a result of a sense of isolation. Martel (“Scanners Live in Vain”) fights to retain his humanity (and connection to it) through his wife and cranching, Elaine (“The Dead Lady of Clown Town”) falls in love with the Hunter and fuses with D’joan while Lord Jestocost (“The Ballad of Lost C’mell”) falls in love with an ideal, and finally Mercer (“A Planet Named Shayol”) unwittingly chooses community over isolation by refusing to give up his personality and eyesight.

Throughout the stories of Matheson and Smith, we see that the end result of warfare is a shift in (or acceptance of) a new form of ideology. (This makes sense particularly if we take Smith’s position from Psychological Warfare that “Freedom cannot be accorded to persons outside the ideological pale,” indicating that there will necessarily be winners and losers in the battle necessitated by differences in ideology.) In particular, however, I found “Scanners Live in Vain” and “The Dead Lady of Clown Town” most interesting in that their conclusions point to the mythologizing of characters (Parizianski and Elaine), which is the same sort of realization Matheson’s Neville has as he comes to terms with the fact that although he was the protagonist of his own story, he will be remembered as a conquered antagonist of the new humanity. Neville, like Parizianski and Elaine, has become legend.

Ultimately, I think that Smith’s stories weave together a number of interrelated questions: “What is the role of things that have become obsolete?” “What defines a human (or humanity)?” “How is psychological warfare something that is done to us not just by others, but by ourselves?” and, finally, “If psychological warfare is an act committed to replace and eradicate ‘faulty’ ideology, what is our role in crafting a new system of values and myths? What does it mean that we become legends?”


Canon Fodder?

To be sure, broadcast media are constructed around an agenda. Although we might argue about whether the underlying goals of mass media are anti- or pro-social, it does not seem unreasonable to suggest that we are continually being influenced (alternatively manipulated, persuaded, informed, etc.) by media. This particular view does not preclude the possibility of audience agency, but it does suggest that we must remain mindful of top-down messaging, no matter what kind of meaning is construed by viewers.

Often caught in the middle of this tension are media outlets, which evidence a complicated relationship as corporate subsidiaries that may or may not be working in the public interest.[1] To become cynical and overly suspicious of media channels is to disengage from the system, but failing to question the broad range of factors that shape media production is to be naïve and to remove oneself in another fashion—where, then, is the happy medium? How much energy can and should be invested in understanding how mass media intersects with all levels of life (e.g., individual, interpersonal, and communal)? When have we done our due diligence, and when have we become paranoid?

I often wonder if the impulse to ignore the impact of media on our lives is less willful and more of a survival instinct. If we accept that modern humans are assaulted by thousands of things that demand our attention, perhaps the drive to search for simplified narratives makes intuitive sense—our brains would shut down if we attempted to juggle all of the outside factors that, at any given moment, may or may not be affecting us. As such, broad declarations (helped along by catchphrases or clever wording) like “video games cause violent kids” or “television rots your brain” represent stable, if perhaps incorrect, positions on the ways in which media intersect with our lives. Taking the stance that “the Internet is for porn,” for example, uncomplicates one’s relationship to the Internet, removing all of the conditional clauses—the hemming and the hawing—and, in short, eliminating all of the subtlety.

And while I certainly do not expect all people to be as inherently interested in media as I am—to expect so would be arrogant and short-sighted—I do often wonder how I can encourage other people to want to engage with the subject. More than just cultivating modes of resistance, I think it is important for individuals to consider what kinds of power are inherent in media (and their associated systems) and how this power can be leveraged. Resistance and disruption are one possible outcome, but I am much more inclined to develop a broader framework and set of skills.

Speaking to this notion of complexity, Eva Illouz comments on the value of canonic texts, noting that they are valuable not for some inherent quality of “excellence” but because they offer new ways to organize knowledge. These structures, along with the resulting viewpoints, also provide theorists with a vocabulary to employ while supporting and contesting their positions, and it is this process—what Illouz calls “tension”—that allows us to refine and articulate our perspectives. Tension forces us not only to define the boundaries of our own thoughts but to defend how, when, and why we draw the lines that we do.

Ultimately, although the products of this process are important, I think that the process itself, in contrast to the practices of most modern Americans, speaks to a fundamentally different mode of engagement with media. We can complain all we like, lamenting the manipulation of the masses, but we must also take a close look at ourselves, for we too play a role in all of this. If we are truly concerned with the inability of the average American to think critically about the potential influence of media, we must roll up our sleeves, get dirty, and engage with those whom we wish to help—and do so on their level. We cannot expect citizens to wake up one day and realize that they had it all wrong (nor did they, necessarily), and we cannot chastise people for not intuitively grasping what is, to us, so clear. Rather, we need to think carefully about how we can encourage people to grow into a position from which they actively question the media to which they are exposed.


[1] Let us ignore, for the moment, the outlets at the extremes of this continuum (e.g., state-run propaganda machines or pirate radio stations).


Something in the Air Tonight

Something in the Air Tonight – PowerPoint Slides

1660 was a year of great upheaval in England, with the beginning of the English Restoration marked by the ascension of Charles II to the throne. That same year, another event—lesser known, although no less revolutionary—occurred, one that would affect the future of Science forever: the invention of air.

Now I don’t mean that someone found a container and mixed together 78% nitrogen, 21% oxygen, and some other stuff. Air as a gas had existed for millennia. And I don’t mean air as an idea or concept. Rather, I mean the invention of air as an object of inquiry—something that could be studied and was worthy of such scrutiny.

Using the recently invented vacuum pump as an experimental apparatus, Robert Boyle published a book called New Experiments Physico-Mechanicall, a volume that aimed to describe the properties of air. Although this milestone may seem somewhat uninteresting to non-science geeks, the thoughts forwarded in this work formed the cornerstone of Boyle’s Law, which would later become incorporated into our fundamental understanding of how gases behave in closed systems. In short, Boyle’s publication helped to change the way in which we saw the world, rendering the once-invisible apparent, if still ephemeral.
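For the curious, the law itself is simple to state in modern notation (notation that, it should be said, long postdates Boyle’s own presentation): for a fixed quantity of gas held at a constant temperature, pressure and volume vary inversely.

$$PV = k \qquad \text{or, comparing two states,} \qquad P_1V_1 = P_2V_2$$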

But when we think about air today—if at all—we don’t stop to ponder how it works. We just know that it does. We instinctively know that creating a small vacuum at the top of a straw will cause the liquid below to move up due to a pressure differential, and this, in many ways, demonstrates the true power of science, for many of its principles are simply accepted as truth.

In her book Always Already New, Lisa Gitelman draws a parallel between the adoption of scientific knowledge and the acceptance of media, arguing that both must fight to prove themselves and, having done so, proceed to weave themselves into our lives until they become unremarkable and it becomes difficult for us to imagine how we ever functioned without them. Divorcing media from technology, Gitelman suggests that a key point in understanding the impact of media is describing the social experience that arises around new forms of media and tracing how these experiences evolve over time. Indeed, the transitions between introduction, acceptance, and banality often tell us much about the socio-cultural context in which media reside, with concerns inevitably transitioning from “What is this new technology replacing (i.e., what is lost)?” to “What are the health implications (e.g., is this going to give me cancer)?” to “What are the implications for the community (e.g., will this bring about the apocalypse)?”

Gitelman further argues that as we forget the social processes that govern media, allowing its protocols to become invisible, media gains a sort of authority and legitimacy, as the state of being influenced by media becomes “the way it always was” even though it wasn’t. Take a second and think about how protocols surrounding media—all media, not just new media—have become incorporated into your life. How do you interact with media? What are the rules (official or otherwise) that govern such behavior?

In front of us is a sampling of the ways in which we might interact with media and, latent in these actions, is a set of protocols that instruct our behavior. But, more than just guiding our interaction with the media, Gitelman argues that these protocols also serve to update and stabilize our sense of the abstract public, with communities rising around shared ritual. Another way to think about this is that we ascertain our position in the community by locating ourselves within an ecology of practice.

Overlapping in some ways with communities bounded by taste, we see that similar patterns of interaction with, or response to, media help to delineate those who are like us from those who are not. Speaking of taste, how many of you prefer the ad on the left? The right? No preference?

In late 2009, IKEA decided to change its font from Futura to Verdana, a process that has little, if any, inherent significance. The switch, however, provoked no small amount of discussion online, with ardent supporters arguing against equally strident naysayers. Aesthetics aside, the interesting take-home message from all of this is the way in which fonts—and typography in general—represent precisely the type of incorporation that Gitelman was talking about with respect to media. As media becomes naturalized, we tend to focus on the content such that methods of production become invisible. When we encounter text, we register what is said before we think about how it’s presented. To quote Adrian Frutiger,

“If you remember the shape of your spoon at lunch, it has to be the wrong shape. The spoon and the letter are tools; one to take food from the bowl, the other to take information off the page…When it is a good design, the reader has to feel comfortable because the letter is both banal and beautiful.”

Gitelman goes on to introduce other forms of inscription, namely recorded sound and new media, suggesting similarities between the cultural relationships that surrounded both sets of protocols.

“Whole new modes of inscription—such as capturing sounds by phonograph in 1878, or creating and saving digital files today—make sense as a result of social processes that define their efficacy as simultaneously material and semiotic.”

We see resonance between the early Dictaphone and speech-to-text input software like visual voicemail, or between the gramophone and code-to-speech programs like auto-translation. Gitelman warns, however, that inscriptive media are also inextricably connected to history, and attempts to examine these artifacts historically are necessarily complicated, for we are studying the process of inscription through the products those processes produced! Problematic, to say the least. I suggest that the first step to successful study is to attempt an understanding of the factors that guide our inquiry: our primary sources for understanding the phenomenon of recorded sound come from print, which means that we must necessarily question the relationship of print to recorded sound at the time.

How did these two media forms coevolve, abut, and compete? If we accept Habermas’ position that the protocols of print media and speech were ensconced in public life and that recorded sound helped to reshape the public, we immediately see the need to question written accounts of recorded sound.

Ultimately, Gitelman’s point is that the socio-historical investigation of media presents a dense and complex web of associations for the would-be researcher, with recorded sound intersecting with family structures, gender, economic demands, and socio-political concepts. The introduction of recorded sound, like that of new media, necessitated a corresponding response in established social structures as it floated out from the gramophone and through the ether, leaving a trail of revolution and restructuring…just like the last time we invented air.


I Swear I’ve Heard This One Before, Somewhere…

The prominent theme of amnesia seems of note in this week’s readings, gaining resonance when paired with the larger connective thread of advertising. Although one might argue that amnesia has taken on a negative sheen thanks to its popularity in soap operas, the device has been employed in a number of popular contexts that range from retconning (effecting a kind of imperfect amnesia on the audience as canon asks them to “forget” history) to dissociative fugue to cyclical histories/journeys that continually reset. The last of these manifestations, which we see in Frederik Pohl’s “The Tunnel Under the World,” invokes the memory of myths in which the hero must repeat his trials until he learns a lesson that speaks to some supposedly profound truth. Groundhog Day and Dark City come to mind, although these two offerings contain messages that diverge in interesting ways: while the plot of Groundhog Day focuses on an individual transformation, Dark City also nods to a sort of “cultural amnesia” that plagues the inhabitants of the self-contained city.

An easy target for this malaise is the spell cast by advertising, with such accusations made in “The Tunnel Under the World.” Given that the story was written in the middle of the 20th century—a time period that saw increasing emphasis on commercialization and industrialization—it makes sense that Pohl casts the inhabitants of Tylerton as robots driven by a consciousness that is both duped and dead!

Amnesia and complacency also manifest in Henry Kuttner’s “The Twonky,” and here we can contrast the amnesia of time-travelling Joe with the induced state of inaction that Kerry Westerfield experiences as a result of his interaction with the Twonky. In their own ways, both Pohl and Kuttner draw a connection between media and the subjugation of the human mind and/or spirit. (Interestingly, there also seems to be a stratification of media, with the telephone being suspect [speaking perhaps to telephone salesmen?] while Westerfield finds a bit of sanctuary under the marquee of a movie theater. Cinema, then, perhaps represented a higher cultural form that was less susceptible to the corrosive influence of advertising, although this notion has changed somewhat over the years, as any modern moviegoer can attest.) Given the context in which these two authors wrote, it is not overly difficult to connect the dots and see how both of these short stories spoke to advertising conveyed through media channels as it infected the general population, supplanting natural sentience with manufactured thought (or nothing at all!) in a process that invokes some of the pessimistic views of institutions like the Frankfurt School.


Love out of Nothing at All?: A re-examination of popular culture’s presence in the college application

Key phrases:

College application essay, identity as narrative, popular culture, digital media literacy, self-branding

Session type:

Structured talk (30 minutes), discussion (30 minutes)

Target audience:

Secondary school counselors, CBOs

Abstract:

Harry Potter. Twilight. Video games. Twitter.

 The media environment that surrounds today’s applicants seems rife with topics that likely sit high atop lists that solemnly declare, “Bad Essay Ideas.” And, perhaps, not without reason, for the typical college application essay is one that often treats these subjects (along with more traditional ones like leadership, sports, or community service) lightly, evidencing a cursory understanding of the material at best. Students seem to struggle to infuse meaning into activities that appear on resumes, attempting to convince admission officers—and perhaps themselves—that these pursuits constituted time well spent.

But what if we could encourage students to rethink their engagement in these activities while also challenging them to respond to the question, “Why does this matter?” Instead of asking students to conform to a process that privileges particular activities over others, how might we inspire young people to cultivate genuine interests while simultaneously thinking critically about the implications of their actions? Similarly, how might we encourage adults to recognize the nascent political themes of Harry Potter, see young people negotiating family structures and gender roles through Twilight, witness creativity and collaboration through video games, and understand how Twitter can develop the skill of curation? Instead of reinforcing the chasm between digital media/popular culture and education, how can we use that space to promote the skills that our students will need to be competitive in the 21st century?

Description:

College attendance and completion (at a four-year institution) have come to represent a significant demarcation in American society, with studies showing a positive correlation between attainment of a bachelor’s degree and total lifetime income. But more than a mere economic advantage, higher education represents an opportunity for social mobility and the accumulation of social/cultural capital. If we accept that college attendance represents an at least partially transformative experience, we realize that understanding who is accepted is important.

Informal reports from educators (and opinion pieces in The Chronicle of Higher Education) have hinted that the current generation of college students displays a wide range of skills and intelligences but also appears to be distracted by social media platforms such as Facebook and Twitter while in class, suggesting that digital media is generally seen as inhabiting a space separate from education (although this might be changing, albeit slowly).

However, I suggest that some of the skills professors desire (e.g., critical thinking, academic inquiry, engagement, and risk-taking) can be, and are, cultivated through pop culture and digital media use/production, but it is my belief that, as a whole, the undergraduate admission process systematically devalues participation in such spaces, privileging more traditional—and readily understood—activities. There seems to be a potential disconnect, then, between selection criteria and the skills that schools hope to attract; if an institution values traits like proactivity, are admission officers fully sensitive to the range of ways in which such a trait might present or manifest? Or have we become overly reliant on quantitative measures like GPA and test scores and the relative stability they purport to provide? If such a bias exists, a possible effect of the college application structure (and the American educational system) is to cause those involved in the admission process to internalize a mental barrier between digital media and education.

It seems evident that the admission selection process (as reflective of an institution’s values) plays a large part in shaping who is able to attend a given school. Highly-selective schools, however, seem to have a disproportionate amount of influence in American culture, as their practices create a stance that other colleges and universities either aspire to or react against. Therefore, the position that highly-selective institutions take on the integration of digital media and education likely has a trickle-down effect on the admission profession as a whole and is likely internalized by college counselors and high school students who aim to be accepted by these schools.

Ultimately, I hope to foster discussion among high school students, high school college counselors, and admission officers that examines how we collectively conceptualize and articulate the value of the connection between pop culture, digital media, and education. I argue that higher-order skills can be cultivated by youth practices such as remix, but that the incongruent language employed by youth and adults makes recognition of this process difficult. After giving a short talk that explores the ways in which the everyday practices of youth can be seen as valuable, I will ask participants to join in a discussion that seeks to uncover strategies for enabling youth to articulate their process and for challenging our peers to become more sensitive to the manifestation of traits that mark a “successful student.”


 

Biography:

A 6-year veteran of undergraduate admission at the University of Southern California (Los Angeles, CA), Chris Tokuhama was responsible for coordinating the University’s merit-based scholarship process and 8-year combined Baccalaureate/M.D. program. Working closely with high school populations, Chris became interested in issues that ranged from self-harm to educational access and equity, which has helped to inform his current research interests in digital media literacy, learning, and youth cultures. In addition to his role as an advocate for youth in Education, which included a Journal of College Admission publication on the effects of branding in the admission process, Chris studies the relationship of personal identity to the body as a doctoral student in USC’s Annenberg School for Communication and Journalism. Employing lenses that range from Posthumanism (with forays into Early Modern Science and Gothic Horror) to the intersection of technology and community in Transhumanism to the transcendent potential of the body contained in religion, Chris examines how the changing bodies portrayed in media reflect or demand a renegotiation of the sense of self, acting as visual shorthand for shared anxieties. When not pursuing his studies, Chris enjoys working with 826LA and drinking over-priced coffee.


Underneath It All

WARNING: The following contains images that may be considered graphic in nature. As a former Biology student (and Pre-Med at that!), I have spent a number of hours around bodies and studying anatomy, but I realize that this desensitization might not exist for everyone. I watched surgeries while eating dinner in college, and I study horror films (which I realize is not normal). Please proceed at your own risk.

At first glance, the anatomical model to the left (also known as “The Doll,” “Medical Venus,” or simply “Venus”) might seem like nothing more than an inspired illustration from the most well-known text of medicine, Gray’s Anatomy. To most modern viewers, the image (and perhaps even the model itself) raises a few eyebrows but is unlikely to elicit a reaction much stronger than that. And why should it? We are a culture that has grown accustomed to watching surgeries on television, has witnessed the horribly mutilating effects of war, and has even turned death into a spectacle of entertainment. Scores of school children have visited Body Worlds or been exposed to the Visible Human Project (if you haven’t watched the video, it is well worth the minute). We have also been exposed to a run of “torture porn” movies in the 2000s that included offerings like Saw, Hostel, and Captivity. Although we might engage in a larger discussion about our relationship to the body, it seems evident that we respond to, and use, images of the body quite differently in a modern context. (If there’s any doubt, one need only look at a flash-based torture game that appeared in 2007, generating much discussion.) Undoubtedly, our relationship to the body has changed over the years—and will likely continue to do so with forays into transhumanism—which makes knowledge of the image’s original context all the more crucial to fully understanding its potential import.

Part of the image’s significance comes from its appearance in a culture that generally did not have a sufficient working knowledge of the body by today’s standards, with medical professionals also suffering a shortage of cadavers to study (which in turn led to an interesting panic regarding body snatching and consequently resulted in a different relationship between people and dead bodies). The anatomical doll pictured above appeared as part of an exhibit in the Imperiale Regio Museo di Fisica e Storia Naturale (nicknamed La Specola), one of the few natural history museums open to the public at the time. This crucial piece of information allows historical researchers to immediately gain a sense of the model’s importance for, through it, knowledge of the body began to spread throughout the masses and its depiction would forever be inscribed onto visual culture.

Also important, however, is the female nature of the body, which itself reflected the era’s fascination with women in Science. Consider, for example, that the Venus lay ensconced in a room full of uteruses, and we immediately gain more information about the significance of the image above: in rather straightforward terms, the male scientist’s fascination with the female body and the process of reproduction becomes evident. Although a more detailed discussion is warranted, this interest spoke to developments in the Enlightenment that began a systematic study of Nature, wresting away its secrets through the development of empirical modes of inquiry. Women, long aligned with Nature through religion (an additional point to be made here is that in its early incarnations, Science lacked the clear demarcations between fields that we see today, meaning that scientists were also philosophers and religious practitioners), were therefore objects to be understood as males sought to once again peer further into the unknown. This understanding of the original image is reinforced when one contrasts it with its male counterparts from the same exhibit, revealing two noticeable differences. First, the female figure possesses skin, hair, and lips, which serve as reminders of the body’s femininity and humanity. Second, the male body remains intact, while the female body has been ruptured/opened to reveal its secrets. The male body, it seems, had nothing to hide. Moreover, the position of the female model is somewhat evocative, with its arched back, pursed lips, and visual similarity to Snow White in her coffin, which undoubtedly speaks to the posing of women’s bodies and how such forms were intended to be consumed.

Thus, the fascination with women’s bodies—and the mystery they conveyed—manifested in the physical placement of the models on display at La Specola, both in terms of distribution and posture. In short, comprehension of the museum’s layout helps one to understand not only the relative significance of the image above, but also speaks more generally to the role that women’s bodies held in 19th-century Italy, with the implications of this positioning resounding throughout Western history. (As a brief side note, this touches upon another area of interest for me with horror films of the 20th century: slasher films veiled an impulse to “know” women, with the phrase “I want to see/feel your insides” being one of my absolute favorites as it spoke to the psychosexual component of serial killers while continuing the trend established above. Additionally, we witness a rise in movies wherein females give birth to demon spawn (e.g., The Omen), are possessed by male forces (e.g., The Exorcist), and are shown as objects of inquiry for men who also seek to “know” women through science (e.g., Poltergeist). Recall the interactivity with the Venus and we begin to sense a thematic continuity between the renegotiation of women’s bodies, the manipulation of women’s bodies, and knowledge. For some additional writings on this, La Specola, and the television show Caprica, please refer to a previous entry on my blog.)

This differential treatment of bodies continues to exist today, with the aforementioned Saw providing a pertinent (and graphic) example. Compare the image of a female victim to the right with that of the (male) antagonist Jigsaw below. Although the situational contexts of these images differ, with the bodies’ death states providing commentary on the respective characters, both bodies are featured with exposed chests in a manner similar to the Venus depicted at the outset of this piece. Extensive discussion is beyond the scope of this writing, but I would like to mention that an interesting—and potentially productive—sort of triangulation occurs when one compares the images of the past/present female figures (the latter of whom is caught in an “angel” trap) with each other and with those of the past/present male figures. Understanding these images as points in a constellation helps one to see interesting themes: for example, as opposed to the 19th-century practice (i.e., past male), the image of Jigsaw (i.e., present male) cut open is intended to humanize the body, suggesting that although he masterminded incredibly detailed traps, his body was also fragile and susceptible to breakdown. Jigsaw’s body, then, presents some overlap with the Venus (i.e., past female), particularly when one considers that Jigsaw’s body plays host to a wax-covered audiotape—in the modern interpretation, it seems that the male body is also capable of harboring secrets.

Ultimately, a more detailed understanding of the original image would flesh out its implications for the public of 19th-century Italy and also look more broadly at depictions of women, considering how “invasive practices” were not limited to surgery. La Specola’s position as a state-sponsored institution also has implications for the socio-historical context of the image that should be investigated. Finally, and perhaps most importantly, scholars should endeavor to understand the Medical Venus as not just reflective of cultural practice but also seek to ascertain how its presence (along with other models and the museum as a whole) provided new opportunities for thought, expression, and cultivation of bodies at the time.


The Most Important Product Is You!

“The Culture Industry” seems to be one of those seminal pieces in the canon of Cultural Studies that elicits a visceral (and often negative) reaction from modern scholars. Heavily influenced by the Birmingham School, generations of scholars have been encouraged to recognize agency in audiences, with the Frankfurt School often placed in direct opposition to the ideals of modern inquiry. Read one way, Horkheimer and Adorno appear elitist, privileging what has come to be known as “high culture” (e.g., classical music and fine art) over the entertainment of the masses. Horkheimer and Adorno argue that the culture industry creates a classification scheme in which man exists; whereas man previously struggled to figure out his place in the world, this job is now done for him by the culture industry and its resultant structure of artificial stratification. Ultimately, then, because he does not have to think about his position in culture, man is not engaged in his world in the same way as he was before, which allows content to become formulaic and interchangeable.

Later echoed in Adorno’s “How to Look at Television,” “The Culture Industry” laments the predictable pattern of televisual media, with audiences knowing the ending of movies as soon as they begin. (Interestingly, there is some overlap between this and Horror, with audiences expecting that offerings follow a convention—one might even argue that the “twist ending” has become its own sort of genre staple—such that a movie’s failure to meet their expectations leaves the audience disgruntled. This of course raises questions about whether modern audiences have been conditioned to expect certain things out of media, or to engage in a particular type of relationship with their media, and whether plot progression, at least in part, defines the genre.) Horkheimer and Adorno’s attitude speaks to a privileging of original ideas (and the intellectual effort that surrounds them), but the modern context seems to suggest that the combination of preexisting ideas in a new way holds some sort of cultural value.

Adorno’s “How to Look at Television” also points out a degradation in our relationship to media by highlighting the transition from inward-facing to outward-facing stances, equating such migration with movement away from subtlety. Although the point itself may very well be valid, it does not include a robust discussion of print versus televisual media: Adorno’s footnote mentioning the different affordances of media (i.e., print allows for contemplation and mirrors introspection while television/movies rely on visual cues due to their nature as visual media) deserves further treatment, as the implications of these media forms likely have repercussions for audience interactions with them. Almost necessarily, then, do we see a televisual viewing practice that does not typically rely on subtlety due to a different form of audience/media interaction. (It might also be noted that the Saw movies have an interesting take on this in that they pride themselves on leaving visual “breadcrumbs” for viewers to discover upon repeated viewings, although these efforts are rarely necessary for plot comprehension.)

To be fair, however, Horkheimer and Adorno wrote in a radically different media context. Sixty years later, we might argue that there is not much left to discover and that prestige has now shifted to recombinations of existing information. Moreover, Horkheimer and Adorno’s position also assumes a particular motivation on the part of the audience (i.e., that the payoff is the conclusion rather than the journey) that may no longer hold for modern viewers.

Although Horkheimer and Adorno rightly raise concerns regarding a lack of independent thinking (or even the expectation of it!), we are perhaps seeing a reversal of this trend with transmedia and attempts at audience engagement. Networks now seem to want people to talk about their shows (on message boards, Twitter, etc.) in order to keep audiences invested, and although we might quibble about the quality of such discourse and whether it is genuine or reactionary, it seems that this practice must be reconciled with Horkheimer and Adorno’s original position. It should be noted, however, that the technology on which such interaction relies was not around when Horkheimer and Adorno wrote “The Culture Industry,” and the Internet has likely helped to encourage audience agency (or at least made it more visible).

Seeking to challenge the notion that Horkheimer and Adorno discounted audience agency, John Durham Peters argues for the presence of both industry and audience influence in the space of culture, and furthermore that while audiences may be empowered, their actions serve to reinforce their submission to the dominant wishes of industry in a realization of hegemonic practice. Although Horkheimer and Adorno, writing in the shadow of World War II, were undoubtedly concerned with the potential undue influence of mass media as a vehicle for fascist ideology—as evidenced by quotes such as “The radio becomes the universal mouthpiece of the Fuhrer” and “The gigantic fact that the speech penetrates everywhere replaces its content”—they were also concerned that the public had relinquished its ability to resist by choosing to pursue frivolous entertainment rather than freedom (Adorno, 1941). From this position, Peters extracts the argument that Horkheimer and Adorno did in fact recognize agency on the part of audiences, but also that such energies were misspent.

The notion of “the masses” has long been an area of interest for me, as it manifests throughout suburban Gothic horror in the mid-20th century. In many ways, society was struggling to come to terms with new advances in technology and the changes these inventions would bring about in practice and structure. Below is an excerpt from a longer piece about a movie that grappled with some of these issues.

Reacting to atrocities witnessed throughout the course of World War II, Americans in the 1950s became obsessed with notions of power and control, fearing that they would be subsumed by the invisible hand of a totalitarian regime. In particular, the relatively young medium of television became suspect as it represented a major broadcast system that seemed to have an almost hypnotic pull on its audience, leaving viewers entranced by its images. And images, according to author and historian Daniel Boorstin, were becoming increasingly prominent throughout the 19th century as part of the Graphic Revolution, replete with the power to disassociate the real from its representation (1962). For cultural critics still reeling from the aftereffects of Fascism and totalitarianism, this was a dangerous proposition indeed.

Although these underlying anxieties of mid-century American society could be examined via a wide range of anthropological lenses and frameworks, visual media has historically provided a particularly vivid manifestation of the fears latent in the people of the United States (Haskell, 2004). This is, of course, not to imply that visual media is necessarily the best or only means by which we can understand prevailing ideologies in the years after World War II, but merely one of the most visible. However, as a critical examination of the entire media landscape of the 1950s would be beyond the scope of a single paper, discussion shall be primarily concentrated on Elia Kazan’s 1957 movie A Face in the Crowd, with particular attention paid to the contrasting channels of cinema and television.[1] This paper will seek to briefly position A Face in the Crowd in the larger context of paranoia-driven cinema of the 1950s before using the film as an entryway to discuss critiques of mass culture. Given the film’s apparent sustained resonance, as indicated by its relatively recent mentions (Vallis, 2008; Hoberman, 2008b; Franklin, 2009), the arguments of Critical Theory will then be applied to modern American culture in an attempt to ascertain their continued validity. Finally, an argument will be made that acknowledges the potential dangers facing mass culture in the 21st century but also attempts to understand the processes that underlie these pitfalls and provides a suggestion for recourse in the form of cultural and media literacy.

Paranoia, Paranoia, Everyone’s Coming to Get Me

The post-war prosperity of the 1950s caused rapid changes in America, literally altering the landscape as families began to flood into the newly-formed suburbs. With the dream and promise of upward social mobility firmly ensconced in their heads, families rushed to claim their piece of the American dream, replete with the now-iconic front yard and white picket fence. And yet, ironically, a new set of worries began to fester underneath the idyllic façade of the suburbs as the troubles of the city were merely traded for fears of paranoia and invasion; the very act of flight led to entrapment by an ethos that subtly precluded the possibility of escape.

As with many other major cultural shifts, the rapid change in the years following World War II caused Americans to muse over the direction in which they were now headed; despite a strong current of optimism that bolstered dreams of a not-far-off utopia, there remained a stubborn fear that the quickly shifting nature of society might have had unanticipated and unforeseen effects (Murphy, 2009). Life in the suburbs, it seemed, was too good to be true and inhabitants felt a constant tension as they imagined challenges to their newly rediscovered safety:  from threats of invasion to worries about conformity, and from dystopian futures to a current reality that could now be obliterated with nuclear weapons, people of the 1950s continually felt the weight of being a society under siege. An overwhelming sense of doubt, and more specifically, paranoia, characterized the age and latent fears manifested in media as the public began to struggle with the realization that the suburbs did not fully represent the picturesque spaces that they had been conceived to be. In fact, inhabitants were assaulted on a variety of levels as they became disenchanted with authority figures, feared assimilation and mind control (particularly through science and/or technology), began to distrust their neighbors (who could easily turn out to be Communists, spies, or even aliens!), and felt haunted by their pasts, all of which filled the movie screens of the decade (Jensen, 1971; Murphy, 2009; Wolfe, 2002).[2] Following solidly in this tradition, Kazan’s A Face in the Crowd picks up on some of the latent strains of paranoia in American culture while simultaneously serving as a platform for a set of critiques regarding mass culture.

Somewhere, a Star Is Made

The storyline of A Face in the Crowd is rather straightforward and yet deceptively complex in its undertones:  on the surface, we experience a rather heavy-handed morality tale in the form of country bumpkin Larry “Lonesome” Rhodes, a relative nobody who is plucked from obscurity and made (and subsequently broken) through powers associated with television. Yet, it is only when we begin to connect the movie to a larger societal context that we begin to understand the ramifications of the film’s message; a careful examination of A Face in the Crowd reveals striking suspicions regarding the role that media plays (in this case, primarily television and cinema) in shaping American culture. Stars, director Elia Kazan argues, are not so much born as made, a distinction that portends dire consequences.

It is worth noting that Kazan’s film was made during a time when the concept of the “celebrity” was being renegotiated by America; for a large part of its history, the United States, firmly grounded in a Puritan work ethic, had honored heroes who exemplified ideals associated with a culture of production and was struggling to reconcile these notions in an environment whose emphasis was now focused on consumption. Although modern audiences might initially find this shift difficult to appreciate, one need only consider that the premium placed on production is so central to American ideology that it continues to linger today: in a culture that exhibits rampant consumerism, we still value the “self-made man” and sell the myth of America as a place where anyone can achieve success through hard work. To abandon these ideas would necessitate that we reinterpret the very meaning of “America.” Thus, we become more sympathetic to the critics of the day who lamented the loss of the greatness of man and bristled against the notion that fame or celebrity could be manufactured—such a system would only result in individuals who were lacking and unworthy of their status (Gamson, 1994; Benjamin, 1973).

Such is the case, it seems, with Larry Rhodes, who is discovered by roving reporter Marcia Jeffries in an Arkansas jail. Although it cannot be denied that Rhodes has some modicum of talent and a certain charisma that comes from being unafraid to speak one’s mind, Marcia ushers Rhodes onto the path of greatness by dubbing him “Lonesome,” thus creating a character that transforms Rhodes from a despondent drunk into a winsome drifter. This scene—the first major one in the movie—introduces the important notion that those involved in the media can be implicitly involved in the manipulation of the information that travels over the airwaves. Subtly adding to the insidious nature of the media, A Face in the Crowd portrays Marcia as a character who seems likable enough but who is also, in a way, exploiting the people in jail as she rushes in with her tape recorder, intent on prying the stories from the characters she finds (or creates!), without exhibiting much concern for truly understanding why these men are imprisoned in the first place. Taken to an extreme, we later come across the character of The General, who further perverts the connection between media and power as he conspires with Lonesome to remake the image of Senator Worthington Fuller as the senator runs for President.

Yet, as Lonesome Rhodes grows into his role as a media personality, he quickly demonstrates that the power to manipulate does not lie solely with those who sit behind the cameras. In Memphis, Rhodes incites a riot against the Luffler mattress company and also solicits donations in order to help a Black woman rebuild her house. In light of this, we can see that while Kazan focuses on the negative implications of television and celebrity, the relative good or bad that comes from these actions is not necessarily the point—instead, the one constant in all of the depicted scenarios is a public that is manipulated into performing actions on the behalf of others. Although the characters of Lonesome and The General are vilified throughout the film, it is the masses for whom Kazan demonstrates true disdain.

Extraordinary Popular Delusions

Perhaps nowhere is this contempt more apparent than at the end of the film where, in an attempt to offer a small moment of solace to Marcia after her unmasking of Lonesome, writer Mel Miller notes, “We get wise to them, that’s our strength” (Kazan, 1957). And Miller is not wrong: Western tradition has long recognized the correlation between knowledge and power, and Miller’s assertion touches upon the revelatory clout inherent in the realignment of perception and reality as noted by public relations guru Howard Bragman (2008). A more critical examination of the film’s closing scene, however, raises an important question: Who is Miller’s “we”? Although one might be tempted to read this line as indicative of an egalitarian philosophical view, it is important to note that the only two characters in the shot represent the film’s arguably upper-middle-class, and pointedly Eastern-educated, elite—nowhere to be seen are representatives of the small Arkansas town from the film’s opening or denizens of Memphis, both of whom serve to characterize the majority of Lonesome’s devoted viewers.[3] In fact, if we take time to reflect upon the movie, we realize that the majority of the audience was only alerted to Lonesome’s dual nature after Marcia flipped a control room switch and revealed the underlying deterioration; the masses oscillated from one position to the next without understanding how or why and once again adopted a passive stance in their relationship with media. Moreover, as Courtney Maloney points out, Kazan’s depiction of the agency of the masses is actually limited in scope: despite a montage of audience members vehemently phoning in, sponsors are simultaneously shown to be acting independently as they withdraw their association with Lonesome (1999). Furthermore, the subtext of the scene distances the rational decision-making of the truly powerful from the impassioned beseeching of the masses, likening the power of the latter to that of a mob. Knowledge and its associated authority, clearly, are afforded to a select group.

This idea, that the world can be divided between those who “get wise” and those who do not, serves to develop a rather sharp classist criticism against the medium of television and those who would watch it:  moviegoers, by virtue of witnessing Kazan’s work, find themselves elevated in status and privy to “the man behind the curtain” (to borrow a phrase). In contrast, the malleable masses were considered to be pacified and placated by idealistic portrayals of life in the 1950s in the form of television programs like Leave It to Beaver, The Donna Reed Show, and The Adventures of Ozzie and Harriet. Clearly, Kazan creates a dichotomy imbued with a value judgment descended from the thoughts of prominent thinkers in the Frankfurt School who, as far as aesthetics were concerned, preferred the high culture of cinema to the conformity and manipulated tastes of television (Horkheimer & Adorno, 2002; Adorno, 1985; Quart, 1989). This distinction between high and low culture would be a crucial supporting idea for critics as a prominent fear of mass culture was that it portended a collapse between concepts (e.g., fame, celebrity, or intellectual value) of objectively different quality, essentially rendering all manifestations the same and therefore all equally mundane (Boorstin, 1962; Hoberman, 2008b; Kierkegaard, 1962).  Even worse for critics, perhaps, was the perception of the masses’ refusal to grow out of its immature interests, a behavior that was characterized as both childlike and stubborn (Adorno, 1985).

And the fears of such theorists, all of whom were reacting to recent and rapid advances in broadcast technology, were not unfounded. Consider, for example, that radio had been introduced a scant fifty years prior and had vastly altered critics’ understanding of media’s potential impact, creating a precedent as it proliferated across the country and began to develop a platform for solidarity and nationalism. Yet, while the effects of radio were widely viewed as pro-social, due in part to its propagation of orchestral music and transmission of fireside chats, television was seen as a corrosive force on society that spurred on the destruction of culture instead of enriching it.[4] For the critics of the Frankfurt School, television was indicative of an entrenched sentiment that regarded mass-produced culture as formulaic and perfectly suitable for a generation of passive consumers who sat enraptured in front of the glowing set. Associating the potential dissemination of propagandist ideology with television as a form of mass broadcast, cultural theorists evoked notions of totalitarian regimes akin to those of Hitler and Stalin in an effort to illustrate the potential subjugation of individual thought (Mattson, 2003). These simmering fears, aggrandized by their concurrence with the rising threat of Communism and collectivist cultures, found fertile soil in the already present anxiety-ridden ethos of the United States during the 1950s.


[1] It should be noted, however, that the comics of this time—those that belong to the end of the Golden Age and beginning of the Silver Age—also provide an additional understanding of the ways in which Americans indirectly wrestled with their fears.

[2] For a more exhaustive list of movies that support this point, see Wolfe, 2002.

[3] Let us also not forget the fact that Lonesome exhibits a rather patronizing attitude toward his audience in his later career, instituting the Cracker Barrel show with its manufactured country lackeys (Yates, 1974). In contrast to his first stint in Memphis, Lonesome has begun to embrace his country image as a means (if an inauthentic one) to connect with his audience, a point of contention to which we will return.

[4] Curiously, however, we see that this relationship between presidential addresses (like the aforementioned fireside chats) and mass media did not elicit notable complaints from critics who were generally wary of the merging of politics and entertainment (Quart, 1989; Benjamin, 1973). Although a larger discussion is warranted regarding the subtleties of this distinction, I would suggest that part of the differentiation stems from a high-low culture dichotomy. Although critics linked the negative presence of television with corporate advertising, James Twitchell suggests that there has always been a rather intimate relationship between arts and commerce, most saliently exhibited by wealthy citizens or entities who act as patrons (Twitchell, 1996).

 

Works Cited

Adorno, T. (1941). On Popular Music. Studies in Philosophy and Social Science, 9, 17-48.

Adorno, T. (1985). On the Fetish Character in Music and the Regression of Listening. In A. Arato, & E. Gebhardt (Eds.), The Essential Frankfurt School Reader (pp. 270-299). New York, NY: Continuum.

Benjamin, W. (1973). The Work of Art in the Age of Mechanical Reproduction. In H. Arendt (Ed.), Illuminations (H. Zohn, Trans., pp. 217-242). London, England: Schocken.

Boorstin, D. (1962). The Image: A Guide to Pseudo-Events in America. New York, NY: Atheneum.

Bragman, H. (2008). Where’s My Fifteen Minutes?: Get Your Company, Your Cause, or Yourself the Recognition You Deserve. New York, NY: Portfolio.

Gamson, J. (1994). Claims to Fame: Celebrity in Contemporary America. Berkeley: University of California Press.

Haskell, M. (2004, August 8). Whatever the Public Fears Most, It’s Right Up There on the Big Screen. The New York Times, pp. 4-5.

Horkheimer, M., & Adorno, T. W. (2002). Dialectic of Enlightenment: Philosophical Fragments. Stanford, CA: Stanford University Press.

Jensen, P. (1971). The Return of Dr. Caligari. Film Comment, 7(4), 36-45.

Kazan, E. (Director). (1957). A Face in the Crowd [Motion Picture].

Maloney, C. (1999). The Faces in Lonesome’s Crowd: Imaging the Mass Audience in “A Face in the Crowd”. Journal of Narrative Theory, 29(3), 251-277.

Mattson, K. (2003). Mass Culture Revisited: Beyond Tail Fins and Jitterbuggers. Radical Society, 30(1), 87-93.

Murphy, B. M. (2009). The Suburban Gothic in American Popular Culture. Basingstoke, Hampshire, England: Palgrave Macmillan.

Quart, L. (1989). A Second Look. Cineaste, 17(2), 30-31.

Wolfe, G. K. (2002). Dr. Strangelove, Red Alert, and Patterns of Paranoia in the 1950s. Journal of Popular Film, 57-67.


It’s My Privilege

I mentioned a poem to my discussion section today, in order to help my students see the importance of an issue that is dear to my heart. As minorities, I feel that we need to band together—there is a common theme that runs through these talks in that we are all being persecuted, in some fashion, because of who we are. We are hated or punished for things that we shouldn’t have to be apologetic about. Gender is one of the issues that gets me up in the morning and causes me to get so mad that I can’t think straight; I’m so hurt sometimes by what I see that my only reaction is anger. In women, I see stories of struggle similar to my own; yet, unlike in other situations, I am in the position of power. I am an ally. I am part of the population that has the power to change things. When it comes down to it, it’s not about reaching down to help women up—it’s about reaching across. It’s about realizing that women are, as they have always been, equals. It’s about realizing that those without power are stronger than I could ever be, for they have put up with these issues all of their lives. Women are smart, capable, tough survivors. We ask so much of women and they have risen to the challenge; in a world that stacks the odds against them, women have managed to thrive. Surely, they are not the only ones to have done so, but their victories should be celebrated.

Below is the aforementioned poem that has helped me to understand the world from a different point of view. I hope that it helps you to do the same.

privilege
a poem for men who don’t understand what we mean when we say they have it

D.A. Clarke

reprinted from Banshee, Peregrine Press
Copyright (c) 1981 D. A. Clarke. All Rights Reserved

privilege is simple:
going for a pleasant stroll after dark,
not checking the back of your car as you get in, sleeping soundly,
speaking without interruption, and not remembering
dreams of rape, that follow you all day, that woke you crying, and
privilege
is not seeing your stripped, humiliated body
plastered in celebration across every magazine rack, privilege
is going to the movies and not seeing yourself
terrorized, defamed, battered, butchered
seeing something else

privilege is
riding your bicycle across town without being screamed at or
run off the road, not needing an abortion, taking off your shirt
on a hot day, in a crowd, not wishing you could type better
just in case, not shaving your legs, having a decent job and
expecting to keep it, not feeling the boss’s hand up your crotch,
dozing off on late-night busses, privilege
is being the hero in the TV show not the dumb broad,
living where your genitals are totemized not denied,
knowing your doctor won’t rape you

privilege is being
smiled at all day by nice helpful women, it is
the way you pass judgment on their appearance with magisterial authority,
the way you face a judge of your own sex in court and
are over-represented in Congress and are not strip searched for a traffic ticket
or used as a dart board by your friendly mechanic, privilege
is seeing your bearded face reflected through the history texts
not only of your high school days but all your life, not being
relegated to a paragraph
every other chapter, the way you occupy
entire volumes of poetry and more than your share of the couch unchallenged,
it is your mouthing smug, atrocious insults at women
who blink and change the subject — politely — privilege
is how seldom the rapist’s name appears in the papers
and the way you smirk over your PLAYBOY

it’s simple really, privilege
means someone else’s pain, your wealth
is my terror, your uniform
is a woman raped to death here, or in Cambodia or wherever
wherever your obscene privilege
writes your name in my blood, it’s that simple,
you’ve always had it, that’s why it doesn’t
seem to make you sick to your stomach,
you have it, we pay for it, now
do you understand?


In the Affirmative

Race is one of those things that immediately causes most people to take a position. We have all grown up in a world that is still struggling with racial equality and we have all been exposed to the racial profiling that took place after 9/11. Outwardly, we all recognize that it is no longer PC to call someone by a racial slur or to discriminate in an overt manner—and this is where we begin to enter dangerous territory.

Many of my students have grown up in an environment that shuns racism; we all profess to believe in equality. We think the lack of lynch mobs or ethnic cleansing in our surroundings means that we’ve somehow moved past all of this. But we still have Minutemen, we still have genocide, we still have the KKK, and we still have people dragged behind pickup trucks with their faces melting against asphalt. We exist in a country that is becoming more polarized than ever and it is frankly a little frightening. We are learning to turn our backs on each other and form communities that subscribe to the same beliefs that we do.

Racial issues affect all of you.

If you think that this statement is untrue, look at the world around you. Think about your place in your community and the niche that is carved out for you by others. Where does society tell you that you can exist? What is it safe for you to be? How much of this is determined by your physical features?

On a related note, the concept of Affirmative Action was explored in Thursday’s session—something that I happen to know a little about. Some students voiced concerns over the practice while others stated that they did not support it. Let me start off by saying that I get where these students were coming from, as I was no different in college. Like it or not, however, all of you have been affected by Affirmative Action. USC as an institution values diversity and practices Affirmative Action; the term, however, does not mean what most people assume it does. In our eyes, Affirmative Action is about providing equity and access to education. You might think that such programs lend a helping hand to indigents at the expense of “more qualified” individuals; I would challenge you, however, to think about what makes one student more qualified. Is it test scores? Is it GPA? Is it the fact that you went to a fancy prep school and deserve to be at USC? Do you think that this somehow makes you better than someone else?

Now think about how many other people are just like you.

Affirmative Action aims to recognize the strengths that different individuals can bring to the table. Do Latinos and Blacks who have had to struggle to finish high school have a different perspective on the world than Asians (who might have benefited from positive aspects of the Model Minority myth)? Do these students see things in a way that you don’t? Is there a benefit to interacting with them and learning how other people think?

Affirmative Action doesn’t just apply to Blacks and Latinos, however. Are you Southeast Asian? Are you first generation? Are you from a low socioeconomic class? Did you have to work in high school to help your family? Are you from a state that does not typically send a lot of students to USC? Are you from a minority religion? Do you hold atypical political beliefs? Are you a female interested in Math or Science? Are you a male interested in Communication? If any of the above are true, then you have benefitted from the type of thinking that supports Affirmative Action.

Moreover, you all benefit from the diversity that Affirmative Action creates. The depth of experiences that you have at USC is in part due to the voices that we bring in. Every student has value.

And, to turn things a bit, if you think that Affirmative Action is wrong, let’s think about football. Many of the students on the team may have scored lower than you on the SAT or earned lower GPAs. Why aren’t as many people upset that these “lesser qualified” people were admitted? Is it because you enjoy going to football games? Do you only extract value from people when it suits you? My point is that the entire USC community benefits from the presence of gifted athletes (who manage to graduate just fine, by the way) and that these individuals—analogous to ethnic minorities—can bring something invaluable to the table.

I think that the reaction against Affirmative Action stems from fear:  we instinctively lash out in order to protect ourselves when we feel threatened by the encroachment of perceived “undesirables.” We want to secure our hard-earned victories and may feel that our achievements are cheapened by the acceptance of people whom we do not respect.

Fight it.

Fight to see the similarities that you have with others; fight to see their worth. Think about how important it is for other people to see you and fight to feel the same way about others. Fight against the indoctrination that you’ve suffered for so long that has ingrained these patterns of thinking into your minds. Fight the urge to think that you’re more important than you are. Fight the need to feel comfortable and fight the urge to judge. Fight for your life and fight for your life to be the way that it should be. Fight to understand the things that we’ve been talking about this semester; fight to find meaning in our discussions. Fight to make the world better for your children, for your friends, and for yourself. Fight for people who don’t have a voice. Fight in whatever way you can…but just fight.

It has been my pleasure to work with you this semester and there’s no real way to convey how hopeful I am that this will be a turning point in your lives. I don’t expect that you’ll all become crusaders for API rights (nor should you feel compelled to), but I do hope that we’ve been able to get you to see things for the first time or to feel empowered to make change happen. Take the critical thinking skills that you’ve learned from CIRCLE and go out and find your cause. We’ve got a long way to go, but you’ve already taken the first steps.


The Real-Life Implications of Virtual Selves

“The end is nigh!”—the plethora of words, phrases, and warnings associated with the impending apocalypse has saturated American culture to the point that we have become jaded, as picketing figures bearing signs have become a fixture of political cartoons and echoes of the Book of Revelation appear in popular media like Legion and the short-lived television series Revelations. On a secular level, we grapple with the notion that our existence is a fragile one at best, with doom portended by natural disasters (e.g., Floodland and The Day After Tomorrow), rogue asteroids (e.g., Life as We Knew It and Armageddon), nuclear fallout (e.g., Z for Zachariah and The Terminator), biological malfunction (e.g., The Chrysalids and Children of Men), and the increasingly visible zombie apocalypse (e.g., Rot and Ruin and The Walking Dead). Clearly, recent popular media offerings manifest the strain evident in our ongoing relationship with the end of days; to be an American in the modern age is to realize that everything under—and including—the sun will kill us if given half a chance. Given the prevalence of themes like death and destruction in the current entertainment environment, it comes as no surprise that we turn to fiction to craft a kind of saving grace; although these impulses do not necessarily take the form of traditional utopias, our current culture definitely seems to yearn for something—or, more accurately, somewhere—better.

In particular, teenagers, as the subjects of Young Adult (YA) fiction, have long been at the center of this kind of exploration, with contemporary authors like Cory Doctorow, Paolo Bacigalupi, and M. T. Anderson examining the myriad issues that American teenagers face as they build upon a trend that includes foundational works by Madeleine L’Engle, Lois Lowry, and Robert C. O’Brien. Arguably darker in tone than previous iterations, modern YA dystopia now wrestles with the dangers of depression, purposelessness, self-harm, sexual trauma, and suicide. For American teenagers, psychological collapse can be just as damning as physical decay. Yet, rather than ascribe this shift to an increasingly rebellious, moody, or distraught teenage demographic, we might consider the cultural factors that contribute to the appeal of YA fiction in general—and themes of utopia/dystopia in particular—as these manifestations spill beyond the confines of YA fiction, presenting through teenage characters in programming ostensibly designed for adult audiences, as evidenced by television shows like Caprica (2009-2010).

 

Transcendence through Technology

A spin-off of, and prequel to, Battlestar Galactica (2004-2009), Caprica transported viewers to a world filled with futuristic technology, arguably the most prevalent of which was the holoband. Operating on basic notions of virtual reality and presence, the holoband allowed users to, in Matrix parlance, “jack into” an alternate computer-generated space, fittingly labeled by users as “V world.”[1] But despite its prominent place in the vocabulary of the show, the program itself never seemed to be overly concerned with the gadget; instead of spending an inordinate amount of time explaining how the device worked, Caprica chose to explore the effect that it had on society.

Calling forth a tradition steeped in teenage hacker protagonists (or, at the very least, ones who belonged to the “younger” generation), our first exposure to V world—and to the series itself—comes in the form of an introduction to an underground space created by teenagers as an escape from the real world. Featuring graphic sex[2], violence, and murder, this iteration does not appear to align with traditional notions of a utopia but does represent the manifestation of Caprican teenagers’ desires for a world that is both something and somewhere else. And although immersive virtual environments are not necessarily a new feature in Science Fiction television,[3] with references stretching from Star Trek’s holodeck to Virtuality, Caprica’s real contribution to the field was its choice to foreground the process of V world’s creation and the implications of this construct for the show’s inhabitants.

Taken at face value, shards like the one shown in Caprica’s first scene might appear to be nothing more than virtual parlors, the near-future extension of chat rooms[4] for a host of bored teenagers. And in some ways, we’d be justified in this reading as many, if not most, of the inhabitants of Caprica likely conceptualize the space in this fashion. Cultural critics might readily identify V world as a proxy for modern entertainment outlets, blaming media forms for increases in the expression of uncouth urges. Understood in this fashion, V world represents the worst of humanity as it provides an unreal (and surreal) existence that is without responsibilities or consequences. But Caprica also pushes beyond a surface understanding of virtuality, continually arguing for the importance of creation through one of its main characters, Zoe.[5]

Seen one way, the very foundation of virtual reality and software—programming—is itself the language and act of world creation, with code serving as architecture (Pesce, 1999). If we accept Lawrence Lessig’s maxim that “code is law” (2006), we begin to see that cyberspace, as a construct, is infinitely malleable and the question then becomes not one of “What can we do?” but “What should we do?” In other words, if given the basic tools, what kind of existence will we create and why?

One answer presents itself in the form of Zoe, who creates an avatar that is not just a representation of herself but is, in effect, a type of virtual clone imbued with all of Zoe’s memories. Here we invoke a deep lineage of creation stories in Science Fiction that exhibit resonance with Frankenstein and even the Judeo-Christian God who creates man in his image. In effect, Zoe has not just created a piece of software but has, in fact, created life!—a discovery whose implications are immediate and pervasive in the world of Caprica. Although Zoe has not created a physical copy of her “self” (which would raise an entirely different set of issues), she has achieved two important milestones through her development of artificial sentience: the cyberpunk dream of integrating oneself into a large-scale computer network and the manufacture of a form of eternal life.[6]

Despite Caprica’s status as Science Fiction, we see glimpses of Zoe’s process in modern-day culture as we increasingly upload bits of our identities onto the Internet, creating a type of personal information databank as we cultivate our digital selves.[7] Although these bits of information have not been constructed into a cohesive persona (much less one that is capable of achieving consciousness), we already sense that our online presence will likely outlive our physical bodies—long after we are dust, our photos, tweets, and blogs will persist in some form, even if it is just on the dusty backup server of a search engine company—and, if we look closely, Caprica causes us to ruminate on how our data lives on after we’re gone. With no one to tend to it, does our data run amok? Take on a life of its own? Or does it adhere to the vision that we once had for it?

Proposing an entirely different type of transcendence, another character in Caprica, Sister Clarice, hopes to use Zoe’s work in service of a project called “apotheosis.” Representing a more traditional type of utopia in that it offers a paradisiacal space offset from the normal, Clarice’s project aims to construct a type of virtual heaven for believers of the One True God,[8] offering an eternal virtual life at the cost of one’s physical existence. Perhaps speaking to a sense of disengagement with the existent world, Clarice’s vision also reflects a tradition that conceptualizes cyberspace as a chance for humanity to try again, a blank slate upon which society can be re-engineered. Using the same principles that are available to Zoe, Clarice sees a chance to not only upload copies of existent human beings but also bring forth an entire world through code. Throughout the series, Clarice strives to realize her vision, culminating in a confrontation with Zoe’s avatar, who has, by this time, obtained a measure of mastery over the virtual domain. Suggesting that apotheosis cannot be granted, only earned, Clarice’s dream literally crumbles around her as her followers give up their lives in vain.

Although it is unlikely that we will see a version of Clarice’s apotheosis anytime in the near future, the notion of constructed immersive virtual worlds does not seem so far off. At its core, Caprica asks us, as a society, to think carefully about the types of spaces that we endeavor to realize and the ideologies that drive such efforts. If we understand religion as a set of beliefs that structures and orders this world through our belief in the next, we can see the overlap between traditional forms of religion and the efforts of technologists like hackers, computer scientists, and engineers. As noted by Mark Pesce, Vernor Vinge’s novella True Names spoke to a measure of apotheosis and offered a new way of understanding the relationship between the present and the future—what Vinge offered to hackers was, in fact, a new form of religion (Pesce, 1999). Furthermore, aren’t we, as creators of these virtual worlds, fulfilling one of the functions of God? Revisiting the overlap, noted in the paper’s opening, between doomsday/apocalyptic/dystopian fiction and Science Fiction, we see a rather seamless integration of ideas that challenges the traditional notion of a profane/sacred divide; in their own ways, the writings of religion and of science concern themselves with some of the same themes, although they may, at times, use seemingly incompatible language.

Ultimately, however, the most powerful statement made by Caprica comes about as a result of the extension of arguments made on screen:  by invoking virtual reality, the series invites viewers to consider the overlay of an entirely subjective reality onto a more objective one.[9] Not only presenting the coexistence of multiple realities as a fact, Caprica asks us to understand how actions undertaken in one world affect the other. On a literal level, we see that the rail line of New Cap City (a virtual analogue of Caprica City, the capital of the planet of Caprica)[10] is degraded (i.e., “updated”) to reflect a destroyed offline train, but, more significantly, the efforts of Zoe and Clarice speak to the ways in which our faith in virtual worlds can have a profound impact on “real” ones. How, then, do our own beliefs about alternate realities (be they heaven, spirits, string theory, or media-generated fiction) shape actions that greatly affect our current existence? What does our vision of the future make startlingly clear to us and what does it occlude? What will happen as future developments in technology increase our sense of presence and further blur the line between fiction and reality? What will we do if the presence of eternal virtual life means that “life” loses its meaning? Will we reinscribe rules onto the world to bring mortality back (and with it, a sense of urgency and finality) as Capricans did in New Cap City? Will there come a day when we choose a virtual existence over a physical one, participating in a mass exodus to cyberspace as we initiate a type of secular rapture?

As we have seen, online environments have allowed for incredible amounts of innovation and, on some days, the future seems inexplicably bright. Shows like Caprica are valuable for us as they provide a framework through which the average viewer can discuss issues of presence and virtuality without getting overly bogged down by technospeak. On some level, we surely understand the issues we see on screen as dilemmas that are playing out in a very human drama and Science Fiction offerings like Caprica provide us with a way to talk about subjects that we will confront in the future although we may not even realize that we are doing so at the time. Without a doubt, we should nurture this potential while remaining mindful of our actions; we should strive to attain apotheosis but never forget why we wanted to get there in the first place.

Works Cited

Lessig, L. (2006, January). Socialtext. Retrieved September 10, 2011, from Code 2.0: https://www.socialtext.net/codev2/

Pesce, M. (1999, December 19). MIT Communications Forum. Retrieved September 12, 2011, from Magic Mirror: The Novel as a Software Development Platform: http://web.mit.edu/comm-forum/papers/pesce.html


[1] Although the show is generally quite smart about displaying the right kind of content for the medium of television (e.g., fleshing out the world through channel surfing, which not only gives viewers glimpses of the world of Caprica but also reinforces the notion that Capricans experience their world through technology), the ability to visualize V world (and the transitions into it) is certainly an element unique to an audio-visual presentation. One of the strengths of the show, I think, is its ability to add layers of information through visuals that do not call attention to themselves. These details, which are not crucial to the story, flesh out the world of Caprica in a way that a book could not, for while a book must generally mention items (or at least allude to them) in order to bring them into existence, the show does not have to ever name aspects of the world or actively acknowledge that they exist. Moreover, I think that there is something rather interesting about presenting a heavily visual concept through a visual medium that allows viewers to identify with the material in a way that they could not if it were presented through text (or even a comic book). Likewise, reading Neal Stephenson’s The Diamond Age (which prominently features a book) allows one to reflect on one’s own interaction with the book itself—an opportunity that would not be afforded to you if you watched a television or movie adaptation.

[2] By American cable television standards, with the unrated and extended pilot featuring some nudity.

[3] Much less Science Fiction as a genre!

[4] One could equally make the case that V world also represents a logical extension of MUDs, MOOs, and MMORPGs. The closest modern analogy might, in fact, be a type of Second Life space where users interact in a variety of ways through avatars that represent users’ virtual selves.

[5] Although a full discussion is beyond the scope of this paper, Zoe also represents an interesting figure as both the daughter of the founder of holoband technology and a hacker who actively worked to subvert her father’s creation. Representing a certain type of stability/structure through her blood relation, Zoe also introduced an incredible amount of instability into the system. Building upon the aforementioned hacker tradition, which itself incorporates ideas about youth movements from the 1960s and lone tinkerer/inventor motifs from early 20th-century Science Fiction, Zoe embodies teenage rebellion even as she figures in a father-daughter relationship, which speaks to a particular type of familial bond and protection, and perhaps stability.

[6] Although the link is not directly made, fans of Battlestar Galactica might see this as the start of resurrection, a process that allows consciousness to be recycled after a body dies.

[7] In addition, of course, is the data that is collected about us involuntarily or without our express consent.

[8] As background context for those who are unfamiliar with the show, the majority of Capricans worship a pantheon of gods, with monotheism looked upon negatively as it is associated with a fundamentalist terrorist organization called Soldiers of The One.

[9] One might in fact argue that there is no such thing as an “objective” reality as all experiences are filtered in various ways through culture, personal history, memory, and context. What I hope to indicate here, however, is that the reality experienced in the V world is almost entirely divorced from the physical world of its users (with the possible exception of avatars that resemble one’s “real” appearance) and that virtual interactions, while still very real, are, in a way, less grounded than their offline counterparts.

[10] Readers unfamiliar with the show should note that “Caprica” refers to both the name of the series and a planet that is part of a set of colonies. Throughout the paper, italicized versions of the word have been used to refer to the television show while an unaltered font has been employed to refer to the planet.


Mutable Masses?

It’s the End of the World as We Know It (And I Feel Fine)

Notably, however, the fears associated with the masses have not been limited to one particular decade in American history:  across cultures and times, we can witness examples akin to tulip mania where unruly mobs exhibited relatively irrational behavior. Given the recurring nature of this phenomenon, which receives additional credence from psychological studies exploring groupthink and conformity (Janis, 1972; Asch, 1956), we might choose to examine how, if at all, the cultural critiques of the 1950s apply to contemporary society.

Recast, the criticisms of mass culture presumably resonate today in a context where popular culture holds sway over a generally uncritical public; we might convincingly argue that media saturation has helped produce a modern society in which celebrities run wild while displaying sexual exploits like badges of honor, traditional communities have collapsed, and the proverbial apocalypse appears closer than ever. Moreover, having lost sight of our moral center while further solidifying our position as a culture of consumption since the 1950s, the masses have repeatedly demonstrated their willingness to flash a credit card in response to advertising campaigns and to purchase unnecessary goods hawked by celebrity spokespeople—a process that demonstrates a marked fixation on appearance and the image, reminiscent of critiques drawn from A Face in the Crowd (Hoberman, 2008a; Ecksel, 2008). Primarily concerned with the melding of politics, news, and entertainment, which harkens back to Kierkegaard-inspired critiques of mass culture, current critics charge that the public has at long last become what we most feared:  a mindless audience with sworn allegiances born out of fealty to the almighty image (Hoberman, 2008a).

Arguably the most striking (or memorable) recent expression of image, and the subsequent commingling between politics and entertainment, centered around Sarah Palin’s campaign for office in 2008. Indeed, much of the discussion regarding Palin centered on her image and colloquialisms rather than focusing solely on her abilities.[1] Throughout her run, Palin positioned herself as an everyman figure, summoning figures such as “Joe Six-Pack” and employing terms such as “hockey mom” in order to convey her relatability to her constituents.[2] In a piece on then-Vice-Presidential candidate Sarah Palin, columnist Jon Meacham questions this practice by writing:  “Do we want leaders who are everyday folks, or do we want leaders who understand everyday folks?” (2008). Palin, it seemed to Meacham, represented much more of the former than the latter; this position then leads to the important suggestion that Palin was placed on the political bill in order to connect with voters (2008). Suddenly, a parallel between Palin and Lonesome Rhodes from A Face in the Crowd becomes almost self-evident.

At our most cynical, we could argue that Palin is a Lonesome-type figure, cleverly manipulating her image in order to connect with the disenfranchised and disenchanted. More realistically, however, we might consider how Palin could understand her strength in terms of her relatability instead of her political acumen; she swims against the current as a candidate of the people (in perhaps the truest sense of the term) and provides hope that she will represent the voice of the common man, in the process challenging the status quo in a government that has seemingly lost touch with its base. In some ways, this argument continues to resonate in post-election actions that demonstrate increasing support for the Tea Party movement.

However, regardless of our personal political stances, the larger pertinent issue raised by A Face in the Crowd is the continued existence of an audience whose decision-making process remains heavily influenced by image—we actually need to exert effort in order to extract our opinion of Sarah Palin the politician from the overall persona of Sarah Palin. Author Mark Rowlands argues that a focus on image—and a reliance on the underlying ethereal quality described by Daniel Boorstin as being “well known for [one’s] well-knownness” (Boorstin, 1962, p. 221)—however admittedly powerful, is ultimately damning, as the public’s inability to distinguish between items of quality leads us to focus on the wrong questions (and, perhaps worse, to not even realize that we are asking the wrong questions) in ways that have very real consequences. Extrapolating from Rowlands, we might argue that, as a culture obsessed with image and reputation, we have, in some ways, forgotten how to judge the things that really matter because we have lost a sense of what our standards should be.

Ever the Same?

So while the criticisms of the Frankfurt School might appear to hold true today, we also need to realize that modern audiences exist in a world that is, in some ways, starkly different from that of the 1950s. To be sure, the mainstream media continues to exist in a slightly expanded form, but new commentary on the state of American culture must account for the myriad ways in which current audiences interact with the world around them. For instance, work published after Theodor Adorno’s time has argued against the passive nature of audiences, recognizing the agency of individual actors (Mattson, 2003; Schudson, 1984).[3] Moreover, this new activity on the part of audiences has done much to commingle the once distinctly separate areas of high and low culture in a process that would have likely confounded members of the Frankfurt School. The current cultural landscape encompasses remix efforts such as Auto-Tune the News along with displays of street art in museum galleries; projects once firmly rooted in folk or pop art have transcended definitional boundaries to become more accepted—and even valued—in the lives of all citizens. While Adorno might be tempted to cite this as evidence of high culture’s debasement, we might instead argue that these new manifestations have challenged the long-held elitism surrounding the relative worth of particular forms of art.

Additionally, examples like Auto-Tune the News suggest that advances in technology have also had a large impact on the cultural landscape of America over the past half century, with exponential growth occurring after the widespread deployment of the Internet and the resulting World Wide Web. While the Internet certainly provided increased access to information, it also created the scaffolding for social media products that allowed new modes of participation for users. Viewed in the context of image, technology has helped to construct a world in which reputations are made and broken in an instant and more information circulates in the system than ever before; the advent of these technologies, then, has not only increased the velocity of the system but also amplified it.

Although the media often showcases deleterious qualities of the masses’ relationship with these processes (the suicide of a student at Rutgers University being a recent and poignant example), we are not often exposed to the incredible pro-social benefits of a platform like Twitter or Facebook. While we might be tempted to associate such pursuits with online predators (a valid concern, to be sure) or, at best, to dismiss them as unproductive in regard to civic engagement (Gladwell, 2010), to do so would be to ignore the powerfully positive uses of this technology (Burnett, 2010; Lehrer, 2010; Johnston, 2010). Indeed, we need only look at a newer generation of activist groups who have built upon Howard Rheingold’s concept of “smart mobs” in order to leverage online technologies to their benefit (2002)—a recent example can be found in the efforts of groups like The Harry Potter Alliance, Invisible Children, and the Kristin Brooks Hope Center to win money in the Chase Community Giving competition (Business Wire, 2010). Clearly, if the masses can self-organize and contribute to society, critiques that paint them as nothing more than passive receptors of media messages need to be revised.

Reconsidering the Masses

If we accept the argument that audiences can play an active part in their relationship with media, we then need to look for a framework that begins to address media’s role in individuals’ lives and to examine the motivations and intentions that underlie media consumption. Although we might still find that media is a corrosive force in society, we must also realize that, while potentially exploiting an existing flaw, it does not necessarily create the initial problem (MacGregor, 2000).

A fundamental building block in the understanding of media’s potential impact is the increased propensity for individuals (particularly youth) to focus on external indicators of self-worth, with the current cultural climate of consumerism causing individuals to dwell on their inadequacies as they concentrate on what they do not have (e.g., physical features, talent, or clothes) as opposed to their strengths. Simultaneously an exacerbation of this problem and an entity proffering solutions, constructs like advertising provide an easy way for youth to compensate for their feelings of anxiety by instilling brands as a substitute for value:  the right label can confer a superficial layer of prestige and esteem upon individuals, which can act as a temporary shield against criticism and self-doubt. In essence, one might argue that if people aren’t good at anything, they can still be associated with the right brands and be okay. Although we might be tempted to blame advertising for this situation, it merely serves to exploit our general unease about our relationship to the world, a process also reminiscent of narcissism (Lasch, 1979).

Historian Christopher Lasch goes on to argue that, though once anchored by institutions such as religion, we have become generally disconnected from these traditional moorings and have thus come to substitute media messages and morality tales for actual ethical and spiritual education (1979). The overlapping role of religion and advertising is noted by James Twitchell, who contends that, “Like religion, which has little to do with the actual delivery of salvation in the next world but everything to do with the ordering of life in this one, commercial speech has little to do with material objects per se but everything to do with how we perceive them” (1996, p. 110). Thus, we might classify religion, advertising, entertainment, and celebrity as examples of belief systems (i.e., certain ways of seeing the world, complete with their own sets of values) and use these paradigms to begin to understand their respective (and ultimately somewhat similar!) effects on the masses.

A Higher Power

Ideologies such as those found in popular culture, religion, or advertising tell believers, in their own ways, what is (and is not) important in society, something that Twitchell refers to as “magic” (1996, p. 29). Each manifestation also professes a particular point of view and attempts to integrate itself into everyday life, drawing on our desire to become part of something (e.g., an idea, a concept, or a movement) that is larger than ourselves. Perhaps most importantly, the forces of advertising, entertainment, religion, and art (as associated with high/pop/folk culture) play on this desire in order to allow humans to give their lives meaning and worth in terms of the external:  God, works of art, and name brands all serve as tools of classification. While cynics might note that this stance bears some similarities to the carnival sideshows of P. T. Barnum—it does not matter what is behind the curtain as long as there is a line out front (Gamson, 1994; Lasch, 1979)—these belief systems survive because they continue to speak to a deep desire for structure; the myth of advertising works for the same reasons that we believe in high art, higher education, and higher powers. Twitchell supports this idea by mentioning that “the real force of [the culture of advertising] is felt where we least expect it:  in our nervous system, in our shared myths, in our concepts of self, and in our marking of time” (1996, p. 124). Constructs like advertising or entertainment, it seems, not only allow us to assemble a framework through which we understand our world but also continually inform us about who we are (or who we should be) through a collection of narratives that serves to influence the greater perceptions of individuals in a manner reminiscent of the role of television in Cultivation Theory (Gerbner & Gross, 1976). This process of ordering and imbuing value ultimately demonstrates how overarching ideologies can not only create culture but also act to shape it, a process evidenced by the ability of the aforementioned concepts to consume and/or reference previously shared cultural knowledge while simultaneously contributing to the cultural milieu.

Given our reconsideration of mid-century cultural critiques, it follows that we should reevaluate proposed solutions to the adverse issues present within mass culture. We recall the advice of A Face in the Crowd’s Mel Miller (i.e., “We get wise to them”) and reject its elitist overtones while remaining mindful of its core belief. We recognize that priding ourselves on being smart enough to see through the illusions present in mass culture, while pitying those who have yet to understand how they are being herded like so many sheep, makes us guilty of the narcissism we once ascribed to the masses—and perhaps even more dangerous than the uneducated because we are convinced that we know better. We see that aspects of mass culture address deeply embedded desires and that our best hope for improving culture is to satisfy these needs while educating audiences so that they can better understand how and why media affects them. Our job as critics is to encourage critical thinking on the part of audiences, dissecting media and presenting it to individuals so that they can make informed choices about their consumption patterns; our challenge is to convincingly demonstrate that engagement with media is a crucial and fundamental part of the process. If we subscribe to these principles, we can preserve the masses’ autonomy and not merely replace one dominant ideology with another.


[1] Certainly being female did not help here, as American women are typically subject to a “halo effect” wherein their attractiveness (i.e., appearance) colors how they are perceived (Kaplan, 1978).

[2] Palin has continued the trend, currently employing the term “mama grizzlies,” a call to arms that hopes to rally women’s willingness to fight in order to protect the things that they believe in. Interestingly, a term that reaffirms the traditional role of women as nurturing matriarchs has been linked to feminist movements, a move that seems to confuse the empowerment of women with a socially conservative construct of their role in American life (Dannenfelser, 2010).

[3] We can also see much work conducted in the realm of fan studies that supports the practice of subversive readings or “textual poaching,” a term coined by Henry Jenkins (1992), in order to discuss contemporary methods of meaning making and resistance by fans.


Love Me or Hate Me, Still an Obsession

Reacting to atrocities witnessed throughout the course of World War II, Americans in the 1950s became obsessed with notions of power and control, fearing that they would be subsumed by the invisible hand of a totalitarian regime. In particular, the relatively young medium of television became suspect as it represented a major broadcast system that seemed to have an almost hypnotic pull on its audience, leaving viewers entranced by its images. And images, according to author and historian Daniel Boorstin, had been growing increasingly prominent since the 19th century as part of the Graphic Revolution, replete with the power to dissociate the real from its representation (1962). For cultural critics still reeling from the aftereffects of Fascism and totalitarianism, this was a dangerous proposition indeed.

Although these underlying anxieties of mid-century American society could be examined via a wide range of anthropological lenses and frameworks, visual media has historically provided a particularly vivid manifestation of the fears latent in the people of the United States (Haskell, 2004). This is, of course, not to imply that visual media is necessarily the best or only means by which we can understand prevailing ideologies in the years after World War II, but merely one of the most visible. However, as a critical examination of the entire media landscape of the 1950s would be beyond the scope of a single paper, discussion shall be primarily concentrated on Elia Kazan’s 1957 movie A Face in the Crowd, with particular attention paid to the contrasting channels of cinema and television.[1] This paper will seek to briefly position A Face in the Crowd in the larger context of paranoia-driven cinema of the 1950s before using the film as an entryway to discuss critiques of mass culture. Given the film’s apparent sustained resonance, as indicated by its relatively recent mention (Vallis, 2008; Hoberman, 2008b; Franklin, 2009), the arguments of Critical Theory will then be applied to modern American culture in an attempt to ascertain their continued validity. Finally, an argument will be made that acknowledges the potential dangers facing mass culture in the 21st century but also attempts to understand the processes that underlie these pitfalls and provides a suggestion for recourse in the form of cultural and media literacy.

Paranoia, Paranoia, Everyone’s Coming to Get Me

The post-war prosperity of the 1950s caused rapid changes in America, literally altering the landscape as families began to flood into the newly-formed suburbs. With the dream and promise of upward social mobility firmly ensconced in their heads, families rushed to claim their piece of the American dream, replete with the now-iconic front yard and white picket fence. And yet, ironically, a new set of worries began to fester underneath the idyllic façade of the suburbs as the troubles of the city were merely traded for fears of paranoia and invasion; the very act of flight led to entrapment by an ethos that subtly precluded the possibility of escape.

As with many other major cultural shifts, the rapid change in the years following World War II caused Americans to muse over the direction in which they were now headed; despite a strong current of optimism that bolstered dreams of a not-far-off utopia, there remained a stubborn fear that the quickly shifting nature of society might have had unanticipated effects (Murphy, 2009). Life in the suburbs, it seemed, was too good to be true and inhabitants felt a constant tension as they imagined challenges to their newly rediscovered safety:  from threats of invasion to worries about conformity, and from dystopian futures to a current reality that could now be obliterated with nuclear weapons, people of the 1950s continually felt the weight of being a society under siege. An overwhelming sense of doubt, and more specifically paranoia, characterized the age, and latent fears manifested in media as the public began to struggle with the realization that the suburbs did not fully represent the picturesque spaces that they had been conceived to be. In fact, inhabitants were assaulted on a variety of levels as they became disenchanted with authority figures, feared assimilation and mind control (particularly through science and/or technology), began to distrust their neighbors (who could easily turn out to be Communists, spies, or even aliens!), and felt haunted by their pasts, all of which filled the movie screens of the decade (Jensen, 1971; Murphy, 2009; Wolfe, 2002).[2] Following solidly in this tradition, Kazan’s A Face in the Crowd picks up on some of the latent strains of paranoia in American culture while simultaneously serving as a platform for a set of critiques regarding mass culture.

Somewhere, a Star Is Made

The storyline of A Face in the Crowd is rather straightforward and yet deceptively complex in its undertones:  on the surface, we experience a rather heavy-handed morality tale in the form of country bumpkin Larry “Lonesome” Rhodes, a relative nobody who is plucked from obscurity and made (and subsequently broken) through powers associated with television. Yet, it is only when we begin to connect the movie to a larger societal context that we begin to understand the ramifications of the film’s message; a careful examination of A Face in the Crowd reveals striking suspicions regarding the role that media plays (in this case, primarily television and cinema) in shaping American culture. Stars, director Elia Kazan argues, are not so much born as made, a distinction that portends dire consequences.

It is worth noting that Kazan’s film was made during a time when the concept of the “celebrity” was being renegotiated by America; for a large part of its history, the United States, firmly grounded in a Puritan work ethic, had honored heroes who exemplified ideals associated with a culture of production and was struggling to reconcile these notions in the presence of an environment whose emphasis was now focused on consumption. Although modern audiences might initially find this shift difficult to appreciate, one need only consider that the premium placed on production is so central to American ideology that it continues to linger today:  in a culture that exhibits rampant consumerism, we still value the “self-made man” and sell the myth of America as a place where anyone can achieve success through hard work. To abandon these ideas would necessitate that we reinterpret the very meaning of “America.” Thus, we become more sympathetic to the critics of the day who lamented the loss of the greatness of man and bristled against the notion that fame or celebrity could be manufactured—such a system would only result in individuals who were lacking and unworthy of their status (Gamson, 1994; Benjamin, 1973).

Such is the case, it seems, with Larry Rhodes, who is discovered by roving reporter Marcia Jeffries in an Arkansas jail. Although it cannot be denied that Rhodes has some modicum of talent and a certain charisma that comes from being unafraid to speak one’s mind, Marcia ushers Rhodes onto the path of greatness by dubbing him “Lonesome” and thus creates a character that transforms Rhodes from a despondent drunk to a winsome drifter. This scene—the first major one in the movie—thus introduces the important notion that those involved in the media can be implicitly involved in the manipulation of the information that travels over the airwaves. Subtly adding to the insidious nature of the media, A Face in the Crowd portrays Marcia as a character who seems likable enough but who is also, in a way, exploiting the people in jail as she rushes in with her tape recorder, intent on prying the stories from the characters she finds (or creates!), and does not exhibit much concern for truly understanding why these men are imprisoned in the first place. Taken to an extreme, we later come across the character of The General, who further perverts the connection between media and power as he conspires with Lonesome to remake the image of Senator Worthington Fuller as the senator runs for President.

Yet, as Lonesome Rhodes grows in his role as a media personality, he quickly demonstrates that the power to manipulate does not lie solely with those who sit behind the cameras. In Memphis, Rhodes incites a riot against the Luffler mattress company and also solicits donations in order to help a Black woman rebuild her house. In light of this, we can see that while Kazan focuses on the negative implications of television and celebrity, the relative good or bad that comes from these actions is not necessarily the point—instead, the one constant in all of the depicted scenarios is a public that is manipulated into performing actions on behalf of others. Although the characters of Lonesome and The General are vilified throughout the film, it is the masses for which Kazan demonstrates true disdain.

Extraordinary Popular Delusions

Perhaps nowhere is this contempt more apparent than at the end of the film where, in an attempt to offer a small moment of solace to Marcia after her unmasking of Lonesome, writer Mel Miller notes, “We get wise to them, that’s our strength” (Kazan, 1957). And Miller is not wrong:  Western tradition has long recognized the correlation between knowledge and power, and Miller’s assertion touches upon the revelatory clout inherent in the realignment of perception and reality as noted by public relations guru Howard Bragman (2008). A more critical examination of the film’s closing scene, however, raises an important question:  Who is Miller’s “we”? Although one might be tempted to read this line as indicative of an egalitarian philosophical view, it is important to note that the only two characters in the shot represent the film’s arguably upper-middle class, and pointedly Eastern-educated, elite—nowhere to be seen are representatives of the small Arkansas town from the film’s opening or denizens of Memphis, both of whom serve to characterize the majority of Lonesome’s devoted viewers.[3] In fact, if we take time to reflect upon the movie, we realize that the majority of the audience was only alerted to Lonesome’s dual nature after Marcia flipped a control room switch and revealed the underlying deterioration; the masses oscillated from one position to the next without understanding how or why and once again adopted a passive stance in their relationship with media. Moreover, as Courtney Maloney points out, Kazan’s depiction of the agency of the masses is actually limited in scope:  despite a montage of audience members vehemently phoning in, sponsors are simultaneously shown to be acting independently as they withdraw their association with Lonesome (1999). Further, the subtext of the scene distances the rational decision-making of the truly powerful from the impassioned beseeching of the masses, likening the power of the latter to that of a mob. Knowledge and its associated authority, clearly, are afforded to a select group.

This idea, that the world can be divided between those who “get wise” and those who do not, serves to develop a rather sharp classist criticism against the medium of television and those who would watch it:  moviegoers, by virtue of witnessing Kazan’s work, find themselves elevated in status and privy to “the man behind the curtain” (to borrow a phrase). In contrast, the malleable masses were considered to be pacified and placated by idealistic portrayals of life in the 1950s in the form of television programs like Leave It to Beaver, The Donna Reed Show, and The Adventures of Ozzie and Harriet. Clearly, Kazan creates a dichotomy imbued with a value judgment descended from the thoughts of prominent thinkers in the Frankfurt School who, as far as aesthetics were concerned, preferred the high culture of cinema to the conformity and manipulated tastes of television (Horkheimer & Adorno, 2002; Adorno, 1985; Quart, 1989). This distinction between high and low culture would be a crucial supporting idea for critics, as a prominent fear of mass culture was that it portended a collapse between concepts (e.g., fame, celebrity, or intellectual value) of objectively different quality, essentially rendering all manifestations the same and therefore all equally mundane (Boorstin, 1962; Hoberman, 2008b; Kierkegaard, 1962). Even worse for critics, perhaps, was the perception of the masses’ refusal to grow out of their immature interests, a behavior that was characterized as both childlike and stubborn (Adorno, 1985).

And the fears of such theorists, all of whom were reacting to recent and rapid advances in broadcast technology, were not unfounded. Consider, for example, that radio had been popularized only a few decades prior and had vastly altered critics’ understanding of media’s potential impact, creating a precedent as it proliferated across the country and began to develop a platform for solidarity and nationalism. Yet, while the effects of radio were widely viewed as pro-social, due in part to its propagation of orchestral music and transmission of fireside chats, television was seen as a corrosive force on society that spurred on the destruction of culture instead of enriching it.[4] For the critics of the Frankfurt School, television was indicative of an entrenched sentiment that regarded mass-produced culture as formulaic and perfectly suitable for a generation of passive consumers who sat enraptured in front of the glowing set. Associating the potential dissemination of propagandist ideology with television as a form of mass broadcast, cultural theorists evoked notions of totalitarian regimes akin to those of Hitler and Stalin in an effort to illustrate the potential subjugation of individual thought (Mattson, 2003). These simmering fears, amplified by their concurrence with the rising threat of Communism and collectivist cultures, found fertile soil in the already anxiety-ridden ethos of the United States during the 1950s.


[1] It should be noted, however, that the comics of this time—those that belong to the end of the Golden Age and beginning of the Silver Age—also provide an additional understanding of the ways in which Americans indirectly wrestled with their fears.

[2] For a more exhaustive list of movies that support this point, see Wolfe, 2002.

[3] Let us also not forget the fact that Lonesome exhibits a rather patronizing attitude toward his audience in his later career, instituting the Cracker Barrel show with its manufactured country lackeys (Yates, 1974). In contrast to his first stint in Memphis, Lonesome has begun to embrace his country image as a means (if an inauthentic one) to connect with his audience, a point of contention to which we will return.

[4] Curiously, however, we see that this relationship between presidential addresses (like the aforementioned fireside chats) and mass media did not elicit notable complaints from critics who were generally wary of the merging of politics and entertainment (Quart, 1989; Benjamin, 1973). Although a larger discussion is warranted regarding the subtleties of this distinction, I would suggest that part of the differentiation stems from a high-low culture dichotomy. Although critics linked the negative presence of television with corporate advertising, James Twitchell suggests that there has always been a rather intimate relationship between arts and commerce, most saliently exhibited by wealthy citizens or entities who act as patrons (Twitchell, 1996).


The Truth Shall Set You Free?

Young people handle dystopia every day:  in their lives, their dysfunctional families, their violence-ridden schools.

—Lois Lowry[1]

The Age of Information.

Today, more than ever, individuals are awash in a sea of information that swirls around us, as invisible as it is inescapable. In many ways, we are still grappling with the concept as we struggle to sort, filter, and conceptualize that which surrounds us. We complain about the overbearing nature of algorithms—or, perhaps more frighteningly, do not comment at all—but this is not the first time that Western society has pondered the role and influence of information in our lives.

Access to information provides an important thematic lens through which we can view dystopic fiction and although it does not account for the entirety of the genre’s appeal in and of itself (or, for that matter, the increase in its popularity), we will see that understanding the attraction of dystopia provides some insight into the societies that produce it and elucidates the ways in which the genre allows individuals to reflect on themes present in the world around them—themes that are ultimately intimately connected with the access and flow of information. My interest here lies specifically in YA dystopic fiction and its resonance with the developmental process of teenagers.

Lois Lowry’s quote suggests that today’s youth might be familiar with tangible aspects of dystopia even if they do not necessarily exist in a state of dystopia themselves; dystopia, then, is fundamentally relatable to youth.[2] Interpersonal violence in schools—on both the physical and virtual levels—has become a growing problem and can be seen as a real-life analogue to the war-torn wastelands of YA dystopia; although the physical destruction present in fiction might not manifest in the everyday, youth may identify with the emotional states of those who struggle to survive.[3] And, given the recent and high-profile nature of bullying, issues of survival are likely salient for modern youth.[4]

It should come as no surprise that Lowry, as a writer, primarily describes the concept of dystopia in literary terms, as does literary critic Darko Suvin; while a valid, if limited, perspective, this does not preclude the term from also possessing socio-political implications, with one potentially arguing that the relatable nature of dystopia extends far beyond the iterations outlined by Lowry into the realm of ideology.[5] On a basic level, dystopia often asks protagonists to perform a type of self-assessment while simultaneously evaluating preexisting hierarchical structures and systems of authority.[6] Given that this process asks individuals to contrast themselves with the society that surrounds them, one might make the argument that the themes of utopia and dystopia possess an implicit political element, regardless of authors’ intentions.

Moreover, consider the prevalent construct of the secret as a defining characteristic of dystopian societies like those presented in the classic works of Brave New World and Nineteen Eighty-Four.[7] Often located in the cultural history of the dystopia (e.g., “What events caused us to reach this point?”) or the sustained lies of the present (e.g., “This is for your protection”), acquisition of new (hidden) knowledge represents a fundamental part of the protagonist’s—and, by extension, the reader’s—journey. For young adults, this literary progression can mirror the development occurring in real life as individuals challenge established notions during the coming-of-age process; viewed through the lens of anthropology, dystopian fiction represents a liminal space for both the protagonist and the reader in which old assumptions and knowledge are questioned during a metaphorical rite of passage.[8],[9] And, although the journey itself provides a crucial model trajectory for youth, perhaps more important is the nature of the secret being kept:  as Lowry alludes to, modern youth undoubtedly realize that their world—our world—like that of any dystopia, contains elements of ugliness. The real secret, then, is not the presence of a corrupted underbelly but rather why rot exists in the first place.

Aside from the type of knowledge or even the issues latent in its accessibility, however, we can see that modern culture is undergoing a rather radical reconfiguration with regard to the social structures surrounding information flow. Although we still struggle with the sometimes antagonistic relationship between citizens and the State mirrored in classic and YA dystopia, we have also developed another dimension:  citizen versus citizen. Spurred on by innovations in technology that have made mobile gadgetry increasingly affordable and accessible to the public, on-location reporting has grown from the relatively useful process of crowdsourcing information to a practice that includes surveillance, documentation, and vigilante justice as we display our moral outrage over someone else’s ungodly behavior through paparazzi photos, tweets of overheard conversations, and the ever-popular blog—we, in effect, have assumed the mantle of Big Brother. It would seem that, like Dr. Moreau, we have been granted knowledge and ability without wisdom.

Moreover, let us consider how youth currently exist in a culture of confession that was not apparent during previous cycles of utopia/dystopia. Spurred on in part by daytime talk shows, reality television, press conference apologies, and websites like PostSecret, the current environment is suffused with secrets and those willing to share their intimate stories for a price. Somewhat in opposition to confession’s traditional role in Catholicism, secrets now play an active role in public life despite their private nature, a process that mirrors the juxtaposition of personal and public histories by protagonists in YA dystopia.[10],[11] Moreover, we quickly come to see the increased relevancy of this trend when we consider how individuals, groups, organizations, and societies begin to define themselves in terms of the secrets that they hold about others and themselves. The prevalence of events like corporate espionage, copyright infringement lawsuits, and breakdowns in communication between youth and parents all point to entities that wish to contain and restrict information flow. If being an American in the 20th century meant being defined by material possessions, being an American in the 21st century means being defined by information and secrets. And, if this is indeed the case, how might we view our existence as one that occurs in a series of ever-expanding dystopias? As it turns out, Lowry might have been more correct than she realized when she noted young people’s familiarity with dystopia.

But perhaps this development is not so surprising if we consider the increasing commodification of knowledge in postmodern culture. If we subscribe to Jean-Francois Lyotard’s argument regarding the closely intertwined relationship between knowledge and production—specifically, that new knowledge is cultivated in order to further production—and therefore that information sets are a means to an end and not an end in and of themselves, we witness a startling change in the relationship between society and knowledge.[12] In opposition to the idealistic pursuit that occurred during the Enlightenment period, modern conceptualizations seem to understand knowledge in terms of leverage—in other words, we, like all good consumers, perennially ask the question, “What can you do for me?” Furthermore, the influence of commercialism on Education (i.e., the institution charged with conveying information from one generation to the next) has been probed, with some conjecturing that educational priorities might be dictated by concerns of the market.[13] Notably, these cultural shifts have not disavowed the value of knowledge but have changed how such worth is determined and classified.

The Frankfurt School’s pessimistic views of mass culture’s relationship with economic influences and independent thought aside, Lyotard also points to the danger posed by the (then) newly-formed entity of the multinational corporation as a body that could potentially supersede or subvert the authority of the nation-state.[14] Businesses like Facebook and Google accumulate enormous amounts of information (often with our willing, if unwitting, participation) and therefore amass incredible power, with the genius of these organizations residing in their ability to facilitate access to our own information! Without castigating such companies—although some assuredly do—we can glimpse similarities between these establishments’ penchant for controlling the dissemination of information and the totalitarian dictatorships prevalent in so many dystopian societies. In spite of the current fervor surrounding the defense of rights outlined in the Constitution, we largely continue to ignore how companies like Google and Facebook have gained the potential to impact concepts like freedom of assembly, freedom of speech, and freedom of information; algorithms designed to act as filters allow us to cut through the noise but also severely reduce our ability to conceptualize what is missing. These potential problems, combined with current debates over issues like privacy, piracy, and Net Neutrality, indicate that power no longer resides solely in knowledge but increasingly in access to it.


[1] Lois Lowry, quoted in Carrie Hintz and Elaine Ostry, Utopian and Dystopian Writing for Children and Young Adults. (New York: Routledge, 2003).

[2] One might even argue that those who read dystopian fiction most likely do not inhabit a dystopian world, for they would not have the leisure time to consume such fiction.

[3] This point, of course, should not be taken in a manner that discounts the legitimate struggles of children who grow up in conflict states.

[4] See Ken Rigby, New Perspectives on Bullying. (London: Jessica Kingsley Publishers, 2002) and Marilyn A. Campbell, “Cyber Bullying: An Old Problem in a New Guise?” Australian Journal of Guidance and Counseling 15, no. 1 (2005): 68-76.

[5] Clare Archer-Lean, “Revisiting Literary Utopias and Dystopias: Some New Genres.” Social Alternatives 28, no. 3 (2009): 3-7.

[6] Patricia Kennon, “‘Belonging’ in Young Adult Dystopian Fiction: New Communities Created by Children.” Papers: Explorations into Children’s Literature 15, no. 2 (2005): 40-49.

[7] Patrick Parrinder, “Entering Dystopia, Entering Erewhon.” Critical Survey 17, no. 1 (2005): 6-21.

[8] Hintz and Ostry, Utopian and Dystopian. 2003.

[9] Parrinder, “Entering Dystopia, Entering Erewhon.” 2005.

[10] Shannon McHugh and Chris Tokuhama, “PostSecret: These Are My Confessions.” The Norman Lear Center. June 10, 2010. http://blog.learcenter.org/2010/06/postsecret_these_are_my_confes.html

[11] John Stephens, “Post-Disaster Fiction: The Problematics of a Genre.” Papers: Explorations into Children’s Literature 3, no. 3 (1992): 126-130.

[12] Jean-Francois Lyotard, The Postmodern Condition: A Report on Knowledge. (Manchester: Manchester University Press, 1979).

[13] Suzanne de Castell and Mary Bryson, “Retooling Play: Dystopia, Dysphoria, and Difference.” In From Barbie to Mortal Kombat, edited by Justine Cassell and Henry Jenkins. (Cambridge: The MIT Press, 1998).

[14] Lyotard, The Postmodern Condition. 1979.


I Believe That Children Are Our Future

Kids say the darndest things. Or so we’re told. Maybe, then, it is only fitting that we have turned children’s responses into a form of entertainment, as adults exhibit a general reluctance to truly understand what children are saying; instead of striving to understand the process of meaning making in the world of children, we filter their words through perspectives that, in some cases, have entirely forgotten what it means to be a kid.

In June 2011, an article published in the Wall Street Journal sparked robust debate about the appropriateness of the themes proffered by current Young Adult (YA) fiction, a debate that ultimately culminated in a virtual discussion identified by the hashtag “#YASaves” on the social messaging service Twitter.[1] Although some of the themes mentioned in the #YASaves discussion, like self-harm, eating disorders, and abuse, seem outside the scope of YA dystopia, the larger issue of concern over youth’s exposure to “darkness” speaks to an overarching perception of children derived from views prevalent in Romanticism.

Consistent with the Romantic idolization of nature, children were heralded as pure symbols of the future who had not yet conformed to the mores of society.[2] (And here we see the humor presented by shows like Kids Say the Darndest Things, for we, as “knowing” adults, can juxtapose the answers of children with the “correct” responses.) Informed by a Romantic tradition that presupposed the legitimacy of children’s perspectives, privileging them over those of more traditional authorities, this stance also suggests that teenage protagonists are largely not responsible for understanding the intricacies of how their environments operate, expecting the realized world to instead align with their personal vision. Illustrating the potential pitfall of this practice, we need only look back a few years to the exclusive utopian vision promoted by President George W. Bush; dystopian for everyone who did not share his view, Bush’s “utopia” legitimized only one version of the truth (his).[3] Although discontent may be an integral part of the impetus to change, we begin to glimpse elements of narcissism and indignation as protagonists develop a moral imperative for their actions.

Building upon this model (and undoubtedly bolstered by the counter-cultural movements of the 1960s), mid-20th-century YA fiction increasingly began to shoulder youth with the responsibility and expectation of overthrowing the generations that had come prior while simultaneously delegitimizing the state of adolescence through trajectories that necessitated the psychological growth of protagonists.[4] In order to save the world, teenage protagonists must inevitably sacrifice their innocence and thus become emblematic of the very institution they seek to oppose.

And even if the teenage protagonists of YA fiction represent those select few who transcend the impulse to do nothing, are they ultimately reactionary and thus not truly empowered? An initial reading of genres like YA dystopian fiction might suggest that readers can extract philosophical lenses or skills through their identification with protagonists who struggle not only to survive but to thrive. However, further rumination causes one to question the accessibility of the supposed themes of empowerment at play:  although characters in dystopian fiction provide value by suggesting that hegemonic forces can be challenged, the trajectories of these extraordinary figures rarely do much to actively cultivate or encourage the development of similar abilities in the real world. In essence, young readers are exposed to the ideals, but not to realistic, actionable steps. Furthermore, although Roberta Seelinger Trites correctly cites power and powerlessness as integral issues in YA dystopia, one is left to question whether true power is a result of internal struggle and achievement or is instead conferred upon the protagonist by some external force.[5] Perhaps a product of a youth mindset that tends to focus on the self, teenage protagonists often fail to recognize (and thus comment on) the role of external factors that aid their quests; Katniss Everdeen in Suzanne Collins’ The Hunger Games trilogy, for example, routinely fails to mention (or seemingly appreciate) the ways in which her successes are intimately connected to those who bestow gifts of various kinds upon her. Further challenging notions of empowerment, although Katniss develops throughout the course of the trilogy, she gives no indication that she would have become involved in rebellion had she not been forced (i.e., chosen) into a situation that she could not escape.

Echoing this idea, Lara Cain-Gray sees similar trends in the dystopian tendencies of teen realist fiction. In her analysis of Sonya Hartnett’s Butterfly, Cain-Gray argues that the protagonist, Plum, longs for some measure of extraordinariness—a saving grace from a dystopian world born out of banality.[6] Here again we see that agency is ascribed to an external source as characters yearn for salvation; individuals long for someone to save them because they have not yet learned how to save themselves. Regardless of later strides made by Plum, a lack of scaffolding means that her model remains inaccessible to readers unless they have also received a jump start. If we refer back to the idea that utopia and dystopia inherently contain political elements, it seems to follow that encouraging a wider recognition of, and sensitivity to, existing social structures might address gaps in the developmental process and help youth to become more active in real life, while combatting the adult-imposed label of apathy that is currently in vogue.

Perhaps the problem lies in how we traditionally conceptualize youth as political agents (if at all). Although there are assuredly exceptions to this, the primary readership for YA dystopia—loosely bounded by an age demographic that includes individuals between 12 and 18 years of age—largely does not possess a type of political power commonly recognized in the United States. Prohibited from voting, a majority of the YA audience is often not formally encouraged to exercise any form of political voice; it is not until they near the age of adulthood that the process even begins to take shape with, at best, a course on Civics in high school. And, in the absence of a structured educational process that promotes reflection, critical thinking, outreach, and activism, youth might be seen to cobble together their political knowledge from sources readily available to them. As author Jack Zipes suggests in his book Sticks and Stones:  The Troublesome Success of Children’s Literature from Slovenly Peter to Harry Potter, youth seek out agency through literature like dystopian fiction.[7] However, one might argue that what youth are really after is a sense of empowerment that they are unable to find elsewhere in meaningful quantities.

Elizabeth Braithwaite comments on one such example of the YA dystopia’s potential political influence and agency in her discussion of post-disaster fiction. Building upon work by Erich Fromm, Braithwaite notes the important difference between social orders labeled as “freedom from” and “freedom to”:

Fromm explains that the two types of freedom are very different:  a person can be free of constraints, be they obviously negative or the ‘sweet bondage of paradise’, without necessarily being ‘free to govern himself, to realize his individuality.’[8]

Although “freedom from” represents a necessary pre-condition, it would seem that a true(r) sense of agency is the province of “freedom to.” And yet much of the rhetoric surrounding the current state of politics seems to center on the former, as we talk fervently about liberation from dictatorships in the Middle East during the spring of 2011 or freedom from oppressive government in the United States.

On a level arguably more immediately pressing for a teenage readership, however, let us invoke the issue of bullying, which has become a somewhat high-profile topic in recent educational news. In line with the discussion surrounding forms of oppression elsewhere, much of the rhetoric present in this topic focuses on a removal of the negative—and admittedly quite caustic—influences of teenage aggressors. Prompted by a rash of high-profile suicides attributed to this phenomenon, columnist Dan Savage started a project entitled “It Gets Better.” Ostensibly designed to encourage youth to refrain from suicide (and, to a lesser extent, self-harm), “It Gets Better” seemed to exude a position saturated with the ideology of “freedom from.” Although an admirable attempt, “It Gets Better” ultimately projects a hope for a static utopia free from bullying—which, as has been previously demonstrated, inevitably leads to a dystopia of one sort or another. By telling youth that things will get better someday (i.e., not now), we are ultimately choosing to withhold information about how to make it better. Intentional or not, we have begun to slide into a practice of knowledge containment that mirrors the regimes of dystopian societies as we fail to challenge youth to become active participants in the process of change. Propelled by thinking grounded in a stance of “freedom from,” we are, in indirect ways and in the name of protection or aid, stripping youth of access to information that would act to empower them.

In marked contrast, we witness a different tonality in movements like those supporting gay marriage or the DREAM Act. Perhaps coincidentally, both efforts have embraced the notion of “coming out” and the liberation that this freedom of self-expression brings. “Freedom to,” it would seem, allows individuals in the modern age to effectively begin the process of challenging patriarchal and heteronormative stances—as any child of the 1970s and Marlo Thomas’ “Free to Be…You and Me” well knows.

So what do we do, then, with the complex space represented by the intersection of youth, adults, publishers, and YA fiction? Ultimately, I argue for a reevaluation of the value of youth voices in discussions surrounding YA fiction. As adults, our natural inclination may be to protect children, but we must also endeavor to understand the long-term implications of our actions—after all, isn’t our real goal to equip the next generation with the tools that they will need to become successful citizens of the world? We must walk a narrow line, fighting our tendency to view modern youth as romanticized wunderkind while respecting the demographic as one that is increasingly capable of amazing resilience. If our generation is to have any hope of disrupting the adversarial cycle so prevalent in YA dystopian fiction, we must take it upon ourselves to educate youth in a way that encourages their empowerment while remaining open to all that they have to teach us. It is only through this integration, and a more sophisticated flow of information, that we can hope to avoid the manufacture of a disenfranchised generation destined to suffer the ultimate indignity of being born into a dystopia. To get there, we must wholeheartedly engage with children, seeking to understand the ways in which they process information and perceive their environment. Although we are armed with mountains of theory, we need to realize that we do not necessarily know better—we merely know differently. We need to take the time to truly listen to our youth and attempt to see the world through their eyes:  focus groups can be used to ascertain descriptive language while large-scale surveys provide an element of generalizability. Inventories might help researchers get a sense of things like the pervasiveness of self-harm or the recuperative value of YA fiction. Follow-up interviews or focus groups could help us to evaluate the effectiveness of treatment programs, allowing us to alter our course should the need arise. In short, we need to actually talk (and listen!) to those whom we would serve.


[1] See Meghan Cox Gurdon, “Darkness Too Visible.” The Wall Street Journal. June 4, 2011 and Sherman Alexie, “Why the Best Kids Books Are Written in Blood.” The Wall Street Journal. June 9, 2011 for contrasting views on this topic.

[2] Hintz and Ostry, Utopian and Dystopian. 2003.

[3] See Sargent, “In Defense of Utopia.” 2006 and Maureen F. Moran, “Educating Desire: Magic, Power, and Control in Tanith Lee’s Unicorn Trilogy.” In Utopian and Dystopian Writing for Children and Young Adults, 139-155. (New York: Routledge, 2003).

[4] See Elizabeth Braithwaite, “Post-Disaster Fiction for Young Adults: Some Trends and Variations.” Papers: Explorations into Children’s Literature 20, no. 1 (2010): 5-19 and Roberta Seelinger Trites, Disturbing the Universe: Power and Repression in Adolescent Literature. (Iowa City: University of Iowa Press, 2000).

[5] Trites, Disturbing the Universe. 2000.

[6] Lara Cain-Gray, “Longing For a Life Less Ordinary: Reading the Banal as Dystopian in Sonya Hartnett’s Butterfly.” Social Alternatives 28, no. 3 (2009): 35-38.

[7] Jack Zipes, Sticks and Stones: The Troublesome Success of Children’s Literature from Slovenly Peter to Harry Potter. (New York: Routledge, 2002).

[8] See Elizabeth Braithwaite, “Post-Disaster Fiction for Young Adults: Some Trends and Variations.” Papers: Explorations into Children’s Literature 20, no. 1 (2010): 5-19 and Erich H. Fromm, Escape from Freedom. (New York: Henry Holt and Company, 1994), 34.


Question Authority

When I was younger, I distinctly remember being amazed by feats of mathemagic. I would sit for hours and watch individuals (mostly men) perform incredible feats of mental gymnastics as they manipulated numbers in a whole host of ways. It was only later that I would come to see that what these individuals were doing was not “magic” (at least not in the traditional sense of the term, although the feats were no less amazing because of this) but was instead the application of a heretofore invisible set of rules that instructed them on how to proceed.

Interest in heuristics, then, was a natural progression as I came to study Social Psychology. Here, in front of me, was an entire set of rules—mental shortcuts, to be specific—that governed behavior. Not content merely to understand how we interacted with our world, I was driven to understand why. Why did we make mental leaps that sometimes led to errors in judgment? Why did we draw the correlations that we did? Why was some information thought of as more pertinent in certain situations than others? Sure, Evolutionary Psychology provided some answers, but I had been schooled in a diathesis-stress model and was never satisfied to attribute the phenomena that I observed to biology (although I certainly did not discount it as a factor, either).

Popularized in books like Blink, heuristics have again entered into our consciousness as we have increasingly come to examine rapid decision-making processes and judgment under pressure (or the lack of it). The answer to “What were they thinking?!” might very well be that “they” were not thinking anything—well, at least not consciously. More accurately, subjects were engaged in cognitive processes but simply not aware of them.

And we are seeing a close cousin of this thought process play out in the world of technology, computing, and algorithms. Although the execution is not entirely the same, we witness a scenario in which machines follow a set of rules (as they are wont to do) without regard for the implications of their actions, realizing a more sophisticated form of Turing machine—for code is law. Ultimately, as Science Fiction occasionally comments, our creations outlast us, running amok as they continue to abide by instructions intended for a world that has long since passed. In these worst-case scenarios, action has become divorced from meaning.
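
To make the point concrete, consider a minimal sketch of such a rule follower. Everything here is hypothetical (the rule, the names, the scenario); it is meant only to illustrate how a machine executes instructions written for one world long after that world has changed.

```python
# Illustrative only: a "rule follower" that keeps executing instructions
# written for a world that no longer exists. All names are hypothetical.

# The rule was written when low disk space meant the logs were disposable.
RULES = [
    ("purge_logs", lambda world: world["disk_free_gb"] < 10),
]

def run_rules(world):
    """Apply every rule whose condition holds—no judgment, no context."""
    actions = []
    for name, condition in RULES:
        if condition(world):
            actions.append(name)  # the machine acts; it does not ask why
    return actions

# Years later the disk is full for a different reason (the logs are now the
# valuable product), but the rule fires all the same: action divorced
# from meaning.
print(run_rules({"disk_free_gb": 4}))  # ['purge_logs']
```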

But it is not only machines that are subject to this fallacy, for humans are equally susceptible to over-reliance on stable structures like laws. Although such structures are usually pro-social, we tend to encounter problems when we fail to continually evaluate ordinances in the context of an ever-changing world:  just because things have always been done a certain way does not necessarily mean that they should be. Our refusal to engage with the various processes that act to shape our worldviews—of which television/media is just one—means that we allow someone else to dictate our reality to us. I believe that heuristics play a valuable role in our lives as they reduce cognitive processing time and perhaps allow us to react in time to save our lives, but also that the absence of critical reflection on these structures leads to things like the formation of stereotypes, susceptibility to manipulative advertising, and inflexible adherence to religious doctrine. Ultimately, we need to re-engage with our world and be willing to puzzle over the ways in which it affects us and in which we affect it, for it is unacceptable to play the victim card and say that media like television corrupt our minds if we are unwilling or unable to demand better from them. Education in the form of media literacy is not sufficient to prevent processes like those described by Cultivation Theory from having an effect on us (even theorists are subject to them!), but it is a crucial first step.


I Hate You So Much Right Now

The sun wasn’t doing anything to help things. Sweat began to pool under my collar, causing an unbearable urge to scratch—made worse by the fact that I couldn’t move a muscle. I stamped my foot in frustration as the word escaped my lips.

Brownie.

To be honest, it was the first thing that I thought of. I stood there, watching my classmate crumple in front of me as tears began to well in her eyes. This, I think, was when I committed my first hate crime.

I was seven.

I’m neither particularly proud of this moment nor ashamed of what happened. I don’t view myself as exceptionally racist, but I recognize my biases. This story is important to me because it reminds me that we are all capable of committing hate crimes—these are not things that are only perpetrated by other people. I can recall the way that I felt on that day in second grade and I realize that people engaging in these heinous acts must feel something similar. This is not to say that any amount of prejudice is acceptable, but I think that it is important to be just as hard on ourselves as we are on others.

In my last post, I talked a bit about power and I think that some of the same ideas apply to this week. This time, however, it’s personal. How do we react to our perceived loss of power? What do we do when we’re up against a wall? When we’re strung out and broken? When we think that there’s a demon inside of us? What do we look like when we’re grasping at straws? We’ll use anything, and everything, that we can to try to get back to where we once were. Calling to mind scenes from True Blood, it’s Tara throwing things off of the mantle to make herself feel better, it’s Jason willing to dance on a webcam, it’s Lettie Mae pulling her cards out left and right. There’s so much in the show about possession, and drawing lines, and standing your ground. Who has the power? Who wants it? Who needs it? Who doesn’t have it? Who merely feels like he doesn’t have it?

Recall the idea of “the Other,” as well: power is all about the “haves” and the “have nots.” It’s about the fear that stems from feeling powerless and misusing power. It’s personifying the fear that we have into characters that we can relate to, and, more importantly, name.

Shows like this, or Battlestar Galactica, are interesting in our post-9/11 world because they are so much about the powerless striking out in fear against those who they think can harm them. It’s weird to me, because I don’t think people become terrorists (or individuals who commit hate crimes) unless they feel like they are backed against a wall and don’t have much to lose. Terrorism is the language of the oppressed, of the beaten down, and of the people who are desperate to regain a semblance of power and control. Our fear of those whom we perceive as more powerful is one of those dirty things that we don’t like to think about because I think it makes us too similar to “terrorists.” There’s certainly the whole X-Men/mutant thing where we see people with powers vs. people without them, but it’s the same story over and over with different characters and us playing different roles. It’s no wonder that we respond to these sorts of stories when so much of our history has been a power struggle over various things—perhaps we’ve been programmed to identify with this concept.


A Spoonful of Fiction Helps the Science Go Down

Despite not being an avid fan of Science Fiction when I was younger (unless you count random viewings of Star Trek reruns), I engaged in a thorough study of scientific literature in the course of pursuing a degree in the Natural Sciences. Instead of Nineteen Eighty-Four, I read books about the discovery of the cell and of cloning; instead of Jules Verne’s literary journeys, I followed the real-life treks of Albert Schweitzer. I studied Biology and was proud of it! I was smart and cool (as much as a high school student can be), for although I loved Science, I never would have identified as a Sci-Fi nerd.

But, looking back, I begin to wonder.

For those who have never had the distinct pleasure of studying Biology (or who have pushed the memory far into the recesses of their minds), let me offer a brief taste via this diagram of the Krebs Cycle:

Admittedly, the cycle is not overly complicated (though it was certainly a lot for my high school mind to understand), and I found myself making up a story of sorts in order to remember the steps. The details are fuzzy, but I seem to recall some sort of bus with passengers getting on and off as the vehicle made a circuit and ended up back at a station. I will be the first to admit that this particular tale wasn’t overly sophisticated or spectacular, but, when you think about it, wasn’t it a form of science fiction? So my story didn’t feature futuristic cars, robots, aliens, or rockets—but, at its core, it represented a narrative that helped me to make sense of my world, reconciling the language of science with my everyday vernacular. At the very least, it was a fiction about science fact.

And, ultimately, isn’t this what Science Fiction is all about (at least in part)? We can have discussions about hard vs. soft or realistic vs. imaginary, but, for me, the genre has always been about people’s connection to concepts in science and their resulting relationships with each other. Narrative allows us to explore ethical, moral, and technological issues in science that scientists themselves might not even think about.  We respond to innovations with a mixture of anxiety, hope, and curiosity and the stories that we tell often reveal that we are capable of experiencing all three emotional states simultaneously! For those of us who do not know jargon, Science Fiction allows us to respond to the field on our terms as we simply try to make sense of it all. Moreover, because of its status as genre, Science Fiction also affords us the ability to touch upon deeply ingrained issues in a non-threatening manner:  as was mentioned in our first class with respect to humor, our attention is so focused on tech that we “forget” that we are actually talking about things of serious import. From Frankenstein to Dr. Moreau, the Golem, Faust, Francis Bacon, Battlestar Galactica and Caprica (among many others), we have continued to struggle with our relationship to Nature and God (and, for that matter, what are Noah and Babel about if not technology!) all while using Science Fiction as a conduit. Through Sci-Fi we not only concern ourselves with issues of technology but also juggle concepts of creation/eschatology, autonomy, agency, free will, family, and society.

It would make sense, then, that modern science fiction seemed to rise concurrent with post-Industrial Revolution advancements as the public was presented with a whole host of new opportunities and challenges. Taken this way, Science Fiction has always been about the people—call it low culture if you must—and I wouldn’t have it any other way.


I Know This Much to Be True

4 out of 5 dentists agree. Or so we’re told. But how often do we stop to question how this data was obtained—just who are these dentists? Although the catchphrase has managed to burrow itself into our collective psyche, the data behind this survey has never been released to the public (not that it really matters anymore, anyway).

I suggest that, above and beyond attributions of authority or group preference, the Trident slogan works because we exist in a society that demands data, equating the presence of scientific inquiry with legitimacy. We have come, since the Enlightenment, to accept Science as the structuring philosophy of our world (perhaps in lieu of Religion), and what else is data but evidence of that process? Although data is itself abstract, it represents something tangible (or at least quantifiable), for opinions were counted, preferences measured, and votes collected. We have become trained to respond to data, unaware of how statistics and “facts” can be manipulated. We have become reliant on data’s ability to simplify our world, unwittingly engaging in a trade-off that ignores nuance in favor of broad strokes; in a world rapidly becoming overwhelmingly complicated, we demand clear and readily apparent answers.

What we do not demand, however, is scientific rigor.

As a public, we do not generally care how data is obtained, only occasionally pausing to note flagrant violations in collection methods (exceptions are of course made for specific lines of inquiry, but the broader point here is one of everyday experience). How often, for example, do we take the time to ask how respondents were selected for a candidate popularity survey? What types of questions were asked and what language was used to ask them? Were the questions asked in a particular order? And, again, just who are these people being asked? We not only fail to demand rigor from those who would present data, but also from ourselves as we blithely accept that the graphics on the nightly news broadcast represent “the truth.” Data tells us how we are different from others, but it is to our detriment that we rarely ask ourselves why.

One consequence of this lack of action is the overwhelming influence of the social sciences on Americans’ thinking (as noted in Sarah Igo’s The Averaged American), with an especially profound impact on the way that we conceptualize ourselves, particularly in relation to others. Once the practice of surveys had been generally accepted, it seemed that most anything could be measured and that therefore every aspect of life, identity, and thought had a theoretical mean; the legacy of this new paradigm was (and still is) a perpetual state of unease, as self-evaluation through data sets merged with a mid-century culture already juggling paranoia, neighbors, the suburbs, and conformity.

And surveys, like Alfred Kinsey’s (in)famous investigation, laid bare the most private aspects of our lives even as they refocused our attention on the concept of normalcy—a shift that directed our attention toward commonalities instead of outliers. In retrospect, this change seemed somewhat natural as the intent of the surveys being conducted in the mid-20th century was to establish and define a national identity, with the data collected suggesting that an “average American” was indeed a possibility. In other words, our thoughts about who we were (along with who we could/should be) came out of exposure to heretofore unseen aspects of ourselves—moving forward, our theories were shaped by our experiences as we incorporated our knowledge of data into various identities (e.g., personal, community, national).

Experiences, it might be argued, have much to do with the formulations of our theories, as exemplified by the role of the black swan (the animal, although Kinsey would undoubtedly have much to say about the recent Aronofsky offering) in Early Modern Science:  once presumed to be a nonexistent creature, the discovery of black swans in Australia pointed to the possibility of highly improbable outliers that caused a fundamental rethinking of prior assumptions. Prior to this discovery, an entire set of assumptions was made by philosophers/scientists grounded in the idea that swans could only ever be white; based on a lifetime of experience, people came to see the world in a particular way—one that determined the types of questions they could ask and, perhaps more importantly, kept them unaware of the types of questions that could not be asked.

Experience, then, can cause investigators to develop a type of confirmation bias as they unconsciously (or consciously!) begin to collect data that confirms preexisting beliefs about how the world operates. Although the scientific method exists to shield us from this type of behavior, experience can be a difficult influence to mitigate as it ingrains in us a particular way of seeing/interacting with the world, and constantly challenging stable environmental patterns would cause us much cognitive stress. To take this practice to the extreme would be unfeasible, as we would be paralyzed by inaction while we analyzed the veracity of everything around us; instead, I suggest that we, as a first (and smaller) step, take a page from epistemology and simply begin by training ourselves to ask the question, “How is it that I know what I know?” As responsible scholars, we need to be transparent about our theoretical foundations and honest with ourselves as we actively reflect on our process and our results.



Television Is My Religion

I want to start out with a provocation:  In our current age, television has become a form of religion, with the screen our altar and actors our saints.

This is, of course, not to say that television supplants other forms of traditional religion (and I would go further to suggest that any antagonism or dissonance between these types of worship says more about you than it does about the strains of belief themselves), but merely that our relationship with the medium has come to reflect many of the qualities that we associate with institutional religion, as television has come to assume a pervasive, public, and central part of our lives, with our identities constructed, in part, around our relationship to TV. We form rituals around television viewing, regularly sitting down in front of our sets instead of in pews to watch True Blood. Or, if we judge importance through money spent instead of time, we might consider how a television is likely the single most expensive appliance we own or just how much we spend on cable per year. And, for some of us, television is the venue through which we connect to foreign others, supplementing the worship of God with a steadfast belief in Albus Dumbledore.


And what is religion, anyway?

I’ve always found it slightly ironic that my name alludes to the support of a religion that I often find myself at odds with; growing up, I had always associated the term “God” with a prominent figure in Western monotheistic religions. When I was younger, I recognized that, on some level, this notion of the Christian God was being forced upon me and I spent much of my life forming my identity in opposition to this conceptualization—I needed to escape from the oppressive and pervasive nature of the theology in order to attempt to craft my own sense of self. It has been difficult to learn that there is more to Christianity than evangelicals and that not everyone is trying to tell me how to live my life. Kant has had a large influence on my worldview and I do not think that God’s existence can be proven (or disproven); I also do not believe in a God that created the universe or exiled Adam and Eve from the Garden. This does not, however, mean that God’s existence does not have any impact on my life—God exists for those who believe in Him and the actions that result from those beliefs are very real to me. Moreover, many of the tropes that inform my work in identity and narrative derive from Christian tradition; religion, along with myths, fairy tales, and a host of other informal stories, all shape the way that we learn to view ourselves and our relationships to the world around us. So, although I continue to refrain from identifying as Christian, I would argue that I am closer to God today than I have ever been and that part of this process has come about through critical reflection on the incredible amount of television that I watch.

And stories, whether they are found in religion or on television, possess the ability to convey incredibly complex ideas to us in a way that we cannot always fully articulate. For example, take the story of Caprica’s “There Is Another Sky,” which is a familiar one if you’ve been exposed to any amount of entertainment growing up; it is the story of Alice, of Dorothy, of Neo, and of many others who have gone on a quest to become a hero. And, although he would not have described himself in terms of heroics, it is also partially the story of Jesus. Throughout the episode, various characters were admonished to “wake up” or expressed a desire to return home. Each has been ushered along by guides who have demonstrated that the power to change, to belong, to be, or to become existed within them all along. These heroes have all ventured into the darkness and found their way back to the world of the living; each of these heroes has woken up and tapped into the power that this revelation brings.

This journey is the same one we undergo when dealing with grief and death:  when our loved ones die, we travel with them to the land of the dead; for a time, a part of us dies as well. We hear the call to come back to the world of the living but also whispers from the underworld. We are scared to embark upon this path because we fear that we will become lost and will not be able to make our way back to the land of the living; we fear that we will lose ourselves in the darkness. (As a side note, this is also what the “There Is Another Sky” of the title refers to, via an Emily Dickinson poem.) Funerals, whether experienced in a church or through a screen, act as rituals to transcend the everyday, allowing us to learn a script for letting go of the dead and returning to the surface.

So if we take a step back and consider Berger’s argument for the cyclical relationship between society and human beings through a process of production/externalization and consumption/internalization in conjunction with Gerbner and Gross’ Cultivation Theory, we can readily see a case made for television fulfilling some of the same core functions of religion. Television, as a product of man, follows its own internal logic and, through its existence and subsequent consumption, forces an in-kind response by its audiences. Television’s logic, then, structures and orders the world in a fashion similar to that of religion, with Gerbner and Gross suggesting that the process of identification is proportional to the amount of television consumed. In short, television, like religion, helps us to make sense of our world.

*TWO NOTES

1) Reality television, in particular, provides a fruitful arena for further exploration of these concepts due to its current popularity and ability to blur the line between authenticity/fabrication. Borrowing from a heritage in documentary filmmaking, the genre assumes a sheen of objectivity while nevertheless evidencing elements of manipulation by editors and writers. Moreover, the accessibility of its “stars” (due to their status as “normal” people) makes the salience of their behavioral scripts that much more evident for people who would wish to use them as models of successful/unsuccessful behavior. Although this is dangerous due to a general lack of situational information/context, viewers might be tempted to repeat behavior that they see on screen, hastening the process of internalization, for it was undertaken by someone “just like me.”

2) Making a similar case for advertising’s ability to act as religion, James Twitchell contends that, “like religion, which has little to do with the actual delivery of salvation in the next world but everything to do with the ordering of life in this one, commercial speech has little to do with material objects per se but everything to do with how we perceive them” (1996). While some might object to the mixing of influences in areas such as advertising and religion, a certain amount of commingling is inevitable if we classify each entity as a belief system—a certain way of seeing the world complete with its own set of values—and understand that individuals might incorporate multiple elements into their particular worldview. (I might also suggest that a large part of the Catholic Church’s growth was due to its efforts at self-promotion and advertising.) Institutions such as religion and advertising tell believers, in their own ways, what is (and is not) important in society, something that Twitchell refers to as “magic” (1996). Each also professes a particular point of view and attempts to integrate itself into everyday life, drawing on our desire to become part of something (e.g., an idea, a concept, or a movement) that is larger than ourselves. Perhaps most importantly, these forces play on this desire in order to allow humans to give their lives meaning and worth, with a common thread being that followers can classify themselves in terms of the external:  God, works of art, name brands, etc. Although the attraction may assume different forms, it survives because it continues to speak to a deep desire for structure—advertising works for the same reason that we believe in high art, higher education, and higher powers.

The process of ordering and imbuing value ultimately demonstrates how advertising can not only create culture but also act to shape it, a process also evidenced by marketing techniques’ ability to consume and/or reference previously shared cultural knowledge while simultaneously contributing to the cultural milieu. The concurrent horizontal and vertical spread of advertising is reminiscent of memes, a concept created by evolutionary biologist Richard Dawkins. According to Dawkins, memes represent discrete units of cultural knowledge that propagate in a particular society (analogous to genes) through a number of transmission methods (1976). While the concept of memetics certainly spans areas other than advertising, Dawkins notably included, as examples of memes, catch phrases (i.e., slogans), melodies (i.e., jingles), and fads. Consequently, although advertising inevitably forms a new type of culture in societies, ads also serve to broaden exposure to, or strengthen the connections of, existing aspects of culture for those subjected to them as they burrow deep into our collective society.

Despite the intricate and multi-faceted nature of its impact, we can use the narrative characteristics of advertising as a framework for understanding its influence. On a basic level, advertising typically takes the form of a loose narrative, complete with implied back-story—television spots, in particular, provide a salient example of this. Yet the messages present in advertising can also cause us to question our sense of self as we evaluate our belief systems and values, as previously mentioned. Consider how personal identities can result from narrative or actually be narrative; sentences containing “to be” verbs can be unpacked to reveal a larger narrative structure that can help us to “cope with new situations in terms of our past experience and gives us tools to plan for the future” (Sfard & Prusak, 2005). Twitchell supports this idea by mentioning that “the real force of Adcult is felt where we least expect it:  in our nervous system, in our shared myths, in our concepts of self, and in our marking of time” (1996, p. 124). Advertising, it seems, not only allows us to construct a framework through which we understand our world, but also continually informs us about who we are (or who we should be) through a collection of narratives that serves to influence the greater perceptions of youth in a manner reminiscent of the role of television in Cultivation Theory (Gerbner & Gross, 1976).

Works Cited

Dawkins, R. (1976). The Selfish Gene. Oxford: Oxford University Press.

Gerbner, G., & Gross, L. (1976). Living with television: The violence profile. The Journal of Communication, 26(2), 172-199.

Sfard, A., & Prusak, A. (2005). Telling identities: In search of an analytic tool for investigating learning as a culturally shaped activity. Educational Researcher, 34(4), 14-22.

Twitchell, J. (1996). Adcult USA: The Triumph of Advertising in American Culture. New York: Columbia University Press.