Thoughts from my study of Horror, Media, and Narrative

Publications

The Philosophy of Horror

Chris Tokuhama

Thomas Fahy, ed. The Philosophy of Horror. Lexington, KY: The University Press of Kentucky, 2010. 259 pp. Cloth. ISBN 978-0-8131-2573-2. $39.95.

            The Philosophy of Horror, a collection of essays edited by Thomas Fahy, belongs to a series of volumes called The Philosophy of Popular Culture. The classification is important here as books in the series are targeted toward a general audience and endeavor to introduce traditional philosophical concepts through examples in popular culture. In addition to an introduction by the editor, The Philosophy of Horror contains fourteen chapters that are largely (with two exceptions) grounded in particular media artifacts spanning television, film, and print. Mostly based in traditional conceptualizations of the horror genre, the volume also notably includes media that might be classified as “thrillers” (e.g., Truman Capote’s In Cold Blood and Patrick Süskind’s Perfume), a move that expands the definition of horror beyond a genre to encompass an emotional state or a type of relationship between audience and artifact.

            Indeed, editor Thomas Fahy creates this framework through his introduction wherein a story about skydiving allows him to describe the emotive experience of interacting with a piece of horror fiction. Referencing the book’s namesake, Noël Carroll’s The Philosophy of Horror; or, Paradoxes of the Heart, Fahy notes that, at its core, horror presents a paradox:  twin forces of attraction and repulsion, fear and relief, or suffering and justice appear as consistent themes throughout the works that are described with the label of horror. As any scholar of horror will well note, although the setting may appear fantastical, the central issues in any piece of horror are grounded in the human experience. To this end, Fahy notes that the following key philosophical concepts are evidenced throughout the book:  morality, identity, cultural history, and aesthetics.

            Taking the broadest view of horror, the book’s first two chapters—Philip J. Nickel’s “Horror and the Idea of Everyday Life:  On Skeptical Threats in Psycho and The Birds” and Philip Tallon’s “Through a Mirror, Darkly:  Art-Horror as a Medium for Moral Reflection”—each attempt to discern and elucidate the function of horror. Ultimately, both essays reflect upon horror’s ability to explore the fundamental human sense of vulnerability and fragility; in one respect, this sense of insecurity certainly relates to the perennial issue of mortality that pervades most horror, but it also speaks to the larger philosophical question of morality, for horror forces the question of who gets to live and why. Importantly, however, both authors move past the susceptibility of the human body in order to discuss a wider range of vulnerabilities:  in the tradition of the postmodern, absolutes are questioned and assumptions challenged in ways that ultimately lead individuals to become introspective as they examine their own preconceptions regarding how the world works and which moral positions are justified.

            In addition, however, the kind of questioning suggested by the book’s first two essays naturally sets the stage for an examination of identity; the process whereby one deconstructs one’s value system almost necessarily involves a period of reflection on who one is to begin with. Dealing with the theme of identity most directly, Amy Kind’s “The Vampire with a Soul:  Angel and the Quest for Identity” thinks through the implications and responsibilities of having a soul. The key contribution Kind puts forth is to divorce the possession of a soul from notions of personhood, instead pondering the way in which a soul makes one an individual. For us as humans, this distinction makes little sense, but the realm of the fantastic offers a great space for us to consider how alternate beings (in this case demons, but we might also include androids) do not necessarily become “human,” but can in fact become individuated.

            Moreover, just as Kind’s essay speaks to a need to reevaluate the world and our preconceptions of it, Jessica O’Hara’s “Making Their Presence Known:  TV’s Ghost-Hunter Phenomenon in a ‘Post-’ World” uses the trope of paranormal investigation television to think through ways in which the world around us is perceived and how those insights are examined. O’Hara’s work also bridges the gap between identity and cultural history for it, on one level, necessarily juxtaposes the present with the past; one way in which to read the popularity of shows about ghosts is to consider that they may speak to the cultural renegotiation between private and public space in the wake of the terrorist attacks of September 11, 2001. Although one might make an argument that these ghost shows more prominently feature domestic spaces and therefore privilege the private sphere, a stronger position might suggest that, at their core, ghost stories also speak to the most grievous defilement of privacy and security:  the home invasion.

            This theme of unease with the domestic space is also echoed in John Lutz’s “From Domestic Nightmares to the Nightmare of History,” which looks at subjugation in The Shining on three levels:  the domestic, the colonial, and the commodified. Much more than a clichéd “things are not what they seem,” The Shining ruminates on abuse(s) in various settings and the way in which these themes circulate throughout our identity as Americans. Unlike the narratives of the ghost hunters, cultural black marks like slavery, internment, and colonization evidence a need for resolution that allows us to appropriately repent and then move forward as we wash our hands of responsibility regarding the violation.

            And violation, it would seem, is also a core component of Jeremy Morris’ “The Justification of Torture-Horror:  Retribution and Sadism in Saw, Hostel, and The Devil’s Rejects” and Fahy’s “Hobbes, Human Nature, and the Culture of American Violence in Truman Capote’s In Cold Blood.” The films of the torture-horror genre—specifically, here, those that were released in the mid-2000s and have been wryly labeled “torture porn” for their graphic and voyeuristic elements—obviously and overtly speak to a type of violation of the body that clearly aligns with a desecration of the self and, resultantly, one’s identity. Returning to the ever-present paradox in horror, Morris looks at the unstable definitions of torturer and tortured and questions how we, to a degree, participate in both roles. Moreover, we once again witness the familiar themes of powerlessness and agency that appear in O’Hara’s essay on ghost hunting, here transformed into something more visceral and personal. Films like Saw and Hostel not only cause us to contemplate the unpleasantness of having torture visited upon ourselves but also the ways in which we are complicit in torture or, as an extreme, might participate in the torture of others in order to preserve our own safety. Along with Fahy’s essay on Capote’s In Cold Blood, Morris asks us to think past “senseless” violence in order to consider the unsettling realization that we all harbor secret monsters and are capable of untold brutality if pressured.

            In a way, Fahy’s essay works to transition between Morris and Lorena Russell’s “Ideological Formations of the Nuclear Family in The Hills Have Eyes” as it continues to ruminate on the capacity for violence even as it gestures beyond violation of the person toward a transgression of interpersonal structures. In the case of Fahy and Capote, we witness the aftermath that permeates a small town in the wake of a vicious murder, while Russell chooses to examine the way in which The Hills Have Eyes comments on the breakdown of the nuclear family. Centering her arguments in the ideology of the family, Russell presents a series of arguments about the family that continue to resonate today; in particular, one of the strongest points that Russell makes is to consider how the original film and its remake speak to the growing divide between urban and rural sensibilities (here it should be noted that the horror films of the 1970s often spoke to this disjuncture, although such critique was not usually tied so closely to family structures). As with most films in the genre, the real horror is realizing that the term “monster” is relative and that we are all monsters in a given light; moreover, the danger presented by those who are like us is often the greater, as it represents the threat that we never see coming.

            Shifting away from identity and toward cultural history, we also have John Lutz’s “Zombies of the World, Unite:  Class Struggle and Alienation in Land of the Dead” and Paul A. Cantor’s “The Fall of the House of Ulmer:  Europe vs. America in the Gothic Vision of The Black Cat.” Although well argued, Lutz’s essay adds the least to its respective field of study as it retreads the position that zombies can be read as critiques of class and race in America. Cantor’s essay, on the other hand, provides an interesting counterpoint to the rest of the essays in the book as it uses The Black Cat to think through foreign perceptions of America in the post-World War I period.

            The book’s final essays—Susann Cokal’s “‘Hot with Rapture and Cold with Fear’:  Grotesque, Sublime, and Postmodern Transformation in Patrick Süskind’s Perfume,” Robert F. Gross’ “Shock Value:  A Deleuzean Encounter with James Purdy’s Narrow Rooms,” Ann C. Hall’s “Making Monsters:  The Philosophy of Reproduction in Mary Shelley’s Frankenstein and the Universal Films Frankenstein and The Bride of Frankenstein,” and David MacGregor Johnston’s “Kitsch and Camp and Things That Go Bump in the Night; or, Sontag and Adorno at the (Horror) Movies”—turn their critical commentary toward the aesthetic and resemble more traditional forms of film analysis. Of the four, the essays by Johnston and Gross are the most enlightening, although the latter may well represent the most challenging piece to read in the entire book.

            With its range of topics and perspectives, The Philosophy of Horror is a good choice for those who are fans of horror or who are looking to situate themselves within the field of study. The essays in this volume may very well spark readers’ interest and introduce new arguments, but they will also undoubtedly leave readers reaching for a more substantive volume on their subject of inquiry.

Chris Tokuhama is a doctoral student at the University of Southern California’s Annenberg School for Communication and Journalism where he studies how the definition of “the body” is being contested in American culture. Particularly interested in the confluence of horror, religion, gender, and youth, Chris is currently working on projects that explore the ways in which children are configured in the shadow of apocalypse, catastrophe, and trauma.


Why the “Cult” of Mormonism Misses the Mark

The question of Mormonism’s role in this election cycle refuses to die.

Over the weekend, much ado was made regarding Reverend Robert Jeffress’ assertion that Mormonism was a cult, with editorials and articles appearing across media outlets. Although I recognize that the dispute supposedly at the heart of this matter is whether or not Mormonism is, in fact, a form of Christianity, I also suspect that this entire discussion is being overplayed because of its proximity to the Republican nomination process. I, for one, have not seen many (if any) crusades to dissuade Mormons from calling themselves Christians in other contexts. For that matter, this is not the first time that America has broached the subject, but we seem to have forgotten that Mitt Romney had to defend his religion the last time we went through all of this four years ago. We could go back and forth over the distinction between religion and cult—see other discussions regarding the nature of Scientology or the perception of early Christianity in a Jewish society—but I believe that this would be time spent unwisely.

Instead, the more problematic line from Jeffress at the Values Voter Summit was, “Do we want a candidate who is a good moral person, or do we want a candidate who is a born-again follower of Jesus Christ?” Putting aside the false dichotomy between a “good moral person” and a “born-again follower of Jesus Christ”—which incidentally suggests that a born-again candidate is not a good moral person—the underlying message subtly implies that we should support Christians over good moral people. Of course the two categories are not mutually exclusive, but I think that reporters missed a great opportunity to disentangle emotionally charged words from thoughtful political action. Even when the topic was mentioned, discussion quickly moved on to another distraction:  the Constitutional injunction against religious testing prior to assuming public office. Instead of publishing headlines like “Cantor Doesn’t Believe Religion Should be Factor in 2012” (which, besides being misleading, does not truly reflect the article’s body), news media have an obligation to explain to voters why religion does matter in the political process. Values do matter, and religion undoubtedly speaks to a portion of that—just not all of it. We know from reports like those of the Pew Research Center for the People and the Press that religion does impact voting, so why pretend otherwise? The opportunity that the press has, however, is to challenge pundits, politicians, and the public not to use “religion” to mean more than it should.

Moreover, another missed opportunity for the media was Jeffress’ assertion that Romney was a “fine family person” but still not a Christian, given that he was speaking to a crowd ostensibly gathered in support of family values. Shouldn’t this statement, particularly at this function, cause reporters to question exactly what types of values are being upheld? Doesn’t Jeffress’ statement call for an examination of exactly what is meant by terms like “Christian” and “Mormon”? Ultimately, it is these values that will determine a potential President’s policy, not the moniker of a religion.

_________________________________

Chris Tokuhama is a doctoral student at the USC Annenberg School for Communication and Journalism where he studies the relationship of personal identity to the body. Employing lenses that range from Posthumanism (with forays into Early Modern Science and Gothic Horror) to the intersection of technology and community in Transhumanism to the transcendent potential of the body contained in religion, Chris examines how changing bodies portrayed in media reflect or demand a renegotiation in the sense of self, acting as visual shorthand for shared anxieties.

Read up on Chris’ pop culture musings or follow him on Twitter as he tries to avoid the Flavor Aid.


Light Up the Sky Like a Flame

But what is reality television? Although the genre seems to defy firm definitions, we, like Justice Stewart, instinctually “know it when [we] see it.” The truth is that reality television spans a range of programming, from clip shows like America’s Funniest Home Videos, to do-it-yourself offerings on The Food Network, investigative reporting on newsmagazines like 60 Minutes, the docu-soap Cops, and many other sub-genres in between, including the reality survival competition that forms the basis for The Hunger Games. Although a complete dissection of the genre is beyond the scope of this chapter—indeed, entire books have been written on the subject—reality television and its implications will serve as a lens by which we can begin to understand how Katniss experiences the profound effects of image, celebrity, and authenticity throughout The Hunger Games.

She Hits Everyone in the Eye

For the residents of Panem, reality television is not just entertainment—it is a pervasive cultural entity that has become inseparable from citizens’ personal identity. Although fans of The Hunger Games can likely cite overt allusions to reality television throughout the series, the genre also invokes a cultural history rife with unease regarding the mediated image in the United States.

Reacting to atrocities witnessed throughout the course of World War II, Americans in the 1950s became obsessed with notions of power and control, fearing that they would be subsumed by the invisible hand of a totalitarian regime. In particular, the relatively young medium of television became suspect as it represented a major broadcast system that seemed to have a hypnotic pull on its audience, leaving viewers entranced by its images. And images, according to author and historian Daniel Boorstin, had become increasingly prominent throughout the 19th century as part of the Graphic Revolution, replete with the power to disassociate the real from its representation. Boorstin argued that although the mass reproduction of images might provide increased levels of access for the public, the individual significance of the images declined as a result of their replication; as the number of images increased, the importance they derived from their connection to the original subject became more diffuse. And, once divorced from their original context, the images became free to take on a meaning all their own. Employing the term “pseudo-event” to describe an aspect of this relationship, Boorstin endeavored to illuminate shifting cultural norms that had increasingly come to consider the representation of an event more significant than the event itself.

Katniss unwittingly touches upon Boorstin’s point early in The Hunger Games, noting that the Games exert their control by forcing Tributes from the various districts to kill one another while the rest of Panem looks on. Katniss’ assertion hints that The Hunger Games hold power primarily because they are watched, voluntarily or otherwise; in a way, without a public to witness the slaughter, none of the events in the Arena matter. Yet, what Katniss unsurprisingly fails to remark upon, given the seemingly ever-present nature of media in Panem, is that the events of The Hunger Games are largely experienced through a screen; although individuals may witness the Reaping or the Tributes’ parade in person, the majority of their experiences result from watching the Capitol’s transmissions. Without the reach of a broadcast medium like television (or, in modern culture, streaming Internet video), the ability of The Hunger Games to effect subjugation would be limited in scope, for although the Games’ influence would surely be felt by those who witnessed such an event in person, the intended impact would rapidly decline as it radiated outward. Furthermore, by formulating common referents, a medium like television facilitates the development of a mass culture, which, in the most pessimistic conceptualizations, represents a passive audience ripe for manipulation. For cultural critics of the Frankfurt School (1923–1950s), who were still reeling from the aftereffects of Fascism and totalitarianism, this was a dangerous proposition indeed. Although the exact nature of modern audiences is up for debate, with scholars increasingly championing viewers’ active participation with media, Panem has seemingly realized a deep-seated fear of the Frankfurt School. It would appear, then, that The Hunger Games function as an oppressive force precisely because of their status as a mediated spectacle of suffering.

But perhaps we should not be so hard on Katniss. Having grown up in an environment that necessitated the cultivation of skills like hunting and foraging, Katniss begins with a perspective firmly grounded in a world based on truth. Plants, for example, must be checked (and double-checked!) to ensure their genuineness, lest a false bite result in death. In order to survive, Katniss must not only be able to identify plants but also trust in their authenticity; prior to her experience in the Arena, Katniss undoubtedly understands the world in rather literal terms, primarily concerned with objects’ functional or transactional value. However, as hinted by Boorstin, additional layers of meaning exist beyond an item’s utility—layers that Katniss has not yet been trained to see.

Echoing portions of Boorstin’s previous work, French philosopher Jean Baudrillard conceptualized four types of value that objects could possess in modern society: functional, transactional, symbolic, and sign. Although the theory is admittedly more complex than the description provided herein, we can momentarily consider how Baudrillard’s value categories of “functional” and “transactional” might align with Boorstin’s previously introduced concept of the “real,” while “symbolic” and “sign” evidence an affinity toward “representation.” Whereas the functional and transactional value of items primarily relates to their usefulness, the categories of “symbolic” and “sign” are predominantly derived from the objects’ relationship to other objects (sign) or to actors (symbolic). Accordingly, being relatively weak in her comprehension of representation’s nuances, Katniss characteristically makes little comment on Madge’s gift of a mockingjay pin. However, unbeknownst to Katniss (and most likely Madge herself), Madge has introduced one of the story’s first symbols, in the process imbuing the pin with an additional layer of meaning. Not just symbolic in a literary sense, the mockingjay pin gains significance because it is attached to Katniss, an association that will later bear fruit as fans well know.

Before moving on, let’s revisit the import of The Hunger Games in light of Baudrillard: what is the value of the Games? Although some might rightly argue that The Hunger Games perform a function for President Snow and the rest of the Capitol, this is not the same as saying the Games hold functional value in the framework outlined by Baudrillard. The deaths of the Tributes, while undeniably tragic, do not in and of themselves fully account for The Hunger Games’ locus of control. In order to supplement Boorstin’s explanation of how The Hunger Games act to repress the populace with the why, Baudrillard might point to the web of associations that stem from the event itself: in many ways, the lives and identities of Panem’s residents are defined in terms of a relationship with The Hunger Games, meaning that the Games possess an enormous amount of value as a sign. The residents of the Capitol, for example, evidence a fundamentally different association with The Hunger Games, viewing it as a form of entertainment or sport, while the denizens of the Districts perceive the event as a grim reminder of a failed rebellion. Holding a superficial understanding of The Hunger Games’ true import when we first meet her, Katniss could not possibly comprehend that her destiny is to become a symbol, for the nascent Katniss clearly does not deal in representations or images. Katniss, at this stage in her development, could not be the famed reality show starlet known as the “girl on fire” even if she wanted to.

By All Accounts, Unforgettable

Returning briefly to reality television, we see that Panem, like modern America, finds itself inundated with the genre, whose pervasive tropes, defined character (stereo)types, and ubiquitous catchphrases have indelibly affected us as we subtly react to what we see on screen. Although we might voice moral outrage at offerings like The Jersey Shore or decry the spate of shows glamorizing teen pregnancy, perhaps our most significant response to unscripted popular entertainment is a fundamental shift in our conceptualization of fame and celebrity. Advancing a premise that promotes the ravenous consumption of otherwise nondescript “real” people by a seemingly insatiable audience, reality television forwards the position that anyone—including us!—can gain renown if we merely manage to get in front of a camera. Although the hopeful might understand this change in celebrity as democratizing, the cynic might also argue that fame’s newfound accessibility also indicates its relative worthlessness in the modern age; individuals today can, as the saying goes, simply be famous for being famous.

Encapsulated by Mark Rowlands’ term “vfame,” the relative ease of an unmerited rise in reputation indicates how fame in the current cultural climate has become largely divorced from its original association with distinguished achievement. Although traditional vestiges of fame have not necessarily disappeared, it would appear that vfame has become a prominent force in American culture—something Katniss surely would not approve of. Recalling, in part, Kierkegaard’s thoughts on nihilism, vfame’s appearance stems from an inability of people to distinguish quality (or perhaps a lack of concern in doing so), resulting in all things being equally valuable and hence equally unimportant. This, in rather negative terms, is the price that we pay for the democratization of celebrity: fame—or, more accurately, vfame—is uniformly available to all in a manner that mirrors a function of religion and yet promises a rather empty sort of transcendence. Although alluring, vfame is rather unstable as it is tied to notions of novelty and sensation as opposed to fame, which is grounded by its association with real talent or achievement; individuals who achieve vfame typically cannot affect the longevity of their success in substantial terms as they were not instrumental in its creation to begin with. Stars in the current age, as it were, are not born so much as made. Moreover, the inability of the public to distinguish quality leads us to focus on the wrong questions (and, perhaps worse, to not even realize that we are asking the wrong questions) in ways that have very real consequences; although vfame and its associated lapse in thinking might be most obvious in the realm of celebrities, it also manifests in other institutions such as politics. As a culture that is obsessed with image and reputation, we have, in some ways, forgotten how to judge the things that really matter because we have lost a sense of what our standards should be.

Born out of an early to mid-20th century society in which the concept of the “celebrity” was being renegotiated by America, concepts like vfame built upon an ingrained cultural history of the United States that was firmly steeped in a Puritan work ethic. Americans, who had honored heroes exemplifying ideals associated with a culture of production, were struggling to reconcile these notions in the presence of an environment now focused on consumption. Although Katniss, as proxy for modern audiences, might initially find this shift difficult to appreciate, one need only consider that the premium placed on production is so central to American ideology that it continues to linger today: in a culture that exhibits rampant consumerism, we still value the “self-made man” and sell the myth of America as a place where anyone can achieve success through hard work. To abandon these ideas would necessitate that we reinterpret the very meaning of “America.” Thus, we become more sympathetic to the critics of the day who lamented the loss of the greatness of man and bristled against the notion that fame or celebrity could be manufactured—such a system would only result in individuals who were lacking and unworthy of their status. To this day, our relationship with celebrities is a tenuous and complex one at best, for although we celebrate the achievements of some, we continue to flock to the spectacle created by the public meltdown of others, unable or unwilling to help; we vacillate between positions of adulation, envy, contempt, and pity, ever poised for outrage but all too willing to forgive.

Perhaps it should come as no surprise that reality television puts us a little on edge, as the genre represents a fundamental blurring of fact and fiction. Celebrities, we see, are just like us—just like our neighbors, who, through the magic of reality television, can become stars! Ever-shifting classifications leave us on unstable ground. But also consider the aforementioned philosophy of Boorstin: stars are, among other things, individuals whose images are important enough to be reproduced, which causes “celebrity” to transition from a type of person to a description of how someone is represented in society. In other words, we witness a shift from a term that labels who someone is to a term that designates who someone seems to be. Celebrities, it might be argued, derive at least a portion of their power in modern culture because they embody a collection of images that has been imbued with some sort of significance. Ultimately, it seems that much of our unease with celebrity and fame centers on notions of authenticity.

All I Can Think of Are Hidden Things

Long before Katniss ever becomes a celebrity herself, she exhibits disdain for the Capitol and its residents, evidencing a particularly adverse reaction to things she considers artificial. As previously discussed, authenticity played a central role in Katniss’ growth and her ability to survive: for Katniss, a false image literally represented an affront on the level of life or death, for a lapse in judgment could have resulted in possible electrocution or poisoning. Concordantly, Katniss dismisses the strange colors of the Capitol along with the characteristic features of its citizens—stylists, in particular, are purported to be grotesque—because she is not readily able to reconcile these visuals with her established worldview. As Katniss operates on a literal level, directly associating identity with appearance, the self can only present in one way (in this case, relatively unadorned) and maintain its authenticity.

Like Katniss, we too may be tempted to summarily reject the unfamiliar; our modern anxieties might best be encapsulated by the question: What to do with a problem like Lady Gaga? Perhaps the strongest contemporary mass image that mirrors the visual impact of the stylists on Katniss (followed closely by New York socialite Jocelyn Wildenstein), Lady Gaga suffers continual criticism for her over-the-top theatrical presentations. With dresses made from meat and Hello Kitty heads, it is all too easy to dismiss Lady Gaga as “attention-starved,” simplifying her presence to the succinct “weird.” Yet, it seems rash to write off Lady Gaga and the world of fame as nothing more than frivolity and fluff, for pop culture is only as vapid as our disinclination to engage in it.

Consider, for example, how the Capitol and its residents (of whom a prominent one would undoubtedly be Lady Gaga) embody the spirit of Decadence, a particularly prominent theme in Victorian culture. A reaction to the 19th century movement of Romanticism, Decadence championed concepts like artifice, which served to demonstrate man’s ability to rebel against, and possibly tame, the natural order. Although this inclination toward the unnatural manifested in myriad ways, French poet and philosopher Charles Baudelaire viewed women’s use of cosmetics as a particular site of interest, for proper application did not just enhance a woman’s beauty but acted to transform her, allowing transcendence through artifice.

With this in mind, we begin to understand the innate control wielded by figures such as Cinna and Caesar Flickerman. Perceived as facile by some, these two men represent a class of individuals adept at understanding the power inherent in fame, reputation, celebrity, and appearance; in the Capitol, image mongers such as these hold sway. Although one reading of these characters plants them firmly in the realm of artifice, painting them as masters of emotional manipulation and spectacle, an alternate view might consider how these two have come to recognize a shift toward a new localized reality—one that Katniss must adapt to or perish.

And yet, despite their commonality, these two individuals also underscore fundamentally different approaches to image: Caesar (and, perhaps, by extension, the Capitol) wields his power in order to mask or redirect while Cinna endeavors to showcase a deep-seated quality through the management of reputation and representation. Coexisting, these two properties of illusion mirror the complementary natures of Peeta and Katniss with regard to image. Peeta, skilled in physical camouflage, exhibits an emotional candidness that Katniss is initially unready, or unwilling, to match; Katniss, very much the inverse of Peeta, is characterized by traits associated with hunting, finding, and sight in the “real” world all while maintaining a level of emotive subterfuge. Over the course of the 74th Hunger Games, however, Katniss quickly learns to anticipate how her actions in the Arena will affect her representation and reputation beyond the battlefield. With the help of Haymitch, Katniss begins to better understand the link between a robust virtual self and a healthy physical one as she pauses for the cameras and plays up her affection for Peeta in exchange for much-needed rewards of food and medicine. As she matures, Katniss comes into alignment with Cinna and Caesar, individuals who, despite being participatory members of a system arguably deemed inauthentic, distinguish themselves from the majority of Panem by understanding how image works; Cinna and Caesar (and later Katniss) are not just powerful, but empowered and autonomous.

Herein lies the true import of Collins’ choice to weave the trope of reality television into the fabric of The Hunger Games: throughout the trilogy, the audience is continually called upon to question the nature of authenticity as it presents itself in the context of a media ecology. Ultimately, the question is not whether Katniss (or anyone else) maintains a sense of authenticity by participating in the games of the Capitol—trading a true self for a performed self—but rather a question of how we might effect multiple presentations of self without being inauthentic. How does Katniss, in her quest to survive, embody Erving Goffman’s claims that we are constantly performing, altering our presentation as we attempt to cater to different audiences? Is Katniss truly being inauthentic or does she ask us to redefine the concept of authenticity and its evaluation? Struggling with these very questions, users of social media today constantly juggle notions of authenticity and self-presentation with platforms like Facebook and Twitter forming asynchronous time streams that seamlessly coexist alongside our real-life personas. Which one of these selves, if any, is authentic? Like Katniss, we are born into the world of the “real” without a ready ability to conceptualize the power latent in the virtual, consequently resenting what we do not understand.


Consumption: A Modern Affliction

In the history of the United States, perhaps no period is as memorable for conspicuous consumption as the 1980s. In particular, the ideology codified by Bret Easton Ellis’ American Psycho encapsulated past transgressions while simultaneously heralding the arrival of a new trend in domestic identities. Patrick Bateman, the book’s protagonist, continually relates to his environment through image and demonstrates an adept understanding of social structures, using the language of branding to translate goods into value. For Bateman, manufactured products play an integral role in defining the nature of interpersonal relationships and his emotional state is often linked to the relative worth of his possessions as compared to the property of others (Ellis 1991). The brand holds such incredible power for Bateman and his peers that Patrick is not surprised at a colleague mistaking him for Marcus Halberstam, another character in the book—Bateman reasons that the two men share a number of similar traits, noting that Marcus “also has a penchant for Valentino suits and clear prescription glasses,” thus cementing, for Bateman at least, the connection between the definition of self-identity and consumer goods (Ellis 1991, 89). In the view of individuals like Patrick Bateman, the clothes literally make the man.

While the example of American Psycho might appear dated to some, one only needs to update the novel’s objects in order to glimpse a striking similarity between the pervasive consumer-oriented culture of the 1980s and that of modern youth. Apple’s iPod has replaced the Walkman, caffeine has become the generally accepted drug of choice, and an obsession with social networking profiles has supplanted a preoccupation with business cards. To be sure, Ellis’ depiction does not map precisely onto modern teenage culture as some elements of society have changed over the years (and Ellis describes a world of professional twenty-somethings who participate in a setting somewhat alien to most contemporary high school students), but one can argue that the core theme of identification with branding creates a common link between the world of Ellis’ 1980s Manhattan and the space inhabited by current college applicants.

In order to further understand the effects that consumer culture might have on modern youth, this paper will first explore a brief history of branding in the United States throughout the 20th Century in order to develop a context and precedent for the argument that the current generation of students applying to college has developed in a society saturated with branding, marketing, and advertising; this environment has, in turn, allowed youth to conceptualize themselves as brands and to think of their projected image in terms of brand management. This article will also discuss the history of the term “teenager” to demonstrate that the descriptor has been closely linked with marketing since its creation and that this sentiment has shaped the manner in which American society conceptualizes the demographic. By reviewing the modern history of branding, I hope to demonstrate that although the consequences of a consumer culture might manifest uniquely in today’s youth, the oft-lamented phenomenon is not merely a product of our times.

This paper will also attempt to address the commoditization of the college applicant by examining the confluence between branding culture, youth culture, and the admission process in order to show that students are not the only ones whose perspectives are shaped by the influences of consumerism. After a proposal of how and why branding affects modern culture, I suggest that we, as admission officers, can unconsciously encourage students’ dependence on the paradigm of branding (and its associated vocabulary) as we come to rely on the ability of the framework created by branding culture to activate networks of associations that, in turn, further aid us in readily understanding and conceptualizing applicants. To this end, the cognitive organizational function of branding as a type of narrative structure will be explored. Further supporting this position, an argument will be made that latent biases in the college application process may also help to reinforce the high/low culture dichotomy by privileging particular kinds of actions and experiences over others. A trickle-down effect then encourages youth applying to college to adopt the language of branding in order to present themselves as an ideal candidate for a particular institution, thus consecrating the importance of branding in the bidirectional relationship between the individual and the institution.

Living in America at the End of the Millennium

The history of consumer culture in the United States provides an important context for understanding the actions and attitudes of contemporary applicants. In fact, to discuss the history of the American teenager is to recount, in part, the past socio-cultural effects of marketing. Beginning with the roots of consumerism in the 1960s,[i] the following account will attempt, in broad strokes, to relay key points regarding the integration of branding and marketing into youth culture.

The 1960s marked a particular period of unrest in America as Baby Boomers began to clash with the G.I. Generation. Perhaps most significantly, the focus of discourse at this time shifted toward issues of youth culture with deep-seated frustrations beginning to turn into anger as young adults struggled to define and express their individuality; the anti-establishment movement desperately wanted to break free from the control exerted by the State and corporations, eventually maturing a countercultural sentiment started by the Beat Generation into a milieu that gave birth to hippies and war protests. Baby Boomers, as a demographic group, also occupied a rather unique place in American history, coming into young adulthood during a time of post-war prosperity and the solidification of the middle class. Suddenly, upward social movement became increasingly possible for a generation that enjoyed increased amounts of leisure time and disposable income. This period simultaneously saw the birth of the Cultural Studies movement, which began to recognize that individuals were not merely passive consumers but people who possessed a sense of agency (Arvidsson 2006). Although formal study would not flourish until the 1970s, the creation of the Birmingham Centre for Contemporary Cultural Studies would prove to be a pivotal milestone in the understanding of branding and youth, for social scientists now had a systematic way to investigate the phenomenon brewing in the hearts and minds of the Baby Boomers.

Cultural observers also quickly noticed the shifting economic trend and began to express their findings in prominent publications of the time; Dwight MacDonald labeled the American teenager as a “merchandising frontier,” a comment that would not go unnoticed by marketing companies looking to capitalize on this new trend (1958). In fact, although the term “teenager” had only recently emerged in literature, companies such as Hires Root Beer had already begun peer-to-peer campaigns among youth in order to promote a product, thus demonstrating recognition of the teenager as a potential consumer (Quart 2003a). The understanding of the teenager as a marketing demographic would prove to be a label that would continue to influence youth through the rest of the century. The development of the teenage market, along with the corresponding rise of teen-oriented culture and identity, carries through to the present: seeds sown by Beatlemania have helped to develop an environment that permits fervor for teen idols like Justin Bieber. Perhaps more disconcerting is the relatively recent extension of this phenomenon, with marketers aiming at the “tween” audience (loosely conceptualized as 8-12 years of age) using children’s programming media such as animation and Radio Disney as their chosen vehicles (Donahue and Cobo 2009; McDonald 2007). However, irrespective of their status as tween or teen, American youth can arguably be understood to exist in an aspirational culture that highlights the benefits of consumption.

Before proceeding further, it should be noted that the connection between youth and products is a rather neutral manifestation despite its current negative connotation. We can, for example, consider how individuals in the 1970s appropriated products in forms of resistance and how the Punk movement essentially imbued recycled products with new and innovative meanings in the creation of a powerful subculture. The current generation of students has also matured in a culture of new media, whose hallmark is that consumers are simultaneously producers. Many are most likely aware of the possibilities of these new platforms—from Twitter, to Facebook, to YouTube—on some level, but the extent of production may elude those who are not actively involved; even individuals wholly enmeshed in this environment might not consider how mainstays like the creation of Internet memes (e.g., LOLcats), the various “Cons” (e.g., ROFLCon, VidCon, Comic-Con, etc.), and a culture of remix serve to position individuals squarely in a setting defined by its consumptive and productive practices. The challenge is, however, that the current generation’s products have become less tangible and more abstract: products now consist of things like data, intellectual property, and Negroponte’s “bits” (1995). Ultimately, it is the focus on individuals’ relationship to consumerism, often embodied, but not necessarily caused, by a connection with products, that results in the negative effects we observe.

The most salient effect of this consumerist culture mixed with the cult of celebrity—and, if recent documentaries like Race to Nowhere are to be believed, an overemphasis on achievement—is that children start to focus on their inadequacies as they begin to concentrate on what they don’t have (e.g., physical features, talent, clothes, etc.) rather than on their strengths. Brands, however, provide an easy way for youth to compensate for their feelings of anxiety by acting as a substitute for value: the right label can confer a superficial layer of prestige and esteem upon teens, which can act as a temporary shield against criticism and self-doubt. In essence, one might argue that if teens aren’t good at anything, they can still be rich and be okay (Quart 2003b). For some, this reliance on branding might explain a relative lack of substance amongst the teenage population, but the ramifications of a culture dominated by consumerism extend much further.

Brands can also be understood in the context of their ability to create and foster communities, prominently demonstrated by users’ sworn allegiance to Macintosh computers and Apple.[ii] The concept of a brand (or even a logo) can provide many of the benefits that come with membership in a group and, as such, also serve to define adopters’ identities. Conceptualizing brand as a community is a particularly powerful thought when considering teenagers, an age group composed of individuals who are arguably searching for a sense of belonging. Indeed, the very act of consumption can be thought of as a practice whereby individuals work to construct their self-identities and a common social world through products and the shared sets of meaning that those goods embody (Kates 2002; Belk 1988). In a manner that mirrors the underlying theme of American Psycho, we thus begin to see that manufactured items start to possess a value beyond their utilitarian function through a process that seems natural and inherent; it is only when we begin to privilege particular commodities—and communities by extension—that we begin to understand the negative role that branding can play for teenagers.

Further complicating the relationship, branding culture also exerts an influence on youth through lifestyle. Although the basis of this connection can be seen in the relationship between consumer culture and branding, brands can affect the process in more indirect ways. A number of factors, for example, from the media emphasis on teen culture to increased pressure surrounding college admission, might be forcing adolescents to classify themselves earlier than ever. Emphasis placed on entrance to selective universities provides an excellent demonstration of the drastic changes that young people have had to undergo in the early part of their lives; for many students aspiring to elite schools, college acceptance (and attendance) confers a particular type of status, and failure to achieve this goal by the age of 18 represents a profound disappointment. In order to secure this dream, young people might begin to package themselves—a “successful applicant” is no longer a student who did his best, but rather one who meets a specific set of criteria—turning their lives into a product, which they hope to sell to colleges and universities.

Branding associated with college admission showcases how marketing has developed into the promotion of a particular lifestyle, as opposed to a means of distinguishing and differentiating products (or, perhaps more cynically, as an extension of this process). In many areas, the mystique of the brand has become the important factor for consideration; the actual quality of an item does not seem to be as important as its perceived value.

The Rise of the Ad (Captandum)

When considering the state of modern youth, however, one might not see the packaging process associated with college admission as much of an anomaly. Children growing up in recent decades have been exposed to large amounts of media and advertising, which has served to cultivate a latent affinity with embedded narrative forms. The term “Adcult,” coined by University of Florida professor James Twitchell, depicts contemporary American society as an arena saturated with the lingering influences of commercialism (1996). Although the phrase results from a combination of “advertising” and “culture,” one can easily imagine Twitchell describing a group whose ideology revolves around concepts of marketing through a play on the word “cult.”

Advertising and branding, largely products of consumer culture, have a rather obvious economic impact; while one can certainly debate the mechanism(s) behind this process, one need only compare similar products with and without marketing schemes to ascertain that advertising can have an impact on manufactured goods. Rooted in the economic sphere, the development and presence of advertising is closely linked to surpluses in products—excess space in media, radio parts, and merchandise have all furthered the need for, or existence of, advertising—and thus can be understood in terms of monetary systems. As a pertinent example, compare the presence and impact of advertising on culture before and after the Industrial Revolution, when machinery allowed for the development of excess amounts of merchandise.

Staying solely within the framework of Economics, consider that advertising can help individuals to organize knowledge and to make informed choices about the world. In some ways, advertising tells consumers how their money can be best spent or utilized, given that currency is a limited resource. Yet, while arguably functional, anyone who has experienced a good piece of advertising knows that the reach of marketing exceeds the limits of economics—exemplary ads have the power to make us feel something. Although informal research can support the idea that memorable advertisements often influence us on an emotional level, a study by Stayman and Batra suggests that affective states resulting from advertising exposure can be stored and retrieved for later recall (1991). While the authors freely admit that they did not ascertain the exact mechanism for this process, one might posit that emotional responses to ads could result from the way that advertisements interact with our established belief systems and identity structures.

Continuing in the same vein, Twitchell contends that, “like religion, which has little to do with the actual delivery of salvation in the next world but everything to do with the ordering of life in this one, commercial speech has little to do with material objects per se but everything to do with how we perceive them” (1996, 110). While some might object to the mixing of influences in areas such as Advertising, Religion, Education, or Art (interestingly, some overlap is acceptable, but the issue remains murky), a certain amount of commingling is inevitable if we classify each entity as a belief system—a certain way of seeing the world complete with its own set of values—and understand that individuals might incorporate multiple elements into their particular worldview. Ideologies such as Religion or Advertising tell believers, in their own ways, what is (and is not) important in society, something that Twitchell refers to as “magic” (1996, 29). Each also professes a particular point of view and attempts to integrate itself into everyday life, drawing on our desire to become part of something (e.g., an idea, a concept, or a movement) that is larger than ourselves. Perhaps most importantly, the forces of Advertising, Religion, Education, and Art play on this desire in order to allow humans to give their lives meaning and worth, with a common thread being that followers can classify themselves in terms of the external: God, works of art, name brands, etc. Cynics might note that this phenomenon is not unlike the practice of carnival sideshows mentioned in Twitchell’s Adcult—it does not matter what is behind the curtain as long as there is a line out front (1996). Although the attraction may assume different forms, it survives because it continues to speak to a deep desire for structure—the myth of advertising works for the same reasons that we believe in high art, higher education, and higher powers.
The process of ordering and imbuing value ultimately demonstrates how advertising can not only create culture but also act to shape it, a process also evidenced by marketing techniques’ ability to consume and/or reference previously shared cultural knowledge while simultaneously contributing to the cultural milieu.[iii]

Despite the intricate and multi-faceted nature of its impact, we can use the narrative characteristics of advertising as a framework for understanding its influence. On a basic level, the format of advertising typically takes the form of a loose narrative, complete with implied backstory—television spots, in particular, provide a salient example of this. Yet, the messages present in advertising can also cause us to question our sense of self as we evaluate our belief systems and values as previously mentioned. Consider how personal identities can result from narrative or actually be narrative; sentences containing “to be” verbs can be unpacked to reveal a larger narrative structure that can help us to “cope with new situations in terms of our past experience and gives us tools to plan for the future” (Sfard and Prusak 2005, 16). Twitchell supports this idea by mentioning that “the real force of Adcult is felt where we least expect it: in our nervous system, in our shared myths, in our concepts of self, and in our marking of time” (1996, 124). Advertising, it seems, not only allows us to construct a framework through which we understand our world, but also continually informs us about who we are (or who we should be) as a collection of narratives that serves to influence the greater perceptions of youth in a manner reminiscent of the role of television in Cultivation Theory (Gerbner and Gross 1976).

The Medium Is the Message

Understanding the process by which the framework of branding affects contemporary society enables modern scholars to conceptualize how consumer culture can shift (or even create) paradigmatic structures that have far-reaching effects for college applicants. Recasting branding and advertising as manifestations of modern myths proves crucial to understanding how the messages, as narrative, help to convey complex ideas in a relatable format, making sense out of a potentially overwhelming wave of information. Consider how the first iterations of narrative, myths and legends, informed the populace about the rules of a world (e.g., why the sun rose or how humans had come to be) in a process that mirrors one of the previously discussed functions of advertising; although many have now come to accept scientific explanations in lieu of (or possibly in conjunction with) these tales, the fact remains that stories can serve to develop cognitive scaffolding as we evaluate foreign concepts. Narrative structure provides a guide for people to follow as they absorb additional information, easing the progression of learning (Perlich and Whitt 2010). This educational element, similar to the one existent in the concept of play, allows individuals to learn and internalize intricate lessons without any overt effort. However, when considering this process, it is important to realize that narrative, in choosing which facts to highlight, also chooses which facts to exclude from a story, which might be just as significant.

For some, the process of inclusion and exclusion might seem oddly similar to the creation (or recording) of history; certain facts become relevant and serve to shape our perception of an event while others fade into obscurity. On reflection, however, we realize that narratives often served as the first oral histories for a given population. Individuals entrusted with this position in these societies were the “keepers of information,” whose ability to recount narrative shaped their community’s collective memory and, thus, a key part of the community’s combined sense of identity (Eyerman 2004; Williams 2001). Branding performs a role similar to that of the oral historians of the past: modern society’s sense of shared knowledge can be understood to be shaped by this form of commercial storytelling (Twitchell 2004). The ramification of branding’s ability to affect American culture in this manner is profound: with its capacity to color perceptions, branding can influence the communal pool that forms the basis for social norms and cultural capital.

The notion of narrative’s impact on the sense of self is an interesting one to consider, particularly in youth-oriented marketing, as it affects individuals who are in the process of forming their identities (as opposed to adults whose self-concepts might be, one might argue, more static); in a process analogous to branding, adolescents try on different personalities like clothes, looking to see what fits. While not entirely insidious, teen marketing can exploit this natural process by providing shortcuts to identity through the power of branding. Altering perceptions, branding can activate particular sets of associations that have been ingrained by marketing into adolescents and therefore act as a value heuristic for youth. For teenagers navigating the social circles of their peer groups, labels can make an enormous difference.

Tricks are for Kids?

            Young people, however, are not the only ones prone to mental shortcuts; adults—including those who make evaluative judgments—have also been conditioned to rely on heuristics as guidelines, using experience to help them determine which rules to keep (Dhami 2003; McGraw, Hasecke and Conger 2003). While heuristics generally provide users with an accurate conclusion, they are notoriously fallible and consistently exploitable.[iv] The question then becomes:  if adults are subject to heuristics in decision-making processes and these heuristics are sometimes faulty, what heuristic(s) might be active during the evaluation of candidates for admission and how might this affect our method?

Even if we grant that the particular nuances of the application review will differ by individual institution, we can still examine the admission process in terms of branding culture. File evaluation partially rests upon our ability to sort, organize, and simplify massive amounts of information in order to gain perspective on our applicant pool. While reading the application, we filter the information through our own unique lenses—the networked set of thoughts, associations, and biases that we bring to the table—as we attempt to develop a context for the student represented by the file in front of us. Buzzwords (e.g., “President,” “Scout,” “Legacy,” “2400,” “Minority”) in the application, acting like puzzle pieces, instantly activate particular collections of neural pathways as we begin to ascribe value; buzzwords, then, can be understood to function in a manner similar to brands and advertising.

Harkening back to the continuum of high culture and low culture, we can also think about how some key terms are privileged over others. How, for example, does the president of a school club differ from the president of an online guild? Knowing nothing else, I believe that many of my colleagues would favor the established activity over the unknown. For these individuals, I would argue that past experiences with students had most likely factored into the development of a heuristic regarding student desirability, resulting in a series of mental leaps that, over time, would grow into instinct. While good readers learn to continually challenge themselves and check their biases, there might be a systematic devaluation of particular identities in the admission process—an opinion piece by Ross Douthat in the New York Times suggests that lower-class whites might just be such a demographic (2010)—not out of active bigotry but simply because the brand does not resonate with any of our pre-set associations regarding a successful student. Worse, perhaps, we unwittingly privilege individuals with large amounts of social capital (and its inherent advantages), favoring those who know to participate in the “right” activities.

In a similar vein, my research at the Annenberg School for Communication and Journalism hopes to provoke discussion in this area by attempting to look at the trajectory between popular culture and civic engagement; in essence, my colleagues and I hope to discover how seemingly innocuous activities in the realm of pop might actually allow students to develop skills that allow them to participate meaningfully in their communities. We believe that popular culture can act as a training ground for young people, allowing them to cultivate skills in the areas of rhetoric, agency, and self-efficacy before applying their talents in the “real” world. We recognize that the actions and experiences undergone in the world of pop culture can be ambiguous and difficult to understand; we also argue, though, that these traits are no less valuable to youth because they are not easily comprehensible. For us, some of the most amazing things happen in fandoms related to the iconic world of Harry Potter, YouTube communities of Living Room Rock Gods, and political statements in World of Warcraft (From Participatory Culture to Public Participation 2010). Ultimately, we hope to challenge public perceptions regarding participation in fan communities, demonstrating that popular culture fills a uniquely productive role in the lives of its participants.

 

The Next Big Thing

In our attempts to do good, we preach admission tips at college fairs and workshops telling students how they can develop their applications and stand out from their peers without coming across as packaged. We tell applicants to cultivate a point of view, or an image, or a passion—yet, how is this, ultimately, different from asking a student to define and market a brand? Are we subtly encouraging our youth to turn themselves into products with the additional request that they not seem like man-made fabrications? What is our ethical responsibility in responding to a college culture infused with lovemarks and their concept of loyalty beyond reason (Roberts 2005)? Does the structure of our applications cause students to begin to consider themselves in terms of taglines and talking points as they scramble to mold themselves into the image of the ideal student? This is not our intent, but I fear that it is our future. If we, as professionals in Higher Education, do not understand the possible implications of branding culture upon ourselves, our students, and our occupation, we cannot hope to begin to address the commoditization of college applicants.


NOTES

[i] A more complete history would begin with the post-war economic boom of the 1950s, but mention of this is omitted in the interest of space as it is not directly relevant to the youth population. There are, however, interesting examples in this decade of branding’s movement away from mere signification to a means of differentiating the self in a culture dominated by norms of conformity. More information on the phenomenon of conformity and avoidance of ostentatious display can be found in William Whyte’s The Organization Man (1956).

[ii] It should be noted that Apple seemed to grasp this concept fairly early on and developed a successful series of ad campaigns around the idea of community, most notably the “Think Different” slogan and the recent rash of “Mac vs. PC” television spots. The “Think Different” campaign, in particular, positioned users of Macs as a group in league with great thinkers of the modern era and also invoked the principle of psychological reactance in order to further strengthen the inter-community bonds.

[iii] The concurrent horizontal and vertical spread of advertising is reminiscent of memes, a concept created by evolutionary biologist Richard Dawkins. According to Dawkins, memes represent discrete units of cultural knowledge that propagate in a particular society (analogous to genes) through a number of transmission methods (1976). While the concept of memetics certainly spans areas other than advertising, Dawkins notably included, as examples of memes, catch phrases (i.e., slogans), melodies (i.e., jingles), and fads. Consequently, although advertising inevitably forms a new type of culture in societies, ads also serve to broaden exposure to, or strengthen the connections of, existing aspects of culture for those subjected to them as they burrow deep into our collective society.

[iv] There are many volumes written on this subject from the perspectives of both Social Psychology and Advertising. As a brief example, I will mention that a fairly common heuristic positions cost as directly proportional to value. The foundation for this equation lies in the belief that more expensive items tend to be better quality, more exclusive, or otherwise desirable. For a more comprehensive review of heuristics in the realm of persuasion, please see Influence: The Psychology of Persuasion (Cialdini 1984).


These Are My Confessions


PostSecret raises a number of questions for me, specifically about how the community art project reflects our current culture of confession. In particular, my work focuses my attention on youth, and I often wonder how the current state of media might have affected the success of a movement like PostSecret.

Growing up, I remember watching the first seasons of The Real World and Road Rules on MTV (yes, I’m that old) and was always entranced by the confessional monologues. As a teen, the confessionals possessed a conspiratorial allure, for I was now privy to insider information about the inner workings of the group. However, looking back, I wonder if this constant exposure to the format of the confessional has changed the way that I think about my secrets.

The confessional format has become rather commonplace on the slew of reality shows that have filled the airwaves over the past decade, and the practice creates, for me, an interesting metaphor for how Americans have come to learn to deal with our struggles. As confessors sit in an isolation booth, they simultaneously talk to nobody and to everybody; this stands in stark contrast to the typical connotation of “confession” and its associated image of an intimate discussion with a priest.

PostSecret, in some ways, is merely a more vivid take on this practice of reality show confessions; we hold our secrets in until we get the chance to broadcast them out across the interwebs. We oscillate between silence and shouting—perhaps we’ve forgotten how to talk? As Shannon mentioned, we might tend to our secrets, keeping them safe because we derive our identity from the things that we hide. We are desperate to make connections, to find validation, and to be heard.

Connection and validation are things that PostSecret definitely provides, but the development of the Voice is perhaps the reason that I am simply in love with the project. In its own way, PostSecret allows participants to declare and refine their identities, but it also allows individuals to see that their voices matter and are heard. I often work with young writers, and one of the things that strikes me most is that many of these children believe that they have nothing to say or that no one cares about their point of view. Breaking this preconception takes some time, but some students are able to realize the unique power that they wield and leverage their Voices to create potent statements.

Should you find yourself with five minutes to kill, I encourage you to head over to the blog. Seeing the secrets presented on the site has changed my life.