Thoughts from my study of Horror, Media, and Narrative

Books

Not so Much a Teaching as an Intangling

Stanley Fish

Bibliography

Fish, S. (1967). Surprised by Sin. Cambridge: Harvard University Press.

Biography

Fish is associated with the concept of “interpretive communities,” which suggests that a reader’s response to a given text is shaped by subjective experience. Although Fish would argue that no single reading of a text exists, the concept of interpretive communities suggests that, based on experience, a particular reading of a text is likely to be more salient than others. In the case of Milton, Fish often points to the way in which a reader is influenced by Christianity.

Although trained as a medievalist, Fish had no formal training in Milton studies when he began teaching a course in the subject at the University of California, Berkeley. Fish’s book, Surprised by Sin, was important in the field of Milton studies as it attempted to reconcile the divide that had formed between schools of thought that venerated Milton (e.g., William Blake and Percy Shelley) and those that disparaged him (e.g., T.S. Eliot and F.R. Leavis) by suggesting that the difficulty that readers experienced when reading the poem was not evidence of a failing on the part of the author but rather a strategy by Milton to help the reader better grasp the subject matter.[1]

 

 

Summary

In “Not So Much a Teaching as an Intangling,” an excerpt from his book Surprised by Sin, Fish utilizes a reader-centered approach in order to argue that Milton’s diction in Paradise Lost was designed to arouse a measure of self-examination in the reader that could be traced back to a dissonance between expectation and experience. Here we begin to see a strategy that juxtaposes the successes/failures of the poem with those of its author—a contrast to Formalism and Structuralism, which would not have directly engaged with such issues. In particular, Fish focuses on a rereading of the way in which Milton’s poem seems to qualify itself, arguing that such qualification is not a weakness of Milton’s but instead a deliberate effort on the part of the author to dislocate the reader and cause him or her to question an initial reading or interpretation.

As an example, Fish introduces lines 292-294 of Book I in order to illustrate the way in which a reader’s initial understanding might be subsequently challenged:

His spear, to equal which the tallest Pine

Hewn on Norwegian Hills to be the Mast

Of some great Ammiral,[2] were but a wand.

Fish writes that a reader’s instinct here is to compare a spear and a pine in terms of their physical similarities as objects and, while this is one way in which to understand a notion of “equal,” it is not, ultimately, what Milton intends. Presented with the unusual problem of navigating between concreteness and grandeur, Fish argues that Milton structures his words this way so that “we are relieved of the necessity of believing the image true, but permitted to retain the solidity it offers our straining imaginations” (201).

One point of criticism here is that although Fish advocates for interpretive communities and a viewpoint grounded in readers’ responses to texts, his analysis gestures toward acceptance of a singular reaction that resolves the elements of Milton into a particular understanding of the text. Fish, then, is focused on readers but does not go so far as to allow for the multiple readings/responses that would appear in postmodernism, and he has drawn criticism from figures like the philosopher Martha Nussbaum, who comments on his tendency to resist conflict in his analysis. Additionally, of particular note is the way in which the ideal reader evidences a Christian sensibility, which is only relevant if one is considering the likely audience for Milton’s poem when he initially wrote it.[3]

Fish’s larger point with this example, however, is to suggest that Milton’s aim is to gesture toward a reality that is beyond the range of normal human experience and perception. Fish argues that traditional similes are tied to a time and a place and that the subject matter of Milton’s poem exists outside of these boundaries, which means that the reader’s sense of lack or inadequacy is crucial for Fish as it speaks to the emotions that Adam and Eve experienced as they sought something just outside of their grasp.

 

In his analysis Fish also attempts to develop a distinction between two types of argumentation in Paradise Lost:  rhetorical and logical. Aligning the former with Satan and the latter with God, Fish seems to create an either/or binary that is particularly focused on displaying the inadequacies of the reader for reasons previously discussed. On page 209 Fish writes:

“The reader who fails repeatedly before the pressures of the poem soon realizes that his difficulty proves its major assertions—the fact of the Fall, and his own (that is Adam’s) responsibility for it, and the subsequent woes of the human situation…The reader who falls before the lures of Satanic rhetoric displays again the weakness of Adam, and his inability to avoid repeating that fall throughout indicates the extent to which Adam’s lapse has made the reassertion of right reason impossible.”

 

Although Fish argues for the productivity of the self-realization that results from a confrontation with one’s failings, the underlying assumption here is that rhetoric is present to mislead the reader. It is, however, unclear whether Milton himself would have supported a similar opposition between rhetoric and logic as his writings in Of Education seem to indicate that both were intended to be used in conjunction with one another.


[1] Milton’s poem has also been traditionally polarizing with battle lines being drawn around how one responded to the depiction of Satan.

[2] Interestingly, Dictionary.com provides the following definition:  An obsolete form of admiral. “The mast of some great ammiral” –Milton.

[3] See, for example, “And [Milton’s] readers who share this Christian view of history will be prepared to make the connection that exists potentially in the detail of the narrative” (208).


Seen but Not Heard: Feminist Narratives of Girlhood

Caitlin Moran, How to Be a Woman, New York: Harper Perennial, 2011, 320 pp., $15.99 (paperback).

Peggy Orenstein, Cinderella Ate My Daughter:  Dispatches from the Front Lines of the New Girlie-Girl Culture, New York: Harper, 2011, 256 pp., $25.99 (hardcover).

Reviewed by

Chris Tokuhama

University of Southern California

           The first thing one needs to know about Caitlin Moran’s How to Be a Woman (Harper Perennial, 2011) is that it is not an academic book, nor does it claim to be. Moran, a columnist for the London paper The Times, rightly asserts that the movement of feminism is too important to be discussed solely by academics and endeavors to use vignettes from her life to illustrate particular ways in which the question of feminism infiltrates the everyday lives of ordinary individuals in meaningful ways. In and of itself, this effort represents a perfectly admirable attempt to reintroduce notions of feminism into mainstream culture, but good intentions can only carry one so far.

            Ultimately, when boiled down to its purest essence, Moran’s assertion that she has “stuff to say” (12) is really what this book is all about. Moran has assembled a collection of shorter pieces loosely linked by the fact that they all derive their thrust from a moment in which an experience has given her some insight into the condition of being a woman—and a pointedly white and heterosexual one at that—in the United Kingdom. Given Moran’s background as a columnist, one is not surprised that her book should take this form and, indeed, one might be inclined to deem the project successful if the book were conceived simply as a memoir of sorts. Instead, however, Moran positions her book in the tradition of the feminist practice of consciousness raising and readers must question what sorts of insights are gained from perusing this particular text.

            “But wait!” Moran might argue, “I’m not a feminist academic!” (12) And she would be correct in that assertion. What the caveat does not excuse, however, is a demonstrated lack of rigor in thought or practice. As one example, Moran cites an “Amnesty International survey that found that 25 percent of people believe a woman is still to blame for being raped if she dresses ‘provocatively’” (203), which might very well be true, but Moran does not provide any means to verify such a statement. It is precisely because feminism is such an important issue that Moran should do her due diligence and not allow her position to be undermined by an easy attack; Moran should force her detractors to confront her ideas and not her evidence. The lapse is frustrating, since Moran has some really good ideas.

            For example, one of the themes that runs throughout Moran’s book is the way in which being a woman (i.e., female identity) is manifested through, and displayed on, the body and that women’s internalized sense of how to appropriately discipline their bodies plays a key part in becoming a woman in the United Kingdom and the United States. Pubic hair, in particular, occupies a bit of Moran’s attention as its initial appearance and subsequent removal remain closely linked to conceptualizations of womanhood and femininity. A notable section of Moran’s second chapter discusses how technical considerations of shooting pornography marketed to heterosexual men—again we must be wary, for this constitutes conjecture and Moran provides no sources—have been imbued with a layer of cultural meaning that consequently influences women’s grooming habits. Women, in short, are affected by a cultural product that likely does not have their best interests in mind and it is precisely this type of revelation that illustrates the continued relevance of feminism. And yet it is also interesting to note when and where Moran draws arbitrary lines:  pubic hair, for example, should be trimmed but not waxed. But why, you might ask? Here Moran misses an opportunity to discuss the larger implications of the way in which women (and men) have been socialized to relate to women’s bodies and although Moran correctly notes that pubic hair is different from other forms of ancillary hair in that it is sexualized, she fails to touch on the broader issue of how hair management (in which trimming would surely be included) is related to perceptions and enactment of femininity.

            And of course it would seem rather impossible to discuss female bodies and femininity without broaching the subject of the vagina. Moran muses on a conversation with her younger sister, “So now, in 1989, we have no word for ‘vagina’ at all—and with all the stuff that’s going on down there, we feel we need one” (56). Although Moran goes on to talk about the various euphemisms that women have for their vaginas, she does not touch upon the way in which this practice points to the crucial role that language plays in configuring, maintaining, and enacting the relative subjugation of women. Moran notes, for example, that terms describing the vagina have the ability to cause discomfort—she observes that the most offensive male counterpart is “dick”—but ultimately portrays this phenomenon as empowering and does not discuss the ways in which this difference is actually indicative of a problem. What does it mean, for example, that we are much more familiar with, and accepting of, dicks? Nothing particularly good for women, most likely. Compounding the problem, Moran makes a critique about how “pussy” evidences a disconnect between women and their vaginas but does not comment on the way in which a refusal to embrace “vagina” ultimately leads to the same conclusion. Here, instead of making a compelling argument about the way in which language can be used to excavate relationships, Moran merely produces a polemic about the vagina’s various names that ultimately boils down to a description of her personal taste without investigating how her taste—and here one might certainly nod toward Bourdieu—was cultivated in the first place.

And the vagina performs a key function for Moran as she provides a catchy, if unhelpful, survey on page 75 to determine if one is in fact a feminist:  Do you have a vagina? Do you want to be in charge of it? (Feminists, by the way, should answer “yes” to both.) As in Naomi Wolf’s Vagina, we see a way in which the vagina is made to stand in for the entirety of womanhood, essentially reducing the meaningful elements of a woman to her vagina. Surely, this is a provocative question, but it is not an incredibly feminist one in the long run. Moreover, what about those who do not have a vagina (e.g., men)? How are they supposed to figure out if they are feminists or not? Compounding the problem, we must investigate what it means to be in charge of one’s vagina:  in the abstract, one might state that being in charge means that one should be able to do whatever one likes with one’s vagina but we are left to question how such a practice manifests in the real world. Here, Moran’s ambiguity allows her to assume a position that is difficult to counter, for who would argue that women should not have control over their own bodies in theory? Moran provides a good sound bite that is ultimately meaningless, however, for there are many ways in which the actions of men (and women) do not evidence a belief that total control of the vagina belongs to the women who bear them.

And yet perhaps the most problematic way in which Moran’s ambiguity affects her writing rests in the rather casual way she employs the term “the patriarchy.” On one hand the term is easy enough to define but where Moran fails is in her refusal to explain exactly what “the patriarchy” encompasses; patriarchy manifests in a variety of forms and through myriad agents as it operates on individual, interpersonal, and institutional levels. Here one senses another distinct limitation in her work:  How to Be a Woman is written by Moran to other women like Moran. For Moran, “the patriarchy” does not need to be defined because its meaning is assumed. Ultimately Moran’s overly simplistic attempts to define feminism and patriarchy also do a larger disservice as Moran fails to address the notion that individuals may benefit from feminism without ever being feminist themselves. Moran’s assumptions about feminism occlude the nuanced ways in which individuals can work to support both feminism and patriarchal hegemony in a manner that does not produce internal conflict.

In contrast to Moran’s efforts, one feels compelled to laud a work like Peggy Orenstein’s Cinderella Ate My Daughter:  Dispatches from the Front Lines of the New Girlie-Girl Culture (HarperCollins, 2011) for its ability to use personal narrative as an entrée to discuss the way in which female gender roles are configured and interpreted on a variety of levels. Using her experiences with her daughter as a narrative backbone, Orenstein carefully develops a series of thoughts about the effect that princess culture has on contemporary children.

Primarily focused on the influence of markets, Orenstein shows how economic concerns have played a large part in shaping the world that girls experience today. From the concept of Disney Princesses as an effort to revitalize a flagging corporate consumer products division to the way in which American Girl dolls promote intergenerational female bonding through consumption to the mapping of a family’s aspirations for social mobility onto child beauty pageant contestants, Orenstein illustrates how disparate aspects of girlhood are connected to each other and to a larger system of meaning. It is precisely because of the influence of marketing, Orenstein argues, that the transgressive core of “girl power” has been eschewed for the faux empowerment of “girlz.” The insidious bargain that girls strike is to gain claims toward empowerment by using consumption to reaffirm traditional gender roles. Even as fewer opportunities become salient for young girls—here reference is made to a classroom exercise in which young girls chose to imagine themselves as a princess, a fairy, a butterfly, or a ballerina in contrast to boys who assumed a variety of roles—Orenstein explores how performance of gender has become increasingly divorced from notions of female pleasure. Particularly notable are the ways in which Orenstein uses new communications technologies like social media and picture messaging to showcase how young girls’ identities have become, in part, more externally focused with the cultivation of the self as a kind of real-time performance piece that lies parallel to one’s physical existence. Sexting, for example, is not a post-feminist celebration of the body but rather constitutes a functional practice where girls demonstrate their ability to use their bodies as means toward particular ends (e.g., keeping a boyfriend). Orenstein also suggests that young girls develop a form of internalized self-surveillance as they learn to see themselves and their bodies as others do. Connecting this to chapters on body image and princess gowns, Orenstein builds a case for how body, femininity, and self are intimately related for girls; for many girls, how one feels is related to how one perceives one’s body to look. Ultimately, Orenstein challenges readers to question exactly what kind of practical power is provided by an empowerment that continues to be grounded in perceptions of the female body.

In contrast to the ambiguity that Moran displays about being a woman at the end of her book, Orenstein develops a clear plan of action that asks individuals to consider how they participate in the maintenance of a culture that might be detrimental for girls. Although both authors ground their analysis in the trappings of everyday life, the key to Orenstein’s success is the way in which she calls for a type of engagement that extends beyond Moran’s request that readers get up on a chair and proclaim “I am a feminist!” In the end, it is a shame, really, for frank discussion of feminism’s importance is sorely needed in today’s society and Caitlin Moran owes the awkward thirteen-year-olds of the world—including the one who forms the core of her story, herself—better.


Once More, With Feeling

For me, notions of trauma and Freud are inextricably bound with horror; or, perhaps more accurately, I choose to interpret these events in such a way. Of particular interest to me in the readings for this week was Caruth’s note that stories of trauma, at their core, touch upon a dual set of crises:  the crisis of death and the crisis of life (7). What meaning does life continue to hold after one has become intimately familiar with the inevitability of one’s own death? I continue to think about how individuals who have experienced trauma are forced into a sort of liminal space between worlds wherein life (as we know it) is made strange in the face of death; although achingly familiar, life is forever made uncanny.

Although Freud speaks to the interwoven themes of life and death in his treatment of Thanatos/Eros, I (again because of my horror background) tend to think about these issues as they are inscribed on, and enacted through, the body. Horror, of course, has a long history of obscuring the boundaries between sex, violence, life, and death (let’s not even get started on the modern history of the vampire love triangle), with a number of academic works uncovering the implications of this in psychoanalytic terms. Reading Caruth’s mention of trauma as accident, however, caused me to contemplate one of the works that I find myself continually revisiting over the years:  David Cronenberg’s Crash. (Note:  If you are not familiar with the movie, you may want to check out the Wikipedia page before watching the trailer—my undergraduate training was as a Pre-Med Biology major and I study horror in my current work so I fully recognize that my threshold may be far off the norm.)

The film (and the book that it is based upon) speaks to a point made by Caruth in the final section of the introduction:

“It is possible, of course, to understand that other voice, the voice of Clorinda, within the parable of the example, to represent the other within the self that retains the memory of the “unwitting” traumatic events of one’s past. But we can also read the address of the voice here, not as the story of the individual in relation to the events of his own past, but as the story of the way in which one’s own trauma is tied up with the trauma of another, the way in which trauma may lead, therefore, to the encounter with another, through the very possibility and surprise of listening to another’s wound” (8).

I fully admit that Caruth means something slightly different in her passage but I think that there is something worth considering here with regard to trauma:  what does it mean that we can be divorced from ourselves and our world by trauma yet connected to others through trauma? Is this form of connection possible only because we seek to redress a deficit of some sort?

But there is also something fascinating to me about this intense desire to relive the trauma (in this case a literal accident) over and over in a way that does not necessarily speak to any sort of desire to “get over it” as one might expect from treatment of PTSD or in aversion therapy. There is something powerful, I think, in attempting to understand the mentality of those who do not relive trauma in order to escape it but instead have come to feel that the moment just prior to their death is precisely the moment in which they feel most alive. To be traumatized, then, is not to be subject to an ongoing process of everyday nightmares but to suffer the indignity of life’s ceaseless banality. Continuing this thought, we have seen over the course of the semester that the despondence and disconnection that potentially results from close contact with death can take on many forms and that the issue continues to pervade our current culture, if Buffy Summers (taking a cue from Doc Hata) is any example:

The notion of the voice and speech is interesting to me here because, like in all good musicals, Buffy sings only what she cannot say. In the end, perhaps this insistent desire to relive trauma is not about any sort of masochistic drive—assuming that most of us do not like to suffer per se—but rather an attempt to glimpse the knowledge that lies beyond the shock and the numbness:  to do it once more, with feeling.


Mind over Matter? Mind as Matter!

If Cyberpunk was concerned with mapping, defining, and controlling the Metaverse—the space out there—then this week’s readings largely seemed to concern themselves with disciplining the space in here, invoking aspects of Cartesian dualism along the way.

Helping us to transition from Cyberpunk to Cyborg Feminism is Pat Cadigan’s Mindplayers. Thematically, the book seemed to express striking elements of the cyberpunk ethic (as much as the movement can be said to express a unified voice) with its subjugation of the body to the mind. Though perhaps expected given the book’s title, I was still struck by how personhood was defined in terms of the mind and not the body.

In any case, Gladney (who was apparently still going by that name for the sake of convenience) had passed all the critical points in redevelopment and become a person, again or for the first time, depending on your point of view. He was certainly not the same person—the man who had emerged from the blank brain was (probably?) reminiscent of his former self but no more self than he was anyone else. (200)

This is, of course, not to imply that the point of view expressed in Mindplayers is at all invalid, for one readily sees the important role that the mind plays in cases like Terri Schiavo’s with respect to identity and personhood. However, at the same time and for all of the interesting thought experiments that occur in the overlap between mental and virtual worlds, I think that Cyberpunk does us a disservice by leaving out discussion of the body.

And what are the causes and implications of such a move?

I don’t think that we, as a culture, have forgotten about our bodies but I do think that Cyberpunk has helped to naturalize the sense that we have domesticated, controlled, and tamed our bodies such that the “body” is simply not worth talking about anymore. Cyberpunk is, of course, not solely responsible for this act as we have come to do much to discipline our bodies over the years (easiest to see in the realm of health/sanitation and cleansing the body, removing/cutting hair, etc.) but it did seem to further drive a wedge between the potential of the mind and that of the body.

Getting away from the Cartesian dualism, I do not think that we can only talk about the body at the expense of the mind, but I am incredibly interested in understanding how these two perspectives are linked. Although we know that both systems can operate independently of one another (with reflexes on one hand and brain activity after death on the other), I am most curious about the ways in which the mind knows the world only through the body and how the body can have its own form of memory.

Speaking somewhat to this point of view, Cyborg Feminism also joins the discussion at roughly the same time as Cyberpunk (if perhaps a tiny bit later). Although not expressly concerned with these issues of the body per se, I see Cyborg Feminism as a movement that seeks to reincorporate discussion of the body by challenging the notion of the body’s construction as natural or sacrosanct. In Cyborg Feminism we see the beginning of themes that will continue on in Transhumanism as we cobble together new bodies out of bits of technology, as the movement argues for the extension of the human body. (As a side note, I am just starting to look at the movement from Posthumanism to Transhumanism and how this trajectory mirrors the transition from Cyberpunk to Cyborg Feminism with respect to the obliteration/eradication/subjugation of the body and its reformation.) Ultimately, however, I am interested in what all of this might mean for the ways in which we define our bodies—beyond discussion of bioethics (although this is surely a part of the conversation), I am curious to see whether we will work to reconcile new manifestations of the body with our evolving identities or instead seek to repress them.


I Gotta Watch My Body–I’m Not Just Anybody

I admittedly worked backward from The Matrix, slowly discovering Blade Runner and Snow Crash as I delved deeper into Science Fiction. In retrospect, I realize that the genre of Science Fiction is much broader than the theme of cyberpunk, but, as a child growing up in the 90s, the mainstream Science Fiction that I encountered seemed to belong to this subgenre. I suspect that I, like many others, was drawn in by the aesthetic more than the content per se (I was not heavily into technobabble much less willing to identify in any way as a computer geek), but I also wonder if the genre spoke to me on another level as well.

Going back over the works now, I find myself struck by the concept of embodiment present throughout much of the fiction. Incorporating the creation of computer technologies like the Internet and virtual reality into their work, many authors seemed to speculate on the eventual cultural impacts on the traditional mind/body duality as technological societies progressed into the future (e.g., Don Riggs’ “disembodied” and “trans-embodied”). Looking over the works of cyberpunk, there seem to be many interesting thought experiments regarding the nature of the body, what constituted it, and how our brain worked with/against our bodies. The question that I am left with is:  What happened to all of that discussion?

Although I have not done an extensive study of modern Science Fiction, it seems like much of the issue appears to be settled. In recent memory, Surrogates comes to mind as an example of body swapping (albeit between live and mechanical bodies) but doesn’t seem to explore the impact of bodies’ interactions with the world around them and how this sensation also serves to constitute the construction of the body (i.e., the body is not merely bounded by skin). The logic of the movie seems to indicate that one can readily swap bodies (with a slight sense of disorientation as one moves from one body to the next) but never really addresses the issue that it is entirely possible that these bodies exist in two slightly different worlds because they react to their environments in different ways.

I struggle with this because I wonder if we have, in a way, given up on our bodies as things that are fallible and subject to decay. We feel betrayed by our bodies when auto-immune diseases manifest and are all too aware that our bodies will wither with age (if we don’t get cancer first). For me, the major impulse in Cyberpunk seemed to be a desire to figure out a way to upload one’s mind to a distributed network, becoming one with the machine in consciousness, if not in body. And yet, in recent years, the focus seems to have swung toward the other end of the continuum (if one can indeed place such things on a linear scale) as we seek to incorporate increasingly advanced biomechanical parts in our bodies. Despite the flourishing of artificial limbs and synthetic organs, we seem to have ceased discussion on what this means for how we conceptualize and define a body. Perhaps the quiet has resulted from our culture coming to a conclusion about how the body is constituted? I think, however, that we have, in a fashion, forgotten about our bodies and how they are not merely containers for our brains. Rather, they are part of a system, with our minds accruing knowledge by virtue of experiencing things through the filter of our body as our bodies, in turn, provide a way for our minds to interact with the physical world around us.


Hunger Games ID

QR code for Hunger Games ARG…! Scan, please, if you have a chance!


Legends of the Fall

When reading the fiction of Cordwainer Smith, I found myself making connections to Richard Matheson‘s I Am Legend. Although I would classify I Am Legend as more of a horror story than a work of Science Fiction—that being said, the genres have a tendency to overlap and a strict distinction, for this current article, is not necessary—both pieces were published in the 1950s, a time assuredly rife with psychological stress. Although we certainly witness an environment coming to terms with the potential impact of mass media and advertising (see discussion from last week’s class), I also associate the time period with the incredible mental reorganization that resulted for many due to the increased migration to the suburbs—a move that would cause many to grapple with issues of competition, conformity, routine, and paranoia. In a way, just as 1950s society feared, the threat did really come from within.

And “within,” in this case, did not just mean that one’s former neighbors could, one day, wake up and want to eat you (one of the underlying themes in zombie apocalypse films set in suburbia) but also that one’s mental state was subject to bouts of dissatisfaction, depression, and isolation. Neville (the last human in Matheson’s book, who must fight off waves of vampires) and the protagonists in Smith’s stories are each othered in their own ways, and although Smith overtly points to themes of empowerment/disenfranchisement, I could not help but wonder about the psychological stress that each character endured as a result of a sense of isolation. Martel (“Scanners Live in Vain”) fights to retain his humanity (and connection to it) through his wife and cranching, Elaine (“The Dead Lady of Clown Town”) falls in love with the Hunter and fuses with D’joan while Lord Jestocost (“The Ballad of Lost C’mell”) falls in love with an ideal, and finally Mercer (“A Planet Named Shayol”) unwittingly chooses community over isolation by refusing to give up his personality and eyesight.

Throughout the stories of Matheson and Smith, we see that the end result of warfare is a shift in (or acceptance of) a new form of ideology. (This makes sense particularly if we take Smith’s position from Psychological Warfare that “Freedom cannot be accorded to persons outside the ideological pale,” indicating that there will necessarily be winners and losers in the battle necessitated by differences in ideology.) In particular, however, I found “Scanners Live in Vain” and “The Dead Lady of Clown Town” most interesting in that their conclusions point to the mythologizing of characters (Parizianski and Elaine), which is the same sort of realization reached by Matheson’s Neville as he comes to terms with the concept that although he was, in his own story, the protagonist, he will be remembered as a conquered antagonist of a new humanity. Neville, like Parizianski and Elaine, has become legend.

Ultimately, I think that Smith’s stories weave together a number of interrelated questions:  “What is the role of things that have become obsolete?” “What defines a human (or humanity)?” “How is psychological warfare something that is not just done to us by others, but by ourselves?” and, finally, “If psychological warfare is an act that is committed to replace and eradicate ‘faulty’ ideology, what is our role in crafting a new system of values and myths? What does it mean that we become legends?”


The Most Important Product Is You!

“The Culture Industry” seems to be one of those seminal pieces in the canon of Cultural Studies that elicits a visceral (and often negative) reaction from modern scholars. Heavily influenced by the Birmingham School, generations of scholars have been encouraged to recognize agency in audiences, with the Frankfurt School often placed in direct opposition to the ideals of modern inquiry. Read one way, Horkheimer and Adorno appear elitist, privileging what has come to be known as “high culture” (e.g., classical music and fine art) over the entertainment of the masses. Horkheimer and Adorno argue that the culture industry creates a classification scheme in which man exists; whereas man previously struggled to figure out his place in the world, this job is done for him by the culture industry and its resultant structure of artificial stratification. Ultimately, then, because he does not have to think about his position in culture, man is not engaged in his world in the same way as he was before, which therefore allows content to become formulaic and interchangeable.

Later echoed in Adorno’s “How to Look at Television,” “The Culture Industry” laments the predictable pattern of televisual media, with audiences knowing the ending of movies as soon as they begin. (Interestingly, there is some overlap with this and Horror with audiences expecting that offerings follow a convention—one might even argue that the “twist ending” has become its own sort of genre staple—and that a movie’s failure to follow their expectations leaves the audience disgruntled. This of course raises questions about whether modern audiences have been conditioned to expect certain things out of media or to engage in a particular type of relationship with their media and whether plot progression, at least in part, defines the genre.) Horkheimer and Adorno’s attitude speaks to a privileging of original ideas (and the intellectual effort that surrounds them) but the modern context seems to suggest that the combination of preexisting ideas in a new way holds some sort of cultural value.

Adorno’s “How to Look at Television” also points out a degradation in our relationship to media by highlighting the transition from inward-facing to outward-facing stances, equating such migration with movement away from subtlety. Although the point itself may very well be valid, it does not include a robust discussion of print versus televisual media:  Adorno’s footnote that mentions the different affordances of media (i.e., print allows for contemplation and mirrors introspection while television/movies rely on visual cues due to their nature as visual media) deserves further treatment as the implications of these media forms likely have repercussions on audience interactions with them. Almost necessarily, then, we see a televisual viewing practice that does not typically rely on subtlety due to a different form of audience/media interaction. (It might also be noted that the Saw movies have an interesting take on this in that they pride themselves on leaving visual “breadcrumbs” for viewers to discover upon repeated viewings although these efforts are rarely necessary for plot comprehension.)

To be fair, however, one might argue that Horkheimer and Adorno wrote in a radically different media context. Sixty years later, we might argue that there’s not that much left to discover and that prestige has now been shifted to recombinations of existent information. Moreover, Horkheimer and Adorno’s position also assumes a particular motivation of the audience (i.e., that the payoff is the conclusion instead of the journey) that may no longer be completely true for modern viewers.

Although Horkheimer and Adorno rightly raise concerns regarding a lack of independent thinking (or even the expectation of it!), we are perhaps seeing a reversal of this trend with transmedia and attempts at audience engagement. Shows now seem to want people to talk about them (on message boards, Twitter, etc.) in order to keep viewers invested and although we might quibble about the quality of such discourse and whether it is genuine or reactionary, it seems that this practice must be reconciled with Horkheimer and Adorno’s original position. It should be noted, however, that the technology on which such interaction relies was not around when Horkheimer and Adorno wrote “The Culture Industry” and the Internet has likely helped to encourage audience agency (or at least made it more visible).

Seeking to challenge the notion that Horkheimer and Adorno discounted audience agency, John Durham Peters argues for the presence of both industry and audience influence in the space of culture and, furthermore, that while audiences may be empowered, their actions serve to reinforce their submission to the dominant wishes of industry in a realization of hegemonic practice. Although Horkheimer and Adorno, writing in the shadow of World War II, were undoubtedly concerned with the potential undue influence of mass media as a vehicle for fascist ideology—as evidenced by quotes such as “The radio becomes the universal mouthpiece of the Fuhrer” and “The gigantic fact that the speech penetrates everywhere replaces its content”—they were also concerned that the public had relinquished its ability to resist by choosing to pursue frivolous entertainment rather than freedom (Adorno, 1941). From this position, Peters extracts the argument that Horkheimer and Adorno did in fact recognize agency on the part of audiences, but also that such energies were misspent.

The notion of “the masses” has long been an area of interest for me as it manifests throughout suburban Gothic horror in the mid-20th century. In many ways, society was struggling to come to terms with new advances in technology and how these new inventions would bring about changes in practice and structure. Below is an excerpt from a longer piece about a movie that also grappled with some of these issues.

Reacting to atrocities witnessed throughout the course of World War II, Americans in the 1950s became obsessed with notions of power and control, fearing that they would be subsumed by the invisible hand of a totalitarian regime. In particular, the relatively young medium of television became suspect as it represented a major broadcast system that seemed to have an almost hypnotic pull on its audience, leaving viewers entranced by its images. And images, according to author and historian Daniel Boorstin, were becoming increasingly prominent throughout the 19th century as part of the Graphic Revolution replete with the power to disassociate the real from its representation (1962). For cultural critics still reeling from the aftereffects of Fascism and totalitarianism, this was a dangerous proposition indeed.

Although these underlying anxieties of mid-century American society could be examined via a wide range of anthropological lenses and frameworks, visual media has historically provided a particularly vivid manifestation of the fears latent in the people of the United States (Haskell, 2004). This is, of course, not to imply that visual media is necessarily the best or only means by which we can understand prevailing ideologies in the years after World War II, but merely one of the most visible. However, as a critical examination of the entire media landscape of the 1950s would be beyond the scope of a single paper of this magnitude, discussion shall be primarily concentrated around Elia Kazan’s 1957 movie A Face in the Crowd with particular attention paid to the contrasting channels of cinema and television.[1] This paper will seek to briefly position A Face in the Crowd in the larger context of paranoia-driven cinema of the 1950s before using the film as an entryway to discuss critiques of mass culture. Given the film’s apparent sustained resonance as indicated by its relatively recent mention (Vallis, 2008; Hoberman, 2008b; Franklin, 2009), the arguments of Critical Theory will then be applied to modern American culture in an attempt to ascertain their continued validity. Finally, an argument will be made that acknowledges the potential dangers facing mass culture in the 21st century but also attempts to understand the processes that underlie these pitfalls and provides a suggestion for recourse in the form of cultural and media literacy.

Paranoia, Paranoia, Everyone’s Coming to Get Me

The post-war prosperity of the 1950s caused rapid changes in America, literally altering the landscape as families began to flood into the newly-formed suburbs. With the dream and promise of upward social mobility firmly ensconced in their heads, families rushed to claim their piece of the American dream, replete with the now-iconic front yard and white picket fence. And yet, ironically, a new set of worries began to fester underneath the idyllic façade of the suburbs as the troubles of the city were merely traded for fears of paranoia and invasion; the very act of flight led to entrapment by an ethos that subtly precluded the possibility of escape.

As with many other major cultural shifts, the rapid change in the years following World War II caused Americans to muse over the direction in which they were now headed; despite a strong current of optimism that bolstered dreams of a not-far-off utopia, there remained a stubborn fear that the quickly shifting nature of society might have had unanticipated and unforeseen effects (Murphy, 2009). Life in the suburbs, it seemed, was too good to be true and inhabitants felt a constant tension as they imagined challenges to their newly rediscovered safety:  from threats of invasion to worries about conformity, and from dystopian futures to a current reality that could now be obliterated with nuclear weapons, people of the 1950s continually felt the weight of being a society under siege. An overwhelming sense of doubt, and more specifically, paranoia, characterized the age and latent fears manifested in media as the public began to struggle with the realization that the suburbs did not fully represent the picturesque spaces that they had been conceived to be. In fact, inhabitants were assaulted on a variety of levels as they became disenchanted with authority figures, feared assimilation and mind control (particularly through science and/or technology), began to distrust their neighbors (who could easily turn out to be Communists, spies, or even aliens!), and felt haunted by their pasts, all of which filled the movie screens of the decade (Jensen, 1971; Murphy, 2009; Wolfe, 2002).[2] Following solidly in this tradition, Kazan’s A Face in the Crowd picks up on some of the latent strains of paranoia in American culture while simultaneously serving as a platform for a set of critiques regarding mass culture.

Somewhere, a Star Is Made

The storyline of A Face in the Crowd is rather straightforward and yet deceptively complex in its undertones:  on the surface, we experience a rather heavy-handed morality tale in the form of country bumpkin Larry “Lonesome” Rhodes, a relative nobody who is plucked from obscurity and made (and subsequently broken) through powers associated with television. Yet, it is only when we begin to connect the movie to a larger societal context that we begin to understand the ramifications of the film’s message; a careful examination of A Face in the Crowd reveals striking suspicions regarding the role that media plays (in this case, primarily television and cinema) in shaping American culture. Stars, director Elia Kazan argues, are not so much born as made, a distinction that portends dire consequences.

It is worth noting that Kazan’s film was made during a time when the concept of the “celebrity” was being renegotiated by America; for a large part of its history, the United States, firmly grounded in a Puritan work ethic, had honored heroes who exemplified ideals associated with a culture of production and was struggling to reconcile these notions in the presence of an environment whose emphasis was now focused on consumption. Although modern audiences might initially find this shift difficult to appreciate, one need only consider that the premium placed on production is so central to American ideology that it continues to linger today:  in a culture that exhibits rampant consumerism, we still value the “self-made man” and sell the myth of America as a place where anyone can achieve success through hard work. To abandon these ideas would necessitate that we reinterpret the very meaning of “America.” Thus, we become more sympathetic to the critics of the day who lamented the loss of the greatness of man and bristled against the notion that fame or celebrity could be manufactured—such a system would only result in individuals who were lacking and unworthy of their status (Gamson, 1994; Benjamin, 1973).

Such is the case, it seems, with Larry Rhodes, who is discovered by roving reporter Marcia Jeffries in an Arkansas jail. Although it cannot be denied that Rhodes has some modicum of talent and a certain charisma that comes from being unafraid to speak one’s mind, Marcia ushers Rhodes onto the path of greatness by dubbing him “Lonesome” and thus creates a character that transforms Rhodes from a despondent drunk to a winsome drifter. This scene—the first major one in the movie—thus introduces the important notion that those involved in the media can be implicitly involved in the manipulation of the information that travels over the airwaves. Subtly adding to the insidious nature of the media, A Face in the Crowd portrays Marcia as a character who seems likable enough, but also a person who is, in a way, exploiting the people in jail as she rushes in with her tape recorder intent on prying the stories from the characters she finds (or creates!) and does not exhibit much concern for truly understanding why these men are imprisoned in the first place. Taken to an extreme, we later come across the character of The General, who further perverts the connection between media and power as he conspires with Lonesome to remake the image of Senator Worthington Fuller as the congressman runs for President.

Yet, as Lonesome Rhodes grows in his role as a media personality, he quickly demonstrates that the power to manipulate does not lie solely with those who sit behind the cameras. In Memphis, Rhodes incites a riot against the Luffler mattress company and also solicits donations in order to help a Black woman rebuild her house. In light of this, we can see that while Kazan focuses on the negative implications of television and celebrity, the relative good or bad that comes from these actions is not necessarily the point—instead, the one constant in all of the depicted scenarios is a public that is manipulated into performing actions on the behalf of others. Although the characters of Lonesome and The General are vilified throughout the film, it is the masses for whom Kazan demonstrates true disdain.

Extraordinary Popular Delusions

Perhaps nowhere is this contempt more apparent than at the end of the film where, in an attempt to offer a small moment of solace to Marcia after her unmasking of Lonesome, writer Mel Miller notes, “We get wise to them, that’s our strength” (Kazan, 1957). And Miller is not wrong:  Western tradition has long recognized the correlation between knowledge and power and Miller’s assertion touches upon the revelatory clout inherent in the realignment of perception and reality as noted by public relations guru Howard Bragman (2008). A more critical examination of the film’s closing scene, however, raises an important question:  Who is Miller’s “we”? Although one might be tempted to read this line as indicative of an egalitarian philosophical view, it is important to note that the only two characters in the shot represent the film’s arguably upper-middle class, and pointedly Eastern-educated, elite—nowhere to be seen are representatives of the small Arkansas town from the film’s opening or denizens of Memphis, both of whom serve to characterize the majority of Lonesome’s devoted viewers.[3] In fact, if we take time to reflect upon the movie, we realize that the majority of the audience was only alerted to Lonesome’s dual nature after Marcia flipped a control room switch and revealed the underlying deterioration; the masses oscillated from one position to the next without understanding how or why and once again adopted a passive stance in their relationship with media. Moreover, as Courtney Maloney points out, Kazan’s depiction of the agency of the masses is actually limited in scope:  despite a montage of audience members vehemently phoning in, sponsors are simultaneously shown to be acting independently as they withdraw their association with Lonesome (1999). Moreover, the subtext of the scene distances the rational decision-making of the truly powerful from the impassioned beseeching of the masses, likening the power of the latter to that of a mob. Knowledge and its associated authority, clearly, are afforded to a select group.

This idea, that the world can be divided between those who “get wise” and those who do not, serves to develop a rather sharp classist criticism against the medium of television and those who would watch it:  moviegoers, by virtue of witnessing Kazan’s work, find themselves elevated in status and privy to “the man behind the curtain” (to borrow a phrase). In contrast, the malleable masses were considered to be pacified and placated by idealistic portrayals of life in the 1950s in the form of television programs like Leave It to Beaver, The Donna Reed Show, and The Adventures of Ozzie and Harriet. Clearly, Kazan creates a dichotomy imbued with a value judgment descended from the thoughts of prominent thinkers in the Frankfurt School who, as far as aesthetics were concerned, preferred the high culture of cinema to the conformity and manipulated tastes of television (Horkheimer & Adorno, 2002; Adorno, 1985; Quart, 1989). This distinction between high and low culture would be a crucial supporting idea for critics as a prominent fear of mass culture was that it portended a collapse between concepts (e.g., fame, celebrity, or intellectual value) of objectively different quality, essentially rendering all manifestations the same and therefore all equally mundane (Boorstin, 1962; Hoberman, 2008b; Kierkegaard, 1962).  Even worse for critics, perhaps, was the perception of the masses’ refusal to grow out of its immature interests, a behavior that was characterized as both childlike and stubborn (Adorno, 1985).

And the fears of such theorists, all of whom were reacting to recent and rapid advances in broadcast technology, were not unfounded. Consider, for example, that radio had been popularized a scant fifty years prior and had vastly altered critics’ understanding of media’s potential impact, creating a precedent as it proliferated across the country and began to develop a platform for solidarity and nationalism. Yet, while the effects of radio were decidedly pro-social, due in part to its propagation of orchestral music and transmission of fireside chats, television was viewed as a corrosive force on society that spurred on the destruction of culture instead of enriching it.[4] For the critics of the Frankfurt School, television was indicative of an entrenched sentiment that regarded mass-produced culture as formulaic and perfectly suitable for a generation of passive consumers who sat enraptured in front of the glowing set. Associating the potential dissemination of propagandist ideology with television as a form of mass broadcast, cultural theorists evoked notions of totalitarian regimes akin to Hitler and Stalin in an effort to illustrate the potential subjugation of individual thought (Mattson, 2003). These simmering fears, aggrandized by their concurrence with the rising threat of Communism and collectivist cultures, found fertile soil in the already present anxiety-ridden ethos of the United States during the 1950s.


[1] It should be noted, however, that the comics of this time—those that belong to the end of the Golden Age and beginning of the Silver Age—also provide an additional understanding of the ways in which Americans indirectly wrestled with their fears.

[2] For a more exhaustive list of movies that support this point, see Wolfe, 2002.

[3] Let us also not forget the fact that Lonesome exhibits a rather patronizing attitude toward his audience in his later career, instituting the Cracker Barrel show with its manufactured country lackeys (Yates, 1974). In contrast to his first stint in Memphis, Lonesome has begun to embrace his country image as a means (if an inauthentic one) to connect with his audience, a point of contention to which we will return.

[4] Curiously, however, we see that this relationship between presidential addresses (like the aforementioned fireside chats) and mass media did not elicit notable complaints from critics who were generally wary of the merging of politics and entertainment (Quart, 1989; Benjamin, 1973). Although a larger discussion is warranted regarding the subtleties of this distinction, I would suggest that part of the differentiation stems from a high-low culture dichotomy. Although critics linked the negative presence of television with corporate advertising, James Twitchell suggests that there has always been a rather intimate relationship between arts and commerce, most saliently exhibited by wealthy citizens or entities who act as patrons (Twitchell, 1996).

 

Works Cited

Adorno, T. (1941). On Popular Music. Studies in Philosophy and Social Science, 9, 17-48.

Adorno, T. (1985). On the Fetish Character in Music and the Regression of Listening. In A. Arato, & E. Gebhardt (Eds.), The Essential Frankfurt School Reader (pp. 270-299). New York, NY: Continuum.

Benjamin, W. (1973). The Work of Art in the Age of Mechanical Reproduction. In H. Arendt (Ed.), Illuminations (H. Zohn, Trans., pp. 217-242). London, England: Schocken.

Boorstin, D. (1962). The Image: A Guide to Pseudo-Events in America. New York, NY: Atheneum.

Bragman, H. (2008). Where’s My Fifteen Minutes?: Get Your Company, Your Cause, or Yourself the Recognition You Deserve. New York, NY: Portfolio.

Gamson, J. (1994). Claims to Fame: Celebrity in Contemporary America. Berkeley: University of California Press.

Haskell, M. (2004, August 8). Whatever the Public Fears Most, It’s Right Up There on the Big Screen. The New York Times, pp. 4-5.

Horkheimer, M., & Adorno, T. W. (2002). Dialectic of Enlightenment: Philosophical Fragments. Stanford, CA: Stanford University Press.

Jensen, P. (1971). The Return of Dr. Caligari. Film Comment, 7(4), 36-45.

Kazan, E. (Director). (1957). A Face in the Crowd [Motion Picture].

Maloney, C. (1999). The Faces in Lonesome’s Crowd: Imaging the Mass Audience in “A Face in the Crowd”. Journal of Narrative Theory, 29(3), 251-277.

Mattson, K. (2003). Mass Culture Revisited: Beyond Tail Fins and Jitterbuggers. Radical Society, 30(1), 87-93.

Murphy, B. M. (2009). The Suburban Gothic in American Popular Culture. Basingstoke, Hampshire, England: Palgrave Macmillan.

Quart, L. (1989). A Second Look. Cineaste, 17(2), pp. 30-31.

Wolfe, G. K. (2002). Dr. Strangelove, Red Alert, and Patterns of Paranoia in the 1950s. Journal of Popular Film, 57-67.


The Real-Life Implications of Virtual Selves

“The end is nigh!”—the plethora of words, phrases, and warnings associated with the impending apocalypse has so saturated American culture that we have become jaded, as picketing figures bearing signs have become a fixture of political cartoons and echoes of the Book of Revelation appear in popular media like Legion and the short-lived television series Revelations. On a secular level, we grapple with the notion that our existence is a fragile one at best, with doom portended by natural disasters (e.g., Floodland and The Day after Tomorrow), rogue asteroids (e.g., Life as We Knew It and Armageddon), nuclear fallout (e.g., Z for Zachariah and The Terminator), biological malfunction (e.g., The Chrysalids and Children of Men), and the increasingly visible zombie apocalypse (e.g., Rot and Ruin and The Walking Dead). Clearly, recent popular media offerings manifest the strain evident in our ongoing relationship with the end of days; to be an American in the modern age is to realize that everything under—and including—the sun will kill us if given half a chance. Given the prevalence of themes like death and destruction in the current entertainment environment, it comes as no surprise that we turn to fiction to craft a kind of saving grace; although these impulses do not necessarily take the form of traditional utopias, our current culture definitely seems to yearn for something—or, more accurately, somewhere—better.

In particular, teenagers, as the focus of Young Adult (YA) fiction, have long been subjects for this kind of exploration, with contemporary authors like Cory Doctorow, Paolo Bacigalupi, and M. T. Anderson exploring the myriad issues that American teenagers face as they build upon a trend that includes foundational works by Madeleine L’Engle, Lois Lowry, and Robert C. O’Brien. Arguably darker in tone than previous iterations, modern YA dystopia now wrestles with the dangers of depression, purposelessness, self-harm, sexual trauma, and suicide. For American teenagers, psychological collapse can be just as damning as physical decay. Yet, rather than ascribe this shift to an increasingly rebellious, moody, or distraught teenage demographic, we might consider the cultural factors that contribute to the appeal of YA fiction in general—and themes of utopia/dystopia in particular—as these manifestations spill beyond the confines of YA fiction, surfacing through teenage characters in programming ostensibly designed for adult audiences, as evidenced by television shows like Caprica (2009-2010).

 

Transcendence through Technology

A spin-off of, and prequel to, Battlestar Galactica (2004-2009), Caprica transported viewers to a world filled with futuristic technology, arguably the most prevalent of which was the holoband. Operating on basic notions of virtual reality and presence, the holoband allowed users to, in Matrix parlance, “jack into” an alternate computer-generated space, fittingly labeled by users as “V world.”[1] But despite its prominent place in the vocabulary of the show, the program itself never seemed to be overly concerned with the gadget; instead of spending an inordinate amount of time explaining how the device worked, Caprica chose to explore the effect that it had on society.

Calling forth a tradition steeped in teenage hacker protagonists (or, at the very least, ones that belonged to the “younger” generation), our first exposure to V world—and to the series itself—comes in the form of an introduction to an underground space created by teenagers as an escape from the real world. Featuring graphic sex,[2] violence, and murder, this iteration does not appear to align with traditional notions of a utopia but does represent the manifestation of Caprican teenagers’ desires for a world that is both something and somewhere else. And although immersive virtual environments are not necessarily a new feature in Science Fiction television,[3] with references stretching from Star Trek’s holodeck to Virtuality, Caprica’s real contribution to the field was its choice to foreground the process of V world’s creation and the implications of this construct for the show’s inhabitants.

Taken at face value, shards like the one shown in Caprica’s first scene might appear to be nothing more than virtual parlors, the near-future extension of chat rooms[4] for a host of bored teenagers. And in some ways, we’d be justified in this reading as many, if not most, of the inhabitants of Caprica likely conceptualize the space in this fashion. Cultural critics might readily identify V world as a proxy for modern entertainment outlets, blaming media forms for increases in the expression of uncouth urges. Understood in this fashion, V world represents the worst of humanity as it provides an unreal (and surreal) existence that is without responsibilities or consequences. But Caprica also pushes beyond a surface understanding of virtuality, continually arguing for the importance of creation through one of its main characters, Zoe.[5]

Seen one way, the very foundation of virtual reality and software—programming—is itself the language and act of world creation, with code serving as architecture (Pesce, 1999). If we accept Lawrence Lessig’s maxim that “code is law” (2006), we begin to see that cyberspace, as a construct, is infinitely malleable and the question then becomes not one of “What can we do?” but “What should we do?” In other words, if given the basic tools, what kind of existence will we create and why?

One answer to this presents in the form of Zoe, who creates an avatar that is not just a representation of herself but is, in effect, a type of virtual clone that is imbued with all of Zoe’s memories. Here we invoke a deep lineage of creation stories in Science Fiction that exhibit resonance with Frankenstein and even the Judeo-Christian God who creates man in his image. In effect, Zoe has not just created a piece of software but has, in fact, created life!—a discovery whose implications are immediate and pervasive in the world of Caprica. Although Zoe has not created a physical copy of her “self” (which would raise an entirely different set of issues), she has achieved two important milestones through her development of artificial sentience: the cyberpunk dream of integrating oneself into a large-scale computer network and the manufacture of a form of eternal life.[6]

Despite Caprica’s status as Science Fiction, we see glimpses of Zoe’s process in modern day culture as we increasingly upload bits of our identities onto the Internet, creating a type of personal information databank as we cultivate our digital selves.[7] Although these bits of information have not been constructed into a cohesive persona (much less one that is capable of achieving consciousness), we already sense that our online presence will likely outlive our physical bodies—long after we are dust, our photos, tweets, and blogs will most likely persist in some form, even if it is just on the dusty backup server of a search engine company—and, if we look closely, Caprica causes us to ruminate on how our data lives on after we’re gone. With no one to tend to it, does our data run amok? Take on a life of its own? Or does it adhere to the vision that we once had for it?

Proposing an entirely different type of transcendence, another character in Caprica, Sister Clarice, hopes to use Zoe’s work in service of a project called “apotheosis.” Apotheosis represents a more traditional type of utopia in that it is a paradisiacal space offset from the normal: Clarice aims to construct a type of virtual heaven for believers in the One True God,[8] offering an eternal virtual life at the cost of one’s physical existence. Perhaps speaking to a sense of disengagement with the existent world, Clarice’s vision also reflects a tradition that conceptualizes cyberspace as a place where humanity can try again, a blank slate where society can be re-engineered. Using the same principles that are available to Zoe, Clarice sees a chance not only to upload copies of existent human beings but to bring forth an entire world through code. Throughout the series, Clarice strives to realize her vision, culminating in a confrontation with Zoe’s avatar, who has, by this time, obtained a measure of mastery over the virtual domain. In an ending that suggests apotheosis cannot be granted, only earned, Clarice’s dream literally crumbles around her as her followers give up their lives in vain.

Although it is unlikely that we will see a version of Clarice’s apotheosis anytime in the near future, the notion of constructed immersive virtual worlds does not seem so far off. At its core, Caprica asks us, as a society, to think carefully about the types of spaces that we endeavor to realize and the ideologies that drive such efforts. If we understand religion as a set of beliefs that structures and orders this world through our faith in the next, we can see the overlap between traditional forms of religion and the efforts of technologists like hackers, computer scientists, and engineers. As noted by Mark Pesce, Vernor Vinge’s novella True Names spoke to a measure of apotheosis and offered a new way of understanding the relationship between the present and the future—what Vinge offered to hackers was, in fact, a new form of religion (Pesce, 1999). Furthermore, aren’t we, as creators of these virtual worlds, fulfilling one of the functions of God? Revisiting the overlap, noted in this paper’s opening, between doomsday/apocalyptic/dystopian fiction and Science Fiction, we see a rather seamless integration of ideas that challenges the traditional notion of a profane/sacred divide; in their own ways, the writings of religion and science both concern themselves with some of the same themes, although they may, at times, use seemingly incompatible language.

Ultimately, however, the most powerful statement made by Caprica comes about as a result of the extension of arguments made on screen: by invoking virtual reality, the series asks viewers to consider the overlay of an entirely subjective reality onto a more objective one.[9] Not only presenting the coexistence of multiple realities as a fact, Caprica asks us to understand how actions undertaken in one world affect the other. On a literal level, we see that the rail line of New Cap City (a virtual analogue of Caprica City, the capital of the planet of Caprica)[10] is degraded (i.e., “updated”) to reflect a destroyed offline train, but, more significantly, the efforts of Zoe and Clarice speak to the ways in which our faith in virtual worlds can have a profound impact on “real” ones. How, then, do our own beliefs about alternate realities (be it heaven, spirits, string theory, or media-generated fiction) shape actions that greatly affect our current existence? What does our vision of the future make startlingly clear to us and what does it occlude? What will happen as future developments in technology increase our sense of presence and further blur the line between fiction and reality? What will we do if the presence of eternal virtual life means that “life” loses its meaning? Will we reinscribe rules onto the world to bring mortality back (and with it, a sense of urgency and finality) like Capricans did in New Cap City? Will there come a day when we choose a virtual existence over a physical one, participating in a mass exodus to cyberspace as we initiate a type of secular rapture?

As we have seen, online environments have allowed for incredible amounts of innovation and, on some days, the future seems inexplicably bright. Shows like Caprica are valuable for us as they provide a framework through which the average viewer can discuss issues of presence and virtuality without getting overly bogged down by technospeak. On some level, we surely understand the issues we see on screen as dilemmas that are playing out in a very human drama and Science Fiction offerings like Caprica provide us with a way to talk about subjects that we will confront in the future although we may not even realize that we are doing so at the time. Without a doubt, we should nurture this potential while remaining mindful of our actions; we should strive to attain apotheosis but never forget why we wanted to get there in the first place.

Works Cited

Lessig, L. (2006, January). Socialtext. Retrieved September 10, 2011, from Code 2.0: https://www.socialtext.net/codev2/

Pesce, M. (1999, December 19). MIT Communications Forum. Retrieved September 12, 2011, from Magic Mirror: The Novel as a Software Development Platform: http://web.mit.edu/comm-forum/papers/pesce.html


[1] Although the show is generally quite smart about displaying the right kind of content for the medium of television (e.g., fleshing out the world through channel surfing, which not only gives viewers glimpses of the world of Caprica but also reinforces the notion that Capricans experience their world through technology), the ability to visualize V world (and the transitions into it) is certainly an element unique to an audio-visual presentation. One of the strengths of the show, I think, is its ability to add layers of information through visuals that do not call attention to themselves. These details, which are not crucial to the story, flesh out the world of Caprica in a way that a book could not, for while a book must generally mention items (or at least allude to them) in order to bring them into existence, the show does not have to ever name aspects of the world or actively acknowledge that they exist. Moreover, I think that there is something rather interesting about presenting a heavily visual concept through a visual medium that allows viewers to identify with the material in a way that they could not if it were presented through text (or even a comic book). Likewise, reading Neal Stephenson’s The Diamond Age (which prominently features a book) allows one to reflect on one’s own interaction with the book itself—an opportunity that would not be afforded by a television or movie adaptation.

[2] By American cable television standards, with the unrated and extended pilot featuring some nudity.

[3] Much less Science Fiction as a genre!

[4] One could equally make the case that V world also represents a logical extension of MUDs, MOOs, and MMORPGs. The closest modern analogy might, in fact, be a type of Second Life space where users interact in a variety of ways through avatars that represent users’ virtual selves.

[5] Although beyond the scope of this paper, Zoe also represents an interesting figure as both the daughter of the founder of holoband technology and a hacker who actively worked to subvert her father’s creation. Representing a certain type of stability/structure through her blood relation, Zoe also introduced an incredible amount of instability into the system. Building upon the aforementioned hacker tradition, which itself incorporates ideas about youth movements from the 1960s and lone tinkerer/inventor motifs from Science Fiction in the early 20th century, Zoe embodies teenage rebellion even as she figures in a father-daughter relationship, which speaks to a particular type of familial bond/relationship of protection and perhaps stability.

[6] Although the link is not directly made, fans of Battlestar Galactica might see this as the start of resurrection, a process that allows consciousness to be recycled after a body dies.

[7] In addition, of course, is the data that is collected about us involuntarily or without our express consent.

[8] As background context for those who are unfamiliar with the show, the majority of Capricans worship a pantheon of gods, with monotheism looked upon negatively as it is associated with a fundamentalist terrorist organization called Soldiers of The One.

[9] One might in fact argue that there is no such thing as an “objective” reality as all experiences are filtered in various ways through culture, personal history, memory, and context. What I hope to indicate here, however, is that the reality experienced in the V world is almost entirely divorced from the physical world of its users (with the possible exception of avatars that resembled one’s “real” appearance) and that virtual interactions, while still very real, are, in a way, less grounded than their offline counterparts.

[10] Readers unfamiliar with the show should note that “Caprica” refers to both the name of the series and a planet that is part of a set of colonies. Throughout the paper, italicized versions of the word have been used to refer to the television show while an unaltered font has been employed to refer to the planet.


The Truth Shall Set You Free?

Young people handle dystopia every day:  in their lives, their dysfunctional families, their violence-ridden schools.

—Lois Lowry[1]

The Age of Information.

Today, more than ever, individuals are awash in a sea of information that swirls around us, as invisible as it is inescapable. In many ways, we are still grappling with the concept as we struggle to sort, filter, and conceptualize that which surrounds us. We complain about the overbearing nature of algorithms—or, perhaps more frighteningly, do not comment at all—but this is not the first time that Western society has pondered the role and influence of information in our lives.

Access to information provides an important thematic lens through which we can view dystopic fiction, and although it does not account for the entirety of the genre’s appeal in and of itself (or, for that matter, the increase in its popularity), we will see that understanding the attraction of dystopia provides some insight into the societies that produce it and elucidates the ways in which the genre allows individuals to reflect on themes present in the world around them—themes that are ultimately intimately connected with the access and flow of information. My interest here lies specifically in YA dystopic fiction and its resonance with the developmental process of teenagers.

Lois Lowry’s quote suggests that today’s youth might be familiar with tangible aspects of dystopia even if they do not necessarily exist in a state of dystopia themselves; dystopia, then, is fundamentally relatable to youth.[2] Interpersonal violence in schools—on both the physical and virtual levels—has become a growing problem and can be seen as a real life analogue to the war-torn wastelands of YA dystopia; although the physical destruction present in fiction might not manifest in the everyday, youth may identify with the emotional states of those who struggle to survive.[3] And, given the recent and high-profile nature of bullying, issues of survival are likely salient for modern youth.[4]

It should come as no surprise that Lowry, as a writer, primarily describes the concept of dystopia in literary terms, much like literary critic Darko Suvin; while a valid, if limited, perspective, this does not preclude the term from also possessing socio-political implications, and one might argue that the relatable nature of dystopia extends far beyond the iterations outlined by Lowry into the realm of ideology.[5] On a basic level, dystopia often asks protagonists to perform a type of self-assessment while simultaneously evaluating preexisting hierarchical structures and systems of authority.[6] Given that this process asks individuals to contrast themselves with the society that surrounds them, one might make the argument that the themes of utopia and dystopia possess an implicit political element, regardless of authors’ intentions.

Moreover, consider the prevalent construct of the secret as a defining characteristic of dystopian societies like those presented in the classic works of Brave New World and Nineteen Eighty-Four.[7] Whether located in the cultural history of the dystopia (e.g., “What events caused us to reach this point?”) or in the sustained lies of the present (e.g., “This is for your protection”), the acquisition of new (hidden) knowledge represents a fundamental part of the protagonist’s—and, by extension, the reader’s—journey. For young adults, this literary progression can mirror the development occurring in real life as individuals challenge established notions during the coming-of-age process; viewed through the lens of anthropology, dystopian fiction represents a liminal space for both the protagonist and the reader in which old assumptions and knowledge are questioned during a metaphorical rite of passage.[8],[9] And although the journey itself provides a crucial model trajectory for youth, perhaps more important is the nature of the secret being kept: as Lowry alludes to, modern youth undoubtedly realize that their world—our world—like that of any dystopia, contains elements of ugliness. The real secret, then, is not the presence of a corrupted underbelly but rather why rot exists in the first place.

Aside from the type of knowledge or even the issues latent in its accessibility, however, we can see that modern culture is undergoing a rather radical reconfiguration with regard to the social structures surrounding information flow. Although we still struggle with the sometimes antagonistic relationship between citizens and the State mirrored in classic and YA dystopia, we have also developed another dimension:  citizen versus citizen. Spurred on by innovations in technology that have made mobile gadgetry increasingly affordable and accessible to the public, on-location reporting has grown from the relatively useful process of crowdsourcing information to a practice that includes surveillance, documentation, and vigilante justice as we display our moral outrage over someone else’s ungodly behavior through platforms like paparazzi photos, tweeting of overheard conversations, and the ever-popular blog—we, in effect, have assumed the mantle of Big Brother. It would seem that, like Dr. Moreau, we have been granted knowledge and ability without wisdom.

Moreover, let us consider how youth currently exist in a culture of confession that was not apparent during previous cycles of utopia/dystopia. Spurred on in part by daytime talk shows, reality television, press conference apologies, and websites like PostSecret, the current environment is suffused with secrets and those willing to share their intimate stories for a price. Somewhat in opposition to confession’s traditional role in Catholicism, secrets now play an active role in public life despite their private nature, a process that mirrors the juxtaposition of personal and public histories by protagonists in YA dystopia.[10],[11] Moreover, we quickly come to see the increased relevancy of this trend when we consider how individuals, groups, organizations, and societies begin to define themselves in terms of the secrets that they hold about others and themselves. The prevalence of events like corporate espionage, copyright infringement lawsuits, and breakdowns in communication between youth and parents points to entities that wish to contain and restrict information flow. If to be an American in the 20th century was to be defined by material possessions, to be an American in the 21st century is to be defined by information and secrets. And, if this is indeed the case, how might we view our existence as one that occurs in a series of ever-expanding dystopias? As it turns out, Lowry might have been more correct than she realized when she noted young people’s familiarity with dystopia.

But perhaps this development is not so surprising if we consider the increasing commodification of knowledge in postmodern culture. If we subscribe to Jean-Francois Lyotard’s argument regarding the closely intertwined relationship between knowledge and production—specifically, that new knowledge is cultivated in order to further production and that information sets are therefore a means to an end and not an end in and of themselves—we witness a startling change in the relationship between society and knowledge.[12] In opposition to the idealistic pursuit that occurred during the Enlightenment period, modern conceptualizations seem to understand knowledge in terms of leverage—in other words, we, like all good consumers, perennially ask the question, “What can you do for me?” Furthermore, the influence of commercialism on Education (i.e., the institution charged with conveying information from one generation to the next) has been probed, with scholars conjecturing that educational priorities might be dictated by concerns of the market.[13] Notably, these cultural shifts have not disavowed the value of knowledge but have changed how such worth is determined and classified.

The Frankfurt School’s pessimistic views of mass culture’s relationship with economic influences and independent thought aside, Lyotard also points to the danger posed by the (then) newly-formed entity of the multinational corporation as a body that could potentially supersede or subvert the authority of the nation-state.[14] Businesses like Facebook and Google accumulate enormous amounts of information (often with our willing, if unwitting, participation) and therefore amass incredible power, with the genius of these organizations residing in their ability to facilitate access to our own information! Without castigating such companies—although some assuredly do—we can glimpse similarities between these establishments’ penchant for controlling the dissemination of information and the totalitarian dictatorships prevalent in so many dystopian societies. In spite of the current fervor surrounding the defense of rights outlined in the Constitution, we largely continue to ignore how companies like Google and Facebook have gained the potential to impact concepts like freedom of assembly, freedom of speech, and freedom of information; algorithms designed to act as filters allow us to cut through the noise but also severely reduce our ability to conceptualize what is missing. These potential problems, combined with current debates over issues like privacy, piracy, and Net Neutrality, indicate that power no longer solely resides in knowledge but increasingly in access to it.


[1] Lois Lowry, quoted in Hintz, Carrie, and Elaine Ostry. Utopian and Dystopian Writing for Children and Young Adults. (New York: Routledge, 2003).

[2] One might even argue that those who read dystopian fiction most likely do not inhabit a dystopian world, for they would not have the leisure time to consume such fiction.

[3] This point, of course, should not be taken in a manner that discounts the legitimate struggles of children who grow up in conflict states.

[4] See Ken Rigby, New Perspectives on Bullying. (London: Jessica Kingsley Publishers, 2002) and Marilyn A. Campbell, “Cyber Bullying: An Old Problem in a New Guise?” Australian Journal of Guidance and Counseling 15, no. 1 (2005): 68-76.

[5] Clare Archer-Lean, “Revisiting Literary Utopias and Dystopias: Some New Genres.” Social Alternatives 28, no. 3 (2009): 3-7.

[6] Kennon, Patricia. “‘Belonging’ in Young Adult Dystopian Fiction: New Communities Created by Children.” Papers: Explorations into Children’s Literature 15, no. 2 (2005): 40-49.

[7] Patrick Parrinder, “Entering Dystopia, Entering Erewhon.” Critical Survey 17, no. 1 (2005): 6-21.

[8] Hintz and Ostry, Utopian and Dystopian. 2003.

[9] Parrinder, “Entering Dystopia, Entering Erewhon.” 2005.

[10] Shannon McHugh and Chris Tokuhama, “PostSecret: These Are My Confessions.” The Norman Lear Center. June 10, 2010. http://blog.learcenter.org/2010/06/postsecret_these_are_my_confes.html

[11] John Stephens, “Post-Disaster Fiction: The Problematics of a Genre.” Papers: Explorations into Children’s Literature 3, no. 3 (1992): 126-130.

[12] Jean-Francois Lyotard, The Postmodern Condition: A Report on Knowledge. (Manchester: Manchester University Press, 1979).

[13] Suzanne de Castell and Mary Bryson, “Retooling Play: Dystopia, Dysphoria, and Difference.” In From Barbie to Mortal Kombat, edited by Justine Cassell and Henry Jenkins. (Cambridge: The MIT Press, 1998).

[14] Lyotard, The Postmodern Condition. 1979.


I Believe That Children Are Our Future

Kids say the darndest things. Or so we’re told. Maybe, then, it is only fitting that we have turned children’s responses into a form of entertainment, as adults exhibit a general reluctance to truly understand what children are saying; instead of striving to understand the process of meaning making in the world of children, we filter their words through perspectives that, in some cases, have entirely forgotten what it means to be a kid.

In June 2011, an article published in the Wall Street Journal sparked robust debate about the appropriateness of the themes proffered by current Young Adult (YA) fiction, a debate that ultimately culminated in a virtual discussion identified by the hashtag “#YASaves” on the social messaging service Twitter.[1] Although some of the themes mentioned in the #YASaves discussion, like self-harm, eating disorders, and abuse, seem outside the scope of YA dystopia, the larger issue of concern over youth’s exposure to “darkness” speaks to an overarching perception of children derived from views prevalent in Romanticism.

Consistent with the Romantic idolization of nature, children were heralded as pure symbols of the future who had not yet conformed to the mores of society.[2] (And here we see the humor presented by shows like Kids Say the Darndest Things, for we, as “knowing” adults, can juxtapose the answers of children with the “correct” responses.) Informed by a Romantic tradition that presupposed the legitimacy of children’s perspectives, privileging them over those of more traditional authorities, this stance also suggests that teenage protagonists are largely not responsible for understanding the intricacies of how their environments operate, expecting the realized world to instead align with their personal vision. Illustrating the potential pitfall of this practice, we need only look back a few years to the exclusive utopian vision promoted by President George W. Bush; dystopian for everyone who did not share his view, Bush’s “utopia” legitimized only one version of the truth (his).[3] Although discontent may be an integral part of the impetus to change, we begin to glimpse elements of narcissism and indignation as protagonists develop a moral imperative for their actions.

Building upon this model (and undoubtedly bolstered by the counter-cultural movements of the 1960s), mid-20th century YA fiction increasingly began to shoulder youth with the responsibility and expectation of overthrowing the generations that had come prior while simultaneously delegitimizing the state of adolescence through trajectories that necessitated the psychological growth of protagonists.[4] In order to save the world, teenage protagonists must inevitably sacrifice their innocence and thus become emblematic of the very institution they seek to oppose.

And even if the teenage protagonists of YA fiction represent those select few who transcend the impulse to do nothing, are they ultimately reactionary and thus not truly empowered? An initial reading of genres like YA dystopian fiction might suggest that readers can extract philosophical lenses or skills through their identification with protagonists who struggle not only to survive but to thrive. However, further rumination causes one to question the accessibility of the supposed themes of empowerment at play: although characters in dystopian fiction provide value by suggesting that hegemonic forces can be challenged, the trajectories of these extraordinary figures rarely do much to actively cultivate or encourage the enactment or development of similar abilities in the real world. In essence, young readers are exposed to the ideals, but not to realistic, actionable steps. Furthermore, although Roberta Seelinger Trites correctly cites power and powerlessness as integral issues in YA dystopia, one is left to question whether true power is a result of internal struggle and achievement or is instead conferred upon the protagonist through some external force.[5] Perhaps a product of a youth mindset that tends to focus on the self, teenage protagonists often fail to recognize (and thus comment on) the role of external factors that aid their quests; Katniss Everdeen in Suzanne Collins’ The Hunger Games trilogy, for example, routinely fails to mention (or seemingly appreciate) the ways in which her successes are intimately connected to those who bestow gifts of various kinds upon her. Further challenging notions of empowerment, although Katniss develops throughout the course of the trilogy, she gives no indication that she would have become involved in rebellion had she not been forced (i.e., chosen) into a situation that she could not escape.

Echoing this idea, Lara Cain-Gray sees similar trends in the dystopian tendencies of teen realist fiction. In her analysis of Sonya Hartnett’s Butterfly, Cain-Gray argues that the protagonist, Plum, longs for some measure of extraordinariness—a saving grace from a dystopian world born out of banality.[6] Here again we see that agency is ascribed to an external source as characters yearn for salvation; individuals long for someone to save them because they have not yet learned how to save themselves. Regardless of later strides made by Plum, a lack of scaffolding means that her model remains inaccessible to readers unless they have also received a jump start. If we refer back to the idea that utopia and dystopia inherently contain political elements, it seems to follow that encouraging a wider recognition of, and sensitivity to, existing social structures might address gaps in the developmental process and help youth to become more active in real life, while combatting the adult-imposed label of apathy that is currently in vogue.

Perhaps the problem lies in how we traditionally conceptualize youth as political agents (if at all). Although there are assuredly exceptions to this, the primary readership for YA dystopia—loosely bounded by an age demographic that includes individuals between 12 and 18 years of age—largely does not possess a type of political power commonly recognized in the United States. Prohibited from voting, a majority of the YA audience is often not formally encouraged to exercise any form of political voice; it is not until they near the age of adulthood that the process even begins to take shape with, at best, a course on Civics in high school. And, in the absence of a structured educational process that promotes reflection, critical thinking, outreach, and activism, youth might be seen to cobble together their political knowledge from sources readily available to them. As author Jack Zipes suggests in his book Sticks and Stones: The Troublesome Success of Children’s Literature from Slovenly Peter to Harry Potter, youth seek out agency through literature like dystopian fiction.[7] However, one might argue that what youth are really after is a sense of empowerment that they are unable to find elsewhere in meaningful quantities.

Elizabeth Braithwaite comments on one such example of the YA dystopia’s potential political influence and agency in her discussion of post-disaster fiction. Building upon work by Erich Fromm, Braithwaite notes the important difference between social orders labeled as “freedom from” and “freedom to”:

Fromm explains that the two types of freedom are very different:  a person can be free of constraints, be they obviously negative or the ‘sweet bondage of paradise’, without necessarily being ‘free to govern himself, to realize his individuality.’[8]

Although “freedom from” represents a necessary pre-condition, it would seem that a true(r) sense of agency is the province of “freedom to.” And yet much of the rhetoric surrounding the current state of politics seems to center around the former as we talk fervently about liberation from dictatorships in the Middle East during the spring of 2011 or freedom from oppressive government in the United States.

On a level arguably more immediately pressing for a teenage readership, however, let us invoke the issue of bullying, which has become a somewhat high-profile topic in recent educational news. In line with the discussion surrounding forms of oppression elsewhere, much of the rhetoric present in this topic focuses on a removal of the negative—and admittedly quite caustic—influences of teenage aggressors. Prompted by a rash of high-profile suicides attributed to this phenomenon, columnist Dan Savage started a project entitled “It Gets Better.” Ostensibly designed to encourage youth to refrain from suicide (and, to a lesser extent, self-harm), “It Gets Better” seemed to espouse a position saturated with the ideology of “freedom from.” Although an admirable attempt, “It Gets Better” ultimately projects a hope for a static utopia free from bullying—which, as has been previously demonstrated, inevitably leads to a dystopia of one sort or another. By telling youth that things will get better someday (i.e., not now), we are ultimately choosing to withhold information about how to make it better. Intentional or not, we have begun to slide into a practice of knowledge containment that mirrors the regimes of dystopian societies as we fail to challenge youth to become active participants in the process of change. Propelled by thinking grounded in a stance of “freedom from,” we are, in indirect ways and in the name of protection or aid, stripping youth of access to information that would act to empower them.

In marked contrast, we witness a different tonality in movements like those involved in the support of gay marriage or the Dream Act. Perhaps coincidentally, both efforts have embraced the notion of “coming out” and the liberation that this freedom of self-expression brings. “Freedom to,” it would seem, allows individuals in the modern age to effectively begin the process of challenging patriarchal and heteronormative stances—as any child of the 1970s and Marlo Thomas’ “Free to Be…You and Me” well knows.

So what do we do, then, with the complex space represented by the intersection of youth, adults, publishers, and YA fiction? Ultimately, I argue for a reevaluation of the value of youth voices in discussions surrounding YA fiction. As adults, our natural inclination may be to protect children, but we must also endeavor to understand the long-term implications of our actions—after all, isn’t our real goal to equip the next generation with the tools that they will need to become successful citizens of the world? We must walk a narrow line, fighting our tendency to view modern youth as romanticized wunderkind while respecting the demographic as one that is increasingly capable of amazing resilience. If our generation is to have any hope of disrupting the adversarial cycle so prevalent in YA dystopian fiction, we must take it upon ourselves to educate youth in a way that encourages their empowerment while remaining open to all that they have to teach us. It is only through this integration, and a more sophisticated flow of information, that we can hope to avoid the manufacture of a disenfranchised generation destined to suffer the ultimate indignity of being born into a dystopia. To get there, we must wholeheartedly engage with children, seeking to understand the ways in which they process information and perceive their environment. Although we are armed with mountains of theory, we need to realize that we do not necessarily know better—we merely know differently. We need to take the time to truly listen to our youth and attempt to see the world through their eyes: focus groups can be used to ascertain descriptive language while large-scale surveys provide an element of generalizability. Inventories might help researchers get a sense of things like the pervasiveness of self-harm or the recuperative value of YA fiction. Follow-up interviews or focus groups could help us to evaluate the effectiveness of treatment programs, allowing us to alter our course should the need arise. In short, we need to actually talk (and listen!) to those whom we would serve.


[1] See Meghan Cox Gurdon, “Darkness Too Visible.” The Wall Street Journal. June 4, 2011 and Sherman Alexie, “Why the Best Kids Books Are Written in Blood.” The Wall Street Journal. June 9, 2011 for contrasting views on this topic.

[2] Hintz and Ostry, Utopian and Dystopian. 2003.

[3] See Sargent, “In Defense of Utopia.” 2006 and Maureen F. Moran, “Educating Desire: Magic, Power, and Control in Tanith Lee’s Unicorn Trilogy.” In Utopian and Dystopian Writing for Children and Young Adults, 139-155. (New York: Routledge, 2003).

[4] See Elizabeth Braithwaite, “Post-Disaster Fiction for Young Adults: Some Trends and Variations.” Papers: Explorations into Children’s Literature 20, no. 1 (2010): 5-19 and Roberta Seelinger Trites, Disturbing the Universe: Power and Repression in Adolescent Literature. (Iowa City: University of Iowa Press, 2000).

[5] Trites, Disturbing the Universe. 2000.

[6] Lara Cain-Gray,  “Longing For a Life Less Ordinary: Reading the Banal as Dystopian in Sonya Hartnett’s Butterfly.” Social Alternatives 28, no. 3 (2009): 35-38.

[7] Jack Zipes, Sticks and Stones: The Troublesome Success of Children’s Literature from Slovenly Peter to Harry Potter. (New York: Routledge, 2002).

[8] See Elizabeth Braithwaite, “Post-Disaster Fiction for Young Adults: Some Trends and Variations.” Papers: Explorations into Children’s Literature 20, no. 1 (2010): 5-19 and Erich H. Fromm, Escape from Freedom. (New York: Henry Holt and Company, 1994: 34).


A Spoonful of Fiction Helps the Science Go Down

Despite not being an avid fan of Science Fiction when I was younger (unless you count random viewings of Star Trek reruns), I engaged in a thorough study of scientific literature in the course of pursuing a degree in the Natural Sciences. Instead of Nineteen Eighty-Four, I read books about the discovery of the cell and of cloning; instead of Jules Verne’s literary journeys, I followed the real-life treks of Albert Schweitzer. I studied Biology and was proud of it! I was smart and cool (as much as a high school student can be) for although I loved Science, I never would have identified as a Sci-Fi nerd.

But, looking back, I begin to wonder.

For those who have never had the distinct pleasure of studying Biology (or who have pushed the memory far into the recesses of their minds), let me offer a brief taste via this diagram of the Krebs Cycle:

Admittedly, the cycle is not overly complicated (though it was certainly a lot for my high school mind to understand), but I found myself making up a story of sorts in order to remember the steps. The details are fuzzy, but I seem to recall some sort of bus with passengers getting on and off as the vehicle made a circuit and ended up back at a station. I will be the first to admit that this particular tale wasn’t overly sophisticated or spectacular, but, when you think about it, wasn’t it a form of science fiction? So my story didn’t feature futuristic cars, robots, aliens, or rockets—but, at its core, it represented a narrative that helped me to make sense of my world, reconciling the language of science with my everyday vernacular. At the very least, it was a fiction about science fact.

And, ultimately, isn’t this what Science Fiction is all about (at least in part)? We can have discussions about hard vs. soft or realistic vs. imaginary, but, for me, the genre has always been about people’s connection to concepts in science and their resulting relationships with each other. Narrative allows us to explore ethical, moral, and technological issues in science that scientists themselves might not even think about.  We respond to innovations with a mixture of anxiety, hope, and curiosity and the stories that we tell often reveal that we are capable of experiencing all three emotional states simultaneously! For those of us who do not know jargon, Science Fiction allows us to respond to the field on our terms as we simply try to make sense of it all. Moreover, because of its status as genre, Science Fiction also affords us the ability to touch upon deeply ingrained issues in a non-threatening manner:  as was mentioned in our first class with respect to humor, our attention is so focused on tech that we “forget” that we are actually talking about things of serious import. From Frankenstein to Dr. Moreau, the Golem, Faust, Francis Bacon, Battlestar Galactica and Caprica (among many others), we have continued to struggle with our relationship to Nature and God (and, for that matter, what are Noah and Babel about if not technology!) all while using Science Fiction as a conduit. Through Sci-Fi we not only concern ourselves with issues of technology but also juggle concepts of creation/eschatology, autonomy, agency, free will, family, and society.

It would make sense, then, that modern science fiction seemed to rise concurrent with post-Industrial Revolution advancements as the public was presented with a whole host of new opportunities and challenges. Taken this way, Science Fiction has always been about the people—call it low culture if you must—and I wouldn’t have it any other way.


Coming in from the Cold

Although utopia—and perhaps more commonly, dystopia—has come to be regularly associated with the genre of Science Fiction (SF), it seems prudent to assert that utopia is not necessarily a subgenre of SF. Instead, as a result of the shift toward secular and rational thinking in the Enlightenment, the modern notions of progress and idealism inherent in Western utopian thought find themselves intimately connected to science and technology in various forms. Early twentieth-century American characters like Tom Swift, for example, articulated the optimism and energy associated with youth inventors, highlighting the promise associated with youth and new technology.[1]

After Robert A. Heinlein’s partnership with Scribner’s helped to legitimize science fiction in the late 1940s through the publication of Rocket Ship Galileo, the genre began to flourish and, like other contemporary works of fiction, increasingly reflected concerns of the day.[2] Still reeling from the aftereffects of World War II, American culture juggled the potential destruction and utility of atomic energy while simultaneously grappling with a pervasive sense of paranoia that manifested during the Cold War. As with many other major cultural shifts, the rapid change in the years following World War II caused Americans to muse over the direction in which they were now headed; despite a strong current of optimism that bolstered dreams of a not-far-off utopia (see Tomorrowland in Disneyland), there remained a stubborn fear that the quickly shifting nature of society might have had unanticipated and unforeseen effects.[3] Very much grounded in an anxiety-filled relationship with developing technology, this new ideological conflict undercut the optimism afforded by consumer technology’s newfound modern conveniences. Life in the suburbs, it seemed, was too good to be true, and inhabitants felt a constant tension as they imagined challenges to their newly rediscovered safety: from threats of invasion to worries about conformity, from dystopian futures to a current reality that could now be obliterated with nuclear weapons, people of the 1950s continually felt the weight of living in a society under siege. An overwhelming sense of doubt, and more specifically paranoia, characterized the age, with latent fears manifesting in literature and media as the public began to struggle with the realization that the suburbs did not fully represent the picturesque spaces that they had been conceived to be. In fact, inhabitants were assaulted on a variety of levels as they became disenchanted with authority figures, feared assimilation and mind control (particularly through science and/or technology), began to distrust their neighbors (who could easily turn out to be Communists, spies, or even aliens!), and felt haunted by their pasts.[4] In short, the utopia promised by access to cars, microwave dinners, and cities of the future only served to breed frustration in the 1960s as life did not turn out to be as idyllic as advertised.

Suggesting that utopian and dystopian notions were not intrinsically linked to technology, this pattern would repeat itself in the 1980s after the promises of the Civil Rights, environmental, Women’s Liberation, and other counter-cultural movements like the Vietnam War protests faltered. To be sure, gains were made in each of these arenas, reflected in an increase in utopian science fiction during the 1970s, but stalling momentum—and a stagnating economy—caused pessimism and disillusionment to follow a once burgeoning sense of optimism during the 1980s.[5] On a grander scale, bolstered by realizations that societies built upon the once-utopian ideals of fascism and communism had failed, the 1980s became a dark time for American political sentiment and fiction, allowing for the development of dystopian genres like cyberpunk that mused on the collapse of the State as an effective and beneficial regulating entity.[6] Reflected in films like The Terminator, a larger travesty manifested during the decade through an inability to devise systemic solutions to society’s problems, as we instead coalesced our hopes in the formation of romanticized rebel groups with individualist leaders.[7] Writing in 1985, author John Berger opined that repeated promises from Progressive movements in the past had contributed to society’s growing sense of impatience.[8] A powerful sentiment that holds resonance today, Berger’s observation is reflected in President Obama’s campaign slogan and the backlash that followed his election to office—“Hope,” it seems, capitalized upon our expectations for a future filled with change but also sowed the seeds of discontent as the American public failed to witness instantaneous transformation. For many in the United States, a lack of significant, tangible, and/or immediate returns caused fractured utopian dreams to quickly assume the guise of dystopian nightmares.

Furthermore, these cycles set a precedent for the current cultural climate: the promises of new developments in communication technologies like the Internet—particularly relevant is its ability to lower the barriers of access to information—have turned dark as we have come to recognize the dangers of online predators and question the appropriateness of sexting. Moreover, technological advances that allow for the manipulation of the genetic code—itself a type of information—have allowed us to imagine a future that foresees the elimination of disease while simultaneously raising issues of eugenics and bioethics. Shifting our focus from the void of space to the expanses of the mind, utopian and dystopian fiction appears to be musing on the intersection of information (broadly defined) and identity. Spanning topics that feature cybernetic implants, issues of surveillance and privacy, or even the simple knowledge that a life unencumbered by technology is best, ultimately it is access to, and our relationship with, information that links many of the current offerings in utopian/dystopian Science Fiction.


[1] Francis J. Molson, “American Technological Fiction for Youth: 1900-1940.” In Young Adult Science Fiction, edited by C. W. Sullivan III. (Westport: Greenwood Press, 1999).

[2] Comics at this time, for example, also spoke to cultural negotiations of science and progress. For more about the establishment of Science Fiction as a genre, see C. W. Sullivan III, “American Young Adult Science Fiction Since 1947.” In Young Adult Science Fiction, edited by C. W. Sullivan III. (Westport: Greenwood Press, 1999).

[3] Bernice M. Murphy, The Suburban Gothic in American Popular Culture. (Basingstoke: Palgrave Macmillan, 2009).

[4] See Paul Jensen, “The Return of Dr. Caligari.” Film Comment 7, no. 4 (1971): 36-45 or Gary K. Wolfe, “Dr. Strangelove, Red Alert, and Patterns of Paranoia in the 1950s.” Journal of Popular Film, 2002: 57-67 for further discussion.

[5] Peter Fitting, “A Short History of Utopian Studies.” Science Fiction Studies 36, no. 1 (March 2009): 121-131.

[6] Lyman Tower Sargent, “In Defense of Utopia.” Diogenes 209 (2006): 11-17.

[7] Constance Penley, “Time Travel, Primal Scene, and the Critical Dystopia.” Camera Obscura 5, no. 3 15 (1986): 66-85.

[8] John Berger, The White Bird. (London: Chatto, 1985).


I Don’t Want Much, Just Everything You Are…and a Little Bit More

“This implies, does it not, that in order to raise a generation of children who can reach their full potential, we must find a way to make their lives interesting. And the question I have for you, Mr. Hackworth, is this: Do you think that schools accomplish that? Or are they like the schools Wordsworth complained of?”

–Neal Stephenson, The Diamond Age

Fifteen years after these words were written, we are still struggling to answer the question posed by Science Fiction author Neal Stephenson. Increasingly, we are finding that our American educational system does not raise a generation of children to reach their full potential; arguments about mental acuity aside, we seem to suffer from a generation of college applicants that is, well, rather uninteresting. This is not to say that there aren’t amazing students out there–there definitely are some–but they are more the exception than the rule.

To combat this, we have seen a rise in adult-driven initiatives that aim to cultivate interesting children. Although I don’t disagree with the sentiment, I do disagree with the practice. Fantastic trips and summer camps are not, in and of themselves, the problem. (Certainly, I think we have come to adopt a rather distorted view of what’s important and, on some level, we’ve all heard these arguments before. Bigger is better, theater audiences want to see their money on stage, news headlines scream at us, spectacle is rampant, etc.) Rather, I take issue with the idea that many applicants try to substitute someone else’s story for their own: time and time again, I have come across students who traveled to poor villages, or did research, or spent the summer living in European hostels, and they typically tell me the same story. These students tell me the central narrative of what they were supposed to have learned or experienced on these adventures and, sometimes, force themselves to have those experiences whether they are genuine or not. Without realizing it, many have subscribed to the notion that there is a typical experience one is supposed to have in the Costa Rican jungles, and they recount this like it was the most magical awakening. And, to be fair, it might have been, but I would argue that the shift in perspective is only part of it–everyone goes through an awakening at some point in his or her life–what I want is for students to understand what this change wrought in them. How did you learn something that forever changed the way that you saw the world, such that you couldn’t ever go back?

Or we extol the virtues of Boredom as a provider of quiet spaces free from stimulation, forgetting that, for all the incredible things restless youth have created, they have also managed to enact incredible amounts of destruction. The practices of contemplation, introspection, and awareness can result from boredom, but we are mistaken if we consider boredom to be a prerequisite.

Ultimately, I think that teaching kids to cultivate a passion is not the same as demanding mastery–sure, passion may lead to mastery and I’m not trying to stifle that process–but all I really want is for a student to want to be smarter, to be braver, to be more inquisitive. Simply put, all I really want is for a student to want to be more. If this is our goal, the trips and the flashy photos and the houses built all melt away for we see that we can have–that we do have–meaningful experiences every day. We don’t need to “discover” hidden truths but we do need to reconsider what’s happening around, to, and in us. I think we need to train kids how to understand the import of their “normal” lives and, perhaps more importantly, how to translate these lessons learned into purposeful action.


Light Up the Sky Like a Flame

But what is reality television? Although the genre seems to defy firm definitions, we, like Justice Stewart, instinctually “know it when [we] see it.” The truth is that reality television spans a range of programming, from clip shows like America’s Funniest Home Videos, to do-it-yourself offerings on The Food Network, investigative reporting on newsmagazines like 60 Minutes, the docu-soap Cops, and many other sub-genres in between, including the reality survival competition that forms the basis for The Hunger Games. Although a complete dissection of the genre is beyond the scope of this chapter—indeed, entire books have been written on the subject—reality television and its implications will serve as a lens by which we can begin to understand how Katniss experiences the profound effects of image, celebrity, and authenticity throughout The Hunger Games.

She Hits Everyone in the Eye

For the residents of Panem, reality television is not just entertainment—it is a pervasive cultural entity that has become inseparable from citizens’ personal identity. Although fans of The Hunger Games can likely cite overt allusions to reality television throughout the series, the genre also invokes a cultural history rife with unease regarding the mediated image in the United States.

Reacting to atrocities witnessed throughout the course of World War II, Americans in the 1950s became obsessed with notions of power and control, fearing that they would be subsumed by the invisible hand of a totalitarian regime. In particular, the relatively young medium of television became suspect as it represented a major broadcast system that seemed to have a hypnotic pull on its audience, leaving viewers entranced by its images. And images, according to author and historian Daniel Boorstin, had been becoming increasingly prominent since the 19th century as part of the Graphic Revolution, replete with the power to dissociate the real from its representation. Boorstin argued that although the mass reproduction of images might provide increased levels of access for the public, the individual significance of the images declined as a result of their replication; as the number of images increased, the importance they derived from their connection to the original subject became more diffuse. And, once divorced from their original context, the images became free to take on a meaning all their own. Employing the term “pseudo-event” to describe an aspect of this relationship, Boorstin endeavored to illuminate shifting cultural norms that had increasingly come to consider the representation of an event more significant than the event itself.

Katniss unwittingly touches upon Boorstin’s point early in The Hunger Games, noting that the Games exert their control by forcing Tributes from the various districts to kill one another while the rest of Panem looks on. Katniss’ assertion hints that The Hunger Games hold power primarily because they are watched, voluntarily or otherwise; in a way, without a public to witness the slaughter, none of the events in the Arena matter. Yet, what Katniss unsurprisingly fails to remark upon, given the seemingly ever-present nature of media in Panem, is that the events of The Hunger Games are largely experienced through a screen; although individuals may witness the Reaping or the Tributes’ parade in person, the majority of their experiences result from watching the Capitol’s transmissions. Without the reach of a broadcast medium like television (or, in modern culture, streaming Internet video), the ability of The Hunger Games to effect subjugation would be limited in scope, for although the Games’ influence would surely be felt by those who witnessed such an event in person, the intended impact would rapidly decline as it radiated outward. Furthermore, by formulating common referents, a medium like television facilitates the development of a mass culture, which, in the most pessimistic conceptualizations, represents a passive audience ripe for manipulation. For cultural critics of the Frankfurt School (1923-1950s), who were still reeling from the aftereffects of Fascism and totalitarianism, this was a dangerous proposition indeed. Although the exact nature of modern audiences is up for debate, with scholars increasingly championing viewers’ active participation with media, Panem has seemingly realized a deep-seated fear of the Frankfurt School. It would appear, then, that The Hunger Games function as an oppressive force precisely because of their status as a mediated spectacle of suffering.

But perhaps we should not be so hard on Katniss. Growing up in an environment that necessitated the cultivation of skills like hunting and foraging, Katniss’ initial perspective is firmly grounded in a world based on truth. Plants, for example, must be checked (and double-checked!) to ensure their genuineness, lest a false bite result in death. In order for Katniss to survive, not only must she be able to identify plants, but she must also trust in their authenticity; prior to her experience in the Arena, Katniss undoubtedly understands the world in rather literal terms, primarily concerned with objects’ functional or transactional value. However, as hinted by Boorstin, additional layers of meaning exist beyond an item’s utility—layers that Katniss has not yet been trained to see.

Echoing portions of Boorstin’s previous work, French philosopher Jean Baudrillard conceptualized four types of value that objects could possess in modern society: functional, transactional, symbolic, and sign. Although Baudrillard’s theory is more complex than the description provided herein, we can momentarily consider how his value categories of “functional” and “transactional” might align with Boorstin’s previously introduced concept of the “real,” while “symbolic” and “sign” evidence an affinity toward “representation.” Whereas the functional and transactional value of items primarily relates to their usefulness, the categories of “symbolic” and “sign” are predominantly derived from the objects’ relationship to other objects (sign) or to actors (symbolic). Accordingly, being relatively weak in her comprehension of representation’s nuances, Katniss characteristically makes little comment on Madge’s gift of a mockingjay pin. However, unbeknownst to Katniss (and most likely Madge herself), Madge has introduced one of the story’s first symbols, in the process imbuing the pin with an additional layer of meaning. Not just symbolic in a literary sense, the mockingjay pin gains significance because it is attached to Katniss, an association that will later bear fruit, as fans well know.

Before moving on, let’s revisit the import of The Hunger Games in light of Baudrillard: what is the value of the Games? Although some might rightly argue that The Hunger Games perform a function for President Snow and the rest of the Capitol, this is not the same as saying the Games hold functional value in the framework outlined by Baudrillard. The deaths of the Tributes, while undeniably tragic, do not in and of themselves fully account for The Hunger Games’ locus of control. In order to supplement Boorstin’s explanation of how The Hunger Games act to repress the populace with the why, Baudrillard might point to the web of associations that stem from the event itself: in many ways, the lives and identities of Panem’s residents are defined in terms of a relationship with The Hunger Games, meaning that the Games possess an enormous amount of value as a sign. The residents of the Capitol, for example, evidence a fundamentally different association with The Hunger Games, viewing it as a form of entertainment or sport, while the denizens of the Districts perceive the event as a grim reminder of a failed rebellion. Holding a superficial understanding of The Hunger Games’ true import when we first meet her, Katniss could not possibly comprehend that her destiny is to become a symbol, for the nascent Katniss clearly does not deal in representations or images. Katniss, at this stage in her development, could not be the famed reality show starlet known as the “girl on fire” even if she wanted to.

By All Accounts, Unforgettable

Returning briefly to reality television, we see that Panem, like modern America, finds itself inundated with the genre, whose pervasive tropes, defined character (stereo)types, and ubiquitous catchphrases have indelibly affected us as we subtly react to what we see on screen. Although we might voice moral outrage at offerings like The Jersey Shore or decry the spate of shows glamorizing teen pregnancy, perhaps our most significant response to unscripted popular entertainment is a fundamental shift in our conceptualization of fame and celebrity. Advancing a premise that promotes the ravenous consumption of otherwise nondescript “real” people by a seemingly insatiable audience, reality television forwards the position that anyone—including us!—can gain renown if we merely manage to get in front of a camera. Although the hopeful might understand this change in celebrity as democratizing, the cynic might argue that fame’s newfound accessibility also indicates its relative worthlessness in the modern age; individuals today can, as the saying goes, simply be famous for being famous.

Encapsulated by Mark Rowlands’ term “vfame,” the relative ease of an unmerited rise in reputation indicates how fame in the current cultural climate has largely been divorced from its original association with distinguished achievement. Although traditional vestiges of fame have not necessarily disappeared, it would appear that vfame has become a prominent force in American culture—something Katniss surely would not agree with. Recalling, in part, Kierkegaard’s thoughts on nihilism, vfame’s appearance stems from an inability of people to distinguish quality (or perhaps a lack of concern in doing so), resulting in all things being equally valuable and hence equally unimportant. This, in rather negative terms, is the price that we pay for the democratization of celebrity: fame—or, more accurately, vfame—is uniformly available to all in a manner that mirrors a function of religion and yet promises a rather empty sort of transcendence. Although alluring, vfame is rather unstable as it is tied to notions of novelty and sensation, as opposed to fame, which is grounded by its association with real talent or achievement; individuals who achieve vfame typically cannot affect the longevity of their success in substantial terms, as they were not instrumental in its creation to begin with. Stars in the current age, as it were, are not born so much as made. Moreover, the inability of the public to distinguish quality leads us to focus on the wrong questions (and, perhaps worse, to not even realize that we are asking the wrong questions) in ways that have very real consequences; although vfame and its associated lapse in thinking might be most obvious in the realm of celebrities, it also manifests in other institutions such as politics. As a culture obsessed with image and reputation, we have, in some ways, forgotten how to judge the things that really matter because we have lost a sense of what our standards should be.

Born out of an early to mid-20th century society in which the concept of the “celebrity” was being renegotiated by America, concepts like vfame built upon an ingrained cultural history of the United States that was firmly steeped in a Puritan work ethic. Americans, who had honored heroes exemplifying ideals associated with a culture of production, were struggling to reconcile these notions in the presence of an environment now focused on consumption. Although Katniss, as proxy for modern audiences, might initially find this shift difficult to appreciate, one need only consider that the premium placed on production is so central to American ideology that it continues to linger today: in a culture that exhibits rampant consumerism, we still value the “self-made man” and sell the myth of America as a place where anyone can achieve success through hard work. To abandon these ideas would necessitate that we reinterpret the very meaning of “America.” Thus, we become more sympathetic to the critics of the day who lamented the loss of the greatness of man and bristled against the notion that fame or celebrity could be manufactured—such a system would only result in individuals who were lacking and unworthy of their status. To this day, our relationship with celebrities is a tenuous and complex one at best, for although we celebrate the achievements of some, we continue to flock to the spectacle created by the public meltdown of others, unable or unwilling to help; we vacillate between positions of adulation, envy, contempt, and pity, ever poised for outrage but all too willing to forgive.

Perhaps it should come as no surprise that reality television puts us a little on edge, as the genre represents a fundamental blurring of fact and fiction. Celebrities, we see, are just like us—just like our neighbors, who, through the magic of reality television, can become stars! Ever-shifting classifications leave us on unstable ground. But also consider the aforementioned philosophy of Boorstin: stars are, among other things, individuals whose images are important enough to be reproduced, which causes “celebrity” to transition from a type of person to a description of how someone is represented in society. In other words, we witness a shift from a term that labels who someone is to a term that designates who someone seems to be. Celebrities, it might be argued, derive at least a portion of their power in modern culture because they embody a collection of images that has been imbued with some sort of significance. Ultimately, it seems that much of our unease with celebrity and fame centers on notions of authenticity.

All I Can Think of Are Hidden Things

Long before Katniss ever becomes a celebrity herself, she exhibits disdain for the Capitol and its residents, evidencing a particularly adverse reaction to things she considers artificial. As previously discussed, authenticity played a particular role in Katniss’ growth and her ability to survive: for Katniss, a false image literally represented a matter of life or death, for a lapse in judgment could result in electrocution or poisoning. Accordingly, Katniss dismisses the strange colors of the Capitol along with the characteristic features of its citizens—stylists, in particular, are purported to be grotesque—because she is not readily able to reconcile these visuals with her established worldview. As Katniss operates on a literal level, directly associating identity with appearance, the self can only present in one way (in this case, relatively unadorned) and maintain its authenticity.

Like Katniss, we too may be tempted to summarily reject the unfamiliar; our modern anxieties might best be encapsulated by the question: What to do with a problem like Lady Gaga? Perhaps the strongest contemporary mass image that mirrors the visual impact of the stylists on Katniss (followed closely by New York socialite Jocelyn Wildenstein), Lady Gaga suffers continual criticism for her over-the-top theatrical presentations. With dresses made from meat and Hello Kitty heads, it is all too easy to write Lady Gaga off as “attention-starved,” simplifying her presence to the succinct “weird.” Yet, it seems rash to dismiss Lady Gaga and the world of fame as nothing more than frivolity and fluff, for pop culture is only as vapid as our disinclination to engage with it.

Consider, for example, how the Capitol and its residents (of whom a prominent one would undoubtedly be Lady Gaga) embody the spirit of Decadence, a particularly prominent theme in Victorian culture. A reaction to the 19th century movement of Romanticism, Decadence championed concepts like artifice, which served to demonstrate man’s ability to rebel against, and possibly tame, the natural order. Although this inclination toward the unnatural manifested in myriad ways, French poet and philosopher Charles Baudelaire viewed women’s use of cosmetics as a particular site of interest, for proper application did not just enhance a woman’s beauty but acted to transform her, allowing transcendence through artifice.

With this in mind, we begin to understand the innate control wielded by figures such as Cinna and Caesar Flickerman. Perceived as facile by some, these two men represent a class of individuals adept at understanding the power inherent in fame, reputation, celebrity, and appearance; in the Capitol, image mongers such as these hold sway. Although one reading of these characters plants them firmly in the realm of artifice, painting them as masters of emotional manipulation and spectacle, an alternate view might consider how these two have come to recognize a shift toward a new localized reality—one that Katniss must adapt to or perish.

And yet, despite their commonality, these two individuals also underscore fundamentally different approaches to image: Caesar (and, perhaps, by extension, the Capitol) wields his power in order to mask or redirect, while Cinna endeavors to showcase a deep-seated quality through the management of reputation and representation. Coexisting simultaneously, these two properties of illusion mirror the complementary natures of Peeta and Katniss with regard to image. Peeta, skilled in physical camouflage, exhibits an emotional candidness that Katniss is initially unready, or unwilling, to match; Katniss, very much the inverse of Peeta, is characterized by traits associated with hunting, finding, and sight in the “real” world, all while maintaining a level of emotive subterfuge. Over the course of the 74th Hunger Games, however, Katniss quickly learns to anticipate how her actions in the Arena will affect her representation and reputation beyond the battlefield. With the help of Haymitch, Katniss begins to better understand the link between a robust virtual self and a healthy physical one as she pauses for the cameras and plays up her affection for Peeta in exchange for much-needed rewards of food and medicine. As she matures, Katniss comes into alignment with Cinna and Caesar, individuals who, despite being participatory members of a system arguably deemed inauthentic, distinguish themselves from the majority of Panem by understanding how image works; Cinna and Caesar (and later Katniss) are not just powerful, but empowered and autonomous.

Herein lies the true import of Collins’ choice to weave the trope of reality television into the fabric of The Hunger Games: throughout the trilogy, the audience is continually called upon to question the nature of authenticity as it presents in the context of a media ecology. Ultimately, the question is not whether Katniss (or anyone else) maintains a sense of authenticity by participating in the games of the Capitol—trading a true self for a performed self—but rather how we might effect multiple presentations of self without being inauthentic. How does Katniss, in her quest to survive, embody Erving Goffman’s claims that we are constantly performing, altering our presentation as we attempt to cater to different audiences? Is Katniss truly being inauthentic, or does she ask us to redefine the concept of authenticity and its evaluation? Struggling with these very questions, users of social media today constantly juggle notions of authenticity and self-presentation, with platforms like Facebook and Twitter forming asynchronous time streams that seamlessly coexist alongside our real-life personas. Which one of these selves, if any, is authentic? Like Katniss, we are born into the world of the “real” without a ready ability to conceptualize the power latent in the virtual, consequently resenting what we do not understand.


Free to Be…What, Exactly?

In June 2011, an article published in The Wall Street Journal sparked robust debate about the appropriateness of the themes proffered by current YA fiction, which ultimately culminated in a virtual discussion identified by the hashtag “#YASaves” on the social messaging service Twitter. Although some of the themes mentioned in the #YASaves discussion, like self-harm, eating disorders, and abuse, seem outside the scope of YA dystopia (in that they are not always elements in the genre), the larger issue of concern over youth’s exposure to “darkness” speaks to an overarching perception of children derived from views prevalent in Romanticism.

Consistent with the Romantic idolization of nature, children were heralded as pure symbols of the future who had not yet conformed to the mores of society. Building upon this model and undoubtedly bolstered by the counter-cultural movements of the 1960s, YA fiction increasingly began to shoulder youth with the responsibility and expectation of overthrowing the generations that had come prior while simultaneously delegitimizing the state of adolescence through trajectories that necessitated the psychological growth of protagonists. In order to save the world, teenage protagonists must inevitably sacrifice their innocence and thus become emblematic of the very institutions they seek to oppose.



Harry Potter Round-Up

Question from Harry Potter: “Is it worth believing in love without evidence of its transformative power?”

Although the answer is more complicated, it is worth it, as the simple act of believing possesses the power to be transformative in and of itself.

http://religion.blogs.cnn.com/2011/07/13/my-take-why-were-drawn-to-harry-potters-theology/

***

Reading about Jesus as a Hufflepuff in this blog post. Which, of course, it’s not that simple, but Helga was gracious in ways that the others were not. Self/others + tangible/intangible categorizations become murky with this one…

http://experimentaltheology.blogspot.com/2011/07/jesus-would-be-hufflepuff.html

***

“We’re just left with the monolith, the Harry Potter Experience, which feels distinctly Muggle-wrought: theme parks, movie memorabilia circling the globe, and Pottermore, Rowling’s new digital project, which, despite her promises of a fan-dominated site, may have been created simply to sell e-books.”

http://www.newyorker.com/online/blogs/books/2011/07/harry-potter-deathly-hallows-it-all-ends.html#ixzz1SLEX6Qf6


Diffusion of Innovation

In Diffusion of Innovations, Everett Rogers discusses the concept of “diffusion” as a subset of communication in order to highlight how communities acquire knowledge. Rogers’s opening chapter uses anecdotes to illustrate various strategies for this process, simultaneously providing a vivid reference point for readers while hinting at the complex array of factors that can affect the spread of ideas.

Undoubtedly building upon foundational theory created by Rogers, figures such as Richard Dawkins and Malcolm Gladwell have ruminated on the spread of messages. Using the preexisting schema of Evolutionary Biology, Dawkins likened information to genes (in the process, creating the term “memes”) in order to describe his theories regarding transmission and replication. Dawkins essentially argued that the fittest (in an evolutionary sense) ideas would go on to propagate in society, mirroring the activity of organisms. Gladwell, on the other hand, has incorporated Rogers’s model of adopters into his book The Tipping Point, describing the stages of diffusion in terms of people. Although Gladwell also goes on to describe individuals’ roles as agents of change, he continues to work under the philosophical framework provided by Rogers.

Daniel Czitrom’s Media and the American Mind addresses communication in a different manner, referencing media theorist Marshall McLuhan in its subtitle. McLuhan famously introduced the notion that “the medium is the message,” referring to the concept that the mode of communication bears an inextricable relation to the content being conveyed. Although the phrase was first coined in the 1960s, McLuhan’s thinking can still be applied to a modern culture that struggles to integrate the increased number of available media channels (e.g., traditional broadcast, podcasts, blogs and vlogs, etc.) afforded by advances in technology. Additionally, transmedia presentations of content (e.g., webisodes for Battlestar Galactica and Heroes or the narrative of The Matrix) challenge viewers and producers to reconsider established notions of media’s impact.