By Holly Willis
Cinema, the primary vehicle for storytelling in the 20th century, is in the midst of an exciting transformation as its tools, practices, venues and viewing habits mutate in new directions. What is cinematic storytelling when it becomes virtual, augmented, generative, live and playable?
In July 2016, a company named Niantic released the Pokémon Go app, which lets people spot and capture Pokémon characters with their mobile devices, aided by augmented reality (AR) technology that appears to layer the creatures over real-world locations. The free app quickly achieved more than 500 million downloads and, within two months of its launch, had generated $500 million in revenue. The project is being touted as instrumental in familiarising global audiences with augmented reality technologies, paving the way for an influx of other AR experiences in the near future.
In the same first week of September 2016 in which these Pokémon Go statistics were reported, a short virtual reality (VR) experience titled Henry, made by Oculus Story Studio within the Facebook-owned Oculus VR, earned an Emmy Award for Outstanding Original Interactive Program. Designed specifically to prove that a VR experience can generate powerful character identification and empathy, Henry features a spiky hedgehog as its protagonist; a single moment of eye-to-eye contact within the space of the story creates a sense of emotional connection, despite the fact that viewers are free to move around and look wherever they choose. The short VR experience, and the approval bestowed upon it by the American television industry, play a role similar to that of Pokémon Go: introducing a gentle, playful story as a means of gaining popular interest in a new technology.
However, July 2016 also marked another type of high-profile media experience, when Diamond “Lavish” Reynolds live-streamed to Facebook video captured on her phone of the aftermath of the shooting death of her boyfriend, Philando Castile, during a traffic stop. Millions of viewers saw the video – and sadly, viewers all over the world have seen many other portraits of police brutality as well. The ability to live-stream events as they unfold raises a host of issues, many of them ethical; it also announces yet another permutation in moving image culture in the first two decades of the twenty-first century, one whose obvious difference from the two entertainment-oriented examples cited here helps sketch the radical breadth, complexity and rapid transformation of contemporary moving image culture. It is perhaps axiomatic to say it, but what we might cautiously dub “the cinematic” is in the midst of a massive reimagining.
Indeed, for 25 years, cinema, the primary vehicle for visual storytelling in the twentieth century, has been steadily morphing and changing. The shift from celluloid to digital media at the turn of the century marked one tremendous change; larger social and cultural shifts, and the need for new kinds of stories, mark another. As a result, cinema has been steadily moving well beyond the two-dimensional screens of the movie theatre, the television and the computer. Movies are now produced, distributed and exhibited in new ways. They are experienced and shared differently. They are bigger, thanks to larger screens such as IMAX, but also smaller, as when viewed on mobile devices. And the terms we use are changing: cinema can now be live, playable, immersive, mobile, ambient, spatialised, volumetric and generative. Understood in its broadest sense, the cinematic now illuminates museums and galleries in the form of large-scale projections and video installations; it appears as an integral component of architectural sites and the built environment in the form of urban screens and large-scale data visualisations; moving images are rampant on mobile devices, which may invite gesture as a form of interaction; websites increasingly seek to create new interfaces for storytelling; and even the increasingly sophisticated cut scenes that punctuate video games bring cinematic elements to game consoles.
One of the most dramatic instances of contemporary moving image culture is the proliferation of large-scale generative artworks exhibited in public spaces. Los Angeles-based artists Refik Anadol and Susan Nardulis, for example, have been commissioned to design a project titled “Convergence” for the five-storey façade of a new building in downtown LA. The artwork, to be unveiled in January 2017, will integrate real-time data, including social media posts, traffic information, news feeds and even oceanographic and tectonic data streams, designed specifically “to present an ever-changing cinematic narrative of Los Angeles”. Doug Aitken’s Mirror, installed on the exterior of the Seattle Art Museum, similarly incorporates sensor data, as well as video footage, to create what the artist dubs an “urban earthwork”. Convergence and Mirror represent a new trend in large-scale public artworks that transform dynamically produced data into moving images, and in doing so invite us to reimagine our understanding of the “cinematic”.
Yet another compelling trend is live cinema, which refers to the real-time mixing of images and sound before a live audience, so that the sounds and images no longer exist in a fixed and finished form but evolve as they occur. The artist’s role becomes performative and the audience’s role becomes participatory. The work of Cloud Eye Control is emblematic of this trend. The three-person media art collective includes Miwa Matreyek, Chi-wang Yang and Anna Oxygen, who produce performances that layer together multiple projections, music and actors. The group’s 2015 project Half Life responds to the 2011 Fukushima Daiichi nuclear crisis, considering ecological disaster and a world in turmoil. The performance creates an often magical cinematic world inhabited by a live actress. The result is compelling: it is frequently impossible to tell which aspects of the image are projected and which may be “real”, and the layering reminds us of our own contemporary existence within a matrix of screens, hovering between virtual and real.
In addition, the tools that we use to make movies, and the age-old practices and workflows that enable filmmaking at an industrial level, are transforming. New directions include attempts to create increasingly immersive cinematic experiences. Barco, for example, has developed Barco Escape, a three-screen system that augments the traditional movie theatre’s front screen with screens to the left and right in order to produce a feeling of immersion. Films must be produced specifically for these theatres, and Star Trek Beyond, released in the summer of 2016, experimented with the technique.
Other examples of new tools include the proliferation of consumer-grade 360-degree cameras such as the Ricoh Theta and Samsung’s Gear 360, each costing less than $400 and allowing users to easily capture spherical video footage, which can then be automatically stitched together on a personal computer. At the higher end, the Lytro Immerge is described as the “light field solution for cinematic VR”, one that seeks to let artists meld computer-generated content and live-action footage easily and seamlessly. The Lytro camera – which looks nothing like a camera, instead resembling Seattle’s Space Needle – maps the flow of light within a space or, in industry terms, a volume. Rather than simply creating a two-dimensional image, then, it renders a three-dimensional space. The result is a shift from camera representation to the capture of information.
In addition to theatres and cameras, cinematic workflows are shifting, too. Where traditional filmmaking was once a linear process that, over the first decades of the last century, gained the efficiency of a factory assembly line, moving step by step through pre-production, production and post-production, we now have practices such as transmedia story design, which attends to the possibility of multiple platforms for a story’s unfolding, and worldbuilding, which has been championed by Alex McDowell and the World Building Media Lab. Rather than starting with a script, acquiring funding, entering pre-production, then production and so on, worldbuilding starts by designing a world and then allowing the stories that may be nascent within that world to emerge, honed for multiple media platforms. For McDowell, and for many other artists exploring virtual and augmented reality, designing stories for a 360-degree world-space demands that we relinquish control of character point of view, and with that comes the need to rethink, in very practical terms, how we tell these new stories.
What we are witnessing in these shifts, then, is the dismantling and reconfiguration of cinema, which has splintered into dozens of hybrid practices that engage not just sound and image but interaction design and user experience. Viewed historically, it is clear that the expansion we are witnessing is not entirely unique; the origins of cinema include a diverse array of experiments, from explorations of multi-screen exhibition to the integration of live events and film projection. Film historian Scott Bukatman used the words “delirium and immersion” to describe the United States of the late 1890s as cinema evolved, writing that the world had become “enveloping, inescapable, and incomprehensible, literally overwhelming”.1 It was artists, with their ability to produce controlled experiences of delirium and immersion through various kinds of early cinematic experimentation, who helped viewers comprehend that world.
Jumping forward several decades, Gene Youngblood’s book Expanded Cinema was published in 1970 in order to describe and assess the proliferation of practices enabled by video and the mixing of live performance and moving images of that era.2 Youngblood describes the desire to produce a kind of oceanic consciousness as a key element of these expanded cinema efforts. A flurry of more recent book publications further attests to a rich and robust history of media-based experimentation throughout the last century.3
Contemporary explorations of the cinematic, however, emerge from a shared sense of urgency sparked by the realisation that, as we move further into the 21st century, we are in the midst of a vast cultural transformation. We can sense this shift all around us: in the pace of change that leaves us rushed and scattered; in the experience of data deluge and the profusion of information, alongside the knowledge that every one of us is generating a cloud of data that is continuously tracked, aggregated and assessed; and in the proliferation of screens that has helped create a culture of distraction and disconnection.
Film scholars Anna Munster and Timothy Murray characterise our contemporary digital experience as baroque, connecting the present’s sense of excess, serial connectivity and fractured subjectivity to similar qualities inspired by the baroque artworks of the past.4 Our experience includes an accelerated pace of change, an inundation in data and a growing awareness of a world defined by systems and computation. As we witness tremendous shifts, intense conflicts and massive institutional failures, we need new kinds of stories, and new ways to communicate the dense layerings of reality and the shift from representation to information. Given this moment and its changes, it is an exciting time to take stock. What can our experiments with new tools, practices and story experiences reveal about who we are at this moment?
Featured image: Half Life | Cloud Eye Control | 2015. Half Life is a live cinema production that explores the fear experienced after the 2011 Fukushima Daiichi nuclear disaster.
Photo by: Steve Gunther
About the Author
Holly Willis is a Professor in the School of Cinematic Arts at the University of Southern California, where she also serves as the Chair of the Division of Media Arts + Practice. She is the author of Fast Forward: The Future(s) of the Cinematic Arts, recently published by Wallflower Press.
References:
1. Scott Bukatman, Matters of Gravity: Special Effects and Supermen in the 20th Century (Durham, North Carolina: Duke University Press, 2002), 114.
2. Gene Youngblood, Expanded Cinema (New York: Dutton, 1970).
3. See, for example, Gloria Sutton, The Experience Machine: Stan VanDerBeek’s Movie-Drome and Expanded Cinema (Cambridge, MA: The MIT Press, 2015); Andrew V. Uroskie, Between the Black Box and the White Cube: Expanded Cinema and Postwar Art (Chicago: University of Chicago Press, 2014); and David Curtis, A. L. Rees, Duncan White and Steven Ball, Expanded Cinema: Art, Performance, Film (London: Tate, 2011).
4. See Timothy Murray, Digital Baroque: New Media Art and Cinematic Folds (Minneapolis: University of Minnesota Press, 2008) and Anna Munster, Materializing New Media: Embodiment in Information Aesthetics (Lebanon, New Hampshire: Dartmouth College Press, 2006).