Virtual Reality technology brings Beckett into a new age of theatre

You stand in the centre of an unlit room, entirely in darkness. You swivel, and a spotlight shines on the head of a woman protruding above the lip of a beige ceramic urn. In rapid-fire intonation, she begins: “Yes strange the darkness best and the darker the worse till all dark then all well for the time but it will come the time will come the thing is there you’ll see it get off me keep off me all dark all still all over wiped out-”.

So begins the disorientating experience of Virtual Play, a virtual reality reimagining of Samuel Beckett’s theatrical text, Play. The original 1963 performance consisted of three identical knee-height urns, each containing an actor, with faces “so lost in age and aspect as to seem almost part of the urns”. The actors’ speech is provoked by a moving spotlight, which Beckett called “the integrator”: they speak when the light is on them, and remain silent when it is not. The script’s final stage direction is simply “repeat play”, a cycle that seemingly continues perpetually.

In its original, strange imagining, Play presents an interaction between the light operator and the actors, mediated by technology, while the audience members are static, passive viewers of the exchange. Since the 1960s, however, the ways we consume art and media have changed dramatically and the technologies available to us have evolved, allowing for more interactivity and direct engagement. In a collaboration with the Trinity Centre for Beckett Studies, Trinity’s V-SENSE group have created Virtual Play as an exploration of the new narrative possibilities presented by the cutting-edge capture technique of free-viewpoint video, typically accessed through virtual and augmented reality headsets.

V-SENSE is a team of more than 20 researchers working at the intersection of computer vision, computer graphics and media signal processing. Their work spans free-viewpoint video (FVV), 360 video, light-field technologies, visual effects and animation, as well as virtual, augmented and mixed reality.

Néill O’Dwyer, the leader of the Virtual Play project, tells me that “Play was chosen [as a medium] because it specifically engages the question of dialogue and interactivity”. By donning a VR headset, composed of headphones and chunky rectangular goggles, a participant becomes immersed in the researchers’ digital rendering of the play. In their version, the power is transferred to the viewer, whose gaze determines which actor speaks, and for how long. As O’Dwyer puts it, “your gaze becomes the spotlight”. Not only that, but the participant is no longer chained to a theatre seat: they can move about the space, walking around the urns, bending down, looking up.

“By placing the viewer at the centre of the storytelling process, they are more appropriately assimilated to the virtual world and are henceforth empowered to explore, discover and decode the story, as opposed to passively watching and listening,” says O’Dwyer.
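To make that mechanic concrete, here is a minimal sketch of “gaze as spotlight”, written in plain Python rather than any particular game engine’s scripting language: once per frame, whichever actor lies within a narrow cone around the headset’s forward direction speaks, and the others fall silent. The Actor class, the ten-degree cone and the coordinates are illustrative assumptions, not details of V-SENSE’s implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    position: tuple   # (x, y, z) of the head above the urn, in metres
    speaking: bool = False

def normalise(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def gaze_hits(actor, head_pos, gaze_dir, cone_deg=10.0):
    """True if the actor lies within a narrow cone around the gaze ray."""
    to_actor = tuple(a - h for a, h in zip(actor.position, head_pos))
    dist = math.sqrt(sum(c * c for c in to_actor))
    cos_angle = sum(g * t for g, t in zip(normalise(gaze_dir), to_actor)) / dist
    return cos_angle >= math.cos(math.radians(cone_deg))

def update(actors, head_pos, gaze_dir):
    """Run once per frame: the gazed-at actor speaks, the rest stay silent."""
    for actor in actors:
        actor.speaking = gaze_hits(actor, head_pos, gaze_dir)

# Three urns in a row, as in Beckett's staging: two women flanking a man.
actors = [Actor("W1", (-1.0, 0.7, 2.0)),
          Actor("M",  ( 0.0, 0.7, 2.0)),
          Actor("W2", ( 1.0, 0.7, 2.0))]
update(actors, head_pos=(0.0, 1.6, 0.0), gaze_dir=(0.0, -0.3, 1.0))
print([a.name for a in actors if a.speaking])   # ['M']
```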

The virtual environment was constructed in a game engine, a software framework designed for the creation and development of video games. The actors, directed by Nick Johnson, were filmed against a green screen using a free-viewpoint video rig of seven DSLR cameras, and then transposed into the environment. Because a 3D model of the virtual world must be constructed for every single frame, all the clips the researchers filmed had to be synchronised, so that each frame of the actors’ movement aligns across every point of view. The images must also be synchronised with the audio, which was captured separately. In a highly labour-intensive process, the figures from each take of each camera are separated from their green screen background, and the raw footage is exported as a series of images. From these inputs, a virtual 3D model of each figure is constructed and rendered photo-realistically. The resulting effect is of figures undeniably virtual, but convincingly present in space.
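The capture workflow described above might be schematised as follows, in a hedged sketch built on OpenCV. The HSV thresholds, the per-camera frame offsets and the file layout are assumptions made for illustration; the actual V-SENSE pipeline, particularly its reconstruction stage, is considerably more sophisticated.

```python
import cv2
import numpy as np

def chroma_key_mask(frame_bgr, lower=(35, 60, 60), upper=(85, 255, 255)):
    """Separate the performer from the green screen: pixels inside the
    green HSV range become background (0), everything else foreground (255)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, np.array(lower), np.array(upper))
    return cv2.bitwise_not(green)

def export_synchronised_frames(video_paths, offsets, out_dir):
    """Step through all seven camera clips in lockstep, so that frame i from
    every camera shows the same instant, and save the keyed frames to disk.
    `offsets` holds each camera's frame offset found during synchronisation."""
    caps = [cv2.VideoCapture(p) for p in video_paths]
    for cap, off in zip(caps, offsets):
        cap.set(cv2.CAP_PROP_POS_FRAMES, off)   # jump to the common sync point
    i = 0
    while True:
        frames = [cap.read() for cap in caps]
        if not all(ok for ok, _ in frames):
            break                               # the shortest clip has ended
        for cam, (_, frame) in enumerate(frames):
            mask = chroma_key_mask(frame)
            keyed = cv2.bitwise_and(frame, frame, mask=mask)
            cv2.imwrite(f"{out_dir}/cam{cam}_frame{i:05d}.png", keyed)
        i += 1
    # These per-camera image sequences feed the multi-view reconstruction
    # stage, which builds one textured 3D model of each figure per frame.
```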

Sound presented several challenges in the creation of Virtual Play. Enda Bates played a significant role in creating a credible environment for the participant through the use of six-degrees-of-freedom (6DoF) spatial audio. In VR, as in reality, we use sound to situate ourselves in space, to tell us how near or far other objects are. It was therefore central to the experience that the actors’ speech sounded like it moved through real space: the timbre and the ratio of direct to indirect audio signals had to change naturally as the participant moved closer to or further from each actor, while maintaining a high degree of clarity. And because digitally removing microphones and cabling in post-production is a time-consuming task, tiny microphones were instead placed discreetly on the underside of each urn’s rim.
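The distance cue Bates describes can be illustrated with a deliberately simplified model: the direct sound obeys an inverse-distance law while the diffuse room reverberation stays roughly constant, so the direct-to-reverberant ratio falls as the listener steps back. The reference distance and reverb level below are arbitrary, and real 6DoF audio also reshapes timbre (head-related transfer functions, air absorption), which this sketch ignores.

```python
import math

def spatial_gains(listener_pos, source_pos, ref_dist=1.0, reverb_gain=0.2):
    """Toy 6DoF distance model: the direct level falls off with 1/distance,
    while the diffuse reverberation is treated as constant, so their ratio
    (a primary distance cue) changes as the listener walks around."""
    d = math.dist(listener_pos, source_pos)
    direct = ref_dist / max(d, ref_dist)   # 1.0 at the urn, fading with distance
    return direct, reverb_gain

for z in (1.0, 2.0, 4.0):                  # stepping back from an urn
    direct, reverb = spatial_gains((0.0, 1.6, z), (0.0, 0.7, 0.0))
    print(f"{z:.0f} m: direct {direct:.2f}, direct-to-reverb ratio {direct/reverb:.1f}")
```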

For this project, Bates drew on his experience creating 360 videos, a format whose community has been expanding rapidly. These videos abound on YouTube, including ones created by Bates for the Trinity Creative Initiative; for under €500, anyone can buy a 360 camera and upload their own. On the surface, the concept is similar to virtual reality, in that the viewer stands at the centre of an environment and can look around. However, the Virtual Play creators are quick to highlight the differences. In a 360 video, the user has no spatial control: they can only look around from a position chosen by the filmmaker, whereas participants in Virtual Play control where they are in space.

In addition, Virtual Play participants have a degree of control over the narrative. A central goal of the project was to address ongoing concerns in the creative cultural sector about narrative progression in virtual reality environments. When the participant has control over what occurs in the environment, how should narrative be approached? Should the actors’ speeches play out only in the order outlined by Beckett, or should they be fully randomised, driven entirely by the gaze of the viewer? And when the participant’s gaze falls upon an actor, should that actor resume from where they were interrupted, or pick up from where the previous speaker left off? The interactivity that virtual reality allows means the participant is no longer an audience member, but an editor. Authorship thus comes into question when, to a degree, the user creates the story.
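Those open questions can be made concrete with a toy sequencing controller. Each policy below corresponds to one of the options just raised; the class, the policy names and the script fragments are purely illustrative, and none of them is claimed to be the choice Virtual Play actually made.

```python
import random

class NarrativeController:
    """Chooses the next line when the participant's gaze lands on an actor."""

    def __init__(self, script, policy="own"):
        self.script = script          # Beckett's order: a list of (actor, line)
        self.policy = policy
        self.global_pos = 0           # progress through the text as a whole
        self.own_pos = {}             # each actor's private progress

    def next_line(self, actor):
        mine = [line for a, line in self.script if a == actor]
        if self.policy == "random":   # fully gaze-driven, no fixed order
            return random.choice(mine)
        if self.policy == "own":      # the actor resumes their own speech
            i = self.own_pos.get(actor, 0)
            self.own_pos[actor] = (i + 1) % len(mine)   # "Repeat play."
            return mine[i]
        if self.policy == "global":   # pick up wherever the text last stopped
            n = len(self.script)
            for step in range(n):
                a, line = self.script[(self.global_pos + step) % n]
                if a == actor:
                    self.global_pos = (self.global_pos + step + 1) % n
                    return line

script = [("W1", "Yes strange the darkness best..."),
          ("M",  "We were not long together..."),
          ("W2", "When first this change I actually thanked God...")]
ctrl = NarrativeController(script, policy="global")
print(ctrl.next_line("W2"))   # skips ahead in the text to W2's next line
```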

Virtual and augmented reality offer exciting new possibilities for storytelling, and Virtual Play shows one way the technology can be used to create a compelling experience. The work of Trinity’s V-SENSE group demonstrates that expertly crafted VR can be made here in Trinity without breaking the bank.

Aisling Grace

Aisling Grace was the Editor-in-Chief of the 66th Volume of Trinity News. She was also formerly Online Editor and Deputy News Editor.