By Sonali Rai, Broadcast and Audio Description Manager, RNIB (Royal National Institute of Blind People)
With the physical world in lockdown and people increasingly finding solace in virtual space, the arts and culture sector has been exploring new channels to keep audiences engaged, including a surge of interactive tours of galleries and reproductions of works of art in the 360-degree format.
If you’ve not caught up with accessibility innovations in arts, culture and the wider media sector, you probably think that extended reality (XR) experiences are unsuitable for people who have visual impairments. But the ground-breaking work done within the EU-funded Immersive Accessibility Project (ImAc) showcased a vastly different reality.
As part of the 30-month project, which ended in March this year, RNIB and its partners explored ways of making XR experiences as immersive, enjoyable and informative for people with sight loss as they are for the sighted audience. The team started by assessing the role of traditional tools such as audio description and subtitles and how they could be adapted for these strikingly visual and interactive stories. Feedback from focus groups conducted in the initial stage of the project indicated the need to rethink the techniques for the production and delivery of traditional access features and instead align the style of delivery to reflect the spatial depth of the environment.
Take audio description, which aids the understanding of media by people with varying degrees of sight loss. It was apparent that, to understand the form a description track could take, one must first examine the original content, its presentation style and the expectations of viewers, and only then offer a range of alternatives that viewers could personalise to their preferred mode. One example is the ability to hear the description alongside an ambience track (background sounds for a particular scene), which conveys the essence of a work of art far better than an objective verbal description alone.
Changing the narrative of audio description
In light of feedback gathered during the ImAc Project on the preferences and expectations of people who use audio description, we studied a range of experiences representing the different formats used in the arts and culture sector. Like linear content, interactive content is delivered in different forms. Most of these experiences fall into two broad categories, interactive VR formats and narrator-led 360-degree videos, but each story is structured differently. So, when it comes to making these accessible to blind viewers, the approach has to be customised as well.
Breaking up the experience
Most audio-visual experiences can be categorised by two equally important aspects – the content and the mode of navigation by which someone accesses the content. This is just as relevant for immersive experiences.
360-degree videos, often called the simplest form of immersive content, are relatively easy to tackle from an accessibility perspective for viewers who rely on audio description. So long as the distribution platform allows viewers to navigate using their preferred assistive technology, whether it is a screen reader or magnifier, it is primarily the content that needs attention. YouTube is one such platform.
YouTube supports uploading and playing back 180-degree and 360-degree spherical videos on computers in the Chrome, Firefox, Edge and Opera browsers. You can also watch these videos in the YouTube app, or in the YouTube VR app available on most VR headsets.
The alternative is the ImAc Player, developed as part of the Project, which offers an enhanced user interface with high colour contrast and can also be controlled by a voice assistant such as an Amazon Echo. Users can activate standard audio description and extended tracks when provided.
As for making the content accessible, the challenge in these videos is to sustain the level of immersion despite the introduction of an external character (the audio describer) into the scene. One of the case studies in the ImAc Project looked at possible solutions for audio describing a travel documentary series shot in the 360-degree format. The series, led by an on-screen anchor, takes viewers on a tour of the Holy Land. In an ideal scenario, an enhanced narration, or integrated description as it is often called, would have eliminated the need for a separate audio description track; as that was not the case, a few different presentation styles were trialled and tested with focus groups. These included:
Standard audio description
This style mimicked the traditional description we see on TV, in film and other media: a measured delivery that objectively described those on-screen elements that did not have an audio cue.
Conversational audio description
Here the script and delivery were friendlier and more intimate. The aim was to make it seem as if the describer was addressing the viewer directly from the location, which helped build a relationship between the two. The description tracks were also paired with an ambience track for an enhanced experience, and were tagged to the action on screen in the 360-degree space, which enhanced orientation for the viewer.
Extended audio description
In this style, the videos with conversational description were interspersed with additional tracks that the viewer could trigger. These were used to set the scene and describe elements such as lighting, props, characters and their costumes, or even to share interesting trivia – anything that would help visualise the scene. The rationale was that greater familiarity with the scene leads to greater immersion. As with the conversational style, the tracks were tagged to the action on screen in the 360-degree space.
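The spatial tagging described above can be sketched as pure logic. This is only an illustration, not the ImAc Player's actual implementation: the function name `relative_pan`, the clockwise-azimuth convention, and the simple sine mapping from angle to stereo pan are all assumptions made for the sketch.

```python
import math

def relative_pan(tag_azimuth_deg: float, head_yaw_deg: float) -> float:
    """Return a stereo pan in [-1.0, 1.0] for a description track tagged
    at `tag_azimuth_deg` in the 360-degree scene, given the viewer's
    current head yaw (both in degrees, azimuth increasing clockwise).
    0.0 is centre, -1.0 hard left, 1.0 hard right.
    """
    # Angle of the tagged action relative to where the viewer is
    # currently facing, wrapped into the range (-180, 180].
    offset = (tag_azimuth_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    # Map the offset to a pan value; the sine keeps sounds behind the
    # viewer panned to the correct side as the head turns.
    return math.sin(math.radians(offset))
```

Recomputing the pan as the viewer's head yaw changes is what keeps the describer's voice anchored to the on-screen action rather than to the headphones.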
Note: these samples will soon be available on the RNIB website.
Interactive VR Experience
Compared with the 360-degree immersive video experiences, interactive VR can present more of a challenge for assistive technology users, particularly in relation to navigation.
Take, for example, the National Gallery’s VR experience of its Sainsbury Wing, powered by Oculus and Matterport, which lets viewers enjoy its collection of Early Renaissance paintings. Interacting with the experience visually is straightforward: one chooses the desired perspective, whether the Dollhouse or the floorplan, to select an artwork, which one can then zoom into for a detailed view.
As important as navigation and striking graphics are here, the basic purpose comes down to sharing the artwork with viewers. So the question is: could an alternative be offered to viewers with sight loss in the form of an accessible list of the works of art on a webpage, each linked to its audio description? It is also worth asking whether the description should follow the standard treatment that most museums already offer, or whether the experience could be made immersive in a way that matches the gamified experience extended to sighted viewers.
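A minimal sketch of what such an accessible alternative might look like. The helper below is hypothetical (not an RNIB, ImAc or National Gallery API); it simply renders an artwork list as HTML with descriptive link text, so a screen-reader user browsing by links hears which work each audio description belongs to.

```python
from html import escape

def artwork_list_html(artworks):
    """Render artworks as an accessible HTML list.

    `artworks` is a list of (title, artist, audio_description_url)
    tuples. Each item links to its audio description, with link text
    that identifies the work (rather than a bare 'listen' or
    'click here'), which is what screen-reader users hear when
    navigating by links.
    """
    items = []
    for title, artist, url in artworks:
        items.append(
            f'<li><a href="{escape(url)}">'
            f'Audio description: {escape(title)} by {escape(artist)}'
            f'</a></li>'
        )
    return '<ul>\n' + '\n'.join(items) + '\n</ul>'
```

Such a page works with any assistive technology the viewer already uses, without requiring the VR experience itself to be navigable.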
Listen to a sample of an immersive description package for Claude Monet’s Champs de Coquelicots.
Building new experiences
In time, other solutions will emerge, some as immersive as the binaural audio experience of the Visitors to Versailles exhibition produced by the Metropolitan Museum of Art in New York, but with enough description built into the narrative to eliminate the need for a separate audio description track.
If you are designing an immersive experience and would like to speak to RNIB and VocalEyes about how to make it accessible to people with sight loss, drop us an email at [email protected]. We look forward to hearing from you.