Narrative techniques

Since 6DoF AAR is a very young medium and art form, its narrative language is yet to be developed. One of the main goals of the Full-AAR project is to identify, use, and test several narrative techniques in practice.

Identifying and conceptualising the characteristic narrative techniques of AAR is an ongoing process with many possible angles of approach and methods. The narrative framework described on these pages reuses many concepts from other storytelling media, such as cinema, video games, and audio plays. What sets the techniques of AAR apart from those of other media is their adaptation to the unique characteristics of 6DoF AAR: the use of spatialised virtual audio, the interplay between real and virtual, and interactivity based on the user's location and movements.
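
To make the last of these characteristics more concrete, here is a minimal, self-contained sketch of how a virtual audio cue could be activated and positioned relative to the listener from tracked position and head orientation. It is not Full-AAR project code; the coordinate conventions, trigger radius, and function names are assumptions made for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class AudioAnchor:
    """A virtual sound source anchored to a real-world position (metres)."""
    x: float
    y: float
    trigger_radius: float  # start playback when the listener is this close

def relative_cue(listener_x: float, listener_y: float,
                 listener_heading_rad: float, anchor: AudioAnchor):
    """Return (active, distance, bearing) for one anchor.

    `bearing` is the angle of the source relative to the listener's facing
    direction, which a spatial audio renderer could map to binaural panning;
    `distance` could drive attenuation. Playback becomes active once the
    listener enters the trigger radius.
    """
    dx, dy = anchor.x - listener_x, anchor.y - listener_y
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - listener_heading_rad
    # Wrap to [-pi, pi] so "slightly left" and "slightly right" stay intuitive.
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
    return distance <= anchor.trigger_radius, distance, bearing

# Example: a narration cue anchored a few metres from the starting point.
anchor = AudioAnchor(x=0.0, y=3.0, trigger_radius=4.0)
active, dist, bearing = relative_cue(0.0, 0.0, 0.0, anchor)
```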

In academia, a number of frameworks have already been introduced to conceptualise storytelling in AR as well as sound design in virtual reality settings. For AR storytelling, Ronald Azuma suggests three approaches in which both real and virtual play an essential role: reinforcing, reskinning, and remembering.1) The reinforcing strategy, for example, involves choosing a naturally captivating real-world environment and using AR augmentations to enhance its inherent appeal, creating a unique and more compelling experience than either virtual content or reality alone. This would describe a typical approach when creating AAR stories for house museums and historical sites.

For sound design more specifically, one recently proposed framework is MAARS (Multilayered Affect-Audio Research System) by Olsen et al., which categorises sounds based on their context, purpose, and emotional impact.2)

One can also adopt widely used concepts from the cinematic sound design tradition, one of them being the classification of sounds into diegetic and non-diegetic. An extension of that is the IEZA (Interface, Effect, Zone, Affect) framework from the video game domain, proposed by Huiberts and Van Tol, which builds on the diegetic/non-diegetic divide and introduces a second dimension based on whether sounds are triggered by the user or by the environment.3)
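
As a concrete illustration of the IEZA categories, the sketch below tags a few hypothetical AAR sound cues with the framework's four quadrants. The cue names and the Python representation are invented for the example and are not part of the cited frameworks.

```python
from enum import Enum

class IEZACategory(Enum):
    """The four IEZA categories, spanning two axes:
    diegetic vs. non-diegetic, and setting- vs. activity-driven."""
    ZONE = "diegetic / setting"            # ambient sound of the (real or virtual) space
    EFFECT = "diegetic / activity"         # sound triggered by the user's movement or action
    AFFECT = "non-diegetic / setting"      # mood-setting music or underscore
    INTERFACE = "non-diegetic / activity"  # feedback tone for an interaction

# Hypothetical tagging of cues in an AAR scene:
scene_sounds = {
    "courtyard_ambience": IEZACategory.ZONE,
    "door_creak_on_approach": IEZACategory.EFFECT,
    "underscore_theme": IEZACategory.AFFECT,
    "waypoint_reached_chime": IEZACategory.INTERFACE,
}
```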

These frameworks (among others) are useful tools for getting a grasp on the narrative (sound) design of AAR as well. As a complement to them, the narrative techniques presented on these pages aim to provide a more detailed and concrete way to approach the topic.

1)
Azuma, Ronald. ‘Location-Based Mixed and Augmented Reality Storytelling’. Fundamentals of Wearable Computers and Augmented Reality, 2nd ed., edited by Woodrow Barfield, CRC Press, 2015, pp. 259–276.
2)
Olsen, Alvaro F., et al. Multilayered Affect-Audio Research System for Virtual Reality Learning Environments. Audio Engineering Society, 2022, https://www.aes.org/e-lib/browse.cfm?elib=2184
3)
Ibid., p. 2