In 6DoF AAR, an essential narrative consideration is the relationship of virtual sounds to their physical counterparts. In other words, the narrative consequences may differ depending on whether a sound is attached to a real-world object, whether it matches that object, and so on. Further, the acoustic space around the listener may or may not match the real one, and that appears to be a powerful storytelling tool in this medium.
In the Full-AAR project, we are also utilising the interactive capabilities that the tracking technology offers. Combined with virtual spatial audio, these possibilities bring a whole new subset of narrative techniques to be explored.
Most of the narrative techniques presented here and tested in the project are based on the original identification by Matias Harju1).
This page is still a work in progress.
Congruence and divergence are, of course, highly contextual: a dog doesn't normally talk (dog + talk = mismatch), but in a story it can (dog + talk = match). Drawing the line may also be difficult; for example, the crackling sound of fire in a fireplace holding a pile of unlit wood would match and mismatch at the same time. Similarly, illustrative sound design and imagined sounds may be congruent with the narrative and its dream imagery while, at the same time, diverging from the real environment around the user.
In the context of museum items, Cliffe12) discusses the difference between augmenting silent objects (e.g. a photograph) and augmenting silenced objects (e.g. a radio receiver that no longer produces sound). This is a helpful way to approach the issue.
When transforming the space into something else, the acoustic properties of the imagined new space won't necessarily match the real surroundings; for example, the reverb decay may be much shorter. However, to embed or 'glue' the sounds of the new environment into the user's real environment, it may be necessary to apply some amount of the real-space acoustics to the new sounds.
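One way to picture this 'gluing' is as a blend of two impulse responses: the imagined space's reverb and the measured real-room reverb. The sketch below is a minimal, hypothetical illustration (the function names, the `real_mix` parameter, and the toy impulse responses are all assumptions, not part of any actual Full-AAR implementation), showing how a small amount of real-space acoustics could be mixed into the virtual environment's sound.

```python
def convolve(signal, ir):
    """Direct-form convolution: each input sample triggers a scaled,
    delayed copy of the impulse response (IR)."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def glue_reverb(dry, ir_virtual, ir_real, real_mix=0.3):
    """Blend the imagined space's IR with the real room's IR, then
    convolve the dry sound with the blend. 'real_mix' (hypothetical
    parameter) controls how much real-space acoustics is retained."""
    # Zero-pad the shorter IR so the two can be blended sample by sample.
    n = max(len(ir_virtual), len(ir_real))
    irv = list(ir_virtual) + [0.0] * (n - len(ir_virtual))
    irr = list(ir_real) + [0.0] * (n - len(ir_real))
    blended = [(1.0 - real_mix) * v + real_mix * r
               for v, r in zip(irv, irr)]
    return convolve(dry, blended)

# Toy example: a unit impulse through the blend returns the blended IR,
# here a short "imagined space" decay plus 30% of a longer real-room tail.
wet = glue_reverb(dry=[1.0],
                  ir_virtual=[1.0, 0.2],        # short decay: imagined space
                  ir_real=[1.0, 0.5, 0.25],     # longer decay: real room
                  real_mix=0.3)
```

In practice a real system would use measured room impulse responses and fast (FFT-based) convolution, but the design choice is the same: the `real_mix` amount trades the illusion of the new space against how well its sounds sit in the listener's actual room.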