Spatial and contextual relationship between a sound and its source
In 6DoF AAR, an essential narrative consideration is the relationship of virtual sounds to their physical counterparts: whether a sound is attached to a real-world object, whether it contextually matches that object, and so on, can all carry narrative consequences. Further, the acoustic space around the listener may or may not match the real one, and this appears to be a powerful storytelling tool in the medium.
In the Full-AAR project, we are also utilising the interactive capabilities that the tracking technology offers. Those possibilities, combined with virtual spatial audio, bring a whole new subset of narrative techniques to be explored.
Most of the narrative techniques presented here and tested in the project have been identified by Matias Harju in his master's thesis1).
This page is still a work in progress.
Spatial relationship between sound and source
Attachment
- virtual sound attached to a real-world object (visible/tangible/scentable)
- character's voice emanating from a picture frame
- sound heard behind a door
- street sounds through a window… etc.
- however, in our experience, if the user tracking in a headphone-based system is unstable and the sound source keeps moving, the visual cue may not help to 'magnetize' the sound to the object
- in case of contextual mismatch (see below), perhaps try to sell the mismatch through story
- if sound is within-reach (see below), with binaural 6DoF, requires accurate and low-latency head tracking
- Härmä et al. (2003) labelled this approach localized acoustic events connected to real-world objects4)
- Cliffe (2022) calls the physical object to which a virtual sound is attached an audio augmented object5)
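Under the hood, attaching a virtual sound to a real object amounts to re-expressing the object's tracked world position in the listener's head frame on every update, so a binaural renderer can pan it onto the object. A minimal sketch, assuming tracked world-space poses are available; all names are illustrative and not tied to any particular SDK:

```python
import numpy as np

def source_in_head_frame(object_pos, head_pos, head_rot):
    """Position of an attached sound source in the listener's head frame.

    object_pos, head_pos -- world-space positions in metres (3-vectors)
    head_rot             -- 3x3 head orientation matrix (head -> world)

    The result is what an object-based binaural renderer needs to keep
    the sound on the real object; with low-latency tracking this runs
    once per audio or video frame.
    """
    offset_world = np.asarray(object_pos, float) - np.asarray(head_pos, float)
    # The inverse (transpose) of the head rotation maps world -> head frame.
    return np.asarray(head_rot, float).T @ offset_world
```

This also makes the stability problem above concrete: any jitter or latency in `head_pos`/`head_rot` moves the rendered source off the visible object.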
Detachment/Acousmêtre
- sound is “acousmatic” and has no perceivable counterpart in the real environment, but is still positioned relative to the environment, not the user's head
- ghost
- imagined sounds, memories
- acousmatic transformation of place… etc.
- could be considered a move from pure augmented reality towards virtual reality
- potentially challenging to make believable; also difficult to spatialise so that the sound's position is clearly perceivable
- on the other hand, acousmatic ('ghostlike') sounds may be more forgiving in their positional accuracy than sounds that are supposed to be attached to an object
- ways to potentially improve plausibility:
- foleys
- believable acting
- interaction with the player (asking questions, asking player to do something, etc.)
- directional sound emitter ('mouth') moving around by either prefixed motion capture animations or AI-based pathfinding navigation
- high-quality virtual acoustic rendering
- introducing the character first 'out-of-sight' (behind door, talking from another room), then revealing them as being invisible
Locative audio (location affiliation)
- soundscapes audible inside certain areas or zones
- basic principle of locative audio experiences such as audio walks and some museum audio tours
- possible to realise with proximity sensors without 6DoF tracking
- can utilise 6DoF, or be head-locked
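The zone logic behind locative audio can be as simple as a point-in-region test on the user's floor position. A minimal sketch (the zone shapes, names, and API are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    center: tuple  # (x, z) floor position in metres
    radius: float  # metres

def active_zones(user_xz, zones):
    """Zones whose circular footprint currently contains the user.

    A playback engine would start a zone's soundscape on entry and fade
    it out on exit. With 6DoF tracking user_xz comes from the tracker;
    with proximity sensors the test degenerates to 'near sensor N'.
    """
    ux, uz = user_xz
    return [z for z in zones
            if (ux - z.center[0]) ** 2 + (uz - z.center[1]) ** 2
               <= z.radius ** 2]
```

Overlapping zones fall out of this naturally: the function returns every containing zone, so soundscapes can crossfade where areas meet.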
Spatial offset
- e.g., sound of an airplane appears to be lagging behind
- e.g., 'zooming' into a distant sound source as if using audio binoculars
- risk of appearing as an error, if not well motivated narratively
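One way to realise the 'lagging airplane' kind of spatial offset is to have the virtual source replay the tracked object's past positions. A minimal sketch, assuming per-frame position updates (class and parameter names are illustrative):

```python
from collections import deque

class LaggedSource:
    """Virtual source that trails a tracked object by a fixed number
    of position updates, as in the lagging-airplane example."""

    def __init__(self, lag_updates):
        # Keep just enough history to reach back lag_updates steps.
        self._history = deque(maxlen=lag_updates + 1)

    def update(self, object_pos):
        """Feed the object's current position; return the lagged one."""
        self._history.append(object_pos)
        return self._history[0]  # oldest retained position
```

Until the history fills up, the source simply sits at the object's first known position, which doubles as a soft 'catch-up' at the start.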
Within-reach
- sound inside the 'play area'
- user can walk around the sound and go very close
- sets high requirements for virtual audio
- with binaural 6DoF, requires accurate and low-latency head tracking
Out-of-reach
- sound outside of the 'play area'
- sounds leaking from adjacent rooms, behind windows…
- ambience sounds
- possible to realise by using ambisonics instead of object-based audio
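For head-tracked playback of such ambisonic material, the decoder only needs to counter-rotate the sound field by the listener's yaw. A minimal first-order (B-format) sketch; the W/X/Y/Z channel convention here is an assumption, not tied to any particular renderer:

```python
import math

def rotate_foa_yaw(w, x, y, z, yaw_rad):
    """Rotate one first-order B-format frame around the vertical axis.

    A source at azimuth phi (X = cos phi, Y = sin phi) ends up at
    phi + yaw_rad; W (omni) and Z (height) are unaffected by yaw.
    For head tracking, pass the negative of the listener's yaw.
    """
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return w, x * c - y * s, y * c + x * s, z
```

Because the whole field rotates with four channels regardless of scene complexity, this is much cheaper than re-panning many individual objects, which is part of the appeal for out-of-reach ambiences.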
Near field
- sounds very close (< 1 m) to the listener's head6)
- either head-locked or utilising head-tracking
- fly buzzing around head, ASMR-style sounds… etc.
3DoF
- sounds relative or 'attached' to the user's location, i.e. they move with the user but still respond to head rotation
- typically used for ambisonic ambiences, but can be applied to object-based audio sources, too
- sometimes useful for narrative or sound design reasons in e.g., memory scenes where the real environment can be contextually faded slightly to the background
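In practice a 3DoF source can be driven through an ordinary 6DoF renderer by updating its world position to ride along with the user every frame; the apparent direction and distance then depend only on head rotation. A minimal sketch with illustrative names:

```python
import numpy as np

def three_dof_source_world_pos(user_pos, local_offset):
    """World position of a source 'attached' to the user's location.

    Because the source keeps a fixed offset from wherever the user
    stands, walking never changes its head-relative placement; head
    rotation is still handled downstream by the spatialiser.
    """
    return np.asarray(user_pos, float) + np.asarray(local_offset, float)
```

Feeding this position to the renderer each frame gives e.g. a memory-scene ambience that follows the user around the room while remaining externalised under head tracking.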
Spatial (a)synchronisation
- the coordinates of the sounds are relative to something other than the real environment
- 3DoF technique would be one example (relative to the user)
- in wilder scenarios sounds could be relative to another user
- Härmä et al. called this freely-floating acoustic events7)
- other than 3DoF, perhaps not the most usable technique…
Contextual relationship between sound and source
Match
- sound contextually matches its real-world counterpart
- radio programme played from a radio receiver
- environmental sounds match the real-life surroundings… etc.
- Schraffenberger and Heide (2014) talk about virtual content relating or becoming a part of a physical element or object8)
- the sound would complement or reinforce the object
Alternative match
- sound aligns with its real-world source but presents a different interpretation of the actual object or scenario
- two users are viewing a documentary film but perceive distinct voiceovers, each providing a unique interpretation of the visual events
- a bad smell begins to permeate the room; one user hears a car engine idling outside, suggesting the smell comes from the tailpipe, while the other perceives a hiss as if a gas pipe were broken
Mismatch
- sound does not match its real-world counterpart
- dog sounds emanating from a person
- acousmatic transformation, i.e. environmental sounds displaced from the real-life surroundings (you stand in a gallery space but hear a forest around you)
Match and mismatch are, of course, highly contextual; e.g. a dog doesn't normally talk (dog + talk = mismatch), but in a story it can (dog + talk = match). Drawing the line may also be difficult: the crackling sound of a fire in a fireplace holding a pile of unlit wood would be a match and a mismatch at the same time. Another example is illustrative sound design and imagined sounds, which would match the narrative and dream imagery but, at the same time, mismatch the real environment around the user.
In the context of museum items, Cliffe9) discusses the difference between augmenting silent objects (e.g. a photograph) and augmenting silenced objects (e.g. a radio receiver that no longer produces sound). This is a very useful way to approach the issue.
When transforming the space into something else, the acoustic properties of the imagined new space won't necessarily match the real surroundings; e.g. the reverb decay may be much shorter. However, to embed or 'glue' the sounds of the new environment to the user's real environment, it may be necessary to blend some amount of the real-space acoustics into the new sounds.
First-person sounds
- user assumes a character's role: character's speech, foleys and other sounds attached to the user
- may be challenging to get working unless introduced carefully at the beginning of the experience
- 'Augmented humans'10)
Additive enhancement
- additional sound or effect attached to a real-world sound
- a healthy car sound is augmented with a squeaky belt sound
- an old radio receiver humming in the real world with a virtual radio programme augmented on top of that
Masking
- real-world sound masked by a virtual sound
- requires an acoustically isolated auditory display system (e.g. closed-back headphones)
Manipulation
- real-world sound manipulated by replacement
- benefits from an acoustically isolated auditory display system