HRTF stands for Head-Related Transfer Function. In other words, an HRTF corresponds to the phase and frequency response that our head imposes on a sound coming toward us. One tangible sign of this mechanism is the reflex turn of our head toward a sound once it reaches our eardrums.
Paris – December 08, 2017 – Dimitri SINGER – Co-founder & CEO – 3D Sound Labs
An HRTF captures the phase and frequency response our anatomy imposes on an incoming sound. These alterations are dictated by the structure of our head — nose, forehead, mouth, hair, bone density, auricles — and of our body: shoulders, arms, even feet when the sound comes from below or above. Every obstacle the sound hits before reaching our eardrums changes it, altering the frequencies and phases of the incoming wave, and some of these acoustic alterations interfere with the original sound.
Our brain has learned all these features and has developed reflexes to infer the direction a sound is coming from. This mechanism is easy to demonstrate: even with their eyes closed, people can still locate the source of an incoming sound in a quiet environment.
To sum up, our ears act as directional acoustic sensors. HRTFs — Head-Related Transfer Functions — describe the impact that the listener’s anatomy has on sound arriving from any given location.
HRTFs are individual
To complete the idea: our listening abilities depend on our acoustic anatomy, our ears. In the real world we move through a soundscape; incoming sound interacts with our body, and our ears let us determine where it is coming from. The particularity is that humans have differently sized heads and torsos, and ear shapes are highly individual as well. If an ear is shaped differently, the properties of the waves it scatters will be different too.
Several spatially distributed open cavities and protuberances of the outer ear influence our spatial sound localization. The recurring problem is with headphones: when we put them on, this localization mechanism is effectively defeated, because the headphones’ geometry bypasses part of the anatomy that provides the cues telling us where a sound is coming from.
The key to creating an accurate and realistic sound experience is knowing the listener’s individual auditory anatomy and how it influences the way they hear sound.
Your HRTF is your acoustic fingerprint.
Current work on HRTFs
The goal of current work on HRTFs is simple: provide the most realistic sound experience through a personalized sound calibration embedded in a headset. The starting idea is to measure the attributes of a single sound as it is received at the two ears. To do that, the sound at the two ears must be compared.
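As a toy sketch of such a two-ear comparison (not 3D Sound Labs’ actual method), one classic technique is to estimate the interaural delay from the peak of the cross-correlation between the two ear signals. The sample rate and test burst below are invented for illustration.

```python
import numpy as np

def estimate_itd_samples(left, right):
    """Estimate by how many samples the right-ear signal lags the
    left-ear signal, using the peak of their full cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    return (len(right) - 1) - int(np.argmax(corr))

rate = 48000                      # Hz, assumed sample rate
t = np.arange(256) / rate
burst = np.sin(2 * np.pi * 500.0 * t)

delay = 5                         # right ear hears the burst 5 samples later
left = np.pad(burst, (0, delay))
right = np.pad(burst, (delay, 0))

print(estimate_itd_samples(left, right))   # → 5
```

At 48 kHz, 5 samples is roughly 0.1 ms — well within the range of delays a human head produces.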
As suggested at the beginning of this article, to understand this phenomenon, imagine yourself in a quiet forest, the silence broken by an unexpected noise such as a gunshot nearby. Acting by reflex, people immediately turn their heads toward the source of the sound; turning toward it seems almost instinctive. In an instant, your brain has determined the sound’s location.
A person’s ability to localize a sound comes from the brain’s analysis of the sound’s attributes. On the one hand, an important cue for pinpointing the source is the difference between the sound your right ear hears and the sound your left ear hears. On the other hand, the interactions between the sound waves and your head and body also help place the source. Together, these are the aural cues the brain uses to figure out where a sound came from.
In our example, the shot was fired somewhere to your right. Because sound travels as physical waves through the air, it reaches your right ear a fraction of a second before it reaches your left. In addition, the sound is weaker by the time it reaches your left ear, both because the wave naturally dissipates and because your head absorbs and reflects a bit of it.
The difference in level between your left and right ears is called the interaural level difference (ILD). The delay is called the interaural time difference (ITD).
The brain interprets these differences in the wave’s shape, using them to find the sound’s origin.
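Both cues can be put in rough numbers. The sketch below uses Woodworth’s spherical-head approximation for the ITD and a simple decibel ratio for the ILD; the head radius and speed of sound are commonly quoted averages, not measured values, and real heads vary.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air, assumed
HEAD_RADIUS = 0.0875     # m, an often-quoted average

def itd_woodworth(azimuth_rad):
    """ITD in seconds predicted by Woodworth's spherical-head model;
    azimuth 0 is straight ahead, pi/2 is fully to one side."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

def ild_db(rms_near, rms_far):
    """Interaural level difference in dB between the near and far ear."""
    return 20.0 * math.log10(rms_near / rms_far)

# A source directly to one side yields the largest delay:
print(round(itd_woodworth(math.pi / 2) * 1000, 2), "ms")   # → 0.66 ms
# A far ear at half the near ear's RMS amplitude is ~6 dB quieter:
print(round(ild_db(1.0, 0.5), 1), "dB")                    # → 6.0 dB
```

A maximum delay of well under a millisecond shows how fine-grained the brain’s timing analysis has to be.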
Current work on HRTFs aims to model these subtle but complex effects on the shape of the wave — how they encode the direction of a sound — and to replicate that model as an algorithm integrated into a headset. To sum up, three parameters are important when calculating an HRTF:
- Interaural Time Difference: the difference in a sound’s arrival time at each ear
- Interaural Level Difference: the difference in a sound’s level at each ear
- Spectral cues from the sound’s interactions with one’s anatomy
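In practice, a rendering engine applies all three cues at once by storing one head-related impulse response (HRIR) per ear and direction, and convolving the mono source with each. A minimal sketch, with toy placeholder HRIRs rather than measured data:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a pair of head-related impulse
    responses (HRIRs) to produce a two-channel binaural signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy HRIRs: the right ear's impulse arrives 3 samples later and
# attenuated, mimicking the ITD and ILD for a source on the left.
hrir_left = np.array([1.0, 0.0, 0.0, 0.0])
hrir_right = np.array([0.0, 0.0, 0.0, 0.5])
mono = np.array([1.0, -1.0, 0.5])

out = render_binaural(mono, hrir_left, hrir_right)
```

Real HRIRs are hundreds of samples long and encode the spectral cues as well, which is why applying them at full audio rate is costly.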
The difficulties of HRTFs
HRTFs can almost magically turn stereo sound into 3D sound, but applying them is computationally expensive. In addition, every individual has their own HRTF and hears the world uniquely.
A realistic and immersive experience therefore requires a customization step, and technology that uses your own HRTF rather than conventional generic binaural audio. One of the problems is building a database of ear shapes: the field is young, and some data is still missing.
To experience fully immersive and accurate audio, the personal HRTF calibration must be tested frequently and adapted to the audio engine used by the software that plays the movie, game, or music. Otherwise, the listener cannot localize the audio precisely.