

Benefits and applications of the Sonic Reality Engine®

 

The Simulation of Sonic Reality®

The way we hear in nature is defined by the indivisible combination of the way that soundwaves propagate through space to reach our ears and the way that our auditory system - the ears, the auditory cortex and the brain - responds to those soundwaves. We call these the first principles of hearing.

We first conceived of Sonic Reality® when we went back to these first principles looking for improvements to spatial audio. We used them to think about how we could digitally process sound into a more natural, convincing and higher-quality spatial audio experience than we could get from HRTF.

 

On our journey of discovery, we brought together the physical laws of acoustics and the duplex theory of sound localisation. The duplex theory describes how the brain takes the bio-mechanical percussive impulses on each eardrum that are generated by soundwaves in the ear canal and uses differences in their time of arrival, intensity and phase to determine the location of the sound in space. From here we started to think about how we could incorporate these first principles into a new model.
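To make the duplex theory concrete, here is a minimal sketch of the classic Woodworth spherical-head approximation for interaural time difference (ITD) - a textbook illustration of one of the cues the theory describes, not Kinicho's model:

```cpp
#include <cmath>
#include <cstdio>

// Woodworth's spherical-head approximation of interaural time difference
// (ITD): a plane wave from azimuth theta (radians, 0 = straight ahead)
// reaches the far ear later by (a / c) * (theta + sin(theta)), valid for
// 0 <= theta <= pi/2, where a is the head radius and c the speed of sound.
double woodworthItd(double azimuthRad,
                    double headRadiusM = 0.0875,    // average adult head
                    double speedOfSound = 343.0) {  // m/s in air at ~20 C
    return (headRadiusM / speedOfSound) * (azimuthRad + std::sin(azimuthRad));
}

int main() {
    const double pi = 3.14159265358979;
    // A source at 45 degrees azimuth: the ITD comes out around 0.38 ms,
    // one of the binaural cues the duplex theory says the brain uses.
    std::printf("ITD at 45 deg: %.2f ms\n",
                woodworthItd(45.0 * pi / 180.0) * 1000.0);
}
```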

The new model we developed conceived of Sonic Reality as a mathematical simulation of acoustic physics - a physics engine for audio - and an approach that could derive a set of coefficients from the model to parameterise an audio filtering process. Along the way, reflecting on what we were learning, on what we knew of the HRTF process, and on the insight we had gained from conversations with potential customers like you, we realised the key difference in our approach: we were attempting to simulate our sonic reality, whereas HRTF is merely concerned with emulating the original recording sphere of the HRIR recordings.
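As an illustration of what deriving filter coefficients from a physical model can look like, here is a sketch of the published Brown & Duda spherical-head shadow model, which maps source azimuth directly to the coefficients of a one-pole, one-zero digital filter. It is a textbook example, not the Sonic Reality Engine® itself:

```cpp
#include <cmath>

// Illustrative only: the Brown & Duda (1998) spherical-head shadow model,
// a textbook example of turning acoustic physics into filter coefficients.
// This is NOT Kinicho's proprietary model.
struct HeadShadowCoeffs { double b0, b1, a1; };  // y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]

HeadShadowCoeffs headShadowFilter(double azimuthDeg, double sampleRate,
                                  double headRadiusM = 0.0875) {
    const double pi = 3.14159265358979;
    const double c  = 343.0;            // speed of sound, m/s
    const double w0 = c / headRadiusM;  // model corner frequency, rad/s
    // Angle-dependent gain: ~2.0 at the near ear (boost), ~0.1 in deep
    // shadow around 150 degrees, per the published model.
    const double alphaMin = 0.1, thetaMin = 150.0;
    double alpha = (1.0 + alphaMin / 2.0) +
                   (1.0 - alphaMin / 2.0) * std::cos(azimuthDeg / thetaMin * pi);
    // Bilinear transform of H(s) = (alpha*s + 2*w0) / (s + 2*w0),
    // with s ~ 2*fs*(1 - z^-1)/(1 + z^-1).
    const double k   = 2.0 * sampleRate;
    const double den = 2.0 * w0 + k;
    return { (2.0 * w0 + alpha * k) / den,
             (2.0 * w0 - alpha * k) / den,
             (2.0 * w0 - k) / den };
}
```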

This encapsulation of all our insights enabled us to start thinking differently about the processing of spatial audio: what we were trying to do, why we were trying to do it, and where the pain points were. It liberated us from the engineering constraints that HRTF imposes on binaural spatial audio. With this fresh approach, we were able to tackle processing issues, resolve quality problems, eliminate the confusion and constraints associated with localisation, find the path to ultra-high angular resolution and, above all, easily customise for the end user's anatomy.

That's why we believe the simulation of Sonic Reality is a big step forward for spatial audio processing. It takes the listening experience much closer to natural hearing. And, with your help, it will make Sympan®, the Sonic Reality Engine®, the defining standard for the next generation of volumetric spatial audio.

 

Sympan® for Embedded Spatial Audio

For True Wireless Sound and Gaming Headsets

The hearable marketplace is redefining personal audio. At Kinicho we have long believed that the rise of feature-rich hearables will not only change the way we experience audio but also usher in a new computing paradigm where sound is part of a rich tapestry of immersive interactivity. Critical to this vision will be the inclusion of real-time spatial audio as part of the core at-the-ear processing, just as ANC and Transparency are today. Further into the future, the hearable device will become a computing platform in itself.

In the TWS space, our clients and partners told us the main challenge was power - as one put it, "we're going to need 16 hours plus of power to facilitate a natural augmentation of audio into the daily experience and we'll need services that can use the power budget economically."


We took that challenge seriously, developing a solution that delivers outstanding-quality spatial audio in real time with the minimum of computational processing. Driving down the computational requirements directly helps to spend the power budget economically.

Another key feature of the Sympan® embedded approach is its low memory requirement. Our efficient process not only eliminates the need for regular memory reads, saving compute cycles, but also avoids the heavy memory bandwidth that comes with an HRIR database.


Processing at-the-ear brings immediate gains for real-time head tracking too. Combining the DSP with orientation data from an IMU means the audio responds to the user's head motion in real time - and the absence of delay makes for a realistic spatial audio experience. Processing at-the-ear eliminates the round-trip times normally associated with spatial audio.
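As a minimal sketch of what the head-tracking step involves (all names here are illustrative, not Sympan®'s actual API): each audio block, the world-space source direction is counter-rotated by the latest IMU orientation before the spatial filters are updated:

```cpp
#include <array>

// Illustrative sketch only, not Sympan's API: counter-rotate the
// world-space source direction by the head orientation from the IMU,
// so the spatial filters are parameterised head-relatively.
struct Quat { double w, x, y, z; };
using Vec3 = std::array<double, 3>;

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0] };
}

// Rotate v by the inverse of unit quaternion q (world frame -> head frame).
Vec3 worldToHead(const Quat& q, const Vec3& v) {
    Vec3 r = { -q.x, -q.y, -q.z };  // vector part of the conjugate of q
    Vec3 t = cross(r, v);
    for (auto& c : t) c *= 2.0;
    Vec3 u = cross(r, t);
    return { v[0] + q.w * t[0] + u[0],
             v[1] + q.w * t[1] + u[1],
             v[2] + q.w * t[2] + u[2] };
}
// Per block: headRelDir = worldToHead(imuOrientation, sourceDir); then the
// binaural filters are updated from headRelDir. No network hop is involved,
// so motion-to-sound latency is bounded by the audio block size alone.
```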


Our approach offers a range of new experiences to the end listener, even with legacy content. Spatial audio can give listeners a new appreciation of stereo recordings: we can position the left and right channels outside of the head and let the stereophonic magic happen. Even the best headphones still suffer from hard stereo separation and in-head localisation.
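Conceptually, the stereo externalisation described above can be sketched as rendering the two channels through a pair of virtual loudspeakers at ±30 degrees - the standard stereo triangle. The spatialise() function below is a hypothetical stand-in for any binaural renderer, not Sympan®'s API:

```cpp
#include <cstddef>

// Hypothetical binaural renderer entry point -- a stand-in, not Sympan's
// API. Spatialises n mono samples at the given azimuth and distance,
// mixing the result into outL/outR.
void spatialise(const float* in, std::size_t n, float azimuthDeg,
                float distanceM, float* outL, float* outR);

// Legacy stereo rendered as two virtual loudspeakers at +/-30 degrees,
// about 2 m away (the standard stereo triangle), so the image
// externalises instead of collapsing inside the head.
void virtualiseStereo(const float* left, const float* right, std::size_t n,
                      float* outL, float* outR) {
    spatialise(left,  n, -30.0f, 2.0f, outL, outR);
    spatialise(right, n, +30.0f, 2.0f, outL, outR);
}
```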


And new content like holophonic calling and spatialised personal assistants will change the way we interact with devices and with the information and communications ecosystem.

But it's not just TWS headsets that can benefit from our solution. We believe gaming headsets will gain a massive boost from at-the-ear processing too. Gamers really understand latency - they know added latency slows their response, and nothing costs more in gaming than slow reaction times.


Gamers accept that gaming headsets are tethered because wireless adds latency. With the right data bandwidth between the PC or console and the headset, a full 7.1 audio track can be streamed to a gaming headset and rendered live in game-time. This will make the gaming experience truly immersive.
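A rough, back-of-the-envelope check on that bandwidth requirement, using assumed format parameters:

```cpp
#include <cstdio>

int main() {
    // Back-of-the-envelope: raw bit rate of an uncompressed 7.1 stream.
    // The format parameters are assumptions for illustration.
    const int channels   = 8;      // 7.1 = 8 discrete channels
    const int sampleRate = 48000;  // Hz
    const int bitDepth   = 24;     // bits per sample
    double mbps = channels * sampleRate * static_cast<double>(bitDepth) / 1e6;
    std::printf("%.1f Mbit/s\n", mbps);  // ~9.2 Mbit/s: comfortable for a
    // wired link, demanding for standard Bluetooth audio profiles.
}
```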

And wearables are not the only devices where Sympan® can add great value. We believe that any hardware where spatial audio processing could improve the user experience - from smartphones to in-car entertainment units - would gain from our advanced approach.

 

Sympan®, the Software Library

Immersive content is a growing sector touching many vertical markets beyond gaming and entertainment. We have always seen the Sonic Reality Engine® as a solution that will change the way immersive audio is created and consumed.


Using our software library - available in 32-bit and 64-bit implementations for PC, macOS, Linux, iOS and Android - the developer has a ready-made solution for implementing true volumetric spatial audio in both 3DoF and 6DoF applications. The library is accompanied by an easy-to-use SDK that takes object-based spatial metadata and uses it to parameterise a volumetric audio output.
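To give a feel for the object-based approach, here is a sketch of the shape such an API can take. All names are hypothetical illustrations, not the actual Sympan® SDK:

```cpp
#include <cstddef>

// Hypothetical sketch of an object-based spatial audio API; every name
// here is illustrative, not the actual Sympan SDK.
struct ObjectMetadata {
    float x, y, z;  // source position in metres, listener-relative
    float gain;     // linear gain for the object
};

class VolumetricRenderer {
public:
    explicit VolumetricRenderer(int sampleRate);
    int  addSource(const ObjectMetadata& meta);              // register an object
    void updateSource(int id, const ObjectMetadata& meta);   // move it (3DoF/6DoF)
    // Pull one block of binaural output mixed from all active sources.
    void render(float* outLeft, float* outRight, std::size_t frames);
};
// In a game engine the ObjectMetadata comes straight from the scene graph;
// elsewhere the developer defines the spatial relationships directly.
```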

This approach works for game engines, where the object data is readily available, and in other applications, where the developer can define their own spatial relationships through our simple approach.

We're really excited by how this will change the way producers engage in the creation and post-production of immersive content. Our own experience of working with HRTF is echoed by many creatives and content producers: it's painful, and as well as being a drag on productivity it hampers the creative process. A big difference in the way Sympan® works is the separation of sound design from spatial mixing. The Sonic Reality Engine® intelligently handles the spatial mix - our simulation approach is true to the laws of physics and the propagation of sound in a space - leaving the creative producer free to focus on the craft of sound design or audio/music production.

For applications where sonic accuracy is critical, the Sonic Reality Engine® is scalable, capable of using sample rates up to 384 kHz while remaining zero-latency. The resulting volumetric audio offers incredible detail and dynamic range.

Contact Us to discuss your software-based spatial audio application needs
 

Sympan® as a Service

The Spatial-Audio-as-a-Service concept was designed for three very different use cases, each with the need to offload spatialisation to another computing device.

While hearables are now starting to process at-the-ear, and mobile devices have increasingly powerful computation available natively, there are situations where complex immersive soundscapes are simply not possible on-device - for instance, TWS earbuds still have channel bandwidth limitations because of Bluetooth, and smartphones may need to divert their computing power to visual processing or AI. In scenarios where low network latencies exist on high Quality of Service (QoS) networks, such as 5G services, the orientation data from head-motion tracking can be streamed up to the cloud and a volumetric binaural stream returned. The low network latencies of 5G combined with Sympan's zero-latency processing model make it possible to envisage round-trip times of less than 30 ms - short enough to be undetectable. For some content applications - an information service, for example - latency greater than this but below 120 ms would still be viable.
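To put the 30 ms figure in context, here is an illustrative round-trip budget; every number below is an assumption for the sketch, not a measurement:

```cpp
#include <cstdio>

int main() {
    // Illustrative round-trip budget for cloud-rendered spatial audio
    // over 5G. All figures are assumptions for this sketch.
    const double uplinkMs   = 10.0;  // head-orientation packet to the edge
    const double renderMs   =  0.0;  // zero-latency processing model
    const double downlinkMs = 10.0;  // binaural stream back to the device
    const double bufferMs   =  8.0;  // jitter buffer and codec framing
    double total = uplinkMs + renderMs + downlinkMs + bufferMs;
    std::printf("round trip: %.0f ms\n", total);  // ~28 ms: under the 30 ms
    // target above, and well inside the ~120 ms bound for information services.
}
```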

For other applications, where high levels of sonic complexity are required, using compute power in the cloud would facilitate the richest experiences. We envisage this being most applicable where environmental responses such as high orders of reflection would enhance the sonic experience, or where there are a great number of individualised sound sources to process.

Finally, and by no means least, offline content processing for legacy media - such as the back catalogues of music and film soundtracks - can upmix material into static spatial content for binaural listening. For example, music streaming services could enhance the stereophonic experience for their consumers, giving them a genuinely spatialised, externalised experience. Similarly, movie and TV streaming services could pre-render their surround tracks into theatre-like sonic experiences.