Frequently Asked Questions

What You Need to Know

What is the difference between simulation and emulation in spatial audio?

Emulation

In the HRTF-based model of spatial audio, an audio source that needs to be directionalised in 3D space is processed through an emulation of a spherical speaker array.


The desired position for the audio is used to identify head-related impulse responses from a database of audio samples. This database is created with a dummy-head microphone in a specially constructed speaker system that can record entries for different angles of rotation and elevation on the surface of a sphere.

When new audio is played through this system, it is like asking what the dummy-head microphone would have heard if we had played this audio at that position on the surface of the sphere.
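For readers who like to see the idea concretely, here is a simplified sketch of that virtual-speaker emulation: look up the nearest measured impulse-response pair and convolve the source with it. The database layout, names and toy numbers are illustrative assumptions only, not any particular vendor's implementation.

```python
# Minimal sketch of HRTF-style emulation: nearest-neighbour HRIR lookup
# followed by per-ear convolution. Illustrative only.
import numpy as np

def spatialise_hrtf(mono, az_deg, el_deg, hrir_db):
    """Convolve a mono signal with the nearest measured HRIR pair.

    hrir_db maps (azimuth, elevation) -> (left_ir, right_ir), each a 1-D array
    captured with a dummy-head microphone at that position on the sphere.
    """
    # Nearest-neighbour lookup: pick the measured direction closest to the request.
    key = min(hrir_db, key=lambda k: (k[0] - az_deg) ** 2 + (k[1] - el_deg) ** 2)
    left_ir, right_ir = hrir_db[key]
    # "What would the dummy head have heard?" = convolution with each ear's IR.
    return np.convolve(mono, left_ir), np.convolve(mono, right_ir)

# Toy usage: a three-entry "database" and a short click as the source signal.
db = {(0, 0):   (np.array([1.0, 0.5]), np.array([1.0, 0.5])),
      (90, 0):  (np.array([1.0, 0.2]), np.array([0.3, 0.1])),
      (-90, 0): (np.array([0.3, 0.1]), np.array([1.0, 0.2]))}
left, right = spatialise_hrtf(np.array([1.0, 0.0, 0.0]), 80, 0, db)
```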

No one really hears this way - we don't walk around inside a bubble of speakers. When this approach was created in the late 1960s, it was very neat. But in the 21st century, with the computing power now available, there is a better solution.


Simulation

Our solution is to create a physical simulation of the virtual world - or Sonic Reality(r) as we prefer to call it. This is simply a mathematical abstraction using the known laws of physics as they apply to sound.


We use this simulation to generate a set of coefficients to process the audio as if it were propagating through space, including passing through obstructions and reflecting off surfaces on the way.
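As a rough illustration of the idea (and only that - the real engine's coefficient set and scene model are far richer), a simulation of this kind can be sketched as an image-source style calculation that turns geometry into per-ear delay and gain coefficients:

```python
# Hedged sketch: each propagation path (direct, or via one reflecting surface)
# contributes a delay and a gain for one ear. Names and the single-reflection
# model are assumptions for illustration, not the published engine design.
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second at roughly 20 degrees C

def path_coefficients(source, ear, reflectors, sample_rate=48000):
    """Return (delay_in_samples, gain) pairs for one ear: direct path plus
    first-order reflections found by mirroring the source in each surface."""
    source, ear = np.asarray(source, float), np.asarray(ear, float)
    paths = []
    d = np.linalg.norm(source - ear)
    paths.append((d / SPEED_OF_SOUND * sample_rate, 1.0 / max(d, 1e-3)))
    for plane_point, normal, absorption in reflectors:
        n = np.asarray(normal, float)
        n = n / np.linalg.norm(n)
        image = source - 2.0 * np.dot(source - np.asarray(plane_point, float), n) * n
        d_r = np.linalg.norm(image - ear)
        paths.append((d_r / SPEED_OF_SOUND * sample_rate,
                      (1.0 - absorption) / max(d_r, 1e-3)))
    return paths

def render(mono, coeffs):
    """Apply the (delay, gain) coefficient set to a mono signal."""
    mono = np.asarray(mono, float)
    out = np.zeros(len(mono) + int(max(d for d, _ in coeffs)) + 1)
    for delay, gain in coeffs:
        i = int(round(delay))
        out[i:i + len(mono)] += gain * mono
    return out

# Toy usage: one listener ear and one reflecting floor at z = 0.
coeffs = path_coefficients(source=(2.0, 0.0, 1.7), ear=(0.0, 0.1, 1.7),
                           reflectors=[((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.3)])
signal = render(np.array([1.0, 0.0, 0.0, 0.0]), coeffs)
```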


We use next-generation, advanced DSP to do this filtering for each ear. It is very fast and of high quality.

What is zero-latency and why is it important?

Zero-latency means that our processing solution adds no perceptible delay. We effectively process in real time.


This is hugely important when head-tracking is involved. For the listener to be convinced of the realism, the audio should change as the head moves.

Researchers have shown that latency above 30 ms is detectable by listeners of spatial audio, and this detracts from the listener's experience of 'reality'.
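To put that figure in context, a back-of-envelope calculation shows how quickly the audio I/O buffer alone eats into a 30 ms motion-to-sound budget, before any head-tracker or rendering delay is added. The numbers below are illustrative:

```python
# Buffer latency alone: block_size / sample_rate, expressed in milliseconds.
def buffer_latency_ms(block_size, sample_rate):
    return 1000.0 * block_size / sample_rate

for block in (64, 256, 1024):
    print(f"{block:>5} samples @ 48 kHz -> {buffer_latency_ms(block, 48000):.1f} ms")
# 64 samples   ->  1.3 ms
# 256 samples  ->  5.3 ms
# 1024 samples -> 21.3 ms  (already most of a 30 ms motion-to-sound budget)
```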

Real-time head-tracking combined with high angular resolution can eliminate the "front/back confusion" listeners experience when they struggle to tell whether a sound is in front of them or behind them because it is equidistant from each ear. Zero latency allows the brain to take more 'readings' from the micro-movements of the head and so resolve the confusion.

What is HiFi about your solution?

A known problem with HRTF is that the resulting audio suffers from artefacts of the filtering process. The two principal artefacts are high-frequency noise and comb-filtering.
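To show what comb-filtering means in practice: summing a signal with a slightly delayed copy of itself (as can happen when overlapping virtual-speaker feeds are combined) carves periodic notches into the spectrum. This is a generic demonstration, not a model of any particular HRTF implementation:

```python
# Generic comb-filtering demo: a 1 ms delayed copy produces notches at
# 500 Hz, 1500 Hz, 2500 Hz, ... Illustrative numbers only.
import numpy as np

sample_rate = 48000
delay = 48                               # 1 ms between the two copies
x = np.zeros(4800)                       # impulse test signal
x[0] = 1.0
combed = x.copy()
combed[delay] += 1.0                     # y[n] = x[n] + x[n - delay]

magnitude = np.abs(np.fft.rfft(combed))
freqs = np.fft.rfftfreq(len(combed), d=1.0 / sample_rate)
for f, m in zip(freqs, magnitude):
    if m < 1e-3 and f < 4000:
        print(f"notch near {f:.0f} Hz")  # 500, 1500, 2500, 3500 Hz
```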

The filters we use in Sympan(r) do not add any processing artefacts, and our simulation process is free from comb-filtering because it uses a completely different processing approach from the virtual speaker model.

This means that the sonic quality of your source audio is retained, subject only to the simulated effects of propagation. The processing is high fidelity because it leaves your source audio intact.

What is ultra-high angular resolution and how does it compare to the state of the art?

When we talk about angular resolution, we're talking about the combined effects of how focused a source is, how 'wide' it sounds, and how little it has to move to produce a detectable change in the audio.


In our approach, we use a 7th Order Spherical Harmonic model to derive our angular interpolation. This allows us to really focus in at sub-1° levels. 

In comparison, the 'high definition' approach of most implementations of HRTF is 3rd Order Ambisonics - although many implementations are still 1st Order. So our basic model offers substantially higher resolution.

A further difference in our approach means that, even comparing 7th Order Ambisonics to our 7th Order model, we get greater angular resolution because of our simulation technique.
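As a rule-of-thumb comparison (generic Ambisonics arithmetic, not a description of our simulation internals): an order-N spherical harmonic representation carries (N + 1)^2 channels, and at least 2N + 2 loudspeakers are conventionally recommended for a regular horizontal decode:

```python
# Generic spherical-harmonic order arithmetic, for orientation only.
def channels(order):
    return (order + 1) ** 2              # full-sphere (periphonic) channel count

def horizontal_speakers(order):
    return 2 * order + 2                 # commonly recommended minimum for a 2D decode

for order in (1, 3, 7):
    print(f"order {order}: {channels(order):>2} channels, "
          f">= {horizontal_speakers(order)} speakers for a regular horizontal decode")
# order 1:  4 channels, >=  4 speakers
# order 3: 16 channels, >=  8 speakers
# order 7: 64 channels, >= 16 speakers
```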

How is your approach customisable and how does it benefit the listener?

The listener is able to easily customise the head width used in the model*, and this improves accuracy in the localisation of the audio.

In our simulation model, we have two independent ears - just as we do in nature. They're not fixed to the head of a dummy-head microphone as they are in HRTF; they exist discretely in 6 DoF (degrees of freedom), so they can be spaced to match the listener's actual anatomy.


The distance between the ears in adult humans ranges from roughly 130 mm to 170 mm; the KU100 dummy-head microphone has a width of 180 mm. These differences have a big effect on the perceived angles and distort the sound image. This is a major cause of the "Cone of Confusion", which describes a listener's difficulty in determining the location of a sound as it moves around the quadrants of a sphere.

* Manufacturers could also engineer this into their devices to make it even easier for the end-user.
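As a simple illustration of why ear spacing matters (using the classic spherical-head Woodworth approximation rather than our actual model), the interaural time difference a listener expects scales directly with their head width:

```python
# Woodworth spherical-head approximation of interaural time difference (ITD).
# Illustrative only; not the Sonic Reality engine's internal model.
import math

SPEED_OF_SOUND = 343.0  # metres per second

def itd_seconds(head_width_m, azimuth_deg):
    """ITD for a distant source at the given azimuth (0 = straight ahead)."""
    a = head_width_m / 2.0                       # head radius
    theta = math.radians(azimuth_deg)
    return (a / SPEED_OF_SOUND) * (theta + math.sin(theta))

for width_mm in (130, 150, 170, 180):            # listeners vs. a KU100-style head
    itd_us = itd_seconds(width_mm / 1000.0, 90) * 1e6
    print(f"head width {width_mm} mm -> max ITD ~{itd_us:.0f} microseconds")
```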

Is your solution software or hardware?

The Sympan(r) Sonic Reality Engine(r) is a software solution that can be deployed as:

  • firmware for embedded hardware - usually a DSP or other integrated circuit

  • a cross-platform software library and SDK for inclusion in 3rd-party applications

  • a software-as-a-service model for cloud-based spatial audio rendering

Is your solution available commercially?

Yes, Sympan is available for licensing on a volume-based royalty model. Please contact us to discuss the best terms for your requirements.

How can I request a Demo?

Please use our contact page to request a demo. We have a number of different demonstration approaches that will give you experience of our solution appropriate to your technical needs.

During the COVID crisis, we developed an online demonstration suite that allows us to take you through the features of Sympan(r) while remaining COVID-secure.

How can I invest in your startup?

We are currently preparing for an investment round. If you would like a pitch-deck to consider whether investing in Kinicho and the future of audio could be for you, please use our contact page.