I've created a simple 2-channel-in, 2-channel-out layout with FIR filters in Designer. The FIR is frac32. The audio works fine with a unit impulse filter (1, 0, 0, 0, 0, ...). When I use my actual filter, I get garbled distortion on the output. My input has a peak value of -20 dBFS, so there's no way this is clipping.
Here's server info, design file attached:
Name: aweOS
AWECore Version: 8.B.1.5
Processor type: CortexA53
CPU clock rate: 1 GHz
Profile clock rate: 20 MHz
Sample rate: 48000 Hz
Basic block size: 32 samples
Communication buffer size: 264 words
Is floating point: Yes
Is FLASH supported: No
Size of 'int': 4
User version: 1
Heap padding
Core ID: 0 [0:0]
Static core: yes
Threads: 4
Pins:
Input: 16 channels fract
Output: 16 channels fract
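One quick offline sanity check for a case like this (a minimal sketch; the coefficient file name and test signal are placeholders, not from the layout): quantize the taps to Q1.31 the way a frac32 FIR stores them and compare the result against a double-precision reference at -20 dBFS. It only models tap quantization and wrap, not the module's internal accumulator, but it separates "input level" problems from "coefficient representation" problems.

```python
import numpy as np
from scipy.signal import lfilter

coeffs = np.loadtxt("fir_coeffs.txt")      # hypothetical file holding the float taps

fs = 48000
t = np.arange(fs) / fs
x = 10 ** (-20 / 20) * np.sin(2 * np.pi * 1000 * t)   # -20 dBFS, 1 kHz test tone

# Reference: double-precision convolution
y_ref = lfilter(coeffs, 1.0, x)

# Q1.31 quantization of the taps, with two's-complement wrap:
# any tap outside [-1, 1) wraps around instead of being stored correctly
q = np.round(coeffs * 2**31).astype(np.int64)
q_wrapped = ((q + 2**31) % 2**32) - 2**31
coeffs_q = q_wrapped / 2**31

y_q = lfilter(coeffs_q, 1.0, x)

err_db = 20 * np.log10(np.max(np.abs(y_q - y_ref)) + 1e-12)
print(f"tap range: [{coeffs.min():.3f}, {coeffs.max():.3f}]")
print(f"worst-case gain (sum |h|): {np.sum(np.abs(coeffs)):.2f}")
print(f"peak error after Q1.31 tap quantization: {err_db:.1f} dBFS")
```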
5:44pm
Hi John,
Can you confirm if this is your desired frequency response?
Have you tried processing any test files through this layout in Native mode instead of on the Cortex-A53? If so, does the distortion occur there too?
When running on the Cortex-A53, what CPU% is reported in AWE Server, and what does Tools -> Profile Running Layout show?
Thanks,
Michael
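For the frequency-response question, one way to confirm what the taps actually do outside Designer is a quick freqz plot (a sketch, reusing the same hypothetical coefficient file as above):

```python
import numpy as np
from scipy.signal import freqz
import matplotlib.pyplot as plt

coeffs = np.loadtxt("fir_coeffs.txt")        # hypothetical coefficient file

# Magnitude response at the 48 kHz sample rate from the server info
w, h = freqz(coeffs, worN=4096, fs=48000)
plt.semilogx(w, 20 * np.log10(np.abs(h) + 1e-12))
plt.xlabel("Frequency (Hz)")
plt.ylabel("Magnitude (dB)")
plt.grid(True, which="both")
plt.show()
```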
6:03pm
You may also wish to review this web discussion about FIR filter headroom.
Estimating effect of filter on headroom (Signal Processing Stack Exchange)
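In the spirit of that thread, the headroom a filter needs can be bounded two ways: the absolute worst case over any input is the sum of the absolute tap values (the L1 norm), and a looser, more typical figure is the peak of the magnitude response. A sketch, again assuming the same hypothetical coefficient file:

```python
import numpy as np
from scipy.signal import freqz

coeffs = np.loadtxt("fir_coeffs.txt")              # hypothetical coefficient file

l1_gain = np.sum(np.abs(coeffs))                   # worst case over any input
w, h = freqz(coeffs, worN=8192)
peak_gain = np.max(np.abs(h))                      # worst case for a sine input

print(f"worst-case gain (L1 norm): {20 * np.log10(l1_gain):.1f} dB")
print(f"peak frequency-response gain: {20 * np.log10(peak_gain):.1f} dB")
# If the L1 figure exceeds the input headroom (20 dB here), a fractional
# fixed-point FIR can overflow internally even though the input never clips.
```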
12:34pm
Sorry for the delay. Thanks for the good info. I don't think gain or clipping was the problem; the craziness I heard happens even at small input levels. CPU was fine when I checked it.
Your calculated FR is correct.
The problem goes away if I convert to float and then use the float FIR. The distortion was very strange; it wasn't like clipping, it was total robot-noise garbage. I'll try this with Native processing at some point and see if it's platform-specific.
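If you ever need to stay in frac32, a common generic workaround (not specific to any particular Audio Weaver module) is to pre-scale the taps by a power of two so they all fit in [-1, 1) and the worst-case gain stays at or below unity, then restore the level with a separate gain stage after the filter. A sketch under those assumptions:

```python
import numpy as np

coeffs = np.loadtxt("fir_coeffs.txt")              # hypothetical coefficient file

# Choose a power-of-two scale so the worst-case gain (sum of |h|), and
# therefore every individual tap, fits inside the fractional range [-1, 1).
l1_gain = np.sum(np.abs(coeffs))
shift = int(np.ceil(np.log2(l1_gain))) if l1_gain > 1.0 else 0
scaled = coeffs / 2**shift

np.savetxt("fir_coeffs_scaled.txt", scaled)
print(f"scaled taps by 2^-{shift}; "
      f"compensate with +{6.02 * shift:.1f} dB of gain after the filter")
```

The trade-off is that scaling the taps down costs fixed-point precision, so converting the path to float, as you did, is often the simpler fix when the target has an FPU.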