I have an application that uses CMdaAudioInputStream in order to record raw PCM data from the microphone.
The CMdaAudioInputStream class has a member function ReadL() that retrieves data from the microphone's hardware buffers. You pass this function a descriptor, and ReadL() is supposed to issue an asynchronous callback (via MMdaAudioInputStreamCallback::MaiscBufferCopied()) once the descriptor you passed in has been FILLED with audio data.
This works fine on Series 60 v2.0/2.1 based phones. My application reads 700ms worth of PCM data at a time.
However, I have a problem on Series 60 v2.6 based phones (6630/6680). A call to CMdaAudioInputStream::ReadL() always results in only 320 bytes of PCM being read into the passed descriptor by the time MMdaAudioInputStreamCallback::MaiscBufferCopied() is called. This means my application has to handle 50 asynchronous callbacks per second in order to record audio in real-time!
Currently it can't keep up: audio ends up being recorded at around 90% of real-time speed because of other asynchronous callbacks and timers running in the system.
Does anybody know of a workaround for this problem?
My only other options are:
- to move my audio component onto a separate, real-time priority thread, or
- to use CMMFDevSound directly instead of the CMda* classes.
Does anyone have any experience with CMMFDevSound, and can you confirm that audio streaming with it works better than the CMda* classes that are supposed to wrap around it?
Thanks for any responses.