Thank you for your fast comments. I'm a little bit further forward.
I've found the source for _recorder.pyd (src/ext/recorder/). It looks as though doing anything here would mean writing C++ and tackling CMdaAudioPlayerUtility. For now I'd rather build a WAV file out of raw data, write it to disk and then play it. That's a horrible kludge, but it meets my immediate requirements.
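For what it's worth, the kludge can be done entirely with the standard-library wave module. This is a minimal sketch; the 8 kHz / 16-bit / mono parameters are assumptions and must be adjusted to match however the raw clips were actually recorded:

```python
import wave

def raw_to_wav(raw_path, wav_path, rate=8000, channels=1, sampwidth=2):
    """Wrap headerless PCM data in a WAV container and return the filename.

    Assumes little-endian linear PCM; rate, channels and sampwidth are
    guesses -- change them to match the recording format.
    """
    with open(raw_path, 'rb') as f:
        data = f.read()
    w = wave.open(wav_path, 'wb')
    w.setnchannels(channels)
    w.setsampwidth(sampwidth)
    w.setframerate(rate)
    w.writeframes(data)   # writes the data and fixes up header sizes on close
    w.close()
    return wav_path
```

The returned filename can then be handed to whatever player the platform provides.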
I've had a quick look at smidi.py. It looks interesting and it may be useful ... if it can handle recorded speech data ...
What I want to do is concatenate data from separate audio files and play the result. The application is very basic limited-vocabulary speech synthesis. I'm writing my own instead of using audio.say() for language and quality reasons. The application works on vanilla Python (i.e. on a PC).
A concrete example: A robot greeter in a company hallway, which says "Hello <person's name>" when someone enters the building. The greet function will look like this:
hello = open('hello.raw', 'rb').read()
name_data = open('%s.raw' % name, 'rb').read()
data = hello + name_data
w = makeWaveWriteObj(data)  # exercise for the reader
play(w)  # platform dependent, e.g. use PyAudio on PC

It seems that for a PyS60 version of this function, makeWaveWriteObj would have to write an audio file and return the filename. I'm looking for a way to just agglomerate the data in memory and send it straight to the audio device (like I can on a PC).
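One way to fill in makeWaveWriteObj for the PC side is to build the WAV container in memory with the wave module and a BytesIO buffer, so nothing touches disk. A sketch, again assuming 8 kHz 16-bit mono PCM (these parameters are guesses and must match the recordings):

```python
import io
import wave

def makeWaveWriteObj(data, rate=8000, channels=1, sampwidth=2):
    """Wrap concatenated raw PCM clips in an in-memory WAV object.

    Returns a seekable file-like object that wave-aware players can
    read directly. Format parameters are assumptions -- if they don't
    match the recordings, playback will be garbled.
    """
    buf = io.BytesIO()
    w = wave.open(buf, 'wb')
    w.setnchannels(channels)
    w.setsampwidth(sampwidth)
    w.setframerate(rate)
    w.writeframes(data)
    w.close()
    buf.seek(0)  # rewind so the consumer reads from the header
    return buf
```

On PyS60 the same wave-writing code could target a real file and return its name instead, since that platform seems to need a filename to play from.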
All the best