Recording and Playback using Audio Queue Services

Tram Ho

Audio Queue Services provides a straightforward way to record and play audio. It lets our application use recording and playback hardware (such as microphones and speakers) without having to interact with the hardware interfaces directly, and it lets us use sophisticated codecs without knowing how they work.

Despite being a high-level interface, Audio Queue Services supports a few advanced features. For example, it provides detailed timing control to aid playback scheduling and synchronization. It can synchronize the playback of multiple audio queues, play multiple sounds in parallel, control the playback level of each sound independently, and perform looped playback. Audio Queue Services and AVAudioPlayer are both good options for playing compressed audio formats on iOS devices, and nowadays AVAudioEngine is also a very good choice.

People often use Audio Queue Services in conjunction with Audio File Services or with Audio File Stream Services.

Use the Audio Queue Callback Functions for Recording and Playback

Like other audio services, we interact with audio queue objects using callbacks and properties. For recording, we implement a callback function whose job is to receive the buffers of audio data provided by the recording audio queue object and write them to storage. The audio queue object calls our callback function each time a new buffer of recorded data is ready.
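
A minimal sketch of such a recording callback (the MyRecorderState struct, its field names, and the error-free flow are illustrative assumptions, not from the original article) might look like this:

    #include <AudioToolbox/AudioToolbox.h>

    // Hypothetical recorder state; the field names are illustrative.
    typedef struct {
        AudioFileID  mAudioFile;      // destination file, opened with AudioFileCreateWithURL
        SInt64       mCurrentPacket;  // next packet index to write
        bool         mIsRunning;      // set to false when recording should stop
    } MyRecorderState;

    // Recording callback: the audio queue hands us a filled buffer; we write it out
    // with Audio File Services and give the buffer back to the queue to be refilled.
    static void MyInputCallback(void *inUserData,
                                AudioQueueRef inAQ,
                                AudioQueueBufferRef inBuffer,
                                const AudioTimeStamp *inStartTime,
                                UInt32 inNumPackets,
                                const AudioStreamPacketDescription *inPacketDescs) {
        MyRecorderState *state = (MyRecorderState *)inUserData;

        if (inNumPackets > 0) {
            AudioFileWritePackets(state->mAudioFile, false,
                                  inBuffer->mAudioDataByteSize, inPacketDescs,
                                  state->mCurrentPacket, &inNumPackets,
                                  inBuffer->mAudioData);
            state->mCurrentPacket += inNumPackets;
        }

        // Re-enqueue the buffer unless recording has been stopped.
        if (state->mIsRunning) {
            AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
        }
    }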

For playback, the callback function plays the opposite role. It is called whenever the playback audio queue object needs another buffer of audio to play. The callback then reads a set of audio packets from storage and hands them to one of the audio queue object's buffers, and the audio queue object plays the buffers one after another.
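
A corresponding playback callback might look like the following sketch, assuming an illustrative MyPlayerState struct and a file opened with Audio File Services:

    #include <AudioToolbox/AudioToolbox.h>

    // Hypothetical player state; the field names are illustrative.
    typedef struct {
        AudioFileID                   mAudioFile;        // source file, opened with AudioFileOpenURL
        SInt64                        mCurrentPacket;    // next packet index to read
        UInt32                        mNumPacketsToRead; // packets per buffer
        AudioStreamPacketDescription *mPacketDescs;      // NULL for constant-bit-rate formats
        bool                          mIsDone;
    } MyPlayerState;

    // Playback callback: fill the buffer the queue gave us with packets from the file,
    // then enqueue it; stop the queue once the file runs out of data.
    static void MyOutputCallback(void *inUserData,
                                 AudioQueueRef inAQ,
                                 AudioQueueBufferRef inBuffer) {
        MyPlayerState *state = (MyPlayerState *)inUserData;
        if (state->mIsDone) return;

        UInt32 numBytes   = inBuffer->mAudioDataBytesCapacity;
        UInt32 numPackets = state->mNumPacketsToRead;
        AudioFileReadPacketData(state->mAudioFile, false, &numBytes,
                                state->mPacketDescs, state->mCurrentPacket,
                                &numPackets, inBuffer->mAudioData);

        if (numPackets > 0) {
            inBuffer->mAudioDataByteSize = numBytes;
            AudioQueueEnqueueBuffer(inAQ, inBuffer,
                                    state->mPacketDescs ? numPackets : 0,
                                    state->mPacketDescs);
            state->mCurrentPacket += numPackets;
        } else {
            // No more packets: let the already-queued buffers finish, then stop.
            AudioQueueStop(inAQ, false);
            state->mIsDone = true;
        }
    }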

Create an Audio Queue Object

To use Audio Queue Services, we first need to create an audio queue object. Recording and playback are two different tasks, but both kinds of object share the same data type, AudioQueueRef.

  • To create a recording audio queue object, use AudioQueueNewInput.
  • To create a playback audio queue object, use AudioQueueNewOutput.

To create an audio queue object for playback, do the following three steps:

  1. Create a data structure to handle the information needed by the audio queue, such as the audio format for the data we want to play.
  2. Define a callback function to handle the audio queue buffers. This callback uses Audio File Services to read the file that we want to play.
  3. Initialize the playback audio queue using the AudioQueueNewOutput function.
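
A sketch of step 3, assuming the illustrative MyPlayerState and MyOutputCallback names from above and an AudioStreamBasicDescription already read from the file:

    #include <AudioToolbox/AudioToolbox.h>

    // Assumes `state` has been filled in (step 1) and MyOutputCallback is defined (step 2).
    OSStatus CreatePlaybackQueue(MyPlayerState *state,
                                 const AudioStreamBasicDescription *format,
                                 AudioQueueRef *outQueue) {
        // Run the callback on the current run loop in the common modes.
        return AudioQueueNewOutput(format,            // audio format of the data to play
                                   MyOutputCallback,  // step 2: the playback callback
                                   state,             // passed to the callback as inUserData
                                   CFRunLoopGetCurrent(),
                                   kCFRunLoopCommonModes,
                                   0,                 // reserved, must be 0
                                   outQueue);
    }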

Control the Audio Queue Playback level

An audio queue object gives us two ways to control the playback level.

To set the playback level directly, use the AudioQueueSetParameter function with the kAudioQueueParam_Volume parameter. The change takes effect immediately.
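
For example, assuming queue is an existing playback audio queue:

    // Set the queue's gain immediately; kAudioQueueParam_Volume ranges from 0.0 (silent) to 1.0 (full).
    AudioQueueSetParameter(queue, kAudioQueueParam_Volume, 0.5);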

We can also set the playback level for an audio queue buffer using the AudioQueueEnqueueBufferWithParameters function. In effect, this lets us attach audio queue settings to an audio queue buffer as we enqueue it. These changes take effect when the buffer starts playing.
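
A sketch of the per-buffer approach, assuming queue and buffer already exist and the data is a constant-bit-rate format:

    // Attach a volume parameter event to this buffer; it takes effect when the buffer starts playing.
    AudioQueueParameterEvent volumeEvent = {
        .mID    = kAudioQueueParam_Volume,
        .mValue = 0.25
    };
    AudioQueueEnqueueBufferWithParameters(queue, buffer,
                                          0, NULL,          // no packet descriptions (CBR data)
                                          0, 0,             // no trimming at start or end
                                          1, &volumeEvent,  // one parameter event
                                          NULL,             // start as soon as possible
                                          NULL);            // actual start time not needed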

In both cases, the level change remains in effect until we change it again.

Determine the Audio Queue Playback level

We can obtain the current playback level from an audio queue object by querying its kAudioQueueProperty_CurrentLevelMeterDB property. The value of this property is an array of AudioQueueLevelMeterState structs, one element per channel; each struct holds the channel's average power and peak power.
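
A sketch of querying the meters, assuming queue is an existing audio queue with numChannels channels (note that level metering is off by default and must be enabled first):

    #include <AudioToolbox/AudioToolbox.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Per-channel struct returned by the property:
    //   Float32 mAveragePower;  // average power in dB
    //   Float32 mPeakPower;     // peak power in dB
    void PrintLevels(AudioQueueRef queue, UInt32 numChannels) {
        // Level metering is disabled by default; enable it once before querying.
        UInt32 enable = 1;
        AudioQueueSetProperty(queue, kAudioQueueProperty_EnableLevelMetering,
                              &enable, sizeof(enable));

        AudioQueueLevelMeterState *levels =
            calloc(numChannels, sizeof(AudioQueueLevelMeterState));
        UInt32 size = numChannels * sizeof(AudioQueueLevelMeterState);
        AudioQueueGetProperty(queue, kAudioQueueProperty_CurrentLevelMeterDB,
                              levels, &size);

        for (UInt32 i = 0; i < numChannels; i++) {
            printf("channel %u: average %.1f dB, peak %.1f dB\n",
                   (unsigned)i, levels[i].mAveragePower, levels[i].mPeakPower);
        }
        free(levels);
    }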

Play multiple sounds in parallel

To play multiple sounds in parallel, we create a playback audio queue object for each sound. For each of those audio queues, schedule the first audio buffer to start at the same time using the AudioQueueEnqueueBufferWithParameters function.
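
A sketch of this scheduling, assuming both queues and their first buffers already exist and are filled; the helper function name and the 100 ms lead time are illustrative:

    #include <AudioToolbox/AudioToolbox.h>
    #include <mach/mach_time.h>

    // Schedule the first buffer of two playback queues for the same host time
    // so that both sounds start together.
    static void EnqueueFirstBuffersTogether(AudioQueueRef queueA, AudioQueueBufferRef bufferA,
                                            AudioQueueRef queueB, AudioQueueBufferRef bufferB) {
        // Convert a 100 ms lead time into host-clock ticks.
        mach_timebase_info_data_t timebase;
        mach_timebase_info(&timebase);
        uint64_t leadNanos = 100ULL * 1000 * 1000;
        uint64_t leadTicks = leadNanos * timebase.denom / timebase.numer;

        AudioTimeStamp startTime = {0};
        startTime.mFlags    = kAudioTimeStampHostTimeValid;
        startTime.mHostTime = mach_absolute_time() + leadTicks;

        // No packet descriptions, no trimming, no parameter events; just a shared start time.
        AudioQueueEnqueueBufferWithParameters(queueA, bufferA, 0, NULL, 0, 0, 0, NULL, &startTime, NULL);
        AudioQueueEnqueueBufferWithParameters(queueB, bufferB, 0, NULL, 0, 0, 0, NULL, &startTime, NULL);

        // Both queues must also be started so playback can begin at the scheduled time.
        AudioQueueStart(queueA, NULL);
        AudioQueueStart(queueB, NULL);
    }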

The audio format matters a great deal when we play many sounds in parallel on iOS, because playback of certain compressed formats relies on the hardware codec. Only one instance of one of the following formats can be played on the device at a time:

  • AAC
  • ALAC (Apple Lossless)
  • MP3

To play multiple high-quality sounds in parallel, use linear PCM or IMA4 audio.

The following list describes how iOS supports audio formats for single or multiple playback:

  • Linear PCM and IMA/ADPCM (IMA4): We can play multiple sounds in linear PCM or IMA4 format in parallel on iOS without CPU resource issues.
  • AAC, MP3, and Apple Lossless (ALAC): Playback uses efficient hardware-based decoding on iOS, so only one such sound can be played at a time.

Playback with Positioning using OpenAL

The OpenAL audio API, available in the OpenAL framework and built on top of Core Audio, is optimized for positioning sounds during playback. OpenAL makes it easy to play, position, mix, and move sounds, using an interface modeled on OpenGL. OpenAL and OpenGL use the same coordinate system, which lets us synchronize the movement of sounds with graphic objects on the screen. OpenAL uses Core Audio's I/O audio unit directly for the lowest possible playback latency.

For all of these reasons, OpenAL is the best choice for playing sound effects in iOS game apps. It is also a good choice for playback in other kinds of iOS apps.
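
As a small illustration of the OpenAL model (not from the original article; error handling is omitted, and the PCM data, byte count, and sample rate are assumed to come from our own loader):

    #include <OpenAL/al.h>
    #include <OpenAL/alc.h>

    // Play a 16-bit mono PCM buffer at a position to the listener's right.
    void PlayPositionedSound(const void *pcmData, ALsizei pcmBytes, ALsizei sampleRate) {
        // Open the default device and create a context.
        ALCdevice  *device  = alcOpenDevice(NULL);
        ALCcontext *context = alcCreateContext(device, NULL);
        alcMakeContextCurrent(context);

        // Copy the PCM data into an OpenAL buffer.
        ALuint buffer;
        alGenBuffers(1, &buffer);
        alBufferData(buffer, AL_FORMAT_MONO16, pcmData, pcmBytes, sampleRate);

        // Create a source, attach the buffer, position it in 3D space, and play.
        ALuint source;
        alGenSources(1, &source);
        alSourcei(source, AL_BUFFER, buffer);
        alSource3f(source, AL_POSITION, 2.0f, 0.0f, 0.0f);
        alSourcePlay(source);
    }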

OpenAL 1.1 in iOS does not support audio capture, so for recording in iOS we use Audio Queue Services.

OpenAL in OS X is an implementation of the OpenAL 1.1 specification together with extensions. OpenAL in iOS includes two Apple-specific extensions:

  • alBufferDataStaticProcPtr follows the usage pattern of alBufferData but eliminates a buffer data copy.
  • alcMacOSXMixerOutputRateProcPtr lets us control the sample rate of the underlying mixer. For an example of using OpenAL in Core Audio on the Mac, see Services/OpenALExample in the Core Audio SDK.

Translation and reference from the Core Audio Programming Guide


Source: Viblo