What is Core Audio?


Core Audio is the digital audio infrastructure of iOS and OS X. It includes a set of frameworks designed to handle the audio needs of your application. Let’s see what we can do with Core Audio.

Core Audio in iOS and OS X

Core Audio is integrated into iOS and OS X to provide audio processing with high performance and low latency.

In OS X, the majority of Core Audio services sit on top of the Hardware Abstraction Layer (HAL), as shown below. Audio signals pass to and from the hardware through the HAL. You can access the HAL using Audio Hardware Services in the Core Audio framework when you need real-time audio. The Core MIDI (Musical Instrument Digital Interface) framework provides similar interfaces for working with MIDI data and devices.

You will find the application-level services of Core Audio in the Audio Toolbox and Audio Unit frameworks.

  • Use Audio Queue Services to record, play back, pause, loop, and synchronize audio.
  • Use Audio File, Converter, and Codec Services to read and write audio data on disk and to perform audio format conversions. In OS X you can also create custom codecs.
  • Use Audio Unit Services and Audio Processing Graph Services to host audio units (also called audio plug-ins) in your application. In OS X you can create custom audio units for use in your application or for other applications.
  • Use Music Sequencing Services to play MIDI-based control and music data.
  • Use Core Audio Clock Services for audio and MIDI synchronization and time format management.
  • Use System Sound Services to play system sounds and sound effects (see the sketch just after this list).
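As an illustration, System Sound Services boils down to a couple of AudioToolbox calls. The sketch below assumes a short sound file named tap.caf is bundled with the app (the file name is only an example):

```swift
import Foundation
import AudioToolbox

// Register a short sound file as a system sound and play it.
// "tap.caf" is a hypothetical sound bundled with the app.
if let url = Bundle.main.url(forResource: "tap", withExtension: "caf") {
    var soundID: SystemSoundID = 0
    AudioServicesCreateSystemSoundID(url as CFURL, &soundID)

    // System Sound Services is meant for short, alert-style sounds,
    // not for general-purpose audio playback.
    AudioServicesPlaySystemSoundWithCompletion(soundID) {
        // Release the sound once playback has finished.
        AudioServicesDisposeSystemSoundID(soundID)
    }
}
```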

Core Audio in iOS is optimized for the computing resources available on battery-powered mobile platforms. Consequently, there is no API for services that must be managed very tightly by the operating system, such as the HAL and the I/O Kit. However, a few services have been added in iOS that are not available in OS X. For example, Audio Session Services lets you manage the audio behavior of an application running on a device that also works as a phone and an iPod.

iOS Core Audio Architecture
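Audio Session Services, mentioned above, is exposed to applications through the AVAudioSession class. A minimal sketch of declaring playback behavior might look like this:

```swift
import AVFoundation

// Configure the app's audio behavior on iOS through the audio session.
let session = AVAudioSession.sharedInstance()
do {
    // Ask for playback behavior: audio keeps playing with the silent
    // switch on and follows the category's mixing rules.
    try session.setCategory(.playback, mode: .default, options: [])
    try session.setActive(true)
} catch {
    print("Failed to configure the audio session: \(error)")
}
```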

A bit of Digital Audio and Linear PCM

Most Core Audio services use and manipulate audio in linear pulse-code-modulated (linear PCM) format, the most common digital audio format. Digital audio recording creates PCM data by measuring the strength of the analog audio signal at regular intervals (this is what the sampling rate refers to) and converting each sample into a numerical value.

Standard compact disc (CD) audio uses a sampling rate of 44.1 kHz, with a 16-bit integer representing each sample (this is the bit depth).

  • A sample is a single digital value for a single channel.
  • A frame is a collection of time-coincident samples. For example, stereo audio has two channels, so there are always two samples per frame, one for the left channel and one for the right, while mono audio has only one channel, so a frame contains just one sample.
  • A packet is a set of one or more consecutive frames. In linear PCM audio, a packet always contains exactly one frame; in compressed formats the number varies. A packet defines the smallest meaningful unit of frames for a given audio format.

In linear PCM audio, a sample value varies linearly with the amplitude of the original signal it represents. For example, the 16-bit integer samples of standard CD audio allow 65,536 possible values between silence and the maximum level, and the difference in amplitude from one value to the next is always the same.

The Core Audio data structures, declared in the CoreAudioTypes.h header file, can describe linear PCM at any sample rate and bit depth. We will explain more in the article about Audio Data Formats.
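The central structure here is AudioStreamBasicDescription. As a rough sketch, the CD-quality format above (44.1 kHz, 16-bit, stereo, interleaved linear PCM) could be described like this:

```swift
import AudioToolbox

// Describe CD-quality audio: 44.1 kHz, 16-bit signed integer samples,
// two channels per frame, interleaved linear PCM.
var cdFormat = AudioStreamBasicDescription()
cdFormat.mSampleRate       = 44_100.0
cdFormat.mFormatID         = kAudioFormatLinearPCM
cdFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked
cdFormat.mBitsPerChannel   = 16        // bit depth of each sample
cdFormat.mChannelsPerFrame = 2         // stereo: one sample per channel per frame
cdFormat.mBytesPerFrame    = 4         // 2 channels x 2 bytes per sample
cdFormat.mFramesPerPacket  = 1         // linear PCM always has one frame per packet
cdFormat.mBytesPerPacket   = 4         // bytes per frame x frames per packet
```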

In OS X, Core Audio expects audio data to be in native-endian, 32-bit floating-point, linear PCM format. You can use Audio Converter Services to translate audio data between different variants of linear PCM. You can also use these converters to translate between linear PCM and compressed formats such as MP3 and Apple Lossless. Core Audio in OS X supplies codecs for most popular audio formats (although it does not provide an encoder to convert to MP3).

iOS uses integer and fixed-point audio data. The result is faster calculations and less battery drain when processing audio. iOS provides a Converter audio unit and includes the interfaces from Audio Converter Services. For more details on what are called the canonical audio data formats for iOS and OS X, look out for the following articles.

In iOS and OS X, Core Audio also supports the most common file formats for storing and playing audio data.
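Audio File Services is the usual entry point for such files. The sketch below, using a placeholder path, opens a file and reads back its data format:

```swift
import Foundation
import AudioToolbox

// Open an audio file and read its data format with Audio File Services.
// The file path here is only a placeholder.
let fileURL = URL(fileURLWithPath: "/path/to/song.m4a") as CFURL

var fileID: AudioFileID?
AudioFileOpenURL(fileURL, .readPermission, 0, &fileID)

if let fileID = fileID {
    var format = AudioStreamBasicDescription()
    var size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
    AudioFileGetProperty(fileID, kAudioFilePropertyDataFormat, &size, &format)

    print("Sample rate:", format.mSampleRate,
          "channels:", format.mChannelsPerFrame)
    AudioFileClose(fileID)
}
```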

Audio Units

Audio units are plug-ins for processing audio data. In OS X, a single audio unit can be used simultaneously by multiple channels and applications.

iOS provides a set of audio units that are optimized for efficiency and performance on mobile platforms. You can develop audio units for use in your own app. However, because custom audio units must be statically linked into the app, audio units that you develop yourself cannot be used by other iOS apps.

The audio units provided in iOS do not have user interfaces. Their main job is to provide low-latency audio for the app. To learn more about audio units on iPhone, look out for the following articles.

For Mac apps, you can use system-supplied or third-party-supplied audio units. You can also develop audio units as products in their own right. Users can plug your audio units into applications such as GarageBand or Logic Studio, as well as other audio unit hosting applications.

Some Mac audio units work behind the scenes to simplify tasks for you, such as splitting signals or communicating with hardware. Others appear on screen, with their own UI, to offer signal manipulation and processing. For example, effect units can mimic real-world sounds, like a guitarist’s distortion box. Other audio units generate signals, either programmatically or in response to MIDI input.

Some examples of audio units include (the sketch after this list maps these roles to Core Audio’s component type codes):

  • A signal processor (such as a high-pass filter, reverb, compressor, or distortion unit). Each is an effect unit and performs digital signal processing (DSP) in the same way as a hardware effect box or outboard signal processor.
  • A musical instrument or software synthesizer. This is called an instrument unit (or sometimes a music device) and is often used to generate music notes in response to MIDI input.
  • A signal source. Unlike instrument units, a generator unit is not activated by MIDI input but through code. For example, a generator unit can calculate and generate sine waves, or get source data from a file or network stream.
  • An interface to hardware. This is called an I/O unit.
  • A format converter. A converter unit can translate data between linear PCM variants, merge or split audio streams, or perform time and pitch changes.
  • A mixer or panner. A mixer unit can combine audio tracks. A panner unit can apply a stereo or 3D panning effect.
  • An effect unit working offline. An offline effect unit performs tasks that are too processor-intensive, or simply impossible, to do in real time. For example, an effect that performs time reversal on a file has to be applied offline.
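These roles correspond to the component type codes Core Audio uses to classify audio units. The snippet below simply lists that mapping and, as a small usage example, counts the effect units visible to the process:

```swift
import AudioToolbox

// The audio unit roles above map to component "type" codes
// declared in AudioUnit/AUComponent.h.
let audioUnitTypes: [String: OSType] = [
    "Effect (DSP such as filters, reverb, distortion)": kAudioUnitType_Effect,
    "Instrument / music device":                        kAudioUnitType_MusicDevice,
    "Generator (signal source)":                        kAudioUnitType_Generator,
    "I/O (interface to hardware)":                      kAudioUnitType_Output,
    "Format converter":                                 kAudioUnitType_FormatConverter,
    "Mixer":                                            kAudioUnitType_Mixer,
    "Panner":                                           kAudioUnitType_Panner,
    "Offline effect":                                   kAudioUnitType_OfflineEffect,
]

// Count the effect units available to this process: a wildcard
// description (zeroed subtype and manufacturer) matches every effect unit.
var effectDesc = AudioComponentDescription(componentType: kAudioUnitType_Effect,
                                           componentSubType: 0,
                                           componentManufacturer: 0,
                                           componentFlags: 0,
                                           componentFlagsMask: 0)
print("Available effect units:", AudioComponentCount(&effectDesc))
```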

In OS X, you can combine audio units in whatever way you or your users require. The following figure shows a basic chain of audio units: an instrument unit generates a signal based on data received from an outboard MIDI keyboard, and the generated audio then passes through effect units to apply bandpass filtering and distortion. Such a chain of audio units is called an audio processing graph.
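One classic way to build such a graph in code is the AUGraph API (since superseded by AVAudioEngine, but it still illustrates the idea). The sketch below, for OS X, wires a sampler instrument unit into a distortion effect and then into the default output; error handling is omitted for brevity:

```swift
import AudioToolbox

// Helper for filling in Apple-manufactured component descriptions.
func makeDescription(type: OSType, subType: OSType) -> AudioComponentDescription {
    AudioComponentDescription(componentType: type,
                              componentSubType: subType,
                              componentManufacturer: kAudioUnitManufacturer_Apple,
                              componentFlags: 0,
                              componentFlagsMask: 0)
}

// Build a small audio processing graph: sampler -> distortion -> default output.
var graphRef: AUGraph?
NewAUGraph(&graphRef)
guard let graph = graphRef else { fatalError("Could not create the AUGraph") }

var samplerDesc = makeDescription(type: kAudioUnitType_MusicDevice,
                                  subType: kAudioUnitSubType_Sampler)
var effectDesc  = makeDescription(type: kAudioUnitType_Effect,
                                  subType: kAudioUnitSubType_Distortion)
var outputDesc  = makeDescription(type: kAudioUnitType_Output,
                                  subType: kAudioUnitSubType_DefaultOutput)

var samplerNode = AUNode(), effectNode = AUNode(), outputNode = AUNode()
AUGraphAddNode(graph, &samplerDesc, &samplerNode)
AUGraphAddNode(graph, &effectDesc, &effectNode)
AUGraphAddNode(graph, &outputDesc, &outputNode)

// Connect sampler -> distortion -> output, bus 0 to bus 0 each time.
AUGraphConnectNodeInput(graph, samplerNode, 0, effectNode, 0)
AUGraphConnectNodeInput(graph, effectNode, 0, outputNode, 0)

AUGraphOpen(graph)
AUGraphInitialize(graph)
AUGraphStart(graph)
```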

If you develop DSP code that you want to make available to multiple apps, you should package that code as an audio unit.

If you develop Mac audio apps, supporting audio units lets you and your users take advantage of the existing library of audio units to extend your app’s capabilities.

Hardware Abstraction Layer

Core Audio uses a hardware abstraction layer (HAL) to provide a consistent, predictable interface for applications to interact with hardware. The HAL can also provide timing information to your app to simplify synchronization or to adjust for latency.

In most cases, your code does not interact with the HAL directly. Apple supplies a special audio unit, called the AUHAL unit in OS X and the AURemoteIO unit in iOS, which lets you pass audio from another audio unit to the hardware. Similarly, input coming from the hardware is routed through the AUHAL unit (or the AURemoteIO unit in iOS) and made available to the next audio unit in the chain.

The AUHAL unit (or AURemoteIO unit) also takes care of any data format conversion or channel mapping needed to translate audio data between audio units and the hardware.
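Obtaining that I/O unit yourself is a matter of describing it and asking the system for a matching component. A minimal sketch for iOS (AURemoteIO; on OS X you would request the default output unit, which sits on top of the AUHAL) might look like this:

```swift
import AudioToolbox

// Describe the I/O unit that hands audio to the hardware.
// On OS X, use kAudioUnitSubType_DefaultOutput instead of RemoteIO.
var ioDesc = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                       componentSubType: kAudioUnitSubType_RemoteIO,
                                       componentManufacturer: kAudioUnitManufacturer_Apple,
                                       componentFlags: 0,
                                       componentFlagsMask: 0)

var ioUnit: AudioUnit?
if let component = AudioComponentFindNext(nil, &ioDesc) {
    AudioComponentInstanceNew(component, &ioUnit)
}

if let ioUnit = ioUnit {
    // Upstream audio units (or a render callback) feed this unit;
    // it performs the final hand-off to the hardware through the HAL.
    AudioUnitInitialize(ioUnit)
    AudioOutputUnitStart(ioUnit)
}
```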

Conclusion

The above covers some of the basic knowledge and concepts for working with Core Audio; hopefully it serves as a useful reference for you.


Translated and adapted from Apple’s Core Audio Overview.

