    Introduction

    The M1SDK includes APIs that allow developers to design applications that encode or pan audio streams into a spatial audio render, and that play back and decode Mach1 Spatial 8-channel mixes using orientation data to produce the correct stereo output for the user's current orientation. Additionally, the M1SDK allows users to safely convert surround/spatial audio mixes to and from the Mach1 Spatial or Mach1 Horizon VVBP formats.

    VVBP, or Virtual Vector Based Panning, is a controlled virtual version of traditional VBAP (Vector Based Amplitude Panning) or SPS (Spatial PCM Sampling). These formats are designed for simplicity and ease of use and implementation, both for content creators and for developers. The spatial audio mixes rely only on amplitude-based coefficient changes for both encoding and decoding; unlike many other spatial audio approaches, no additional signal-altering processes (such as room modeling, delays or filters) are needed to create coherent and accurate spatial sound fields and play them back from a first-person, headtracked perspective. Due to the simplicity of the format and the cuboid vector space it relies on, it is also ideal for converting and carrying surround and spatial audio mixes without altering the mix to do so, making it an ideal server-side audio middleman container that brings controlled, post-produced spatial audio into new mediums easily.

    Overview

    The Mach1 Spatial SDK includes three components and libraries:

    Mach1Encode and Mach1Decode are supported on OSX 10.7+, Windows 10+, iOS 9.0+ and Android API 19+. Unity 4.0+ and Unreal Engine 4.10+ examples are available, and both engines are supported on the aforementioned platforms.

    Mach1Transcode is supported on macOS, Linux and Windows, with game engine support coming soon.

    Mach1 Internal Angle Standard

    Positional 3D Coords

    Orientation Euler

    Mach1Encode API

    Mach1Encode allows you to transform input audio streams into the Mach1 Spatial VVBP 8-channel format. Included are the functions needed for mono, stereo or quad/FOA input audio streams. The input streams are referred to as Points in our SDK.

    The typical encoding process starts with creating an object of the Mach1EncodeCore class and setting it up as described below. After that, you generate Points by calling generatePointResults() on that object. You will get as many points as there are input channels, and as many gains in each point as there are output channels. You then copy each input channel to each output channel with the corresponding gain.
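
    A minimal C++ sketch of that copy step is shown below; the planar buffer layout and the getGains() accessor (returning one gain list per point) are assumptions for illustration rather than a definitive API reference.

    #include <vector>
    
    // Sketch: mix each input point into the output channels using the per-point
    // gains produced after generatePointResults(). The buffer layout and the
    // getGains() accessor ([point][outputChannel]) are assumed here.
    void mixEncodedBlock(Mach1Encode &m1Encode,
                         const std::vector<std::vector<float>> &inBuffers,  // one buffer per input channel (point)
                         std::vector<std::vector<float>> &outBuffers,       // one buffer per output channel
                         size_t bufferSize) {
        m1Encode.generatePointResults();
        std::vector<std::vector<float>> gains = m1Encode.getGains(); // assumed accessor
    
        for (size_t point = 0; point < gains.size(); ++point) {
            for (size_t out = 0; out < gains[point].size(); ++out) {
                for (size_t s = 0; s < bufferSize; ++s) {
                    outBuffers[out][s] += inBuffers[point][s] * gains[point][out];
                }
            }
        }
    }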

    Summary of Use

    The Mach1Encode API is designed to aid in developing tools for inputting to a Mach1 VVBP/SPS format. It gives access to the common calculations needed for audio processing and UI/UX handling when panning/encoding to Mach1 VVBP/SPS formats, via the following common structure:

    void update(){
        m1Encode.setRotation(rotation);
        m1Encode.setDiverge(diverge);
        m1Encode.setPitch(pitch);
        m1Encode.setStereoRotate(sRotation);
        m1Encode.setStereoSpread(sSpread);
        m1Encode.setAutoOrbit(autoOrbit);
        m1Encode.setIsotropicEncode(enableIsotropicEncode);
        m1Encode.setInputMode(Mach1EncodeInputModeType::Mach1EncodeInputModeMono);
        m1Decode.setDecodeAlgoType(Mach1DecodeAlgoSpatial);
    
        mtx.lock();
        m1Encode.generatePointResults();
    
        m1Decode.beginBuffer();
        decoded = m1Decode.decode(decoderRotationY, decoderRotationP, decoderRotationR, 0, 0);
        m1Decode.endBuffer();
    
        std::vector<float> volumes = this->volumes;
        mtx.unlock();
    }
    
    func update(decodeArray: [Float], decodeType: Mach1DecodeAlgoType){
        m1Encode.setRotation(rotation: rotation)
        m1Encode.setDiverge(diverge: diverge)
        m1Encode.setPitch(pitch: height)
        m1Encode.setAutoOrbit(setAutoOrbit: true)
        m1Encode.setIsotropicEncode(setIsotropicEncode: true)
        m1Encode.setStereoSpread(setStereoSpread: stereoSpread)
        m1Encode.setInputMode(inputMode: type)
        m1Encode.setOutputMode(outputMode: type)
    
        m1Encode.generatePointResults()
    
        //Use each coeff to decode multichannel Mach1 Spatial mix
        var volumes : [Float] = m1Encode.getResultingVolumesDecoded(decodeType: decodeType, decodeResult: decodeArray)
    
        for i in 0..<players.count {
            players[i].volume = volumes[i] * volume
        }
    }
    
    let m1Encode = null;
    let m1EncodeModule = Mach1EncodeModule();
    m1EncodeModule.onInited = function() {
        m1Encode = new(m1EncodeModule).Mach1Encode();
    };
    function update() {
        m1Encode.setRotation(params.rotation);
        m1Encode.setDiverge(params.diverge);
        m1Encode.setPitch(params.pitch);
        m1Encode.setStereoRotate(params.sRotation);
        m1Encode.setStereoSpread(params.sSpread);
        m1Encode.setAutoOrbit(params.autoOrbit);
        m1Encode.setIsotropicEncode(params.enableIsotropicEncode);
    
        m1Encode.generatePointResults();
    }
    

    Installation

    Import and link the appropriate target device's / IDE's library file.

    Generate Point Results

    Returns the resulting points' coefficients based on the selected and calculated input/output configuration.

    m1Encode.generatePointResults();
    
    m1Encode.generatePointResults()
    
    m1Encode.generatePointResults();
    

    Set Input Mode

    Sets the number of input streams to be positioned as points.

    if (inputKind == 0) { // Input: MONO
        m1Encode.inputMode = M1Encode::INPUT_MONO;
    }
    if (inputKind == 1) { // Input: STEREO
        m1Encode.inputMode = M1Encode::INPUT_STEREO;
    }
    if (inputKind == 2) { // Input: Quad
        m1Encode.inputMode = M1Encode::INPUT_QUAD;
    }
    if (inputKind == 3) { // Input: AFORMAT
        m1Encode.inputMode = M1Encode::INPUT_AFORMAT;
    }
    if (inputKind == 4) { // Input: BFORMAT
        m1Encode.inputMode = M1Encode::INPUT_BFORMAT;
    }
    
    var type : Mach1EncodeInputModeType = Mach1EncodeInputModeMono
    m1Encode.setInputMode(inputMode: type)
    
    if(soundFiles[soundIndex].count == 1) {
        type = Mach1EncodeInputModeMono
    }
    else if(soundFiles[soundIndex].count == 2) {
        type = Mach1EncodeInputModeStereo
    }
    else if (soundFiles[soundIndex].count == 4) {
        if (quadMode){
            type = Mach1EncodeInputModeQuad
        }
        if (aFormatMode){
            type = Mach1EncodeInputModeAFormat
        }
        if (bFormatMode){
            type = Mach1EncodeInputModeBFormat
        }
    }
    
    if (params.inputKind == 0) { // Input: MONO
        m1Encode.setInputMode(m1Encode.Mach1EncodeInputModeType.Mach1EncodeInputModeMono);
    }
    if (params.inputKind == 1) { // Input: STEREO
        m1Encode.setInputMode(m1Encode.Mach1EncodeInputModeType.Mach1EncodeInputModeStereo);
    }
    if (params.inputKind == 2) { // Input: Quad
        m1Encode.setInputMode(m1Encode.Mach1EncodeInputModeType.Mach1EncodeInputModeQuad);
    }
    if (params.inputKind == 3) { // Input: AFORMAT
        m1Encode.setInputMode(m1Encode.Mach1EncodeInputModeType.Mach1EncodeInputModeAFormat);
    }
    if (params.inputKind == 4) { // Input: BFORMAT
        m1Encode.setInputMode(m1Encode.Mach1EncodeInputModeType.Mach1EncodeInputModeBFormat);
    }
    

    Set Output Mode

    Sets the output spatial format: Mach1Spatial (8 channels) or Mach1Horizon (4 channels).

    if (outputKind == 0) { // Output: 4CH Mach1Horizon
        m1Encode.outputMode = M1Encode::OUTPUT_4CH;
    }
    if (outputKind == 1) { // Output: 8CH Mach1Spatial
        m1Encode.outputMode = M1Encode::OUTPUT_8CH;
    }
    
    if (outputKind == 0) { // Output: 4CH Mach1Horizon
        m1Encode.setOutputMode(outputMode: Mach1EncodeOutputMode4Ch)
    }
    if (outputKind == 1) { // Output: 8CH Mach1Spatial
        m1Encode.setOutputMode(outputMode: Mach1EncodeOutputMode8Ch)
    }
    
    if (params.outputKind == 0) { // Output: Mach1Horizon / Quad
        m1Encode.setOutputMode(m1Encode.Mach1EncodeOutputModeType.Mach1EncodeOutputMode4Ch);
    }
    if (params.outputKind == 1) { // Output: Mach1Spatial / Cuboid
        m1Encode.setOutputMode(m1Encode.Mach1EncodeOutputModeType.Mach1EncodeOutputMode8Ch);
    }
    

    Set Rotation

    m1Encode.setRotation(rotation);
    
    m1Encode.setRotation(rotation: rotation)
    
    m1Encode.setRotation(params.rotation);
    

    Rotates the point(s) around the center origin of the vector space.

    UI value range: 0.0 -> 1.0 (0-360)
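
    For example, a UI rotation control expressed in degrees can be normalized into this range before being passed to the encoder. A short sketch with a hypothetical UI value:

    float rotationDegrees = 90.0f;                  // hypothetical UI value in degrees
    m1Encode.setRotation(rotationDegrees / 360.0f); // 90 degrees -> 0.25 in the 0.0 -> 1.0 range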

    Set Diverge

    m1Encode.setDiverge(diverge);
    
    m1Encode.setDiverge(diverge: diverge)
    
    m1Encode.setDiverge(params.diverge);
    

    Moves the point(s) to/from center origin of the vector space.

    UI value range: -1.0 -> 1.0

    Set Pitch/Height

    m1Encode.setPitch(pitch);
    
    m1Encode.setPitch(pitch: height)
    
    m1Encode.setPitch(params.pitch);
    

    Moves the point(s) up/down the vector space.

    UI value range: -1.0 -> 1.0

    Set Stereo Rotation

    m1Encode.setStereoRotate(sRotation);
    
    m1Encode.setStereoRotate(setStereoRotate: stereoRotate)
    
    m1Encode.setStereoRotate(params.sRotation);
    

    Rotates the two stereo points around the axis of the center point between them.

    UI value range: -180.0 -> 180.0

    Set Stereo Spread

    m1Encode.setStereoSpread(sSpread);
    
    m1Encode.setStereoSpread(setStereoSpread: stereoSpread)
    
    m1Encode.setStereoSpread(params.sSpread);
    

    Increases or decreases the space between the two stereo points.

    UI value range: 0.0 -> 1.0

    Set Auto Orbit

    m1Encode.setAutoOrbit(autoOrbit);
    
    m1Encode.setAutoOrbit(setAutoOrbit: true)
    
    m1Encode.setAutoOrbit(params.autoOrbit);
    

    When active, both stereo points rotate in relation to the center point between them so that they always triangulate toward the center of the cuboid.

    default value: true

    Set Isotropic / Periphonic

    m1Encode.setIsotropicEncode(enableIsotropicEncode);
    
    m1Encode.setIsotropicEncode(setIsotropicEncode: true)
    
    m1Encode.setIsotropicEncode(params.enableIsotropicEncode);
    

    When active, encoding behavior is distributed evenly across all azimuth/rotation angles and all altitude/pitch angles.

    default value: true

    Inline Mach1Encode Object Decoder

    //Use each coeff to decode multichannel Mach1 Spatial mix
    std::vector<float> volumes = m1Encode.getResultingVolumesDecoded(decodeType, decodeArray);
    
    for (int i = 0; i < 8; i++) {
        players[i].volume = volumes[i] * volume;
    }
    
    //Use each coeff to decode multichannel Mach1 Spatial mix
    var volumes : [Float] = m1Encode.getResultingVolumesDecoded(decodeType: decodeType, decodeResult: decodeArray)
    
    for i in 0..<players.count {
        players[i].volume = volumes[i] * volume
    }
    

    This function allows designs where only previewing or live rendering to a decoded audio output is required, without any step of rendering or exporting to disk. It enables developers to stack and sum multiple Mach1Encode objects' decoded outputs instead of using Mach1Encode objects to write to a master 8-channel intermediary file, allowing shorthand Mach1Encode->Mach1Decode->Stereo paths when only live playback is needed. A sketch of this stacking pattern follows.
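
    Below is a minimal C++ sketch of that pattern, assuming several Mach1Encode objects each driving their own set of audio players, a shared decodeType/decodeArray produced by a Mach1Decode object (see the next section), and the getResultingVolumesDecoded() call shown above; the encoder/player containers and masterVolume are hypothetical.

    // Sketch: live-preview several Mach1Encode objects through one shared
    // orientation decode. Each object's players output stereo and simply sum
    // at the output device, with no 8-channel intermediary file.
    for (size_t e = 0; e < encoders.size(); ++e) {               // hypothetical std::vector<Mach1Encode> encoders
        encoders[e].generatePointResults();
        std::vector<float> volumes =
            encoders[e].getResultingVolumesDecoded(decodeType, decodeArray);
        for (size_t i = 0; i < volumes.size() && i < players[e].size(); ++i) {
            players[e][i].setVolume(volumes[i] * masterVolume);  // hypothetical player wrapper
        }
    }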

    Mach1Decode API

    Mach1Decode supplies the functions needed to play back the Mach1 Spatial VVBP 8-channel mix as a stereo stream based on the device's orientation. This can be used for mobile device windowing or first-person based media such as AR/VR/MR without any additional processing effects required.

    Summary of Use

    The Mach1Decode API is designed to be used the following way:

    Setup Step (setup/start):

    void setup(){
        mach1Decode.setDecodeAlgoType(Mach1DecodeAlgoSpatial);
        mach1Decode.setPlatformType(Mach1PlatformDefault);
        mach1Decode.setFilterSpeed(1.0f);
    }
    void loop(){
        mach1Decode.beginBuffer();
        mach1Decode.decode(deviceYaw, devicePitch, deviceRoll);
        mach1Decode.endBuffer();
    }
    
    override func viewDidLoad() {
        mach1Decode.setDecodeAlgoType(newAlgorithmType: Mach1DecodeAlgoSpatial)
        mach1Decode.setPlatformType(type: Mach1PlatformiOS)
        mach1Decode.setFilterSpeed(filterSpeed: 1.0)
    }
    func update() {
        mach1Decode.beginBuffer()
        let decodeArray: [Float]  = mach1Decode.decode(Yaw: Float(deviceYaw), Pitch: Float(devicePitch), Roll: Float(deviceRoll))
        mach1Decode.endBuffer()
    }
    
    let mach1Decode = null;
    let mach1DecodeModule = Mach1DecodeModule();
    mach1DecodeModule.onInited = function() {
        mach1Decode = new(mach1DecodeModule).Mach1Decode();
        mach1Decode.setPlatformType(mach1Decode.Mach1PlatformType.Mach1PlatformOfEasyCam);
        mach1Decode.setDecodeAlgoType(mach1Decode.Mach1DecodeAlgoType.Mach1DecodeAlgoSpatial);
        mach1Decode.setFilterSpeed(0.95);
    };
    function update() {
        mach1Decode.beginBuffer();
        var decoded = mach1Decode.decode(params.decoderRotationY, params.decoderRotationP, params.decoderRotationR);
        mach1Decode.endBuffer();
    }
    

    Audio Loop:

    Decode Callback Mode Options

    The Mach1Decode class's decode function returns the coefficients for each external audio player's gain/volume to create the spatial decode for the current angle on each update. The timing of this call can be designed with two different modes of use:

    To utilize the first mode, supply the bufferSize your audio players are using and the current sample index within that buffer to the decode(yaw, pitch, roll, bufferSize, sampleIndex) function, to synchronize updates with the audio callback. To utilize the second mode, simply supply 0 for both bufferSize and sampleIndex. Both call styles are sketched below.
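
    A brief C++ sketch of the two call styles described above (the orientation and buffer variables are placeholders):

    mach1Decode.beginBuffer();
    
    // Mode 1: synchronized to the audio callback; pass the player's buffer size
    // and the current sample index within that buffer.
    std::vector<float> volumesSynced = mach1Decode.decode(deviceYaw, devicePitch, deviceRoll,
                                                          bufferSize, sampleIndex);
    
    // Mode 2: not tied to an audio buffer; pass 0 for both values.
    std::vector<float> volumesSimple = mach1Decode.decode(deviceYaw, devicePitch, deviceRoll, 0, 0);
    
    mach1Decode.endBuffer();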

    Installation

    Import and link the appropriate target device's / IDE's library file.

    Set Platform Type

    Set the angular type for the target device via the enum. Use the setPlatformType function to set the device's angle order and convention if applicable:

    Preset Types (enum):

    mach1Decode.setPlatformType(Mach1PlatformDefault);
    
    mach1Decode.setPlatformType(type: Mach1PlatformType.Mach1PlatformiOS)
    
    mach1Decode.setPlatformType(m1Decode.Mach1PlatformType.Mach1PlatformOfEasyCam);
    

    Angle Order Conventions

    1. Order of Yaw, Pitch, Roll (defined as the angle applied first, second and third).
    2. Direction of transform around each pole's positive movement (left or right rotation).
    3. Range of values before the transform completes 2(PI).

    Euler Angle Orders:

    Set Filter Speed

    Filter speed determines the amount of angle smoothing applied to the orientation angles used for the Mach1DecodeCore class. 1.0 would mean that there is no filtering applied, 0.1 would add a long ramp effect of intermediary angles between each angle sample. It should be noted that you will not have any negative effects with >0.9 but could get some orientation latency when <0.85. The reason you might want angle smoothing is that it might help remove a zipper effect seen on some poorer performing platforms or devices.

    float filterSpeed = 1.0f;
    
    mach1Decode.setFilterSpeed(filterSpeed);
    
    mach1Decode.setFilterSpeed(filterSpeed: 1.0)
    
    mach1Decode.setFilterSpeed(0.95);
    

    Set Decoding Algorithm

    Use this function to set up and choose the required Mach1 decoding algorithm.

    Mach1 Decoding Algorithm Types:

    void setDecodeAlgoType(Mach1DecodeAlgoType newAlgorithmType);
    
    func setDecodeAlgoType(newAlgorithmType: Mach1DecodeAlgoType)
    
    mach1Decode.setDecodeAlgoType(m1Decode.Mach1DecodeAlgoType.Mach1DecodeAlgoSpatial);
    

    Mach1DecodeAlgoSpatial

    Mach1Spatial. 8 Channel spatial mix decoding from our cuboid configuration. This is the default and recommended decoding utilizing isotropic decoding behavior.

    Mach1DecodeAlgoAltSpatial

    Mach1Spatial. 8 Channel spatial mix decoding from our cuboid configuration. This is a Periphonic decoding weighted more toward yaw, prone to gimbal lock but can be useful for use cases that only need 2 out of 3 input angle types.

    Mach1DecodeAlgoHorizon

    Mach1Horizon. 4 channel spatial mix decoding for compass / yaw only configurations. Also able to decode and virtualize a first person perspective of Quad Surround mixes.

    Mach1DecodeAlgoHorizonPairs

    Mach1HorizonPairs. 8 channel spatial mix decoding for compass / yaw only that can support headlocked / non-diegetic stereo elements to be mastered within the mix / 8 channels. Supports and decodes Quad-Binaural mixes.

    Mach1DecodeAlgoSpatialPairs

    Mach1SpatialPairs. Periphonic stereo-pairs decoding. This decoding mode is deprecated and only helpful for experimental use cases!

    Begin Buffer

    Call this function before reading from the Mach1Decode buffer.

    mach1Decode.beginBuffer();
    
    mach1Decode.beginBuffer()
    
    mach1Decode.beginBuffer();
    

    End Buffer

    Call this function after reading from the Mach1Decode buffer.

    mach1Decode.endBuffer();
    
    mach1Decode.endBuffer()
    
    mach1Decode.endBuffer();
    

    Decoding

    The decode function's purpose is to give you updated volumes for each input audio channel on each frame so that the spatial effect can manifest itself. There are two versions of this function: one for cases where you might not need very low latency or cannot include C/C++ directly, and a high-performance version for C/C++ use.

    If decoding on the audio thread, the high-performance version is recommended where possible.

    Default Isotropic Decoding [recommended]:

    Lower-performance version for non-audio-thread operation or for use in managed languages:

    std::vector<float> volumes = mach1Decode.decode(deviceYaw, devicePitch, deviceRoll);
    
    let decodeArray: [Float]  = mach1Decode.decode(Yaw: Float(deviceYaw), Pitch: Float(devicePitch), Roll: Float(deviceRoll))
    
    var decoded = m1Decode.decode(params.decoderRotationY, params.decoderRotationP, params.decoderRotationR);
    

    You can get a per-sample volume frame if you specify the buffer size and the current sample index:

    std::vector<float> volumes = mach1Decode.decode(deviceYaw, devicePitch, deviceRoll, bufferSize, sampleIndex);
    
    // The high-performance version is meant to be used on the audio thread: it writes
    // the resulting channel volumes into a float array instead of allocating a result
    // vector. Note the pointer to the volumeFrame array passed in; the array itself
    // has to have a size of 18 floats.
    
    float volumeFrame[18];
    
    mach1Decode.decode(deviceYaw, devicePitch, deviceRoll, volumeFrame, bufferSize, sampleIndex);
    

    Example of Using Decoded Coefficients

    Input orientation angles and return the current sample/buffer's coefficients.

    Sample based example

    std::vector<float> volumes = mach1Decode.decode(deviceYaw, devicePitch, deviceRoll);
    
    for (int i = 0; i < 8; i++) {
        playersLeft[i].setVolume(volumes[i * 2] * overallVolume);
        playersRight[i].setVolume(volumes[i * 2 + 1] * overallVolume);
    }
    
    //Send device orientation to mach1Decode object with the preferred algo
    mach1Decode.beginBuffer()
    let decodeArray: [Float]  = mach1Decode.decode(Yaw: Float(deviceYaw), Pitch: Float(devicePitch), Roll: Float(deviceRoll))
    mach1Decode.endBuffer()
    
    //Use each coeff to decode multichannel Mach1 Spatial mix
    for i in 0...7 {
        players[i * 2].volume = Double(decodeArray[i * 2])
        players[i * 2 + 1].volume = Double(decodeArray[i * 2 + 1])
    }
    

    Buffer based example

    //16 coefficients of spatial, 2 coefficients of headlocked stereo
    float volumes[18];
    
    mach1Decode.beginBuffer();
    for (size_t i = 0; i < samples; i++)
    {
        float sndL = 0.0f, sndR = 0.0f;   // accumulators for this output sample
        size_t idx = bufferRead + i;      // read index into the 8 source channel buffers (assumed)
    
        mach1Decode.decode(Yaw, Pitch, Roll, volumes, samples, i);
    
        for (int j = 0; j < 8; j++)
        {
            sndL += volumes[j * 2 + 0] * buffer[j][idx];
            sndR += volumes[j * 2 + 1] * buffer[j][idx];
        }
    
        buf[i * 2 + 0] = (short) (sndL * (SHRT_MAX-1));
        buf[i * 2 + 1] = (short) (sndR * (SHRT_MAX-1));
    }
    mach1Decode.endBuffer();
    bufferRead += samples;
    

    Get Current Time

    Returns the current elapsed time in milliseconds (ms) since the Mach1Decode object's creation.

    Get Log

    Returns a string of the last log message (or an empty string if none) from the Mach1DecodeCAPI binary library. Use this to assist in debugging, with a list of input angles and the associated output coefficients from the Mach1Decode function used.
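
    Neither of these calls has a snippet in the original tabs; a minimal C++ sketch, assuming the method names mirror the headings above:

    auto elapsedMs = mach1Decode.getCurrentTime(); // elapsed ms since the object's creation (name assumed)
    auto lastLog   = mach1Decode.getLog();         // last log message, or empty string if none (name assumed)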

    Mach1DecodePositional API

    Mach1DecodePositional extends the 3DOF orientation decoding so that it can be used in environments that support 6DOF positional movement. It does this by referencing the user's device to a location and adding an additional layer of rotations and attenuations to the spatial decoding.

    Unity & Unreal Engine

    Please view the examples in examples/Unity|UnrealEngine to see deployment of Mach1 Spatial mixes with positional rotation and attenuation applied. These functions can be viewed from the M1Base class used in both examples and are called by creating a new object in the game engine; attach Mach1SpatialActor or Mach1SpatialDecode.cs to view the setup for a Mach1 Spatial mix layer.

    Summary of Use

    The Mach1DecodePositional API is designed to be added when 6DOF or positional placement of Mach1Decode objects is needed. Once added and updated with the object's and the referenced device/camera's transforms, it calculates the positional and rotational angles and distances and returns them via the same usable coefficients that Mach1Decode produces, as per the following:

    void setup(){
        mach1DecodePositional.setDecodeAlgoType(Mach1DecodeAlgoSpatial);
        mach1DecodePositional.setPlatformType(Mach1PlatformDefault);
        mach1DecodePositional.setUseAttenuation(useAttenuation);
        mach1DecodePositional.setAttenuationCurve(attenuationCurve);
        mach1DecodePositional.setUsePlaneCalculation(usePlaneCalculation);
    }
    
    void loop(){
        mach1DecodePositional.setListenerPosition(devicePos);
        mach1DecodePositional.setListenerRotation(deviceRot);
        mach1DecodePositional.setDecoderAlgoPosition(objPos);
        mach1DecodePositional.setDecoderAlgoRotation(objRot);
        mach1DecodePositional.setDecoderAlgoScale(objScale);
    
        mach1DecodePositional.getDist();
        mach1DecodePositional.getCoefficients(result);
    }
    
    override func viewDidLoad() {
        mach1DecodePositional.setDecodeAlgoType(newAlgorithmType: Mach1DecodeAlgoSpatial)
        mach1DecodePositional.setPlatformType(type: Mach1PlatformiOS)
        mach1DecodePositional.setFilterSpeed(filterSpeed: 1.0)
        mach1DecodePositional.setUseAttenuation(useAttenuation: true)
        mach1DecodePositional.setUsePlaneCalculation(bool: false)
    }
    func update() {
        mach1DecodePositional.setListenerPosition(point: (devicePos))
        mach1DecodePositional.setListenerRotation(point: Mach1Point3D(deviceRot))
        mach1DecodePositional.setDecoderAlgoPosition(point: (objectPosition))
        mach1DecodePositional.setDecoderAlgoRotation(point: Mach1Point3D(objectRotation))
        mach1DecodePositional.setDecoderAlgoScale(point: Mach1Point3D(x: 0.1, y: 0.1, z: 0.1))
    
        mach1DecodePositional.evaluatePositionResults()
    
        var attenuation : Float = mach1DecodePositional.getDist()
        attenuation = mapFloat(value: attenuation, inMin: 0, inMax: 3, outMin: 1, outMax: 0)
        attenuation = clampFloat(value: attenuation, min: 0, max: 3)
        mach1DecodePositional.setAttenuationCurve(attenuationCurve: attenuation)
    
        var decodeArray: [Float] = Array(repeating: 0.0, count: 18)
        mach1DecodePositional.getCoefficients(result: &decodeArray)
    }
    

    Setup Step (setup/start):

    Audio Loop:

    Installation

    Import and link the appropriate target device's / IDE's library file.

    For Unity: - Import the Custom Asset Package

    For Unreal Engine: - Add the Mach1Spatial Plugin to your project

    Setup per Spatial Soundfield Position

    The following functions control how positional distance affects the overall gain of a Mach1DecodePositional object within any design. The resulting distance calculations can also be used for any external effect, as sketched below.
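
    For example, the normalized distance could also drive an external effect such as a reverb send; a C++ sketch, where the reverbSend object and the 0-3 mapping range are hypothetical:

    // Sketch: reuse the normalized distance for an external effect level.
    float dist = mach1DecodePositional.getDist();
    float send = (dist > 3.0f) ? 0.0f : 1.0f - (dist / 3.0f); // closer object -> stronger send
    reverbSend.setLevel(send);                                // hypothetical external effect object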

    Attenuation/Falloff

    void setUseAttenuation(bool useAttenuation);
    
    func setUseAttenuation(useAttenuation: Bool)
    

    Boolean turning on/off distance attenuation calculations for that Mach1DecodePositional object.

    Reference positional point/plane/shape

    void setUsePlaneCalculation(bool usePlaneCalculation);
    
    func setUsePlaneCalculation(usePlaneCalculation: Bool)
    

    This function sets whether the rotational pivot of a Mach1DecodePositional soundfield is calculated by referencing the device/camera to a positional point or to the closest point of a plane (and, further, the closest plane of a shape). This gives each decode object more placement options and prevents soundfield scenes from rotating when not needed.

    Set Filter Speed

    float filterSpeed = 1.0f;
    
    mach1Decode.setFilterSpeed(filterSpeed);
    
    mach1Decode.setFilterSpeed(filterSpeed: 1.0)
    

    Filter speed determines the amount of angle smoothing applied to the orientation angles used for the Mach1DecodeCore class. 1.0 would mean that there is no filtering applied, 0.1 would add a long ramp effect of intermediary angles between each angle sample. It should be noted that you will not have any negative effects with >0.9 but could get some orientation latency when <0.85. The reason you might want angle smoothing is that it might help remove a zipper effect seen on some poorer performing platforms or devices.

    Setup for Advanced Settings

    Mute Controls

    void setMuteWhenOutsideObject(bool muteWhenOutsideObject);
    
    func setMuteWhenOutsideObject(muteWhenOutsideObject: Bool)
    

    Similar to setUseClosestPointRotationMuteInside, these functions give further control over placing a soundfield positionally and determining when it should or shouldn't output results.

    void setMuteWhenInsideObject(bool muteWhenInsideObject);
    
    func setMuteWhenInsideObject(muteWhenInsideObject: Bool)
    

    Mutes the Mach1DecodePositional object (all coefficient results become 0) when outside the positional reference shape/point.
    
    Mutes the Mach1DecodePositional object (all coefficient results become 0) when inside the positional reference shape/point.

    Manipulate input angles for positional rotations

    void setUseYawForRotation(bool useYawForRotation);
    
    func setUseYawForRotation(useYawForRotation: Bool)
    

    Ignore Yaw angle rotation results from pivoting positionally

    void setUsePitchForRotation(bool usePitchForRotation);
    
    func setUsePitchForRotation(usePitchForRotation: Bool)
    

    Ignore Pitch angle rotation results from pivoting positionally

    void setUseRollForRotation(bool useRollForRotation);
    
    func setUseRollForRotation(useRollForRotation: Bool)
    

    Ignore Roll angle rotation results from pivoting positionally

    Update per Spatial Soundfield Position

    Updatable variables for each Mach1DecodePositional object. These can also be set just once if needed.

    void setCameraPosition(Mach1Point3DCore* pos);
    
    func setListenerPosition(point: Mach1Point3D)
    

    Updates the device/camera's position in desired x,y,z space

    void setCameraRotation(Mach1Point3DCore* euler);
    
    func setListenerRotation(point: Mach1Point3D)
    

    Updates the device/camera's orientation with Euler angles (yaw, pitch, roll)

    void setCameraRotationQuat(Mach1Point4DCore* quat);
    
    func setListenerRotationQuat(point: Mach1Point4D)
    

    Updates the device/camera's orientation with Quaternion

    void setDecoderAlgoPosition(Mach1Point3DCore* pos);
    
    func setDecoderAlgoPosition(point: Mach1Point3D)
    

    Updates the decode object's position in desired x,y,z space

    void setDecoderAlgoRotation(Mach1Point3DCore* euler);
    
    func setDecoderAlgoRotation(point: Mach1Point3D)
    

    Updates the decode object's orientation with Euler angles (yaw, pitch, roll)

    void setDecoderAlgoRotationQuat(Mach1Point4DCore* quat);
    
    func setDecoderAlgoRotationQuat(point: Mach1Point4D)
    

    Updates the decode object's orientation with Quaternion

    void setDecoderAlgoScale(Mach1Point3DCore* scale);
    
    func setDecoderAlgoScale(point: Mach1Point3D)
    

    Updates the decode object's scale in desired x,y,z space

    Applying Resulting Coefficients

    void evaluatePositionResults();
    
    func evaluatePositionResults()
    

    Calculates the resulting positional and rotational coefficients; call this each update before getCoefficients.

    void getCoefficients(float *result);
    
    func getCoefficients(result: inout [Float])
    

    Gets the coefficient results for applying rotational and positional decoding to the object; this replaces the results from mach1Decode.decode. A usage sketch follows.
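
    A C++ sketch of applying those coefficients, mirroring the stereo-pair loop from the Mach1Decode sample-based example above (the player objects and overallVolume are hypothetical):

    float coeffs[18]; // 16 spatial coefficients + 2 headlocked stereo coefficients
    
    mach1DecodePositional.evaluatePositionResults();
    mach1DecodePositional.getCoefficients(coeffs);
    
    for (int i = 0; i < 8; i++) {
        playersLeft[i].setVolume(coeffs[i * 2] * overallVolume);
        playersRight[i].setVolume(coeffs[i * 2 + 1] * overallVolume);
    }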

    Return Relative Comparisons

    float getDist();
    
    func getDist() -> Float
    

    Get the normalized distance between the Mach1DecodePositional object and the device/camera.
    
    Mach1Point3D getCurrentAngle();
    
    func getCurrentAngle() -> Mach1Point3D
    
    Mach1Point3D getCoefficientsRotation();
    
    func getCoefficientsRotation() -> Mach1Point3D

    Update Falloff/Attenuation

    void setAttenuationCurve(float attenuationCurve);
    
    func setAttenuationCurve(attenuationCurve: Float)
    

    Sets the attenuation curve value applied to the current buffer; in the examples above, the result of getDist() is mapped and clamped before being passed in.
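
    A C++ equivalent of the distance-to-attenuation mapping shown in the Swift example earlier (the 0-3 mapping range comes from that example and is not a requirement):

    // Sketch: map the normalized distance into an attenuation value each update.
    float dist = mach1DecodePositional.getDist();
    float attenuation = 1.0f - (dist / 3.0f);       // map 0..3 -> 1..0
    if (attenuation < 0.0f) attenuation = 0.0f;     // clamp
    if (attenuation > 1.0f) attenuation = 1.0f;
    mach1DecodePositional.setAttenuationCurve(attenuation);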

    Mach1Transcode API

    Mach1Transcode includes functions for use cases that utilize Mach1 Spatial's format-agnostic abilities, allowing 1:1 VBAP-style conversions from any surround or spatial audio format to any other surround or spatial audio format. This is very helpful for apps that have certain input requirements but different output requirements depending on whether the app is launched for VR/AR/MR or for mobile use, without completely redesigning the application's audio structure. It is also a recommended method of carrying one master spatial audio container and converting it at endpoints as needed, without the adverse signal-altering effects seen in other spatial audio formats.

    Usage

    Rapidly offline render to and from Mach1 formats.

    Example command line for converting a Mach1 Spatial mix to first-order ambisonics (ACN/SN3D):

    m1-transcode fmtconv -in-file /path/to/file.wav -in-fmt M1Spatial -out-fmt ACNSN3D -out-file /path/to/output.wav -out-file-chans 0
    

    Example command line for converting a 7.1 film mix to Mach1 Spatial:

    m1-transcode fmtconv -in-file /path/to/file.wav -in-fmt SevenOnePT_Cinema -out-fmt Mach1Spatial -out-file /path/to/output.wav
    

    Example command line for converting Mach1 Spatial to Mach1 Horizon Pairs (Quad-Binaural compliant):

    m1-transcode fmtconv -in-file /path/to/file.wav -in-fmt M1Spatial -out-fmt Mach1HorizonPairs -out-file /path/to/output.wav -out-file-chans 2
    

    Suggested Metadata Spec [optional]

    Mach1 Spatial = mach1spatial-8

    Mach1 Spatial+ = mach1spatial-12

    Mach1 Spatial++ = mach1spatial-16

    Mach1 StSP = mach1stsp-2

    Mach1 Horizon = mach1horizon-4

    Mach1 Horizon Pairs = mach1horizon-8

    Metadata is not required for decoding any Mach1 VVBP format, and relying on auto-detection methods is often not recommended; instead rely on UI/UX for user input when a Mach1 multichannel audio file is uploaded, for safest handling. This is because there are several possible 8-channel formats, and unless there are proper methods to filter, detect and handle each one, user input is the safer option. There are many opportunities for transcoding or splitting a multichannel audio file, any of which could undo metadata or apply false-positive metadata, since many audio engines are not built to handle multichannel content safely.

    If auto-detection is still required, use the following suggested specifications, which are applied to mixes exported from M1-Transcoder and soon from m1-transcode directly:

    Example:

      Metadata:
        comment         : mach1spatial-8
    

    Examples of Metadata Spec

    ffmpeg (wav output): -metadata ICMT="mach1spatial-8"

    ffmpeg (vorbis output): -metadata spatial-audio='mach1spatial-8'

    ffmpeg (aac output): -metadata comment='mach1spatial-8'

    libsndfile (wav output): outfiles[i].setString(0x05, "mach1spatial-8");

    Formats Supported

    Mach1 Formats

    Traditional / Surround Formats

    Ambisonic Formats (special thanks to VVAudio)

    Common Issues

    The following is a list of commonly heard issues during implementation; it includes audio tools to help find these issues as well as basic descriptions of their behavior and how they can be avoided.

    Orientation Latency Issues

    Orientation Rate Issues (Zipper)

    Audio/Visual Sync Issues

    Spatial Decoding Phase Issues
