Augmented Face Documentation

Overview

This documentation describes the principles behind the XZIMG Augmented Face solution. XZIMG Augmented Face for Unity is a library that provides multi-platform face detection and face tracking on an input video stream. It takes images as input and outputs the 3D pose of the face detected in those images.

Trial version

If you are using the trial version of the XZIMG Augmented Face solution, the detection is intentionally stopped from time to time. Moreover, the experience will stop automatically after a little while. Even though this protection mechanism slightly degrades the resulting experience, we think it is enough to try out the robustness of our detection engine and prepare projects before purchasing the professional version of the Augmented Face solution.

Unity Package

The Augmented Vision SDK plugin for Unity3d consists of a compressed archive organized as a Unity project. It contains the following elements:

  • Materials and 3D objects (such as a mask for the face) are stored in the ./Assets/Face/ and ./Assets/Glasses/ directories.
  • Useful shaders are stored in the ./Assets/Script/ directory.
  • The C# scripts that launch the plugin and control its execution are stored in ./Assets/Script/.
  • The ./Assets/Plugins/ directory includes binary plugins for the different supported platforms: a .dll dynamic library for MS Windows (32/64 bits), a .a library for iOS, and a .so/.jar pair for Android.
  • A working scene example is provided in the ./Assets/Scene directory.

The Unity package contains a face model that is positioned on the face during the tracking experience. This wireframe remains invisible and serves to occlude background elements. When designing a new face tracking experience, you will have to import your assets (such as the sunglasses in the illustration) and position them on top of the face model.

face model and 3D model of sunglasses

Tuning the scene

When launching the project and opening the existing scene stored in the ./Assets/Scene directory, you will have access to an existing hierarchy that is ready to use.

Here are the parameters that are exposed in order to tune the scene (see the Inspector -> Script section); a sketch of how they might map to serialized fields follows the list.

  1. Use Native Capture: indicates whether the application uses the Unity WebCamTexture class or the internal native video capture module provided by XZIMG (available on mobiles).
  2. Video Capture Index: selects a camera by its index (for desktop PCs).
  3. Video Capture Mode: camera resolution mode (0: 320x240, 1: 640x480, 2: 1280x720, 3: 1920x1080). It is advised to set this parameter to 1 (640x480), or possibly 2 or 3 when using a USB HD camera on MS Windows.
  4. Use Frontal: use the frontal camera (change only for mobiles).
  5. Mirror Video: mirror the video display (object positions and orientations are automatically transformed accordingly).
  6. Video Plane Fitting Mode: when the screen aspect ratio and the video aspect ratio differ (for example, a 4:3 video on a 16:9 screen), you can choose to fit the video plane horizontally or vertically.
  7. Camera FOVX: the horizontal field of view of the camera you use. The default value is 60 degrees.
  8. Capture Device Orientation: indicates the camera's real orientation (for PC). Some experiences, such as mirror-like portrait rendering, can take advantage of this.
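The parameters above map to serialized fields on the plugin's C# script. Here is a minimal sketch of such a script; the field names are illustrative, not the actual XZIMG identifiers:

using UnityEngine;

public class FaceTrackingSettings : MonoBehaviour
{
    // 1-2. Native capture toggle and camera index (desktop PCs).
    public bool useNativeCapture = false;
    public int videoCaptureIndex = 0;

    // 3. Resolution mode: 0 = 320x240, 1 = 640x480, 2 = 1280x720, 3 = 1920x1080.
    public int videoCaptureMode = 1;

    // 4-5. Frontal camera (mobiles) and mirrored display.
    public bool useFrontal = false;
    public bool mirrorVideo = false;

    // 6. 0 = fit the video plane horizontally, 1 = fit it vertically.
    public int videoPlaneFittingMode = 0;

    // 7. Horizontal field of view of the physical camera, in degrees.
    public float cameraFOVX = 60.0f;

    // 8. Real orientation of the capture device (PC).
    public int captureDeviceOrientation = 0;
}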

Tuning the Camera

On PC and mobiles, you may want to tune the camera parameters. When possible, it is recommended to reduce the camera shutter time as much as possible and to avoid setting too much gain (it is always a trade-off between these two values). A long shutter time will introduce motion blur, making it difficult to detect an object that moves too fast. Too much gain will introduce white noise, leading to possible instabilities.

Platforms

MS Windows

When using the XZIMG Augmented Face plugin on MS Windows, you might have issues when running the application (in the Editor or as a MS Windows application). If this is the case, please try to install the Microsoft Visual C++ 2013 Redistributable. You can also verify that the plugin dynamic libraries exist (the xzimgAugmentedFace.dll files in the ./Assets/Plugins/ directory).

HTML5

You can Build/Run WebGL through Unity. Just select the WebGL platform in the Build Settings window. Be patient, because the build itself can take a few minutes.

XZIMG Augmented Vision for webGL/HTML5

Note: Augmented Face depends on the HTML5 WebRTC standard to support the camera feed in real time. Most browsers now support Real-Time Communication (WebRTC).

Mobiles

On mobiles (Android / iOS), the video capture class has been redefined internally for performance reasons. As a consequence, be careful when setting the Use Frontal and Mirror Video parameters.

Moreover, it is important to avoid using the Auto Rotation mode as the default orientation. Instead, set the device default orientation to portrait or another static value in the Player Settings section for Android or iOS.
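If your scenario requires forcing the orientation from script rather than through Player Settings, Unity's Screen API can set a static orientation at startup; a minimal sketch (portrait is chosen here only as an example):

using UnityEngine;

public class ForcePortrait : MonoBehaviour
{
    void Awake()
    {
        // Disable auto-rotation by forcing a single static orientation.
        Screen.orientation = ScreenOrientation.Portrait;
    }
}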

Android

The Android version of the plugin is delivered as a .jar package that bridges Unity and the face-tracking native plugin.

  • The native Android plugin is stored in the ./plugins/Android/libs/armeabi-v7a directory.
  • The rigidfacetracking.jar class is stored in the ./plugins/Android/ directory and has a companion Android manifest file that is used when compiling the Android apk.

On Android, please ensure that you are using the OpenGLES2 rendering Graphics API.

XZIMG Augmented Face Unity Project Settings / OpenGL panel

iOS

The iOS version consists of a static .a library that is linked with Unity through XCode. This process is done automatically when building a scene.

Due to a Unity bug, there is an extra parameter to fill in for iOS in the plugin inspector panel: Screen Orientation iOS (see the screenshot below). The chosen orientation mode must correspond to the one specified in the Player Settings -> Resolution and Presentation sub-panel. This extra parameter will be removed as soon as a fix is provided by Unity.

XZIMG Augmented Face Unity plugin orientation setup for iOS

Please ensure that you are using the OpenGLES2 rendering Graphics API; the plugin's background video texturing module doesn't work with other versions of OpenGL ES or with Metal.

XZIMG Augmented Face Unity Project Settings / OpenGL panel

Flash

You can build experiences for your favorite web browser using FlashDevelop. We provide a separate working sample to facilitate integration. Please note that it is not possible to use the Unity Editor to create Flash-based face-tracking applications.

XZIMG Augmented Face: building a Flash-based face-tracking application using FlashDevelop

API Definition

The Unity plugin is built on several native libraries: a pair of libraries (.so and .jar) for Android and a .a library for iOS. Each library exports C-like or Java-like functions. With certain editions of the professional version, you can also access native samples that avoid the use of Unity and take advantage of OpenGLES2 rendering capabilities.

The remainder of this section details the main functions and their signatures.

General API

The general API can be accessed on PC (Windows), Android, and iOS.
/**
* Initialize face tracking and face detectors (parameters are for debug only)
*/
public void xzimgInitializeRigidTracking(char *pathToPCA, char *pathToSVM, char *pathToEyesSVM) 
/**
* Set the horizontal focal angle and processing size
* @param camfovx (input) horizontal focal angle in radians
* @param camWidth (input) size in pixels of the image to process
* @param camHeight (input) size in pixels of the image to process
* @param idxScreenOrientation (input) rotation mode of the rendering: 0 landscape; 1 is rotated counter-clockwise, ...
* @param idxDeviceOrientation (input) rotation mode of the screen: 0 landscape; 1 is rotated counter-clockwise, ...
*/
 public void xzimgSetCalibration(double camfovx, int camWidth, int camHeight, int idxScreenOrientation, int idxDeviceOrientation) 
/**
* Rigid Face Tracking on a given image (PC version)
* @param pImage (input) Pointer on the given image
* @param pTrackingData (input) some parameters to tune the tracking engine
* @param pFaceData (output) face information
* @return 1 if a face is detected or tracked / 0 when no face detected / -1 if pImage is null / -2 if pixel format is erroneous 
*/
public int xzimgRigidTracking(expImage *pImage, expTrackingData *pTrackingData, expRigidFaceData *pFaceData) 
/**
* Release the face tracking engine. To be called before the end of the program.
*/
public void xzimgReleaseRigidTracking() 
/**
* Restart face-tracking initialization
*/
public void xzimgRestartTracking() 
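From Unity, these exported functions are typically reached through C# P/Invoke declarations. The following is a minimal sketch, assuming the Windows library is the xzimgAugmentedFace.dll mentioned above (on iOS the entry point of the statically linked .a library would be "__Internal"); the C# mirrors of the native expImage, expTrackingData and expRigidFaceData structures are omitted:

using System;
using System.Runtime.InteropServices;

public static class XzimgAugmentedFaceNative
{
    // Library name taken from the ./Assets/Plugins/ description above.
    private const string LibName = "xzimgAugmentedFace";

    [DllImport(LibName)]
    public static extern void xzimgInitializeRigidTracking(
        string pathToPCA, string pathToSVM, string pathToEyesSVM);

    [DllImport(LibName)]
    public static extern void xzimgSetCalibration(
        double camfovx, int camWidth, int camHeight,
        int idxScreenOrientation, int idxDeviceOrientation);

    // Pointers to native structs; their C# mirrors must match the
    // native memory layout exactly (not shown here).
    [DllImport(LibName)]
    public static extern int xzimgRigidTracking(
        IntPtr pImage, IntPtr pTrackingData, IntPtr pFaceData);

    [DllImport(LibName)]
    public static extern void xzimgReleaseRigidTracking();

    [DllImport(LibName)]
    public static extern void xzimgRestartTracking();
}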

iOS API

iOS has some specific API functions to handle native video capture and a few specific optimizations.
/**
* Initialize face tracking and face detectors engines for iOS, and initialize the video capture according to specified options
* @param pVideoCaptureOptions video capture options for iOS
* @return >0 if success
*/
public int xzimgFaceApiInitializeRigidTracking(xmgApiVideoCaptureOptions *pVideoCaptureOptions) 
/**
* Rigid Face Tracking on internally captured images for iOS
* @param pTrackingData (input) some parameters to tune the tracking engine
* @param pFaceData (output) detected face information
* @return 1 if a face is detected or tracked / 0 when no face detected 
*/
public int xzimgFaceApiRigidTracking(expTrackingData *pTrackingData, expRigidFaceData *pFaceData) 
/**
* Release the face tracking engine for iOS
*/
public void xzimgFaceApiReleaseRigidTracking() 
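On the Unity side, this API reduces to a simple per-frame polling loop. A minimal sketch, assuming the .a library is statically linked (hence the "__Internal" entry point) and that pointers to C# mirrors of expTrackingData and expRigidFaceData have been allocated during initialization (not shown):

using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class FaceApiDriver : MonoBehaviour
{
    [DllImport("__Internal")]
    private static extern int xzimgFaceApiRigidTracking(
        IntPtr pTrackingData, IntPtr pFaceData);

    // Allocated and pinned during initialization (not shown).
    private IntPtr trackingData;
    private IntPtr faceData;

    void Update()
    {
        // Returns 1 when a face is detected or tracked, 0 otherwise.
        if (xzimgFaceApiRigidTracking(trackingData, faceData) == 1)
        {
            // Read the pose from faceData and apply it to the scene pivot.
        }
    }
}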

Android API

Android has some specific API functions (available as a Java library) to handle native video capture and a few specific optimizations. The Android API is Java-based and integration can be done with Android Studio (you can access the low-level native C-like API as well; see the Desktop API definition). An existing sample in the solution will help you understand how to inherit from the face detection main Activity.
/**
* Open the Camera and Initialize the Native tracking according to parameters
* @param cameraMode camera resolution 0 (320x240) - 1 (640x480) - 2 (1280x720)
* @param isFrontal indicates if the frontal camera has to be used 
* @param fovx horizontal field of view in radians
* @param rotateMode image rotation mode (landscape left, right ...)
*/
public void StartCameraAndInitialize(int cameraMode, boolean isFrontal, double fovx, int rotateMode, boolean highQuality) 
/**
* Set the GL Texture index to be filled with video images
* @param textureID_ptr index of the texture to be filled
* @param uvTextureID_ptr index of the second (uv) texture to be filled
*/
public float[] xzimgAugmentedFaceDetect(int textureID_ptr, int uvTextureID_ptr) 
/**
* Get the position and orientation of the object (relative to the camera)
* @return Euler angles compliant with Unity
*/
public float[] GetPose() 
/**
* Return if a face object is detected
* @return >0 if detected
*/
public int GetDetect() 
/**
* Manually specify the device orientation
* @param idxOrientation current orientation index, from 0 (landscape left) to 4
*/
public void SetNewDeviceOrientation(int idxOrientation) 
/**
* Release the video and the Native tracking
*/
public void Release()

Troubleshooting

XCode Errors

If the following error occurs in XCode: XZIMG Augmented Face/Vision XCode error due to missing CoreImage

Ensure that the CoreImage framework is added when building the application with XCode. You can add this framework once and for all in Unity by selecting the iOS library and setting its dependencies as illustrated.

XZIMG Augmented Face/Vision Dependence on CoreImage

Background Video is Invisible

Please ensure that you are using the OpenGLES2 rendering Graphics API; the plugin's background video texturing module doesn't work with other versions of OpenGL ES or with Metal.

Augmented Vision Documentation

Overview

This manual describes the principles behind the XZIMG Augmented Vision solution. XZIMG Augmented Vision for Unity is a library that provides multi-platform object detection on an input video stream. Taking images as input, it outputs the precise 3D positions and orientations (poses) of the detected objects.

With XZIMG Augmented Vision, you will be able to detect black and white fiducial markers, framed-images and natural images.

Trial version

If you are using the trial version of the XZIMG Augmented Vision solution, the detection is intentionally stopped from time to time. Moreover, the experience will stop automatically after a little while. Even though this protection mechanism slightly degrades the resulting experience, we think it is enough to try out the robustness of our detection engine and prepare projects before purchasing the professional version of the Augmented Vision solution.

Unity Package

The Augmented Vision SDK plugin for Unity3d consists of a compressed archive organized as a Unity project. It contains the following folders:

  • The ./Assets/Plugins/ directory includes binary plugins for the different supported platforms: a .dll dynamic library for MS Windows (32/64 bits), a .a library for iOS, and a .so and a .jar for Android.
  • The ./Assets/Script/ directory contains the C# scripts controlling the plugin, and bridges to call plugin functions.
  • A working scene example is provided in the ./Assets/Scene directory.
  • The ./Assets/Marker/ directory contains images of the available markers.
  • The ./Assets/Resources/ directory contains important shaders for rendering the video capture stream. It also contains images of the targets to detect.

Tuning the scene

When launching the project and opening the existing scene stored in the ./Assets/Scene directory, you will have access to an existing hierarchy that is ready to use.

You will have the choice between three types of detectors (see the red rectangle): the marker detector, the image detector, and the framed-image detector. Select only one of the three options according to your scenario; the other detector objects must remain deselected.

If you are using the image or framed-image detectors, you will have to build classifiers associated with the images you want to detect. This process is explained in the next section.

For each type of detection, a few parameters are exposed in the inspector panel so that you can modify the detector properties. These parameters are preset by default. The corresponding inspector panel is separated into two parameter sections: the parameters for video capture and the parameters for detection.

  1. Video Capture Mode: camera resolution mode (0: 320x240, 1: 640x480, 2: 1280x720, 3: 1920x1080). It is advised to set this parameter to 1 (640x480), or possibly 2 or 3 when using a USB HD camera on MS Windows.
  2. High Precision: when selected, the detector uses higher-resolution images to detect the targets. If you are using natural images in your scenario, it is advised to leave this box unchecked.
  3. Use Frontal: use the frontal camera (change only for mobiles).
  4. Mirror Video: mirror the video display (object positions and orientations are automatically transformed accordingly).
  5. Video Plane Fitting Mode: when the screen aspect ratio and the video aspect ratio differ, you can choose to fit the video plane horizontally or vertically.
  6. Video Plane Scale: experimental parameter to scale the video plane up or down.
  7. Camera FOVX: the horizontal field of view of the camera you use. The default value is 60 degrees.

  1. Marker Type: indicates which kind of marker is active for detection (see next section).
  2. Object Pivot Links: a list of links between scene objects and the targets we want to detect (see the sketch after this list).
  3. Scene Pivot: indicates which scene object has to be moved according to the detected object position.
  4. Classifier: the image to detect, converted to a .bytes classifier (for image and framed-image detectors; .bytes classifier creation is detailed below).
  5. Marker Index: indicates the marker associated with the Scene Pivot (only for marker detection).
  6. Image Real Width: indicates the real size of the target.
  7. Recursive Filter: when selected, the resulting pose is smoothed using a recursive filter.
  8. Filter Strength: strength of the filter, if active.
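As a rough illustration, one entry of the Object Pivot Links list could be modeled by the following serializable class; the type and field names are hypothetical, not the plugin's actual identifiers:

using UnityEngine;

[System.Serializable]
public class ObjectPivotLink
{
    public GameObject scenePivot;       // scene object driven by the detected pose
    public TextAsset classifier;        // .bytes classifier (image / framed-image detectors)
    public int markerIndex = 0;         // marker index (marker detection only)
    public float imageRealWidth = 0.2f; // real width of the printed target
    public bool recursiveFilter = true; // smooth the pose with a recursive filter
    public float filterStrength = 0.5f; // strength of the filter, if active
}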

Classifier Creation

When using the natural image detector or the framed-image detector, you will have to specify which image you want to detect in the video stream. Before being able to identify an image, the engine needs to learn its appearance and construct a corresponding classifier (as a .bytes resource file). This is done in a few steps:

  1. Resize your image so that its largest dimension (width or height) is approximately 400 pixels.
  2. Copy your .jpg image to the ./Assets/Resources/ directory and right-click on it to select the “Create -> XZIMG Natural Image Classifier” (or “Create -> XZIMG Framed-Image Classifier”) menu. See the illustration below. After a little while, the corresponding classifier is created as a .bytes file.
  3. Your classifier should now be visible in the ./Assets/Resources directory of the Project panel. Drag and drop the (.bytes) classifier of interest onto the Classifier field in the inspector panel (or load it from script, as shown below).
  4. You are ready to press Play in Unity3d to verify that the image is correctly detected.
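Because the classifier is a .bytes file stored under ./Assets/Resources/, Unity imports it as a TextAsset, so it can also be assigned from script rather than through the inspector. A minimal sketch (the asset name "myTarget" is hypothetical):

using UnityEngine;

public class ClassifierLoader : MonoBehaviour
{
    void Start()
    {
        // Loads ./Assets/Resources/myTarget.bytes as a TextAsset.
        TextAsset classifier = Resources.Load<TextAsset>("myTarget");

        // Raw classifier data, as expected by the detector's Classifier field.
        byte[] data = classifier.bytes;
    }
}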

AR Targets

Black and White Markers

The following types of markers are available for detection. These markers are stored in the ./Assets/Markers/ directory, and you can select the type of marker you want to detect in the plugin's inspector panel. There are conventional markers (type 2, type 3, type 4 and type 5) and special markers (type 6 and type 7), which have a larger border to improve detection under optical blur. If you choose 2x2 (type 2) markers, you will be able to detect only 2 different markers. When using 5x5 (type 5) markers, you can detect up to 4096 combinations.

2x2 marker (type1) 3x3 marker (type2) 4x4 marker (type3) 5x5 marker (type4)

Framed Images

Framed-images are natural images surrounded by a black rectangle. The black rectangle helps stabilize the detection of the target, resulting in fast and stable frame-to-frame tracking.

Images contained in the frame must be non-rotationally symmetric and sufficiently textured. You can easily try out your image by pressing the Play button in the Unity Editor to verify that your framed-image works properly.

Natural Images

Natural images are images you can find in magazines, playing cards, paintings, etc. They must be non-rotationally symmetric and sufficiently textured. Natural images shouldn't be too glossy, because glossy surfaces produce specular highlights that disturb the detection. If possible, use matte paper when printing the target images.

Tuning the Camera

On PC and mobiles, you may want to tune the camera parameters. When possible, it is recommended to reduce the camera shutter time as much as possible and to avoid setting too much gain (it is always a trade-off between these two values). A long shutter time will introduce motion blur, making it difficult to detect an object that moves too fast. Too much gain will introduce white noise, leading to possible instabilities.

Platforms

MS Windows

When using the XZIMG Augmented Vision plugin on MS Windows, you might have issues when running the application (in the Editor or as a .exe). If this is the case, please try to install the Microsoft Visual C++ 2013 Redistributable. You can also verify that the plugin dynamic libraries exist (the xzimgAugmentedVision.dll files in the ./Assets/Plugins/ directory).

HTML5

You can Build/Run WebGL through Unity. Just select the WebGL platform in the Build Settings window. Be patient, because the build itself can take a few minutes.

XZIMG Augmented Vision for webGL/HTML5

Note: Augmented Vision depends on the HTML5 WebRTC standard to support the camera feed in real time. Most browsers now support Real-Time Communication (WebRTC).

Mobiles

On mobiles (Android or iOS), the video capture class has been redefined internally for performance reasons. As a consequence, be careful when setting the Use Frontal and Mirror Video parameters.

Moreover, it is important to avoid using the Auto Rotation mode as the default orientation. Instead, set the device default orientation to portrait or another static value, as illustrated below.

Android

The Android version of the plugin is delivered as a .jar package that bridges Unity and the native tracking plugin.

  • The native Android plugin is stored in the ./plugins/Android/libs/armeabi-v7a directory.
  • The rigidfacetracking.jar class is stored in the ./plugins/Android/ directory and has a companion Android manifest file that is used when compiling the Android apk.

On Android, please ensure that you are using the OpenGLES2 rendering Graphics API.

XZIMG Augmented Face Unity Project Settings / OpenGL panel

iOS

The iOS version consists of a static .a library that is linked with Unity through XCode. This process is done automatically when building a scene.

Please ensure that you are using the OpenGLES2 rendering Graphics API; the plugin's background video texturing module doesn't work with other versions of OpenGL ES or with Metal.

Here are the common Player Settings options:

XZIMG Augmented Vision Unity Project Settings / OpenGL panel

Troubleshooting

XCode Errors

If the following error occurs: XZIMG Augmented Face/Vision XCode error due to missing CoreImage

Ensure that the CoreImage framework is added when building the application with XCode. You can add this framework once and for all in Unity by selecting the iOS library and setting its dependencies as illustrated.

XZIMG Augmented Face/Vision Dependence on CoreImage

Background Video is Invisible

Please ensure that you are using the OpenGLES2 rendering Graphics API; the plugin's background video texturing module doesn't work with other versions of OpenGL ES or with Metal.

Magic Face Documentation

Overview

XZIMG Magic Face is a software solution that provides multi-platform, real-time facial features detection and tracking on an input video stream. It takes images as input and outputs the positions of the facial features on the face. These facial features can be detected in the image plane in 2D, or in 3D space using a projective model. Its robustness and efficiency make it an ideal solution for designing face-replacement and makeup applications.

XZIMG Magic Face 1.0 - Ideal solution for face replacement and makeup applications

The solution takes the form of a Unity plugin (native libraries plugged into Unity) and native samples using OpenGLES (native libraries plugged into XCode or Android Studio). In each case a sample is provided to ease integration. Once face features are detected, one or several texture masks can be mapped onto the face.

The XZIMG Magic Face solution can export experiences to multiple platforms; Windows, Android, iOS, and WebGL/HTML5 are currently supported.

XZIMG Magic Face 1.0 - Paul Newman is Augmented with a Mask

Trial version

If you are using the trial version of the XZIMG Magic Face solution, a watermark is displayed in front of the rendering view. Moreover, the experience will stop automatically after a little while. Even though this protection mechanism slightly degrades the resulting experience, you can still evaluate the robustness of our solution and prepare projects to convince your clients.

Facial Features

The XZIMG Magic Face solution provides facial features locations. There exist several versions of the face features model: the 51 and 68 face features models, both illustrated below, and a 73 face features extension.

  • The 68 face features model includes the face contour. It is well adapted to face replacement using reference faces or toy masks.
  • The 51 face features model includes only the face features that lie inside the face. This model is better adapted to makeup-like applications.
  • The 73 face features model includes the face contour and the forehead. It is a simple extension of the 68 face features model with 5 new features located on the forehead.

XZIMG Magic Face 1.0 - The 68 face features model
The 68 face features model. This model includes face features on the contour of the face.

XZIMG Magic Face 1.0 - The 51 face features model
The 51 face features model contains only internal features.

Unity Package

The plugin consists of the following files stored in the Unity project directory:

  • Shaders, materials and masks are stored in the ./Resources/ directory.
  • Face classifiers are stored in the ./Resources/ directory (see for example regressor-51LM.bytes).
  • The C# scripts that launch the plugin and control its execution are stored in ./Script/.
  • The native independent libraries are stored in the ./Plugins/ directory:
    • for MS Windows, two .dll files are provided (one for the x86 architecture and one for x64),
    • for iOS, a .a static library is provided, and
    • for Android, a .so library and the corresponding .jar Java library.
  • Example scenes are provided in the ./Scene/ directory.

Selecting Unity scene

There exist three different modes for detecting facial features.

XZIMG Magic Face 1.0 - Scene Selection

  1. Magic Face 2D Features: 2D image facial features detection. With this mode, 2D image coordinates are obtained from the face tracking process.
  2. Magic Face 3D Features: a 3D facial features detection mode. With this mode, a 3D deformable model is obtained along with a 6-degrees-of-freedom pose and projective information.
  3. Magic Face 3D Tracking: a rigid face tracking engine (similar to the Augmented Face product). This is work in progress...

Tuning Unity scene

When launching the project and opening the existing sample scene (stored in the ./Assets/Scene directory), you will have access to an existing scene hierarchy that can run directly.

However, some exposed parameters allow you to tune the scene (see the inspector assigned to the script object); a sketch of how they might map to serialized fields follows the list.

XZIMG Magic Face 1.0 - Parameters Panel

  1. Face Features Mode: select between the different face features models (the 51 face features model is adapted to makeup applications, while the 68 face features model targets face replacement applications).
  2. Render Object: link to the Unity GameObject container for the face (created if not assigned).
  3. Render Texture: link to the resource image (.png / .jpg) to be loaded as a mask texture.
  4. Render Texture Width: width of the mask texture image (this value has to be filled in manually!).
  5. Render Texture Height: height of the mask texture image (this value has to be filled in manually!).
  6. Transparency: transparency applied by the rendering shader.
  7. Capture Device Orientation: indicates the camera's real orientation (for PC). Some experiences, such as mirror-like portrait rendering, can take advantage of this.
  8. Draw Texture: draw the texture rather than the model mesh.
  9. Video Parameters: refer to Augmented Face.
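As with Augmented Face, these parameters map to serialized fields on the plugin script. A minimal sketch with illustrative (not actual) field names:

using UnityEngine;

public class MagicFaceSettings : MonoBehaviour
{
    // 1. Face features model: 51 (makeup) or 68 (face replacement).
    public int faceFeaturesMode = 51;

    // 2-3. Face container object and the mask texture resource.
    public GameObject renderObject;  // created if not assigned
    public Texture2D renderTexture;  // .png / .jpg mask texture

    // 4-5. Must match the mask image dimensions; filled in manually!
    public int renderTextureWidth = 1024;
    public int renderTextureHeight = 1024;

    // 6. Transparency applied by the rendering shader.
    [Range(0f, 1f)] public float transparency = 1.0f;

    // 7-8. Capture device orientation (PC) and texture-vs-mesh rendering.
    public int captureDeviceOrientation = 0;
    public bool drawTexture = true;
}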

Tuning the Camera

On PC and mobiles, you may want to tune the camera parameters. When possible, it is recommended to reduce the camera shutter time as much as possible and to avoid setting too much gain (it is mostly a trade-off between these two values). A long shutter time will introduce motion blur, making it difficult to detect an object that moves too fast. Too much gain will introduce white noise, leading to possible instabilities.

Platforms

MS Windows

When using XZIMG Magic Face on MS Windows, you might have issues when running the application (in the Editor or as a MS Windows application). If this is the case, please try to install the Microsoft Visual C++ 2013 Redistributable. You can also verify that the plugin dynamic libraries exist (the xzimg-MagicFace-SDK.dll files in the ./Assets/Plugins/ directory).

Mobiles

On mobiles (Android / iOS), the video capture class has been redefined internally for performance reasons. As a consequence, be careful when setting the Use Frontal and Mirror Video parameters.

Moreover, it is important to avoid using the Auto Rotation mode as the default orientation. Instead, set the device default orientation to portrait or another static value in the Player Settings section for Android or iOS.

On Android and iOS, ensure that you are using the OpenGLES2 rendering Graphics API. The background video texturing module won't work with other versions of OpenGL ES or with Metal.

XZIMG Augmented Face Unity Project Settings / OpenGL panel

Android

The Android version of the plugin is delivered as a .jar package that bridges Unity and the face-tracking native plugin.

  • The native Android plugin is stored in the ./plugins/Android/libs/armeabi-v7a directory.
  • The rigidfacetracking.jar class is stored in the ./plugins/Android/ directory and has a companion Android manifest file that is used when compiling the Android apk.

iOS

The iOS version consists of a static .a library that is linked with Unity through XCode. This process is done automatically when building a scene.

WebGL/HTML5

HTML5 experiences can be exported through Unity. If you encounter issues with camera opening (this can happen due to a Unity bug in the Unity 5.5.x versions), you are advised to use a previous version of Unity from the 5.4.x line (you'll find a package dedicated to the Unity 5.4.x versions in our download section).

Native Samples

The native samples avoid the use of Unity and provide examples of integrating the face features detection functionality. The native samples are available for both iOS and Android and take advantage of OpenGLES2 rendering capabilities.

These native samples use native libraries consisting of a pair of libraries (.so and .jar) for Android and a .a library for iOS. Each library exports C-like or Java-like functions.

Android API

The Android API is Java-based and integration can be done with Android Studio (you can access the low-level native C-like API as well). An existing sample in the solution will help you understand how to inherit from the face detection main Activity.

The remainder of this section details the main functions' signatures and the corresponding documentation.

/**
* Initialize the augmented face engine
* @param width: size of the processed frames (should be the same as video capture resolution)
* @param height: size of the processed frames (should be the same as video capture resolution)
* @param fovx_degree: unused
* @param nbFaceFeatures number of face features to be detected: 51 or 68 (with contour)
*/
public void initialize(int width, int height, double fovx_degree, int nbFaceFeatures) 
/**
* Release the augmented face engine
*/
public void release() 
/**
* Detect face and track features in 2D
* @param yuvFrame Current image in native yuv pixel format
* @param orientation Orientation for the face detection (0 for landscape left, 1 for portrait, ...)
* @param width size of the video frame
* @param height size of the video frame
* @return null if no face is detected
*    [0] indicates if a face is detected (>0)
*    [1..6] returns face pose (not computed in 2D case)
*    [7] indicates the number of features
*    [8..8+number of features*2]    returns face features coordinates
*    [9+ number of features*2]  indicates the number of triangles
*    [9+ number of features*2 +1 , ...] indicates the triangles indices
*/
public float [] trackFaceFeatures2D(ByteBuffer yuvFrame, int orientation, int width, int height) 
/**
* Add a colored image as a layer using GL Surface View
* @param context current context
* @param resourceId index of the image resource to be rendered
* @param x translate the image (normalized image coordinates)
* @param y translate the image (normalized image coordinates)
* @return zero or less if failed
*/
public int add2DImage(final Context context, final int resourceId, float x, float y) 
/**
* Add a colored image as a layer using GL Surface View
* @param bitmap image to be displayed
* @param x translate the image (normalized image coordinates)
* @param y translate the image (normalized image coordinates)
* @return zero or less if failed
*/
public int add2DImage(Bitmap bitmap, float x, float y) 
/**
* Add an effect to the face by rendering a GL layer
* @param context current context
* @param resImageIdx index of the image resource to be rendered
* @param resFaceFeaturesIdx index of the text resource to be used for identifying face features locations
* @param type type of effect to be rendered (0 to add a texture on top of the face)
*/
public int addEffect(final Context context, final int resImageIdx, final int resFaceFeaturesIdx, int type) 
/**
* Add an effect to the face by rendering a GL layer
* @param bitmap image to be displayed
* @param faceFeaturesLocations array that contains image coordinates of face features
* @param type type of effect to be rendered (0 to add a texture on top of the face)
*/
public int addEffect(Bitmap bitmap, float [] faceFeaturesLocations, int type) 
/**
* Get the rendered RGB image after applying the materials
* @return rgb byte buffer that contains the frame
*/
public ByteBuffer getAugmentedImageRGB() 
/**
* Display/Hide face features
* @param renderFaceFeatures true to display the detected face features
*/
public void setRenderFaceFeatures(boolean renderFaceFeatures)
/**
* Display/Hide static images
* @param renderStaticImageLayers true to display the static image layers
*/
public void setRenderStaticImageLayers(boolean renderStaticImageLayers)
/**
* Display/Hide face textures
* @param renderFaceTextureLayers true to display the face texture layers
*/
public void setRenderFaceTextureLayers(boolean renderFaceTextureLayers)

iOS API

The iOS API is Objective-C based and developed with XCode. The XCode sample provides the necessary information for a simple integration.
/**
 *	Initialize the detection engine
 *	@param camWidth captured image size
 *	@param camHeight captured image size
 *	@param nbFaceFeatures face features model (51 or 68)
 */
int ICV_DLL_SIGNATURE Initialize(int camWidth, int camHeight, int nbFaceFeatures)
/**
*	Release the detection engine
*/
void ICV_DLL_SIGNATURE Release()
/**
 *	Initialize the GL Engine
 */
void ICV_DLL_SIGNATURE InitializeGL()
/**
 *	Release the GL Engine
 */
void ICV_DLL_SIGNATURE ReleaseGL()
/**
 *	Add a 2D Texture to be fitted on the face
 *	@param ptrImage RGBA pixels of the image
 *	@param width
 *	@param height
 *	@param uvCoordinates texture coordinates between 0..1 corresponding to face features
 */
void ICV_DLL_SIGNATURE AddEffect(unsigned char *ptrImage, int width, int height, float *uvCoordinates)
/**
 *	Add a 2D Texture in front of the video plane
 *	@param ptrImage RGBA pixels of the image
 *	@param width
 *	@param height
 *	@param deltaX translation of the image position (0..1)
 *	@param deltaY translation of the image position (0..1)
 */
void ICV_DLL_SIGNATURE AddStaticLayer(unsigned char *ptrImage, int width, int height, float deltaX, float deltaY)
/**
 *	Detect/Track Face and find face features locations
 *	@param ptrYUVImage yuv pixels to texture the video plane
 *	@param orientation face orientation for detection (0 landscape left, 1 portrait upside down, 2 landscape right, 3 portrait)
 *	@param width size of the video frame
 *	@param height size of the video frame
 *	@param xmgFaceData face result (filled when a face is detected)
 */
int ICV_DLL_SIGNATURE TrackNonRigidFaces2D(unsigned char *ptrYUVImage, int orientation, int width, int height, xmgFaceData &xmgFaceData);
/**
 *	Render with openGL
 *	@param faceData face features information
 *	@param yuvImage yuv pixels to texture the video plane
 *	@param width
 *	@param height
 *	@param renderFaceFeatures render or not face features with blue circles
 *	@param renderStaticLayer render or not the static layer
 *	@param renderFaceTexture render or not the face texture
 */
void ICV_DLL_SIGNATURE Render(xmgFaceData &faceData, unsigned char *yuvImage, int width, int height, float scaleX, float scaleY, bool renderFaceFeatures, bool renderStaticLayer, bool renderFaceTexture);

The following class is used to exchange face data with the face detection library.

class xmgFaceData
{
public:
	xmgFaceData()
	{
		m_faceDetected = 0;
		m_nbLandmarks = 0;
		m_nbTriangles = 0;
		m_landmarks = nullptr;
		m_triangles = nullptr;
	}
	
	int m_faceDetected;
	
	int m_nbLandmarks;
	float *m_landmarks;
	int m_nbTriangles;
	int *m_triangles;
};

Troubleshooting

XCode Errors

If the following error occurs in XCode: XZIMG Augmented Face/Vision XCode error due to missing CoreImage

Ensure that the CoreImage framework is added when building the application with XCode. You can add this framework once and for all in Unity by selecting the iOS library and setting its dependencies as illustrated.

XZIMG Augmented Face/Vision Dependence on CoreImage

Background Video is Invisible

Please ensure that you are using the OpenGLES2 rendering Graphics API; the plugin's background video texturing module doesn't work with other versions of OpenGL ES or with Metal.

Rendering problems

If your texture looks like the following:

XZIMG Magic Face 1.0 - Texturation Mapping issue

You might have forgotten to specify the correct dimensions for the texture image mask:

XZIMG Magic Face 1.0 - Width and Height parameters

Exported experience does not play on Windows

You have to ensure that the Microsoft Redistributable Packages for Visual Studio 2013 are properly installed. If they are not, you can download them from Microsoft's website.