Augmented Face Documentation

Overview

The goal of this documentation is to describe the principles behind the XZIMG Augmented Face solution. The XZIMG Augmented Face for Unity is a library that provides cross-platform face detection and face tracking using an input video-stream. It takes images as inputs and outputs the 3D pose of the face detected in the images.

Trial version

If you are using the trial version of the XZIMG Augmented Face solution, detection stops intentionally from time to time. Moreover, the experience will stop automatically after a little while. Even though this protection mechanism slightly degrades the resulting experience, we think it is enough to try out the robustness of our detection engine and to prepare projects before purchasing the professional version of the Augmented Face solution.

Unity Package

The Augmented Vision SDK plugin for Unity3d consists of a compressed archive organized as a Unity project. It contains the following elements:

  • Materials and 3D objects (such as a mask for the face) are stored in the ./Assets/Face/ and ./Assets/Glasses/ directories.
  • Useful shaders are stored in the ./Assets/Script/ directory.
  • The C# scripts to launch the plugin and control its execution are stored in ./Assets/Script/.
  • The ./Assets/Plugins/ directory includes binary plugins for the different available platforms: a .dll dynamic library for MS Windows (32/64 bits), a .a library for iOS, and a .so/.jar for Android.
  • A working scene example is provided in the ./Assets/Scene directory.

The Unity package contains a face model that is positioned on the face during the tracking experience. This wireframe remains invisible and acts as an occluder, masking background elements. When designing a new face-tracking experience, you will have to import your assets (such as the sunglasses in the illustration) and position them on top of the face model.

face model and 3D model of sunglasses

Tuning the scene

When opening the project and the existing scene stored in the ./Assets/Scene directory, you will have access to an existing hierarchy that is ready to work.

Here are some parameters that are exposed in order to tune the scene (see the Inspector -> Script section).

  1. Use Native Capture: indicates whether the application uses Unity's webcamTexture class or the internal native video capture module provided by XZIMG (available on Mobiles).
  2. Video Capture Index: select a camera from its index (for desktop PC).
  3. Video Capture Mode: camera resolution mode (0: 320x240, 1: 640x480, 2: 1280x720, 3: 1920x1080). It is advised to set this parameter to 1 (640x480), or possibly 2 or 3 when using a USB HD camera on MS Windows.
  4. Use Frontal: use the frontal camera (change only for Mobiles).
  5. Mirror Video: mirror the video display (object positions and orientations are automatically transformed accordingly).
  6. Video Plane Fitting Mode: in case the screen aspect ratio and the video aspect ratio are different, you can choose to fit the video plane horizontally or vertically.
  7. Camera FOVX: horizontal field of view of the camera you use. The default value is 60 degrees.
  8. Capture Device Orientation: indicates the camera's real orientation (for PC). Some experiences, such as mirror-like portrait rendering, can take advantage of this setting.
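These parameters are public fields on the plugin's controller script, so they can also be set from code before the scene starts. The component and field names in the sketch below are hypothetical illustrations; check the actual scripts in ./Assets/Script/ for the real names.

```csharp
// Hypothetical sketch: setting the exposed scene parameters from code
// instead of the Inspector. The component and field names are illustrative
// only; the real ones are defined in the C# scripts under ./Assets/Script/.
using UnityEngine;

public class xmgSceneSetup : MonoBehaviour
{
    void Awake()
    {
        var p = GetComponent<xmgAugmentedFaceParams>(); // hypothetical component name
        p.useNativeCapture = false;  // use Unity's webcamTexture on desktop
        p.videoCaptureIndex = 0;     // first camera (desktop PC)
        p.videoCaptureMode = 1;      // 640x480, the recommended default
        p.useFrontal = true;         // frontal camera on Mobiles
        p.mirrorVideo = true;        // mirror display; poses are adjusted automatically
        p.cameraFOVX = 60.0f;        // horizontal field of view in degrees
    }
}
```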

Tuning the Camera

On PC and Mobiles, you may want to tune the camera parameters. When possible, it is recommended to reduce the camera shutter (exposure) time as much as possible and to avoid setting too much gain (it is always a trade-off between these two values). A long shutter time introduces motion blur, making it difficult to detect an object that moves too fast, while too much gain introduces white noise, leading to possible instabilities.

Platforms

MS Windows

When using the XZIMG Augmented Face plugin on MS Windows, you might have issues when running the application (in the Editor or as a MS Windows application). If this is the case, please try to install the Microsoft Visual C++ 2013 Redistributable packages. You can also verify that the plugin dynamic libraries exist (xzimgAugmentedFace.dll files in the ./Assets/Plugins/ directory).

HTML5

You can Build/Run WebGL through Unity: just select the WebGL platform in the Build Settings window. Be patient, because the build itself can take a few minutes.

XZIMG Augmented Vision for webGL/HTML5

Note: Augmented Face depends on the HTML5 WebRTC standard to support the camera feed in real-time. Most browsers now support Real-Time Communications (RTC).

Mobiles

When exporting on Mobiles (Android / iOS), it is important to avoid the use of the Auto Rotation mode as the default orientation. Instead, it is best to set the device default orientation to portrait or other static values in the Player Settings section for Android or iOS.

Android

The Android version of the plugin is delivered as a .jar package that acts as a bridge between Unity and the face-tracking native plugin.

  • The native Android plugin is stored in the ./plugins/Android/libs/armeabi-v7a directory.
  • The rigidfacetracking.jar class is stored in the ./plugins/Android/ directory and has a companion Android manifest file that will be used when compiling the Android apk.

iOS

The iOS version consists of a static .a library that is linked with Unity using XCode. This process is done automatically when building a scene.

API Definition

The Unity plugin is composed of different native libraries: a pair of libraries (.so and .jar) for Android and a .a library for iOS. Each library exports C-like or Java-like functions. With certain editions of the professional version, you can have access to native samples to avoid the use of Unity and take advantage of OpenGLES2 rendering capabilities.

The remainder of this section details the main functions and their signatures.

General API

The general API can be accessed on PC (Windows), Android, and iOS.
/**
* Initialize face tracking and face detectors (parameters are for debug only)
*/
public void xzimgInitializeRigidTracking(char *pathToPCA, char *pathToSVM, char *pathToEyesSVM) 
/**
* Set the horizontal focal angle and processing size
* @param camfovx (input) horizontal field of view in radians
* @param camWidth (input) width in pixels of the image to process
* @param camHeight (input) height in pixels of the image to process
* @param idxScreenOrientation (input) rotation mode of the rendering: 0 landscape; 1 is rotated counter-clockwise, ...
* @param idxDeviceOrientation (input) rotation mode of the capture device: 0 landscape; 1 is rotated counter-clockwise, ...
*/
public void xzimgSetCalibration(double camfovx, int camWidth, int camHeight, int idxScreenOrientation, int idxDeviceOrientation) 
/**
* Rigid Face Tracking on a given image (PC version)
* @param pImage (input) Pointer on the given image
* @param pTrackingData (input) some parameters to tune the tracking engine
* @param pFaceData (output) face information
* @return 1 if a face is detected or tracked / 0 when no face detected / -1 if pImage is null / -2 if pixel format is erroneous 
*/
public int xzimgRigidTracking(expImage *pImage, expTrackingData *pTrackingData, expRigidFaceData *pFaceData) 
/**
* Release the face-tracking engine. To be called before the end of the program.
*/
public void xzimgReleaseRigidTracking() 
/**
* Restart face-tracking initialization
*/
public void xzimgRestartTracking() 
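A typical call sequence for the general API, as used from a C# bridge in Unity, might look like the following sketch. Only the function names, their ordering, and the return codes come from the listing above; the marshalling details (ref parameters, struct contents) are assumptions for illustration.

```csharp
// Hypothetical usage sketch of the general API call sequence. Function
// names and return codes follow the listing above; marshalling details
// are illustrative assumptions.

// 1. Initialize the tracking engine once (paths are for debug only).
xzimgInitializeRigidTracking(null, null, null);

// 2. Declare the camera geometry: 60 deg horizontal FOV in radians,
//    640x480 processing size, landscape screen and device orientations.
xzimgSetCalibration(60.0 * System.Math.PI / 180.0, 640, 480, 0, 0);

// 3. Per frame: run tracking on the current image and read the result.
int status = xzimgRigidTracking(ref image, ref trackingData, ref faceData);
if (status == 1)      { /* face detected or tracked: use faceData pose */ }
else if (status == 0) { /* no face in this frame */ }
else                  { /* -1: null image, -2: erroneous pixel format */ }

// 4. Before quitting, release the engine.
xzimgReleaseRigidTracking();
```

If tracking is lost and you want to force a fresh detection, xzimgRestartTracking() can be called at any point between steps 2 and 4.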

iOS API

iOS has some specific API functions to handle native video capture and a few specific optimizations.
/**
* Initialize face tracking and face detectors engines for iOS, and initialize the video capture according to specified options
* @param pVideoCaptureOptions video capture options for iOS
* @return >0 if success
*/
public int xzimgFaceApiInitializeRigidTracking(xmgApiVideoCaptureOptions *pVideoCaptureOptions) 
/**
* Rigid face tracking on internally captured images for iOS
* @param pTrackingData (input) some parameters to tune the tracking engine
* @param pFaceData (output) detected face information
* @return 1 if a face is detected or tracked / 0 when no face detected 
*/
public int xzimgFaceApiRigidTracking(expTrackingData *pTrackingData, expRigidFaceData *pFaceData) 
/**
* Release the face-tracking engine for iOS
*/
public void xzimgFaceApiReleaseRigidTracking() 

Android API

Android has some specific API functions (available as a Java library) to handle native video capture and a few specific optimizations. The Android API is Java-based and integration can be done with Android Studio (you can have access to the low-level native C-like API as well; see the Desktop API definition). An existing sample in the solution will help you understand how to inherit from the face-detection main Activity.
/**
* Open the Camera and Initialize the Native tracking according to parameters
* @param cameraMode camera resolution 0 (320x240) - 1 (640x480) - 2 (1280x720)
* @param isFrontal indicates if the frontal camera has to be used 
* @param fovx horizontal field of view in radians
* @param rotateMode image rotation mode (landscape left, right ...)
*/
public void StartCameraAndInitialize(int cameraMode, boolean isFrontal, double fovx, int rotateMode, boolean highQuality) 
/**
* Set the GL Texture index to be filled with video images
* @param textureID_ptr index of the texture to be filled
* @param uvTextureID_ptr index of the second (uv) texture to be filled
*/
public float[] xzimgAugmentedFaceDetect(int textureID_ptr, int uvTextureID_ptr) 
/**
* Get the position and orientation of the object (relative to the camera)
* @return Euler angles compliant with Unity
*/
public float[] GetPose() 
/**
* Return whether a face is detected
* @return >0 if detected
*/
public int GetDetect() 
/**
* Manually specify the device orientation
* @param idxOrientation current orientation index (from 0, landscape left, up to 4)
*/
public void SetNewDeviceOrientation(int idxOrientation) 
/**
* Release the video and the Native tracking
*/
public void Release()
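From Unity, the Java API above is typically reached through AndroidJavaObject. In the sketch below, only the method names and their ordering come from the listing above; the fully qualified Java class name is an assumption for illustration.

```csharp
// Hypothetical sketch of driving the Android Java API from Unity C#.
// Method names come from the listing above; the Java class name
// ("com.xzimg.facetracking.Bridge") is an assumption for illustration.
using UnityEngine;

public class xmgAndroidFaceDriver
{
    AndroidJavaObject bridge =
        new AndroidJavaObject("com.xzimg.facetracking.Bridge"); // hypothetical class

    public void Begin()
    {
        // 640x480 frontal camera, 60 deg FOV in radians, landscape, high quality.
        bridge.Call("StartCameraAndInitialize", 1, true, 60.0 * Mathf.Deg2Rad, 0, true);
    }

    public void Frame()
    {
        if (bridge.Call<int>("GetDetect") > 0)
        {
            float[] pose = bridge.Call<float[]>("GetPose"); // Unity-compliant Euler angles
            // ...apply the pose to the scene pivot here...
        }
    }

    public void End() { bridge.Call("Release"); } // release camera and native tracking
}
```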

Troubleshooting

XCode Errors

If the following error occurs in XCode: XZIMG Augmented Face/Vision XCode Error due to CoreImage missing

Make sure that the CoreImage package is added when building the application with XCode. You can add this package once and for all in Unity by selecting the iOS library and setting the dependencies as illustrated.

XZIMG Augmented Face/Vision Dependence with CoreImage

Background Video is Invisible

Please make sure that you are using the OpenGLES2 graphics API; the plugin's background video texturing module doesn't work with other versions of OpenGLES or with Metal.

Augmented Vision Documentation

Overview

This manual describes the principles behind the XZIMG Augmented Vision Solution. XZIMG Augmented Vision for Unity is a library that provides cross-platform object detection on a given input video-stream. Taking images as inputs, it outputs the precise 3D positions and orientations (poses) of the detected objects.

With XZIMG Augmented Vision, you will be able to detect black and white fiducial markers, framed-images and natural images.

Trial version

If you are using the trial version of the XZIMG Augmented Vision Solution, detection stops intentionally from time to time. Moreover, the experience will stop automatically after a little while. Even though this protection mechanism slightly degrades the resulting experience, we think it is enough to try out the robustness of our detection engine and to prepare projects before purchasing the professional version of the Augmented Vision solution.

Unity Package

The Augmented Vision SDK plugin for Unity3d consists of a compressed archive organized as a Unity project. It contains the following folders:

  • The ./Assets/Plugins/ directory includes binary plugins for the different available platforms: a .dll dynamic library for MS Windows (32/64 bits), a .a library for iOS, and a .so and a .jar for Android.
  • The ./Assets/Script/ directory contains C# scripts controlling the plugin, and bridges to call plugin functions.
  • A working scene example is provided in the ./Assets/Scene directory.
  • The ./Assets/Marker/ directory contains images of the available markers.
  • The ./Assets/Resources/ directory contains important shaders for rendering the video capture stream. It also contains images of targets to detect.

Tuning the scene

When opening the project and the existing scene stored in the ./Assets/Scene directory, you will have access to an existing hierarchy that is ready to work.

You’ll have the choice between three types of detectors (see the red rectangle): marker detector, image detector, and framed-image detector. You can select only one of the three options according to your scenario; the other objects must remain unselected.

If you are using image or framed-image detectors, you will have to build classifiers associated with the image you want to detect. This process is explained in the next section.

For each type of detection, a few parameters are exposed in the inspector panel to let you modify the detector properties. These parameters are preset by default. The corresponding inspector panel is separated into two sections: the parameters for video capture and the parameters for the detection.

  1. Video Capture Mode: camera resolution mode (0: 320x240, 1: 640x480, 2: 1280x720, 3: 1920x1080). It is advised to set this parameter to 1 (640x480), or possibly 2 or 3 when using a USB HD camera on MS Windows.
  2. High Precision: when selected, the detector will use higher-resolution images to detect the targets. If you are using natural images in your scenario, it is advised to leave this box unchecked.
  3. Use Frontal: use the frontal camera (change only for Mobiles).
  4. Mirror Video: Mirror the video display (object positions and orientations are automatically transformed accordingly).
  5. Video Plane Fitting Mode: In case screen aspect ratio and video aspect ratio are different, you can choose to fit the video plane horizontally or vertically.
  6. Video Plane Scale: experimental parameter to scale up/down the video plane.
  7. Camera FOVX: Adjust the camera horizontal field of view of the camera you use. Default value is 60 degrees.

  1. Marker Type: indicates which kind of marker should be active for detection (see next section).
  2. Object Pivot Links: a list of links between scene objects and the targets we want to detect.
  3. Scene Pivot: indicates which scene object has to be modified according to the detected object's position.
  4. Classifier: the image to detect, converted to a .bytes classifier (for image and framed-image detectors; .bytes classifier creation is detailed below).
  5. Marker Index: indicates the marker associated with the Scene Pivot (only for marker detection).
  6. Image Real Width: indicates the real size of the target.
  7. Recursive Filter: when selected, the resulting pose will be smoothed using a recursive filter.
  8. Filter Strength: strength of the filter, if active.

Classifier Creation

When using the natural-image detector or the framed-image detector, you will have to specify which image you want to detect in the video stream. Before being able to identify an image, the engine needs to learn its appearance and construct a corresponding classifier (as a .bytes resource file). This is done in a few steps:

  1. Resize your image with the largest dimension (width or height) at approximately 400 pixels.
  2. Copy your .jpg image to the ./Assets/Resources/ directory and right-click on it to select the “Create -> XZIMG Natural Image Classifier” (or “Create -> XZIMG Framed-Image Classifier”) menu, as illustrated below. After a little while, the corresponding classifier is created as a .bytes file.
  3. Your classifier must now be visible in the ./Assets/Resources directory of the Project panel. Drag-and-drop the (.bytes) classifier of interest to the Classifier field in the inspector panel.
  4. You are now ready to run Unity3d to verify that the image is correctly detected.

AR Targets

Black and White Markers

The following types of markers are available for detection. These markers are stored in the ./Assets/Markers/ directory. You can select the type of marker you want to detect in the plugin's inspector panel. There are conventional markers (types 2, 3, 4, and 5) and special markers (types 6 and 7), which have a larger border to improve detection under optical blur. If you choose 2x2 (type 2) markers, you will be able to detect only 2 different markers. When using 5x5 (type 5) markers, you can detect up to 4096 combinations.

2x2 marker (type1) 3x3 marker (type2) 4x4 marker (type3) 5x5 marker (type4)

Framed Images

Framed-images are natural images surrounded by a black rectangle. The black rectangle helps stabilize the detection of the target, resulting in fast and stable frame-to-frame tracking.

Images contained in the frame must be non-rotationally-symmetric and textured enough. You can easily try out your image by pressing the Play button in the Unity Editor to verify that your framed-image works properly.

Natural Images

Natural images are images you can find in magazines, playing cards, paintings, etc. They must be non-rotationally-symmetric and textured enough. Natural images shouldn’t be too glossy, because glossy surfaces produce specular highlights that disturb the detection. If possible, use matte paper when printing the target images.

Tuning the Camera

On PC and Mobiles, you may want to tune the camera parameters. When possible, it is recommended to reduce the camera shutter (exposure) time as much as possible and to avoid setting too much gain (it is always a trade-off between these two values). A long shutter time introduces motion blur, making it difficult to detect an object that moves too fast, while too much gain introduces white noise, leading to possible instabilities.

Platforms

MS Windows

When using the XZIMG Augmented Vision plugin on MS Windows, you might have issues when running the application (in the Editor or as a .exe). If this is the case, please try to install the Microsoft Visual C++ 2013 Redistributable packages. You can also verify that the plugin dynamic libraries exist (xzimgAugmentedVision.dll files in the ./Assets/Plugins/ directory).

HTML5

You can Build/Run WebGL through Unity: just select the WebGL platform in the Build Settings window. Be patient, because the build itself can take a few minutes.

XZIMG Augmented Vision for webGL/HTML5

Note: Augmented Vision depends on the HTML5 WebRTC standard to support the camera feed in real-time. Most browsers now support Real-Time Communications (RTC).

Mobiles

With mobiles (Android or iOS), the video capture class has been redefined internally for performance reasons. As a consequence, be very careful when setting the Use Frontal and Mirror Video parameters.

Moreover, it is important to avoid using the Auto Rotation mode as the default orientation. Instead, it is best to set the device default orientation to portrait or another static value, as illustrated below.

Android

The Android version of the plugin is delivered as a .jar package that acts as a bridge between Unity and the face-tracking native plugin.

  • The native Android plugin is stored in the ./plugins/Android/libs/armeabi-v7a directory.
  • The rigidfacetracking.jar class is stored in the ./plugins/Android/ directory and has a companion Android manifest file that will be used while compiling Android apk.

iOS

The iOS version consists of a static ".a" library that is linked with Unity using XCode. This process is done automatically when building a scene.

Troubleshooting

XCode Errors

If the following error occurs: XZIMG Augmented Face/Vision XCode Error due to CoreImage missing

Make sure that the CoreImage package is added when building the application with XCode. You can add this package once and for all in Unity by selecting the iOS library and setting the dependencies as illustrated.

XZIMG Augmented Face/Vision Dependence with CoreImage

Background Video is Invisible

Please make sure that you are using the OpenGLES2 graphics API; the plugin's background video texturing module doesn't work with other versions of OpenGLES or with Metal.

Magic Face Documentation

Overview

XZIMG Magic Face is a solution that provides cross-platform, real-time facial-feature detection and tracking in 3D space using images captured from a video stream. Facial features can be detected in the image plane (2D), or in 3D space using a projective camera model and a deformable face model. Magic Face delivers efficient and robust results, which makes it an ideal solution for designing face-replacement and make-up applications.

XZIMG Magic Face 1.0 - Ideal solution for face replacement and makeup applications - copyright @xzimg

Starting from Magic Face 3.0, the solution also contains:

  • a real-time segmentation engine (which segments hair and background),
  • a real-time emotion detector which can retrieve simple emotions such as happiness, surprise, ...,
  • an eye tracker which provides pupil coordinates in 2D and 3D.

The listed deep-learning / AI functionalities are available in real-time for both Android and iOS. The latest versions (from Magic Face 3.0 onwards) also provide standard Action Units (AU) values to parametrize face deformations and animate avatars properly. Note that this is work in progress.

The solution takes the form of a plugin (native libraries plugged into Unity). In each case, a sample is provided to ease the integration. Once face features are detected, we can map one or several texture masks onto the face.

The XZIMG Magic Face solution can export experiences on multiple platforms. We currently support Windows, Android, iOS, and WebGL/HTML5. Note that from Magic Face 2.0, the macOS player is supported (beta) when using a standard Mac camera.

XZIMG Magic Face 1.0 - Paul Newman is Augmented with a Mask - copyright @xzimg

 



Trial version

If you are using the trial version of the XZIMG Magic Face solution, a watermark will be displayed in front of the rendering view. Moreover, the experience will stop automatically after a predefined period of time.

 

Face Features and Models

One main objective of the XZIMG Magic Face solution is to provide facial-feature locations in real-time. The model is constructed from 68 landmarks located inside the face and on its contours, as illustrated below.

XZIMG Magic Face 1.0 - The 68 face features model - copyright @xzimg
The 68 face features model. This model includes face features on the contour of the face.

The landmark model controls a dense mesh which in turn is used as a support for mapping a texture. The model is available in the ./Resources directory as face-model.obj and should contain exactly 730 vertices when imported in Unity. In case one needs to modify the scenario and add new effects on top of the face, the model has to be edited in a 3D editing tool (such as Blender). Face-model textures and texture coordinates can be modified, as long as the vertices and their order are left unmodified.

Body Parts Segmentation

From Magic Face 3.0, the solution contains a segmentation engine based on recent deep-learning developments. We rely on powerful, optimized RCNNs (Recurrent Convolutional Neural Networks) to achieve real-time segmentation of the scene. We provide segmentation for hair and body.

The segmentation is computed when calling the xzimgMagicSegmentationProcess API function. The segmentation result is obtained by calling the xzimgMagicSegmentationGetSegmentationImage API function.
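A per-frame segmentation loop therefore alternates these two calls. In the sketch below, only the two function names come from the text above; the argument lists are assumptions for illustration (the real signatures are documented in xmgMagicFaceBridge.cs).

```csharp
// Hypothetical per-frame segmentation sketch. Only the two function names
// (xzimgMagicSegmentationProcess / xzimgMagicSegmentationGetSegmentationImage)
// come from this documentation; the argument lists are illustrative
// assumptions -- see xmgMagicFaceBridge.cs for the real signatures.
void OnFrame(ref xmgMagicFaceBridge.xmgImage cameraImage,
             ref xmgMagicFaceBridge.xmgImage segmentationMask)
{
    // Run the deep-learning segmentation on the current camera image...
    xmgMagicFaceBridge.xzimgMagicSegmentationProcess(ref cameraImage);

    // ...then fetch the resulting mask, e.g. to feed VideoShaderHairDying
    // or VideoShaderBodySegmentation.
    xmgMagicFaceBridge.xzimgMagicSegmentationGetSegmentationImage(ref segmentationMask);
}
```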

Different example shaders are included in the Asset/Resources/Shader/ folder. VideoShaderHairDying.shader is dedicated to dyeing the user's hair with preselected colors, while VideoShaderBodySegmentation.shader removes the background and replaces it with a transparent layer image. Note that densely and robustly classifying image pixels is a computationally expensive task; as a consequence, the API won't deliver real-time results on older devices.

XZIMG Magic Face 3.0 - using deep learning to segment in real-time - copyright @xzimg

Emotions Detection

XZIMG Magic Face 3.0 provides an emotion-detection engine. This process can be activated by selecting the corresponding mode in the inspector window in Unity. It is based on a CNN (Convolutional Neural Network) and has been optimized to work on mobile phones with limited computational resources. Apart from the neutral emotion, it detects different expressions such as happiness, angriness, sadness, and surprise.

XZIMG Magic Face 3.0 - using deep learning to detect multiple emotions in real-time - copyright @xzimg

Unity Plugin Package

The plugin consists of the following files stored in the Unity project directory:

  • Shaders, materials and masks are stored in the ./Resources/ directory.
  • Face classifiers (which are necessary to detect and track faces) and deep-learning classifiers for segmentation are stored in the ./Resources/ directory.
  • The C# scripts to launch the plugin and control its execution are stored in ./Script/.
  • The native independent libraries are stored in the ./Plugins/ directory:
    • for MS Windows, two .dll files are provided (one for the x86 architecture and another for x64),
    • for HTML5, a .bc library is provided,
    • for iOS, a .a static library is provided, and
    • for Android, a .so library and the corresponding .jar Java library.
  • The main scene is provided in the ./Scene/ directory.

The main scene (stored in the ./Scene/ directory) contains a working scenario. You will have access to an existing scene hierarchy that can run directly. You can modify different parameters to access the different functionalities.

Tuning Parameters in Unity

Different parameters are exposed to help you change the face detector properties through Unity's inspector section.

XZIMG Magic Face 3.0 - Parameters Panel

  1. Segmentation Mode: select between different segmentations: None, Hair, Body, and Body robust (available on iOS only).
  2. Face Detection: the engine will detect faces and face features (default mode).
  3. Face Emotions: the engine will detect face emotions (Neutral, Happiness, Surprise, Angriness, Sadness).
  4. Track Eyes Positions: the engine will detect eye positions in 2D and 3D.
  5. Display Action Units: the engine will return and display Action Units.
  6. Nb Max Faces: how many faces would you like to be detected at the same time?
  7. Rendered Face Objects Size: where you define a certain number of masks and corresponding textures.
  8. Rendered Pivot: object in the scene that will be linked with the first face discovered.
  9. Render Texture: link to the resource texture to be adjusted on top of the pivot mesh.
  10. Render Texture Width / Height: dimensions of the mask texture image (these values have to be filled in manually!).
  11. Render Shader: shader to be used for rendering this face mask.
  12. Render Mode: modes to change the rendering effects (experimental).
  13. Capture Device Orientation: indicates the capture device's physical orientation, in case you are using a real camera positioned in portrait mode (parameter only available on Desktop platforms).
  14. Dye Color: selects the default color for the hair-dyeing process.
  15. Screen Debug / Segmentation Debug: displays debug text and debug windows on top of the rendering screen. Note that the debug mask for segmentation is an expensive process and must be made visible for debugging only.
  16. Custom 3D Object: you can use this field to link a 3D object in the scene to the face. This is useful to add a virtual hat or sunglasses, for example.
There are also parameters to define camera capture properties:
  1. Use Native Capture: Indicates if the application uses Unity webcamTexture class or the internal native video capture module provided by XZIMG (available on Mobiles).
  2. Video Capture Index: Select a camera from its index (for desktop PC).
  3. Video Capture Mode: camera resolution mode (0: 320x240, 1: 640x480, 2: 1280x720, 3: 1920x1080). It is advised to set this parameter to 1 (640x480), or possibly 2 or 3 when using a USB HD camera on MS Windows.
  4. Use Frontal: Use the frontal camera (change only for Mobiles).
  5. Mirror Video: Mirror the video display (object positions and orientations are automatically transformed accordingly).
  6. Video Plane Fitting Mode: In case screen aspect ratio and video aspect ratio are different, you can choose to fit the video plane horizontally or vertically.
  7. Camera Vertical FOV: Adjust the camera vertical field of view of the camera you use. Default value is 45 degrees.
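Note that this panel expects a vertical field of view, while the Augmented Face panel uses a horizontal one. If your camera specification gives the other convention, the two are related through the capture aspect ratio; the small helper below (not part of the SDK, all names are illustrative) performs the conversion.

```csharp
// Helper (not part of the SDK): convert a horizontal field of view to the
// vertical one expected here, using the capture aspect ratio.
// Relation: tan(fovy / 2) = tan(fovx / 2) * height / width
using System;

static class xmgFovHelper
{
    public static double HorizontalToVerticalFov(double fovxDeg, int width, int height)
    {
        double fovx = fovxDeg * Math.PI / 180.0;                                  // degrees -> radians
        double fovy = 2.0 * Math.Atan(Math.Tan(fovx / 2.0) * height / (double)width);
        return fovy * 180.0 / Math.PI;                                            // radians -> degrees
    }
}
// Example: a 60 deg horizontal FOV on a 640x480 capture gives roughly 47 deg vertical.
```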

Tuning the Camera on PC

On PC and Mobiles, you may want to tune the camera parameters. When possible, it is recommended to reduce the camera shutter (exposure) time as much as possible and to avoid setting too much gain (it is mostly a trade-off between these two values). A long shutter time introduces motion blur, making it harder to detect an object that moves too fast, while too much gain introduces white noise, leading to possible instabilities.

Platforms

MS Windows

When using XZIMG Magic Face on MS Windows, you might have issues when running the application (in the Editor or as a MS Windows application). If this is the case, please try to install the Microsoft Visual C++ 2017 Redistributable packages. You can also verify that the plugin dynamic libraries exist (xzimgMagicFace.dll files in the ./Assets/Plugins/ directory).

Mobiles

With mobiles (Android / iOS), the video capture class has been redefined internally for performance reasons. As a consequence, set the Use Frontal and Mirror Video parameters carefully.

Moreover, it is important to avoid the use of the Auto Rotation mode as the default orientation. Instead, it is best to set the device default orientation to portrait or other static values in the Player Settings section for Android or iOS.

Android

The Android version of the plugin is delivered as a .jar package that acts as a bridge between Unity and the face-tracking native plugin.

  • The native Android plugin is stored in the ./plugins/Android/libs/armeabi-v7a directory.
  • The rigidfacetracking.jar class is stored in the ./plugins/Android/ directory and has a companion Android manifest file that will be used while compiling Android apk.

iOS

The iOS version consists of a static .a library that is linked with Unity using XCode. This process is done automatically when building a scene.

WebGL/HTML5

HTML5 experiences can be exported through Unity.

How to Change Video Source

To replay videos, to support specific capture devices, … you might want to use specific video frames as input to the face-tracking engine. On desktop, the default video capture is provided by the Unity class WebCamTexture, while on iOS and Android we use a native implementation of the video capture which is hidden inside our plugin.

To change the video source, follow these steps:

  • Check that the video capture is provided by the Unity layer and not natively; see the plugin parameters below:

  • With the non-native video capture mode activated, you can use the following function to launch face detection/tracking on a given image:
    private xmgMagicFaceBridge.xmgImage m_image;
    xmgMagicFaceBridge.xzimgMagicFaceTrackNonRigidFaces(ref m_image,...) 
  • You can prepare the input image using the provided PrepareImage() function, where the pixel handle describes the memory placement of the image:
    xmgMagicFaceBridge.PrepareImage(ref m_image, captureWidth, captureHeight, 4, m_myWebCamEngine.m_PixelsHandle.AddrOfPinnedObject()); 

Note: you have to input an image that has the same size as the size declared at plugin initialization (xmgInitParams.m_processingWidth and xmgInitParams.m_processingHeight).
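Putting the steps above together, a per-frame update with a custom Unity-side capture might look like the following sketch. The bridge calls (PrepareImage, xzimgMagicFaceTrackNonRigidFaces) are the ones quoted above; the WebCamTexture and GCHandle plumbing, and the component name, are illustrative assumptions.

```csharp
// Sketch of feeding custom video frames to the tracker. The bridge calls
// are quoted from this documentation; the WebCamTexture/GCHandle plumbing
// is an illustrative assumption for a Unity-side capture.
using System.Runtime.InteropServices;
using UnityEngine;

public class xmgCustomVideoSource : MonoBehaviour
{
    WebCamTexture m_capture;
    Color32[] m_pixels;
    GCHandle m_pixelsHandle;
    xmgMagicFaceBridge.xmgImage m_image;

    void Start()
    {
        // Size must match xmgInitParams.m_processingWidth/Height used at init.
        m_capture = new WebCamTexture(640, 480);
        m_capture.Play();
        m_pixels = new Color32[640 * 480];
        m_pixelsHandle = GCHandle.Alloc(m_pixels, GCHandleType.Pinned);
    }

    void Update()
    {
        if (!m_capture.didUpdateThisFrame) return;
        m_capture.GetPixels32(m_pixels); // copy the frame into the pinned buffer

        // Wrap the pinned RGBA buffer (4 channels) as a plugin image...
        xmgMagicFaceBridge.PrepareImage(ref m_image, 640, 480, 4,
            m_pixelsHandle.AddrOfPinnedObject());

        // ...and run face detection/tracking on it.
        xmgMagicFaceBridge.xzimgMagicFaceTrackNonRigidFaces(ref m_image /*, ... */);
    }

    void OnDestroy() { m_pixelsHandle.Free(); }
}
```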

Plugin API functions

API functions are detailed and documented in the xmgMagicFaceBridge.cs script file.

Troubleshooting

XCode Errors

If the following error occurs in XCode: XZIMG Augmented Face/Vision XCode Error due to CoreImage missing

Make sure that the CoreImage package is added when building the application with XCode. You can add this package once and for all in Unity by selecting the iOS library and setting the dependencies as illustrated.

deactivate XZIMG native video source

Video Stream is upside down

On certain devices (the Nexus 6 series has been identified), the video is captured upside-down by the hardware layer of the capture. As a consequence, the displayed video is inverted. To work around this issue, we have added the parameter m_flippedHorizontaly to the xmgImage class; you can set this parameter to true whenever it is required.

Mobile App. is too slow

The video capture module can be slow on some low-end devices because it relies on the hardware implementation. We suggest that you reduce the video capture resolution to 640x480 on older devices to keep a decent frame rate.

Video Stream is blurred

If the rendered video seems blurry, it is possible that the requested video resolution is not available on the phone you are using; in that case, you can try switching to different modes. For example, you can set the videoCaptureMode to 2 to get HD-720p resolution. On a Google Pixel 2, for instance, it is recommended to use the HD video format, as the SD mode doesn't work properly.

macOS video capture

With macOS, it is not always easy to get the video capture working. We have done our best to deal with most cameras within Unity, but in some cases you might face issues. If that's the case, try switching to different resolution modes with the videoCaptureMode parameter and check the xmgVideoCaptureParameters.cs script to verify that the video capture resolution is correct.

Exported experience is not working on Windows

Please ensure that the Microsoft Redistributable packages for Visual Studio 2017 are properly installed. If not, you can download them from Microsoft's website.