DeepAR SDK for Web v1.3.2

DeepAR SDK for Web is an augmented reality SDK that allows users to integrate advanced, Snapchat-like face lenses in the browser environment.

The trial version displays the DeepAR logo, can run only for a limited time, and works only on localhost.

Changelog

v1.3.2 - 31/01/2019

  • External camera

v1.3.1 - 08/01/2019

  • Multi-face tracking
  • setCanvasSize function to reset the viewport size

v1.3 - 02/01/2019

  • Added onFaceTracked

v1.2.5 - 05/11/2018

  • Added audio playback

v1.2.4 - 18/10/2018

  • Improved API and documentation
  • Transition to WebAssembly

Usage

Steps for integrating the DeepAR SDK for Web:

  • Open the HTML file where you want to integrate the DeepAR SDK. In this tutorial, we will use an empty HTML document.
<!DOCTYPE HTML>
<html>
<head>
<title>DeepAR</title>
</head>
<body>

</body>
</html>
  • Add the canvas element inside the body:
<canvas id="deepar-canvas"></canvas>

The canvas id must be "deepar-canvas".

  • Upload the necessary data files:

    • model (models-68-extreme.cl)
    • effect files
    • library (deepar.js and deepar.wasm)
  • Add the following script just before the body closing tag:

<script type="text/javascript" src="lib/deepar.js"></script>
<script type="text/javascript">

  // Initialize the DeepAR object
  var deepAR = DeepAR({ 
    canvasWidth: window.innerWidth, 
    canvasHeight: window.innerHeight,
    canvas: document.getElementById('deepar-canvas'),
    onInitialize: function() {
      // start video immediately after the initialization, mirror = true
      deepAR.startVideo(true);
      deepAR.switchEffect(0, 'slot', './effects/aviators', function() {
        // effect loaded
      });
    }
  });

  deepAR.onCameraPermissionAsked = function() {
    console.log('camera permission asked');
  };

  deepAR.onCameraPermissionGranted = function() {
    console.log('camera permission granted');
  };

  deepAR.onCameraPermissionDenied = function() {
    console.log('camera permission denied');
  };

  deepAR.onScreenshotTaken = function(photo) {
    console.log('screenshot taken');
  };

  deepAR.onImageVisibilityChanged = function(visible) {
    console.log('image visible', visible);
  };

  deepAR.onFaceVisibilityChanged = function(visible) {
    console.log('face visible', visible);
  };

  deepAR.onVideoStarted = function() {
    console.log('video started');
  };

  deepAR.onFaceTracked = function(detected, translation, rotation, poseMatrix, landmarks, landmarks2d) {
    // called every frame with the current face tracking data
  };

  // download the face tracking model
  deepAR.downloadFaceTrackingModel('models/models-68-extreme.cl');
</script>

API

DeepAR(settings)

Used to initialize the DeepAR SDK. Example usage:

var deepAR = DeepAR({
    canvasWidth: window.innerWidth, 
    canvasHeight: window.innerHeight,
    canvas: document.getElementById('deepar-canvas'),
    onInitialize: function() {
      // called when the initialization is finished
    }
});

Settings:

  • canvasWidth, canvasHeight: width and height of the canvas surface where the camera feed with AR objects will be rendered
  • canvas: the canvas DOM element
  • onInitialize: the function which will be called when the initialization is finished. No other API should be used before that.

switchEffect(face, slot, path, callback)

Downloads the effect file and loads it into the engine. This is the method used to switch any effect in the scene. Effects are placed in slots. Every slot is identified by its unique name and can hold one effect at any given moment. Every subsequent call to this method removes the effect that was previously displayed in this slot. The face parameter specifies on which face the effect will be applied. The allowed values are 0, 1, 2 and 3, where 0 is the first face detected, 1 the second, and so on. Tracking more than one face is enabled only in a special SDK version. Using this method, the user can combine multiple effects in the scene, as in the sketch below. When the effect is downloaded and loaded into the engine, the callback will be called.
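
For example, a minimal sketch that combines two effects on the same face. The slot names are arbitrary, and './effects/tophat' is a placeholder path, not an effect shipped with the SDK:

// load two effects into independently named slots on the first face
deepAR.switchEffect(0, 'glasses', './effects/aviators', function() {
  // aviators effect loaded into the 'glasses' slot
});
deepAR.switchEffect(0, 'hat', './effects/tophat', function() {
  // a later call with the same 'hat' slot would replace this effect
});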

downloadFaceTrackingModel(path, callback)

Downloads the face tracking model and initializes the face tracking. When it's done, the callback is invoked.
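
For example:

// initialize face tracking and get notified when the model is ready
deepAR.downloadFaceTrackingModel('models/models-68-extreme.cl', function() {
  // face tracking model downloaded and face tracking initialized
});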

startVideo(mirror, videoOptions)

Starts the video feed. The video feed will be horizontally mirrored if mirror=true. The videoOptions parameter will be passed to getUserMedia, like this:

navigator.mediaDevices.getUserMedia({video: videoOptions})

Here's the order of events which will be called after startVideo:

  1. onCameraPermissionAsked
  2. onCameraPermissionGranted or onCameraPermissionDenied
  3. onVideoStarted
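
For example, a minimal sketch that requests the front-facing camera. The constraint object is a standard getUserMedia video constraint, not anything DeepAR-specific:

// start a mirrored video feed from the front-facing camera
deepAR.startVideo(true, { facingMode: 'user' });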

setVideoElement(videoElement, mirror)

Used to pass the HTMLVideoElement to the DeepAR SDK. The SDK will grab frames from the video element and render the frames with masks/filters to the canvas. This method should be used instead of startVideo when you want to handle getUserMedia outside of the SDK or you need to apply the masks to any video stream.
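
For example, a minimal sketch that handles getUserMedia outside of the SDK; the video element id is illustrative:

// hypothetical <video> element somewhere in the page
var video = document.getElementById('my-video');

navigator.mediaDevices.getUserMedia({ video: true }).then(function(stream) {
  video.srcObject = stream;
  video.play();
  // let the SDK grab frames from this element, mirrored
  deepAR.setVideoElement(video, true);
});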

stopVideo()

Stops the video feed.

fireTrigger(name)

This method allows the user to fire a custom animation trigger for model animations from code. To fire a custom trigger, the trigger string must match the custom trigger set in the Studio when creating the effect.
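
For example, where 'my_custom_trigger' is a placeholder for a custom trigger set in the Studio:

// fire a custom animation trigger from code
deepAR.fireTrigger('my_custom_trigger');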

changeParameter(gameObject, component, parameter, value)

This method allows the user to change parameters during runtime. The method requires a Game Object name, Component name, Parameter name and a float value.

  • gameObject is the name of the Game Object in the hierarchy as visible in the Studio. The name should be unique for the method to work properly.
  • component is the internal component type. Currently, the only supported component type for this method is "MeshRenderer". The component must exist on the game object.
  • parameter is the name of the parameter to be changed, for example, the name of the blendshape on a mesh. The parameter must exist for the component.
  • value is the new value to be applied to the parameter.
// Change the weight of the blendShape1.cuteBS blendshape on the Mesh Game Object to 5.0 (5%).
deepAR.changeParameter('Mesh', 'MeshRenderer', 'blendShape1.cuteBS', 5.0);

changeParameterVector(gameObject, component, parameter, x, y, z, w)

Same as above, except this method can be used to change the value of a four-component vector parameter. It can be used to change a material color. If multiple Game Objects share the same material, changing the parameter once for any Game Object using the material will change it for all Game Objects. To change a uniform on a material, such as color, the parameter must use the internal uniform name. These names can be found in the shader files.

// Change the color of the u_diffuseColor uniform (diffuse color for Standard shader) on the material applied to the Mesh Game Object to a semi-transparent green.        
deepAR.changeParameterVector('Mesh', 'MeshRenderer', 'u_diffuseColor', 0.0, 1.0, 0.0, 0.25);

setFaceDetectionSensitivity(sensitivity)

This method allows the user to change face detection sensitivity. The sensitivity parameter can range from 0 to 3, where 0 is the fastest but might not recognize smaller (further away) faces, and 3 is the slowest but will find smaller faces. By default, this parameter is set to 1.
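
For example:

// favor finding smaller (further away) faces over tracking speed
deepAR.setFaceDetectionSensitivity(3);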

takeScreenshot()

Captures a screenshot of the current screen. Event onScreenshotTaken will be called after it's done.
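
For example, a sketch that triggers a browser download of the screenshot via the onScreenshotTaken event:

deepAR.onScreenshotTaken = function(photo) {
  // photo is a data URL, the result of HTMLCanvasElement.toDataURL("image/png")
  var link = document.createElement('a');
  link.href = photo;
  link.download = 'screenshot.png';
  link.click();
};

deepAR.takeScreenshot();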

setCanvasSize(width, height)

Changes the canvas size. It should be called after the engine is initialized (onInitialize has been called).
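
For example, a sketch that keeps the rendering viewport in sync with the window size:

window.addEventListener('resize', function() {
  deepAR.setCanvasSize(window.innerWidth, window.innerHeight);
});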

Events

onInitialize()

The method is called when the engine is initialized.

onScreenshotTaken(photo)

Called when the screenshot is taken. The photo parameter is the result of HTMLCanvasElement.toDataURL("image/png").

onCameraPermissionAsked()

Called when the camera permission is asked.

onCameraPermissionGranted()

Called when the camera permission is granted.

onCameraPermissionDenied()

Called when the camera permission is denied.

onFaceVisibilityChanged(visible)

Called whenever the face enters or exits the camera field of view.

onImageVisibilityChanged(visible)

Called whenever the image enters or exits the camera field of view.

onVideoStarted()

Called when the video is started.

onFaceTracked(detected, translation, rotation, poseMatrix, landmarks, landmarks2d)

Called for each camera frame with the face data.
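
For example, a minimal handler that logs the pose data while a face is detected (the exact layout of the pose and landmark arrays is not documented here, so the logging is illustrative):

deepAR.onFaceTracked = function(detected, translation, rotation, poseMatrix, landmarks, landmarks2d) {
  if (detected) {
    // translation and rotation describe the pose of the tracked face
    console.log('face translation', translation);
    console.log('face rotation', rotation);
  }
};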

Running the example

The DeepAR SDK for Web package contains the full example. The easiest way to run the example (on macOS) is to open the terminal, go to the example directory, run the server.py script with python server.py, then open the browser and go to http://localhost:8888/.

Supported browsers

  • Desktop
    • Google Chrome 66+
    • Safari 11.1+
    • Firefox 60+
    • Edge 42+
  • iOS
    • Safari on iOS 11
  • Android
    • Google Chrome 66+