Furhat Gesture Capture Tool (Beta)

The Gesture Capture Tool can be used to create life-like and expressive facial expressions, gaze, and lip movements that can be played on the robot. It converts a recording of a face from a motion capture toolkit into a gesture that can be played on the robot. The resulting gestures can be incorporated into the robot's automatic behavior, or they can form complete canned responses combining speech, lip, eye, and head movements.

The tool is currently released as a Beta and may undergo major changes before a fully supported version is available.

Input: a .csv file generated by the iPhone app Live Link Face.
Output: a Furhat gesture file in .json format.

The tool requires an iPhone and the Live Link Face app, which uses ARKit to capture facial parameters. These parameters are supported by the FaceCore face engine in the Furhat Platform (available in version 2.0.0 and later). The Gesture Capture Tool translates them into the corresponding ARKit params in the Furhat Skill Framework.
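If you want to see what such parameters look like from the skill side, the sketch below hand-authors a small gesture with ARKit-style parameters. It is only a hypothetical example: the ARKitParams class and the parameter names BROW_INNER_UP and JAW_OPEN are assumptions based on the FaceCore documentation, so check the parameter reference for your SDK version.

import furhatos.gestures.ARKitParams // assumption: ARKit parameter definitions for FaceCore
import furhatos.gestures.defineGesture

// Hypothetical hand-written gesture using the same kind of ARKit parameters
// that the Gesture Capture Tool writes into the frames of its generated .json files.
val BrowRaise = defineGesture("BrowRaise") {
    frame(0.2, 0.6) {
        ARKitParams.BROW_INNER_UP to 0.8 // assumed parameter name
        ARKitParams.JAW_OPEN to 0.1      // assumed parameter name
    }
    reset(1.0) // return to a neutral face after one second
}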

Unfortunately, we currently do not have any tools to support facial recording on Android phones.

Download the Gesture Capture Tool (Beta):

You first need to download the Live Link Face app on an iPhone X or a newer model.

Recording a gesture

Next, record the facial expression using the Live Link Face app. Transfer the data generated by the app to your computer. It will be a folder containing a .csv file, a .mov file (with a video of your recording), and two more files.

Turn the captured face into a Furhat gesture

Run the Gesture Capture Tool and open the .csv file of your captured facial expression.

To make a gesture with default settings, just click Generate Gesture. A .json file with the specified gesture name will be created and exported to the directory of the Gesture Capture Tool.

Screenshot of the _Gesture Capture Tool_

Settings

The following settings can be adjusted before generating the gesture:

  • Gesture Name - This determines the name of the gesture and .json file.

  • ms per frame - The minimum number of milliseconds between frames. Because this is a minimum, a value of 50 results in a frame rate of slightly under 20 fps, and a value of 100 in a frame rate of slightly under 10 fps.

  • Delay gesture x ms - This is an offset that adds some time before the gesture starts. If the gesture starts at frame zero, it begins suddenly and not very organically. A value of 500 gives half a second of easing into the gesture.

  • Articulation modifier - All parameters for facial movements (everything except neck/head movements) will be multiplied by this value. A value of 1.2 makes the gesture more exaggerated; a value of 0.9 makes it less articulated.

  • Speed modifier - This affects the speed of the gesture. A value of 2.0 makes it twice as fast; 0.5 means half the speed. If you want to synchronize the gesture with sound, it is recommended to leave this at 1.0.

  • Facial movements to bring into gesture - These four checkboxes are all checked by default. Unchecking one of them excludes those facial features when the gesture is created. For example, unchecking the Eyes box means the generated gesture will contain all facial features except the eye movements (including blinking). Unchecking Neck means no neck/head movements are built into the gesture.

  • Add code for audio - This adds a frame at the very beginning of the .json file that plays a sound file from the resources folder (defaulting to the subfolder audio), as in the example below. You have to put the sound file into the resources folder manually. If you want, you can take the .mov file generated by Live Link Face, export the audio from it with your favorite video editor, and convert that audio into a .wav file. Once the .json is imported into IntelliJ, you can change the file name and location, as well as the time (to sync the audio with the movements), as you like.

{
  "time": [
    0.5
  ],
  "persist": false,
  "params": {},
  "audio": "classpath:audio/soundFile.wav"
},

Using the gesture

Open the example skill in IntelliJ and move the .json file you just generated into the resources folder. In general.kt in the flow you will see a line like this:

val testGesture = getResourceGesture("/lipSync.json")
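Once loaded, the gesture can be played from anywhere in the flow with furhat.gesture(). The snippet below is a minimal sketch; the state name TestGestureState is hypothetical, and it assumes the standard Furhat flow DSL imports.

import furhatos.flow.kotlin.*

// Hypothetical state that plays the captured gesture as soon as it is entered.
val TestGestureState: State = state {
    onEntry {
        furhat.gesture(testGesture)
    }
}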

Usage example

Move the file into the IntelliJ skill at <FurhatSkill>/resources/gestures/smileAndWink.json. Then load the .json so it can be used as a gesture in your skill:

// Load the recorded gesture from the skill's resources (fail fast if the file is missing)
val resource = RecordedgestureSkill::class.java.getResourceAsStream("/gestures/smileAndWink.json")
    ?: error("Could not find /gestures/smileAndWink.json in the skill resources")
val smileAndWinkGesture = Record.fromJSON(resource.bufferedReader().readText()) as Gesture

The gesture can now be used in the skill with the command: furhat.gesture(smileAndWinkGesture)

Gestures that include gaze or head movements can interfere with the auto-behavior of the robot, and the gesture might not look exactly the way you want. You should consider temporarily turning off Furhat's auto-behavior before running the gesture using:

furhat.setDefaultMicroexpression(blinking = false, facialMovements = false, eyeMovements = false)

And turn it back on again afterwards using:

furhat.setDefaultMicroexpression(blinking = true, facialMovements = true, eyeMovements = true)
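Putting this together, a state like the following sketch suppresses the auto-behavior only while the recorded gesture is playing. The state name, the 2000 ms delay (roughly the length of the recording), and the use of the flow DSL's delay() are assumptions to adapt to your own skill.

import furhatos.flow.kotlin.*

// Hypothetical state that turns off auto-behavior, plays the recorded gesture,
// waits roughly as long as the recording, and then restores the auto-behavior.
val PlayRecordedGesture: State = state {
    onEntry {
        furhat.setDefaultMicroexpression(blinking = false, facialMovements = false, eyeMovements = false)
        furhat.gesture(smileAndWinkGesture)
        delay(2000) // assumed length of the recording in milliseconds; adjust to your gesture
        furhat.setDefaultMicroexpression(blinking = true, facialMovements = true, eyeMovements = true)
    }
}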