Furhat AI Creator character parameters

There are four tabs that let you configure different dimensions of your character and the interaction. Here we cover some of the parameters that you can change. There are also tooltips within the application itself to help you further.

Character

This is where you describe your character, choose how it looks and how it speaks.

Character description

The main way you control your character is through the Character Description under the "Character" tab.

NOTE

This is different from the Short Description, which only helps you recognize the character when selecting it.

The Character Description is where you prompt the model and tell it what kind of character it should play.
You can be:

  • Very elaborate → Create a vivid character with a rich backstory, clear personality traits, and examples of behavior.
  • Very brief → Let the AI make its own (often more generic) interpretation.

The richer and clearer you are in your description, the closer the character will behave to your intentions.

Interaction

This is where you can control some more of the robot's 'meta' behaviour.

Require user attention

Enabling Require User Attention means Furhat will only respond when a user is looking at it.

  • Furhat can detect where users are looking.
  • If two people are speaking to each other (not looking at Furhat), Furhat will stay silent.
  • Furhat continues listening and remembering the conversation even when quiet, to respond appropriately when attention returns.

Disengagement threshold

Furhat uses face detection to decide if it’s in an active conversation:

  • If it detects at least one face within interaction distance, it starts a conversation.
  • If no face is detected, Furhat waits for a certain time (default 3 seconds), then terminates the conversation and restarts the character.

You can configure this:

  • Increase or decrease the time before disengagement.
  • Set to infinite to prevent Furhat from automatically ending conversations.
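The disengagement behaviour above can be sketched roughly in Python (purely illustrative, not the Creator implementation; `detect_faces` stands in for Furhat's face detector):

```python
import time

def wait_for_disengagement(detect_faces, threshold_seconds=3.0):
    """Return once no face has been detected for `threshold_seconds`.

    Passing threshold_seconds=None models the 'infinite' setting:
    the conversation is never ended automatically.
    """
    last_seen = time.monotonic()
    while True:
        if detect_faces():
            last_seen = time.monotonic()  # attention refreshed
        elif (threshold_seconds is not None
                and time.monotonic() - last_seen > threshold_seconds):
            return "terminate_and_restart"
        time.sleep(0.05)
```

Raising the threshold simply widens the window in which a lost face does not end the conversation, which is why a very high value also hides the case where one user is replaced by another.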

NOTE

A drawback with setting the disengagement threshold too high is that Furhat won't recognize if a new person replaces the old one.

Examples of losing face detection:

  • User moves out of Furhat's field of view.
  • User covers their face (e.g., with a phone).

Use camera

By enabling camera use:

  • Furhat sends snapshots to an image-to-text model.
  • Furhat can then describe the scene and answer questions about it.

Examples:

  • "What am I holding in my hand?"
  • "What am I wearing?"

This visual context enriches Furhat’s conversational abilities.

NOTE

Enabling the camera will increase the robot's response time.

Integration

Take the interactions to the next level by adding external knowledge or connecting to external APIs.

Knowledge

You can extend Furhat's knowledge by adding external URL links:

  • The content of each URL is parsed and stored in a database.
  • The character can access and use this information.

The following document types are supported:

  • Web pages (HTML), such as Wikipedia
  • PDF, CSV, TXT files
  • Google Documents and Sheets

NOTE

There is a limit to how much external knowledge you can add, and the system shows how much space is used when you save the character.

NOTE

The content needs to be accessible. For example, if you link to a Google Document and it requires some authentication, the content won't be accessible. Additionally, the program will not crawl an entire website. It will only parse the content that is available by directly going to the URL. If content is loaded by pressing buttons on the website, it will not be included in the knowledge.

Action

Action is a method where you describe, in JSON format, a function that the AI model can call.

Why use action?

It allows Furhat to interact with other services or perform real-time operations by calling real functions.

How does it work?

  1. You define functions (name, parameters, types).
  2. The AI determines whether a function needs to be called, potentially with parameters.
  3. The function is executed by calling the service (a GET or POST to a URL) with the parameters.

The JSON should follow the OpenAPI 3.1 specification, with some modifications.

Example: Using an external Open API for weather

You want your character to fetch real-time weather information.
You define a connection like this:

```json
{
  "servers": [{"url": "https://api.open-meteo.com"}],
  "paths": {
    "/v1/forecast?current=temperature_2m,wind_speed_10m,rain,showers": {
      "get": {
        "description": "Get current weather for a specific location",
        "operationId": "GetWeatherCurrent",
        "parameters": [
          {
            "name": "latitude",
            "in": "query",
            "description": "The latitude to retrieve the weather for",
            "required": true,
            "schema": { "type": "string" }
          },
          {
            "name": "longitude",
            "in": "query",
            "description": "The longitude to retrieve the weather for",
            "required": true,
            "schema": { "type": "string" }
          }
        ]
      }
    },
    "/v1/forecast?daily=temperature_2m_max,rain_sum": {
      "get": {
        "description": "Get the upcoming weather for a specific location",
        "operationId": "GetWeatherForecast",
        "parameters": [
          {
            "name": "latitude",
            "in": "query",
            "description": "The latitude to retrieve the weather for",
            "required": true,
            "schema": { "type": "string" }
          },
          {
            "name": "longitude",
            "in": "query",
            "description": "The longitude to retrieve the weather for",
            "required": true,
            "schema": { "type": "string" }
          }
        ]
      }
    }
  }
}
```

Overview of the weather API structure

  • servers: Base server URL (https://api.open-meteo.com)
  • paths: Available API paths and operations
  • /v1/forecast?current=...: Endpoint to get the current weather. Unlike the OpenAPI specification, hardcoded query parameters can be added here.
  • get: Denotes a GET request (GET and POST are supported)
  • description: Description of endpoints and parameters. These are important for the LLM to use them properly.
  • operationId: Operation name for easier reference
  • parameters: Request parameters
  • in: Where to put the parameters when calling the URL. Can be query, path, header or body
  • required: Whether the parameter is required or not
  • schema: The JSON schema for the parameter
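To make the mechanics concrete, here is a rough Python sketch (our own illustration, not Creator's internals) of how a GET operation like GetWeatherCurrent could be turned into a request URL, merging the hardcoded query parameters in the path with the values supplied by the LLM:

```python
from urllib.parse import urlencode, urlsplit, parse_qsl, urlunsplit

def build_get_url(server, path, llm_args):
    """Merge hardcoded query parameters already present in the path
    with the parameter values chosen by the LLM; return the final URL."""
    parts = urlsplit(server + path)
    query = dict(parse_qsl(parts.query))  # e.g. {"current": "temperature_2m,..."}
    query.update(llm_args)                # LLM-chosen latitude/longitude
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(query), ""))

url = build_get_url(
    "https://api.open-meteo.com",
    "/v1/forecast?current=temperature_2m,wind_speed_10m,rain,showers",
    {"latitude": "59.33", "longitude": "18.07"},  # hypothetical LLM output
)
```

The resulting URL carries both the fixed `current=` selection and the per-call coordinates, which is all the service needs to answer.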

In addition, Creator supports the following extra arguments for parameters (not part of the OpenAPI specification):

  • x-json-path: Defines a JSON path for the endpoint which will be used to extract specific content from the retrieved response (assuming the response is JSON formatted).
  • x-value: Defines a hardcoded value for a parameter. The LLM will not be asked to generate values for such parameters. This can for example be useful when passing an API key to the service, in either the header or the query. This can also be used to pass a snapshot from the robot's camera to the service, as a base64 string, by providing the value <CAMERA>. Since the string is quite long, it is recommended to set in=body, using a POST request (see below).

Placement of parameters

Depending on the in argument in the parameter, the parameter will be added to the call in different ways:

  • query: The parameter is added to the query, for example https://www.myserver.com?paramName=paramValue
  • path: The parameter is added to the path. You need to mark in the path with curly braces where it should be inserted. For example, https://www.myserver.com/{paramName} will be translated into https://www.myserver.com/paramValue
  • header: The parameter will be added as a key/value pair to the header.
  • body: The parameter value will be added to the body (only for POST). Only one parameter can be added to the body.
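The four placements can be summarized in a small Python sketch (illustrative only; the parameter dictionaries mirror the OpenAPI entries above, with a hypothetical `value` field for the resolved value):

```python
from urllib.parse import quote, urlencode

def place_parameters(url, params):
    """Apply each parameter according to its `in` location and return
    (final_url, headers, body) for the outgoing request."""
    headers, body, query = {}, None, {}
    for p in params:
        name, value, loc = p["name"], p["value"], p["in"]
        if loc == "query":
            query[name] = value
        elif loc == "path":
            url = url.replace("{%s}" % name, quote(str(value), safe=""))
        elif loc == "header":
            headers[name] = value
        elif loc == "body":
            body = value  # only one body parameter is allowed
    if query:
        url += ("&" if "?" in url else "?") + urlencode(query)
    return url, headers, body

url, headers, body = place_parameters(
    "https://www.myserver.com/{city}",  # hypothetical endpoint
    [{"name": "city", "in": "path", "value": "Stockholm"},
     {"name": "units", "in": "query", "value": "metric"},
     {"name": "X-Api-Key", "in": "header", "value": "secret"}],
)
```

Here the path placeholder is substituted, the query parameter is appended, and the header entry travels separately from the URL.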

Handling asynchronous requests with webhooks

The Action service should always respond immediately. If you need to handle delayed responses, your service can first return an immediate acknowledgement and then call back later with a webhook. To do this, your service needs to be able to reach the robot's IP address and make a POST to http://ROBOT-IP:7973/webhook with a message in the body. Make sure the message is formatted in a way that the LLM can interpret in the context of the dialogue.

Note that webhooks do not necessarily need to be preceded by an Action call, you can make a webhook call with any message at any point (for example if the robot needs to react to some external event).
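A minimal sketch of such a callback in Python, using only the standard library (the IP address and message text are placeholders you would substitute for your own):

```python
from urllib.request import Request

ROBOT_IP = "192.168.1.20"  # placeholder: your robot's actual IP address

# The webhook body is free-form text; phrase it so the LLM can interpret
# it in the context of the ongoing dialogue.
message = "The pizza order has been confirmed and will arrive in 20 minutes."

req = Request(
    f"http://{ROBOT_IP}:7973/webhook",
    data=message.encode("utf-8"),
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the callback
```

The request itself is an ordinary HTTP POST; any language or tool that can reach the robot's network can make the same call.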

Logging

When 'Log Conversations' is turned on, everything that is said will be transcribed and saved in a database. You can then download all your data and analyse the conversations afterwards. This is a great feature for surveys or other use cases where the content of the conversation is of interest at a later point.