Furhat AI Creator character parameters
There are four tabs that let you configure different dimensions of your character and the interaction. Here we cover some of the parameters you can change. There are also tooltips within the application itself to help you further.
Character
This is where you describe your character, choose how it looks and how it speaks.
Character description
The main way you control your character is through the Character Description under the "Character" tab.
NOTE
This is different from the Short Description, which only helps you recognize the character when selecting it.
The Character Description is where you prompt the model and tell it what kind of character it should play.
You can be:
- Very elaborate → Create a vivid character with a rich backstory, clear personality traits, and examples of behavior.
- Very brief → Let the AI make its own (often more generic) interpretation.
The richer and clearer you are in your description, the closer the character will behave to your intentions.
Interaction
This is where you can control some of the robot's 'meta' behavior.
Require user attention
Enabling Require User Attention means Furhat will only respond when a user is looking at it.
- Furhat can detect where users are looking.
- If two people are speaking to each other (not looking at Furhat), Furhat will stay silent.
- Furhat continues listening and remembering the conversation even when quiet, to respond appropriately when attention returns.
Disengagement threshold
Furhat uses face detection to decide if it’s in an active conversation:
- If it detects at least one face within interaction distance, it starts a conversation.
- If no face is detected, Furhat waits for a certain time (default 3 seconds), then terminates the conversation and restarts the character.
You can configure this:
- Increase or decrease the time before disengagement.
- Set to infinite to prevent Furhat from automatically ending conversations.
NOTE
A drawback with setting the disengagement threshold too high is that Furhat won't recognize if a new person replaces the old one.
Examples of losing face detection:
- User moves out of Furhat's field of view.
- User covers their face (e.g., with a phone).
Use camera
By enabling camera use:
- Furhat sends snapshots to an image-to-text model.
- Furhat can then describe the scene and answer questions about it.
Examples:
- "What am I holding in my hand?"
- "What am I wearing?"
This visual context enriches Furhat’s conversational abilities.
NOTE
Enabling the camera will increase the robot's response time.
Integration
Take the interactions to the next level by adding external knowledge or connecting to external APIs.
Knowledge
You can extend Furhat's knowledge by adding external URL links:
- The content of each URL is parsed and stored in a database.
- The character can access and use this information.
The following document types are supported:
- Web pages (HTML), such as Wikipedia
- PDF, CSV, TXT files
- Google Documents and Sheets
NOTE
There is a limit to how much external knowledge you can add, and the system shows how much space is used when you save the character.
NOTE
The content needs to be accessible. For example, if you link to a Google Document that requires authentication, the content won't be accessible. Additionally, the program will not crawl an entire website. It only parses the content that is available by going directly to the URL. If content is loaded by pressing buttons on the website, it will not be included in the knowledge.
Action
An action is a function, described in JSON format, that the AI model can call.
Why use action?
It allows Furhat to interact with other services or perform real-time operations by calling real functions.
How does it work?
- You define functions (name, parameters, types).
- The AI outputs the function name and parameters as a structured JSON object.
- The function is executed.
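As a hypothetical illustration of step two (the exact field names of the structured output are an assumption, not documented here), the model's function call for the weather example below might look like:

```json
{
  "operationId": "GetWeatherCurrent",
  "parameters": {
    "latitude": "59.33",
    "longitude": "18.07"
  }
}
```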
The JSON should follow the OpenAPI 3.1 specification, with some modifications.
Example: Using an external Open API for weather
You want your character to fetch real-time weather information.
You define a connection like this:
```json
{
  "servers": [{ "url": "https://api.open-meteo.com" }],
  "paths": {
    "/v1/forecast?current=temperature_2m,wind_speed_10m,rain,showers": {
      "get": {
        "description": "Get current weather for a specific location",
        "operationId": "GetWeatherCurrent",
        "parameters": [
          {
            "name": "latitude",
            "in": "query",
            "description": "The latitude to retrieve the weather for",
            "required": true,
            "schema": { "type": "string" }
          },
          {
            "name": "longitude",
            "in": "query",
            "description": "The longitude to retrieve the weather for",
            "required": true,
            "schema": { "type": "string" }
          }
        ]
      }
    },
    "/v1/forecast?daily=temperature_2m_max,rain_sum": {
      "get": {
        "description": "Get the upcoming weather for a specific location",
        "operationId": "GetWeatherForecast",
        "parameters": [
          {
            "name": "latitude",
            "in": "query",
            "description": "The latitude to retrieve the weather for",
            "required": true,
            "schema": { "type": "string" }
          },
          {
            "name": "longitude",
            "in": "query",
            "description": "The longitude to retrieve the weather for",
            "required": true,
            "schema": { "type": "string" }
          }
        ]
      }
    }
  }
}
```

Overview of the weather API structure
| Part | Meaning |
|---|---|
| servers | Base server URL (https://api.open-meteo.com) |
| paths | Available API paths and operations |
| /v1/forecast?current=... | Endpoint to get the current weather. Unlike the OpenAPI specification, hardcoded query parameters can be added here. |
| get | Denotes a GET request (GET and POST are supported) |
| description | Description of endpoints and parameters. These are important for the LLM to use them properly. |
| operationId | Operation name for easier reference |
| parameters | Request parameters. |
| in | Where to put the parameters when calling the URL. Can be query, path, or header. |
| required | Whether the parameter is required or not. |
| schema | The JSON schema for the parameter. |
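Putting these parts together: if the model calls GetWeatherCurrent with latitude 52.52 and longitude 13.41, the hardcoded query string from the path and the two generated query parameters would combine into a request along these lines (illustrative; the coordinates are made up for the example):

```
GET https://api.open-meteo.com/v1/forecast?current=temperature_2m,wind_speed_10m,rain,showers&latitude=52.52&longitude=13.41
```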
In addition, Creator supports the following extra parts (outside of the OpenAPI specification):
| Part | Meaning |
|---|---|
| x-json-path | Defines a JSON path for the endpoint, used to extract specific content from the retrieved response. |
| x-value | Defines a hardcoded value for a parameter. The LLM will not be asked to generate values for such parameters. |
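As a sketch of how these extensions might be used with the weather example (the exact placement of x-json-path within the operation object, the $.current path, and the temperature_unit parameter are assumptions for illustration, based on the descriptions above):

```json
"/v1/forecast?current=temperature_2m": {
  "get": {
    "description": "Get the current temperature for a specific location",
    "operationId": "GetWeatherCurrent",
    "x-json-path": "$.current",
    "parameters": [
      {
        "name": "temperature_unit",
        "in": "query",
        "description": "Unit for temperature values",
        "schema": { "type": "string" },
        "x-value": "celsius"
      }
    ]
  }
}
```

Here, only the "current" section of the response would be passed back to the model, and temperature_unit would always be sent as "celsius" without the LLM being asked to generate it.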
Logging
By turning 'Log Conversations' on, everything that is said will be transcribed and saved in a database. You can then download all your data and analyse the conversations afterwards. This is a great feature for surveys or other use cases where the content of the conversation is of interest at a later point.