Changelog

2.8.3 (Released 2025-05-16)

  • Camera Stream Enhancement: The camera stream on Research Robots is now exposed to all devices on the same network, allowing any user on that network to access it.
  • Local Camera Stream Access: The camera stream is now available locally for Skills to access on all robot types. Note that the camera stream must be enabled for external access in the web interface Settings.
  • Visual Improvements: Oral cavity textures now appear smoother with no artifacts or pixelation.
  • Facial Expression Fix: All characters now display a neutral face with properly positioned eyebrows, resolving the issue where characters appeared sad.
  • IMPORTANT: Users must back up their previously installed character packs before updating to this version. Character pack imports are now deprecated: you will not be able to import character packs after updating. If you are currently working with character packs, do not upgrade until you are ready to recreate all characters using our new Face Editor.
  • SDK Launcher Fix: Resolved an issue preventing users from launching Furhat AI Creator from the launcher.
  • AI Creator Logging: Fixed issues in the logging system for AI Creator.

2.8.2 (Released 2025-04-29)

  • Introduced FaceEditor, a new tool for customizing face properties such as texture, color, and skin.
  • Enabled Creator Trial for all users (both virtual and physical robots), allowing broader access to creation tools.
  • Added Advanced Creator Features for enhanced control and customization.
  • Updated FaceCore for full compatibility with the new FaceEditor.
  • Skills can now access the camera on all device types.
  • Reintroduced support for launching Blockly and the Remote API directly from the SDK launcher.
  • Added support for Apple Silicon in the SDK; the issue with missing eyes has been resolved.
  • Improved WiFi connectivity, making it more robust and reliable in varying environments.
Known Issues
  • Custom Charpacks Preventing Face Loading - Uploading custom charpacks to the robot currently causes face assets to fail loading. Removing the charpacks resolves the issue. This will be addressed in the next release by disabling the use of custom charpacks. Custom faces should instead be created and managed via the Face Editor.

  • Missing Tongue Texture - In some cases, the tongue texture fails to load correctly, resulting in the inside of the mouth appearing with only two shades of red. This visual issue will be fixed in the next release.

  • SDK Launcher: Skill Launch Buttons Failure - The new buttons for launching skills in the SDK Launcher may not work reliably in the current version. We are fixing this in the next version.

  • SDK Launcher Update Fails on Windows (Certification Error) - Updating the SDK Launcher on Windows systems currently fails due to a certificate validation error. This issue will be resolved in an upcoming release once the certificate has been updated.

2.8.1 (Released 2025-02-26)

  • Fixed bug with external audio files not playing

2.8.0 (Released 2025-01-16)

  • Watch what's new in 2.8.0 here
  • Furhat Creator: A tool for quickly creating AI-powered interactions using a simple interface
  • Furhat Enterprise: A tool for integrating Furhat with Microsoft Copilot to create enterprise solutions.
  • Support for ElevenLabs TTS, adding more natural voices and voice cloning.
  • New lip-sync engine with improved lip synchronization.
  • New Kotlin API for accessing Camera and Audio feeds from the skill.

2.7.2 (Released 2023-12-14)

A patch release to fix a few problematic behaviors:

  • Fixed an event queue synchronization issue that led to FaceCore not starting with some autostart skills.
  • Brought skill and system logs back to the web interface and the SDK launcher.
  • Fixed robot boot crash issue on unsupported networks.
  • Fixed robots getting stuck in idle mode while running skills.

2.7.1 (Released 2023-08-23)

A patch release with two fixes:

  • Fixed on-face menu issue where the robot becomes unresponsive.
  • Fixed backwards compatibility for voice genders and getting the list of available voices.

2.7.0 (Released 2023-07-27)

  • Moved away from the khttp library because it has been deprecated.
  • Upgraded the system and skills to Kotlin 1.8. Migration documentation is available here.

IMPORTANT NOTE: This release includes breaking changes to skill project handling. Skill developers must use this version to continue working with the latest furhat-commons versions.

Bug fixes

  • Fixed responsiveness of the web interface's side menu.
  • Fixed Azure synthesizer and recognizer for credentials located in the following regions: eastasia, eastus, northeurope, westeurope, southeastasia, westus.

2.6.1 (Released 2023-05-04)

  • A patch release with two fixes:
    • Added the Arabic NLU library to furhat commons to fix listen commands in Arabic.
    • Fixed boot issue where the face does not fully show and the robot is unresponsive.

2.6.0 (Released 2023-04-19)

  • Added an audio feed that allows piping both audio input and output on research robots.
  • Small skill flow improvements:
    • Added the possibility to not reset the face and neck in the stopGestures() function
    • Improved the askYN function signature

Bug fixes

  • Local skills are now shown in the Skill Library section even if the robot has never been connected to the internet.
  • Fixed the unresponsive overwrite dialog box shown when uploading skills to the robot.

2.5.0 (Released 2023-01-30)

  • Added Azure TTS: over 400 new voices available!
  • Microsoft Azure credentials for speech synthesis and recognition are now included! Go to Settings > Voice or Settings > Recognizer and select the provisioned key for a region of your choice. Only available for Standard and Premium robots.
  • Added built-in support for Arabic grammar and intents
  • Minor improvements to the built-in intents for English (such as Yes)
  • Improved user position estimates by undistorting the camera feed and re-working the internals
  • You may want to increase the interaction space size slightly in your skills following the upgrade, as users tend to be estimated as slightly further away
  • If using the camera feed for research robots, the imagery will look slightly different, with a slightly smaller field of view and less of a fish-eye effect
  • After starting a self-hosted WiFi network from the robot's on-face menu, the password is now shown for easy access
  • The robot's MAC addresses are now available on the robot's on-face menu (useful if your network requires whitelisting certain devices)
  • Some error messages will now be spoken by the robot itself
  • Improved the blank skill template

Bug fixes

  • Amazon Polly credentials will no longer disappear
  • Configuration management had a bug where some settings were not being updated/stored correctly (e.g. research robot type)
  • Minor fixes to the web interface: styling fixes, and the camera's built-in microphone is no longer shown (only applicable to recent robot units)
  • Fixed bug relating to log messages filling up the disk under certain conditions
  • Fixed issue with furhat.attendAll() function

2.4.1 (Released 2022-11-17)

  • A patch release for the SDK to fix an issue concerning skill dependencies.
    • Fixes skill creation failures for new users in the SDK.
    • Fully compatible with robots running 2.4.0, for which this patch is not applicable.

NOTE: Older skills will still lead to a dependency issue if missing a local cache. When encountering this issue, add jcenter() to the repositories section of the build.gradle for your skill project.
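As a sketch, the relevant repositories section of a skill's build.gradle would then look something like this (the other repository entry shown is illustrative; your project may differ):

```groovy
repositories {
    mavenCentral()
    // Added so older skills without a local cache can resolve legacy dependencies
    jcenter()
}
```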

2.4.0 (Released 2022-07-04)

  • New functionality is exposed through the furhat.attend() function:
    • Movement speed to use when switching to a new attention target
    • Behavioral gazeMode: DEFAULT, EYES, DEADZONE, and HEADPOSE.
    • Manual control of the head pose, independent from the eye gaze, through either a head rotation or location target
  • 14 new FaceCore characters for the adult mask: Dorothy, Fernando, Gyeong, Hanan, Hayden, Lamin, Maurice, Nazar, Omar, Patricia, Rania, Vinnie, Yumi, Zhen
  • 2 new FaceCore characters for the child mask: Billy and Devan
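The extended attend call above, sketched as non-runnable Kotlin inside a hypothetical skill flow. The parameter and enum names other than the listed gazeMode values (DEFAULT, EYES, DEADZONE, HEADPOSE) are assumptions based on the description, not verified API signatures:

```kotlin
// Hypothetical sketch of the extended attend call; exact names may differ.
furhat.attend(
    user = users.current,         // the attention target
    speed = NeckSpeed.MEDIUM,     // movement speed when switching target (assumed enum)
    gazeMode = GazeMode.DEADZONE  // one of DEFAULT, EYES, DEADZONE, HEADPOSE
)
```

This only compiles against the Furhat skill SDK; treat it as pseudocode illustrating the new parameters.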

Additional improvements:

  • Add the ability to set FaceCore characters through enums
  • Add option to say a single utterance with a specific voice
  • New Amazon Polly voices can now be reached through PollyNeuralVoices.* (already accessible in the web interface and by manual entry)
  • Azure speech-to-text limit for simultaneous language identification raised from 2 to 4 languages
  • SpeechRecPhrases reworked internally to improve performance
  • Restructured on-face menu network settings (hosted hotspot and WiFi) for clarity
  • Virtual users can now be dragged around without having the virtual user's gaze switch back to the robot
  • Under-the-hood support for custom FaceCore characters (get in touch if you want to try it out!)

Various bug fixes:

  • Fixed issue where robot could end up listening to itself
  • The skill.properties message is now hidden when starting a skill
  • Small changes to the Furhat Studio web interface
  • Skill Library skills with auto-start enabled now keep this status after being updated
  • Prevent crash on corrupted Skill Library cache
  • Servo control loop tuned to perform better on newer robots (Furhat-365 and later)
  • Spoken IP address from on-face menu will always be played with the same English voice. Previously some voices could cause it to be read as a date or similar
  • Fixed issue with unintended lipsync on pre-recorded audio due to cache error
  • Glancing on user leave now works as expected
  • Added error handling and feedback when a pre-made NLU intent is used with an unsupported language
  • Added a warning when the gesture passed to getResourceGesture is not found in the specified location
  • Character and mask settings no longer re-trigger if already set to the requested values, resulting in slightly better performance

2.3.0 (Released 2022-03-31)

  • Improvements to the WebRTC streamer.
  • Even smoother head movements.
  • Updated the skill template when creating a new skill project.
  • Ability to start the support mode from the on-face menu.
  • Improvements to the Web Interface to ensure that settings menu items are accessible on smaller screens.
  • Lip syncing gestures now have the highest priority.
  • Fixes a bug related to the ability to add persistence to neck gestures (NOTE: Previously only the face parameters were persistent, so custom persistent gestures may now behave slightly differently)
  • Reduced CPU usage as the neck gesture update frequency was decreased.
  • Minor fixes to the Isabel character.

Known issues

  • Fading the face in and out does not work in FaceCore on the SDK at the moment. This regression will be fixed in a future update. The robot fading is unaffected.
  • Touch is not working on monitors connected to the Furhat robot when running FaceCore.

2.2.2 (Released 2022-03-22)

  • A patch release to fix an issue with the dashboard view in the web interface, caused by a third-party library becoming unavailable.
    • No new skill library skills have been released, so they might be marked as deprecated.
    • No new commons has been released.
    • SDK might have the same issues, but we will fix that in 2.3.0.

2.2.1 (Released 2022-02-14)

  • A patch release to fix critical issues with the robot software:
    • Fixes robot web interface being unresponsive when powering up the robot without internet connection available.
    • Fixes bug where faulty uploaded .skill files caused the robot to be unresponsive.

2.2.0 (Released 2022-01-31)

  • Introduction of the Skill Library to Furhat Library.
  • Re-worked head trajectory planning. You should now experience less abrupt neck movements, but be aware that some timing may be different.
  • Apache Log4j patched (to version 2.17.1).
  • Several updates to the supported languages (and voices) for the supported engines:
    • Google Recognizer gets 11 new languages.
    • Microsoft Recognizer now supports another 66 languages.
    • Amazon Polly now has built-in support for all their voices (15 'new' ones).
  • FaceCore improvements:
    • Properly fading in the last used character on reboot.
    • Faster character switching.
    • Minor improvements to the Jane, Kione, and Isabel characters.
    • Some improved blendshapes, e.g. for lipsync
  • Bug fix relating to Gesture Capture Tool checkbox selections, v0.0.7

2.1.0 (Released 2021-11-15)

  • Improvements for our new FaceCore Engine:
    • Fade in/out the face with furhat.setVisibility().
    • Revised some of the blendshapes to e.g. provide more natural lip movements during speech.
    • New characters: Alex, Jamie, Brooklyn and Isabel.
    • Updated characters: default, Titan.
    • Characters can now be set inside utterances.
    • Toned down the brightness of LEDs on the virtual robot.
  • Fixes for our new FaceCore Engine:
    • Fixed eyes not looking up/down until switching masks (side-effect: removed X|Y_GAZE_CONTROL ARKitParams).
    • Fixed encoding bug where the Virtual Furhat face would either lack microexpressions or be rendered improperly when using certain number formats in Windows.
    • Fixed issue where the face would not come up at all due to detecting incorrect resolution, requiring robot restart.
  • Several ease-of-access functions were added to furhat-commons:
    • getResourceGesture(<relativeFilePath>) to load a recorded gesture from a .json file in the resources folder (this function was previously in the Furhat library)
    • furhat.faces() to get the current and available faces when using FaceCore
  • Extended the Swedish PersonName entity with more than 1,200 names (Thanks again Samuel!)
  • Added possibility to change current user mid-utterance (without direction of arrival)
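As a hedged sketch of the two commons helpers listed above (the gesture file path is hypothetical, and this only runs inside a Furhat skill project):

```kotlin
// Load a recorded gesture from a .json file in the skill's resources folder
val wave = getResourceGesture("/gestures/wave.json")  // path is illustrative
furhat.gesture(wave)

// Query the current and available FaceCore faces (return type assumed)
val faces = furhat.faces()
```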

Bugfixes

  • Fixes infinite retries to connect to Amazon Polly TTS.
  • Fixes dialogLogger not being terminated at skill termination.
  • Fixes gestures not being terminated at skill termination.
  • Fixes async audio in gestures.
  • Fixes to speech recognizer phrases from intents.

NOTE: Touch on an external USB-C monitor currently does not work while FaceCore is running - workarounds include using an external device to show the web interface (e.g. iPad or laptop) or switching to the legacy OSG face engine.

2.0.0 (Released 2021-09-01)

  • New FaceCore face engine can now be enabled in face settings on the robot:
    • ARKitParams allow for more control of facial expressions.
    • CharParams allow for temporarily modifying the underlying geometry of the face, e.g. bulging eyes.
    • Masks and characters replace models and textures.
    • New more diverse appearances.
    • Virtual Furhat in the SDK updated to reflect the changes in the robot face engine.
  • Asset Collection has been added to the Furhat library.
  • Robot now has access to built-in skills:
    • Remote API is no longer in beta and supports FaceCore character params.
    • Blockly is no longer in beta and has more options for managing specific users and creating multi-party interactions.
  • Facial capture tools available for creating realistic gestures to run on FaceCore.
  • Support for upcoming hardware revisions.
  • Support for product tiering.

Bugfixes

  • Fixes a bug where furhat.listen() without a user speaking would result in a never-ending loop.
  • Fixes a bug where skills could no longer be created.
  • Fixes a bug showing wrong IP addresses in the network settings page in the web interface.
  • Fixes a bug with the face menu not updating the IP address when connecting/disconnecting to a network.
  • Head trajectory planning has been revised and is now the same on both the Virtual Furhat and robot.
  • Improved servo positioning for head pose - angles are now correct (NOTE: this may affect how old gestures involving neck movement look).
  • Brush up of web interface, including new icons and removal of legacy text editor fields.

1.26.1 (Released 2021-07-02)

  • More under the hood updates to the platform to support upcoming changes to the animation handler.
  • Camera stream now includes annotations regarding user IDs and locations in the frame.

Bugfixes

  • Fixes a bug in camera view not working on Chromium based browsers.
  • Fixes a bug in camera view where the stream failed to restart after crashing.

1.26.0 (Released 2021-05-12)

  • We are currently preparing a big internal update, which includes many under the hood changes. Keep an eye on our other channels for the ability to do some beta testing!
  • Textures in the web-interface are now sorted alphabetically.

1.25.0 (Released 2021-04-01)

NOTE: Due to the Bintray sunset, the repository hosting furhat-commons has changed. Read more about what you need to change in your skill source here.

  • BETA! First stage implementation of Barge-in with interruptable speech.
  • Added French and German intents and entities.
  • Users have access to ASR alternatives in a more streamlined way, read more about it here.

Bugfixes

  • Fixes skill template importing an empty nlu file.
  • Improved system robustness when connected to unreliable internet. There is now a trigger that can be used in the flow as described here.

1.24.0 (Released 2021-02-19)

  • Implemented a utility class AudioPlayer to play sound clips on internal/external speakers.
  • Implemented a feature that adds the automatic smile back to skills.
  • Added mapping of deprecated bertil mask model to adult.
  • In addition to attending with head gesture, Furhat can now attend a user with eyes only.

Bugfixes

  • Fixes a bug with graceful handling of missing Azure ASR credentials.
  • Fixes a bug with randomNoRepeat.
  • Fixes a bug with Polly Neural voices where unsupported SSML tags were read out.

1.23.1 (Released 2021-01-07)

  • Fixes a bug where state initialization would throw an exception.

Note: This is a patch release for furhat-commons. Updated SDKs are available to facilitate new skill projects defaulting to the new furhat-commons. Robots do not require an update.

1.23.0 (Released 2020-12-10)

  • Implemented System Wide Volume settings on the robot, accessible from the web interface and the skill.
  • Web interface updates:
    • Adds ability to mute internal speakers, for when the robot is connected to an external speaker.
    • Adds more places where speech can be tested to the Wizard and Voice Settings tab.
  • Adds the ability to add LED color, audio, or texture changes to gestures.
  • Improvements to the Yes intent.
    • Now automatically adds examples based on the question Furhat asked. For example, when the robot asks "Can you do that?", "I can" now counts as a Yes.
    • Note: This functionality only works in English.

Bugfixes

  • Fixes the wink gesture to be more wink-like.
  • Fixes a bug where prominence was not added to synthesized speech.
    • Note: Prominence is not added to speech that is already cached, clear the synthesizer cache first in the Web interface > Settings > Voice.

1.22.0 (Released 2020-10-05)

  • Implemented an update notification in the web interface, notifying the user of a new release.
  • All bounding boxes are now visible in the web interface camera stream.

Bugfixes

  • Security fix related to local audio stream subscriptions.
  • Fixed a rare bug where occasionally the robot's face turned blue.
  • Fixed a bug where usage of Spanish and Chinese name entities resulted in an ArrayIndexOutOfBoundsException.
  • Fixed internal microphones reporting wrong number of channels.
  • Fixed virtual furhat desktop icon.
  • Removed titlebar from external monitor Chromium window.

1.21.0 (Released 2020-09-03)

Important: Upgrading the robot over a strong network connection is advised due to the size of the upgrade package.

  • Camera software improvements increasing overall performance and user detection accuracy.
  • An update to Virtual Furhat now supports LEDs in the SDK.

Bugfixes

  • Fixes a bug where the system would be unresponsive when accessing the camera feed while running remotely on a robot.
  • Fixes a bug where audio files were no longer uploaded to our online log viewer.

Known issues

  • System is unable to recover from a temporary network loss while running a skill.

1.20.0 (Released 2020-06-26)

Important: Due to changes in skill GUI implementation, skills using a GUI need to be rebuilt using 1.20.0 furhat-commons.

  • Robots can now have types (research/production)
  • Implementation of new camera software
    • Adds functionality to 'listen' to the camera feed on research robots, read more about it here.
    • Paves the way for future improvements in accuracy.
    • Important note: The implementation of new camera software might change/alter the user detection behavior. We encourage thorough testing after updating to make sure the desired behavior is achieved.
  • Added support for Spanish NLU including built-in intents, entities and grammars.
  • Added Polly Neural voice classes; you can access them like this: PollyNeuralVoice.Matthew(). Read more about it here.
  • Voice classes can now be inherited from, which allows for:
    • Custom text transformations: developers can override existing voices with their own transform function
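As a hedged sketch of the custom text transformation described above (the class structure and transform method name are assumptions; only PollyNeuralVoice.Matthew() comes from the text):

```kotlin
// Hypothetical sketch: base-class API and method names may differ.
class WhisperedMatthew : PollyNeuralVoice.Matthew() {
    // Assumed override point: wrap all spoken text in a whisper SSML effect
    override fun transform(text: String): String =
        "<amazon:effect name=\"whispered\">$text</amazon:effect>"
}
```

This is pseudocode-level: it illustrates the idea of overriding a voice's transform function, not a verified implementation.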

Bugfixes

  • Improves stability and speed of the alternative classification engine.
  • General stability improvements.
  • Fixes bug where missing library from Azure recognizer causes SDK to fail on startup.

1.19.0 (Released 2020-05-05)

  • Implemented an alternative intent classification engine. Read this to try it out.
  • Added support for Mandarin NLU including built-in intents, entities and grammars.
  • User gesture detection. This feature allows skill developers to detect when the user is smiling, read more here.
  • MS Azure Beta: Due to changes at Microsoft, the Azure recognizer no longer works in releases prior to 1.19.0. Release 1.19.0 tracks the very latest Microsoft changes.

Bugfixes

  • Fixed a bug where logging an error twice could cause the system to crash.
  • Fixes a bug where unplugging the microphone during startup caused the system to crash.
  • Fixes a bug where the SDK would wrongly state that there are no voices available.
  • Fixes a bug where Microsoft Azure recognizer would return silence, even though the user was speaking.

1.18.0 (Released 2020-04-03)

  • Total number of textures on the robot is increased to 22. This includes a new "blank" texture.

Bugfixes

  • Fixed a bug with behavior in utterance.
  • Fixed gaze pan and tilt parameters being mixed.
  • Fixed input language set when explicitly flagged to not be changed.
  • Fixed uncaught exceptions when starting Google ASR without internet connection.

1.17.0 (Released 2020-03-05)

  • Beta implementation of Microsoft Azure ASR. Further information on our ASR providers can be found in the documentation.
  • Added functionality to set the recognizer to either Google or Azure.
  • All microphone input is now treated as mono (previously stereo).

Bugfixes

  • Fixed connecting to (and disconnecting from) wireless SSIDs with spaces in the name.
  • Fixed a bug that occurred after a user engaged Furhat after a long period of inactivity.
  • Fixed intents not displaying in the dashboard when raised.
  • Fixed a bug where Furhat would not return to the default state after terminating a skill.

1.16.0 (Released 2020-02-05)

  • All cached synthesizer voice files can now be deleted from Voice settings page.
  • A progress bar is now displayed during skill upload.
  • When unplugging a microphone during a listen, an onResponse is returned with the aborted flag.
  • Improved Amazon Polly synthesizer robustness in long-running use cases by adding a service restart on failure.

Bugfixes

  • Fixed unreasonably long silence at the end of on-board TTS output when using slow speech rates.
  • Fixed web-interface continuously refreshing on the external monitor without internet connection.
  • Fixed notifications for failed or partial skill upload.
  • Fixed notifications for overwriting existing skills.
  • Fixed user not being notified of AWS credential authentication errors when logging to the cloud.
  • Fixed skill delete dialogue being displayed under the console log.
  • Fixed a bug where attending locations in quick succession would require a robot restart.

1.15.0 (Released 2019-12-19)

  • Upgraded our face detection libraries, decreasing CPU load/temperature.
  • Enabled hot-plugging USB microphones.
  • Stopping a skill now returns Furhat to a default state (face looking forward, eyes open, LEDs off, stop speaking and listening).
  • New skills now point to HTTPS instead of HTTP when connecting to furhat-commons repository. It's related to this.
  • We are also hosting our repositories on JCenter now, so that old skills can be built without changing the build script.
  • Additional fonts are now included on the platform to display Chinese, Korean and Japanese characters on the external monitor.

Bugfixes

  • Fixed speaker selection menu displaying empty selection.
  • Fixed a minor memory leak in Java.
  • Fixed incorrect speaker selection resulting in speech thread crashing.
  • Fixed global voice tags for Acapela voices.
  • Fixed a bug with the pattern +(word1 / word2 / word3) in entity grammars.
  • Fixed tokenization of numbers for logographic languages.
  • Fixed a bug where Mandarin commas wouldn't split up entities as intended.
  • Fixed recognition phrases not being cleared after skill termination.
  • Fixed furhatos.event.responses.ResponseVoice returning a duplicate voice list.

1.14.0 (Released 2019-11-12)

  • A new Networking settings page to connect, disconnect and forget network connections in the Web Interface.
  • Changes to User Management, most notably adding thresholds to users' attention (see: Checking users' attention)
  • Furhat now includes fonts to display utf-8 encoded icons in text on the external monitor.

Bugfixes

  • Fixed speech thread failing to restart correctly after errors.
  • Fixed multi-channel mic support.
  • Fixed changing Web Interface password in the settings menu.
  • Fixed minor issues in NLU regarding logographic languages.
  • Fixed missing cmn-hans-CN language code for Google ASR.

1.13.3

  • Fixed setVoice crashing when not defining the voice by name.
  • Performance improvements by switching from Kotlin to Java reflections.

1.13.2

  • Fixed Web Interface authentication timing out too quickly.

1.13.1

  • Fixed inconsistency in skill voice selection.

1.13.0

  • Added 10 new Polly Neural voices. Please see Amazon documentation regarding supported SSML tags.
  • Web Interface authentication architecture improvements.

Bugfixes

  • Fixed Furhat upgrade failing on partial package download.
  • Fixed external monitor touch not registering on skill gui text entry fields.
  • Fixed Furhat hanging on ActionListenStop when senseSpeech is returned with the aborted flag.
  • Fixed warnings regarding missing .pho files being displayed when using automated lipsync cloud service.
  • Fixed missing Arabic language code (needed for Google ASR).

Known issues

  • External USB-C touch monitor needs to be connected before powering on the robot to be recognized.
  • Uploading a skill through the web interface does not prompt the user when overwriting an existing skill with the same name.
  • When exiting a skill, Furhat is not reset to default settings but rather keeps the latest state from the exited skill.
  • Video feed in the Web Interface can display artifacts on first load.
  • Microphones do not automatically appear when plugged in, go to the microphone page and click the refresh button.

1.12.1

  • Fixed Furhat freezing on boot when Google ASR credentials file is empty.

1.12.0

  • Skill lists in Web Interface are now alphabetically sorted.
  • Robot hostname is now displayed on the OSD menu.
  • ASR test button now gives a warning when Google Recognizer credentials are missing.
  • Extended the following built-in intents with Swedish translations: date, time, numbers, colors and greetings. Do you want to help us translate Furhat intents to your language? Please contact us!
  • Disconnecting a microphone is now recognized as an error.

Bugfixes

  • Fixed a bug where sometimes auto-started skills would start before crucial system services.
  • Fixed skills getting stuck in a listening state when configuring the recognizer or when internet is lost during listen.
  • Fixed system crash when internet is lost while using Amazon Polly voices.
  • Fixed onResponseFailed not triggering properly when Google credentials are not present.

Known issues

  • External USB-C touch monitor needs to be connected before powering on the robot to be recognized.
  • Uploading a skill through the web interface does not prompt the user when overwriting an existing skill with the same name.
  • When exiting a skill, Furhat is not reset to default settings but rather keeps the latest state from the exited skill.
  • Video feed in the Web Interface can display artifacts on first load.
  • Microphones do not automatically appear when plugged in, go to the microphone page and click the refresh button.

1.11.0

  • Skill developers can now use Kotlin 1.3 in skills since the furhat-commons library has been updated to 1.3. Migration documentation is available here.
  • Web Interface password can now be changed in the security tab of the web interface, or reset to default with the robot's rotary button.
  • Robot listening is now initiated faster.

Bugfixes

  • Fixed a memory-leak in flows related to state-transitions and parallel flows.
  • Fixed URL handling of audio URLs with trailing white-spaces.

1.10.0

Web Interface updates

  • Better support for touch navigation.
  • Improved voice selection.
  • You can now see classified intents in the message log in the web interface's dashboard.

Bugfixes

  • Fixed bug with locally hosted audio-file playback in Utterances.
  • Fixed an issue where multiple listen calls were able to run in parallel, leading to unexpected behavior. Now, only one listen can run at any given time.
  • Fixed web interface settings menu not being usable when side bar is collapsed.
  • Fixed a bug where multiple skill GUIs would not show up on the web interface as buttons.

1.9.0

General

  • Added initial support for USB-C touch enabled external monitors with an experimental virtual keyboard.
    • External monitor touch support is calibrated against a known set of monitors. Experience may vary with monitors using other kinds of digitizer hardware.
    • On-screen touch keyboard support on the external monitor is only partial. It may not work correctly with all kinds of text input fields in web pages.
    • When manually dismissing the on-screen keyboard, a reload of the web browser may be necessary because of a visual artifact.
  • Added a modal parameter to furhat.ask() to identify if the calling state's response should be active.
  • Added addTrigger to the flow, to add a trigger on the fly.
  • Default priority for gestures (to handle eye closing better).
  • Support for audio playback of files located in the resource folder on the robot.

Web Interface

  • Users are now able to add textures through the web interface.
  • A circular button is displayed over the gaze panel to allow for draggable head movements.
  • Performance update through optimized GPU usage in canvas elements.

NLU

  • Improved handling of interim responses.
  • Added stopwords to improve NLU.
  • Swedish localization of (some) NLU and dialog responses.

Speech

  • Localization of built-in dialog responses.

Audio input

  • Microphone selection removed from SDK, sound input device should be selected at the system level.

Skill development

  • Added functionality to check if the skill is running on the SDK with a method isVirtual().
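The isVirtual() check mentioned above might be used roughly like this (a non-runnable sketch; the receiver object and surrounding flow are assumptions):

```kotlin
// Hypothetical sketch inside a skill flow
if (furhat.isVirtual()) {
    furhat.say("I am running on the virtual Furhat in the SDK")
} else {
    furhat.say("I am running on a physical robot")
}
```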

Bugfixes

  • Fixed a bug related to faulty confirmation errors for saving Google recognizer credentials.
  • Fixed issues with parallel flow and multi-lingual NLU.
  • Fixed problem with +words not working as intended in intent examples.
  • Fixed a concurrent modification exception in the flow.
  • Fixed remote GUI links to support full length URLs.
  • Fixed an issue where the selected microphone would disappear when another device was plugged in.
  • Fixed bug where the priority tag in a trigger did not prioritise the top state in the stack. Therefore, skills that were using the priority tag in state's triggers will no longer fire triggers in the same order.

Known issues

  • Uploading a skill through the web interface does not prompt the user when overwriting an existing skill with the same name.
  • When exiting a skill, Furhat is not reset to default settings but rather keeps the latest state from the exited skill.
  • Video feed in the Web Interface can display artifacts on first load.
  • Remote hosting of more than one skill's Remote GUI is currently buggy.
  • Microphones do not automatically appear when plugged in; go to the microphone page and click the refresh button.

1.8.5

  • Added support for Mandarin and Cantonese dialects as input languages.

1.8.4

  • System update to fix Amazon Polly voices not showing up after API changes.

1.8.3

  • Fixed a bug in web interface authentication.

1.8.2

  • Several fixes to Google speech recognition implementation including upgrading to the latest libraries.
  • UTF-8 encoding is now explicitly set when launching the SDK on Windows.
  • Fixed microphone selection for the SDK in Windows with a non-English system language.
  • The list of available textures in the SDK has been revised to reflect the available textures on the robot.
  • Virtual Furhat model updated.

1.8.1

  • Added functionality to the web interface's home page to test Google ASR with a simple button press.
  • Fixed a bug in GoogleRecognizerProcessor where any non-silence was considered speech.
  • Fixed a bug where the SDK on Windows would not display or pronounce non-Latin characters properly.
  • Fixed a known issue related to Virtual Furhat head movements.
  • Fixed a known issue related to skill play buttons not stopping previously launched skills in the web interface.
  • Fixed a known issue related to GazeEvents flooding the events log.

1.8.0

General SDK

  • New docs section on Testing skills containing information about the dashboard and how to specifically test NLU (intent classification).
  • The SDK now has a log4j2.properties file where you can configure the logging behavior of the SDK. You only need to modify this if you are experiencing an oddity and we ask you for log output to help diagnose the problem.
  • Removed unnecessary log outputs from development server and skill.
  • Visual improvements to Virtual Furhat.

General Robot

  • Support mode of robots added, allowing robot operators to give remote access to Furhat Robotics' support technicians through the web interface. See Support mode.
  • Skills can now be started from the on-face menu (navigated with the rotary button on the back of the robot).
  • A skill can be set to auto-start on robot startup. You do this with a checkbox in the skills list of the web interface.
  • Current WiFi connection can now be forgotten through the on-face menu.
  • A unique robot identity is now available to the skill developer through the FURHAT_ROBOT_ID environment variable. You can access it in Kotlin/Java via System.getenv("FURHAT_ROBOT_ID").
  • Improved the accuracy of the robot's gaze.
  • The robot will now only list usable audio devices in the web interface.
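The FURHAT_ROBOT_ID variable mentioned above can be read directly, for example to tag logs per robot. A small sketch; the variable is unset when running in the SDK, in which case getenv returns null:

```kotlin
// Read the unique robot identity from the environment.
val robotId: String? = System.getenv("FURHAT_ROBOT_ID")
if (robotId != null) {
    println("Running on robot $robotId")
} else {
    println("FURHAT_ROBOT_ID not set (e.g. running in the SDK)")
}
```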

Gestures

  • Duration and strength parameters now supported in Gesture API, allowing your gestures to be defined as functions. Each stock gesture is now provided as both a variable and function to allow these parameters, for example furhat.gesture(Gestures.Smile) and furhat.gesture(Gestures.Smile(strength = 1.5)). See Gestures.
  • Implemented basic head movement parameters allowing gestures to nod, shake and roll. Added default gestures showcasing these in the Gestures singleton object (you can test these from the web interface as well). These gestures also accept an iterations parameter allowing you to define how many times the gesture should be executed, for example shaking the head twice: furhat.gesture(Gestures.Shake(iterations = 2)).
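Putting the two changes above together, stock gestures can be used in variable form or as parameterized functions. A sketch using only the gestures and parameters named above:

```kotlin
// Stock gestures: variable form and parameterized function form.
furhat.gesture(Gestures.Smile)                    // plain variable
furhat.gesture(Gestures.Smile(strength = 1.5))    // stronger smile

// Head movement gestures with an iteration count.
furhat.gesture(Gestures.Shake(iterations = 2))    // shake the head twice
furhat.gesture(Gestures.Nod)                      // single nod
```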

Web interface

  • Situation panel can now toggle between top and side view.
  • Removed the audio output selectors from SDK to instead rely on the system settings.
  • Added possibility to simulate user head-rotation (basically where a user is looking at) in top view for virtual users.
  • Interaction message log now shows information about which user spoke an utterance in a multi-party setting.
  • Wizard buttons now have a key attribute that allows you to assign a keyboard-button-press to a button. These keys are case insensitive, i.e. if you press the E key on your keyboard, it will trigger both onButton(key = "E") and onButton(key = "e").
  • Wizard buttons now have a visible attribute that allows you to hide specific buttons, likely in combination with the key attribute mentioned above.
  • Improvements to the look and feel of the web interface, especially for tablet devices.
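A sketch of the new wizard-button attributes described above; the button labels and handler bodies are illustrative, and the exact onButton signature beyond key and visible is assumed:

```kotlin
// Button with a keyboard shortcut: pressing E or e triggers it
// (keys are case insensitive).
onButton("Greet", key = "e") {
    furhat.say("Hello!")
}

// Hidden button that can only be triggered from the keyboard,
// combining the visible and key attributes.
onButton("Stop listening", key = "s", visible = false) {
    furhat.stopListening()
}
```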

User management

  • A SingleUserEngagementPolicy is added for single-user interactions, where only one user is expected to interact with Furhat. See Engaged users.
  • Added support for elliptical interaction spaces in the SimpleEngagementPolicy.
  • Added method furhat.attendAll() to attend all users. Furhat will toggle between engaged users.
  • New method to detect if a user is attending (facing) a location, user.isAttending() : Boolean. See Checking Furhat's Attention.
  • Furhat will no longer look down when the camera loses an attended user. It is now up to the skill developer to determine what should happen in these cases (by implementing the onUserLeave handler).
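The attention changes above can be combined in a single state. A sketch based on the standard skill template, where Idle is a hypothetical fallback state:

```kotlin
val Interaction = state {
    onEntry {
        furhat.attendAll()   // Furhat toggles between all engaged users
    }
    onUserLeave(instant = true) {
        // Furhat no longer looks down automatically when a user is lost,
        // so decide here what should happen instead.
        if (users.count > 0) furhat.attend(users.other) else goto(Idle)
    }
}
```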

NLU

  • WildcardEntity can be used to match arbitrary strings, as part of an intent. See Wildcard entities.
  • Speech recognition alternatives are supported through furhat.param.recognitionAlternatives (default is 1, i.e. only the top result is considered). Increasing this number makes the system consider several recognition alternatives and pick the one that best matches the currently active intents. See Increasing recognition alternatives.
  • Each intent can now have a confidence threshold that deviates from the default (by overriding getConfidenceThreshold()). The default confidence threshold can be set through furhat.param.intentConfidenceThreshold (default is 0.5). See Intent classification.
  • Words in intent examples can be preceded with a + to mark them as mandatory for the intent (in the same way as entities are considered mandatory).
  • Added method to raise intents, raise(intent), in addition to the previously existing raise(response, [intent]).
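The recognition and confidence parameters above can be tuned as in this sketch. The values are illustrative, and the getExamples signature and intent class shape are simplified assumptions:

```kotlin
// Consider several ASR hypotheses and pick the best intent match.
furhat.param.recognitionAlternatives = 3

// Lower the global confidence threshold from its default of 0.5.
furhat.param.intentConfidenceThreshold = 0.4

// An individual intent can override the threshold for itself.
// The +burger example marks "burger" as mandatory for the intent.
class OrderBurger : Intent() {
    override fun getExamples() = listOf("I want to order a +burger")
    override fun getConfidenceThreshold() = 0.7
}
```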

Speech

  • Several improvements to utterances (see Utterances)
    • Utterances support blocking parts (i.e. things that take time), using blocking { }
    • Utterances support +delay() to add delays in speech
    • Utterances can now include other utterances using +myOtherUtterance
  • Typed voices added, allowing you to get typed support for voice gestures such as the Cereproc William voice's non-verbal sounds (ehm, mm, hmm, etc.). More info at Speech docs.
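The new utterance building blocks above can be combined as in this sketch, where fetchWeather() is a hypothetical slow call and the delay unit is assumed to be milliseconds:

```kotlin
val greeting = utterance {
    +"Nice to meet you."
}

val intro = utterance {
    +"Hello."
    +delay(500)                   // add a pause in speech
    +greeting                     // include another utterance
    +blocking { fetchWeather() }  // something that takes time, e.g. an API call
    +"It looks sunny today."
}

furhat.say(intro)
```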

Listening

  • All versions of furhat.ask and furhat.listen can take all timeout parameters (endSil, timeout, maxSpeech). Default timeout parameters can be set through the furhat.param object. See Listening docs.
  • It is now possible to handle incremental (interim) speech recognition results while the user is still speaking. See onInterimResponse handler.
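A sketch of the listening changes above inside a state; the timeout values are illustrative and their units are assumed to be milliseconds:

```kotlin
val Ordering = state {
    onEntry {
        // Per-call timeout parameters on ask.
        furhat.ask("What would you like to order?",
            endSil = 1000, timeout = 8000, maxSpeech = 15000)
    }

    // React to incremental recognition results while the user is still speaking.
    onInterimResponse {
        println("Heard so far: ${it.speech.text}")
    }
}
```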

Bugfixes

  • Pressing the ESC key on the robot will no longer cause the face to crash.
  • Fixed a bug related to face calibration sliders not working.
  • Fixed bug where Virtual Furhat would sometimes not properly show certain textures.
  • Fixed bug where the camera feed occasionally would not show up in the web interface.
  • Fixed bug where IDs of users were switching when many users were visible in camera.
  • Fixed rare corruption in speech input.
  • Fixed a problem where the speech recognizer would not respond.
  • Fixed bug where a virtual user created in the web interface would appear in the wrong position.
  • Fixed bug where NLU intents and entities would always use the en-US examples despite the input language being set to another language.

Known issues

  • Virtual Furhat head movements Nod, Shake and Roll produce somewhat abnormal movement patterns.
  • Uploading a skill through the web interface does not prompt the user when overwriting an existing skill with the same name.
  • It is possible for Furhat to look cross-eyed when focusing at a point too close to the face.
  • When exiting a skill, Furhat is not reset to default settings but rather keeps the latest state from the exited skill.
  • When skill is running, clicking on play buttons will not send a command to stop the current skill and start a new one.
  • GazeEvents floods the events log even when there is no change. This can cause the web interface to become unresponsive on slower computers.

1.4.1

  • Bugfixes:
    • The event SenseSkillGUIConnected, which Skill GUIs require to let the skill know a GUI has connected, was re-added after being accidentally removed.
    • Removed the currently broken text-input in dashboard and wizard interface.

1.4.0

General

  • Bugfixes in various areas, notably in speech input, where several improvements have been made.
  • New page added to the web interface for Wizarding Furhat skills - with more space and assignable sections for wizard buttons.

Skill creation and runtime

  • The SDK is now started with a standalone script instead of using the previous gradle script, allowing a much faster startup, see getting started.
  • The skills page of the web interface has been removed, and skill creation has moved to a command-line tool that also has a command for packaging skills so that they can be uploaded to the robot; see building and creating skills.
  • Skills now run as binaries on robots, allowing a much faster startup. As part of this, on-robot compilation of skills has been removed.
  • Credentials for Amazon Web Services and Google Cloud are no longer needed by third-party developers; these functions now come bundled with the SDK.

NLU improvements

  • Pre-loading and re-loading of intents, for faster and more dynamic natural language understanding, see NLU docs.
  • Support for lists of intents for handling several intents in the same way, see in NLU docs.
  • Multi-intent classification, allowing you to catch two intents in one, for example "hi there, I want to order a burger". See in Listening docs.
  • Complex Enum entities now supports wildcards, allowing you to catch patterns like "remind @who to @what". See in NLU docs.
  • Added Wikidata NLU entities that allow lookup of rich and deep entities for chit-chat purposes. See in NLU docs.
  • Changed Grammar entity syntax to be more compact. See in NLU docs.

Flow improvements

  • Added snippets - a conversational building block that can be used together with the normal state machine to build chit-chat type skills or enrich your existing task-oriented skills with chit-chat. See Snippets.
  • Support for raising responses and optionally attaching a new intent. See how this allows you to pass the response object around in Flow docs and how this can be used to, similar to lists of intents as mentioned above, group functionality for several types of intents - in NLU docs.
  • The syntax has changed for Utterances used in ask. Previously you could do furhat.ask({ +"A text to be spoken" }) but we have now moved the utterance builder last so that you can do a more Kotlinesque furhat.ask { +"A text to be spoken" } without the parentheses. A similar change for furhat.say will be implemented in the next release.
  • Added partial states and the possibility to include these into your states, as a flexible way of reusing triggers without using inheritance. More info, see Flow docs.

Logging

  • Dialog logger functionality added, allowing you to log robot and user utterances, matched intents and audio of user-speech. The logging can be done locally or to the cloud. See Logging docs.
  • Flow logger added, allowing you to log state transitions and triggers to trouble-shoot complex flows. See Logging docs.

Legacy versions

Important note: Releases from 1.4.0 onwards work only with Generation-2 Furhat robots (our current product). Legacy releases, up to 0.3.5, support Generation-1 robots and are documented in the legacy documentation.

0.3.5

  • Bugfix for issue related to Amazon Polly voices.

0.3.4

  • Introduction of an Utterance object that can be used instead of a string in furhat.say() and furhat.ask(). An Utterance allows for behaviors (such as gestures) in the middle of the text, audio playback, and randomized parts. Read more
  • Voice SSML functionality callable from within the flow without hard-coding SSML for different providers. Read more
  • Introduction of automatic behavior (microexpressions: blinking, facial movements, eye movements) configuration. Read more
  • A hierarchy of Gestures to resolve conflicting gestures (for example CloseEyes and Blink conflicting), as well as persistent gestures. Read more
  • Hosted GUIs can now have a static port defined in the skill.
  • SkillGUI removed in favor of HostedGUI and RemoteGUI. Read more
  • Bugfixes

0.3.3

  • Fixes to hosted GUIs deployed on the robot, which previously used the wrong broker address.

0.3.2

  • Fixes to remote GUI default hostname.
  • Important bugfixes.

0.3.1

  • Fixed bug related to stemming for Intents when using input languages other than Swedish, German, and English.

0.3.1

  • New trigger: onTime for executing timed actions. See onTime docs.
  • The delay() method now correctly accepts events while waiting. See Delay docs.
  • The FlowEvent class has been deprecated. Instead, use the Event class.
  • Added a convenient method to create events. See Raising and sending events.
  • Anonymous called states added for convenient wrapping of blocking methods (for example API calls). See Calling anonymous states.
  • Important: To prevent undesired side-effects, triggers in the caller state will from now on cancel execution in called states. To avoid this, you must add the parameter instant=true to the trigger. Such triggers are not allowed to call other states. In most cases, you will not have to change much in your old skills. See Flow docs for further information.
  • furhat.glance, furhat.attend, and furhat.gesture are now all per default called asynchronously.

Required changes:

  • In the Interaction state, add (instant=true) to the onUserEnter and onUserLeave triggers, i.e. onUserLeave(instant = true), due to the above change.
  • In the Interaction state, change glance(it, 1) to glance(it), since 1 implies 1 millisecond, which was an error in the skill template.
  • Change FlowEvent to Event in your flows.
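Applied together, the required changes above yield an Interaction state like this sketch, where Idle is a placeholder state:

```kotlin
val Interaction = state {
    onUserEnter(instant = true) {
        furhat.glance(it)   // no duration argument; glance(it, 1) meant 1 millisecond
    }
    onUserLeave(instant = true) {
        if (users.count > 0) furhat.attend(users.other) else goto(Idle)
    }
}
```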

0.2.1

  • The "Roll" parameter in the face settings menu has had its scale changed from [-0.3, 0.3] to [-1.0, 1.0].

0.2.0

Required change (A): New skill loading system. Skills will no longer be loaded from furhatSkills.gradle.

  • For SDK users, please remove your furhatSkills.gradle file and reimport skills either via the interface or by adding the absolute paths to the skills in skillLocations.txt. Skills should be newline-separated.
  • For robots, please export your skills before updating, then delete the skills from the skills list, perform the update, and then reimport your skills.

Required change (B): Skills now have to define their furhat-commons version explicitly in their build.gradle file. To always use the latest version of furhat-commons, use "+" instead of a number. Here is an example build.gradle file:


buildscript {
    repositories {
        maven { url "http://furhatrobotics.bintray.com/furhat" }
        mavenCentral()
    }

    dependencies {
        classpath 'com.furhatrobotics.furhatos:skill-configuration:+'
    }
}

plugins {
    id "org.jetbrains.kotlin.jvm" version "1.2.30"
}

apply plugin: 'com.furhatrobotics.skill-configuration'

repositories {
    maven { url "http://furhatrobotics.bintray.com/furhat" }
    jcenter()
}

dependencies {
    compile 'com.furhatrobotics.furhatos:furhat-commons:0.2.8'
}

Required change (C): Intents that use examples need to update their getExamples(lang: Language?) method to use Language instead of Language?.

  • Added support for dynamic intents through the classes SimpleIntent and DynamicIntent, see NLU docs.
  • Fixed an issue with importing skills, and an issue where skills using different furhat-commons versions caused problems with the SDK.
  • Robots created with this version will be able to start without an internet connection.
  • Random blocks now consistently work as intended.
  • Fixed a bug loading the Color enum, and a general bug with using enums in onResponse.
  • States that are called will show their buttons on the web interface alongside buttons from the state that called them.
  • Added a reference to UserManager in Furhat.
  • Console is now showing errors and logs from the System and Skill.
  • Cereproc gesture tags now work as part of other text.
  • Classes implementing the TextGenerator interface should no longer implement fromPattern(); use generate() instead.
  • Updated NLU engine. Contact us if your NLU examples now cause odd behaviour.

0.1.7

  • Internal builds for testing

0.1.6

  • SDKs will now update to the latest version when first installed.
  • FurhatActionWrapper renamed to FurhatAPI: intended for use outside of the flow.
  • FurhatFlowWrapper renamed to Furhat: intended to be used inside the flow. Add extension methods to this class when you need reusable methods inside the flow (see docs on this here).
  • Sample Welsh text updated based on feedback from native speakers.

0.1.5

  • Improved template for skills containing boilerplate situation management (handling users entering and leaving). New skills will use this template. See skill files.

0.1.4

  • The default voice is now Brian, a British voice from Amazon Polly.
  • The microphone is now automatically set to the default OS microphone when first starting the SDK.
  • Furhat's gaze can no longer be adjusted by clicking on the gaze panel. This should make it easier to add virtual users without accidentally moving Furhat's gaze.
  • Enum entities are now "stemmed" by default. Adding the word "apple" will by default also handle cases like "apples".

0.1.3

  • Bugfix: SkillGUIs are now able to correctly use templates.

0.1.2

  • Bugfix: Furhat now continuously attends users.
  • Bugfix: New Linux and Windows virtual Furhats, fixing an issue where some machines could not start the virtual Furhat.
  • Bugfix: Linux skills will now appear inside the root folder's skills/ folder.
  • Bugfix: All textures use the same configuration file; there is no longer a need to manually save all textures to the same configuration.

0.1.1

  • Bugfix: Texture change is now enabled on Mac devices.
  • Moved the file containing the google ASR credentials (google_credentials.json) to the properties folder.

Required change: move your google_credentials.json into properties/ or re-enter it in the web interface.

  • Added several methods to furhat: furhat.stopSpeaking(), furhat.stopListening(), furhat.isSpeaking, and furhat.isListening.
  • Performance fix: New virtual Furhat builds that use less memory, now running at 30 fps. If you experience lag, you may improve the experience by reducing the window size.
  • Personas have been removed.

Required change: Use methods like furhat.setTexture(String) and furhat.setVoice(String)

0.1.1

  • Introduced furhat and users top-level objects and moved all actions and user management methods to be object methods. The remaining top-level methods are all flow-related (i.e. transitioning between states, etc.).

Required change: Refactor according to the actions and users docs, for example change all say() calls to furhat.say() and currentUser to users.current.
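A before/after sketch of the refactoring described above:

```kotlin
// Before: top-level methods
// say("Hello")
// attend(currentUser)

// After: methods on the furhat and users objects
furhat.say("Hello")
furhat.attend(users.current)
```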

  • Renamed createdSkills.gradle to furhatSkills.gradle.

Required change: If you had a createdSkills.gradle please rename it to furhatSkills.gradle

  • Renamed skill/ folder to skills/.

Required change: Rename your skill folder and update your furhatSkills.gradle file

  • Properties (*.properties) files now live inside properties/ with the exception of gradle.properties and log4j2.properties that should be in the root.

Required change: If you had property files, for example synthesizer.properties with your amazon credentials, please move them to properties/

  • createPersona() has been removed and a Persona class has been introduced. For usage, see Persona. The previous setPersona() has been moved to the furhat top-level class, so you use it as furhat.setPersona(Persona(...)).

Required change: Refactor your persona management according to the Persona docs.

0.0.19

  • Bugfixes.