A while back I played around with the HASS Voice Assistant, and pretty easily got to a point where STT and TTS were working really well on my local installation. I also got the hardware to build Wyoming satellites with wakeword recognition.

However, what kept me from going through the effort of setting everything up properly (and finally getting fucking Alexa out of my house) was the “all or nothing” approach HASS seemingly takes to intent recognition. You either:

  • use the built-in Assist conversation agent, which is a pain in the ass because it matches what your STT recognized 1:1, letter by letter, so it’s almost impossible to get it to do anything unless you spoke perfectly (and forget about, for example, putting something on your ToDo list: “Todo”, “todo”, “To-Do”, … are all not recognized, and have fun getting your STT to reliably produce the “ToDo” spelling!), or
  • you slap a full-blown LLM behind it, which either forces you to again rely on a shitty company, or to host the LLM locally; but even in the latter case, on decent hardware (no H100, of course, but with a GPU at least), the results were slow and shit, and due to context-size limitations you can just forget about exposing all your entities to the LLM agent.
  • You can also combine the two approaches: match exactly first, and if no intent is recognized, forward to the LLM. In practice, though, that just means that sometimes you get what you wanted (“all lights off” works maybe 70% of the time, I’d say), and the rest of the time you wait ages for an LLM response that may be correct, but often isn’t.

What I’d like is a third option: fuzzy matching on what the STT generated. There seem to have been multiple options for that through rhasspy, but that project appears to be dead? The HASS integration has not been updated in over four years, and the rhasspy repos were archived earlier this month.

Besides, it was not entirely clear to me whether you could use just the intent-recognition part of the project, forgoing the rest in favor of what HASS already brings to the table.

At this point, I am willing to implement a custom conversation agent, but I first wanted to make sure that I haven’t simply missed an obvious setting/addon/… for HASS.

My questions are:

  • are you using the HASS Voice Assistant without an LLM?
  • if so, how do you get your intents to be recognized reliably?
  • do you know of any setting/project/addon helping with that?

Cheers! Have a good start to the working week…!

  • tyler@programming.dev · 17 hours ago

    Would love to know what you find. I started to use Willow months before the creator passed away and it seemed like the only option available (not the best option, literally the only option due to all the reasons you listed). If you find something I’d love to know.

    • smiletolerantly@awful.systems (OP) · edited · 16 hours ago

Never heard of Willow before - is it this one? There seems to still be recent activity in the repo - did the creator only pass away recently? Or did someone continue the project?

      How’s your experience been with it?

      And sure, will do!

      • tyler@programming.dev · 2 hours ago

        Yeah, it was relatively recent; earlier this year, I think. Can’t remember exactly, it’s been a longgggg year. I never managed to get it integrated with HA, and after the creator passed away nobody knew if the project would be picked up by anyone else, so I just stopped trying.