
A Step-by-Step Guide to Using Local AI with Home Assistant


With the latest Home Assistant release, you can now have an entirely local AI that helps you control your smart home. This is made possible by the integration of Home Assistant Assist and Ollama. Let’s explore how this setup works, its pros and cons, and whether it’s usable at this stage. I’ll also share how to set it up, so you can try it out yourself. Let’s dive in!

Setting Up Home Assistant and Ollama

Step 1: Preparing the Environment

First, ensure you have Home Assistant up and running. Any Home Assistant installation type will do.

If you don’t have Home Assistant yet, then register for my upcoming Home Assistant webinar where we will talk about different installation types and their pros and cons. It’s free, and you can find the link below:

https://automatelike.pro/webinar

Tired of Reading? Watch My YouTube Video!

If you prefer watching a video over reading, check out my YouTube video on this topic. It covers everything discussed here and provides a visual walkthrough of the setup process.

Step 2: Running Ollama Locally

Next, you need to run Ollama on a device that is on the same network as your Home Assistant.

  • Set Up Ollama: Download the Ollama client from the Ollama website. It’s available for Windows, Linux, and Mac. I’m using a Mac with an M1 processor, and it works decently enough on it for testing and playing around.

After downloading Ollama, start the app. You should see a cute Ollama icon indicating that it’s running (at least that’s what I see on Mac; I’m pretty sure it’s the same on Windows and probably on Linux).

Next, expose your Ollama setup to your local network so Home Assistant can connect to it:

  1. Export to Host: Set the OLLAMA_HOST environment variable to make Ollama accessible over your local network. Follow the instructions for your OS on this link:
    https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server
  2. Restart your Ollama app.
  3. Set IP and Port: Typically, you’ll use your laptop’s IP address and the default port (e.g., 11434). You can verify the connection with the sketch below.
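
Before moving on, you can confirm that Ollama is actually reachable from another machine on the network by querying its REST API. Here is a minimal sketch in Python (the IP 192.168.8.38 is my example value; substitute your own):

    import requests

    # IP of the machine running Ollama (example value, use your own)
    OLLAMA_URL = "http://192.168.8.38:11434"

    # /api/tags lists the models this Ollama instance has downloaded
    response = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
    response.raise_for_status()

    for model in response.json().get("models", []):
        print(model["name"])  # e.g. "llama3.1:latest"

If this prints your model list, Home Assistant on the same network should be able to connect too.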

Step 3: Integrating Ollama with Home Assistant

To integrate Ollama with Home Assistant:

  1. Add the Ollama Integration: Go to Settings > Devices & Services > Add Integration and search for Ollama.
  2. Type IP and Port: Enter the IP address and port of the device where Ollama is running. The default Ollama port is 11434. As an example, my laptop’s IP is 192.168.8.38, so this is what I entered:
Adding the Home Assistant Ollama integration
Don’t forget to type http:// in front of the IP or it will not work.
  3. Select LLM Model: Choose the best available model, such as llama3.1:latest. This model is powerful and comes from Meta. (If the model isn’t listed, you may first need to download it, e.g., by running ollama pull llama3.1 in a terminal.) If you are reading this in the future, select llama4 or 5 or whatever the newest version is.
llama3.1:latest LLM model is the best working local model at the moment.
  4. Edit your Home Assistant Pipeline: Click on the Assist button (upper right) > down arrow > Manage assistants, and select llamaX.X (currently llama3.1; in the future, who knows) as the conversation agent.
Edit your Pipeline to add LLAMA as conversation agent.
  5. Click on the settings wheel button:
    • Add Instructions: Add specific instructions if you want your Assistant to act in a specific way (e.g., Harry Potter, Super Mario, passive-aggressive, etc.).
    • Select Assist if you want to allow Assist to control your devices, or No control if you only want to query for information and statuses.
Ensure the AI has the right permissions: No control is equal to read-only, Assist is equal to full control.
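
Under the hood, the conversation agent forwards what you type, together with your custom instructions, to Ollama’s chat endpoint. The sketch below is a simplified illustration of that round trip, not Home Assistant’s actual code; the IP and model name are assumptions from my setup:

    import requests

    OLLAMA_URL = "http://192.168.8.38:11434"  # example IP, use your own

    payload = {
        "model": "llama3.1:latest",
        "stream": False,
        "messages": [
            # Roughly what the pipeline's "Instructions" field becomes
            {"role": "system", "content": "You are a smart home assistant."},
            # What you type into the Assist dialog
            {"role": "user", "content": "Are the living room lights on?"},
        ],
    }

    response = requests.post(f"{OLLAMA_URL}/api/chat", json=payload, timeout=60)
    response.raise_for_status()
    print(response.json()["message"]["content"])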

Glossary

Here are some terms that might help you understand this setup better:

  • Home Assistant: An open-source home automation platform that focuses on privacy and local control.
  • Ollama: A tool for running large language models locally; Home Assistant can use it as a conversation agent.
  • LLM (Large Language Model): A type of AI model designed to understand and generate human language.
  • Meta: The company formerly known as Facebook, which developed the LLaMA AI models.
  • Expose Switch: A toggle in Home Assistant that determines whether a device can be controlled by the AI.

If you want more, download my Smart Home Glossary. It’s one big PDF file full of smart home wisdom, and it’s entirely free.

Advanced Configuration

Controlling Devices

To control devices with the Home Assistant Assist and local AI:

  1. Turn on the Expose Switch: Ensure the Expose switch is turned on for the devices you want to control. You can find the menu by clicking on the device of interest > settings wheel > Voice assistants. Or go to Settings > Voice assistants > Expose for bulk control.

Defining Aliases

You can help Home Assistant Assist identify devices better by defining aliases:

  1. Select Device: Click on an entity, then on the settings wheel.
  2. Define Aliases: Add as many aliases as you want for better recognition by Home Assistant Assist and the local AI.
Aliases are optional, but recommended. Exposing is a must if you want to test the AI and HA Assist.

Testing the Local AI

Basic Commands

In my test environment, everything is ready to be used. I clicked on the Assist button and started typing commands. Here are a few examples:

  1. Command: “My eyes hurt, can you stop the show?”
    • Result: The AI turned off all devices in the living room, even though I only wanted the TV off.
  2. Command: “I’m bored, can you entertain me in the living room?”
    • Result: The AI turned on all devices in the living room, including the lights. Apparently it thinks that as long as some lights are on, I should be entertained.
Asking Local AI to control my devices is now possible but not always successful.

Inconsistent Responses

The responses from Ollama and the local AI model are inconsistent. Sometimes it understands the command correctly, but other times it doesn’t. Here are some observations:

  • Correct Responses: Occasionally, it does exactly what I ask.
  • Incorrect Responses: Often, it either turns on/off everything in a room or says it cannot help.
  • Hallucinations: The AI sometimes makes up information, such as saying lights are off when they aren’t.
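
If you suspect the AI is hallucinating, you can check the real state of an entity against Home Assistant’s own REST API instead of trusting the answer. A minimal sketch, assuming a long-lived access token (created in your HA profile) and a hypothetical entity ID light.living_room:

    import requests

    HA_URL = "http://homeassistant.local:8123"  # your Home Assistant address
    TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # paste your token here

    headers = {"Authorization": f"Bearer {TOKEN}"}

    # Ask Home Assistant directly for the entity's current state
    response = requests.get(
        f"{HA_URL}/api/states/light.living_room", headers=headers, timeout=5
    )
    response.raise_for_status()
    print(response.json()["state"])  # "on" or "off"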

Fun with AI

I set my AI to respond in a sassy and passive-aggressive style for fun. Although it doesn’t always respond this way, it’s amusing when it does. If you want to do the same:

  1. Edit Pipeline: Go to Manage assistants and click on the settings button next to the conversation agent field.
  2. Add Instructions: Add to the existing instructions something like “be sassy and passive-aggressive.”
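
To see how much the instruction string changes the tone, you can replay the same question against Ollama directly with the persona baked into the system prompt. A toy example, reusing the assumed IP and model from earlier:

    import requests

    OLLAMA_URL = "http://192.168.8.38:11434"  # example IP, use your own

    # Same question, two personas: compare how the instructions change the tone
    for persona in ("You are a helpful smart home assistant.",
                    "You are a smart home assistant. Be sassy and passive-aggressive."):
        payload = {
            "model": "llama3.1:latest",
            "stream": False,
            "messages": [
                {"role": "system", "content": persona},
                {"role": "user", "content": "Turn on the living room lights."},
            ],
        }
        reply = requests.post(f"{OLLAMA_URL}/api/chat", json=payload, timeout=60)
        print(persona, "->", reply.json()["message"]["content"], sep="\n", end="\n\n")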

Is Local AI usable ATM?

While the Ollama local AI integration with full control in Home Assistant is a significant step forward, it’s not yet reliable for production use. It’s great for playing and testing, but not consistent enough for daily use. So, apart from experimenting in a safe sandbox, I recommend postponing the use of AI in Home Assistant for now.

Maybe the upcoming collaboration between Home Assistant and Nvidia will reveal a product that can handle local LLMs fast enough, and there will be an LLM optimized for Home Assistant that is smart enough, but I guess we have to wait and see.


Subscribe to my newsletter. Thank you for exploring local AI with me. I’m Kiril, and I look forward to seeing you in the next guide. Bye for now!
