# Home Assistant Model Context Protocol integration
## TL;DR
Completing these steps will give you an LLM-powered web scraper in Home
Assistant through the [Model Context Protocol](https://github.com/modelcontextprotocol), with an example of how to build a template entity that extracts news headlines for a display.
## Prerequisites
This assumes you already know about the following:
- Home Assistant
- A Voice Assistant with a Conversation Agent (OpenAI, Google Gemini, Anthropic, etc.)
- A Python virtual environment for running the MCP server
## Install MCP Proxy & Fetch MCP Server
This will install [mcp-proxy](https://github.com/sparfenyuk/mcp-proxy) and the [MCP Fetch Server](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch).
Install dependencies:
```bash
$ uv pip install mcp-proxy mcp-server-fetch
```
The next step starts an SSE proxy that spawns a local stdio MCP server capable of fetching web pages. The proxy
spawns the command without any environment variables, so we pass our `PATH` through explicitly.
```bash
$ mcp-proxy --sse-port 42783 --env PATH "${PATH}" -- uv run mcp-server-fetch
...
INFO: Uvicorn running on http://127.0.0.1:42783 (Press CTRL+C to quit)
```
The SSE server is now exposed at `http://127.0.0.1:42783/sse`. You can set flags
to change the IP and port the proxy listens on.
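With the proxy still running, you can sanity-check that the endpoint is reachable with `curl` (the SSE stream stays open until you interrupt it):

```shell
# Connect to the SSE endpoint; -N disables buffering so events print as they arrive
curl -N http://127.0.0.1:42783/sse
```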
## Configure Model Context Protocol Integration
> [!IMPORTANT]
> This integration is currently in review: https://github.com/home-assistant/core/pull/135058
Manually [add the integration](https://my.home-assistant.io/redirect/config_flow_start/?domain=mcp) and set the *SSE Server URL* to your MCP proxy server's SSE endpoint, e.g. `http://127.0.0.1:42783/sse`. Make sure the URL ends with `/sse`.
The integration will create a new LLM API called `mcp-fetch` that is available to conversation agents. It does not add any other entities or devices.
## Configure Conversation Agent
1. Navigate to your existing conversation agent integration and reconfigure it
1. Set the *LLM Control* to `mcp-fetch`
1. Update the prompt to something simple for now, such as: `You are an agent for Home Assistant, with access to tools through an external server. This external server enables LLMs to fetch web page contents.`
## Try it Out
Open the conversation agent and ask it to fetch a web page.
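For a reproducible first test, you can also call the agent from Developer Tools → Actions (the `agent_id` below is an assumption; use whatever your conversation agent entity is named):

```yaml
action: conversation.process
data:
  agent_id: conversation.google_generative_ai
  text: Please fetch https://www.home-assistant.io and summarize what the page is about.
```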
## Prompt Engineering
Let's now try to use the tool to make a sensor. We should ask the model to respond more succinctly:
```yaml
action: conversation.process
data:
  agent_id: conversation.google_generative_ai
  text: >-
    Please visit bbc.com and summarize the headlines. Please respond with
    succinct output as the output will be used as headline for an eInk display
    with limited space.
```
We could improve this further by giving a few-shot example of headlines; however, the model already follows instructions well and produces a nice headline:
```yaml
response:
  speech:
    plain:
      speech: Trump threatens Greenland and Panama Canal.
      extra_data: null
  card: {}
  language: en
  response_type: action_done
  data:
    targets: []
    success: []
    failed: []
conversation_id: 01JH2B56RMR2DABQQGAAA9X95D
```
## Template Entity with Service Responses
Below is an example `template` entity definition that uses Model Context Protocol tool calling to fetch a web page:
```yaml
template:
  - trigger:
      - platform: time_pattern
        hours: "/1"
      # Used for testing
      - platform: homeassistant
        event: start
    action:
      - action: conversation.process
        data:
          agent_id: conversation.google_generative_ai
          text: >-
            Please visit bbc.com and summarize the single most important headline.
            Please respond with succinct output as the output will be used as
            headline for an eInk display with limited space, so answers must be
            less than 200 characters.
        response_variable: headline
    sensor:
      - name: Headline
        state: "{{ headline.response.speech.plain.speech }}"
        unique_id: "d3641cdf-aa9f-4169-acae-7f7ba989c492"
```
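To put the headline on a dashboard, a Markdown card can render the sensor state directly (a sketch; this assumes the sensor above ends up with the entity ID `sensor.headline`):

```yaml
type: markdown
content: "{{ states('sensor.headline') }}"
```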