Completing these steps will give you an LLM-powered web browser in Home Assistant through the Model Context Protocol (MCP). The example below uses the MCP Fetch server; the same approach works for other MCP servers, such as Puppeteer.
This assumes you already know about the following:
- Home Assistant
- Voice Assistant with a conversation agent (OpenAI, Google Gemini, Anthropic, etc.)
- A Python virtual environment for running the MCP server
This will install mcp-proxy and the MCP Fetch server.
Install dependencies:
$ uv pip install mcp-proxy mcp-server-fetch
The next step will start an SSE proxy and spawn a local stdio MCP server that can fetch web pages. The proxy spawns the command without any environment variables set, so we give the full path to uv.
$ mcp-proxy --sse-port 42783 -- /usr/local/bin/uv run mcp-server-fetch
...
INFO: Uvicorn running on http://127.0.0.1:42783 (Press CTRL+C to quit)
The SSE server is now exposed at http://127.0.0.1:42783/sse. You can set flags to change the IP and port the proxy listens on.
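If Home Assistant runs on a different machine than the proxy, the proxy must listen on a reachable interface rather than loopback. A sketch using mcp-proxy's host/port flags (flag names can vary between mcp-proxy versions; verify with `mcp-proxy --help` before relying on them):

```shell
# Bind the proxy to all interfaces instead of 127.0.0.1 so that a
# Home Assistant instance on another host can reach the SSE endpoint.
# --sse-host is based on mcp-proxy's documented flags; confirm it exists
# in your installed version with `mcp-proxy --help`.
mcp-proxy --sse-host 0.0.0.0 --sse-port 42783 -- /usr/local/bin/uv run mcp-server-fetch
```

Exposing the proxy beyond localhost means anyone on the network can drive the MCP server, so restrict access with a firewall where appropriate.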
Manually add the integration
Set the SSE Server URL to your MCP proxy server SSE endpoint e.g. http://127.0.0.1:42783/sse. Make sure the URL ends with /sse.
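A common mistake is entering the proxy's base URL without the /sse suffix. A small, hypothetical helper (not part of Home Assistant) illustrating the rule:

```python
def sse_endpoint(base_url: str) -> str:
    """Append /sse to an mcp-proxy base URL if it is not already present."""
    base = base_url.rstrip("/")
    return base if base.endswith("/sse") else base + "/sse"


print(sse_endpoint("http://127.0.0.1:42783"))      # http://127.0.0.1:42783/sse
print(sse_endpoint("http://127.0.0.1:42783/sse"))  # http://127.0.0.1:42783/sse
```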
The integration will create a new LLM API called mcp-fetch that is available to conversation agents. It does not add any other entities or devices.
- Navigate to your existing conversation agent integration and reconfigure it.
- Set the LLM Control to mcp-fetch.
- Update the prompt to be something simple for now, such as:
  You are an agent for Home Assistant, with access to tools through an external server. This external server enables LLMs to fetch web page contents.
Open the conversation agent and ask it to fetch a web page:

Thank you. Can I use the LLM for Assist and mcp-fetch at the same time? That is, would the LLM still be able to control my home while using mcp-fetch?