Completing these steps will give you an LLM-powered web browser in Home Assistant using Puppeteer through the Model Context Protocol.
This assumes you already know about the following:
- Node
- Python
- Home Assistant
- Conversation Agent (OpenAI, Google Gemini, Anthropic, etc.)
This will install mcp-proxy and MCP Fetch Server.
Install packages
$ uv pip install mcp-proxy mcp-server-fetch

This will start an SSE proxy and spawn a local stdio MCP server that can fetch web pages:

$ mcp-proxy --sse-port 42783 --env PATH "${PATH}" -- uv run mcp-server-fetch
...
INFO: Uvicorn running on http://127.0.0.1:42783 (Press CTRL+C to quit)

The SSE server is now exposed at http://127.0.0.1:42783/sse. You can set flags to change the IP and port the proxy listens on.
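If Home Assistant runs on a different machine than the proxy, the proxy needs to listen on a reachable address rather than loopback. A sketch, assuming the --sse-host flag of the mcp-proxy version shown above (confirm the flag name with mcp-proxy --help on your install):

```
# Bind the proxy to all interfaces instead of 127.0.0.1
# (--sse-host is an assumption based on the build used above):
$ mcp-proxy --sse-host 0.0.0.0 --sse-port 42783 --env PATH "${PATH}" -- uv run mcp-server-fetch
```

If you expose the proxy beyond localhost, keep in mind it has no authentication of its own, so restrict access to your trusted network.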
Manually add the Model Context Protocol integration
Set the SSE Server URL to your MCP proxy server's SSE endpoint, e.g. http://127.0.0.1:42783/sse. Make sure the URL ends with /sse.
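A missing /sse suffix is the most common reason the integration fails to connect, so it is worth sanity-checking the URL before pasting it in. A small shell check, using the example URL from this guide:

```shell
# Verify the SSE Server URL ends with /sse before configuring the integration
URL="http://127.0.0.1:42783/sse"
case "$URL" in
  */sse) echo "ok: URL ends with /sse" ;;
  *)     echo "error: URL must end with /sse" ;;
esac
```

This prints the "ok" line for the example URL; swap in your own proxy address if it differs.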
The integration should create a new LLM API called mcp-fetch.

- Navigate to your existing conversation agent integration and reconfigure it.
- Set the LLM Control to mcp-fetch.
- Update the prompt to be something simple for now, such as:

You are an agent for Home Assistant, with access to tools through an external server. This external server enables LLMs to fetch web page contents.

Thank you. Can I use the LLM for Assist and mcp-fetch at the same time? I.e., would the LLM still be able to control my home while using mcp-fetch?