@allenporter
Last active March 1, 2026 16:29

Revisions

  1. allenporter revised this gist Jan 31, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion Home Assistant Model Context Protocol.md
    @@ -52,7 +52,7 @@ to change the IP and port the proxy listens on.
    ## Configure Model Context Protocol Integration

    > [!IMPORTANT]
    -NOTE: This integration is currently in review https://github.com/home-assistant/core/pull/135058
    +NOTE: This integration is currently available in 2025.2.0b beta release

    [![Open your Home Assistant instance and start setting up a new integration.](https://my.home-assistant.io/badges/config_flow_start.svg)](https://my.home-assistant.io/redirect/config_flow_start/?domain=mcp)

  2. allenporter revised this gist Jan 8, 2025. 1 changed file with 3 additions and 3 deletions.
    6 changes: 3 additions & 3 deletions Home Assistant Model Context Protocol.md
    @@ -83,19 +83,19 @@ Open the conversation agent and ask it to fetch a web page:

    ## Prompt Engineering

    -Lets now try to use the tool to make a sensor. We should ask the model to respond more succintly:
    +Lets now experiment to use the tool to make a sensor. We should ask the model to respond more succintly, and call it from the developer tools. Here is a yaml example:

    ```
    action: conversation.process
    data:
    agent_id: conversation.google_generative_ai
    text: >-
    -Please visit bbc.com and summarize the headlines. Please respond with
    +Please visit bbc.com and summarize the first headline. Please respond with
    succinct output as the output will be used as headline for an eInk display
    with limited space.
    ```
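    (As an aside: the YAML service call above maps directly onto the body of a Home Assistant REST API request to `/api/services/conversation/process`. A minimal Python sketch of building that body — the helper name and the commented-out POST are my own illustration, not part of the gist:)

    ```python
    import json

    def build_conversation_payload(agent_id: str, text: str) -> str:
        """Build the JSON body for POST /api/services/conversation/process.

        Field names mirror the YAML service call above.
        """
        return json.dumps({"agent_id": agent_id, "text": text})

    body = build_conversation_payload(
        "conversation.google_generative_ai",
        "Please visit bbc.com and summarize the first headline. "
        "Please respond with succinct output as the output will be used as a "
        "headline for an eInk display with limited space.",
    )
    print(body)
    # Then, hypothetically:
    # requests.post(f"{base_url}/api/services/conversation/process",
    #               headers={"Authorization": f"Bearer {token}"}, data=body)
    ```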

    -We could even improve this by giving some few-shot example headlines, however, the model follows instructions well already and produces a nice headline:
    +We could even improve this by giving some few-shot example headlines, however, the model follows instructions OK already and produces a headline:

    ```
    response:
  3. allenporter revised this gist Jan 8, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion Home Assistant Model Context Protocol.md
    @@ -145,5 +145,5 @@ template:
    <img width="870" alt="Screenshot of template entity output" src="https://gist.github.com/user-attachments/assets/6f085a73-7c80-4cd7-afc5-17c129477495" />
    -Now go out and make yourself an [ESPHome E-ink display}(https://community.home-assistant.io/t/use-esphome-with-e-ink-displays-to-blend-in-with-your-home-decor/435428) if you don't have one already!
    +Now go out and make yourself an [ESPHome E-ink display](https://community.home-assistant.io/t/use-esphome-with-e-ink-displays-to-blend-in-with-your-home-decor/435428) if you don't have one already!
  4. allenporter revised this gist Jan 8, 2025. 1 changed file with 3 additions and 1 deletion.
    4 changes: 3 additions & 1 deletion Home Assistant Model Context Protocol.md
    @@ -113,7 +113,7 @@ response:
    conversation_id: 01JH2B56RMR2DABQQGAAA9X95D
    ```

    -## Template Entity with Service Responses
    +## Template Entity

    Below is an example `template` entity definition based on the Model Context Protocol tool calling to fetch a web page:

    @@ -145,3 +145,5 @@ template:
    <img width="870" alt="Screenshot of template entity output" src="https://gist.github.com/user-attachments/assets/6f085a73-7c80-4cd7-afc5-17c129477495" />
    +Now go out and make yourself an [ESPHome E-ink display}(https://community.home-assistant.io/t/use-esphome-with-e-ink-displays-to-blend-in-with-your-home-decor/435428) if you don't have one already!
  5. allenporter revised this gist Jan 8, 2025. 1 changed file with 2 additions and 3 deletions.
    5 changes: 2 additions & 3 deletions Home Assistant Model Context Protocol.md
    @@ -31,15 +31,14 @@ graph LR

    ## Install MCP Proxy & Fetch MCP Server

    -This will install [mcp-proxy](https://github.com/sparfenyuk/mcp-proxy) and [MCP Fetch Server](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch).
    +Install dependencies [mcp-proxy](https://github.com/sparfenyuk/mcp-proxy) and [MCP Fetch Server](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch).

    -Install dependencies:

    ```bash
    $ uv pip install mcp-proxy mcp-server-fetch
    ```

    -The next step will start an SSE proxy and spawn a local stdio MCP server that can fetch web pages. The proxy
    +Most MCP servers are stdio based (e.g. spawned by Claude Desktop) so we need to start a server to expose them to Home Assistant. We use `mcp-proxy` which runs an SSE server, then spawns the stdio MCP server that can fetch web pages. The proxy
    server spawns the command without any environment variables, so we set our path.

    ```bash
  6. allenporter revised this gist Jan 8, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion Home Assistant Model Context Protocol.md
    @@ -21,7 +21,7 @@ This guide will get you an LLM powered template enttiy using MCP to fetch extern
    graph LR
    A["Home Assistant LLM Conversation Agent"] <--> |sse| B["mcp-proxy"]
    B <--> |stdio| C["mcp-fetch MCP Server"]
    -B <--> |http/s| D["web pages"]
    +C <--> |http/s| D["web pages"]
    style A fill:#ffe6f9,stroke:#333,color:black,stroke-width:2px
    style B fill:#e6e6ff,stroke:#333,color:black,stroke-width:2px
  7. allenporter revised this gist Jan 8, 2025. 1 changed file with 4 additions and 0 deletions.
    4 changes: 4 additions & 0 deletions Home Assistant Model Context Protocol.md
    @@ -15,14 +15,18 @@ This assumes you already know about the following:

    ## Overview

    This guide will get you an LLM powered template enttiy using MCP to fetch external web pages:

    ```mermaid
    graph LR
    A["Home Assistant LLM Conversation Agent"] <--> |sse| B["mcp-proxy"]
    B <--> |stdio| C["mcp-fetch MCP Server"]
    B <--> |http/s| D["web pages"]
    style A fill:#ffe6f9,stroke:#333,color:black,stroke-width:2px
    style B fill:#e6e6ff,stroke:#333,color:black,stroke-width:2px
    style C fill:#e6ffe6,stroke:#333,color:black,stroke-width:2px
    style D fill:#e6ffe6,stroke:#333,color:black,stroke-width:2px
    ```

    ## Install MCP Proxy & Fetch MCP Server
  8. allenporter revised this gist Jan 8, 2025. 1 changed file with 12 additions and 0 deletions.
    12 changes: 12 additions & 0 deletions Home Assistant Model Context Protocol.md
    @@ -13,6 +13,18 @@ This assumes you already know about the following:
    - Voice Assistant with Conversation Agent (OpenAI, Google Gemini, Anthropic, etc)
    - Python virtual environment for running MCP server

    ## Overview

    ```mermaid
    graph LR
    A["Home Assistant LLM Conversation Agent"] <--> |sse| B["mcp-proxy"]
    B <--> |stdio| C["mcp-fetch MCP Server"]
    style A fill:#ffe6f9,stroke:#333,color:black,stroke-width:2px
    style B fill:#e6e6ff,stroke:#333,color:black,stroke-width:2px
    style C fill:#e6ffe6,stroke:#333,color:black,stroke-width:2px
    ```

    ## Install MCP Proxy & Fetch MCP Server

    This will install [mcp-proxy](https://github.com/sparfenyuk/mcp-proxy) and [MCP Fetch Server](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch).
  9. allenporter revised this gist Jan 8, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion Home Assistant Model Context Protocol.md
    @@ -3,7 +3,7 @@
    ## TL;DR

    Completing these steps will let you have an LLM Powered Web scraper in Home
    -Assistant through the [Model Context Protocol](https://github.com/modelcontextprotocol).
    +Assistant through the [Model Context Protocol](https://github.com/modelcontextprotocol) with an example of how you could make a template entity for extracting new headlines for a display.

    ## Pre-requisites

  10. allenporter renamed this gist Jan 8, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion MCP_README.md → Home Assistant Model Context Protocol.md
    @@ -1,4 +1,4 @@
    -# Model Context Protocol
    +# Home Assistant Model Context Protocol integration

    ## TL;DR

  11. allenporter revised this gist Jan 8, 2025. 1 changed file with 5 additions and 2 deletions.
    7 changes: 5 additions & 2 deletions MCP_README.md
    @@ -2,8 +2,8 @@

    ## TL;DR

    -Completing these steps will let you have an LLM Powered Web Browser in Home
    -Assistant using Puppeteer through the [Model Context Protocol](https://github.com/modelcontextprotocol).
    +Completing these steps will let you have an LLM Powered Web scraper in Home
    +Assistant through the [Model Context Protocol](https://github.com/modelcontextprotocol).

    ## Pre-requisites

    @@ -36,6 +36,9 @@ to change the IP and port the proxy listens on.

    ## Configure Model Context Protocol Integration

    +> [!IMPORTANT]
    +NOTE: This integration is currently in review https://github.com/home-assistant/core/pull/135058

    [![Open your Home Assistant instance and start setting up a new integration.](https://my.home-assistant.io/badges/config_flow_start.svg)](https://my.home-assistant.io/redirect/config_flow_start/?domain=mcp)

    Manually [add the integration](https://my.home-assistant.io/redirect/config_flow_start/?domain=mcp)
  12. allenporter revised this gist Jan 8, 2025. 1 changed file with 62 additions and 0 deletions.
    62 changes: 62 additions & 0 deletions MCP_README.md
    @@ -63,5 +63,67 @@ Open the conversation agent and ask it to fetch a web page:
    <img width="396" alt="Screenshot 2025-01-07 at 10 34 10 PM" src="https://gist.github.com/user-attachments/assets/07562834-a35c-4cef-9cef-70796f549ef6" />


    ## Prompt Engineering

    Lets now try to use the tool to make a sensor. We should ask the model to respond more succintly:

    ```
    action: conversation.process
    data:
    agent_id: conversation.google_generative_ai
    text: >-
    Please visit bbc.com and summarize the headlines. Please respond with
    succinct output as the output will be used as headline for an eInk display
    with limited space.
    ```

    We could even improve this by giving some few-shot example headlines, however, the model follows instructions well already and produces a nice headline:

    ```
    response:
    speech:
    plain:
    speech: Trump threatens Greenland and Panama Canal.
    extra_data: null
    card: {}
    language: en
    response_type: action_done
    data:
    targets: []
    success: []
    failed: []
    conversation_id: 01JH2B56RMR2DABQQGAAA9X95D
    ```
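    (The nested path used later in the template, `headline.response.speech.plain.speech`, can be mirrored in plain Python. A minimal sketch with a hypothetical helper name, walked against a sample shaped like the response above:)

    ```python
    def extract_speech(result: dict) -> str | None:
        """Walk response.speech.plain.speech, mirroring the Jinja template
        `{{ headline.response.speech.plain.speech }}`.

        Returns None if any level is missing, rather than raising.
        """
        node = result
        for key in ("response", "speech", "plain", "speech"):
            if not isinstance(node, dict):
                return None
            node = node.get(key)
        return node if isinstance(node, str) else None

    sample = {
        "response": {
            "speech": {"plain": {"speech": "Trump threatens Greenland and Panama Canal."}},
        }
    }
    print(extract_speech(sample))  # Trump threatens Greenland and Panama Canal.
    ```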

    ## Template Entity with Service Responses

    Below is an example `template` entity definition based on the Model Context Protocol tool calling to fetch a web page:

    ```yaml
    template:
    - trigger:
    - platform: time_pattern
    hours: "/1"
    # Used for testing
    - platform: homeassistant
    event: start
    action:
    - action: conversation.process
    data:
    agent_id: conversation.google_generative_ai
    text: >-
    Please visit bbc.com and summarize the single most important headline.
    Please respond with succinct output as the output will be used as
    headline for an eInk display with limited space, so answers must be
    less than 200 characters.
    response_variable: headline
    sensor:
    - name: Headline
    attributes:
    title: "{{ headline.response.speech.plain.speech }}"
    unique_id: "d3641cdf-aa9f-4169-acae-7f7ba989c492"
    unique_id: "272f0508-3e27-4179-9aca-06d8333874e7"
    ```
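    (The prompt asks the model for under 200 characters, but nothing enforces that. A minimal sketch of a safety net one could apply before sending the headline to a display — the function name and truncation rule are my own, not from the gist:)

    ```python
    def fit_headline(text: str, limit: int = 200) -> str:
        """Trim a model-produced headline to at most `limit` characters,
        cutting at a word boundary and appending an ellipsis when truncated."""
        text = text.strip()
        if len(text) <= limit:
            return text
        cut = text[: limit - 1].rsplit(" ", 1)[0]
        return cut + "…"

    print(fit_headline("Trump threatens Greenland and Panama Canal."))  # fits, unchanged
    print(len(fit_headline("word " * 100)) <= 200)  # True
    ```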
    <img width="870" alt="Screenshot of template entity output" src="https://gist.github.com/user-attachments/assets/6f085a73-7c80-4cd7-afc5-17c129477495" />
  13. allenporter revised this gist Jan 8, 2025. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions MCP_README.md
    @@ -24,10 +24,10 @@ $ uv pip install mcp-proxy mcp-server-fetch
    ```

    The next step will start an SSE proxy and spawn a local stdio MCP server that can fetch web pages. The proxy
    -server spawns the command without any environment variables set so we give the full path to uv.
    +server spawns the command without any environment variables, so we set our path.

    ```bash
    -$ mcp-proxy --sse-port 42783 -- /usr/local/bin/uv run mcp-server-fetch
    +$ mcp-proxy --sse-port 42783 --env PATH "${PATH}" -- uv run mcp-server-fetch
    ...
    INFO: Uvicorn running on http://127.0.0.1:42783 (Press CTRL+C to quit)
    ```
  14. allenporter revised this gist Jan 8, 2025. 1 changed file with 6 additions and 2 deletions.
    8 changes: 6 additions & 2 deletions MCP_README.md
    Original file line number Diff line number Diff line change
    @@ -56,8 +56,12 @@ The integration will create a new LLM API called `mcp-fetch` that is available t

    ## Try it Out

    -<img width="1440" alt="Screenshot of conversation with agent 1" src="https://gist.github.com/user-attachments/assets/9ce57169-cfea-4e4e-95c5-b7df69fc1ac6" />
    +Open the conversation agent and ask it to fetch a web page:

    +<img width="389" alt="Screenshot 2025-01-07 at 10 33 25 PM" src="https://gist.github.com/user-attachments/assets/68df3c8e-55e7-459e-ba5a-8990fbd6bc80" />

    +<img width="396" alt="Screenshot 2025-01-07 at 10 34 10 PM" src="https://gist.github.com/user-attachments/assets/07562834-a35c-4cef-9cef-70796f549ef6" />


    -<img width="1440" alt="Screenshot of conversation with agent 2" src="https://gist.github.com/user-attachments/assets/77fb50d8-0910-44a7-97c9-97472757fda3" />


  15. allenporter revised this gist Jan 8, 2025. 1 changed file with 8 additions and 8 deletions.
    16 changes: 8 additions & 8 deletions MCP_README.md
    @@ -9,25 +9,25 @@ Assistant using Puppeteer through the [Model Context Protocol](https://github.co

    This assumes you already know about the following:

    -- Node
    - Python
    -- Home Assistant
    +- Home Assistant
    - Voice Assistant with Conversation Agent (OpenAI, Google Gemini, Anthropic, etc)
    +- Python virtual environment for running MCP server

    ## Install MCP Proxy & Fetch MCP Server

    This will install [mcp-proxy](https://github.com/sparfenyuk/mcp-proxy) and [MCP Fetch Server](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch).

    -Install packages
    +Install dependencies:

    ```bash
    -$ ux pip install mcp-proxy mcp-server-fetch
    +$ uv pip install mcp-proxy mcp-server-fetch
    ```

    -This will start an SSE proxy and spawn a local stdio MCP server that can fetch web pages:
    +The next step will start an SSE proxy and spawn a local stdio MCP server that can fetch web pages. The proxy
    +server spawns the command without any environment variables set so we give the full path to uv.

    ```bash
    -$ mcp-proxy --sse-port 42783 --env PATH "${PATH}" -- uv run mcp-server-fetch
    +$ mcp-proxy --sse-port 42783 -- /usr/local/bin/uv run mcp-server-fetch
    ...
    INFO: Uvicorn running on http://127.0.0.1:42783 (Press CTRL+C to quit)
    ```
    @@ -42,7 +42,7 @@ Manually [add the integration](https://my.home-assistant.io/redirect/config_flow

    Set the SSE Server URL to your MCP proxy server SSE endpoint e.g. `http://127.0.0.1:42783/sse`. Make sure the URL ends with `/sse`.

    -The integration should create a new LLM API called `mcp-fetch`
    +The integration will create a new LLM API called `mcp-fetch` that is available to conversation agents. It does not add any other entities or devices.

    ## Configure Conversation Agent

  16. allenporter revised this gist Jan 8, 2025. 1 changed file with 8 additions and 2 deletions.
    10 changes: 8 additions & 2 deletions MCP_README.md
    @@ -12,7 +12,7 @@ This assumes you already know about the following:
    - Node
    - Python
    - Home Assistant
    -- Conversation Agent (OpenAI, Google Gemini, Anthropic, etc)
    +- Voice Assistant with Conversation Agent (OpenAI, Google Gemini, Anthropic, etc)

    ## Install MCP Proxy & Fetch MCP Server

    @@ -52,6 +52,12 @@ The integration should create a new LLM API called `mcp-fetch`

    1. Update the prompt to be something simple for now such as: `You are an agent for Home Asisstant, with access to tools through an external server. This external server enables LLMs to fetch web page contents.`


    <img width="586" alt="Screenshot of Conversation Agent Configuration" src="https://gist.github.com/user-attachments/assets/c4d16919-3ecd-43c3-ad22-17fce32e0a02" />

    ## Try it Out

    <img width="1440" alt="Screenshot of conversation with agent 1" src="https://gist.github.com/user-attachments/assets/9ce57169-cfea-4e4e-95c5-b7df69fc1ac6" />

    <img width="1440" alt="Screenshot of conversation with agent 2" src="https://gist.github.com/user-attachments/assets/77fb50d8-0910-44a7-97c9-97472757fda3" />


  17. allenporter created this gist Jan 8, 2025.
    57 changes: 57 additions & 0 deletions MCP_README.md
    @@ -0,0 +1,57 @@
    # Model Context Protocol

    ## TL;DR

    Completing these steps will let you have an LLM Powered Web Browser in Home
    Assistant using Puppeteer through the [Model Context Protocol](https://github.com/modelcontextprotocol).

    ## Pre-requisites

    This assumes you already know about the following:

    - Node
    - Python
    - Home Assistant
    - Conversation Agent (OpenAI, Google Gemini, Anthropic, etc)

    ## Install MCP Proxy & Fetch MCP Server

    This will install [mcp-proxy](https://github.com/sparfenyuk/mcp-proxy) and [MCP Fetch Server](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch).

    Install packages

    ```bash
    $ ux pip install mcp-proxy mcp-server-fetch
    ```

    This will start an SSE proxy and spawn a local stdio MCP server that can fetch web pages:

    ```bash
    $ mcp-proxy --sse-port 42783 --env PATH "${PATH}" -- uv run mcp-server-fetch
    ...
    INFO: Uvicorn running on http://127.0.0.1:42783 (Press CTRL+C to quit)
    ```
    The SSE server is now exposed at `http://127.0.0.1:42783/sse`. You can set flags
    to change the IP and port the proxy listens on.

    ## Configure Model Context Protocol Integration

    [![Open your Home Assistant instance and start setting up a new integration.](https://my.home-assistant.io/badges/config_flow_start.svg)](https://my.home-assistant.io/redirect/config_flow_start/?domain=mcp)

    Manually [add the integration](https://my.home-assistant.io/redirect/config_flow_start/?domain=mcp)

    Set the SSE Server URL to your MCP proxy server SSE endpoint e.g. `http://127.0.0.1:42783/sse`. Make sure the URL ends with `/sse`.

    The integration should create a new LLM API called `mcp-fetch`

    ## Configure Conversation Agent

    1. Navigate to your existing conversation agent integration and reconfigure it

    1. Set the *LLM Control* to `mcp-fetch`

    1. Update the prompt to be something simple for now such as: `You are an agent for Home Asisstant, with access to tools through an external server. This external server enables LLMs to fetch web page contents.`



    ## Try it Out