
@abd3lraouf
Created March 5, 2026 06:10
    # πŸš€ GLM-5 Proxy for Xcode 26 Intelligence Mode

    [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
    [![Docker](https://img.shields.io/badge/Docker-Required-blue)](https://www.docker.com/)
    [![macOS](https://img.shields.io/badge/macOS-Compatible-brightgreen)](https://www.apple.com/macos)

> **Use Z.AI's GLM-5 as a local AI coding assistant in Xcode 26 Intelligence Mode for just $3/month.**
> Bypass Xcode's "Provider is not valid" error and unlock frontier-level coding capabilities at one-seventh the cost of a ChatGPT or Claude subscription.

    ## ✨ Features

    - πŸ”— **Local Proxy Server** - Bridges Xcode Intelligence to Z.AI's GLM-5 API
    - πŸ’° **Cost-Effective** - $3/month vs $20+/month for alternatives
- πŸš€ **Easy Setup** - One-script installation in under 2 minutes
    - πŸ”’ **Secure** - API key stored in environment variables
    - πŸ”„ **Auto-Restart** - Docker container restarts automatically
    - 🧠 **Reasoning Support** - GLM-5's advanced thinking capabilities

    ## πŸ“‹ Prerequisites

    Before you begin, ensure you have:

    - **macOS** with Xcode 26 installed
    - **Xcode Intelligence Mode** enabled
    - **Docker Desktop** - [Download here](https://www.docker.com/products/docker-desktop)
    - **GLM Coding Plan subscription** - [$3/month at z.ai/subscribe](https://z.ai/subscribe)
    - **Z.AI API Key** - Get it from your [Z.AI Dashboard](https://z.ai/dashboard)

    ## πŸ› οΈ Quick Start

    ### Option 1: Automated Setup (Recommended)

    ```bash
    # Download and run the setup script
    curl -fsSL https://gist.githubusercontent.com/abd3lraouf/434e42bd246926ee9b319b9b7c0d1b5c/raw/setup.sh | bash
    ```

    Or clone the gist and run locally:

    ```bash
    # Download the script
    curl -O https://gist.githubusercontent.com/abd3lraouf/434e42bd246926ee9b319b9b7c0d1b5c/raw/setup.sh

    # Make it executable
    chmod +x setup.sh

    # Run it
    ./setup.sh
    ```

    The script will prompt you to enter your Z.AI API key (input is hidden for security).

    ### Option 2: Manual Setup

    <details>
    <summary>Click to expand manual setup instructions</summary>

    1. **Create project directory**
    ```bash
    mkdir ~/glm5-proxy && cd ~/glm5-proxy
    ```

    2. **Create `docker-compose.yaml`**
```yaml
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    container_name: glm5-proxy
    ports:
      - "4000:4000"
    volumes:
      - ./litellm_config.yaml:/app/config.yaml
    environment:
      - ZAI_API_KEY=your-api-key-here
    command: --config /app/config.yaml
    restart: unless-stopped
```

    3. **Create `litellm_config.yaml`**
```yaml
model_list:
  - model_name: glm-5
    litellm_params:
      model: openai/glm-5
      api_base: https://api.z.ai/api/coding/paas/v4
      api_key: os.environ/ZAI_API_KEY
```

    4. **Start the proxy**
    ```bash
    docker compose up -d
    ```

    </details>

    ## βš™οΈ Configure Xcode

    After the proxy is running:

    1. Open **Xcode** β†’ **Settings** (⌘,)
    2. Navigate to **Intelligence** tab
    3. Click **"Add a Model Provider"**
    4. Select **"Locally Hosted"**
    5. Enter **Port**: `4000`
    6. Click **Save**

    Xcode will now use GLM-5 for code completion, suggestions, and Intelligence features!
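For a sanity check beyond the Xcode UI, a chat-completion request exercises the same endpoint the IDE uses. This is a sketch that assumes the container is up on port 4000; the prompt text is arbitrary:

```shell
# Build the request body separately so it can be sanity-checked before sending
BODY='{"model":"glm-5","messages":[{"role":"user","content":"Write a Swift hello world"}]}'
echo "$BODY" | python3 -m json.tool > /dev/null && echo "payload OK"

# Send it through the local proxy (requires the container to be running)
curl -s http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "proxy not reachable on :4000"
```

A JSON response with a `choices` array means the full Xcode-to-GLM-5 path is working.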

    ## βœ… Verify Installation

    Test that the proxy is working correctly:

    ```bash
    # Check available models
    curl http://localhost:4000/v1/models
    # View proxy logs
    docker logs -f glm5-proxy
    ```

    Expected output should include `"glm-5"` in the models list.
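The `/v1/models` response is JSON with a `data` array of model objects. The IDs can be pulled out with Python's standard library, so no extra tools like `jq` are needed; shown here against a sample payload of the expected shape:

```shell
# Sample of the response shape returned by the proxy's /v1/models endpoint
SAMPLE='{"object":"list","data":[{"id":"glm-5","object":"model"}]}'

# Extract just the model IDs from the "data" array
echo "$SAMPLE" | python3 -c 'import json,sys; print(" ".join(m["id"] for m in json.load(sys.stdin)["data"]))'
# β†’ glm-5
```

Pipe the real `curl http://localhost:4000/v1/models` output through the same one-liner to list whatever the proxy is actually serving.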

    ## πŸ”§ Configuration Options

    ### Enable "Deep Thinking" Mode

    For complex refactoring or unit test generation, add a thinking variant:

```yaml
model_list:
  - model_name: glm-5
    litellm_params:
      model: openai/glm-5
      api_base: https://api.z.ai/api/coding/paas/v4
      api_key: os.environ/ZAI_API_KEY
  - model_name: glm-5-thinking
    litellm_params:
      model: openai/glm-5
      api_base: https://api.z.ai/api/coding/paas/v4
      api_key: os.environ/ZAI_API_KEY
      extra_body:
        reasoning_effort: high
```
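To confirm the new variant is routed correctly, send a request naming `glm-5-thinking`. A sketch assuming the proxy is running; responses are typically slower than the base model because of the extra reasoning pass:

```shell
# Request targeting the thinking variant defined in litellm_config.yaml
BODY='{"model":"glm-5-thinking","messages":[{"role":"user","content":"Outline unit tests for a date parser"}]}'
curl -s http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "proxy not reachable on :4000"
```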

    ### Docker Resource Tuning

    If autocomplete feels sluggish:

    1. Open **Docker Desktop** β†’ **Settings**
    2. Allocate at least **4GB RAM** and **2 CPUs**
    3. Restart the container: `docker compose restart`
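To check whether the new limits took effect, a one-shot snapshot of the container's usage can be taken (assumes Docker is running):

```shell
# Print a single CPU/memory usage snapshot for the proxy container
docker stats --no-stream glm5-proxy || echo "container not running"
```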

    ## πŸ› Troubleshooting

    ### Container Keeps Restarting

    Check for configuration errors:
    ```bash
    docker logs glm5-proxy
    ```

    Common issues:
    - Invalid API key
    - YAML syntax errors
    - Port 4000 already in use
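For the port conflict in particular, `lsof` shows which process, if any, already holds the port (works on macOS and most Linux systems):

```shell
PORT=4000
# List the process listening on the port, or report it free
lsof -nP -iTCP:"$PORT" -sTCP:LISTEN || echo "port $PORT is free"
```

If something else owns the port, either stop that process or change the left side of the `"4000:4000"` mapping in `docker-compose.yaml` (and the port in Xcode's settings) to a free one.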

    ### IsADirectoryError

Docker created a directory instead of a file — this happens when the container starts before `litellm_config.yaml` exists, so the bind-mount path gets created as a directory:
    ```bash
    docker compose down
    rm -rf litellm_config.yaml
    # Recreate the file properly
    docker compose up -d
    ```
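The "recreate the file properly" step can be done with a heredoc so the contents never pass through an editor. This mirrors the manual-setup config above; adjust if yours differs:

```shell
# Write litellm_config.yaml as a regular file
cat > litellm_config.yaml << 'EOF'
model_list:
  - model_name: glm-5
    litellm_params:
      model: openai/glm-5
      api_base: https://api.z.ai/api/coding/paas/v4
      api_key: os.environ/ZAI_API_KEY
EOF

# Confirm it is a file, not a directory
[ -f litellm_config.yaml ] && echo "config file OK"
```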

    ### Proxy Not Working After Mac Restart

    Ensure Docker Desktop starts automatically:
    1. Open **Docker Desktop** β†’ **Settings** β†’ **General**
    2. Enable **"Start Docker Desktop when you log in"**
    3. Xcode will auto-reconnect to `localhost:4000`

    ### Xcode Shows "Provider is not valid"

    1. Verify proxy is running: `curl http://localhost:4000/v1/models`
    2. Check Xcode Intelligence settings
    3. Restart Xcode

    ## πŸ’‘ Best Practices

    1. **Security**: Never commit `litellm_config.yaml` with your API key
    2. **Performance**: Turn off "Thinking" mode for faster autocomplete
    3. **Monitoring**: Check logs regularly: `docker logs -f glm5-proxy`
    4. **Updates**: Pull latest LiteLLM image periodically: `docker compose pull`
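One way to keep the key out of `docker-compose.yaml` entirely is a `.env` file, which `docker compose` reads automatically from the project directory. A sketch — `sk-example-key` is a placeholder, not a real key format:

```shell
# Store the key in .env instead of hard-coding it in docker-compose.yaml
echo 'ZAI_API_KEY=sk-example-key' > .env
echo '.env' >> .gitignore

# Then reference it in docker-compose.yaml as:
#   environment:
#     - ZAI_API_KEY=${ZAI_API_KEY}
```

With this layout, the compose file itself contains no secrets and can be committed or shared safely.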

    ## πŸ’° Cost Comparison

    | Service | Monthly Cost | Value |
    |---------|-------------|-------|
    | **GLM Coding Plan** | **$3/month** | βœ… Frontier-level coding |
    | ChatGPT Plus | $20/month | 7x more expensive |
    | Claude Pro | $20/month | 7x more expensive |
    | GitHub Copilot | $10/month | 3x more expensive |

**Save $200+ per year** while keeping frontier-level coding assistance.

    ## 🀝 Contributing

    Found a bug or have an improvement?
    - Open an issue on the [Gist](https://gist.github.com/abd3lraouf/434e42bd246926ee9b319b9b7c0d1b5c)
    - Fork and submit improvements

    ## πŸ“„ License

    MIT License - Feel free to use and modify for your needs.

    ## πŸ”— Useful Links

    - [Z.AI Official Website](https://z.ai)
    - [Get GLM Coding Plan](https://z.ai/subscribe)
    - [Z.AI Dashboard](https://z.ai/dashboard)
    - [LiteLLM Documentation](https://docs.litellm.ai)
    - [Docker Desktop Download](https://www.docker.com/products/docker-desktop)

    ## ⭐ Show Your Support

    If this saved you money and improved your Xcode workflow:
    - ⭐ Star this gist
    - πŸ”— Share with fellow developers
    - πŸ’¬ Leave feedback

    ---

    **Keywords**: Xcode 26 Intelligence, GLM-5, Z.AI, Local AI Proxy, Xcode AI Assistant, Code Completion, macOS Development, Docker, LiteLLM, Cost-Effective AI Coding, Alternative to ChatGPT, Alternative to Claude, Xcode Intelligence Setup, AI Code Assistant, Swift Development AI
## πŸ“œ setup.sh

```bash
#!/bin/bash

set -e

echo "πŸš€ Setting up GLM-5 Proxy for Xcode Intelligence..."

PROXY_DIR="$HOME/glm5-proxy"

mkdir -p "$PROXY_DIR"
cd "$PROXY_DIR"

cat > docker-compose.yaml << 'EOF'
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    container_name: glm5-proxy
    ports:
      - "4000:4000"
    volumes:
      - ./litellm_config.yaml:/app/config.yaml
    environment:
      - ZAI_API_KEY=REPLACE_WITH_KEY
    command: --config /app/config.yaml
    restart: unless-stopped
EOF

cat > litellm_config.yaml << 'EOF'
model_list:
  - model_name: glm-5
    litellm_params:
      model: openai/glm-5
      api_base: https://api.z.ai/api/coding/paas/v4
      api_key: os.environ/ZAI_API_KEY
EOF

echo ""
echo "πŸ“ Enter your Z.AI API Key:"
read -rs API_KEY
echo ""

# Use '|' as the sed delimiter so keys containing '/' don't break the substitution
sed -i.bak "s|REPLACE_WITH_KEY|$API_KEY|" docker-compose.yaml
rm -f docker-compose.yaml.bak

echo "🐳 Starting Docker container..."
docker compose up -d

echo "⏳ Waiting for proxy to initialize..."
sleep 5

echo ""
echo "βœ… Verifying connection..."
echo "Testing: http://localhost:4000/v1/models"
curl -s http://localhost:4000/v1/models | head -c 200

echo ""
echo ""
echo "βœ… Setup complete!"
echo ""
echo "Now configure Xcode:"
echo "  1. Open Xcode β†’ Settings (⌘,)"
echo "  2. Go to Intelligence tab"
echo "  3. Click 'Add a Model Provider'"
echo "  4. Select 'Locally Hosted'"
echo "  5. Enter Port: 4000"
echo "  6. Click Save"
echo ""
echo "Proxy logs: docker logs -f glm5-proxy"
```