abd3lraouf created this gist on March 5, 2026.
# GLM-5 Proxy for Xcode 26 Intelligence Mode

[MIT](https://opensource.org/licenses/MIT) · [Docker](https://www.docker.com/) · [macOS](https://www.apple.com/macos)

> **Use Z.AI's GLM-5 as a local AI coding assistant in Xcode 26 Intelligence Mode for just $3/month**

Bypass Xcode's "Provider is not valid" error and unlock frontier-level coding capabilities at 1/7th the cost of a ChatGPT or Claude subscription.

## Features

- **Local Proxy Server** - Bridges Xcode Intelligence to Z.AI's GLM-5 API
- **Cost-Effective** - $3/month vs $20+/month for alternatives
- **Easy Setup** - One-script installation in under 2 minutes
- **Secure** - API key stored in environment variables
- **Auto-Restart** - Docker container restarts automatically
- **Reasoning Support** - GLM-5's advanced thinking capabilities

## Prerequisites

Before you begin, ensure you have:

- **macOS** with Xcode 26 installed
- **Xcode Intelligence Mode** enabled
- **Docker Desktop** - [Download here](https://www.docker.com/products/docker-desktop)
- **GLM Coding Plan subscription** - [$3/month at z.ai/subscribe](https://z.ai/subscribe)
- **Z.AI API Key** - Get it from your [Z.AI Dashboard](https://z.ai/dashboard)

## Quick Start

### Option 1: Automated Setup (Recommended)

```bash
# Download and run the setup script
curl -fsSL https://gist.githubusercontent.com/abd3lraouf/434e42bd246926ee9b319b9b7c0d1b5c/raw/setup.sh | bash
```

Or download the script and run it locally:

```bash
# Download the script
curl -O https://gist.githubusercontent.com/abd3lraouf/434e42bd246926ee9b319b9b7c0d1b5c/raw/setup.sh

# Make it executable
chmod +x setup.sh

# Run it
./setup.sh
```

The script will prompt you to enter your Z.AI API key (input is hidden for security).

### Option 2: Manual Setup

<details>
<summary>Click to expand manual setup instructions</summary>

1. **Create the project directory**

   ```bash
   mkdir ~/glm5-proxy && cd ~/glm5-proxy
   ```

2. **Create `docker-compose.yaml`**

   ```yaml
   services:
     litellm:
       image: ghcr.io/berriai/litellm:main-latest
       container_name: glm5-proxy
       ports:
         - "4000:4000"
       volumes:
         - ./litellm_config.yaml:/app/config.yaml
       environment:
         - ZAI_API_KEY=your-api-key-here
       command: --config /app/config.yaml
       restart: unless-stopped
   ```

3. **Create `litellm_config.yaml`**

   ```yaml
   model_list:
     - model_name: glm-5
       litellm_params:
         model: openai/glm-5
         api_base: https://api.z.ai/api/coding/paas/v4
         api_key: os.environ/ZAI_API_KEY
   ```

4. **Start the proxy**

   ```bash
   docker compose up -d
   ```

</details>

## Configure Xcode

After the proxy is running:

1. Open **Xcode** → **Settings** (⌘,)
2. Navigate to the **Intelligence** tab
3. Click **"Add a Model Provider"**
4. Select **"Locally Hosted"**
5. Enter **Port**: `4000`
6. Click **Save**

Xcode will now use GLM-5 for code completion, suggestions, and Intelligence features.

## Verify Installation

Test that the proxy is working correctly:

```bash
# Check available models
curl http://localhost:4000/v1/models

# View proxy logs
docker logs -f glm5-proxy
```

The expected output should include `"glm-5"` in the models list.

## Configuration Options

### Enable "Deep Thinking" Mode

For complex refactoring or unit test generation, add a thinking variant:

```yaml
model_list:
  - model_name: glm-5
    litellm_params:
      model: openai/glm-5
      api_base: https://api.z.ai/api/coding/paas/v4
      api_key: os.environ/ZAI_API_KEY
  - model_name: glm-5-thinking
    litellm_params:
      model: openai/glm-5
      api_base: https://api.z.ai/api/coding/paas/v4
      api_key: os.environ/ZAI_API_KEY
      extra_body:
        reasoning_effort: high
```

### Docker Resource Tuning

If autocomplete feels sluggish:

1. Open **Docker Desktop** → **Settings**
2. Allocate at least **4 GB RAM** and **2 CPUs**
3. Restart the container: `docker compose restart`

## Troubleshooting

### Container Keeps Restarting

Check for configuration errors:

```bash
docker logs glm5-proxy
```

Common issues:

- Invalid API key
- YAML syntax errors
- Port 4000 already in use

### IsADirectoryError

Docker created a folder instead of a file (this happens when the bind-mounted config file did not exist before the container first started):

```bash
docker compose down
rm -rf litellm_config.yaml
# Recreate the file properly, then:
docker compose up -d
```

### Proxy Not Working After Mac Restart

Ensure Docker Desktop starts automatically:

1. Open **Docker Desktop** → **Settings** → **General**
2. Enable **"Start Docker Desktop when you log in"**
3. Xcode will auto-reconnect to `localhost:4000`

### Xcode Shows "Provider is not valid"

1. Verify the proxy is running: `curl http://localhost:4000/v1/models`
2. Check the Xcode Intelligence settings
3. Restart Xcode

## Best Practices

1. **Security**: Never commit `litellm_config.yaml` with your API key
2. **Performance**: Turn off "Thinking" mode for faster autocomplete
3. **Monitoring**: Check the logs regularly: `docker logs -f glm5-proxy`
4. **Updates**: Pull the latest LiteLLM image periodically: `docker compose pull`

## Cost Comparison

| Service | Monthly Cost | Value |
|---------|-------------|-------|
| **GLM Coding Plan** | **$3/month** | Frontier-level coding |
| ChatGPT Plus | $20/month | 7x more expensive |
| Claude Pro | $20/month | 7x more expensive |
| GitHub Copilot | $10/month | 3x more expensive |

**Save $200+ per year** while maintaining coding intelligence.

## Contributing

Found a bug or have an improvement?

- Open an issue on the [Gist](https://gist.github.com/abd3lraouf/434e42bd246926ee9b319b9b7c0d1b5c)
- Fork and submit improvements

## License

MIT License - feel free to use and modify for your needs.
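## Bonus: Scripted Smoke Test

Beyond `curl http://localhost:4000/v1/models`, you can exercise the chat endpoint itself. The sketch below uses only Python's standard library; the endpoint, port, and model name follow this guide's setup, and the response shape assumes LiteLLM's OpenAI-compatible format, so treat it as illustrative rather than definitive.

```python
import json

# Endpoint exposed by the proxy configured in this guide
PROXY_URL = "http://localhost:4000/v1/chat/completions"


def build_chat_request(prompt: str, model: str = "glm-5") -> tuple[str, dict, bytes]:
    """Assemble an OpenAI-style chat completion request for the local proxy."""
    headers = {"Content-Type": "application/json"}
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return PROXY_URL, headers, body


def extract_reply(raw: str) -> str:
    """Pull the assistant's text out of an OpenAI-style completion response."""
    return json.loads(raw)["choices"][0]["message"]["content"]


# To actually send the request, the proxy must be running:
#   import urllib.request
#   url, headers, body = build_chat_request("Write a Swift struct for a 2D point.")
#   req = urllib.request.Request(url, data=body, headers=headers)
#   with urllib.request.urlopen(req) as resp:
#       print(extract_reply(resp.read().decode()))

# Offline demo with a response in the shape LiteLLM returns:
sample = json.dumps(
    {"choices": [{"message": {"role": "assistant",
                              "content": "struct Point { var x: Double; var y: Double }"}}]}
)
print(extract_reply(sample))  # prints: struct Point { var x: Double; var y: Double }
```

If the live request succeeds, the proxy, API key, and model routing are all working end to end; if it fails, `docker logs glm5-proxy` will usually show why.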
## Useful Links

- [Z.AI Official Website](https://z.ai)
- [Get GLM Coding Plan](https://z.ai/subscribe)
- [Z.AI Dashboard](https://z.ai/dashboard)
- [LiteLLM Documentation](https://docs.litellm.ai)
- [Docker Desktop Download](https://www.docker.com/products/docker-desktop)

## Show Your Support

If this saved you money and improved your Xcode workflow:

- Star this gist
- Share it with fellow developers
- Leave feedback

---

**Keywords**: Xcode 26 Intelligence, GLM-5, Z.AI, Local AI Proxy, Xcode AI Assistant, Code Completion, macOS Development, Docker, LiteLLM, Cost-Effective AI Coding, Alternative to ChatGPT, Alternative to Claude, Xcode Intelligence Setup, AI Code Assistant, Swift Development AI

---

**`setup.sh`**

```bash
#!/bin/bash
set -e

echo "Setting up GLM-5 Proxy for Xcode Intelligence..."

PROXY_DIR="$HOME/glm5-proxy"
mkdir -p "$PROXY_DIR"
cd "$PROXY_DIR"

cat > docker-compose.yaml << 'EOF'
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    container_name: glm5-proxy
    ports:
      - "4000:4000"
    volumes:
      - ./litellm_config.yaml:/app/config.yaml
    environment:
      - ZAI_API_KEY=REPLACE_WITH_KEY
    command: --config /app/config.yaml
    restart: unless-stopped
EOF

cat > litellm_config.yaml << 'EOF'
model_list:
  - model_name: glm-5
    litellm_params:
      model: openai/glm-5
      api_base: https://api.z.ai/api/coding/paas/v4
      api_key: os.environ/ZAI_API_KEY
EOF

echo ""
echo "Enter your Z.AI API Key:"
read -s API_KEY
echo ""

# Use '|' as the sed delimiter so keys containing '/' don't break the substitution
sed -i.bak "s|REPLACE_WITH_KEY|$API_KEY|" docker-compose.yaml
rm -f docker-compose.yaml.bak

echo "Starting Docker container..."
docker compose up -d

echo "Waiting for proxy to initialize..."
sleep 5

echo ""
echo "Verifying connection..."
echo "Testing: http://localhost:4000/v1/models"
curl -s http://localhost:4000/v1/models | head -c 200

echo ""
echo ""
echo "Setup complete!"
echo ""
echo "Now configure Xcode:"
echo "  1. Open Xcode → Settings (⌘,)"
echo "  2. Go to the Intelligence tab"
echo "  3. Click 'Add a Model Provider'"
echo "  4. Select 'Locally Hosted'"
echo "  5. Enter Port: 4000"
echo "  6. Click Save"
echo ""
echo "Proxy logs: docker logs -f glm5-proxy"
```
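The setup script above substitutes your API key directly into `docker-compose.yaml`, which means the key ends up in a file on disk. If you would rather keep it out of the compose file entirely (in line with the Best Practices note about not committing secrets), Docker Compose can interpolate the value from an untracked `.env` file in the project directory. A sketch of the variant compose file, keeping the same layout and filenames as this guide:

```yaml
# docker-compose.yaml — read the key from an untracked .env file instead of
# hard-coding it; Compose loads .env from the project directory automatically.
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    container_name: glm5-proxy
    ports:
      - "4000:4000"
    volumes:
      - ./litellm_config.yaml:/app/config.yaml
    environment:
      - ZAI_API_KEY=${ZAI_API_KEY}
    command: --config /app/config.yaml
    restart: unless-stopped
```

With this variant, create `~/glm5-proxy/.env` containing a single line `ZAI_API_KEY=your-key`, then run `docker compose up -d` as before; just make sure `.env` never lands in a repository or backup you share.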