Let Claude Code (or any client that speaks the Anthropic/OpenAI formats) use a local Ollama model through a LiteLLM proxy. Tested environment: macOS ARM64, Python 3.13, LiteLLM 1.82, Ollama + qwen2.5.
Claude Code / Any Client
        │  Anthropic Messages API
        ▼
LiteLLM Proxy (format conversion)
        │  OpenAI Chat Completions API
        ▼
Ollama / mlx_lm.server (local model)
Claude Code sends requests in the Anthropic Messages API format, while MLX Server (mlx_lm.server) exposes the OpenAI Chat Completions API. The two formats are different, so the client cannot talk to the server directly.
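To make the mismatch concrete, here is a rough sketch of the two request shapes. The placeholder hosts, the qwen2.5 model name, and the omission of auth/version headers are illustrative assumptions, not part of the original setup.

```sh
# Anthropic Messages API: what Claude Code sends (max_tokens is required)
curl -s http://<anthropic-style-host>/v1/messages \
  -H 'content-type: application/json' \
  -d '{"model":"qwen2.5","max_tokens":256,"messages":[{"role":"user","content":"hi"}]}'

# OpenAI Chat Completions API: what mlx_lm.server / Ollama expose
curl -s http://<openai-style-host>/v1/chat/completions \
  -H 'content-type: application/json' \
  -d '{"model":"qwen2.5","messages":[{"role":"user","content":"hi"}]}'
```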
Use LiteLLM as a middle-layer proxy so the format conversion is handled automatically:
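A minimal sketch of one way to wire this up, assuming Ollama already serves qwen2.5 on its default port 11434; the config.yaml file name, the proxy port 4000, and the use of ANTHROPIC_BASE_URL to point Claude Code at the proxy are assumptions to adapt to your own setup.

```sh
# Write a LiteLLM proxy config that maps a model name to the local Ollama backend
cat > config.yaml <<'EOF'
model_list:
  - model_name: qwen2.5
    litellm_params:
      model: ollama/qwen2.5            # "ollama/" prefix selects LiteLLM's Ollama provider
      api_base: http://localhost:11434
EOF

# Start the proxy; it accepts client requests and translates them for the backend
litellm --config config.yaml --port 4000

# Point Claude Code at the local proxy (assumed env var; adjust if your client is configured differently)
export ANTHROPIC_BASE_URL=http://localhost:4000
```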
```json
{
  "ai_status": "completed",
  "ai_suggestions": "Suitable as learning material; can be used for skill building or team training",
  "ai_summary": "Official Claude Code documentation: a complete guide to common workflows, covering codebase exploration, debugging, refactoring, testing, PR management, subagents, custom skills, plan mode, and thinking mode.",
  "ai_tags": [
    "claude-code",
    "workflow",
    "debugging",
    "refactoring",
    "testing",
```
```javascript
/**
 * Compute Multiplier(x):
 *   y = e^( (ln2/8) * x )           , if 0 ≤ x < 8
 *   y = (1/8) * e^( (ln2/2) * x )   , if 8 ≤ x < 24
 *   y = (1/8) * e^( (ln2/2) * x )   , if x ≥ 24
 *
 * @param {number} x - input value (assumed x ≥ 0)
 * @returns {number} y - value of the piecewise function
 */
function multiplier(x) {
  if (x < 8) {
    return Math.exp((Math.LN2 / 8) * x);
  }
  // The 8 ≤ x < 24 and x ≥ 24 branches are written with the same formula above.
  return (1 / 8) * Math.exp((Math.LN2 / 2) * x);
}
```
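As a quick sanity check, the branches agree at the boundary x = 8: e^((ln2/8)·8) = e^(ln2) = 2, and (1/8)·e^((ln2/2)·8) = (1/8)·e^(4·ln2) = 16/8 = 2, so the function is continuous there. Note that the second and third branches are written with an identical formula, so they could be merged unless a different growth rate was intended for x ≥ 24.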
```sh
# Create the virtualenv and install dependencies
python3 -m venv venv && source venv/bin/activate && pip install -r requirements.txt

# Start the API server in development mode
source venv/bin/activate && python start_api.py --host 127.0.0.1 --port 8000 --log-level info --env development
```
Some notes, tools, and techniques for reverse engineering Golang binaries.
Content:
```ruby
Vagrant.configure("2") do |config|
  config.vm.define "centos7_with_docker" do |v|
    v.vm.box = "genebean/centos-7-docker-ce"
    v.vm.synced_folder ".", "/vagrant"
    v.ssh.username = "root"
    v.ssh.password = "root"

    config.vm.provider "virtualbox" do |vb|
      vb.memory = "51200"
      vb.cpus = 24
    end
  end
end
```
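Assuming Vagrant and the VirtualBox provider are installed, the box defined above could be brought up and entered with:

```sh
vagrant up centos7_with_docker    # create and provision the VM
vagrant ssh centos7_with_docker   # log in (root/root, per the Vagrantfile)
```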