Goal: Make just-gemini usable as an LLM backend by any OpenAI-compatible client (including justbot's future subagent system).
Approach: Add standard OpenAI-compatible POST /v1/chat/completions (streaming SSE) and GET /v1/models endpoints on top of the existing session-based architecture. This is the most widely supported LLM API format — any client that speaks OpenAI can use just-gemini without custom integration code.
Target OpenAI format:
POST /v1/chat/completions with { model, messages, stream: true, max_tokens? }
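For concreteness, a sketch of the wire shapes involved, per the standard OpenAI chat completions format (the model name and id values are placeholder assumptions, not just-gemini specifics):

```python
import json

# Request body for POST /v1/chat/completions (stream enabled).
request_body = {
    "model": "gemini-placeholder",  # placeholder; actual model ids TBD
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": True,
    "max_tokens": 256,  # optional
}

# With stream: true, each SSE event carries one JSON chunk like the
# below; the stream terminates with a literal "data: [DONE]" line.
sse_chunk = {
    "id": "chatcmpl-abc123",  # placeholder id
    "object": "chat.completion.chunk",
    "created": 1700000000,
    "model": "gemini-placeholder",
    "choices": [
        {"index": 0, "delta": {"content": "Hi"}, "finish_reason": None}
    ],
}

# Response body for GET /v1/models: a list of available model ids.
models_response = {
    "object": "list",
    "data": [
        {"id": "gemini-placeholder", "object": "model", "owned_by": "just-gemini"}
    ],
}

print(json.dumps(request_body))
```

Any OpenAI-compatible client that can be pointed at a custom base URL should be able to consume these shapes without modification.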