@nerdalert
Created April 15, 2026 20:51
ExternalModel Deployment & Validation Guide

Prerequisites

  • OpenShift cluster with oc/kubectl access as cluster-admin
  • MaaS repo cloned (e.g., ~/istio-gw/prs/4-rconciler-namespace-path/models-as-a-service)

Step 1: Deploy MaaS with ODH Operator

cd ~/istio-gw/prs/4-rconciler-namespace-path/models-as-a-service
./scripts/deploy.sh --operator-type odh

This will time out waiting for maas-controller; that's expected. Continue to Step 2.

Step 2: Fix Known ODH Operator Issues

These patches work around upstream bugs that are not yet fixed in the ODH ea.2 image.

# Fix cluster-audience missing from ConfigMap
kubectl patch configmap maas-parameters -n opendatahub --type merge \
  -p '{"data":{"cluster-audience":"https://kubernetes.default.svc"}}'

# Fix maas-api RBAC (secrets + MaaS CRDs)
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: maas-api-secret-reader
  namespace: opendatahub
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: maas-api-secret-reader
  namespace: opendatahub
subjects:
- kind: ServiceAccount
  name: maas-api
  namespace: opendatahub
roleRef:
  kind: Role
  name: maas-api-secret-reader
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: maas-api-supplemental
rules:
- apiGroups: ["maas.opendatahub.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: maas-api-supplemental
subjects:
- kind: ServiceAccount
  name: maas-api
  namespace: opendatahub
roleRef:
  kind: ClusterRole
  name: maas-api-supplemental
  apiGroup: rbac.authorization.k8s.io
EOF

# Scale down operator so it doesn't overwrite our patches
kubectl scale deployment opendatahub-operator-controller-manager -n opendatahub --replicas=0

# Patch ClusterRole for ExternalModel finalizers
kubectl patch clusterrole maas-controller-role --type=json -p='[
  {"op":"add","path":"/rules/-","value":{"apiGroups":["maas.opendatahub.io"],"resources":["externalmodels/finalizers"],"verbs":["update"]}}
]'

# Create models-as-a-service namespace (deploy.sh may not have reached this)
kubectl create namespace models-as-a-service 2>/dev/null || true

# Restart to pick up fixes
kubectl rollout restart deployment maas-controller maas-api -n opendatahub

# Wait and verify
sleep 20
kubectl get pods -n opendatahub | grep maas

Both maas-controller and maas-api should be 1/1 Running.

Step 3: Create ExternalModel Resources

# Create model namespace
kubectl create namespace llm
kubectl label namespace llm istio-injection=enabled

# Create provider API key secret (replace with your key)
kubectl create secret generic openai-api-key -n llm \
  --from-literal=api-key="YOUR_OPENAI_API_KEY"
kubectl label secret openai-api-key -n llm inference.networking.k8s.io/bbr-managed=true

# Create ExternalModel + MaaSModelRef + auth + subscription
kubectl apply -f - <<'EOF'
apiVersion: maas.opendatahub.io/v1alpha1
kind: ExternalModel
metadata:
  name: gpt-4o
  namespace: llm
spec:
  provider: openai
  endpoint: api.openai.com
  targetModel: gpt-4o
  credentialRef:
    name: openai-api-key
---
apiVersion: maas.opendatahub.io/v1alpha1
kind: MaaSModelRef
metadata:
  name: gpt-4o
  namespace: llm
spec:
  modelRef:
    kind: ExternalModel
    name: gpt-4o
---
apiVersion: maas.opendatahub.io/v1alpha1
kind: MaaSAuthPolicy
metadata:
  name: gpt-4o-access
  namespace: models-as-a-service
spec:
  modelRefs:
  - name: gpt-4o
    namespace: llm
  subjects:
    groups:
    - name: "system:authenticated"
---
apiVersion: maas.opendatahub.io/v1alpha1
kind: MaaSSubscription
metadata:
  name: gpt-4o-subscription
  namespace: models-as-a-service
spec:
  owner:
    groups:
    - name: "system:authenticated"
  modelRefs:
  - name: gpt-4o
    namespace: llm
    tokenRateLimits:
    - limit: 100000
      window: "1h"
EOF

# Verify reconciliation
sleep 15
kubectl get maasmodelref -n llm
kubectl get authpolicy -n llm
kubectl get httproute -n llm

MaaSModelRef should show Ready with an endpoint URL.
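The tokenRateLimits block above caps each subscriber at 100000 tokens per one-hour window. As a back-of-envelope sanity check (the 500-token average below is a hypothetical figure, not a measured one), that budget works out to:

```shell
# Rough requests-per-window under the 100000-token/1h limit,
# assuming a hypothetical average of 500 tokens per request.
limit=100000
avg_tokens_per_request=500
echo $((limit / avg_tokens_per_request))   # 200 average-sized requests per hour
```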

Step 4: Deploy BBR (Payload Processing)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payload-processing
  namespace: openshift-ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: payload-processing-reader
rules:
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["maas.opendatahub.io"]
  resources: ["maasmodelrefs", "externalmodels"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: payload-processing-reader
subjects:
- kind: ServiceAccount
  name: payload-processing
  namespace: openshift-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: payload-processing-reader
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: payload-processing-plugins
  namespace: openshift-ingress
data:
  model-to-header-plugin: 'body-field-to-header:model-extractor:{"fieldName":"model","headerName":"X-Gateway-Model-Name"}'
  model-provider-resolver-plugin: "model-provider-resolver:model-provider-resolver"
  api-translation-plugin: "api-translation:api-translation"
  apikey-injection-plugin: "apikey-injection:apikey-injection"
---
apiVersion: v1
kind: Service
metadata:
  name: payload-processing
  namespace: openshift-ingress
spec:
  selector:
    app: payload-processing
  ports:
  - protocol: TCP
    port: 9004
    targetPort: 9004
    appProtocol: HTTP2
  type: ClusterIP
---
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: payload-processing
  namespace: openshift-ingress
spec:
  host: payload-processing.openshift-ingress.svc.cluster.local
  trafficPolicy:
    tls:
      mode: SIMPLE
      insecureSkipVerify: true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payload-processing
  namespace: openshift-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: payload-processing
  template:
    metadata:
      labels:
        app: payload-processing
    spec:
      serviceAccountName: payload-processing
      securityContext:
        runAsNonRoot: true
      containers:
      - name: payload-processing
        image: quay.io/opendatahub/odh-ai-gateway-payload-processing:odh-stable
        args:
        - "--streaming"
        - "--v"
        - "3"
        - "--plugin"
        - "$(MODEL_TO_HEADER)"
        - "--plugin"
        - "$(MODEL_PROVIDER_RESOLVER)"
        - "--plugin"
        - "$(API_TRANSLATION)"
        - "--plugin"
        - "$(APIKEY_INJECTION)"
        - "--tracing=false"
        env:
        - name: MODEL_TO_HEADER
          valueFrom:
            configMapKeyRef:
              name: payload-processing-plugins
              key: model-to-header-plugin
        - name: MODEL_PROVIDER_RESOLVER
          valueFrom:
            configMapKeyRef:
              name: payload-processing-plugins
              key: model-provider-resolver-plugin
        - name: API_TRANSLATION
          valueFrom:
            configMapKeyRef:
              name: payload-processing-plugins
              key: api-translation-plugin
        - name: APIKEY_INJECTION
          valueFrom:
            configMapKeyRef:
              name: payload-processing-plugins
              key: apikey-injection-plugin
        ports:
        - containerPort: 9004
          name: grpc
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        livenessProbe:
          tcpSocket:
            port: grpc
          initialDelaySeconds: 15
          periodSeconds: 20
        readinessProbe:
          tcpSocket:
            port: grpc
          initialDelaySeconds: 5
          periodSeconds: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: payload-processing
  namespace: openshift-ingress
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: maas-default-gateway
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
            subFilter:
              name: extensions.istio.io/wasmplugin/openshift-ingress.kuadrant-maas-default-gateway
    patch:
      operation: INSERT_AFTER
      value:
        name: envoy.filters.http.ext_proc.bbr
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.ext_proc.v3.ExternalProcessor
          failure_mode_allow: false
          allow_mode_override: true
          processing_mode:
            request_header_mode: "SEND"
            response_header_mode: "SEND"
            request_body_mode: "FULL_DUPLEX_STREAMED"
            response_body_mode: "FULL_DUPLEX_STREAMED"
            request_trailer_mode: "SEND"
            response_trailer_mode: "SEND"
          grpc_service:
            envoy_grpc:
              cluster_name: outbound|9004||payload-processing.openshift-ingress.svc.cluster.local
EOF

# Verify BBR is running
sleep 15
kubectl get pods -n openshift-ingress -l app=payload-processing
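The plugin strings in the payload-processing-plugins ConfigMap appear to follow a `type:name[:json-config]` layout (an inference from the values above, not documented behavior). Since the trailing JSON config can itself contain colons, only the first two colons delimit fields. A minimal sketch of how one of these strings splits apart:

```shell
# Split a plugin spec of the (assumed) form type:name:json-config.
# Only the first two colons are separators; the JSON config keeps its own.
spec='body-field-to-header:model-extractor:{"fieldName":"model","headerName":"X-Gateway-Model-Name"}'
plugin_type=${spec%%:*}      # text before the first colon
rest=${spec#*:}              # everything after the first colon
plugin_name=${rest%%:*}      # text before the second colon
plugin_config=${rest#*:}     # remainder: the JSON config, colons intact
echo "$plugin_type / $plugin_name"
echo "$plugin_config"
```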

Step 5: Validate

Mint a key and test inference

GW_HOST=$(kubectl get gateway maas-default-gateway -n openshift-ingress -o jsonpath='{.spec.listeners[0].hostname}')
TOKEN=$(oc whoami -t)

# Mint API key
KEY=$(curl -sk -X POST "https://${GW_HOST}/maas-api/v1/api-keys" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name":"test-key","subscription":"gpt-4o-subscription"}' | jq -r '.key')
echo "MaaS key: $KEY"

# Inference
curl -sk "https://${GW_HOST}/llm/gpt-4o/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $KEY" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"say hello"}]}'
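Note the route shape: with the namespace-path change, model routes live under `/<namespace>/<model>/v1/...` (here `/llm/gpt-4o/...`), which is why the legacy namespace-less path returns 404 in the auth checks below. A sketch of the URL layout (GW_HOST here is a placeholder, not the real cluster hostname):

```shell
# Gateway URL layout for a model exposed in a namespace (placeholder host).
GW_HOST=gateway.example.com
NAMESPACE=llm
MODEL=gpt-4o
URL="https://${GW_HOST}/${NAMESPACE}/${MODEL}/v1/chat/completions"
echo "$URL"
```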

Auth validation (run ~/test-auth.sh or manually)

GW_HOST=$(kubectl get gateway maas-default-gateway -n openshift-ingress -o jsonpath='{.spec.listeners[0].hostname}')
TOKEN=$(oc whoami -t)

KEY=$(curl -sk -X POST "https://${GW_HOST}/maas-api/v1/api-keys" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name":"auth-test","subscription":"gpt-4o-subscription"}' | jq -r '.key')
echo "MaaS key: $KEY"

echo "=== 1. Valid key (expect 200) ==="
curl -sk -w "\nHTTP: %{http_code}\n" "https://${GW_HOST}/llm/gpt-4o/v1/chat/completions" \
  -H "Content-Type: application/json" -H "Authorization: Bearer $KEY" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"hi"}]}'

echo "=== 2. Bogus sk-oai- key (expect 403) ==="
curl -sk -w "\nHTTP: %{http_code}\n" "https://${GW_HOST}/llm/gpt-4o/v1/chat/completions" \
  -H "Content-Type: application/json" -H "Authorization: Bearer sk-oai-FAKE-KEY-12345" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"hi"}]}'

echo "=== 3. Random token (expect 401) ==="
curl -sk -w "\nHTTP: %{http_code}\n" "https://${GW_HOST}/llm/gpt-4o/v1/chat/completions" \
  -H "Content-Type: application/json" -H "Authorization: Bearer randomgarbage" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"hi"}]}'

echo "=== 4. No auth (expect 401) ==="
curl -sk -w "\nHTTP: %{http_code}\n" "https://${GW_HOST}/llm/gpt-4o/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"hi"}]}'

echo "=== 5. Old path without namespace (expect 404) ==="
curl -sk -w "\nHTTP: %{http_code}\n" "https://${GW_HOST}/gpt-4o/v1/chat/completions" \
  -H "Content-Type: application/json" -H "Authorization: Bearer $KEY" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"hi"}]}'

echo "=== 6. Bogus key, old path (expect 404) ==="
curl -sk -w "\nHTTP: %{http_code}\n" "https://${GW_HOST}/gpt-4o/v1/chat/completions" \
  -H "Content-Type: application/json" -H "Authorization: Bearer sk-oai-FAKE-KEY-12345" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"hi"}]}'

echo "=== 7. Direct header injection (expect 401) ==="
curl -sk -w "\nHTTP: %{http_code}\n" "https://${GW_HOST}/v1/chat/completions" \
  -H "Content-Type: application/json" -H "Authorization: Bearer FAKE" \
  -H "X-Gateway-Model-Name: gpt-4o" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"hi"}]}'

Troubleshooting

maas-controller CreateContainerConfigError

kubectl patch configmap maas-parameters -n opendatahub --type merge \
  -p '{"data":{"cluster-audience":"https://kubernetes.default.svc"}}'
kubectl rollout restart deployment maas-controller -n opendatahub

maas-api CrashLoopBackOff (secrets forbidden)

# Apply the maas-api RBAC from Step 2

ExternalModel stuck Pending (blockOwnerDeletion error)

kubectl patch clusterrole maas-controller-role --type=json -p='[
  {"op":"add","path":"/rules/-","value":{"apiGroups":["maas.opendatahub.io"],"resources":["externalmodels/finalizers"],"verbs":["update"]}}
]'
kubectl rollout restart deployment maas-controller -n opendatahub

Verify ExternalModel RBAC

oc auth can-i update externalmodels --subresource=finalizers \
  -n llm --as=system:serviceaccount:opendatahub:maas-controller
# Should print: yes