Status: Phase 1 Complete (Models, Factories, Tests) - December 2025
Overall Progress: Foundation Complete, UI/Controllers Pending
- System Overview
- Architecture Diagram
- Core Models (12 Total)
- Data Flow
- AI Integration
- GitHub Automation
- Performance Tracking
- Usage Examples
- Database Schema
- Implementation Status
The AI-First Support System is a comprehensive help center and support automation platform that combines:
- Knowledge Base with semantic search (RAG)
- AI Support Agent (Claude Sonnet 4.5) targeting 60%+ autonomous resolution
- Automated GitHub Integration for bug reports → issues → PRs
- Canny-style Feature Voting with MRR-weighted prioritization
- Real-time Performance Analytics for AI and human agents
- Training Loop that improves AI responses from successful resolutions
| Metric | Target | Purpose |
|---|---|---|
| Autonomous Resolution Rate | 60%+ | AI resolves without human escalation |
| Avg Response Time | <7 minutes | First AI response to user |
| Cost per 1K Users | ~$25/month | LLM + embeddings operational cost |
| Human Escalation Rate | <30% | Only complex issues reach humans |
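The table's rates can be derived from raw counts. A minimal plain-Ruby sketch (the method name and input fields are illustrative, not the actual schema):

```ruby
# Hypothetical helper deriving the KPI table's metrics from raw counts.
def support_kpis(total:, resolved_by_ai:, escalated:, monthly_llm_cost_usd:, active_users:)
  {
    autonomous_resolution_rate: (resolved_by_ai.to_f / total * 100).round(1),
    human_escalation_rate: (escalated.to_f / total * 100).round(1),
    cost_per_1k_users: (monthly_llm_cost_usd / (active_users / 1000.0)).round(2)
  }
end

kpis = support_kpis(total: 200, resolved_by_ai: 128, escalated: 52,
                    monthly_llm_cost_usd: 25.0, active_users: 1000)
kpis[:autonomous_resolution_rate] # => 64.0 (meets the 60%+ target)
kpis[:human_escalation_rate]      # => 26.0 (under the 30% ceiling)
```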
graph TD
User[User Submits Ticket] --> Ticket[SupportTicket]
Ticket --> AI[AI Agent Processes]
AI --> Message[SupportTicketMessage]
AI --> Search[Search Knowledge Base]
Search --> Article[SupportArticle]
Article --> Category[SupportCategory]
Article --> Metric[SupportKnowledgeBaseMetric]
Message --> Decision{AI Confident?}
Decision -->|Yes| Resolve[Auto-Resolve]
Decision -->|No| Escalate[SupportEscalation]
Escalate --> Human[Human Agent]
Human --> Message
Ticket --> Event[SupportTicketEvent]
Ticket --> GitHub[SupportGithubLink]
GitHub --> Issue[GitHub Issue Created]
GitHub --> PR[AI Attempts Fix → PR]
Message --> Training[SupportTrainingExample]
Training --> Improve[Future AI Responses]
Ticket --> Performance[Performance Tracking]
Performance --> AgentPerf[SupportAgentPerformance]
Performance --> ModelPerf[SupportModelPerformance]
Article --> Vote[SupportFeedbackVote]
Vote --> Roadmap[Public Roadmap]
style AI fill:#e1f5ff
style Training fill:#fff4e1
style GitHub fill:#f0e1ff
style Performance fill:#e1ffe1
Purpose: Core support request from a user
# Key Fields
ticket_type: enum [:bug, :feature, :cancellation, :billing, :general]
status: enum [:new, :analyzing, :ai_responding, :escalated, :resolved, :closed]
priority: enum [:low, :normal, :high, :urgent]
satisfaction_rating: integer (1-5 stars)
ai_confidence_score: decimal (0-1)
# Key Relationships
has_many :messages # Conversation history
has_many :events # Audit trail
has_one :escalation # If escalated to human
has_one :github_link # If bug → GitHub issue
belongs_to :user # Ticket creator
belongs_to :assigned_to (User) # Human agent if escalated
Key Methods:
ticket.auto_escalate_if_needed! # Escalates if AI confidence < threshold
ticket.escalate!(reason:, agent:) # Manual escalation
ticket.resolve!(resolution:) # Mark resolved
ticket.close! # Close ticket
Purpose: Individual message in ticket conversation (user or AI)
# Key Fields
sender_type: string ["user", "ai", "agent"]
content: text
ai_model_used: string # e.g., "claude-sonnet-4-5"
ai_confidence: decimal (0-1)
ai_tokens_used: integer
ai_context_used: jsonb # What articles/examples were referenced
# Key Relationships
belongs_to :ticket
belongs_to :user (optional) # If from human
Usage:
# AI responds to ticket
message = ticket.messages.create!(
sender_type: "ai",
content: ai_response,
ai_model_used: "claude-sonnet-4-5",
ai_confidence: 0.87,
ai_tokens_used: 1543,
ai_context_used: { articles: [123, 456], examples: [789] }
)
Purpose: Help articles with semantic search via embeddings
# Key Fields
title: string
slug: string (auto-generated)
content: text
status: enum [:draft, :published, :archived]
is_beta: boolean
embedding_text: text # Combined title+content for RAG
embedding_generated_at: datetime
view_count: integer
helpful_count: integer
not_helpful_count: integer
# Key Relationships
belongs_to :category
belongs_to :author (User)
has_many :feedback_votes
has_many :metrics
Key Methods:
article.needs_embedding? # Check if needs re-embedding
article.increment_view_count!
article.mark_helpful!
article.mark_not_helpful!
article.helpfulness_ratio # Returns percentage (0-100)
Scopes:
SupportArticle.published # Only published, live articles
SupportArticle.beta # Beta features (gated)
SupportArticle.popular # Sorted by view_count
SupportArticle.helpful # Sorted by helpfulness score
Purpose: Hierarchical organization of articles
# Key Fields
name: string
slug: string (auto-generated)
description: text
icon: string # Emoji or icon name
display_order: integer
visible: boolean
# Key Relationships
belongs_to :parent (SupportCategory, optional)
has_many :subcategories (SupportCategory)
has_many :articles
Example Hierarchy:
Getting Started (parent)
├─ Account Setup (subcategory)
├─ First App (subcategory)
└─ Billing Basics (subcategory)
Advanced Features (parent)
├─ API Integration (subcategory)
└─ Custom Domains (subcategory)
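The hierarchy above flattens naturally from the parent/subcategory relationship. A plain-Ruby sketch (no ActiveRecord; the `render_tree` helper and its hash input are hypothetical):

```ruby
# Flatten a { parent => [subcategories] } hash into the tree layout above.
def render_tree(hierarchy)
  hierarchy.flat_map do |parent, children|
    branch = children.each_with_index.map do |child, i|
      connector = (i == children.size - 1) ? "└─ " : "├─ "
      "#{connector}#{child}"
    end
    [parent, *branch]
  end
end

render_tree(
  "Getting Started"   => ["Account Setup", "First App", "Billing Basics"],
  "Advanced Features" => ["API Integration", "Custom Domains"]
)
# => ["Getting Started", "├─ Account Setup", "├─ First App", "└─ Billing Basics",
#     "Advanced Features", "├─ API Integration", "└─ Custom Domains"]
```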
Purpose: Track AI → Human escalations with context
# Key Fields
escalation_reason: text
escalation_type: string ["ai_low_confidence", "user_requested", "complex_issue"]
ai_attempted_solutions: jsonb # What AI tried before escalating
resolved: boolean
# Key Relationships
belongs_to :ticket
belongs_to :escalated_by (User, optional)
belongs_to :assigned_to (User)
Escalation Reasons:
- ai_low_confidence: AI confidence < 0.7 threshold
- user_requested: User explicitly asks for a human
- complex_issue: Requires account access, refunds, etc.
Purpose: Automatic GitHub issue/PR creation for bugs
# Key Fields
status: enum [:pending, :issue_created, :pr_drafted, :pr_merged, :pr_rejected]
github_repo: string
github_issue_number: integer
github_issue_url: string
github_pr_number: integer
github_pr_url: string
ai_suggested_fix: text
pr_checks_passed: boolean
human_reviewed: boolean
# Key Relationships
belongs_to :ticket
Workflow:
# 1. Bug ticket created
ticket = SupportTicket.create!(ticket_type: :bug, ...)
# 2. AI analyzes and creates GitHub issue
github_link = SupportGithubLink.create!(
ticket: ticket,
github_repo: "org/repo",
status: :pending
)
# 3. GitHub issue created
github_link.create_github_issue!(title:, body:)
github_link.update!(status: :issue_created, github_issue_number: 123)
# 4. AI attempts fix and drafts PR
github_link.draft_pull_request!(code_changes:)
github_link.update!(status: :pr_drafted)
# 5. Human reviews and merges
github_link.update!(human_reviewed: true)
# After CI passes and merge:
github_link.update!(status: :pr_merged)
Key Methods:
github_link.ready_for_review? # PR drafted, checks pass, not reviewed
github_link.github_issue_link # Returns full URL
Purpose: Canny-style feature request voting
# Key Fields
title: string
description: text
vote_type: string ["feature_request", "bug_report", "improvement"]
status: enum [:open, :under_review, :planned, :in_progress, :completed, :wont_fix]
upvotes: integer
downvotes: integer
weighted_score: decimal # MRR-weighted voting
admin_notes: text
# Key Relationships
belongs_to :user
belongs_to :article (optional) # Can link to related article
MRR-Weighted Voting:
# Higher-paying customers' votes count more
vote.weighted_score = vote.upvotes * user.team.mrr_weight
Scopes:
SupportFeedbackVote.feature_requests # Only feature requests
SupportFeedbackVote.most_voted # Sorted by weighted_score
SupportFeedbackVote.planned # Roadmap items
Purpose: Immutable log of all ticket state changes
# Key Fields
event_type: string [
"created", "status_changed", "assigned", "unassigned",
"priority_changed", "escalated", "resolved", "closed",
"reopened", "comment_added", "github_issue_created", "github_pr_created"
]
description: text
metadata: jsonb
field_changed: string (optional)
old_value: string (optional)
new_value: string (optional)
# Key Relationships
belongs_to :ticket
belongs_to :user (optional)
Example Events:
# Status change
SupportTicketEvent.create!(
ticket: ticket,
event_type: "status_changed",
field_changed: "status",
old_value: "new",
new_value: "resolved",
description: "Ticket resolved by AI"
)
# GitHub issue created
SupportTicketEvent.create!(
ticket: ticket,
event_type: "github_issue_created",
metadata: { issue_number: 123, issue_url: "..." },
description: "GitHub issue #123 created"
)
Purpose: Store successful resolutions to improve future AI responses
# Key Fields
user_query: text
ai_response: text
outcome: string ["successful", "escalated", "failed"]
confidence_score: decimal (0-1)
used_in_training: boolean
quality_score: integer (1-5)
pii_detected: jsonb # ["email", "phone", "ssn"]
anonymized_query: text # PII-stripped version
# Key Relationships
belongs_to :ticket
belongs_to :ticket_message
Training Loop:
# 1. Successful resolution → save as training example
if ticket.resolved? && ticket.satisfaction_rating >= 4
SupportTrainingExample.create!(
ticket: ticket,
ticket_message: ai_message,
user_query: ticket.subject,
ai_response: ai_message.content,
outcome: "successful",
confidence_score: ai_message.ai_confidence
)
end
# 2. Periodic retraining uses high-quality examples
examples = SupportTrainingExample.high_quality.unused
# Feed into fine-tuning pipeline
Scopes:
SupportTrainingExample.high_quality # quality_score >= 4
SupportTrainingExample.ready_for_training # Not used, no PII
SupportTrainingExample.successful # outcome == "successful"
Purpose: Track human agent performance metrics
# Key Fields
metric_date: date
tickets_handled: integer
tickets_resolved: integer
tickets_escalated_further: integer
avg_first_response_time: integer (seconds)
avg_resolution_time: integer (seconds)
training_examples_created: integer
quality_score: decimal (0-5)
# Key Relationships
belongs_to :agent (User)
Key Methods:
performance.resolution_rate # Percentage resolved
performance.escalation_rate # Percentage escalated further
performance.quality_rate # Training examples / tickets
performance.avg_first_response_time_human # "2h 15m" format
Calculation:
# Daily calculation
SupportAgentPerformance.calculate_for_agent_and_date(
agent: agent,
date: Date.current
)
# Aggregates all tickets handled by agent on that date
Purpose: Track AI model performance by date
# Key Fields
ai_model: string # "claude-sonnet-4-5"
metric_date: date
total_requests: integer
successful_resolutions: integer
escalations: integer
avg_confidence_score: decimal (0-1)
high_confidence_count: integer # confidence > 0.8
low_confidence_count: integer # confidence < 0.5
total_tokens_used: integer
total_cost_usd: decimal
avg_user_satisfaction: decimal (0-1)
positive_feedback_count: integer # rating >= 4
negative_feedback_count: integer # rating <= 2
# No relationships - aggregate metrics only
Key Methods:
performance.success_rate # Percentage successful
performance.escalation_rate # Percentage escalated
performance.cost_per_request # Cost efficiency
performance.positive_feedback_rate # User satisfaction
Calculation:
# Daily calculation per model
SupportModelPerformance.calculate_for_model_and_date(
ai_model: "claude-sonnet-4-5",
date: Date.current
)
# Aggregates:
# - All AI messages from that model
# - Resulting ticket outcomes
# - Token usage and costs
# - User satisfaction ratings
Purpose: Track article performance and usefulness
# Key Fields
metric_type: string ["article_view", "search_result", "ai_referenced", "helpful_vote"]
metric_date: date
value: integer
search_query: string (optional)
ai_model: string (optional)
# Key Relationships
belongs_to :article
belongs_to :user (optional)
Metrics Tracked:
- article_view: Article page view
- search_result: Article appeared in search results
- ai_referenced: AI used article in response
- helpful_vote: User marked article helpful/not helpful
Usage:
# Track article view
SupportKnowledgeBaseMetric.create!(
article: article,
metric_type: "article_view",
metric_date: Date.current,
value: 1,
user: current_user
)
# Track AI reference
SupportKnowledgeBaseMetric.create!(
article: article,
metric_type: "ai_referenced",
metric_date: Date.current,
value: 1,
ai_model: "claude-sonnet-4-5"
)
sequenceDiagram
participant User
participant Ticket
participant AI
participant KB as Knowledge Base
participant Message
participant Event
User->>Ticket: Submit ticket (bug/feature/help)
Ticket->>Event: Log "created" event
Ticket->>AI: Analyze ticket
AI->>KB: Search relevant articles
KB-->>AI: Return matching articles
AI->>Message: Generate AI response
Message->>Ticket: Attach to ticket
alt High Confidence (>0.8)
AI->>Ticket: Auto-resolve
Ticket->>Event: Log "resolved" event
else Low Confidence (<0.7)
AI->>Ticket: Auto-escalate
Ticket->>Event: Log "escalated" event
else Medium Confidence
AI->>User: Ask clarifying questions
end
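The three-way routing in the diagram above reduces to a simple threshold check. A plain-Ruby sketch (the 0.8 and 0.7 thresholds come from this document; the method name and return symbols are illustrative):

```ruby
# Route an AI response based on its confidence score, mirroring the
# alt branches above: high -> resolve, low -> escalate, middle -> clarify.
def route_ai_response(confidence)
  if confidence > 0.8
    :auto_resolve
  elsif confidence < 0.7
    :escalate_to_human
  else
    :ask_clarifying_question
  end
end

route_ai_response(0.87) # => :auto_resolve
route_ai_response(0.75) # => :ask_clarifying_question
route_ai_response(0.40) # => :escalate_to_human
```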
sequenceDiagram
participant User
participant Ticket
participant AI
participant GitHub as SupportGithubLink
participant GH as GitHub API
User->>Ticket: Submit bug report
Ticket->>AI: Analyze bug
AI->>GitHub: Create GitHub link record
GitHub->>GH: POST /repos/:owner/:repo/issues
GH-->>GitHub: Issue #123 created
GitHub->>Ticket: Link issue to ticket
Ticket->>Event: Log "github_issue_created"
AI->>AI: Attempt fix
AI->>GH: POST /repos/:owner/:repo/pulls
GH-->>GitHub: PR #45 drafted
GitHub->>GitHub: Wait for CI checks
GitHub->>Human: Notify for review
Human->>GH: Review and merge PR
GH-->>GitHub: PR merged
GitHub->>Ticket: Auto-resolve ticket
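The sequence above implies a strict ordering of `SupportGithubLink` statuses. A sketch of that progression as a plain transition table (illustrative, not the actual model code):

```ruby
# Allowed status transitions for a GitHub link record:
# pending -> issue_created -> pr_drafted -> pr_merged | pr_rejected.
GITHUB_LINK_TRANSITIONS = {
  pending:       [:issue_created],
  issue_created: [:pr_drafted],
  pr_drafted:    [:pr_merged, :pr_rejected],
  pr_merged:     [],
  pr_rejected:   []
}.freeze

def valid_github_transition?(from, to)
  GITHUB_LINK_TRANSITIONS.fetch(from, []).include?(to)
end

valid_github_transition?(:pending, :issue_created)   # => true
valid_github_transition?(:issue_created, :pr_merged) # => false (PR must be drafted first)
```

Guarding `update!(status: ...)` with a check like this keeps the record from skipping the human-review step.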
graph LR
A[Ticket Resolved] --> B{Satisfaction >= 4?}
B -->|Yes| C[Create TrainingExample]
B -->|No| D[Discard]
C --> E[Strip PII]
E --> F[Quality Review]
F --> G{Quality >= 4?}
G -->|Yes| H[Mark ready_for_training]
G -->|No| I[Archive]
H --> J[Periodic Batch]
J --> K[Fine-tune AI Model]
K --> L[Deploy Improved Model]
L --> M[Better Future Responses]
style C fill:#e1ffe1
style K fill:#e1f5ff
style M fill:#fff4e1
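The "Strip PII" step above can be sketched with simple pattern redaction. This is a minimal illustration only; real anonymization needs more than regexes, and the patterns below are assumptions, not the system's actual detectors:

```ruby
# Redact common PII patterns and record what was found, producing the
# anonymized_query and pii_detected fields described for SupportTrainingExample.
PII_PATTERNS = {
  "email" => /\b[\w.+-]+@[\w-]+\.[\w.]+\b/,
  "phone" => /\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b/
}.freeze

def strip_pii(text)
  detected = []
  anonymized = text.dup
  PII_PATTERNS.each do |label, pattern|
    next unless anonymized.match?(pattern)
    detected << label
    anonymized.gsub!(pattern, "[REDACTED_#{label.upcase}]")
  end
  { anonymized_query: anonymized, pii_detected: detected }
end

strip_pii("Email me at jane@example.com or call 555-123-4567")
# => { anonymized_query: "Email me at [REDACTED_EMAIL] or call [REDACTED_PHONE]",
#      pii_detected: ["email", "phone"] }
```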
Embedding Generation:
# When article published or updated
article = SupportArticle.find(123)
if article.needs_embedding?
# Combine title + content for embedding
embedding_text = "#{article.title}\n\n#{article.content}"
# Generate embedding with OpenAI
response = OpenAI::Client.new.embeddings(
parameters: {
model: "text-embedding-3-small",
input: embedding_text
}
)
article.update!(
embedding_text: embedding_text,
embedding_vector: response["data"][0]["embedding"],
embedding_generated_at: Time.current
)
end
Semantic Search:
# User's question
query = "How do I deploy my app?"
# Generate query embedding
query_embedding = OpenAI::Client.new.embeddings(
parameters: { model: "text-embedding-3-small", input: query }
)["data"][0]["embedding"]
# Vector similarity search (PostgreSQL pgvector)
articles = SupportArticle
.published
.order(Arel.sql("embedding_vector <=> '[#{query_embedding.join(",")}]'")) # NOTE: interpolation shown for brevity; use bind parameters in production
.limit(5)
# Return top 5 most relevant articles
Context Assembly:
def build_ai_context(ticket, relevant_articles)
context = {
ticket: {
subject: ticket.subject,
description: ticket.description,
type: ticket.ticket_type,
user_history: ticket.user.support_tickets.recent.pluck(:subject)
},
articles: relevant_articles.map do |article|
{
title: article.title,
summary: article.summary,
url: article_url(article)
}
end,
previous_messages: ticket.messages.order(created_at: :asc).map do |msg|
{
sender: msg.sender_type,
content: msg.content,
timestamp: msg.created_at
}
end
}
end
AI Call:
response = RubyLLM.chat(
model: "claude-sonnet-4-5",
system: "You are a helpful support agent for OverSkill. Use the provided context...",
messages: [
{
role: "user",
content: JSON.pretty_generate(build_ai_context(ticket, articles))
}
]
)
# Parse response
ai_response = response.content
confidence = response.metadata[:confidence] # 0-1 score
# Save message
message = ticket.messages.create!(
sender_type: "ai",
content: ai_response,
ai_model_used: "claude-sonnet-4-5",
ai_confidence: confidence,
ai_tokens_used: response.usage.total_tokens,
ai_context_used: {
articles: articles.pluck(:id),
embedding_model: "text-embedding-3-small"
}
)
Escalation Logic:
class SupportTicket < ApplicationRecord
def auto_escalate_if_needed!
last_ai_message = messages.where(sender_type: "ai").last
return unless last_ai_message
# Escalate if confidence too low
if last_ai_message.ai_confidence < 0.7
escalate!(
reason: "AI confidence below threshold (#{last_ai_message.ai_confidence})",
escalation_type: "ai_low_confidence"
)
end
# Escalate if too many back-and-forth messages (>5)
if messages.count > 5 && status_ai_responding?
escalate!(
reason: "Extended conversation without resolution",
escalation_type: "complex_issue"
)
end
end
end
Bug Ticket → AI Analysis → GitHub Issue → AI Fix Attempt → Draft PR → Human Review → Merge
1. GitHub Issue Creation:
class SupportGithubLink < ApplicationRecord
def create_github_issue!(title:, body:)
client = Octokit::Client.new(access_token: ENV['GITHUB_TOKEN'])
issue = client.create_issue(
github_repo,
title,
body,
labels: ["bug", "automated"]
)
update!(
status: :issue_created,
github_issue_number: issue.number,
github_issue_url: issue.html_url
)
# Log event
ticket.events.create!(
event_type: "github_issue_created",
metadata: { issue_number: issue.number, issue_url: issue.html_url },
description: "GitHub issue ##{issue.number} created automatically"
)
end
end
2. AI Fix Attempt:
def attempt_automated_fix
# AI analyzes bug and suggests fix
fix_result = AI::BugFixService.new(
ticket: ticket,
github_issue_url: github_issue_url
).generate_fix
update!(ai_suggested_fix: fix_result[:code_changes])
# If high confidence, draft PR
if fix_result[:confidence] > 0.8
draft_pull_request!(code_changes: fix_result[:code_changes])
end
end
3. Draft PR Creation:
def draft_pull_request!(code_changes:)
client = Octokit::Client.new(access_token: ENV['GITHUB_TOKEN'])
# Create branch
base_sha = client.ref(github_repo, "heads/main").object.sha
branch_name = "automated-fix/ticket-#{ticket.id}"
client.create_ref(
github_repo,
"refs/heads/#{branch_name}",
base_sha
)
# Commit changes
code_changes.each do |file_path, content|
client.create_contents(
github_repo,
file_path,
"Fix: #{ticket.subject}",
content,
branch: branch_name
)
end
# Create draft PR
pr = client.create_pull_request(
github_repo,
"main",
branch_name,
"[Automated Fix] #{ticket.subject}",
pr_body,
draft: true
)
update!(
status: :pr_drafted,
github_pr_number: pr.number,
github_pr_url: pr.html_url
)
end
4. Human Review Gate:
def ready_for_review?
status_pr_drafted? &&
pr_checks_passed? &&
!human_reviewed?
end
# Admin dashboard shows:
SupportGithubLink.where(status: :pr_drafted, pr_checks_passed: true, human_reviewed: false) # ready_for_review? is a method, not a column
Daily Metrics:
# Calculate daily metrics for agent
performance = SupportAgentPerformance.calculate_for_agent_and_date(
agent: agent,
date: Date.current
)
# Display in dashboard
{
tickets_handled: performance.tickets_handled,
resolution_rate: performance.resolution_rate, # 85%
escalation_rate: performance.escalation_rate, # 10%
quality_rate: performance.quality_rate, # 75%
avg_first_response: performance.avg_first_response_time_human, # "2h 15m"
avg_resolution: performance.avg_resolution_time # 1.5 hours
}
Leaderboard:
# Top performers this month
SupportAgentPerformance
.for_date_range(Date.current.beginning_of_month, Date.current)
.group_by(&:agent_id)
.map { |agent_id, perfs|
{
agent: User.find(agent_id),
total_resolved: perfs.sum(&:tickets_resolved),
avg_resolution_rate: perfs.sum(&:tickets_resolved) / perfs.sum(&:tickets_handled).to_f * 100
}
}
.sort_by { |p| -p[:total_resolved] }
.first(10)
Model Comparison:
# Compare models this week
models = ["claude-sonnet-4-5", "claude-haiku-4-5", "claude-opus-4-5"]
comparison = models.map do |model|
perf = SupportModelPerformance
.for_model(model)
.for_date_range(1.week.ago, Date.current)
{
model: model,
success_rate: perf.sum(:successful_resolutions).to_f / perf.sum(:total_requests) * 100, # success_rate is derived, not a column
avg_confidence: perf.average(:avg_confidence_score),
cost_per_request: perf.sum(:total_cost_usd) / perf.sum(:total_requests),
user_satisfaction: perf.average(:avg_user_satisfaction)
}
end
# Best model = highest success_rate, lowest cost
best = comparison.max_by { |m| m[:success_rate] / m[:cost_per_request] }
Cost Tracking:
# Monthly cost by model
SupportModelPerformance
.for_date_range(Date.current.beginning_of_month, Date.current)
.group(:ai_model)
.sum(:total_cost_usd)
# => { "claude-sonnet-4-5" => 23.45, "claude-haiku-4-5" => 2.15 }# User submits ticket
ticket = SupportTicket.create!(
user: current_user,
subject: "Can't deploy my app",
description: "Getting error 500 when I try to deploy",
ticket_type: :bug,
priority: :normal,
status: :new
)
# Log creation event
ticket.events.create!(
event_type: "created",
description: "Ticket created by user"
)
# Trigger AI analysis
AI::TicketAnalysisJob.perform_later(ticket.id)
# In AI::TicketAnalysisJob
class AI::TicketAnalysisJob < ApplicationJob
def perform(ticket_id)
ticket = SupportTicket.find(ticket_id)
# Update status
ticket.update!(status: :analyzing)
# Search knowledge base
articles = search_knowledge_base(ticket.description)
# Generate AI response
response = generate_ai_response(ticket, articles)
# Create message
message = ticket.messages.create!(
sender_type: "ai",
content: response[:content],
ai_model_used: "claude-sonnet-4-5",
ai_confidence: response[:confidence],
ai_tokens_used: response[:tokens],
ai_context_used: { articles: articles.pluck(:id) }
)
# Track metrics
articles.each do |article|
article.metrics.create!(
metric_type: "ai_referenced",
metric_date: Date.current,
value: 1,
ai_model: "claude-sonnet-4-5"
)
end
# Auto-escalate if needed
ticket.update!(status: :ai_responding)
ticket.auto_escalate_if_needed!
# If not escalated and high confidence, resolve
if !ticket.status_escalated? && response[:confidence] > 0.85
ticket.resolve!(resolution: message.content)
end
end
end
# User clicks "Yes, this helped"
article = SupportArticle.find(params[:id])
article.mark_helpful!
article.metrics.create!(
metric_type: "helpful_vote",
metric_date: Date.current,
value: 1,
user: current_user
)
# After ticket resolved with high satisfaction
if ticket.resolved? && ticket.satisfaction_rating >= 4
ai_message = ticket.messages.where(sender_type: "ai").last
SupportTrainingExample.create!(
ticket: ticket,
ticket_message: ai_message,
user_query: ticket.description,
ai_response: ai_message.content,
outcome: "successful",
confidence_score: ai_message.ai_confidence,
quality_score: ticket.satisfaction_rating
)
end
vote = SupportFeedbackVote.create!(
user: current_user,
title: "Dark mode for dashboard",
description: "Would love a dark mode option",
vote_type: "feature_request",
status: :open,
upvotes: 1,
weighted_score: current_user.team.mrr_weight
)
# Other users can upvote
vote.increment!(:upvotes)
vote.update!(
weighted_score: vote.upvotes * avg_voter_mrr_weight
)
User (BulletTrain)
├── has_many :support_tickets (as creator)
├── has_many :assigned_tickets (as agent)
├── has_many :support_articles (as author)
└── has_many :support_feedback_votes
SupportTicket (Central Hub)
├── belongs_to :user (creator)
├── belongs_to :assigned_to (User, optional)
├── has_many :messages
├── has_many :events
├── has_one :escalation
└── has_one :github_link
SupportTicketMessage
├── belongs_to :ticket
└── belongs_to :user (optional)
SupportArticle
├── belongs_to :category
├── belongs_to :author (User)
├── has_many :feedback_votes
└── has_many :metrics
SupportCategory
├── belongs_to :parent (SupportCategory, optional)
├── has_many :subcategories
└── has_many :articles
SupportEscalation
├── belongs_to :ticket
├── belongs_to :escalated_by (User, optional)
└── belongs_to :assigned_to (User)
SupportGithubLink
└── belongs_to :ticket
SupportFeedbackVote
├── belongs_to :user
└── belongs_to :article (optional)
SupportTicketEvent
├── belongs_to :ticket
└── belongs_to :user (optional)
SupportTrainingExample
├── belongs_to :ticket
└── belongs_to :ticket_message
SupportAgentPerformance
└── belongs_to :agent (User)
SupportModelPerformance
(aggregate metrics, no relationships)
SupportKnowledgeBaseMetric
├── belongs_to :article
└── belongs_to :user (optional)
# High-traffic queries
add_index :support_tickets, [:user_id, :status]
add_index :support_tickets, [:status, :created_at]
add_index :support_tickets, :assigned_to_id
add_index :support_ticket_messages, [:ticket_id, :created_at]
add_index :support_ticket_messages, :sender_type
add_index :support_articles, [:status, :published_at]
add_index :support_articles, [:category_id, :status]
add_index :support_articles, :slug, unique: true
add_index :support_model_performances, [:ai_model, :metric_date], unique: true
add_index :support_agent_performances, [:agent_id, :metric_date], unique: true
# Vector similarity search (pgvector)
add_index :support_articles, :embedding_vector, using: :ivfflat, opclass: :vector_cosine_ops
- Database migrations for all 12 models
- Model classes with validations and associations
- FactoryBot factories for testing
- RSpec model tests (140/213 passing, 66%)
- Comprehensive documentation
- Admin controllers for article management
- API endpoints for ticket CRUD
- Webhook endpoints for GitHub integration
- Routes configuration
- Controller specs
- Help panel Stimulus controller
- Article search interface
- Ticket submission form
- Agent dashboard (real-time feed)
- Admin analytics dashboard
- RAG implementation (embeddings + search)
- AI agent service
- Training loop pipeline
- GitHub automation service
- Performance tracking jobs
- Seed data for development
- Integration tests (golden flows)
- Production deployment
- Monitoring and alerts
- Documentation for end users
1. Run migrations:
rails db:migrate
2. Seed test data:
rails db:seed # TODO: Create seed file
3. Run tests:
# All support model tests
rspec spec/models/support_*
# Specific model
rspec spec/models/support_ticket_spec.rb
4. Create a test ticket:
rails console
# Create ticket
ticket = SupportTicket.create!(
user: User.first,
subject: "Test ticket",
description: "This is a test",
ticket_type: :general
)
# Add AI response
ticket.messages.create!(
sender_type: "ai",
content: "I can help with that!",
ai_model_used: "claude-sonnet-4-5",
ai_confidence: 0.92
)
Create realistic test data:
# Categories
getting_started = SupportCategory.create!(
name: "Getting Started",
slug: "getting-started",
visible: true
)
# Articles
article = SupportArticle.create!(
title: "How to deploy your first app",
content: "Here's how to deploy...",
category: getting_started,
author: User.first,
status: :published
)
# Tickets
10.times do |i|
ticket = SupportTicket.create!(
user: User.first,
subject: "Question #{i}",
description: "I need help with...",
ticket_type: :general,
status: :new
)
end
| Component | Cost | Calculation |
|---|---|---|
| OpenAI Embeddings | $5 | 100K articles × $0.00002/1K tokens |
| Claude API Calls | $15 | 10K tickets × 1K tokens × $0.003/1K |
| Vector Database | $0 | PostgreSQL pgvector (free) |
| GitHub API | $0 | Free tier (5K requests/hour) |
| Total | ~$25/month | |
Scaling to 10K users: ~$250/month (linear scaling)
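A back-of-envelope version of this scaling claim, using the per-1K-user component costs from the table (note the listed components sum to $20; the document's headline figures round up to ~$25 and ~$250):

```ruby
# Monthly cost estimate assuming linear scaling with user count.
# Component figures are taken from the cost table above.
COMPONENT_COSTS_PER_1K_USERS = {
  openai_embeddings: 5.0,
  claude_api_calls: 15.0,
  vector_database: 0.0, # PostgreSQL pgvector (free)
  github_api: 0.0       # Free tier
}.freeze

def estimated_monthly_cost_usd(active_users)
  (active_users / 1000.0) * COMPONENT_COSTS_PER_1K_USERS.values.sum
end

estimated_monthly_cost_usd(1_000)  # => 20.0
estimated_monthly_cost_usd(10_000) # => 200.0
```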
| Metric | Target | Current Status |
|---|---|---|
| Article Search Latency | <200ms | ⏳ Pending (pgvector setup) |
| AI Response Time | <7 seconds | ⏳ Pending (AI integration) |
| Ticket Creation | <100ms | ✅ Expected (Rails standard) |
| Embedding Generation | <1 second | ⏳ Pending (OpenAI API) |
| Dashboard Load | <500ms | ⏳ Pending (UI implementation) |
- Immediate (Week 1):
- Fix remaining 73 test failures
- Create seed data for development
- Set up development environment for AI integration
- Short-term (Weeks 2-3):
- Implement admin controllers
- Build help panel UI component
- Set up OpenAI API integration
- Medium-term (Month 2):
- Complete AI agent service
- GitHub automation
- Agent dashboard
- Long-term (Month 3+):
- Production deployment
- Training loop pipeline
- Advanced analytics