Overview

Modern development workflows demand flexibility, consistency, and powerful tooling. Cloud development environments solve these challenges by providing on-demand, standardized workspaces that developers can access from anywhere. This post explores how I built a production-ready cloud development platform using Coder, Kubernetes, and homelab infrastructure.

What is Coder?

Coder is an open-source platform that provisions cloud development environments. Think of it as “development workspaces as a service” - developers can spin up fully-configured development machines on-demand, access them via SSH or web-based IDEs, and destroy them when done.

Key Benefits

Infrastructure Architecture: The Complete Picture

The Coder platform runs on a sophisticated homelab infrastructure that demonstrates enterprise-grade architecture principles. Understanding the underlying infrastructure is critical to appreciating the platform’s capabilities and reliability.

Multi-Layer Architecture Overview

┌─────────────────────────────────────────────────────────────────┐
│ Kubernetes Control Plane (K3s)                                  │
│ - Coder Server (2 replicas)                                     │
│ - PostgreSQL Database                                           │
│ - Vault (Secrets Management)                                    │
│ - Forgejo (Git/CI/CD)                                          │
│ - cliProxy (OAuth → API Key Translation)                       │
└────────────────────┬────────────────────────────────────────────┘
                     │
                     ↓
┌─────────────────────────────────────────────────────────────────┐
│ Proxmox VE Cluster (Multiple Nodes)                            │
│ - Workspace VM provisioning                                     │
│ - Resource allocation (CPU, RAM, Disk)                         │
│ - Network management                                            │
│ - High availability and live migration                          │
└────────────────────┬────────────────────────────────────────────┘
                     │
                     ↓
┌─────────────────────────────────────────────────────────────────┐
│ TrueNAS Storage Cluster (Multiple Servers)                     │
│ - NFS home directories                                          │
│ - iSCSI block storage                                          │
│ - NVMe high-performance storage                                │
│ - ZFS datasets with quotas                                     │
└─────────────────────────────────────────────────────────────────┘

Multi-Layer Infrastructure Architecture - Complete infrastructure stack showing K3s → Proxmox → TrueNAS architecture

Kubernetes Layer: Coder Control Plane

Cluster Configuration:

Deployed Services:

# Coder Server
replicas: 2
resources:
  cpu: 2 cores per pod
  memory: 4GB per pod
storage: PostgreSQL on persistent volume

# PostgreSQL
replicas: 1 (with backup strategy)
storage: 100GB persistent volume
backup: Daily snapshots to TrueNAS

# Vault
replicas: 1
storage: Persistent KV store
purpose: Proxmox credentials, API keys, secrets

# Forgejo
replicas: 1
storage: Git repositories on persistent volume
runners: 2 Forgejo Actions runners for CI/CD

# cliProxy
replicas: 2 (load balanced)
purpose: OAuth → API key translation for AI services

High Availability Considerations:

Proxmox VE Cluster: Compute Layer

Cluster Configuration:

Node Specifications:

Node 1 (Primary):
  - CPU: 32 cores (AMD EPYC/Ryzen)
  - RAM: 128GB
  - Storage: Local NVMe for VM disks
  - Network: 2x 10GbE (bonded for redundancy)
  - Role: Primary workspace VM host

Node 2-3 (Workers):
  - CPU: 16-24 cores each
  - RAM: 64GB each
  - Storage: Local SSD + iSCSI from TrueNAS
  - Network: 10GbE
  - Role: Additional workspace capacity

Node 4 (Hybrid):
  - CPU: 16 cores
  - RAM: 64GB
  - Storage: NVMe + iSCSI
  - Network: 10GbE
  - Role: Overflow capacity + testing

Storage Backends per Node:

Workspace Distribution Strategy:

TrueNAS Storage Cluster: Persistence Layer

Four-Server Enterprise Storage Architecture:

The platform leverages four dedicated TrueNAS servers providing a combined 317+ TB of enterprise-grade ZFS storage with RAIDZ2 redundancy across all pools.

TrueNAS-01 (Primary NFS Server)

TrueNAS-02 (High-Capacity Storage)

TrueNAS-03 (VM Block Storage)

TrueNAS-04 (Expansion/Backup)

Aggregate Storage Capacity:

Storage Selection in Templates:

Workspace templates allow developers to choose storage backend:

data "coder_parameter" "storage_backend" {
  name = "storage_type"
  option {
    name = "iSCSI (Ample Capacity)"
    value = "iscsi"
    description = "16TB pool, good for most workloads"
  }
  option {
    name = "NVMe (High Performance)"  
    value = "nvme"
    description = "4TB pool, ultra-low latency"
  }
}

resource "proxmox_virtual_environment_vm" "workspace" {
  disk {
    datastore_id = data.coder_parameter.storage_backend.value == "nvme" ? "nvme-pool" : "iscsi-pool"
  }
}

NFS Home Directory Architecture:

All workspaces mount their home directory from TrueNAS Server 1 via NFS:

Workspace VM → 10GbE Network → TrueNAS NFS Server → ZFS Dataset
     ↓                                                      ↓
  /home/coder                              /mnt/tank/coder-home/username
  (NFS mount)                              (ZFS dataset with quota)
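
From inside a workspace this mapping is directly visible; a minimal illustration (the hostname, username, and mount options are placeholders for the real values cloud-init uses):

# Mount the per-user ZFS dataset exported over NFS (normally done by cloud-init, not by hand)
sudo mount -t nfs -o vers=4,hard,noatime \
    truenas01.storage.lan:/mnt/tank/coder-home/john.doe /home/coder

# The ZFS quota appears as the filesystem size, so checking usage is just:
df -h /home/coder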

Benefits:

Networking Architecture

Note: The VLAN IDs and subnet ranges shown below are examples for illustration purposes.

Network Segmentation:

┌──────────────────────────────────────────────────────────┐
│ Management Network (Example: VLAN 10)                    │
│ - Proxmox management interfaces                          │
│ - TrueNAS management                                     │
│ - Kubernetes API server                                  │
│ Example Subnet: 192.168.10.0/24                          │
└──────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────┐
│ Workspace Network (Example: VLAN 20)                     │
│ - Workspace VM primary interfaces                        │
│ - Internet access (NAT)                                  │
│ - Inter-workspace communication                          │
│ Example Subnet: 192.168.20.0/24                          │
└──────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────┐
│ Storage Network (Example: VLAN 30)                       │
│ - NFS traffic (TrueNAS → Workspace VMs)                  │
│ - iSCSI traffic (TrueNAS → Proxmox)                      │
│ - 10GbE dedicated bandwidth                              │
│ Example Subnet: 10.10.30.0/24 (high-performance routing) │
└──────────────────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────┐
│ Services Network (Example: VLAN 40)                      │
│ - Kubernetes service network                             │
│ - Coder agent communication                              │
│ - WebSocket connections                                  │
│ Example Subnet: 192.168.40.0/24                          │
└──────────────────────────────────────────────────────────┘

10GbE Backbone:

Routing and Connectivity:

Reliability and Redundancy

Component Redundancy:

| Component | Redundancy Strategy | Recovery Time |
|-----------|---------------------|---------------|
| Coder Server | 2 Kubernetes replicas | Instant (load balanced) |
| PostgreSQL | Daily backups + WAL archiving | <5 minutes |
| Proxmox Nodes | 4-node cluster with HA | <2 minutes (VM migration) |
| TrueNAS Storage | Multiple independent servers | Varies by storage tier |
| Network | Bonded 10GbE interfaces | Instant failover |
| Power | Dual PSU + UPS per server | Seconds |

Disaster Recovery Strategy:

  1. Kubernetes Cluster: etcd snapshots every 6 hours, stored on TrueNAS
  2. PostgreSQL Database: Daily full backups, point-in-time recovery enabled
  3. TrueNAS Datasets: ZFS replication to backup TrueNAS (hourly sync)
  4. Proxmox Configuration: Cluster config backed up weekly
  5. Workspace VMs: Ephemeral (can be recreated), data persisted on NFS

Backup Infrastructure:

Primary Infrastructure → Backup TrueNAS (off-site or isolated)
     ↓                           ↓
ZFS Send/Receive          Encrypted backups
Hourly replication        7-day retention
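
A minimal sketch of the hourly replication job behind this diagram; pool names, the backup host, and the snapshot prefix are placeholders, and the retention matches the 7-day policy above:

#!/bin/bash
# replicate-coder-home.sh - hourly ZFS send/receive to the backup TrueNAS (illustrative)
set -euo pipefail

SRC="tank/coder-home"
DEST="backup/coder-home"
BACKUP_HOST="truenas-backup.lan"
NOW="auto-$(date +%Y%m%d-%H%M)"

# The newest existing snapshot (if any) becomes the incremental base
PREV=$(zfs list -H -t snapshot -o name -s creation -d 1 "$SRC" | tail -n 1 | cut -d@ -f2)

zfs snapshot -r "${SRC}@${NOW}"

if [ -n "$PREV" ]; then
    zfs send -R -i "@${PREV}" "${SRC}@${NOW}" | ssh "$BACKUP_HOST" zfs receive -F "$DEST"
else
    zfs send -R "${SRC}@${NOW}" | ssh "$BACKUP_HOST" zfs receive -F "$DEST"
fi

# Keep 7 days of hourly snapshots locally (168), destroy anything older
zfs list -H -t snapshot -o name -d 1 "$SRC" | grep "@auto-" | head -n -168 | xargs -r -n 1 zfs destroy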

Monitoring and Observability

Infrastructure Monitoring:

Key Metrics Tracked:

Scaling Considerations

Current Capacity:

Expansion Strategy:

Why This Architecture?

Separation of Concerns:

Benefits of Multi-Layer Approach:

  1. Flexibility: Replace components independently
  2. Scalability: Scale compute and storage separately
  3. Reliability: Failure in one layer doesn’t cascade
  4. Performance: Optimize each layer for its workload
  5. Cost Efficiency: Use appropriate hardware for each role

This architecture demonstrates that sophisticated cloud-like infrastructure can be built on-premises with careful planning and the right open-source tools.

Authentication and Identity Flow

A critical aspect of the platform is how authentication and identity flow through the entire stack, from initial login to dataset provisioning. Everything is tied together through Authentik SSO - from Coder access to Vault secrets to workspace app links.

SSO with Authentik: The Central Identity Provider:

The platform uses Authentik as the central SSO (Single Sign-On) provider for ALL services:

                    Authentik SSO (Central Identity)
                             ↓
        ┌────────────────────┼────────────────────┐
        ↓                    ↓                    ↓
   Coder Login        Vault Access        Workspace Apps
   (Platform)         (Secrets)           (Tools/Services)
        ↓                    ↓                    ↓
  Workspace Create    API Keys Retrieval    One-Click Access
  Dataset Creation    Proxmox Creds         (links in Coder UI)

Complete Authentication Flow:

  1. User Accesses Coder
  2. SSO Authentication
  3. Coder Session Creation
  4. Workspace Provisioning
  5. Vault Integration (SSO-Protected)
  6. Dynamic Dataset Creation
  7. Workspace VM Configuration
  8. Workspace App Links

Identity Consistency Across All Services:

| Service | SSO via Authentik | Username Usage |
|---------|-------------------|----------------|
| Coder | ✅ Yes | Primary platform login, workspace owner |
| Vault | ✅ Yes (integrated) | Retrieve Proxmox creds, personal secrets |
| Forgejo | ✅ Yes | Git push/pull, CI/CD access |
| Grafana | ✅ Yes | View personal workspace metrics |
| cliProxy | ✅ Yes | OAuth → API key for AI services |
| TrueNAS | ❌ No (script-based) | Dataset creation via API |
| Proxmox | ❌ No (API-based) | VM provisioning via Vault creds |

Workspace App Links: Enhanced Developer Experience:

When you open a workspace in Coder, the UI displays clickable app links:

┌─────────────────────────────────────────────────────────┐
│ Workspace: crimson-mite-10                    [Running] │
├─────────────────────────────────────────────────────────┤
│ Apps:                                                   │
│  🖥️  VS Code Desktop        [Open in Browser]          │
│  🤖 Codex AI               [Open Terminal]             │
│  💬 Droid AI               [Chat Interface]            │
│  🔐 Vault (SSO)            [Open Secrets]              │
│  📊 Grafana (SSO)          [View Metrics]              │
│  🔧 Forgejo (SSO)          [Git Repos]                 │
│  📦 S3 Bucket              [Object Storage]            │
└─────────────────────────────────────────────────────────┘

How App Links Work:

# In Coder template - define app links
resource "coder_app" "vault" {
  agent_id     = coder_agent.main.id
  display_name = "Vault (Personal Secrets)"
  url          = "https://vault.example.com"
  icon         = "https://vault.io/favicon.ico"
  
  # Authentik SSO protects Vault access
  # User clicks link → Redirects to Vault → Authentik SSO → Vault UI
}

resource "coder_app" "grafana" {
  agent_id     = coder_agent.main.id
  display_name = "Workspace Metrics"
  url          = "https://grafana.example.com/d/workspace?var-user=${data.coder_workspace.me.owner}"
  icon         = "https://grafana.com/favicon.ico"
  
  # Shows metrics for this specific workspace
  # Pre-filtered by username via URL parameter
}

resource "coder_app" "s3_bucket" {
  agent_id     = coder_agent.main.id
  display_name = "S3 Bucket"
  url          = "https://s3-console.example.com/buckets/${data.coder_workspace.me.owner}-workspace"
  icon         = "https://min.io/favicon.ico"
  
  # Direct link to user's personal S3 bucket
}

Example User Journey:

# John logs into Coder via Authentik (Google SSO)
john.doe@company.com → Authentik → Coder UI

# Creates workspace - username flows through:
Workspace Owner: john.doe
Dataset: /mnt/tank/coder-home/john.doe
S3 Bucket: john.doe-workspace (auto-created)

# Opens workspace - sees app links:
[Open Vault] → Clicks → Authentik SSO → Vault UI
  → Can now access personal API keys, database passwords, etc.

[Open Grafana] → Clicks → Authentik SSO → Grafana
  → Dashboard pre-filtered to show john.doe's workspace metrics

[Open S3 Bucket] → Clicks → S3 Console
  → Direct access to john.doe-workspace bucket

[Open Forgejo] → Clicks → Authentik SSO → Git repos
  → Access to personal and team repositories

Benefits of Unified SSO + App Links:

Security:

Developer Experience:

Operational Excellence:

Vault SSO Integration:

Vault’s Authentik SSO integration enables:

Developer → Vault UI → Authentik SSO → Personal Namespace
                                              ↓
                          Personal Secrets (API keys, passwords)
                          Team Secrets (shared credentials)
                          Workspace Secrets (temporary tokens)
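
Under the hood this is standard Vault OIDC auth; a hedged example from inside a workspace, with the Vault address and secret paths shown as placeholders:

# Log in to Vault through Authentik (opens the SSO flow in a browser)
export VAULT_ADDR="https://vault.example.com"
vault login -method=oidc

# Read a personal secret - the KV path layout here is illustrative
vault kv get -field=api_key secret/personal/john.doe/openai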

Future Enhancements:

This creates a truly integrated platform where:

  1. ✅ Authenticate once via Authentik SSO
  2. ✅ Username flows through every layer automatically
  3. ✅ All services accessible via app links in Coder UI
  4. ✅ No separate logins for Vault, Grafana, Git, S3, etc.
  5. ✅ Complete audit trail tied to corporate identity
  6. ✅ Resources (datasets, buckets, secrets) automatically scoped to user

The combination of Authentik SSO, dynamic provisioning, and workspace app links creates an experience that rivals commercial cloud IDEs while maintaining complete control and security.

Architecture Overview

The platform consists of several integrated components working together to provide a seamless development experience.

Coder Platform Architecture

Core Components

Coder Server (Kubernetes)

Terraform Provisioner

Workspace VMs (Proxmox)

Storage Backend (TrueNAS)

System Context

The platform integrates with existing homelab services to provide a complete solution:

C4 System Context

External Integrations

Container Architecture

The Kubernetes deployment provides high availability and scalability:

C4 Container Diagram

Kubernetes Components

coder-server

postgres

terraform

vault

token-rotator (CronJob)

Workspace Creation Flow

When a developer creates a new workspace, several automated steps occur:

Workspace Creation Flow

Provisioning Steps

  1. User Request: Developer selects a template and provides parameters (CPU, RAM, storage)
  2. Coder Orchestration: Server validates request and initiates Terraform job
  3. Credential Retrieval: Terraform fetches Proxmox credentials from Vault
  4. VM Creation: Proxmox provisions virtual machine with specified resources
  5. Storage Setup: TrueNAS creates ZFS dataset with quota and NFS export
  6. VM Configuration: Cloud-init configures VM and mounts NFS home directory
  7. Agent Connection: Coder agent starts and connects via WebSocket
  8. Ready State: Workspace becomes available for SSH/web IDE access
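
From the developer's side, the whole flow above collapses to a single command (or a few clicks in the UI); a hedged sketch, since the template name and exact flag spelling depend on your Coder version:

# Create a workspace from the Proxmox template with custom resources
coder create my-workspace \
    --template proxmox-vm \
    --parameter cpu_cores=8 \
    --parameter memory_gb=16 \
    --parameter storage_quota_gb=100

# Once the agent reports ready, connect over SSH or open the web IDE
coder ssh my-workspace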

Template Auto-Deployment

Templates are version-controlled and automatically deployed via Forgejo Actions:

Template Deployment Flow

CI/CD Workflow

  1. Developer Push: Commit template changes to Git repository
  2. Webhook Trigger: Forgejo detects changes in template directories
  3. Validation: Terraform validates syntax and configuration
  4. Secret Injection: Forgejo secrets provide API credentials
  5. Template Push: Coder CLI deploys new template version
  6. Notification: Developer receives deployment confirmation

This ensures templates stay synchronized with Git and provides an audit trail for all changes.
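
The pipeline body itself is only a few commands; a simplified sketch of what the Forgejo Actions job runs (template name, URL, and secret name are illustrative):

# Validate the template before it can reach Coder
terraform init -backend=false
terraform validate

# Authenticate with the injected secret and push the new template version
export CODER_URL="https://coder.example.com"
export CODER_SESSION_TOKEN="${CODER_DEPLOY_TOKEN}"   # provided by Forgejo secrets
coder templates push proxmox-vm --directory . --yes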

Token Rotation

Security is maintained through automated credential rotation:

Token Rotation Flow

Rotation Process

This ensures CI/CD pipelines never use expired credentials while maintaining security best practices.
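
A hedged sketch of what the rotation job does - mint a fresh 7-day Coder token and write it back into the Forgejo Actions secret the pipeline consumes (repository, secret names, and the Gitea-style API path are placeholders and may differ between Forgejo versions):

#!/bin/bash
# rotate-coder-token.sh - run every 6 days so pipelines never hold an expired token
set -euo pipefail

# 1. Create a new 7-day (168h) Coder API token
NEW_TOKEN=$(coder tokens create --lifetime 168h)

# 2. Update the repository-level Actions secret in Forgejo
curl -sf -X PUT \
    -H "Authorization: token ${FORGEJO_ADMIN_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{\"data\": \"${NEW_TOKEN}\"}" \
    "https://forgejo.example.com/api/v1/repos/platform/coder-templates/actions/secrets/CODER_DEPLOY_TOKEN"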

AI Integration: Next-Level Development Experience

One of the most powerful aspects of the Coder platform is its seamless integration with AI-powered development tools. By providing consistent, remote development environments, Coder creates the perfect foundation for integrating advanced AI assistants that enhance developer productivity.

Aider and Claude in Every Workspace

Each workspace comes pre-configured with both Aider and Claude AI integration. This means developers can leverage AI-powered coding assistance directly within their development environment, regardless of their local machine setup.
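
A minimal sketch of the kind of bootstrap the template's startup script performs; the package name is the upstream PyPI name, while the secret path is illustrative:

# Install Aider into the workspace image or at first boot
pipx install aider-chat

# Credentials are provisioned per workspace (via Vault or cliProxy) rather than stored locally
export ANTHROPIC_API_KEY="$(vault kv get -field=anthropic_api_key secret/coder/$USER)"

# Start an AI pair-programming session in the current repository
aider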

What makes this powerful:

Coder Tasks: Clean AI Workflows

Coder provides a feature called Tasks that takes AI integration to the next level. Tasks allow developers to define custom commands and workflows that can be triggered directly from the Coder UI or CLI.

Benefits for AI Workflows:

Example Task Definition:

tasks:
  - name: "AI Code Review"
    command: "aider --review --no-auto-commit"
    description: "Run AI-powered code review on current changes"
  
  - name: "Generate Unit Tests"
    command: "aider --message Generate comprehensive unit tests for the current module"
    description: "Use AI to generate test coverage"

This transforms AI coding assistance from a manual, ad-hoc process into a structured, repeatable workflow that integrates naturally with the development process.

The Power of Remote AI Integration

Running AI tools on remote workspaces instead of local machines provides significant advantages:

  1. Compute Flexibility: Scale workspace resources based on AI workload requirements
  2. Network Optimization: Direct connectivity between workspaces and AI API endpoints
  3. Credential Management: Centralized API key management through Vault integration
  4. Cost Control: Track AI API usage per workspace/team
  5. Consistent Performance: Developers aren't limited by local machine capabilities

Implementation Considerations

The AI integration required careful planning and architecture:

This level of integration required significant thought and engineering effort, but the result is a platform where AI assistance is a first-class feature, not an afterthought.

User Interface: Cloud-Like Experience

The Coder web UI provides an intuitive, cloud-service-like experience for managing workspaces:

Coder Workspace Dashboard - Workspace dashboard showing a running workspace with integrated tools: VS Code Desktop, Codex, Droid AI, VS Code, and Terminal, all accessible with one click

Key UI Features:

AI Integration in Action

AI-powered development tools are seamlessly integrated into every workspace:

Codex AI Integration - OpenAI Codex running directly in the workspace terminal - ready to assist with code generation, reviews, and implementation tasks

The integration provides:

Parameter Selection: True Self-Service

Template parameters are presented as intuitive sliders and dropdowns, making resource selection feel like using a commercial cloud service:

Template Parameters UI - Dynamic sliders for CPU cores, memory, disk size, and NFS storage quota - adjust resources instantly without infrastructure tickets

Parameter UI Features:

The slider-based interface transforms infrastructure provisioning from a complex request process into an instant, self-service experience - no need to file tickets, wait for approvals, or understand infrastructure details.

MCP Server Architecture: Giving AI Specialized Tools

MCP Architecture - Central MCP Proxy architecture with per-user authentication and HTTP-streamable MCP servers

A revolutionary aspect of the platform is the integration of MCP (Model Context Protocol) servers - a standardized way to give AI models access to external tools, data sources, and services. This transforms AI from a simple chat interface into an intelligent agent that can interact with your entire development infrastructure.

What is MCP?

Model Context Protocol (MCP) is an open standard that allows AI models to:

Think of MCP as giving AI eyes and hands - instead of just generating text, AI can read your documentation, query your databases, interact with your tools, and take actions on your behalf.

MCP Servers in Coder Workspaces

Each workspace can have multiple MCP servers running, each providing different capabilities:

┌─────────────────────────────────────────────────────────┐
│ AI Model (Claude, GPT-4, etc.)                          │
│ Running in workspace via Aider/Codex                    │
└────────────────┬────────────────────────────────────────┘
                 │ MCP Protocol
                 ↓
┌─────────────────────────────────────────────────────────┐
│ Central MCP Proxy (Per-User Authentication)            │
│ - Routes requests to appropriate MCP servers            │
│ - Validates user OAuth token                            │
│ - Injects user credentials per MCP server               │
└────────────────┬────────────────────────────────────────┘
                 │
     ┌───────────┼───────────┬───────────┬────────────┐
     ↓           ↓           ↓           ↓            ↓
┌─────────┐ ┌────────┐ ┌──────────┐ ┌────────┐ ┌─────────┐
│ Outline │ │AppFlowy│ │  Splunk  │ │  Git   │ │   S3    │
│   MCP   │ │  MCP   │ │   MCP    │ │  MCP   │ │   MCP   │
└─────────┘ └────────┘ └──────────┘ └────────┘ └─────────┘
     ↓           ↓           ↓           ↓            ↓
[User Auth] [User Auth] [User Auth] [User Auth] [User Auth]

Why HTTP-Streamable MCP Servers?

The platform uses HTTP-streamable MCP servers (not stdio/local MCP servers) for a critical reason: per-user authentication.

The Problem with Traditional MCP:

❌ Traditional Approach (stdio/local):
- MCP server runs locally with pre-configured credentials
- All users share the same MCP server instance
- Everyone has access to the same data/tools
- Security nightmare: one compromised workspace = everyone's data exposed

The Solution: HTTP-Streamable + Central Proxy:

✅ HTTP-Streamable Approach:
- Each MCP server is a network service (HTTP/SSE)
- Central MCP proxy authenticates each request
- User's OAuth token passed to MCP servers
- Each user accesses only THEIR data via THEIR credentials
- Zero shared access, complete isolation

Central MCP Proxy Architecture

The Central MCP Proxy is the key innovation that makes multi-user MCP deployment secure:

Architecture:

Workspace (User: john.doe)
    ↓
AI Model makes request: "Search my Outline documents for Coder notes"
    ↓
Aider sends MCP request with workspace token
    ↓
┌─────────────────────────────────────────────────────────┐
│ Central MCP Proxy                                       │
│                                                         │
│ 1. Validate workspace token                            │
│    → Extract username: john.doe                         │
│                                                         │
│ 2. Route to Outline MCP server                         │
│    → HTTP POST to outline-mcp.internal                 │
│                                                         │
│ 3. Inject user credentials                             │
│    → Add header: X-User-Token: john.doe-outline-token  │
│    → Outline MCP uses john.doe's API key from Vault    │
│                                                         │
│ 4. Return results to workspace                         │
│    → Stream response back to AI model                  │
└─────────────────────────────────────────────────────────┘
    ↓
Outline MCP Server
    → Authenticates to Outline API using john.doe's token
    → Returns only documents john.doe has access to
    → AI sees only john.doe's Outline workspace

Security Benefits:

| Aspect | Traditional MCP | HTTP-Streamable + Proxy |
|--------|-----------------|--------------------------|
| Authentication | Shared credentials | Per-user OAuth tokens |
| Access Control | Everyone sees everything | User sees only their data |
| Audit Trail | No user attribution | Complete per-user logging |
| Revocation | Restart MCP server | Disable user's OAuth token |
| Isolation | None | Complete workspace isolation |

MCP Server Examples

1. Outline MCP Server

// outline-mcp-server (HTTP-streamable)
// Provides AI access to Outline documentation

Tools provided to AI:
- search_documents(query: string): Search user's Outline docs
- get_document(id: string): Retrieve specific document
- create_document(title: string, content: string): Create new doc
- list_collections(): List user's collections

Example AI interaction:
User: "Search my Outline docs for Coder architecture notes"
AI: [Calls search_documents("Coder architecture")]
MCP Proxy: [Authenticates as john.doe, queries Outline API]
Result: Returns john.doe's Outline documents about Coder
AI: "I found 3 documents about Coder architecture..."

Per-User Authentication:

# MCP request includes workspace token
POST /mcp/outline/search
Headers:
  X-Workspace-Token: john.doe:workspace-abc123

# MCP Proxy validates token and looks up Outline API key
john.doe → Vault → outline_api_key_john_doe

# Outline MCP uses john.doe's API key
GET https://outline.example.com/api/documents
Authorization: Bearer john_doe_api_key

# Returns only john.doe's accessible documents

2. AppFlowy MCP Server

// appflowy-mcp-server (HTTP-streamable)
// Provides AI access to AppFlowy workspaces (Notion-like)

Tools provided to AI:
- get_workspace(): Get user's AppFlowy workspace
- search_pages(query: string): Search pages and databases
- create_page(title: string): Create new page
- update_database(id: string, data: object): Update database rows
- get_kanban_board(id: string): Get project board

Example AI interaction:
User: "Show me tasks from my sprint board in AppFlowy"
AI: [Calls get_kanban_board("sprint-board")]
MCP Proxy: [Authenticates as john.doe, queries AppFlowy]
Result: Returns john.doe's AppFlowy sprint board data
AI: "You have 5 tasks in progress: 1. Implement auth..."

3. Splunk MCP Server

// splunk-mcp-server (HTTP-streamable)
// Provides AI access to Splunk data and searches

Tools provided to AI:
- search(query: string, timerange: string): Run SPL search
- get_saved_searches(): List user's saved searches
- get_dashboards(): List accessible dashboards
- create_alert(query: string, conditions: object): Create alert

Example AI interaction:
User: "Show me error rate for my app in the last hour"
AI: [Calls search("index=main app=myapp error | stats count", "1h")]
MCP Proxy: [Authenticates as john.doe with Splunk token]
Result: Returns Splunk search results john.doe can access
AI: "Your app had 47 errors in the last hour, mostly 500s..."

4. Git MCP Server (Forgejo)

// git-mcp-server (HTTP-streamable)
// Provides AI access to Git repositories

Tools provided to AI:
- list_repos(): List user's repositories
- search_code(query: string): Search code across repos
- get_file(repo: string, path: string): Get file contents
- create_pr(repo: string, title: string, branch: string): Create PR
- list_issues(repo: string): List issues

Example AI interaction:
User: "Find all TODO comments in my coder-templates repo"
AI: [Calls search_code("TODO", repo="coder-templates")]
MCP Proxy: [Authenticates as john.doe to Forgejo]
Result: Returns TODO comments from john.doe's repo
AI: "Found 12 TODO comments across 5 files..."

5. S3 MCP Server

// s3-mcp-server (HTTP-streamable)
// Provides AI access to S3 object storage

Tools provided to AI:
- list_buckets(): List user's buckets
- list_objects(bucket: string): List objects in bucket
- upload_file(bucket: string, path: string, content: string): Upload
- download_file(bucket: string, path: string): Download
- create_presigned_url(bucket: string, path: string): Get shareable URL

Example AI interaction:
User: "Upload this diagram to my workspace S3 bucket"
AI: [Calls upload_file("john-doe-workspace", "diagrams/arch.png", data)]
MCP Proxy: [Authenticates as john.doe, gets S3 credentials]
Result: File uploaded to john.doe's personal bucket
AI: "Diagram uploaded successfully to your workspace bucket"

Automated MCP Server Deployment

MCP servers are deployed and managed automatically via scripts:

Deployment Script:

#!/bin/bash
# deploy-mcp-servers.sh
# Automatically deploy MCP servers to Kubernetes

MCP_SERVERS=(
    "outline-mcp"
    "appflowy-mcp"
    "splunk-mcp"
    "git-mcp"
    "s3-mcp"
)

for server in "${MCP_SERVERS[@]}"; do
    echo "Deploying $server..."
    
    # Build container image
    docker build -t mcp-registry.local/$server:latest ./mcp-servers/$server
    
    # Push to internal registry
    docker push mcp-registry.local/$server:latest
    
    # Deploy to Kubernetes
    kubectl apply -f ./k8s-manifests/$server-deployment.yaml
    
    # Update MCP Proxy routing configuration
    kubectl exec -n coder mcp-proxy-0 -- \
        mcp-proxy-ctl add-route $server http://$server.mcp-servers.svc.cluster.local:8080
done

echo "All MCP servers deployed!"

Kubernetes Deployment Example:

# outline-mcp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: outline-mcp
  namespace: mcp-servers
spec:
  replicas: 2  # HA for reliability
  selector:
    matchLabels:
      app: outline-mcp
  template:
    metadata:
      labels:
        app: outline-mcp
    spec:
      containers:
      - name: outline-mcp
        image: mcp-registry.local/outline-mcp:latest
        ports:
        - containerPort: 8080
          name: http
        env:
        - name: VAULT_ADDR
          value: "http://vault.vault.svc.cluster.local:8200"
        - name: MCP_MODE
          value: "http-streamable"  # Not stdio!
---
apiVersion: v1
kind: Service
metadata:
  name: outline-mcp
  namespace: mcp-servers
spec:
  selector:
    app: outline-mcp
  ports:
  - port: 8080
    targetPort: 8080
    name: http
  type: ClusterIP

Central MCP Proxy Configuration

MCP Proxy Routing Table:

# mcp-proxy-config.yaml
routes:
  - name: outline
    upstream: http://outline-mcp.mcp-servers.svc.cluster.local:8080
    auth:
      type: oauth-vault
      vault_path: secret/mcp/outline/{username}
    
  - name: appflowy
    upstream: http://appflowy-mcp.mcp-servers.svc.cluster.local:8080
    auth:
      type: oauth-vault
      vault_path: secret/mcp/appflowy/{username}
    
  - name: splunk
    upstream: http://splunk-mcp.mcp-servers.svc.cluster.local:8080
    auth:
      type: oauth-vault
      vault_path: secret/mcp/splunk/{username}
    
  - name: git
    upstream: http://git-mcp.mcp-servers.svc.cluster.local:8080
    auth:
      type: oauth-vault
      vault_path: secret/mcp/git/{username}

# Proxy validates workspace token via Coder API
coder_api_url: https://coder.example.com/api/v2

MCP Proxy Request Flow:

# Simplified MCP Proxy logic
async def handle_mcp_request(request):
    # 1. Extract workspace token from request
    workspace_token = request.headers.get("X-Workspace-Token")
    
    # 2. Validate token with Coder API
    username = await coder_api.validate_token(workspace_token)
    if not username:
        return {"error": "Invalid workspace token"}
    
    # 3. Extract MCP server name from request path
    mcp_server = request.path.split("/")[2]  # /mcp/outline/search
    
    # 4. Get user's credentials for this MCP server from Vault
    vault_path = f"secret/mcp/{mcp_server}/{username}"
    user_creds = await vault.read(vault_path)
    
    # 5. Forward request to MCP server with user credentials
    upstream_url = mcp_routes[mcp_server]["upstream"]
    response = await http.post(
        url=upstream_url + request.path,
        headers={
            "X-User-Credentials": user_creds,
            "X-Username": username,
        },
        json=request.json
    )
    
    # 6. Stream response back to workspace
    return response

MCP Server Configuration in Workspace

Aider Configuration:

# ~/.aider/mcp.yaml in workspace
# AI tools automatically discover and use these MCP servers

mcp_servers:
  outline:
    url: https://mcp-proxy.example.com/mcp/outline
    auth: workspace_token  # Automatically injected by Coder agent
    
  appflowy:
    url: https://mcp-proxy.example.com/mcp/appflowy
    auth: workspace_token
    
  splunk:
    url: https://mcp-proxy.example.com/mcp/splunk
    auth: workspace_token
    
  git:
    url: https://mcp-proxy.example.com/mcp/git
    auth: workspace_token
    
  s3:
    url: https://mcp-proxy.example.com/mcp/s3
    auth: workspace_token

Automatically configured by Terraform template:

resource "coder_agent" "main" {
  # ... other config ...
  
  startup_script = <<-EOT
    # Configure MCP servers for AI tools
    mkdir -p ~/.aider
    cat > ~/.aider/mcp.yaml <<EOF
    mcp_servers:
      outline:
        url: ${var.mcp_proxy_url}/mcp/outline
        auth: $CODER_AGENT_TOKEN
      appflowy:
        url: ${var.mcp_proxy_url}/mcp/appflowy
        auth: $CODER_AGENT_TOKEN
      splunk:
        url: ${var.mcp_proxy_url}/mcp/splunk
        auth: $CODER_AGENT_TOKEN
      git:
        url: ${var.mcp_proxy_url}/mcp/git
        auth: $CODER_AGENT_TOKEN
      s3:
        url: ${var.mcp_proxy_url}/mcp/s3
        auth: $CODER_AGENT_TOKEN
    EOF
    
    # AI tools now have access to all MCP servers with user's credentials
  EOT
}

Benefits of This Architecture

Security:

Developer Experience:

Operational Excellence:

Scalability:

Real-World Use Case: AI-Powered Development Workflow

Scenario: Developer working on a feature, using AI with full MCP integration

Developer: "Help me implement user authentication"

AI (via MCP):
1. [Searches Outline MCP] → Finds internal auth documentation
   "Based on your team's docs, you use Authentik OIDC..."

2. [Searches Git MCP] → Finds similar auth implementations
   "I found auth.go in the user-service repo that you can reference..."

3. [Searches Splunk MCP] → Checks for auth-related errors
   "Recent logs show 23 auth failures with error 'invalid_grant'..."

4. [Searches AppFlowy MCP] → Checks sprint board
   "This task is assigned to you with a deadline of Friday..."

Developer: "Create a PR for my auth implementation"

AI (via MCP):
5. [Creates PR via Git MCP] → Opens PR in Forgejo
   "PR #47 created: 'Implement Authentik OIDC authentication'"

6. [Updates AppFlowy MCP] → Moves task to "In Review"
   "Moved sprint board task to 'In Review' column"

7. [Uploads to S3 MCP] → Stores architecture diagram
   "Uploaded auth-flow-diagram.png to your workspace bucket"

This is AI-augmented development at its finest - the AI has access to your documentation, code, logs, tasks, and tools, all authenticated as YOU, with YOUR permissions.

Future MCP Server Integrations

Planned MCP Servers:

The MCP server architecture transforms AI from a code generator into an intelligent development assistant that can interact with your entire development infrastructure, all while maintaining strict per-user authentication and security.

More to Come: Expanding the AI Ecosystem

The current AI integration is just the beginning. Several exciting enhancements are in active development:

LiteLLM Integration

LiteLLM provides a unified interface for multiple LLM providers (OpenAI, Anthropic, Azure, etc.). Integration with Coder will enable:

Langfuse for Observability

Langfuse brings observability to LLM interactions. Once integrated with Coder workspaces:

This addresses a critical gap in the current setup - right now, AI usage happens in a black box. Langfuse will provide full visibility into AI operations.

Refactored cliProxy with Enhanced Authentication

The cliProxy service is being enhanced to provide better multi-tenant support for AI tools:

Since workspaces run on remote hosts, proper authentication and routing are critical for security and reliability. The refactored proxy will provide production-grade infrastructure for AI services.

cliProxy: OAuth Authentication for AI Services

cliProxy Configuration - cliProxy script baked into workspace - enables OAuth authentication instead of API keys for AI services

One of the most innovative components of the AI integration is the cliProxy service - a custom-built proxy that enables OAuth-based authentication for AI services that typically require API keys.

The Innovation: OAuth Instead of API Keys

Many LLM providers (like OpenAI) require API keys for authentication. API keys present challenges:

The cliProxy solves this by translating OAuth authentication (which Coder already uses) into API key authentication for LLM providers.
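
Conceptually, the tools keep speaking the provider's own API - they are simply pointed at the proxy, and the workspace's session token stands in where an API key would normally go. A heavily simplified sketch (URLs are illustrative; the full flow is covered in the future post mentioned below):

# Route OpenAI-compatible traffic through cliProxy instead of the provider directly
export OPENAI_BASE_URL="https://cliproxy.example.com/v1"

# cliProxy validates the workspace token and swaps in a real provider key server-side
export OPENAI_API_KEY="${CODER_AGENT_TOKEN}"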

Key Benefits:

Security & Identity

The Bigger Picture:

When combined with LiteLLM and Langfuse, cliProxy creates a complete observability and routing layer that makes enterprise AI adoption practical and secure.

To be discussed in detail in future blog post - the OAuth authentication flow, token validation mechanics, and integration with enterprise identity providers deserves its own deep-dive.

ChatMock for Development and Testing

ChatMock enables local testing of AI integrations without consuming API credits:

This will be particularly valuable for template development and CI/CD workflows that involve AI tooling.

MCP Server Integration

The Model Context Protocol (MCP) provides a standardized way for AI models to interact with tools and data sources. Planned MCP integration includes:

Example Use Cases:

Aider Toolkit (APTK) Integration

Aider Toolkit extends Aider with additional capabilities:

OpenAI Proxy and API Management

A dedicated OpenAI proxy layer (and similar proxies for other providers) will provide:

Gemini Integration

Google Gemini integration is planned to expand the multi-model AI support:

Combined with existing Aider, Claude, and Codex integration, developers will have access to multiple AI providers within a single workspace, enabling them to choose the best model for each task.

S3 Bucket Per Workspace

Per-workspace S3 bucket provisioning is in development to provide object storage for each workspace:

Implementation Details:

# Automatic bucket creation during provisioning
s3-bucket-manager.sh create <username>-workspace

# Workspace gets environment variables:
export AWS_ACCESS_KEY_ID=<workspace-specific-key>
export AWS_SECRET_ACCESS_KEY=<workspace-secret>
export AWS_ENDPOINT_URL=https://s3.example.com
export S3_BUCKET_NAME=<username>-workspace

Developers can immediately start using S3 APIs without manual configuration, and access a web UI to manage buckets and objects.
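
For example, with the environment variables above already exported, the standard AWS CLI works against the workspace bucket with no extra setup:

# List the contents of the workspace bucket
aws --endpoint-url "$AWS_ENDPOINT_URL" s3 ls "s3://$S3_BUCKET_NAME/"

# Push a build artifact to object storage
aws --endpoint-url "$AWS_ENDPOINT_URL" s3 cp ./dist/app.tar.gz "s3://$S3_BUCKET_NAME/artifacts/"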

Enhanced Workspace Metadata

The Coder agent provides rich metadata about each workspace that will be exposed to developers:

Use Cases:

This metadata will be accessible via CLI, web UI, and environment variables for programmatic access.

Desktop Workspace Support

With persistent home directories working reliably, the next major expansion is desktop workspace templates:

macOS Workspaces (UTM/QEMU)

Windows Workspaces (Proxmox)

Linux Desktop Workspaces (Proxmox)

Benefits of Desktop Workspaces:

The desktop workspace expansion will complete the suite, providing:

This makes Coder a truly universal development platform where any type of development environment can be provisioned on-demand.

The Bigger Picture

These integrations represent a comprehensive strategy to make AI-assisted development a core platform capability:

  1. Unified Experience: AI tools work consistently across all workspaces
  2. Observability: Full visibility into AI usage, costs, and performance
  3. Security: Centralized credential management and access control
  4. Flexibility: Support multiple AI providers and models
  5. Scalability: Infrastructure designed for team-wide AI adoption

The amount of planning, architecture, and engineering required to get this right is substantial. It's not just about installing tools - it's about creating a cohesive platform where AI assistance integrates naturally with the development workflow while maintaining security, observability, and operational excellence.

These enhancements will transform Coder from a platform that provisions workspaces into a platform that provides AI-augmented development environments as a service.

Infrastructure as Code

All components are defined declaratively:

Terraform Templates

Templates define workspace resources:

resource "proxmox_virtual_environment_vm" "workspace" {
  count = data.coder_workspace.me.start_count
  
  node_name = var.proxmox_node
  vm_id     = var.vm_id
  
  cpu {
    cores = data.coder_parameter.cpu_cores.value
  }
  
  memory {
    dedicated = data.coder_parameter.memory_gb.value * 1024
  }
  
  initialization {
    user_data_file_id = proxmox_virtual_environment_file.cloud_init.id
  }
}

Cloud-Init Configuration

VMs are configured automatically on first boot:

#cloud-config
users:
  - name: coder
    groups: [sudo, docker]
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL

mounts:
  - [nfs-server:/path/to/home, /home/coder, nfs, defaults, 0, 0]

runcmd:
  - systemctl enable coder-agent.service
  - systemctl start coder-agent.service

Dynamic Template Parameters: Cloud-Like Self-Service

One of the most powerful features of the platform is the dynamic template parameter system. This provides a true cloud-service experience where developers can customize their workspace resources through intuitive sliders and dropdowns in the Coder UI.

Interactive Resource Selection

When creating a workspace, developers are presented with interactive controls to select:

CPU Cores (Slider: 2-16 cores)

Memory (Slider: 4GB-64GB)

Storage Quota (Slider: 20GB-500GB)

Storage Backend (Dropdown)

This slider-based interface transforms infrastructure provisioning from a ticketing process into an instant self-service experience.

Storage Template Options

The platform provides two distinct storage backends optimized for different use cases:

iSCSI Storage Template

Use Case: General development, ample storage capacity

NVMe Storage Template

Use Case: High-performance I/O workloads

The ability to choose storage backend per workspace allows resource optimization - developers can use cost-effective iSCSI storage for most work, reserving NVMe storage for performance-critical tasks.

NFS Provisioning Integration

Behind the scenes, the template system integrates with custom provisioning scripts that handle the complete storage lifecycle:

Automated Dataset Creation

# Invoked by Terraform during workspace provisioning
truenas-dataset-manager.sh create <username> <quota_gb>

This script:

  1. Creates ZFS dataset: pool/coder-home/<username>
  2. Sets ZFS quota based on slider value
  3. Configures NFS export with appropriate permissions
  4. Returns NFS mount path to Terraform
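
A simplified sketch of the create path; the real script drives the TrueNAS API, but the effect is equivalent to these ZFS operations (pool name and export policy are placeholders):

#!/bin/bash
# truenas-dataset-manager.sh create <username> <quota_gb> - simplified sketch
set -euo pipefail
USERNAME="$1"
QUOTA_GB="$2"
DATASET="tank/coder-home/${USERNAME}"

# 1-2. Create the dataset and apply the quota chosen via the slider
zfs create -p "$DATASET"
zfs set quota="${QUOTA_GB}G" "$DATASET"

# 3. Export it over NFS (in production the export is restricted to the workspace VLAN)
zfs set sharenfs=on "$DATASET"

# 4. Hand the mount path back to Terraform
echo "/mnt/${DATASET}"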

Dynamic Quota Management

Quota Updates

# Update existing workspace quota
truenas-dataset-manager.sh update-quota <username> <new_quota_gb>

Developers can request quota increases by recreating their workspace with a higher slider value. The provisioning system handles the update automatically.

Cleanup on Deletion

# Invoked when workspace is permanently deleted
truenas-dataset-manager.sh delete <username>

Ensures proper cleanup of datasets and NFS exports when workspaces are decommissioned.
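
Behind that call, cleanup amounts to dropping the export and destroying the dataset; again shown with plain ZFS commands, while the real script adds safety checks and API calls:

# truenas-dataset-manager.sh delete <username> - simplified sketch
DATASET="tank/coder-home/${1}"
zfs set sharenfs=off "$DATASET"   # remove the NFS export first
zfs destroy -r "$DATASET"         # then destroy the dataset and its snapshots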

Template Parameter Implementation

Coder parameters are defined in Terraform and rendered as UI controls:

data "coder_parameter" "storage_type" {
  name         = "storage_backend"
  display_name = "Storage Backend"
  description  = "Choose storage type based on workload"
  type         = "string"
  default      = "iscsi"
  
  option {
    name  = "iSCSI (Ample Storage)"
    value = "iscsi"
  }
  
  option {
    name  = "NVMe (High Performance)"
    value = "nvme"
  }
}

data "coder_parameter" "storage_quota" {
  name         = "storage_quota_gb"
  display_name = "Storage Quota"
  description  = "Home directory size limit"
  type         = "number"
  default      = 50
  
  validation {
    min = 20
    max = 500
  }
}

data "coder_parameter" "cpu_cores" {
  name         = "cpu_cores"
  display_name = "CPU Cores"
  description  = "Number of virtual CPU cores"
  type         = "number"
  default      = 4
  
  validation {
    min = 2
    max = 16
  }
}

data "coder_parameter" "memory_gb" {
  name         = "memory_gb"
  display_name = "Memory (GB)"
  description  = "RAM allocation"
  type         = "number"
  default      = 8
  
  validation {
    min = 4
    max = 64
  }
}

These parameters are then used throughout the Terraform template:

# Use parameter values in resource definitions
resource "null_resource" "nfs_dataset" {
  triggers = {
    username = data.coder_workspace.me.owner
    quota    = data.coder_parameter.storage_quota.value
  }
  
  provisioner "local-exec" {
    command = "/path/to/truenas-dataset-manager.sh create ${self.triggers.username} ${self.triggers.quota}"
  }
}

resource "proxmox_virtual_environment_vm" "workspace" {
  cpu {
    cores = data.coder_parameter.cpu_cores.value
  }
  
  memory {
    dedicated = data.coder_parameter.memory_gb.value * 1024
  }
  
  # Conditionally use NVMe or iSCSI storage
  disk {
    datastore_id = data.coder_parameter.storage_type.value == "nvme" ? "nvme-pool" : "iscsi-pool"
    size         = 40
  }
}

Benefits of Dynamic Parameters

Developer Autonomy

Cost Optimization

Operational Excellence

User Experience

Continuous Improvement

The dynamic parameter system is actively being enhanced:

This approach makes the platform feel like a commercial cloud service while running entirely on homelab infrastructure. The combination of intuitive UI controls, robust backend automation, and flexible storage options provides developers with a truly self-service experience.

Operational Insights

Performance

Resource Management

Reliability

Lessons Learned

What Worked Well

NFS Home Directories: Provides true persistence and allows workspace recreation without data loss. Developers can destroy/recreate workspaces confidently.

Template Automation: CI/CD for templates eliminates manual deployment and ensures templates stay in sync with Git.

Token Rotation: Automated rotation removes operational burden while maintaining security posture.

Cloud-Init: Declarative VM configuration via cloud-init eliminates shell script complexity.

AI Integration Planning: Taking time to architect proper AI tool integration pays dividends. Rushed implementations would have resulted in security issues and poor developer experience.

Challenges Overcome

NFS Mount Timing: Initial implementation had race conditions where Coder agent started before NFS mounted. Solution: systemd mount dependencies and verification scripts.
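
The verification piece is deliberately simple: the agent's startup path refuses to continue until the NFS home is actually mounted (the systemd side is a mount dependency such as RequiresMountsFor=/home/coder). A sketch of the check:

# wait-for-home.sh - runs before the Coder agent starts
# Block until the NFS home directory is mounted
until mountpoint -q /home/coder; do
    echo "Waiting for /home/coder NFS mount..."
    sleep 2
done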

Token Lifetime: Initially used 24-hour tokens requiring daily rotation. Extended to 7 days with 6-day rotation provides better balance.

Template Validation: Early deployments pushed broken templates. Added terraform validate step in CI prevents deployment of invalid configurations.

Storage Quotas: Manual quota management was error-prone. Automated via Terraform ensures consistency.

AI Tool Remote Execution: Running AI tools on remote workspaces required solving authentication, network routing, and credential management challenges. Proxy-based architecture with Vault integration solved these issues.

Future Enhancements

Planned Improvements

Enterprise Environment Templates

A particularly powerful use case is provisioning complete enterprise environments on-demand. As a Solutions Engineer, being able to spin up full enterprise stacks dramatically improves demonstration, testing, and development capabilities.

Planned Template Types:

Splunk Enterprise Cluster

Multi-Tier Application Stacks

Security Lab Environments

Benefits for Solutions Engineering:

Implementation Approach:

The platform will support “grouped workspaces” where a single template provisions multiple interconnected VMs:

resource "coder_workspace" "splunk_cluster" {
  name = "splunk-demo"
  
  # Search Head Cluster
  vm_group "search_heads" {
    count = 3
    template = "splunk-search-head"
    cpu = 4
    memory = 16
  }
  
  # Indexer Cluster
  vm_group "indexers" {
    count = 2
    template = "splunk-indexer"
    cpu = 8
    memory = 32
    storage_type = "nvme"  # High I/O for indexing
  }
  
  # Management
  vm_group "management" {
    count = 1
    template = "splunk-deployment-server"
    cpu = 2
    memory = 8
  }
  
  # Networking
  network = "splunk-demo-network"
  dns_domain = "splunkdemo.local"
}

All VMs would be provisioned simultaneously, configured to communicate with each other, and accessible via the Coder UI as a single logical “workspace” with multiple components.

This transforms Coder from a personal development environment platform into a comprehensive environment provisioning system capable of supporting complex, multi-system use cases.

Conclusion

Building a cloud development platform with Coder demonstrates how modern infrastructure tools can create powerful, self-service development environments. The combination of Kubernetes orchestration, declarative infrastructure provisioning, and automated lifecycle management provides developers with on-demand workspaces while maintaining operational simplicity.

The platform showcases infrastructure-as-code principles, where every component is version-controlled, automatically deployed, and consistently configured. This approach reduces manual operations, improves reliability, and provides a foundation for future enhancements.

The integration of AI-powered development tools represents the next evolution of this platform - moving beyond just providing compute resources to providing intelligent, AI-augmented development environments. The thoughtful architecture and planning required to integrate these tools properly demonstrates that modern platform engineering requires consideration of AI workflows as first-class platform features.

Resources


Platform: Coder on Kubernetes
Virtualization: Proxmox VE
Storage: TrueNAS with NFS
Automation: Forgejo Actions
AI Integration: Aider, Claude, LiteLLM, Langfuse (planned)
Architecture: C4 Model diagrams