Create Configuration wiki page for chrysopedia

xpltd_admin 2026-04-03 23:16:11 -06:00
parent 0bb2098654
commit 54b34900fb

# Configuration
| Meta | Value |
|------|-------|
| **Repo** | `xpltdco/chrysopedia` |
| **Page** | `Configuration` |
| **Audience** | developers, agents |
| **Last Updated** | 2026-04-04 |
| **Status** | current |
## Overview
Configuration is managed via environment variables in two `.env` files loaded by Docker Compose. The backend parses settings via `backend/config.py` (Pydantic-style with LRU caching). Frontend build-time constants are injected via Docker build args.
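Since `backend/config.py` is described but not shown, here is a minimal stdlib-only sketch of the same pattern — a cached settings object built from environment variables. Field names beyond the documented variables, and all defaults, are illustrative assumptions, not the actual implementation (which is Pydantic-based).

```python
import os
from dataclasses import dataclass
from functools import lru_cache


@dataclass(frozen=True)
class Settings:
    """Illustrative stand-in for the Pydantic-style settings in backend/config.py."""
    database_url: str
    redis_url: str
    llm_base_url: str
    debug: bool


@lru_cache(maxsize=1)
def get_settings() -> Settings:
    # Cached so every caller shares one parsed Settings instance,
    # mirroring the LRU-cached accessor described above.
    return Settings(
        database_url=os.environ["DATABASE_URL"],
        redis_url=os.environ.get("REDIS_URL", "redis://chrysopedia-redis:6379/0"),
        llm_base_url=os.environ.get("LLM_BASE_URL", ""),
        debug=os.environ.get("DEBUG", "false").lower() == "true",
    )
```

The `lru_cache` means the environment is read once per process; changing a variable after startup requires a restart.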
## Environment Files
| File | Purpose |
|------|---------|
| `.env` | Core app config (DB, Redis, LLM, auth, app settings) |
| `.env.lightrag` | LightRAG service config (LLM model, embedding, Qdrant connection) |
## Core Application (.env)
### Database
| Variable | Description |
|----------|-------------|
| `DATABASE_URL` | PostgreSQL connection string (`postgresql+asyncpg://...`) |
### Redis
| Variable | Description |
|----------|-------------|
| `REDIS_URL` | Redis connection string (`redis://chrysopedia-redis:6379/0`) |
### Authentication
| Variable | Description |
|----------|-------------|
| `APP_SECRET_KEY` | JWT signing key (HS256) |
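`APP_SECRET_KEY` signs JWTs with HS256 (HMAC-SHA256). A stdlib sketch of what that signing scheme does — the backend presumably uses a JWT library rather than this hand-rolled version, and the claims shown are hypothetical:

```python
import base64
import hashlib
import hmac
import json


def _b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_hs256(claims: dict, secret: str) -> str:
    """Produce header.payload.signature, signed with HMAC-SHA256."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"


def verify_hs256(token: str, secret: str) -> bool:
    """Recompute the signature and compare in constant time."""
    head, payload, sig = token.split(".")
    signing_input = f"{head}.{payload}".encode()
    expected = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return hmac.compare_digest(_b64url(expected), sig)
```

The practical consequence: anyone holding `APP_SECRET_KEY` can mint valid tokens, so it must be treated like a credential, not ordinary config.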
### LLM Configuration
| Variable | Description |
|----------|-------------|
| `OPENAI_API_KEY` | API key for OpenAI-compatible endpoint |
| `LLM_MODEL` | Primary LLM model name (DGX Sparks Qwen) |
| `LLM_BASE_URL` | OpenAI-compatible API endpoint URL |
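These three variables are all that is needed to talk to any OpenAI-compatible endpoint. A hedged sketch of how a chat-completion request could be assembled from them — the actual backend request code is not shown in this page, and whether `LLM_BASE_URL` already includes the `/v1` prefix is an assumption:

```python
import json
import urllib.request


def build_chat_request(
    base_url: str, api_key: str, model: str, prompt: str
) -> urllib.request.Request:
    """Assemble a chat-completion request for an OpenAI-compatible endpoint.

    The /chat/completions path, bearer-token header, and payload shape follow
    the OpenAI API convention that such endpoints implement.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```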
### Application
| Variable | Default | Description |
|----------|---------|-------------|
| `APP_VERSION` | `0.1.0` | Version string (injected into frontend) |
| `GIT_COMMIT` | `unknown` | Git commit hash (injected into frontend) |
| `DEBUG` | `false` | Debug mode |
### Watch Directory
| Variable | Description |
|----------|-------------|
| `WATCH_DIR` | Path to transcript JSON watch folder |
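Putting the variables above together, a `.env` skeleton. Every value here is a placeholder — hostnames, credentials, model name, and paths are illustrative, not the deployed values:

```ini
# Core application config — placeholder values only
DATABASE_URL=postgresql+asyncpg://chrysopedia:changeme@postgres-host:5432/chrysopedia
REDIS_URL=redis://chrysopedia-redis:6379/0
APP_SECRET_KEY=change-me
OPENAI_API_KEY=sk-placeholder
LLM_MODEL=example-model
LLM_BASE_URL=http://example-llm-host:8000/v1
APP_VERSION=0.1.0
GIT_COMMIT=unknown
DEBUG=false
WATCH_DIR=/path/to/transcripts
```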
## LightRAG (.env.lightrag)
| Variable | Description |
|----------|-------------|
| `LLM_MODEL` | LLM model for LightRAG queries |
| `EMBEDDING_MODEL` | Embedding model (`nomic-embed-text`) |
| `QDRANT_URL` | Qdrant connection for LightRAG |
| `WORKING_DIR` | LightRAG data directory |
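A matching `.env.lightrag` skeleton — the Qdrant hostname and working-directory path are assumptions; only the variable names and the embedding model come from the table above:

```ini
# LightRAG service config — placeholder values only
LLM_MODEL=example-model
EMBEDDING_MODEL=nomic-embed-text
QDRANT_URL=http://chrysopedia-qdrant:6333
WORKING_DIR=/app/data
```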
## Frontend Build Args
Injected at Docker build time via `docker-compose.yml` build args:
| Build Arg | Becomes | Purpose |
|-----------|---------|---------|
| `VITE_APP_VERSION` | `import.meta.env.VITE_APP_VERSION` | Version display in UI |
| `VITE_GIT_COMMIT` | `import.meta.env.VITE_GIT_COMMIT` | Commit hash in UI |
**Important:** In `Dockerfile.web`, the directives must appear in the order `ARG` → `ENV` → `RUN npm run build`; the build args only reach the Vite build if they are exported as environment variables first.
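A `Dockerfile.web` fragment illustrating that ordering (surrounding stages omitted; this is a sketch, not the actual file):

```dockerfile
# 1. Declare the build args in this stage
ARG VITE_APP_VERSION
ARG VITE_GIT_COMMIT
# 2. Export them as env vars so the build process can see them
ENV VITE_APP_VERSION=$VITE_APP_VERSION
ENV VITE_GIT_COMMIT=$VITE_GIT_COMMIT
# 3. Only then build — Vite bakes VITE_-prefixed env vars into import.meta.env
RUN npm run build
```

If `RUN npm run build` runs before the `ENV` lines, the build succeeds but the UI shows empty version and commit values.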
## Docker Service Configuration
### PostgreSQL
- **Port:** 5433:5432 (host:container)
- **Volume:** `chrysopedia_postgres_data`
### Redis
- **Port:** Internal only (6379)
- **Persistence:** Default (RDB snapshots)
### Qdrant
- **Port:** Internal only (6333)
- **Volume:** `chrysopedia_qdrant_data`
### Ollama
- **Port:** Internal only (11434)
- **Volume:** `chrysopedia_ollama_data`
- **Model:** `nomic-embed-text` (pulled on init)
### LightRAG
- **Port:** 9621 (localhost only on host)
- **Volume:** `chrysopedia_lightrag_data`
### Web (nginx)
- **Port:** 8096:80 (host:container)
- **Serves:** Pre-built React SPA with fallback to index.html
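The port and volume pairs above correspond to `docker-compose.yml` entries along these lines. Service names and container-side mount paths are assumptions; the host ports and volume names come from the lists above:

```yaml
services:
  web:
    ports:
      - "8096:80"               # host:container; nginx serving the built SPA
  postgres:
    ports:
      - "5433:5432"             # non-default host port avoids clashing with a local Postgres
    volumes:
      - chrysopedia_postgres_data:/var/lib/postgresql/data
  lightrag:
    ports:
      - "127.0.0.1:9621:9621"   # bound to localhost only on the host
  redis: {}                     # no ports entry: reachable only on the compose network

volumes:
  chrysopedia_postgres_data:
```

Services without a `ports` entry (Redis, Qdrant, Ollama) are reachable only from other containers on the compose network, which is why the "internal only" ports above never appear on the host.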
## Pipeline Configuration
LLM settings are configured per pipeline stage:
- **Stages 1-4:** Chat model (faster, cheaper)
- **Stage 5 (synthesis):** Thinking model (higher quality)
- **Stage 6 (embedding):** Ollama local (`nomic-embed-text`)
Prompt templates are loaded from disk (`prompts/` directory) at runtime. SHA-256 hashes are tracked for reproducibility.
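The SHA-256 tracking can be sketched as follows — the function name and call site are illustrative, but the idea is exactly as stated: hash the template bytes so each run records which version of the prompt produced its output.

```python
import hashlib
from pathlib import Path


def prompt_hash(path: Path) -> str:
    """Hash a prompt template's bytes so a pipeline run can record
    exactly which version of the template it used."""
    return hashlib.sha256(path.read_bytes()).hexdigest()
```

Any edit to a file under `prompts/` changes its hash, so two runs are comparable only when their recorded hashes match.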
## Network
- **Compose subnet:** 172.32.0.0/24
- **External access:** nginx on nuc01 (10.0.0.9) → ub01:8096
- **DNS:** AdGuard Home rewrites `chrysopedia.com` → 10.0.0.9
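The nuc01 hop can be expressed as an nginx site fragment like the following. The `server_name`, target host, and port come from the lines above; the directive details are a sketch of a typical reverse-proxy block, not the actual config on nuc01:

```nginx
server {
    listen 80;
    server_name chrysopedia.com;        # AdGuard Home resolves this to 10.0.0.9
    location / {
        proxy_pass http://ub01:8096;    # forward to the web container's host port
        proxy_set_header Host $host;
    }
}
```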
---
*See also: [[Architecture]], [[Deployment]], [[Agent-Context]]*