Commit graph

9 commits

jlightner
fd1fd6c6f9 fix: Pipeline LLM audit — temperature=0, realistic token ratios, structured request_params
Audit findings & fixes:
- temperature was never set (API defaulted to 1.0) → now explicit 0.0 for deterministic JSON
- llm_max_tokens=65536 exceeded hard_limit=32768 → aligned to 32768
- Output ratio estimates were 5-30x too high (based on actual pipeline data):
  stage2: 0.6→0.05, stage3: 2.0→0.3, stage4: 0.5→0.3, stage5: 2.5→0.8
- request_params now structured as api_params (what's sent to LLM) vs pipeline_config
  (internal estimator settings) — no more ambiguous 'hard_limit' in request params
- temperature=0.0 sent on both primary and fallback endpoints
2026-04-01 07:20:09 +00:00
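
A minimal sketch of the api_params / pipeline_config split described above, assuming hypothetical field names beyond the ones the message itself cites (temperature, max_tokens, hard_limit):

    # Sketch only: api_params is forwarded verbatim to the LLM endpoint,
    # pipeline_config stays inside the estimator. Names other than
    # temperature / max_tokens / hard_limit are assumptions.
    request_params = {
        "api_params": {
            "temperature": 0.0,   # explicit, for deterministic JSON
            "max_tokens": 32768,  # aligned to the hard limit
        },
        "pipeline_config": {
            "hard_limit": 32768,  # internal cap, no longer ambiguous
            "output_ratio": 0.3,  # per-stage estimator setting
        },
    }

    def build_payload(model: str, messages: list[dict]) -> dict:
        # Only api_params reaches the endpoint (primary or fallback).
        return {"model": model, "messages": messages, **request_params["api_params"]}
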
jlightner
e80094dc05 feat: Truncation detection, batched classification, and pipeline auto-resume
Three resilience improvements to the pipeline:

1. LLMResponse(str) subclass carries finish_reason metadata from the LLM.
   _safe_parse_llm_response detects truncation (finish=length) and raises
   LLMTruncationError instead of wastefully retrying with a JSON nudge
   that makes the prompt even longer (a sketch follows after this entry).

2. Stage 4 classification now batches moments (20 per call) instead of
   sending all moments in a single LLM call. Prevents context window
   overflow for videos with many moments. Batch results are merged with
   reindexed moment_index values.

3. run_pipeline auto-resumes from the last completed stage on error/retry
   instead of always restarting from stage 2. Queries pipeline_events for
   the most recent run to find completed stages. clean_reprocess trigger
   still forces a full restart.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 17:48:19 -05:00
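
A minimal sketch of item 1, using the names the message cites (LLMResponse, LLMTruncationError, _safe_parse_llm_response); the repo's actual parsing and retry logic is not shown here:

    import json

    class LLMTruncationError(Exception):
        """The model stopped because it hit its token limit."""

    class LLMResponse(str):
        """str subclass that carries finish_reason metadata from the LLM."""
        def __new__(cls, text: str, finish_reason: str = "stop"):
            obj = super().__new__(cls, text)
            obj.finish_reason = finish_reason
            return obj

    def _safe_parse_llm_response(response: LLMResponse) -> dict:
        # A truncated body can never parse as JSON; fail fast rather than
        # retry with a nudge that only makes the prompt longer.
        if getattr(response, "finish_reason", None) == "length":
            raise LLMTruncationError("response truncated (finish=length)")
        return json.loads(response)

Item 2's batching would then slice the moment list into chunks of 20 before each call and reindex moment_index when merging the partial results.
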
jlightner
5984129e25 fix: Inflate LLM token estimates and forward max_tokens on retry
Stage 4 classification was truncating (finish=length) because the 0.15x
output ratio underestimated token needs. Inflated all stage ratios,
bumped the buffer from 20% to 50%, raised the floor from 2048 to 4096,
and fixed _safe_parse_llm_response to forward max_tokens on retry
instead of falling back to the 65k default.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 17:28:58 -05:00
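
The numbers above imply an estimator along these lines (function name and clamping details are assumptions):

    def estimate_max_tokens(prompt_tokens: int, output_ratio: float) -> int:
        # 50% buffer (was 20%) on top of the per-stage output ratio,
        # with the floor raised from 2048 to 4096 tokens.
        return max(int(prompt_tokens * output_ratio * 1.5), 4096)

On retry, _safe_parse_llm_response would then forward this same value rather than fall back to the 65k default.
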
jlightner
4b0914b12b fix: restore complete project tree from ub01 canonical state
Auto-mode commit 7aa33cd accidentally deleted 78 files (14,814 lines) during M005
execution. Subsequent commits rebuilt some frontend files but backend/, alembic/,
tests/, whisper/, docker configs, and prompts were never restored in this repo.

This commit restores the full project tree by syncing from ub01's working directory,
which has all M001-M007 features running in production containers.

Restored: backend/ (config, models, routers, database, redis, search_service, worker),
alembic/ (6 migrations), docker/ (Dockerfiles, nginx, compose), prompts/ (4 stages),
tests/, whisper/, README.md, .env.example, chrysopedia-spec.md
2026-03-31 02:10:41 +00:00
jlightner
7aa33cd17f fix: Fixed syntax errors in pipeline event instrumentation — _emit_even…
- "backend/pipeline/stages.py"

GSD-Task: S01/T01
2026-03-30 08:27:53 +00:00
jlightner
0b0ca598b4 feat: Log LLM response token usage (prompt/completion/total, content_len, finish_reason) 2026-03-30 06:15:24 +00:00
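
The gist of that logging, assuming an OpenAI-style response body (helper name hypothetical):

    import logging

    logger = logging.getLogger("pipeline.llm")

    def log_usage(data: dict, content: str) -> None:
        usage = data.get("usage", {})
        logger.info(
            "llm usage prompt=%s completion=%s total=%s content_len=%d finish_reason=%s",
            usage.get("prompt_tokens"), usage.get("completion_tokens"),
            usage.get("total_tokens"), len(content),
            data["choices"][0].get("finish_reason"),
        )
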
jlightner
cf759f3739 fix: Add max_tokens=16384 to LLM requests (OpenWebUI defaults to 1000, truncating pipeline JSON) 2026-03-30 04:08:29 +00:00
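
A one-line sketch of that fix, assuming an OpenAI-compatible chat payload (helper name hypothetical):

    def build_chat_payload(model: str, messages: list[dict]) -> dict:
        # Explicit cap; OpenWebUI otherwise defaults to 1000 and truncates JSON.
        return {"model": model, "messages": messages, "max_tokens": 16384}
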
jlightner
4aa4b08a7f feat: Per-stage LLM model routing with thinking modality and think-tag stripping
- Added 8 per-stage config fields: llm_stage{2-5}_model and llm_stage{2-5}_modality
- LLMClient.complete() accepts modality ('chat'/'thinking') and model_override
- Thinking modality: appends JSON instructions to system prompt, strips <think> tags
- strip_think_tags() handles multiline, multiple blocks, and edge cases (sketched below)
- Pipeline stages 2-5 read per-stage config and pass to LLM client
- Updated .env.example with per-stage model/modality documentation
- All 59 tests pass including new think-tag stripping test
2026-03-30 02:12:14 +00:00
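
strip_think_tags is named above; a minimal regex sketch covering the multiline and multiple-block cases (the repo's edge-case handling may go further):

    import re

    _THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

    def strip_think_tags(text: str) -> str:
        """Drop every <think>...</think> block, multiline included."""
        cleaned = _THINK_RE.sub("", text)
        # Edge case: an unclosed <think> tag swallows everything after it.
        cleaned = re.sub(r"<think>.*\Z", "", cleaned, flags=re.DOTALL)
        return cleaned.strip()

For example, strip_think_tags('<think>plan</think>{"ok": true}') returns '{"ok": true}'.
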
jlightner
12cc86aef9 chore: Extended Settings with 12 LLM/embedding/Qdrant config fields, cr…
- "backend/config.py"
- "backend/worker.py"
- "backend/pipeline/schemas.py"
- "backend/pipeline/llm_client.py"
- "backend/requirements.txt"
- "backend/pipeline/__init__.py"
- "backend/pipeline/stages.py"

GSD-Task: S03/T01
2026-03-29 22:30:31 +00:00
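
The 12 field names are truncated out of this log, so the following is a purely hypothetical shape (pydantic-settings assumed; every field name here is an illustration, not the repo's):

    from pydantic_settings import BaseSettings

    class Settings(BaseSettings):
        # Illustrative LLM / embedding / Qdrant fields only; the real names
        # live in backend/config.py and are not visible in this log.
        llm_base_url: str = "http://localhost:3000/api"
        llm_model: str = "llama3"
        llm_max_tokens: int = 16384
        embedding_model: str = "all-MiniLM-L6-v2"
        qdrant_url: str = "http://localhost:6333"
        qdrant_collection: str = "chunks"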