chrysopedia/backend/pipeline
jlightner 5984129e25 fix: Inflate LLM token estimates and forward max_tokens on retry
Stage 4 classification was truncating (finish=length) because the 0.15x
output ratio underestimated token needs. Inflated all stage ratios,
bumped the buffer from 20% to 50%, raised the floor from 2048 to 4096,
and fixed _safe_parse_llm_response to forward max_tokens on retry
instead of falling back to the 65k default.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 17:28:58 -05:00
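The fix described above can be sketched roughly as follows. All names here (`STAGE_OUTPUT_RATIOS`, `estimate_max_tokens`, `parse_with_retry`) are illustrative, not the actual `llm_client.py` / `stages.py` API; the ratios, buffer, and floor are the values quoted in the commit message.

```python
import json

# Hypothetical per-stage output ratios; the commit says stage 4's old
# 0.15x ratio underestimated output size, so all ratios were inflated.
STAGE_OUTPUT_RATIOS = {
    "stage_4_classification": 0.35,  # illustrative inflated value
}
BUFFER = 0.50   # safety margin, bumped from 20% to 50%
FLOOR = 4096    # minimum budget, raised from 2048

def estimate_max_tokens(stage: str, input_tokens: int) -> int:
    """Estimate an output token budget: ratio * input, padded and floored."""
    ratio = STAGE_OUTPUT_RATIOS.get(stage, 0.35)
    estimate = int(input_tokens * ratio * (1 + BUFFER))
    return max(estimate, FLOOR)

def parse_with_retry(call_llm, prompt: str, max_tokens: int, retries: int = 1):
    """Parse a JSON LLM response, retrying on parse failure.

    The key point from the commit: the retry forwards the caller's
    max_tokens instead of falling back to a large (65k) default.
    """
    for _attempt in range(retries + 1):
        response = call_llm(prompt, max_tokens=max_tokens)  # forwarded on every attempt
        try:
            return json.loads(response)
        except json.JSONDecodeError:
            continue
    raise ValueError("LLM response was not parseable JSON after retries")
```

With these numbers, a small input is still floored at 4096 output tokens, while large inputs scale with the inflated ratio, which is what prevents `finish=length` truncation on verbose classification stages.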
__init__.py fix: restore complete project tree from ub01 canonical state 2026-03-31 02:10:41 +00:00
embedding_client.py fix: restore complete project tree from ub01 canonical state 2026-03-31 02:10:41 +00:00
llm_client.py fix: Inflate LLM token estimates and forward max_tokens on retry 2026-03-31 17:28:58 -05:00
qdrant_client.py feat: Add bulk pipeline reprocessing — creator filter, multi-select, clean retrigger 2026-03-31 15:24:59 +00:00
schemas.py fix: restore complete project tree from ub01 canonical state 2026-03-31 02:10:41 +00:00
stages.py fix: Inflate LLM token estimates and forward max_tokens on retry 2026-03-31 17:28:58 -05:00