pipeline
    2026-03-30 06:15:24 +00:00  feat: Log LLM response token usage (prompt/completion/total, content_len, finish_reason)
routers
    2026-03-30 01:26:12 -05:00  fix: Creators endpoint returns paginated response, review queue limit raised to 1000, added GET /review/moments/{id} endpoint
tests
    2026-03-30 02:12:14 +00:00  feat: Per-stage LLM model routing with thinking modality and think-tag stripping
config.py
    2026-03-30 05:42:27 +00:00  feat: Switch to FYN-LLM-Agent models — chat for stages 2/4, think for stages 3/5
database.py
    2026-03-29 21:48:36 +00:00  fix: Created SQLAlchemy models for all 7 entities, Alembic async migrat…
main.py
    2026-03-29 23:55:52 +00:00  feat: Created async search service with embedding+Qdrant+keyword fallba…
models.py
    2026-03-29 22:16:15 +00:00  test: Added 6 integration tests proving ingestion, creator auto-detecti…
pytest.ini
    2026-03-29 22:16:15 +00:00  test: Added 6 integration tests proving ingestion, creator auto-detecti…
redis_client.py
    2026-03-29 23:13:43 +00:00  test: Built 9 review queue API endpoints (queue, stats, approve, reject…
requirements.txt
    2026-03-29 22:36:06 +00:00  feat: Created 4 prompt templates and implemented 5 Celery tasks (stages…
schemas.py
    2026-03-29 23:55:52 +00:00  feat: Created async search service with embedding+Qdrant+keyword fallba…
search_service.py
    2026-03-29 23:55:52 +00:00  feat: Created async search service with embedding+Qdrant+keyword fallba…
worker.py
    2026-03-29 22:30:31 +00:00  chore: Extended Settings with 12 LLM/embedding/Qdrant config fields, cr…