| Name | Last commit | Date |
| --- | --- | --- |
| pipeline | fix: Add max_tokens=16384 to LLM requests (OpenWebUI defaults to 1000, truncating pipeline JSON) | 2026-03-30 04:08:29 +00:00 |
| routers | feat: Created async search service with embedding+Qdrant+keyword fallba… | 2026-03-29 23:55:52 +00:00 |
| tests | feat: Per-stage LLM model routing with thinking modality and think-tag stripping | 2026-03-30 02:12:14 +00:00 |
| config.py | fix: Bump max_tokens to 65536 (model supports 94K context, extraction needs headroom) | 2026-03-30 04:57:44 +00:00 |
| database.py | fix: Created SQLAlchemy models for all 7 entities, Alembic async migrat… | 2026-03-29 21:48:36 +00:00 |
| main.py | feat: Created async search service with embedding+Qdrant+keyword fallba… | 2026-03-29 23:55:52 +00:00 |
| models.py | test: Added 6 integration tests proving ingestion, creator auto-detecti… | 2026-03-29 22:16:15 +00:00 |
| pytest.ini | test: Added 6 integration tests proving ingestion, creator auto-detecti… | 2026-03-29 22:16:15 +00:00 |
| redis_client.py | test: Built 9 review queue API endpoints (queue, stats, approve, reject… | 2026-03-29 23:13:43 +00:00 |
| requirements.txt | feat: Created 4 prompt templates and implemented 5 Celery tasks (stages… | 2026-03-29 22:36:06 +00:00 |
| schemas.py | feat: Created async search service with embedding+Qdrant+keyword fallba… | 2026-03-29 23:55:52 +00:00 |
| search_service.py | feat: Created async search service with embedding+Qdrant+keyword fallba… | 2026-03-29 23:55:52 +00:00 |
| worker.py | chore: Extended Settings with 12 LLM/embedding/Qdrant config fields, cr… | 2026-03-29 22:30:31 +00:00 |
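The "think-tag stripping" commit on the tests directory suggests the pipeline removes the reasoning blocks that thinking-mode models emit before their final answer. A minimal sketch of such a helper, assuming a literal `<think>…</think>` tag pair (the tag name and the function name `strip_think_tags` are assumptions, not confirmed by the listing):

```python
import re

# Matches a <think>...</think> block plus any trailing whitespace.
# re.DOTALL lets ".*?" span newlines, since reasoning blocks are
# typically multi-line.
THINK_TAG_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)


def strip_think_tags(text: str) -> str:
    """Drop model reasoning blocks so only the final answer remains."""
    return THINK_TAG_RE.sub("", text)
```

Stripping before JSON parsing matters here because a leading reasoning block would make an otherwise valid pipeline JSON payload unparseable.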
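Two of the commits above fix the same failure mode: an unset completion cap (OpenWebUI's 1000-token default) truncating large pipeline JSON mid-object. A sketch of the fix, using the OpenAI-compatible chat-completions request shape; the helper name `build_chat_request` and the default of 16384 (from the first commit message) are illustrative, and the later commit bumps the cap to 65536:

```python
def build_chat_request(model: str, prompt: str, max_tokens: int = 16384) -> dict:
    """Build an OpenAI-compatible chat request with an explicit token cap.

    Without an explicit max_tokens, the serving layer's low default
    cuts off long structured outputs partway through, so downstream
    JSON parsing fails.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
```

The follow-up commit raising the cap to 65536 reflects the usual sizing rule: leave the extraction stage enough completion headroom within the model's context window (94K here) rather than tuning the cap to the smallest observed output.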