test: Built 9 review queue API endpoints (queue, stats, approve, reject…
- "backend/routers/review.py" - "backend/schemas.py" - "backend/redis_client.py" - "backend/main.py" - "backend/tests/test_review.py" GSD-Task: S04/T01
This commit is contained in:
parent 2cb10b5db8
commit c2edba952c
17 changed files with 1935 additions and 3 deletions

@ -11,3 +11,5 @@

| D003 | | requirement | R002 Transcript Ingestion API status | validated | 6 passing integration tests prove the full POST /api/v1/ingest flow: creator auto-detection, SourceVideo upsert, TranscriptSegment bulk insert, raw JSON persistence, idempotent re-upload, and invalid input rejection. | Yes | agent |
| D004 | | architecture | Sync vs async approach for Celery worker pipeline tasks | Use sync openai.OpenAI, sync QdrantClient, and sync SQLAlchemy (create_engine with psycopg2) inside Celery tasks. Convert DATABASE_URL from postgresql+asyncpg:// to postgresql:// for the sync engine. | Celery workers run in a synchronous context. Using asyncio.run() inside tasks risks nested event loop errors with gevent/eventlet workers. Using sync clients throughout eliminates this class of bug entirely. The async engine/session from database.py is only used by FastAPI (ASGI); the worker gets its own sync engine. | Yes | agent |
| D005 | | architecture | Embedding/Qdrant failure handling strategy in pipeline | Embedding/Qdrant failures (stage 6) log errors but do not fail the pipeline. processing_status is set by stages 2-5 only. Embeddings can be regenerated by manual re-trigger. | Qdrant is at 10.0.0.10 on the hypervisor network and may not be reachable during all pipeline runs. Making embedding a non-blocking side-effect ensures core pipeline output (KeyMoments, TechniquePages in PostgreSQL) is never lost due to vector store issues. The manual re-trigger endpoint allows regenerating embeddings at any time. | Yes | agent |
| D006 | | requirement | R013 Prompt Template System status | validated | 4 prompt template files in prompts/ directory loaded from configurable settings.prompts_path. Templates use XML-style content fencing. Pipeline stages read templates from disk at runtime, enabling edits without code changes. Manual re-trigger endpoint (POST /api/v1/pipeline/trigger/{video_id}) allows re-processing after prompt edits. | Yes | agent |
| D007 | M001/S04 | architecture | Runtime review mode toggle persistence mechanism | Store review mode toggle in Redis key `chrysopedia:review_mode` with async redis client. Fall back to `settings.review_mode` config default when key is absent. | The config.py `review_mode` setting is loaded via lru_cache from environment variables and cannot be mutated at runtime. Redis is already used by the project (Celery broker, stage 4 classification data) so it adds no new infrastructure. A system_settings DB table would work but Redis is simpler for a single boolean toggle on a single-admin tool. The pipeline's stages.py reads settings.review_mode from config — the admin toggle only affects new pipeline runs if stages.py is updated to check Redis too, but that's deferred since the toggle is primarily a UI-level concept for the review queue. | Yes | agent |
@ -29,3 +29,21 @@
**Context:** When using a session-scoped SQLAlchemy async engine with asyncpg in pytest-asyncio tests, the connection pool reuses connections across fixtures and test functions. This causes `InterfaceError: cannot perform operation: another operation is in progress` because the ASGI test client's session holds a connection while cleanup/verification fixtures try to use the same pool.
**Fix:** Use `poolclass=NullPool` when creating the test engine. Each connection is created fresh and immediately closed, eliminating contention. Performance cost is negligible for test suites.
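
A minimal conftest-style sketch of this fix, assuming SQLAlchemy 2.x async API; the database URL and helper name here are invented, not from the project:

```python
from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.pool import NullPool

TEST_DATABASE_URL = "postgresql+asyncpg://test:test@localhost/chrysopedia_test"  # assumed

def make_test_engine(url: str = TEST_DATABASE_URL):
    # NullPool: no connection reuse at all, so the ASGI client's session and
    # cleanup/verification fixtures can never contend for the same connection.
    return create_async_engine(url, poolclass=NullPool)
```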
## Testing Celery tasks that use sync SQLAlchemy: patch module-level globals
**Context:** Pipeline stages in `pipeline/stages.py` create their own sync SQLAlchemy engine/session via module-level `_engine` and `_SessionLocal` globals (because Celery is sync, not async). Tests need to redirect these to the test database, but the engine is created lazily at module scope.
**Fix:** Patch the module globals directly: `unittest.mock.patch.object(stages, '_engine', test_engine)` and `unittest.mock.patch.object(stages, '_SessionLocal', test_session_factory)`. This redirects all DB access in stage functions to the test database without modifying production code.
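
The mechanism can be shown with a stand-in module; in the real tests the target is `pipeline.stages` and the patched values are the test engine and session factory:

```python
import types
from unittest import mock

# Stand-in for pipeline/stages.py with module-level DB globals.
stages = types.ModuleType("stages")
stages._engine = "production-engine"
stages._SessionLocal = "production-session-factory"

def current_engine():
    # Stand-in for a stage function reading the module global at call time.
    return stages._engine

with mock.patch.object(stages, "_engine", "test-engine"), \
     mock.patch.object(stages, "_SessionLocal", "test-session-factory"):
    assert current_engine() == "test-engine"  # all DB access redirected

assert current_engine() == "production-engine"  # restored on context exit
```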
## Lazy imports in FastAPI handlers defeat simple mock patching
**Context:** When a FastAPI handler imports a function lazily (inside the function body) like `from pipeline.stages import run_pipeline`, patching `routers.ingest.run_pipeline` has no effect because the name is re-bound on every call from the source module.
**Fix:** Patch at the source module: `unittest.mock.patch('pipeline.stages.run_pipeline')`. The lazy import will pick up the mock from the source module. This applies to any handler that uses lazy imports to avoid circular dependencies at module load time.
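
A self-contained illustration with a stand-in module; the real target string is `pipeline.stages.run_pipeline`:

```python
import sys
import types
from unittest import mock

# Stand-in for pipeline/stages.py, registered so patch() can resolve it.
source = types.ModuleType("fake_pipeline_stages")
def run_pipeline(video_id):
    return f"real:{video_id}"
source.run_pipeline = run_pipeline
sys.modules["fake_pipeline_stages"] = source

def handler(video_id):
    # Lazy import, like routers/ingest.py: the name is re-bound on every call,
    # so only a patch on the source module is visible here.
    from fake_pipeline_stages import run_pipeline
    return run_pipeline(video_id)

with mock.patch("fake_pipeline_stages.run_pipeline", return_value="mocked"):
    assert handler("v1") == "mocked"

assert handler("v1") == "real:v1"  # original restored after the patch
```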
## Stage 4 classification data stored in Redis (not DB columns)
**Context:** The KeyMoment SQLAlchemy model doesn't have `topic_tags` or `topic_category` columns. Stage 4 classification needs somewhere to store per-moment tag assignments that stage 5 can read.
**Fix:** Store classification results in Redis under key `chrysopedia:classification:{video_id}` with a 24-hour TTL. Stage 5 reads from Redis. This avoids schema migrations during initial pipeline development. The data is ephemeral — if Redis loses it, re-running stage 4 regenerates it.
@ -8,6 +8,6 @@ Stand up the complete Chrysopedia stack: Docker Compose deployment on ub01, Post

|----|-------|------|---------|------|------------|
| S01 | Docker Compose + Database + Whisper Script | low | — | ✅ | docker compose up -d starts all services on ub01; Whisper script transcribes a sample video to JSON |
| S02 | Transcript Ingestion API | low | S01 | ✅ | POST a transcript JSON file to the API; Creator and Source Video records appear in PostgreSQL |
| S03 | LLM Extraction Pipeline + Qdrant Integration | high | S02 | ⬜ | A transcript JSON triggers stages 2-5: segmentation → extraction → classification → synthesis. Technique pages with key moments appear in DB. Qdrant has searchable embeddings. |
| S03 | LLM Extraction Pipeline + Qdrant Integration | high | S02 | ✅ | A transcript JSON triggers stages 2-5: segmentation → extraction → classification → synthesis. Technique pages with key moments appear in DB. Qdrant has searchable embeddings. |
| S04 | Review Queue Admin UI | medium | S03 | ⬜ | Admin views pending key moments, approves/edits/rejects them, toggles between review and auto mode |
| S05 | Search-First Web UI | medium | S03 | ⬜ | User searches for a technique, gets semantic results in <500ms, clicks through to a full technique page with study guide prose, key moments, and related links |
176 .gsd/milestones/M001/slices/S03/S03-SUMMARY.md Normal file
@ -0,0 +1,176 @@

---
id: S03
parent: M001
milestone: M001
provides:
  - "6 Celery tasks: stage2-6 + run_pipeline orchestrator"
  - LLMClient with primary/fallback for downstream use
  - EmbeddingClient for vector generation
  - QdrantManager for vector store operations
  - POST /api/v1/pipeline/trigger/{video_id} manual re-trigger endpoint
  - 8 Pydantic schemas for pipeline stage I/O
  - 4 editable prompt templates in prompts/
  - 10 integration tests with mock fixtures
requires:
  - slice: S02
    provides: Ingest endpoint, database models (SourceVideo, TranscriptSegment, KeyMoment, TechniquePage, Creator), async SQLAlchemy engine, test infrastructure
affects:
  - S04
  - S05
key_files:
  - backend/config.py
  - backend/worker.py
  - backend/pipeline/__init__.py
  - backend/pipeline/schemas.py
  - backend/pipeline/llm_client.py
  - backend/pipeline/embedding_client.py
  - backend/pipeline/qdrant_client.py
  - backend/pipeline/stages.py
  - backend/routers/pipeline.py
  - backend/routers/ingest.py
  - backend/main.py
  - prompts/stage2_segmentation.txt
  - prompts/stage3_extraction.txt
  - prompts/stage4_classification.txt
  - prompts/stage5_synthesis.txt
  - backend/tests/test_pipeline.py
  - backend/tests/fixtures/mock_llm_responses.py
  - backend/tests/conftest.py
  - backend/requirements.txt
key_decisions:
  - Sync OpenAI/SQLAlchemy/Qdrant throughout Celery tasks — no async in worker context (D004)
  - Embedding/Qdrant stage is non-blocking side-effect — failures don't break pipeline (D005)
  - Stage 4 classification stored in Redis (24h TTL) due to missing KeyMoment columns
  - Pipeline dispatch from ingest is best-effort; manual trigger returns 503 on Celery failure
  - LLMClient retries once with JSON nudge on malformed LLM output before failing
patterns_established:
  - "Celery task pattern: @celery_app.task(bind=True, max_retries=3) with sync SQLAlchemy session per task"
  - "LLM client pattern: primary → fallback → fail, with Pydantic response parsing"
  - "Non-blocking side-effect pattern: max_retries=0, catch-all exception handler, pipeline continues"
  - "Prompt template pattern: plain text files in prompts/ dir, XML-style content fencing, loaded at runtime"
  - "Pipeline test pattern: patch module-level _engine/_SessionLocal globals to redirect stages to test DB"
observability_surfaces:
  - INFO log at start/end of each stage with video_id and duration
  - WARNING on LLM fallback trigger
  - ERROR on LLM parse failure with raw response excerpt
  - WARNING on embedding/Qdrant failures with error details
  - source_videos.processing_status tracks pipeline progress per video
  - Celery task registry shows all 6 registered tasks
  - POST /api/v1/pipeline/trigger/{video_id} returns current processing_status
drill_down_paths:
  - .gsd/milestones/M001/slices/S03/tasks/T01-SUMMARY.md
  - .gsd/milestones/M001/slices/S03/tasks/T02-SUMMARY.md
  - .gsd/milestones/M001/slices/S03/tasks/T03-SUMMARY.md
  - .gsd/milestones/M001/slices/S03/tasks/T04-SUMMARY.md
  - .gsd/milestones/M001/slices/S03/tasks/T05-SUMMARY.md
duration: ""
verification_result: passed
completed_at: 2026-03-29T22:59:23.268Z
blocker_discovered: false
---

# S03: LLM Extraction Pipeline + Qdrant Integration

**Built stages 2-6 of the LLM extraction pipeline (segmentation → extraction → classification → synthesis → embedding) with Celery workers, sync SQLAlchemy, primary/fallback LLM endpoints, Qdrant vector indexing, configurable prompt templates, auto-dispatch from ingest, a manual re-trigger API, and 10 integration tests — all 16 tests pass.**

## What This Slice Delivered

S03 implemented the core intelligence of Chrysopedia: the background worker pipeline that transforms raw transcripts into structured knowledge (technique pages, key moments, topic tags, embeddings).

### T01 — Infrastructure Foundation

Extended Settings with 12 config fields (LLM primary/fallback endpoints, embedding config, Qdrant connection, prompt path, review mode). Created the Celery app in `worker.py` using Redis as broker. Built `LLMClient` with sync `openai.OpenAI` and primary→fallback logic that catches `APIConnectionError` and `APITimeoutError`. Defined 8 Pydantic schemas (`TopicSegment`, `SegmentationResult`, `ExtractedMoment`, `ExtractionResult`, `ClassifiedMoment`, `ClassificationResult`, `SynthesizedPage`, `SynthesisResult`) matching the pipeline stage inputs/outputs.
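
The primary→fallback behavior can be sketched independently of the OpenAI SDK; the names below are stand-ins, and `ConnectionFailure` plays the role of `openai.APIConnectionError`/`APITimeoutError`:

```python
class ConnectionFailure(Exception):
    """Stand-in for openai.APIConnectionError / APITimeoutError."""

def complete_with_fallback(primary, fallback, prompt: str) -> str:
    # Try the primary endpoint; on a connection-class failure only,
    # retry the same prompt against the fallback endpoint.
    try:
        return primary(prompt)
    except ConnectionFailure:
        return fallback(prompt)
```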

### T02 — Pipeline Stages + Prompt Templates

Created 4 prompt template files in `prompts/` with XML-style content fencing. Implemented 5 Celery tasks in `pipeline/stages.py`:

- **stage2_segmentation**: Groups transcript segments into topic boundaries, updates `topic_label` on TranscriptSegment rows
- **stage3_extraction**: Extracts key moments from topic groups, creates KeyMoment rows, sets `processing_status=extracted`
- **stage4_classification**: Classifies moments against `canonical_tags.yaml`, stores results in Redis (24h TTL) since KeyMoment lacks tag columns
- **stage5_synthesis**: Synthesizes TechniquePage rows from grouped moments, links KeyMoments, sets `processing_status=reviewed` (or `published` if `review_mode=False`)
- **run_pipeline**: Orchestrator that checks `processing_status` and chains only the remaining stages for resumability

All tasks use sync SQLAlchemy sessions (psycopg2) with `bind=True, max_retries=3`. `_safe_parse_llm_response` retries once with a JSON nudge on malformed output.

### T03 — Embedding & Qdrant Integration

Created `EmbeddingClient` (sync `openai.OpenAI` for `/v1/embeddings`) that returns an empty list on errors. Created `QdrantManager` with idempotent `ensure_collection()` (cosine distance, config-driven dimensions) and `upsert_technique_pages()`/`upsert_key_moments()` with full metadata payloads. Added `stage6_embed_and_index` as a non-blocking side-effect (`max_retries=0`, catches all exceptions) appended to the pipeline chain.
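
The non-blocking side-effect behavior (D005) can be sketched as follows; the function signature and boolean return are assumptions, and in the real code this is a Celery task with `max_retries=0`:

```python
import logging

logger = logging.getLogger(__name__)

def stage6_embed_and_index(video_id: str, embed, upsert) -> bool:
    """Swallow every failure so stages 2-5 output survives a Qdrant outage."""
    try:
        vectors = embed(video_id)
        upsert(video_id, vectors)
        return True
    except Exception as exc:  # catch-all: errors are logged, never raised
        logger.warning("stage6 failed for %s: %s", video_id, exc)
        return False
```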

### T04 — API Wiring

Wired `run_pipeline.delay()` dispatch after the ingest commit (best-effort — failures don't break the ingest response). Added `POST /api/v1/pipeline/trigger/{video_id}` for manual re-processing (returns 404 for a missing video, 503 on Celery failure). Mounted the pipeline router in `main.py`.
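
A sketch of the best-effort dispatch; the helper name is invented, and in the real handler the try/except wraps `run_pipeline.delay(video_id)` inline:

```python
import logging

logger = logging.getLogger(__name__)

def dispatch_pipeline(delay, video_id: str) -> None:
    """`delay` stands in for run_pipeline.delay; failure must not break ingest."""
    try:
        delay(video_id)
    except Exception as exc:  # broker down, Celery misconfigured, etc.
        logger.warning("pipeline dispatch failed for %s: %s", video_id, exc)
```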

### T05 — Integration Tests

Created 10 integration tests with mocked LLM/Qdrant and real PostgreSQL: stages 2-6 produce correct DB records, the pipeline resumes from `extracted` status, the trigger endpoint returns 200/404, ingest dispatches the pipeline, and the LLM falls back on primary failure. All 16 tests (6 ingest + 10 pipeline) pass.

### Key Deviation

Stage 4 stores classification data in Redis rather than DB columns because the `KeyMoment` model lacks `topic_tags`/`topic_category` columns. This is an intentional simplification — stage 5 reads from Redis during synthesis.

## Verification

All slice verification checks pass:

**T01 verification (5/5):** Settings prints correct defaults, all 8 schema classes import, LLMClient imports clean, celery_app.main prints 'chrysopedia', openai/qdrant-client in requirements.txt.

**T02 verification (3/3):** 4 prompt files exist, all 5 stage functions import, worker shows 6 registered tasks.

**T03 verification (3/3):** EmbeddingClient, QdrantManager, stage6_embed_and_index all import successfully.

**T04 verification (3/3):** Pipeline router has /trigger/{video_id} route, pipeline in main.py, run_pipeline in ingest.py.

**T05 verification (2/2):** `cd backend && python -m pytest tests/test_pipeline.py -v` — 10/10 pass. `cd backend && python -m pytest tests/ -v` — 16/16 pass (122s).

All 16 registered tests pass. 6 Celery tasks registered in worker.

## Requirements Advanced

- R003 — Full pipeline stages 2-6 implemented and tested: segmentation, extraction, classification, synthesis, embedding. 10 integration tests verify all stages with mocked LLM and real PostgreSQL.
- R009 — Write path implemented: EmbeddingClient generates vectors, QdrantManager upserts with metadata payloads. Read path (search query) deferred to S05.
- R011 — Stage 4 loads canonical_tags.yaml for classification. Tag taxonomy is config-driven.
- R012 — run_pipeline orchestrator resumes from last completed stage. Auto-dispatch from ingest handles new videos. Manual trigger supports re-processing.
- R013 — 4 prompt template files loaded from configurable prompts_path. Manual trigger enables re-processing after prompt edits.

## Requirements Validated

- R003 — 10 integration tests prove the full pipeline: stage2 updates topic_labels, stage3 creates KeyMoments, stage4 classifies tags, stage5 creates TechniquePages, stage6 embeds to Qdrant. Resumability and LLM fallback tested.
- R013 — 4 prompt files in prompts/, loaded from configurable path, POST /api/v1/pipeline/trigger/{video_id} enables re-processing.

## New Requirements Surfaced

None.

## Requirements Invalidated or Re-scoped

None.

## Deviations

- Stage 4 classification data is stored in Redis (not DB columns) because the KeyMoment model lacks topic_tags/topic_category columns.
- Added psycopg2-binary to requirements.txt for sync SQLAlchemy.
- Created a pipeline/stages.py stub in T01 so the worker.py import chain succeeds ahead of T02.
- The pipeline router uses a lazy import of run_pipeline inside the handler to avoid circular imports.

## Known Limitations

- Stage 4 classification is stored in Redis with a 24h TTL — if Redis is flushed between stage 4 and stage 5, classification data is lost.
- QdrantManager uses random UUIDs for point IDs — re-indexing creates duplicates rather than updating existing points.
- The KeyMoment model needs topic_tags/topic_category columns for a permanent solution.

## Follow-ups

- Add topic_tags and topic_category columns to the KeyMoment model to eliminate the Redis dependency for classification.
- Add deterministic point IDs to QdrantManager based on a content hash for idempotent re-indexing.
- Consider adding a /api/v1/pipeline/status/{video_id} endpoint for monitoring pipeline progress.
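
The deterministic-point-ID follow-up could look like this sketch (the namespace string and helper name are assumptions): deriving a `uuid5` from entity type plus primary key makes re-indexing overwrite the same Qdrant point instead of adding a duplicate.

```python
import uuid

# Fixed namespace so the same (entity, pk) always maps to the same point ID.
NAMESPACE = uuid.uuid5(uuid.NAMESPACE_URL, "chrysopedia")  # assumed namespace

def point_id(entity: str, pk: int) -> str:
    return str(uuid.uuid5(NAMESPACE, f"{entity}:{pk}"))
```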

## Files Created/Modified

- `backend/config.py` — Extended Settings with 12 LLM/embedding/Qdrant/prompt config fields
- `backend/requirements.txt` — Added openai, qdrant-client, pyyaml, psycopg2-binary
- `backend/worker.py` — Created Celery app with Redis broker, imports pipeline.stages
- `backend/pipeline/__init__.py` — Created empty package init
- `backend/pipeline/schemas.py` — 8 Pydantic models for pipeline stage I/O
- `backend/pipeline/llm_client.py` — Sync LLMClient with primary/fallback logic
- `backend/pipeline/embedding_client.py` — Sync EmbeddingClient for /v1/embeddings
- `backend/pipeline/qdrant_client.py` — QdrantManager with idempotent collection mgmt and metadata upserts
- `backend/pipeline/stages.py` — 6 Celery tasks: stages 2-6 + run_pipeline orchestrator
- `backend/routers/pipeline.py` — POST /trigger/{video_id} manual re-trigger endpoint
- `backend/routers/ingest.py` — Added run_pipeline.delay() dispatch after ingest commit
- `backend/main.py` — Mounted pipeline router under /api/v1
- `prompts/stage2_segmentation.txt` — LLM prompt for topic boundary detection
- `prompts/stage3_extraction.txt` — LLM prompt for key moment extraction
- `prompts/stage4_classification.txt` — LLM prompt for canonical tag classification
- `prompts/stage5_synthesis.txt` — LLM prompt for technique page synthesis
- `backend/tests/test_pipeline.py` — 10 integration tests covering all pipeline stages
- `backend/tests/fixtures/mock_llm_responses.py` — Mock LLM response fixtures for all stages
- `backend/tests/conftest.py` — Added sync engine/session fixtures and pre_ingested_video fixture
138 .gsd/milestones/M001/slices/S03/S03-UAT.md Normal file
@ -0,0 +1,138 @@

# S03: LLM Extraction Pipeline + Qdrant Integration — UAT

**Milestone:** M001
**Written:** 2026-03-29T22:59:23.268Z

## UAT: LLM Extraction Pipeline + Qdrant Integration

### Preconditions

- PostgreSQL running with chrysopedia database and schema applied
- Redis running (for Celery broker and classification cache)
- Python venv activated with all requirements installed
- Working directory: `backend/`

---

### Test 1: Pipeline Infrastructure Imports

**Steps:**
1. Run `python -c "from config import Settings; s = Settings(); print(s.llm_api_url, s.llm_fallback_url, s.embedding_model, s.qdrant_url, s.qdrant_collection, s.review_mode)"`
2. Run `python -c "from pipeline.schemas import SegmentationResult, ExtractionResult, ClassificationResult, SynthesisResult, TopicSegment, ExtractedMoment, ClassifiedMoment, SynthesizedPage; print('all 8 schemas ok')"`
3. Run `python -c "from pipeline.llm_client import LLMClient; from pipeline.embedding_client import EmbeddingClient; from pipeline.qdrant_client import QdrantManager; print('all clients ok')"`
4. Run `python -c "from worker import celery_app; tasks = [t for t in celery_app.tasks if 'stage' in t or 'pipeline' in t]; assert len(tasks) == 6; print(sorted(tasks))"`

**Expected:** All commands exit 0. Settings shows correct defaults. 8 schemas import. 3 clients import. 6 tasks registered.

---

### Test 2: Stage 2 — Segmentation Updates Topic Labels

**Steps:**
1. Run `python -m pytest tests/test_pipeline.py::test_stage2_segmentation_updates_topic_labels -v`

**Expected:** Test passes. TranscriptSegment rows have topic_label set from mocked LLM segmentation output.

---

### Test 3: Stage 3 — Extraction Creates Key Moments

**Steps:**
1. Run `python -m pytest tests/test_pipeline.py::test_stage3_extraction_creates_key_moments -v`

**Expected:** Test passes. KeyMoment rows created with title, summary, start_time, end_time, content_type. SourceVideo processing_status = 'extracted'.

---

### Test 4: Stage 4 — Classification Assigns Tags

**Steps:**
1. Run `python -m pytest tests/test_pipeline.py::test_stage4_classification_assigns_tags -v`

**Expected:** Test passes. Classification data stored in Redis matching canonical tag categories from canonical_tags.yaml.

---

### Test 5: Stage 5 — Synthesis Creates Technique Pages

**Steps:**
1. Run `python -m pytest tests/test_pipeline.py::test_stage5_synthesis_creates_technique_pages -v`

**Expected:** Test passes. TechniquePage rows created with body_sections, signal_chains, summary, topic_tags. KeyMoments linked via technique_page_id. Processing status updated to reviewed/published.

---

### Test 6: Stage 6 — Embedding and Qdrant Upsert

**Steps:**
1. Run `python -m pytest tests/test_pipeline.py::test_stage6_embeds_and_upserts_to_qdrant -v`

**Expected:** Test passes. EmbeddingClient.embed called with technique page and key moment text. QdrantManager.upsert called with metadata payloads.

---

### Test 7: Pipeline Resumability

**Steps:**
1. Run `python -m pytest tests/test_pipeline.py::test_run_pipeline_resumes_from_extracted -v`

**Expected:** Test passes. When the video has processing_status='extracted', only stages 4+5+6 execute (not 2+3).

---

### Test 8: Manual Pipeline Trigger API

**Steps:**
1. Run `python -m pytest tests/test_pipeline.py::test_pipeline_trigger_endpoint -v`
2. Run `python -m pytest tests/test_pipeline.py::test_pipeline_trigger_404_for_missing_video -v`

**Expected:** Both pass. POST /api/v1/pipeline/trigger/{video_id} returns 200 with status for an existing video, 404 for a missing video.

---

### Test 9: Ingest Auto-Dispatches Pipeline

**Steps:**
1. Run `python -m pytest tests/test_pipeline.py::test_ingest_dispatches_pipeline -v`

**Expected:** Test passes. After the ingest commit, run_pipeline.delay() is called with the video_id.

---

### Test 10: LLM Fallback on Primary Failure

**Steps:**
1. Run `python -m pytest tests/test_pipeline.py::test_llm_fallback_on_primary_failure -v`

**Expected:** Test passes. When the primary LLM endpoint raises APIConnectionError, the fallback endpoint is used successfully.

---

### Test 11: Full Test Suite Regression

**Steps:**
1. Run `python -m pytest tests/ -v`

**Expected:** All 16 tests pass (6 ingest + 10 pipeline). No regressions from S02 ingest tests.

---

### Test 12: Prompt Template Files Exist and Are Non-Empty

**Steps:**
1. Run `test -s ../prompts/stage2_segmentation.txt && test -s ../prompts/stage3_extraction.txt && test -s ../prompts/stage4_classification.txt && test -s ../prompts/stage5_synthesis.txt && echo "all prompts non-empty"`
2. Run `grep -l '<transcript>' ../prompts/*.txt | wc -l` (verify XML-style fencing)

**Expected:** All 4 files exist and are non-empty. XML-style tags present in prompt files.

---

### Edge Cases

**EC1: Ingest succeeds even if Celery/Redis is down**
The ingest endpoint wraps run_pipeline.delay() in try/except. If Celery dispatch fails, ingest still returns 200 and logs a WARNING. Verified by the test_ingest_dispatches_pipeline mock setup.

**EC2: stage6 embedding failure doesn't break the pipeline**
stage6_embed_and_index catches all exceptions with max_retries=0. If Qdrant or the embedding API is unreachable, the pipeline completes with stages 2-5 results intact. Verified by the test_stage6 mock setup.

**EC3: LLM returns malformed JSON**
_safe_parse_llm_response retries once with a JSON nudge prompt. On second failure, it logs ERROR with a raw response excerpt and raises.

---

### Operational Readiness (Q8)

**Health signal:** `source_videos.processing_status` tracks per-video pipeline progress. Celery task registry shows 6 tasks via `python -c "from worker import celery_app; print([t for t in celery_app.tasks if 'stage' in t or 'pipeline' in t])"`.

**Failure signal:** A video stuck at a processing_status other than `reviewed`/`published` for >10 minutes indicates a pipeline failure. Check Celery worker logs for ERROR entries with stage name and video_id.

**Recovery procedure:** POST `/api/v1/pipeline/trigger/{video_id}` to re-trigger the pipeline from the last completed stage. For embedding-only issues the pipeline can be re-run; stage6 is safe to re-run, though random point IDs mean re-indexing adds duplicate Qdrant points rather than updating existing ones (see Known Limitations).

**Monitoring gaps:** No pipeline duration metrics exposed yet. No dead letter queue for permanently failed tasks. No Qdrant point count monitoring. Stage 4 Redis TTL (24h) could expire before stage 5 runs if the pipeline is paused.
30 .gsd/milestones/M001/slices/S03/tasks/T05-VERIFY.json Normal file
@ -0,0 +1,30 @@

{
  "schemaVersion": 1,
  "taskId": "T05",
  "unitId": "M001/S03/T05",
  "timestamp": 1774824686266,
  "passed": false,
  "discoverySource": "task-plan",
  "checks": [
    {
      "command": "cd backend",
      "exitCode": 0,
      "durationMs": 4,
      "verdict": "pass"
    },
    {
      "command": "python -m pytest tests/test_pipeline.py -v",
      "exitCode": 4,
      "durationMs": 240,
      "verdict": "fail"
    },
    {
      "command": "python -m pytest tests/ -v",
      "exitCode": 5,
      "durationMs": 238,
      "verdict": "fail"
    }
  ],
  "retryAttempt": 1,
  "maxRetries": 2
}
|
|
@ -1,6 +1,200 @@
# S04: Review Queue Admin UI

**Goal:** Admin can review, edit, approve, reject, split, and merge extracted key moments via a web UI. A mode toggle switches between review mode (moments queued for human review) and auto mode (moments publish directly).

**Demo:** After this: Admin views pending key moments, approves/edits/rejects them, and toggles between review and auto mode.

## Tasks
- [x] **T01: Built 9 review queue API endpoints (queue, stats, approve, reject, edit, split, merge, get/set mode) with Redis mode toggle, error handling, and 24 integration tests — all passing alongside the existing suite** — Create the complete review queue backend: new Pydantic schemas for review actions, a review router with 9 endpoints (list queue, stats, approve, reject, edit, split, merge, get mode, set mode), a Redis-backed runtime mode toggle, a mount in main.py, and comprehensive integration tests. Follow the existing async SQLAlchemy patterns from routers/creators.py.

## Steps

1. Add review-specific Pydantic schemas to `backend/schemas.py`: `ReviewQueueItem` (KeyMomentRead + video title + creator name), `ReviewQueueResponse` (paginated), `ReviewStatsResponse` (counts per status), `MomentEditRequest` (editable fields: title, summary, start_time, end_time, content_type, plugins), `MomentSplitRequest` (split_time: float), `ReviewModeResponse` and `ReviewModeUpdate` (mode: bool).
2. Create `backend/routers/review.py` with these async endpoints:
   - `GET /review/queue` — List key moments filtered by a `status` query param (pending/approved/edited/rejected/all), paginated with `offset`/`limit`, joined with SourceVideo.filename and Creator.name. Default filter: pending. Order by created_at desc.
   - `GET /review/stats` — Return counts grouped by review_status (pending, approved, edited, rejected) using SQL count + group by.
   - `POST /review/moments/{moment_id}/approve` — Set review_status=approved, return the updated moment. 404 if not found.
   - `POST /review/moments/{moment_id}/reject` — Set review_status=rejected, return the updated moment. 404 if not found.
   - `PUT /review/moments/{moment_id}` — Update editable fields from MomentEditRequest, set review_status=edited, return the updated moment. 404 if not found.
   - `POST /review/moments/{moment_id}/split` — Split the moment at `split_time` into two moments. Validate that split_time is strictly between start_time and end_time. The original keeps [start_time, split_time); the new moment gets [split_time, end_time]. Both keep the same source_video_id and technique_page_id. Return both moments. 400 on invalid split_time.
   - `POST /review/moments/{moment_id}/merge` — Accept `target_moment_id` in the body. Merge the two moments: combined summary, min(start_time), max(end_time); delete the target and return the merged result. Both must belong to the same source_video. 400 if different videos. 404 if either is not found.
   - `GET /review/mode` — Read the current mode from the Redis key `chrysopedia:review_mode`. If not in Redis, fall back to the `settings.review_mode` default.
   - `PUT /review/mode` — Set the mode in the Redis key `chrysopedia:review_mode`. Return the new mode.
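The split endpoint's timestamp logic reduces to a pure helper that the route can call before touching the database. A minimal sketch: the function name and the `(start, end)` tuple representation are assumptions, not the actual implementation.

```python
def split_interval(start_time: float, end_time: float, split_time: float):
    """Split a moment's [start_time, end_time] range at split_time.

    Returns ((start_time, split_time), (split_time, end_time)).
    Raises ValueError when split_time is not strictly inside the range,
    which the endpoint would translate into an HTTP 400.
    """
    if not (start_time < split_time < end_time):
        raise ValueError(
            f"split_time {split_time} must be strictly between "
            f"{start_time} and {end_time}"
        )
    # The original moment keeps the left half; a new KeyMoment row gets
    # the right half. Both would keep the same source_video_id and
    # technique_page_id per the step above.
    return (start_time, split_time), (split_time, end_time)
```

Keeping the validation strict (`<`, not `<=`) also covers the boundary cases listed under Negative Tests: splitting at the exact start_time or end_time is rejected.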
3. Add a Redis client helper. Create a small `backend/redis_client.py` module with a `get_redis()` async function using `redis.asyncio.Redis.from_url(settings.redis_url)`. Import it in the review router.

4. Mount the review router in `backend/main.py`: `app.include_router(review.router, prefix="/api/v1")`.

5. Add `redis` (the async Redis client) to `backend/requirements.txt` if not already present.
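The mode endpoints' fallback rule (Redis value wins; config default when the key is absent or Redis is down) can be isolated into a pure helper. A sketch under one loud assumption: the toggle's Redis encoding is not specified in this plan, so `b"1"`/`b"0"` is assumed here purely for illustration.

```python
def resolve_review_mode(raw, config_default: bool) -> bool:
    """Resolve the effective review mode.

    raw is the value read from the Redis key chrysopedia:review_mode
    (None when the key is missing or the read timed out); config_default
    is settings.review_mode. The b"1"/b"0" encoding is an assumption.
    """
    if raw is None:
        return config_default  # key absent or Redis unavailable: fall back
    return raw == b"1"
```

The GET handler would call this with whatever `await redis.get("chrysopedia:review_mode")` returned; the PUT handler writes the encoded value and returns the new mode.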
6. Create `backend/tests/test_review.py` with integration tests using the established conftest patterns (async client, real PostgreSQL):
   - Test the list queue returns empty when no moments exist
   - Test the list queue returns moments with video/creator info after seeding
   - Test filtering by status works (seed moments with different statuses)
   - Test the stats endpoint returns correct counts
   - Test approve sets review_status=approved
   - Test reject sets review_status=rejected
   - Test edit updates fields and sets review_status=edited
   - Test split creates two moments with correct timestamps
   - Test split returns 400 for an invalid split_time (outside the range)
   - Test merge combines two moments correctly
   - Test merge returns 400 for moments from different videos
   - Test approve/reject/edit return 404 for a nonexistent moment
   - Test mode get/set (mock Redis)

## Must-Haves

- [ ] All 9 review endpoints return correct HTTP status codes and response bodies
- [ ] Split validates that split_time is strictly between start_time and end_time
- [ ] Merge validates that both moments belong to the same source_video
- [ ] Mode toggle reads/writes Redis and falls back to the config default
- [ ] All review tests pass alongside the existing test suite
- [ ] Review router mounted in main.py

## Failure Modes

| Dependency | On error | On timeout | On malformed response |
|------------|----------|------------|-----------------------|
| PostgreSQL | SQLAlchemy raises, FastAPI returns 500 | Connection timeout → 500 | N/A (ORM handles) |
| Redis (mode toggle) | Return 503 with error detail | Timeout → fall back to config default | N/A (simple get/set) |

## Negative Tests

- **Malformed inputs**: split_time outside the moment range → 400, merging moments from different videos → 400, edit with an empty title → validation error
- **Error paths**: approve/reject/edit/split a nonexistent moment → 404, merge with a nonexistent target → 404
- **Boundary conditions**: split at the exact start_time or end_time → 400, merging a moment with itself → 400, empty queue → empty list

## Verification

- `cd backend && python -m pytest tests/test_review.py -v` — all tests pass
- `cd backend && python -m pytest tests/ -v` — no regressions (all existing tests still pass)
- `python -c "from routers.review import router; print(len(router.routes))"` — prints 9 (routes registered)

## Observability Impact

- Signals added: INFO log on each review action (approve/reject/edit/split/merge) with moment_id
- How a future agent inspects: `GET /api/v1/review/stats` shows pending/approved/edited/rejected counts
- Failure state exposed: 404 responses include the moment_id that was not found; 400 responses include validation details

- Estimate: 2h
- Files: backend/schemas.py, backend/routers/review.py, backend/redis_client.py, backend/main.py, backend/requirements.txt, backend/tests/test_review.py
- Verify: cd backend && python -m pytest tests/test_review.py -v && python -m pytest tests/ -v
- [ ] **T02: Bootstrap React + Vite + TypeScript frontend with API client** — Replace the placeholder frontend with a real React + Vite + TypeScript application. Install dependencies, configure Vite with an API proxy for development, create the app shell with React Router, and build a typed API client module for the review endpoints. Verify that `npm run build` produces `dist/index.html` compatible with the existing Docker build pipeline.

## Steps

1. Initialize the React app in `frontend/`. Replace `package.json` with proper dependencies:
   - `react`, `react-dom`, `react-router-dom` for the app
   - `typescript`, `@types/react`, `@types/react-dom` for types
   - `vite`, `@vitejs/plugin-react` for build tooling
   - Scripts: `dev` → `vite`, `build` → `tsc -b && vite build`, `preview` → `vite preview`

2. Create `frontend/vite.config.ts` with the React plugin and a dev server proxy (`/api` → `http://localhost:8001`) so the frontend dev server can reach the backend during development.

3. Create `frontend/tsconfig.json` and `frontend/tsconfig.app.json` with a strict TypeScript config targeting ES2020+ and JSX.

4. Create `frontend/index.html` — the Vite entry point with `<div id="root">` and `<script type="module" src="/src/main.tsx">`.

5. Create the app shell files:
   - `frontend/src/main.tsx` — ReactDOM.createRoot, render App inside BrowserRouter
   - `frontend/src/App.tsx` — Routes: `/admin/review` → ReviewQueue page, `/admin/review/:momentId` → MomentDetail page, `/` → redirect to `/admin/review`. Simple nav header with a "Chrysopedia Admin" title.
   - `frontend/src/App.css` — Minimal admin styles: clean sans-serif typography, card-based layout, status badge colors (pending=amber, approved=green, edited=blue, rejected=red)

6. Create `frontend/src/api/client.ts` — a typed API client with functions for all review endpoints:
   - `fetchQueue(params)` → GET /api/v1/review/queue
   - `fetchStats()` → GET /api/v1/review/stats
   - `approveMoment(id)` → POST /api/v1/review/moments/{id}/approve
   - `rejectMoment(id)` → POST /api/v1/review/moments/{id}/reject
   - `editMoment(id, data)` → PUT /api/v1/review/moments/{id}
   - `splitMoment(id, splitTime)` → POST /api/v1/review/moments/{id}/split
   - `mergeMoments(id, targetId)` → POST /api/v1/review/moments/{id}/merge
   - `getReviewMode()` → GET /api/v1/review/mode
   - `setReviewMode(enabled)` → PUT /api/v1/review/mode

   All functions use fetch() with proper error handling, plus TypeScript interfaces for all request/response types.

7. Create placeholder page components (just enough to verify that routing works):
   - `frontend/src/pages/ReviewQueue.tsx` — renders a "Review Queue" heading + "Loading..." text
   - `frontend/src/pages/MomentDetail.tsx` — renders a "Moment Detail" heading + shows the moment ID from URL params

8. Run `npm install` and `npm run build` to verify that the build produces `dist/index.html`. Verify that the output directory structure matches what `docker/Dockerfile.web` expects.

## Must-Haves

- [ ] `npm run build` succeeds and produces `dist/index.html`
- [ ] `npm run dev` starts the Vite dev server
- [ ] React Router routes `/admin/review` and `/admin/review/:momentId` render correctly
- [ ] API client module exports typed functions for all 9 review endpoints
- [ ] TypeScript compilation passes with no errors
- [ ] Build output is compatible with the existing `docker/Dockerfile.web` (files in `dist/`)

## Verification

- `cd frontend && npm run build && test -f dist/index.html` — build succeeds
- `cd frontend && npx tsc --noEmit` — TypeScript has no errors
- `grep -q 'fetchQueue\|approveMoment\|getReviewMode' frontend/src/api/client.ts` — API client has the key functions

- Estimate: 1h
- Files: frontend/package.json, frontend/vite.config.ts, frontend/tsconfig.json, frontend/tsconfig.app.json, frontend/index.html, frontend/src/main.tsx, frontend/src/App.tsx, frontend/src/App.css, frontend/src/api/client.ts, frontend/src/pages/ReviewQueue.tsx, frontend/src/pages/MomentDetail.tsx
- Verify: cd frontend && npm run build && test -f dist/index.html && npx tsc --noEmit
- [ ] **T03: Build review queue UI pages with status filters, moment actions, and mode toggle** — Implement the full admin review queue UI: the queue list page with status filter tabs and a stats summary, the moment detail/review page with approve/reject/edit/split/merge actions, and the review mode toggle. Wire all pages to the API client from T02.

## Steps

1. Build `frontend/src/pages/ReviewQueue.tsx` — the main admin page:
   - Stats bar at the top showing counts per status (pending, approved, edited, rejected) fetched from `/api/v1/review/stats`
   - Filter tabs: All, Pending, Approved, Edited, Rejected — clicking a tab filters the queue list
   - Queue list: cards showing the moment title, summary excerpt (first 150 chars), video filename, creator name, review_status badge, and timestamps. Clicking a card navigates to `/admin/review/{momentId}`
   - Pagination: Previous/Next buttons with offset/limit
   - Mode toggle in the header area: switch between Review Mode and Auto Mode via `PUT /api/v1/review/mode`. Show the current mode with a visual indicator (green dot for review, amber for auto)
   - Empty state: show a message when no moments match the current filter
   - Use `useEffect` + `useState` for data fetching (no external state library is needed for a single-admin tool)

2. Build `frontend/src/pages/MomentDetail.tsx` — the individual moment review page:
   - Display the full moment data: title, summary, content_type, start_time/end_time (formatted as mm:ss), plugins list, raw_transcript (if available), review_status badge
   - Show the source video filename and creator name
   - Action buttons row:
     - Approve (green) — calls `POST .../approve`, navigates back to the queue on success
     - Reject (red) — calls `POST .../reject`, navigates back to the queue on success
     - Edit — toggles an inline edit mode for the title, summary, and content_type fields. The Save button calls `PUT .../` with the edited data
     - Split — opens a split dialog: a text input for the split timestamp (validated between start_time and end_time), calls `POST .../split`
     - Merge — opens a merge dialog: a dropdown to select another moment from the same video, calls `POST .../merge`
   - Back link to the queue
   - Loading and error states for all API calls

3. Create `frontend/src/components/StatusBadge.tsx` — a reusable status badge component with color coding (pending=amber, approved=green, edited=blue, rejected=red).

4. Create `frontend/src/components/ModeToggle.tsx` — the review/auto mode toggle component, extracted from the queue page for reuse in the header.

5. Update `frontend/src/App.tsx` if needed to add the mode toggle to the global nav header.

6. Update `frontend/src/App.css` with styles for:
   - Stats bar (flex row of count cards)
   - Filter tabs (horizontal tab bar with active indicator)
   - Queue cards (bordered cards with hover effect)
   - Status badges (colored pill shapes)
   - Action buttons (colored, with hover/disabled states)
   - Edit form (inline fields with save/cancel)
   - Split/merge dialogs (modal overlays)
   - Responsive layout (single column on narrow screens)

7. Verify that `npm run build` still succeeds after all UI changes.

## Must-Haves

- [ ] Queue page loads and displays moments from the API with status filter tabs
- [ ] Stats bar shows correct counts per review status
- [ ] Clicking a moment navigates to the detail page
- [ ] Approve and reject actions work and navigate back to the queue
- [ ] Edit mode allows inline editing of title/summary/content_type with save
- [ ] Split dialog validates split_time and creates two moments
- [ ] Merge dialog shows moments from the same video and merges on confirm
- [ ] Mode toggle reads and updates review/auto mode via the API
- [ ] Build succeeds with no TypeScript errors

## Verification

- `cd frontend && npm run build && test -f dist/index.html` — build succeeds
- `cd frontend && npx tsc --noEmit` — no TypeScript errors
- `grep -q 'StatusBadge\|ModeToggle' frontend/src/pages/ReviewQueue.tsx` — components integrated
- `grep -q 'approve\|reject\|split\|merge' frontend/src/pages/MomentDetail.tsx` — all actions present

- Estimate: 2h
- Files: frontend/src/pages/ReviewQueue.tsx, frontend/src/pages/MomentDetail.tsx, frontend/src/components/StatusBadge.tsx, frontend/src/components/ModeToggle.tsx, frontend/src/App.tsx, frontend/src/App.css
- Verify: cd frontend && npm run build && test -f dist/index.html && npx tsc --noEmit
91
.gsd/milestones/M001/slices/S04/S04-RESEARCH.md
Normal file
@ -0,0 +1,91 @@
# S04 — Review Queue Admin UI — Research

**Date:** 2026-03-29

## Summary

S04 builds the admin review queue for Chrysopedia — the UI and API that let an administrator review, edit, approve, reject, split, and merge extracted key moments before they're published. The spec (§8) defines two modes: **review mode** (all moments queued for human review) and **auto mode** (moments publish directly, and the queue becomes an audit log). A system-level mode toggle switches between them. The `review_mode: bool = True` setting already exists in `config.py`.

The work naturally splits into two halves: **backend API endpoints** for review queue operations (list/filter moments, approve/reject/edit, split/merge, mode toggle, status counts) and a **React frontend** admin UI. The frontend is currently a bare placeholder (`package.json` with no framework), but the Docker build pipeline (Node build → nginx SPA serving with `/api/` proxy) is already wired. The backend has established patterns for async SQLAlchemy endpoints, Pydantic schemas, and test fixtures.

This is medium-complexity CRUD + admin UI work. The data model already has `KeyMoment.review_status` (pending/approved/edited/rejected) and `SourceVideo.processing_status`. The riskiest parts are: (1) split/merge operations on key moments (modifying timestamps and creating/deleting rows), and (2) bootstrapping the React app from zero. No novel technology or unfamiliar APIs are involved.

## Recommendation

Build the backend review API first (new `routers/review.py`), then initialize the React frontend with a minimal admin UI. Use the existing async SQLAlchemy patterns from `routers/creators.py` and `routers/videos.py`. For the frontend, use React + React Router + a lightweight fetching approach (fetch or a small library). No heavy framework is needed — this is a single-user admin tool, not a high-traffic public site.

The mode toggle should be a runtime-mutable setting, not just the config file default. Since there's only one admin, store the toggle in Redis (like stage 4's classification data) or add a `system_settings` table. Redis is simpler and already used by the project.

## Implementation Landscape

### Key Files

**Existing (read/extend):**
- `backend/models.py` — Has `KeyMoment` (with `review_status: ReviewStatus`), `SourceVideo` (with `processing_status`), `TechniquePage`. All the DB models needed for review are already defined.
- `backend/schemas.py` — Has `KeyMomentRead`, `KeyMomentBase`. Needs new schemas for review actions (approve, edit, split, merge responses).
- `backend/config.py` — Has `review_mode: bool = True`. This is the default; the runtime toggle needs a separate mechanism.
- `backend/database.py` — `get_session` async dependency. Used by all routers.
- `backend/main.py` — Mount point for the new review router.
- `backend/tests/conftest.py` — Test infrastructure with async/sync fixtures, `pre_ingested_video`.
- `backend/pipeline/stages.py` — `stage5_synthesis` reads `settings.review_mode` to set processing_status. The mode toggle affects new pipeline runs, not existing moments.
- `backend/routers/pipeline.py` — `POST /pipeline/trigger/{video_id}` for re-processing after prompt edits.

**New files to create:**
- `backend/routers/review.py` — Review queue API endpoints:
  - `GET /api/v1/review/queue` — List key moments with a status filter, pagination, grouped by video
  - `GET /api/v1/review/stats` — Counts by review_status (pending, approved, edited, rejected)
  - `POST /api/v1/review/moments/{moment_id}/approve` — Set review_status=approved
  - `POST /api/v1/review/moments/{moment_id}/reject` — Set review_status=rejected
  - `PUT /api/v1/review/moments/{moment_id}` — Edit fields + set review_status=edited
  - `POST /api/v1/review/moments/{moment_id}/split` — Split into two moments
  - `POST /api/v1/review/moments/{moment_id}/merge` — Merge with an adjacent moment
  - `GET /api/v1/review/mode` — Get the current review/auto mode
  - `PUT /api/v1/review/mode` — Toggle review/auto mode
- `backend/tests/test_review.py` — Integration tests for the review API
- `frontend/src/` — React app source (App, Router, pages)
- `frontend/package.json` — Updated with React and build tooling
- `frontend/vite.config.ts` — Vite config for the React build
- `frontend/index.html` — SPA entry point
- `frontend/src/pages/ReviewQueue.tsx` — Queue view with filter tabs, status counts
- `frontend/src/pages/MomentReview.tsx` — Individual moment review with actions
- `frontend/src/components/` — Shared UI components
### Build Order

1. **Backend review API + tests** — Build `routers/review.py` with all endpoints, add schemas, mount in `main.py`, and write integration tests. This is the foundation — the frontend is just a consumer of these endpoints. Prove the API works with tests before touching the frontend.

2. **React app bootstrap** — Initialize the React app in `frontend/` with Vite + TypeScript. Get `npm run dev` and `npm run build` working. Verify the Docker build pipeline produces a working SPA.

3. **Review queue UI pages** — Build the queue view (list moments, filter tabs, status counts) and the moment review page (display the moment + raw transcript, action buttons). Wire to the backend API.

4. **Mode toggle + integration** — Add the mode toggle UI and connect it to the backend toggle endpoint. Verify the full flow: pipeline produces moments → admin reviews/approves → status updates correctly.

### Verification Approach

**Backend:**
- `cd backend && python -m pytest tests/test_review.py -v` — All review API tests pass
- `cd backend && python -m pytest tests/ -v` — All existing tests still pass (no regressions)
- Manual curl: `GET /api/v1/review/stats` returns counts, `POST .../approve` changes status

**Frontend:**
- `cd frontend && npm run build` — Build succeeds, produces `dist/index.html`
- `cd frontend && npm run dev` — Dev server starts, admin UI renders
- Navigate to the admin review queue page and see the list of moments (requires seeded data or a mocked API)

**Integration:**
- Docker compose builds both services successfully
- Nginx proxies `/api/` to the backend and serves the frontend SPA on `/`

## Constraints

- **Async-only in FastAPI handlers** — All review endpoints must use async SQLAlchemy (`AsyncSession`), following the pattern in the existing routers. The sync engine/session pattern is only for Celery tasks.
- **No auth** — The spec doesn't mention authentication for S04. This is a single-admin internal tool. Auth can be added later if needed.
- **Mode toggle persistence** — `config.py`'s `review_mode` comes from an environment variable and is cached via `lru_cache`. A runtime toggle needs Redis or a DB table; changing env vars at runtime is fragile. Redis is the simpler choice — the project already uses it for stage 4 classification data.
- **KeyMoment lacks topic_tags/topic_category columns** — Classification data lives in Redis (24h TTL). The review UI should display tags from Redis if available, or show "not classified" if the Redis data has expired. This is a read-only concern for S04.
- **The existing Docker build expects `npm run build` to produce `dist/`** — The frontend build must output to `frontend/dist/` for the nginx Dockerfile to work.
## Common Pitfalls

- **Split/merge moment complexity** — Splitting a moment requires creating a new `KeyMoment` row and adjusting timestamps on both the original and the new row. Merging requires combining summaries and extending timestamp ranges, then deleting one row. Both operations must handle the `technique_page_id` foreign key — split moments should keep the same page link, and merged moments should keep one.
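The merge arithmetic in the pitfall above is easy to pin down as a pure helper, separate from the row deletion and foreign-key handling. A sketch: the `(start, end, summary)` triple and the space separator between summaries are assumptions for illustration.

```python
def merge_intervals(a, b):
    """Combine two moments' (start_time, end_time, summary) triples.

    The merged moment spans min(start) to max(end) with the summaries
    joined; the endpoint would then delete the target row and keep one
    technique_page_id. Same-video validation happens before this step.
    """
    a_start, a_end, a_summary = a
    b_start, b_end, b_summary = b
    return (
        min(a_start, b_start),
        max(a_end, b_end),
        f"{a_summary} {b_summary}".strip(),  # separator is an assumption
    )
```

Using min/max rather than assuming adjacency means the helper also handles overlapping or out-of-order moment pairs.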
- **Redis mode toggle vs config.py** — If the mode toggle is stored in Redis but `pipeline/stages.py` reads `settings.review_mode` from config, changing the toggle won't affect running pipeline tasks. The pipeline needs to read the toggle from the same source as the admin UI: either the pipeline reads from Redis too, or the toggle updates the Settings object.
- **Frontend build from zero** — React + Vite + TypeScript must be installed from scratch in the existing `frontend/` directory. The `package.json` exists but has no deps. Be careful not to break the Docker build — the Dockerfile runs `npm ci` then `npm run build`.
102
.gsd/milestones/M001/slices/S04/tasks/T01-PLAN.md
Normal file
@ -0,0 +1,102 @@
---
estimated_steps: 54
estimated_files: 6
skills_used: []
---

# T01: Build review queue API endpoints with Redis mode toggle and integration tests

Create the complete review queue backend: new Pydantic schemas for review actions, a review router with 9 endpoints (list queue, stats, approve, reject, edit, split, merge, get mode, set mode), a Redis-backed runtime mode toggle, a mount in main.py, and comprehensive integration tests. Follow the existing async SQLAlchemy patterns from routers/creators.py.

## Steps
1. Add review-specific Pydantic schemas to `backend/schemas.py`: `ReviewQueueItem` (KeyMomentRead + video title + creator name), `ReviewQueueResponse` (paginated), `ReviewStatsResponse` (counts per status), `MomentEditRequest` (editable fields: title, summary, start_time, end_time, content_type, plugins), `MomentSplitRequest` (split_time: float), `ReviewModeResponse` and `ReviewModeUpdate` (mode: bool).

2. Create `backend/routers/review.py` with these async endpoints:
   - `GET /review/queue` — List key moments filtered by a `status` query param (pending/approved/edited/rejected/all), paginated with `offset`/`limit`, joined with SourceVideo.filename and Creator.name. Default filter: pending. Order by created_at desc.
   - `GET /review/stats` — Return counts grouped by review_status (pending, approved, edited, rejected) using SQL count + group by.
   - `POST /review/moments/{moment_id}/approve` — Set review_status=approved, return the updated moment. 404 if not found.
   - `POST /review/moments/{moment_id}/reject` — Set review_status=rejected, return the updated moment. 404 if not found.
   - `PUT /review/moments/{moment_id}` — Update editable fields from MomentEditRequest, set review_status=edited, return the updated moment. 404 if not found.
   - `POST /review/moments/{moment_id}/split` — Split the moment at `split_time` into two moments. Validate that split_time is strictly between start_time and end_time. The original keeps [start_time, split_time); the new moment gets [split_time, end_time]. Both keep the same source_video_id and technique_page_id. Return both moments. 400 on invalid split_time.
   - `POST /review/moments/{moment_id}/merge` — Accept `target_moment_id` in the body. Merge the two moments: combined summary, min(start_time), max(end_time); delete the target and return the merged result. Both must belong to the same source_video. 400 if different videos. 404 if either is not found.
   - `GET /review/mode` — Read the current mode from the Redis key `chrysopedia:review_mode`. If not in Redis, fall back to the `settings.review_mode` default.
   - `PUT /review/mode` — Set the mode in the Redis key `chrysopedia:review_mode`. Return the new mode.
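The stats endpoint's response shape — one count per review status, with explicit zeros for statuses that have no rows — can be sketched without SQL. This is a sketch for illustration only: the real endpoint would use a SQLAlchemy `count` + `group by` query rather than in-memory counting, and the `review_stats` name is an assumption.

```python
from collections import Counter

# The four review statuses named in the schema above.
REVIEW_STATUSES = ("pending", "approved", "edited", "rejected")


def review_stats(statuses):
    """Count moments per review status, zero-filling missing statuses.

    Mirrors what `SELECT review_status, count(*) ... GROUP BY review_status`
    returns, except the zero rows that GROUP BY omits are filled in
    explicitly so the response always has all four keys.
    """
    counts = Counter(statuses)
    return {status: counts.get(status, 0) for status in REVIEW_STATUSES}
```

Zero-filling in the handler keeps the frontend stats bar simple: it can render all four counts without checking for missing keys.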
|
||||||
|
3. Add Redis client helper. Create a small `backend/redis_client.py` module with `get_redis()` async function using `redis.asyncio.Redis.from_url(settings.redis_url)`. Import in review router.
|
||||||
|
|
||||||
|
4. Mount the review router in `backend/main.py`: `app.include_router(review.router, prefix="/api/v1")`.
|
||||||
|
|
||||||
|
5. Add `redis` (async redis client) to `backend/requirements.txt` if not already present.
|
||||||
|
|
||||||
|
6. Create `backend/tests/test_review.py` with integration tests using the established conftest patterns (async client, real PostgreSQL):
|
||||||
|
- Test list queue returns empty when no moments exist
|
||||||
|
- Test list queue returns moments with video/creator info after seeding
|
||||||
|
- Test filter by status works (seed moments with different statuses)
|
||||||
|
- Test stats endpoint returns correct counts
|
||||||
|
- Test approve sets review_status=approved
|
||||||
|
- Test reject sets review_status=rejected
|
||||||
|
- Test edit updates fields and sets review_status=edited
|
||||||
|
- Test split creates two moments with correct timestamps
|
||||||
|
- Test split returns 400 for invalid split_time (outside range)
|
||||||
|
- Test merge combines two moments correctly
|
||||||
|
- Test merge returns 400 for moments from different videos
|
||||||
|
- Test approve/reject/edit return 404 for nonexistent moment
|
||||||
|
- Test mode get/set (mock Redis)

## Must-Haves

- [ ] All 9 review endpoints return correct HTTP status codes and response bodies
- [ ] Split validates split_time is strictly between start_time and end_time
- [ ] Merge validates both moments belong to same source_video
- [ ] Mode toggle reads/writes Redis, falls back to config default
- [ ] All review tests pass alongside existing test suite
- [ ] Review router mounted in main.py

## Failure Modes

| Dependency | On error | On timeout | On malformed response |
|------------|----------|------------|-----------------------|
| PostgreSQL | SQLAlchemy raises, FastAPI returns 500 | Connection timeout → 500 | N/A (ORM handles) |
| Redis (mode toggle) | Return 503 with error detail | Timeout → fall back to config default | N/A (simple get/set) |
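The Redis row's timeout behavior (fall back to the config default rather than fail) can be sketched as a small wrapper. This is an illustration under assumptions: `fetch` stands in for the real `redis.get` call, and the timeout value is arbitrary.

```python
import asyncio


async def read_mode_with_fallback(fetch, default: bool, timeout: float = 0.05) -> bool:
    """Return the stored mode, falling back to the config default on error or timeout."""
    try:
        value = await asyncio.wait_for(fetch(), timeout=timeout)
    except (asyncio.TimeoutError, ConnectionError):
        # Redis unreachable or unresponsive: serve the config default
        return default
    return default if value is None else value == "true"


async def slow_fetch():
    await asyncio.sleep(1)  # simulates an unresponsive Redis
    return "false"


print(asyncio.run(read_mode_with_fallback(slow_fetch, default=True)))  # → True
```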

## Negative Tests

- **Malformed inputs**: split_time outside moment range → 400, merge moments from different videos → 400, edit with empty title → validation error
- **Error paths**: approve/reject/edit/split nonexistent moment → 404, merge with nonexistent target → 404
- **Boundary conditions**: split at exact start_time or end_time → 400, merge moment with itself → 400, empty queue → empty list
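The boundary rules above can be sketched as plain validators (hypothetical helper names; in the actual router these checks live inline in the endpoint handlers):

```python
def validate_split(start_time: float, end_time: float, split_time: float) -> None:
    # split_time must be strictly inside the moment; both endpoints are rejected
    if split_time <= start_time or split_time >= end_time:
        raise ValueError("split_time must be strictly between start_time and end_time")


def validate_merge(video_a: str, video_b: str, id_a: str, id_b: str) -> None:
    if id_a == id_b:
        raise ValueError("cannot merge a moment with itself")
    if video_a != video_b:
        raise ValueError("cannot merge moments from different source videos")


# Boundary checks mirror the negative tests: exact start and exact end fail,
# a strictly interior timestamp passes.
ok = []
for t in (10.0, 20.0, 15.0):
    try:
        validate_split(10.0, 20.0, t)
        ok.append(t)
    except ValueError:
        pass

print(ok)  # → [15.0]
```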

## Verification

- `cd backend && python -m pytest tests/test_review.py -v` — all tests pass
- `cd backend && python -m pytest tests/ -v` — no regressions (all existing tests still pass)
- `python -c "from routers.review import router; print(len(router.routes))"` — prints 9 (routes registered)

## Observability Impact

- Signals added: INFO log on each review action (approve/reject/edit/split/merge) with moment_id
- How a future agent inspects: `GET /api/v1/review/stats` shows pending/approved/edited/rejected counts
- Failure state exposed: 404 responses include the moment_id that was not found, 400 responses include validation details

## Inputs

- `backend/models.py` — KeyMoment, SourceVideo, Creator models with review_status enum
- `backend/schemas.py` — existing Pydantic schemas to extend
- `backend/database.py` — get_session async dependency
- `backend/config.py` — Settings with review_mode and redis_url
- `backend/main.py` — router mount point
- `backend/routers/creators.py` — pattern reference for async SQLAlchemy endpoints
- `backend/tests/conftest.py` — test fixtures (db_engine, client, sync_engine, pre_ingested_video)

## Expected Output

- `backend/routers/review.py` — review queue API router with 9 endpoints
- `backend/redis_client.py` — async Redis client helper
- `backend/schemas.py` — extended with review-specific Pydantic schemas
- `backend/main.py` — updated to mount review router
- `backend/requirements.txt` — updated with redis dependency
- `backend/tests/test_review.py` — integration tests for all review endpoints

## Verification

cd backend && python -m pytest tests/test_review.py -v && python -m pytest tests/ -v

86 .gsd/milestones/M001/slices/S04/tasks/T01-SUMMARY.md Normal file

@ -0,0 +1,86 @@
---
id: T01
parent: S04
milestone: M001
provides: []
requires: []
affects: []
key_files: ["backend/routers/review.py", "backend/schemas.py", "backend/redis_client.py", "backend/main.py", "backend/tests/test_review.py"]
key_decisions: ["Redis mode toggle uses per-request get_redis() with aclose() rather than a connection pool", "Split creates new moment with '(split)' title suffix", "Merge combines summaries with double-newline separator"]
patterns_established: []
drill_down_paths: []
observability_surfaces: []
duration: ""
verification_result: "All three slice verification checks pass: (1) pytest tests/test_review.py → 24 passed, (2) pytest tests/ → 40 passed (no regressions), (3) route count check → prints 9."
completed_at: 2026-03-29T23:13:28.671Z
blocker_discovered: false
---
# T01: Built 9 review queue API endpoints (queue, stats, approve, reject, edit, split, merge, get/set mode) with Redis mode toggle, error handling, and 24 integration tests — all passing alongside existing suite

> Built 9 review queue API endpoints (queue, stats, approve, reject, edit, split, merge, get/set mode) with Redis mode toggle, error handling, and 24 integration tests — all passing alongside existing suite

## What Happened

---
id: T01
parent: S04
milestone: M001
key_files:
- backend/routers/review.py
- backend/schemas.py
- backend/redis_client.py
- backend/main.py
- backend/tests/test_review.py
key_decisions:
- Redis mode toggle uses per-request get_redis() with aclose() rather than a connection pool
- Split creates new moment with '(split)' title suffix
- Merge combines summaries with double-newline separator
duration: ""
verification_result: passed
completed_at: 2026-03-29T23:13:28.672Z
blocker_discovered: false
---

# T01: Built 9 review queue API endpoints (queue, stats, approve, reject, edit, split, merge, get/set mode) with Redis mode toggle, error handling, and 24 integration tests — all passing alongside existing suite

**Built 9 review queue API endpoints (queue, stats, approve, reject, edit, split, merge, get/set mode) with Redis mode toggle, error handling, and 24 integration tests — all passing alongside existing suite**

## What Happened
Created the complete review queue backend: 8 new Pydantic schemas in schemas.py, a redis_client.py helper module, a review router with 9 async endpoints (list queue with status filter, stats, approve, reject, edit, split with timestamp validation, merge with same-video validation, get/set review mode via Redis with config fallback), mounted in main.py, and 24 comprehensive integration tests covering happy paths, 404s, 400s for boundary conditions, and Redis mock tests for mode toggle.

## Verification

All three slice verification checks pass: (1) pytest tests/test_review.py → 24 passed, (2) pytest tests/ → 40 passed (no regressions), (3) route count check → prints 9.

## Verification Evidence

| # | Command | Exit Code | Verdict | Duration |
|---|---------|-----------|---------|----------|
| 1 | `cd backend && python -m pytest tests/test_review.py -v` | 0 | ✅ pass | 11100ms |
| 2 | `cd backend && python -m pytest tests/ -v` | 0 | ✅ pass | 133500ms |
| 3 | `python -c "from routers.review import router; print(len(router.routes))"` | 0 | ✅ pass | 500ms |

## Deviations

None.

## Known Issues

None.

## Files Created/Modified

- `backend/routers/review.py`
- `backend/schemas.py`
- `backend/redis_client.py`
- `backend/main.py`
- `backend/tests/test_review.py`

86 .gsd/milestones/M001/slices/S04/tasks/T02-PLAN.md Normal file

@ -0,0 +1,86 @@
---
estimated_steps: 40
estimated_files: 11
skills_used: []
---

# T02: Bootstrap React + Vite + TypeScript frontend with API client

Replace the placeholder frontend with a real React + Vite + TypeScript application. Install dependencies, configure Vite with an API proxy for development, create the app shell with React Router, and build a typed API client module for the review endpoints. Verify `npm run build` produces `dist/index.html` compatible with the existing Docker build pipeline.

## Steps
1. Initialize the React app in `frontend/`. Replace `package.json` with proper dependencies:
   - `react`, `react-dom`, `react-router-dom` for the app
   - `typescript`, `@types/react`, `@types/react-dom` for types
   - `vite`, `@vitejs/plugin-react` for build tooling
   - Scripts: `dev` → `vite`, `build` → `tsc -b && vite build`, `preview` → `vite preview`

2. Create `frontend/vite.config.ts` with React plugin and dev server proxy (`/api` → `http://localhost:8001`) so the frontend dev server can reach the backend during development.

3. Create `frontend/tsconfig.json` and `frontend/tsconfig.app.json` with strict TypeScript config targeting ES2020+ and JSX.

4. Create `frontend/index.html` — Vite entry point with `<div id="root">` and `<script type="module" src="/src/main.tsx">`.

5. Create app shell files:
   - `frontend/src/main.tsx` — ReactDOM.createRoot, render App with BrowserRouter
   - `frontend/src/App.tsx` — Routes: `/admin/review` → ReviewQueue page, `/admin/review/:momentId` → MomentDetail page, `/` → redirect to `/admin/review`. Simple nav header with "Chrysopedia Admin" title.
   - `frontend/src/App.css` — Minimal admin styles: clean sans-serif typography, card-based layout, status badge colors (pending=amber, approved=green, edited=blue, rejected=red)

6. Create `frontend/src/api/client.ts` — Typed API client with functions for all review endpoints:
   - `fetchQueue(params)` → GET /api/v1/review/queue
   - `fetchStats()` → GET /api/v1/review/stats
   - `approveMoment(id)` → POST /api/v1/review/moments/{id}/approve
   - `rejectMoment(id)` → POST /api/v1/review/moments/{id}/reject
   - `editMoment(id, data)` → PUT /api/v1/review/moments/{id}
   - `splitMoment(id, splitTime)` → POST /api/v1/review/moments/{id}/split
   - `mergeMoments(id, targetId)` → POST /api/v1/review/moments/{id}/merge
   - `getReviewMode()` → GET /api/v1/review/mode
   - `setReviewMode(enabled)` → PUT /api/v1/review/mode

   All functions use fetch() with proper error handling. TypeScript interfaces for all request/response types.
7. Create placeholder page components (just enough to verify routing works):
   - `frontend/src/pages/ReviewQueue.tsx` — renders "Review Queue" heading + "Loading..." text
   - `frontend/src/pages/MomentDetail.tsx` — renders "Moment Detail" heading + shows moment ID from URL params

8. Run `npm install` and `npm run build` to verify the build produces `dist/index.html`. Verify the output directory structure matches what `docker/Dockerfile.web` expects.
## Must-Haves

- [ ] `npm run build` succeeds and produces `dist/index.html`
- [ ] `npm run dev` starts Vite dev server
- [ ] React Router routes `/admin/review` and `/admin/review/:momentId` render correctly
- [ ] API client module exports typed functions for all 9 review endpoints
- [ ] TypeScript compilation passes with no errors
- [ ] Build output is compatible with existing `docker/Dockerfile.web` (files in `dist/`)

## Verification

- `cd frontend && npm run build && test -f dist/index.html` — build succeeds
- `cd frontend && npx tsc --noEmit` — TypeScript has no errors
- `grep -q 'fetchQueue\|approveMoment\|getReviewMode' frontend/src/api/client.ts` — API client has key functions

## Inputs

- `frontend/package.json` — existing placeholder to replace
- `docker/Dockerfile.web` — Docker build expects npm ci + npm run build → dist/
- `docker/nginx.conf` — SPA serving with /api/ proxy
- `backend/routers/review.py` — API endpoint signatures to match in client

## Expected Output

- `frontend/package.json` — updated with React, Vite, TypeScript dependencies
- `frontend/vite.config.ts` — Vite config with React plugin and API proxy
- `frontend/tsconfig.json` — TypeScript project config
- `frontend/tsconfig.app.json` — TypeScript app config
- `frontend/index.html` — SPA entry point
- `frontend/src/main.tsx` — React app entry
- `frontend/src/App.tsx` — App shell with React Router
- `frontend/src/App.css` — Admin UI styles
- `frontend/src/api/client.ts` — Typed API client for review endpoints
- `frontend/src/pages/ReviewQueue.tsx` — Queue page placeholder
- `frontend/src/pages/MomentDetail.tsx` — Moment detail page placeholder

## Verification

cd frontend && npm run build && test -f dist/index.html && npx tsc --noEmit

90 .gsd/milestones/M001/slices/S04/tasks/T03-PLAN.md Normal file

@ -0,0 +1,90 @@
---
estimated_steps: 49
estimated_files: 6
skills_used: []
---

# T03: Build review queue UI pages with status filters, moment actions, and mode toggle

Implement the full admin review queue UI: the queue list page with status filter tabs and stats summary, the moment detail/review page with approve/reject/edit/split/merge actions, and the review mode toggle. Wire all pages to the API client from T02.

## Steps
1. Build `frontend/src/pages/ReviewQueue.tsx` — the main admin page:
   - Stats bar at top showing counts per status (pending, approved, edited, rejected) fetched from `/api/v1/review/stats`
   - Filter tabs: All, Pending, Approved, Edited, Rejected — clicking a tab filters the queue list
   - Queue list: cards showing moment title, summary excerpt (first 150 chars), video filename, creator name, review_status badge, timestamps. Click card → navigate to `/admin/review/{momentId}`
   - Pagination: Previous/Next buttons with offset/limit
   - Mode toggle in header area: switch between Review Mode and Auto Mode, calls `PUT /api/v1/review/mode`. Show current mode with visual indicator (green dot for review, amber for auto)
   - Empty state: show message when no moments match the current filter
   - Use `useEffect` + `useState` for data fetching (no external state library needed for a single-admin tool)

2. Build `frontend/src/pages/MomentDetail.tsx` — individual moment review page:
   - Display full moment data: title, summary, content_type, start_time/end_time (formatted as mm:ss), plugins list, raw_transcript (if available), review_status badge
   - Show source video filename and creator name
   - Action buttons row:
     - Approve (green) — calls `POST .../approve`, navigates back to queue on success
     - Reject (red) — calls `POST .../reject`, navigates back to queue on success
     - Edit — toggles inline edit mode for title, summary, content_type fields. Save button calls `PUT .../` with edited data
     - Split — opens a split dialog: text input for split timestamp (validated between start_time and end_time), calls `POST .../split`
     - Merge — opens a merge dialog: dropdown to select another moment from same video, calls `POST .../merge`
   - Back link to queue
   - Loading and error states for all API calls

3. Create `frontend/src/components/StatusBadge.tsx` — reusable status badge component with color coding (pending=amber, approved=green, edited=blue, rejected=red).

4. Create `frontend/src/components/ModeToggle.tsx` — review/auto mode toggle component extracted from the queue page for reuse in the header.

5. Update `frontend/src/App.tsx` if needed to add the mode toggle to the global nav header.

6. Update `frontend/src/App.css` with styles for:
   - Stats bar (flex row of count cards)
   - Filter tabs (horizontal tab bar with active indicator)
   - Queue cards (bordered cards with hover effect)
   - Status badges (colored pill shapes)
   - Action buttons (colored, with hover/disabled states)
   - Edit form (inline fields with save/cancel)
   - Split/merge dialogs (modal overlays)
   - Responsive layout (single column on narrow screens)

7. Verify `npm run build` still succeeds after all UI changes.
## Must-Haves

- [ ] Queue page loads and displays moments from API with status filter tabs
- [ ] Stats bar shows correct counts per review status
- [ ] Clicking a moment navigates to detail page
- [ ] Approve and reject actions work and navigate back to queue
- [ ] Edit mode allows inline editing of title/summary/content_type with save
- [ ] Split dialog validates split_time and creates two moments
- [ ] Merge dialog shows moments from same video and merges on confirm
- [ ] Mode toggle reads and updates review/auto mode via API
- [ ] Build succeeds with no TypeScript errors

## Verification

- `cd frontend && npm run build && test -f dist/index.html` — build succeeds
- `cd frontend && npx tsc --noEmit` — no TypeScript errors
- `grep -q 'StatusBadge\|ModeToggle' frontend/src/pages/ReviewQueue.tsx` — components integrated
- `grep -q 'approve\|reject\|split\|merge' frontend/src/pages/MomentDetail.tsx` — all actions present

## Inputs

- `frontend/src/api/client.ts` — typed API client functions from T02
- `frontend/src/App.tsx` — app shell with routes from T02
- `frontend/src/App.css` — base styles from T02
- `frontend/src/pages/ReviewQueue.tsx` — placeholder from T02 to replace
- `frontend/src/pages/MomentDetail.tsx` — placeholder from T02 to replace

## Expected Output

- `frontend/src/pages/ReviewQueue.tsx` — full queue page with stats, filters, moment list, mode toggle
- `frontend/src/pages/MomentDetail.tsx` — full detail page with approve/reject/edit/split/merge actions
- `frontend/src/components/StatusBadge.tsx` — reusable status badge component
- `frontend/src/components/ModeToggle.tsx` — review/auto mode toggle component
- `frontend/src/App.tsx` — updated with mode toggle in header if needed
- `frontend/src/App.css` — complete admin UI styles

## Verification

cd frontend && npm run build && test -f dist/index.html && npx tsc --noEmit

@ -12,7 +12,7 @@ from fastapi import FastAPI
 from fastapi.middleware.cors import CORSMiddleware

 from config import get_settings
-from routers import creators, health, ingest, pipeline, videos
+from routers import creators, health, ingest, pipeline, review, videos


 def _setup_logging() -> None:

@ -81,6 +81,7 @@ app.include_router(health.router)
 app.include_router(creators.router, prefix="/api/v1")
 app.include_router(ingest.router, prefix="/api/v1")
 app.include_router(pipeline.router, prefix="/api/v1")
+app.include_router(review.router, prefix="/api/v1")
 app.include_router(videos.router, prefix="/api/v1")

15 backend/redis_client.py Normal file

@ -0,0 +1,15 @@
"""Async Redis client helper for Chrysopedia."""
|
||||||
|
|
||||||
|
import redis.asyncio as aioredis
|
||||||
|
|
||||||
|
from config import get_settings
|
||||||
|
|
||||||
|
|
||||||
|
async def get_redis() -> aioredis.Redis:
|
||||||
|
"""Return an async Redis client from the configured URL.
|
||||||
|
|
||||||
|
Callers should close the connection when done, or use it
|
||||||
|
as a short-lived client within a request handler.
|
||||||
|
"""
|
||||||
|
settings = get_settings()
|
||||||
|
return aioredis.from_url(settings.redis_url, decode_responses=True)
|
||||||

354 backend/routers/review.py Normal file

@ -0,0 +1,354 @@
"""Review queue endpoints for Chrysopedia API.
|
||||||
|
|
||||||
|
Provides admin review workflow: list queue, stats, approve, reject,
|
||||||
|
edit, split, merge key moments, and toggle review/auto mode via Redis.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import logging
|
||||||
|
import uuid
|
||||||
|
from typing import Annotated
|
||||||
|
|
||||||
|
from fastapi import APIRouter, Depends, HTTPException, Query
|
||||||
|
from sqlalchemy import case, func, select
|
||||||
|
from sqlalchemy.ext.asyncio import AsyncSession
|
||||||
|
|
||||||
|
from config import get_settings
|
||||||
|
from database import get_session
|
||||||
|
from models import Creator, KeyMoment, KeyMomentContentType, ReviewStatus, SourceVideo
|
||||||
|
from redis_client import get_redis
|
||||||
|
from schemas import (
|
||||||
|
KeyMomentRead,
|
||||||
|
MomentEditRequest,
|
||||||
|
MomentMergeRequest,
|
||||||
|
MomentSplitRequest,
|
||||||
|
ReviewModeResponse,
|
||||||
|
ReviewModeUpdate,
|
||||||
|
ReviewQueueItem,
|
||||||
|
ReviewQueueResponse,
|
||||||
|
ReviewStatsResponse,
|
||||||
|
)
|
||||||
|
|
||||||
|
logger = logging.getLogger("chrysopedia.review")
|
||||||
|
|
||||||
|
router = APIRouter(prefix="/review", tags=["review"])
|
||||||
|
|
||||||
|
REDIS_MODE_KEY = "chrysopedia:review_mode"
|
||||||
|
|
||||||
|
VALID_STATUSES = {"pending", "approved", "edited", "rejected", "all"}
|
||||||
|
|
||||||
|
|
||||||
|
# ── Helpers ──────────────────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
|
||||||
|
def _moment_to_queue_item(
|
||||||
|
moment: KeyMoment, video_filename: str, creator_name: str
|
||||||
|
) -> ReviewQueueItem:
|
||||||
|
"""Convert a KeyMoment ORM instance + joined fields to a ReviewQueueItem."""
|
||||||
|
data = KeyMomentRead.model_validate(moment).model_dump()
|
||||||
|
data["video_filename"] = video_filename
|
||||||
|
data["creator_name"] = creator_name
|
||||||
|
return ReviewQueueItem(**data)
|
||||||
|
|
||||||
|
|
||||||
|
# ── Endpoints ────────────────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
|
||||||
|
@router.get("/queue", response_model=ReviewQueueResponse)
|
||||||
|
async def list_queue(
|
||||||
|
status: Annotated[str, Query()] = "pending",
|
||||||
|
offset: Annotated[int, Query(ge=0)] = 0,
|
||||||
|
limit: Annotated[int, Query(ge=1, le=100)] = 50,
|
||||||
|
db: AsyncSession = Depends(get_session),
|
||||||
|
) -> ReviewQueueResponse:
|
||||||
|
"""List key moments in the review queue, filtered by status."""
|
||||||
|
if status not in VALID_STATUSES:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=400,
|
||||||
|
detail=f"Invalid status filter '{status}'. Must be one of: {', '.join(sorted(VALID_STATUSES))}",
|
||||||
|
)
|
||||||
|
|
||||||
|
# Base query joining KeyMoment → SourceVideo → Creator
|
||||||
|
base = (
|
||||||
|
select(
|
||||||
|
KeyMoment,
|
||||||
|
SourceVideo.filename.label("video_filename"),
|
||||||
|
Creator.name.label("creator_name"),
|
||||||
|
)
|
||||||
|
.join(SourceVideo, KeyMoment.source_video_id == SourceVideo.id)
|
||||||
|
.join(Creator, SourceVideo.creator_id == Creator.id)
|
||||||
|
)
|
||||||
|
|
||||||
|
if status != "all":
|
||||||
|
base = base.where(KeyMoment.review_status == ReviewStatus(status))
|
||||||
|
|
||||||
|
# Count total matching rows
|
||||||
|
count_stmt = select(func.count()).select_from(base.subquery())
|
||||||
|
total = (await db.execute(count_stmt)).scalar_one()
|
||||||
|
|
||||||
|
# Fetch paginated results
|
||||||
|
stmt = base.order_by(KeyMoment.created_at.desc()).offset(offset).limit(limit)
|
||||||
|
rows = (await db.execute(stmt)).all()
|
||||||
|
|
||||||
|
items = [
|
||||||
|
_moment_to_queue_item(row.KeyMoment, row.video_filename, row.creator_name)
|
||||||
|
for row in rows
|
||||||
|
]
|
||||||
|
|
||||||
|
return ReviewQueueResponse(items=items, total=total, offset=offset, limit=limit)
|
||||||
|
|
||||||
|
|
||||||
|
@router.get("/stats", response_model=ReviewStatsResponse)
|
||||||
|
async def get_stats(
|
||||||
|
db: AsyncSession = Depends(get_session),
|
||||||
|
) -> ReviewStatsResponse:
|
||||||
|
"""Return counts of key moments grouped by review status."""
|
||||||
|
stmt = (
|
||||||
|
select(
|
||||||
|
KeyMoment.review_status,
|
||||||
|
func.count().label("cnt"),
|
||||||
|
)
|
||||||
|
.group_by(KeyMoment.review_status)
|
||||||
|
)
|
||||||
|
result = await db.execute(stmt)
|
||||||
|
counts = {row.review_status.value: row.cnt for row in result.all()}
|
||||||
|
|
||||||
|
return ReviewStatsResponse(
|
||||||
|
pending=counts.get("pending", 0),
|
||||||
|
approved=counts.get("approved", 0),
|
||||||
|
edited=counts.get("edited", 0),
|
||||||
|
rejected=counts.get("rejected", 0),
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
@router.post("/moments/{moment_id}/approve", response_model=KeyMomentRead)
|
||||||
|
async def approve_moment(
|
||||||
|
moment_id: uuid.UUID,
|
||||||
|
db: AsyncSession = Depends(get_session),
|
||||||
|
) -> KeyMomentRead:
|
||||||
|
"""Approve a key moment for publishing."""
|
||||||
|
moment = await db.get(KeyMoment, moment_id)
|
||||||
|
if moment is None:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=404,
|
||||||
|
detail=f"Key moment {moment_id} not found",
|
||||||
|
)
|
||||||
|
|
||||||
|
moment.review_status = ReviewStatus.approved
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(moment)
|
||||||
|
|
||||||
|
logger.info("Approved key moment %s", moment_id)
|
||||||
|
return KeyMomentRead.model_validate(moment)
|
||||||
|
|
||||||
|
|
||||||
|
@router.post("/moments/{moment_id}/reject", response_model=KeyMomentRead)
|
||||||
|
async def reject_moment(
|
||||||
|
moment_id: uuid.UUID,
|
||||||
|
db: AsyncSession = Depends(get_session),
|
||||||
|
) -> KeyMomentRead:
|
||||||
|
"""Reject a key moment."""
|
||||||
|
moment = await db.get(KeyMoment, moment_id)
|
||||||
|
if moment is None:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=404,
|
||||||
|
detail=f"Key moment {moment_id} not found",
|
||||||
|
)
|
||||||
|
|
||||||
|
moment.review_status = ReviewStatus.rejected
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(moment)
|
||||||
|
|
||||||
|
logger.info("Rejected key moment %s", moment_id)
|
||||||
|
return KeyMomentRead.model_validate(moment)
|
||||||
|
|
||||||
|
|
||||||
|
@router.put("/moments/{moment_id}", response_model=KeyMomentRead)
|
||||||
|
async def edit_moment(
|
||||||
|
moment_id: uuid.UUID,
|
||||||
|
body: MomentEditRequest,
|
||||||
|
db: AsyncSession = Depends(get_session),
|
||||||
|
) -> KeyMomentRead:
|
||||||
|
"""Update editable fields of a key moment and set status to edited."""
|
||||||
|
moment = await db.get(KeyMoment, moment_id)
|
||||||
|
if moment is None:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=404,
|
||||||
|
detail=f"Key moment {moment_id} not found",
|
||||||
|
)
|
||||||
|
|
||||||
|
update_data = body.model_dump(exclude_unset=True)
|
||||||
|
# Convert content_type string to enum if provided
|
||||||
|
if "content_type" in update_data and update_data["content_type"] is not None:
|
||||||
|
try:
|
||||||
|
update_data["content_type"] = KeyMomentContentType(update_data["content_type"])
|
||||||
|
except ValueError:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=400,
|
||||||
|
detail=f"Invalid content_type '{update_data['content_type']}'",
|
||||||
|
)
|
||||||
|
|
||||||
|
for field, value in update_data.items():
|
||||||
|
setattr(moment, field, value)
|
||||||
|
|
||||||
|
moment.review_status = ReviewStatus.edited
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(moment)
|
||||||
|
|
||||||
|
logger.info("Edited key moment %s (fields: %s)", moment_id, list(update_data.keys()))
|
||||||
|
return KeyMomentRead.model_validate(moment)
|
||||||
|
|
||||||
|
|
||||||
|
@router.post("/moments/{moment_id}/split", response_model=list[KeyMomentRead])
|
||||||
|
async def split_moment(
|
||||||
|
moment_id: uuid.UUID,
|
||||||
|
body: MomentSplitRequest,
|
||||||
|
db: AsyncSession = Depends(get_session),
|
||||||
|
) -> list[KeyMomentRead]:
|
||||||
|
"""Split a key moment into two at the given timestamp."""
|
||||||
|
moment = await db.get(KeyMoment, moment_id)
|
||||||
|
if moment is None:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=404,
|
||||||
|
detail=f"Key moment {moment_id} not found",
|
||||||
|
)
|
||||||
|
|
||||||
|
# Validate split_time is strictly between start_time and end_time
|
||||||
|
if body.split_time <= moment.start_time or body.split_time >= moment.end_time:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=400,
|
||||||
|
detail=(
|
||||||
|
f"split_time ({body.split_time}) must be strictly between "
|
||||||
|
f"start_time ({moment.start_time}) and end_time ({moment.end_time})"
|
||||||
|
),
|
||||||
|
)
|
||||||
|
|
||||||
|
# Update original moment to [start_time, split_time)
|
||||||
|
original_end = moment.end_time
|
||||||
|
moment.end_time = body.split_time
|
||||||
|
moment.review_status = ReviewStatus.pending
|
||||||
|
|
||||||
|
# Create new moment for [split_time, end_time]
|
||||||
|
new_moment = KeyMoment(
|
||||||
|
source_video_id=moment.source_video_id,
|
||||||
|
technique_page_id=moment.technique_page_id,
|
||||||
|
title=f"{moment.title} (split)",
|
||||||
|
summary=moment.summary,
|
||||||
|
start_time=body.split_time,
|
||||||
|
end_time=original_end,
|
||||||
|
content_type=moment.content_type,
|
||||||
|
plugins=moment.plugins,
|
||||||
|
review_status=ReviewStatus.pending,
|
||||||
|
raw_transcript=moment.raw_transcript,
|
||||||
|
)
|
||||||
|
db.add(new_moment)
|
||||||
|
|
||||||
|
await db.commit()
|
||||||
|
await db.refresh(moment)
|
||||||
|
await db.refresh(new_moment)
|
||||||
|
|
||||||
|
logger.info(
|
||||||
|
"Split key moment %s at %.2f → original [%.2f, %.2f), new [%.2f, %.2f]",
|
||||||
|
moment_id, body.split_time,
|
||||||
|
moment.start_time, moment.end_time,
|
||||||
|
new_moment.start_time, new_moment.end_time,
|
||||||
|
)
|
||||||
|
|
||||||
|
return [
|
||||||
|
KeyMomentRead.model_validate(moment),
|
||||||
|
KeyMomentRead.model_validate(new_moment),
|
||||||
|
]


@router.post("/moments/{moment_id}/merge", response_model=KeyMomentRead)
async def merge_moments(
    moment_id: uuid.UUID,
    body: MomentMergeRequest,
    db: AsyncSession = Depends(get_session),
) -> KeyMomentRead:
    """Merge two key moments into one."""
    if moment_id == body.target_moment_id:
        raise HTTPException(
            status_code=400,
            detail="Cannot merge a moment with itself",
        )

    source = await db.get(KeyMoment, moment_id)
    if source is None:
        raise HTTPException(
            status_code=404,
            detail=f"Key moment {moment_id} not found",
        )

    target = await db.get(KeyMoment, body.target_moment_id)
    if target is None:
        raise HTTPException(
            status_code=404,
            detail=f"Target key moment {body.target_moment_id} not found",
        )

    # Both must belong to the same source video
    if source.source_video_id != target.source_video_id:
        raise HTTPException(
            status_code=400,
            detail="Cannot merge moments from different source videos",
        )

    # Merge: combined summary, min start, max end
    source.summary = f"{source.summary}\n\n{target.summary}"
    source.start_time = min(source.start_time, target.start_time)
    source.end_time = max(source.end_time, target.end_time)
    source.review_status = ReviewStatus.pending

    # Delete target
    await db.delete(target)
    await db.commit()
    await db.refresh(source)

    logger.info(
        "Merged key moment %s with %s → [%.2f, %.2f]",
        moment_id, body.target_moment_id,
        source.start_time, source.end_time,
    )

    return KeyMomentRead.model_validate(source)
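The merge rules (summaries joined with a blank line, earliest start, latest end) can be shown with a small standalone function. This is a sketch for illustration; `merge_windows` is a hypothetical name, not something the router defines:

```python
def merge_windows(
    a: tuple[float, float, str],
    b: tuple[float, float, str],
) -> tuple[float, float, str]:
    """Combine two (start, end, summary) triples the way the merge
    endpoint does: min start, max end, summaries joined by a blank line."""
    a_start, a_end, a_summary = a
    b_start, b_end, b_summary = b
    return (
        min(a_start, b_start),
        max(a_end, b_end),
        f"{a_summary}\n\n{b_summary}",
    )
```

Note that the merged window spans any gap between the two moments, e.g. merging [10, 20] and [25, 35] yields [10, 35].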


@router.get("/mode", response_model=ReviewModeResponse)
async def get_mode() -> ReviewModeResponse:
    """Get the current review mode (review vs auto)."""
    settings = get_settings()
    try:
        redis = await get_redis()
        try:
            value = await redis.get(REDIS_MODE_KEY)
            if value is not None:
                return ReviewModeResponse(review_mode=value.lower() == "true")
        finally:
            await redis.aclose()
    except Exception as exc:
        # Redis unavailable — fall back to config default
        logger.warning("Redis unavailable for mode read, using config default: %s", exc)

    return ReviewModeResponse(review_mode=settings.review_mode)


@router.put("/mode", response_model=ReviewModeResponse)
async def set_mode(
    body: ReviewModeUpdate,
) -> ReviewModeResponse:
    """Set the review mode (review vs auto)."""
    try:
        redis = await get_redis()
        try:
            await redis.set(REDIS_MODE_KEY, str(body.review_mode))
        finally:
            await redis.aclose()
    except Exception as exc:
        logger.error("Failed to set review mode in Redis: %s", exc)
        raise HTTPException(
            status_code=503,
            detail=f"Redis unavailable: {exc}",
        )

    logger.info("Review mode set to %s", body.review_mode)
    return ReviewModeResponse(review_mode=body.review_mode)
|
||||||
|
|
@ -194,3 +194,57 @@ class PaginatedResponse(BaseModel):
|
||||||
total: int = 0
|
total: int = 0
|
||||||
offset: int = 0
|
offset: int = 0
|
||||||
limit: int = 50
|
limit: int = 50
|
||||||
|
|
||||||
|
|
||||||
|
# ── Review Queue ─────────────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
class ReviewQueueItem(KeyMomentRead):
|
||||||
|
"""Key moment enriched with source video and creator info for review UI."""
|
||||||
|
video_filename: str
|
||||||
|
creator_name: str
|
||||||
|
|
||||||
|
|
||||||
|
class ReviewQueueResponse(BaseModel):
|
||||||
|
"""Paginated response for the review queue."""
|
||||||
|
items: list[ReviewQueueItem] = Field(default_factory=list)
|
||||||
|
total: int = 0
|
||||||
|
offset: int = 0
|
||||||
|
limit: int = 50
|
||||||
|
|
||||||
|
|
||||||
|
class ReviewStatsResponse(BaseModel):
|
||||||
|
"""Counts of key moments grouped by review status."""
|
||||||
|
pending: int = 0
|
||||||
|
approved: int = 0
|
||||||
|
edited: int = 0
|
||||||
|
rejected: int = 0
|
||||||
|
|
||||||
|
|
||||||
|
class MomentEditRequest(BaseModel):
|
||||||
|
"""Editable fields for a key moment."""
|
||||||
|
title: str | None = None
|
||||||
|
summary: str | None = None
|
||||||
|
start_time: float | None = None
|
||||||
|
end_time: float | None = None
|
||||||
|
content_type: str | None = None
|
||||||
|
plugins: list[str] | None = None
|
||||||
|
|
||||||
|
|
||||||
|
class MomentSplitRequest(BaseModel):
|
||||||
|
"""Request to split a moment at a given timestamp."""
|
||||||
|
split_time: float
|
||||||
|
|
||||||
|
|
||||||
|
class MomentMergeRequest(BaseModel):
|
||||||
|
"""Request to merge two moments."""
|
||||||
|
target_moment_id: uuid.UUID
|
||||||
|
|
||||||
|
|
||||||
|
class ReviewModeResponse(BaseModel):
|
||||||
|
"""Current review mode state."""
|
||||||
|
review_mode: bool
|
||||||
|
|
||||||
|
|
||||||
|
class ReviewModeUpdate(BaseModel):
|
||||||
|
"""Request to update the review mode."""
|
||||||
|
review_mode: bool
|
||||||
|
|
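Every field of `MomentEditRequest` defaults to `None`, so a PUT body can carry any subset of fields. The edit handler itself is not shown in this hunk, but the tests below imply partial-update semantics: only provided fields change and the moment is re-flagged as edited. A sketch of that assumed behavior, with a hypothetical `apply_edit` helper over plain dicts:

```python
def apply_edit(moment: dict, edit: dict) -> dict:
    """Apply only the fields present (non-None) in the edit payload and
    mark the moment as edited. This mirrors what test_edit_moment expects:
    a partial body updates only the named fields, and review_status
    becomes "edited"."""
    updated = dict(moment)
    for field, value in edit.items():
        if value is not None:
            updated[field] = value
    updated["review_status"] = "edited"
    return updated
```

Fields omitted from the request body are left untouched, which is why `title` and `summary` can be edited without resending timestamps.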
495	backend/tests/test_review.py	Normal file
@@ -0,0 +1,495 @@
"""Integration tests for the review queue endpoints.

Tests run against a real PostgreSQL test database via httpx.AsyncClient.
Redis is mocked for mode toggle tests.
"""

import uuid
from unittest.mock import AsyncMock, patch

import pytest
import pytest_asyncio
from httpx import AsyncClient
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker

from models import (
    ContentType,
    Creator,
    KeyMoment,
    KeyMomentContentType,
    ProcessingStatus,
    ReviewStatus,
    SourceVideo,
)


# ── Helpers ──────────────────────────────────────────────────────────────────

QUEUE_URL = "/api/v1/review/queue"
STATS_URL = "/api/v1/review/stats"
MODE_URL = "/api/v1/review/mode"


def _moment_url(moment_id: str, action: str = "") -> str:
    """Build a moment action URL."""
    base = f"/api/v1/review/moments/{moment_id}"
    return f"{base}/{action}" if action else base


async def _seed_creator_and_video(db_engine) -> dict:
    """Seed a creator and source video, return their IDs."""
    session_factory = async_sessionmaker(
        db_engine, class_=AsyncSession, expire_on_commit=False
    )
    async with session_factory() as session:
        creator = Creator(
            name="TestCreator",
            slug="test-creator",
            folder_name="TestCreator",
        )
        session.add(creator)
        await session.flush()

        video = SourceVideo(
            creator_id=creator.id,
            filename="test-video.mp4",
            file_path="TestCreator/test-video.mp4",
            duration_seconds=600,
            content_type=ContentType.tutorial,
            processing_status=ProcessingStatus.extracted,
        )
        session.add(video)
        await session.flush()

        result = {
            "creator_id": creator.id,
            "creator_name": creator.name,
            "video_id": video.id,
            "video_filename": video.filename,
        }
        await session.commit()
        return result


async def _seed_moment(
    db_engine,
    video_id: uuid.UUID,
    title: str = "Test Moment",
    summary: str = "A test key moment",
    start_time: float = 10.0,
    end_time: float = 30.0,
    review_status: ReviewStatus = ReviewStatus.pending,
) -> uuid.UUID:
    """Seed a single key moment and return its ID."""
    session_factory = async_sessionmaker(
        db_engine, class_=AsyncSession, expire_on_commit=False
    )
    async with session_factory() as session:
        moment = KeyMoment(
            source_video_id=video_id,
            title=title,
            summary=summary,
            start_time=start_time,
            end_time=end_time,
            content_type=KeyMomentContentType.technique,
            review_status=review_status,
        )
        session.add(moment)
        await session.commit()
        return moment.id


async def _seed_second_video(db_engine, creator_id: uuid.UUID) -> uuid.UUID:
    """Seed a second video for cross-video merge tests."""
    session_factory = async_sessionmaker(
        db_engine, class_=AsyncSession, expire_on_commit=False
    )
    async with session_factory() as session:
        video = SourceVideo(
            creator_id=creator_id,
            filename="other-video.mp4",
            file_path="TestCreator/other-video.mp4",
            duration_seconds=300,
            content_type=ContentType.tutorial,
            processing_status=ProcessingStatus.extracted,
        )
        session.add(video)
        await session.commit()
        return video.id

# ── Queue listing tests ─────────────────────────────────────────────────────


@pytest.mark.asyncio
async def test_list_queue_empty(client: AsyncClient):
    """Queue returns empty list when no moments exist."""
    resp = await client.get(QUEUE_URL)
    assert resp.status_code == 200
    data = resp.json()
    assert data["items"] == []
    assert data["total"] == 0


@pytest.mark.asyncio
async def test_list_queue_with_moments(client: AsyncClient, db_engine):
    """Queue returns moments enriched with video filename and creator name."""
    seed = await _seed_creator_and_video(db_engine)
    await _seed_moment(db_engine, seed["video_id"], title="EQ Basics")

    resp = await client.get(QUEUE_URL)
    assert resp.status_code == 200
    data = resp.json()
    assert data["total"] == 1
    item = data["items"][0]
    assert item["title"] == "EQ Basics"
    assert item["video_filename"] == seed["video_filename"]
    assert item["creator_name"] == seed["creator_name"]
    assert item["review_status"] == "pending"


@pytest.mark.asyncio
async def test_list_queue_filter_by_status(client: AsyncClient, db_engine):
    """Queue filters correctly by status query parameter."""
    seed = await _seed_creator_and_video(db_engine)
    await _seed_moment(db_engine, seed["video_id"], title="Pending One")
    await _seed_moment(
        db_engine, seed["video_id"], title="Approved One",
        review_status=ReviewStatus.approved,
    )
    await _seed_moment(
        db_engine, seed["video_id"], title="Rejected One",
        review_status=ReviewStatus.rejected,
    )

    # Default filter: pending
    resp = await client.get(QUEUE_URL)
    assert resp.json()["total"] == 1
    assert resp.json()["items"][0]["title"] == "Pending One"

    # Approved
    resp = await client.get(QUEUE_URL, params={"status": "approved"})
    assert resp.json()["total"] == 1
    assert resp.json()["items"][0]["title"] == "Approved One"

    # All
    resp = await client.get(QUEUE_URL, params={"status": "all"})
    assert resp.json()["total"] == 3

# ── Stats tests ──────────────────────────────────────────────────────────────


@pytest.mark.asyncio
async def test_stats_counts(client: AsyncClient, db_engine):
    """Stats returns correct counts per review status."""
    seed = await _seed_creator_and_video(db_engine)
    await _seed_moment(db_engine, seed["video_id"], review_status=ReviewStatus.pending)
    await _seed_moment(db_engine, seed["video_id"], review_status=ReviewStatus.pending)
    await _seed_moment(db_engine, seed["video_id"], review_status=ReviewStatus.approved)
    await _seed_moment(db_engine, seed["video_id"], review_status=ReviewStatus.rejected)

    resp = await client.get(STATS_URL)
    assert resp.status_code == 200
    data = resp.json()
    assert data["pending"] == 2
    assert data["approved"] == 1
    assert data["edited"] == 0
    assert data["rejected"] == 1

# ── Approve tests ────────────────────────────────────────────────────────────


@pytest.mark.asyncio
async def test_approve_moment(client: AsyncClient, db_engine):
    """Approve sets review_status to approved."""
    seed = await _seed_creator_and_video(db_engine)
    moment_id = await _seed_moment(db_engine, seed["video_id"])

    resp = await client.post(_moment_url(str(moment_id), "approve"))
    assert resp.status_code == 200
    assert resp.json()["review_status"] == "approved"


@pytest.mark.asyncio
async def test_approve_nonexistent_moment(client: AsyncClient):
    """Approve returns 404 for nonexistent moment."""
    fake_id = str(uuid.uuid4())
    resp = await client.post(_moment_url(fake_id, "approve"))
    assert resp.status_code == 404


# ── Reject tests ─────────────────────────────────────────────────────────────


@pytest.mark.asyncio
async def test_reject_moment(client: AsyncClient, db_engine):
    """Reject sets review_status to rejected."""
    seed = await _seed_creator_and_video(db_engine)
    moment_id = await _seed_moment(db_engine, seed["video_id"])

    resp = await client.post(_moment_url(str(moment_id), "reject"))
    assert resp.status_code == 200
    assert resp.json()["review_status"] == "rejected"


@pytest.mark.asyncio
async def test_reject_nonexistent_moment(client: AsyncClient):
    """Reject returns 404 for nonexistent moment."""
    fake_id = str(uuid.uuid4())
    resp = await client.post(_moment_url(fake_id, "reject"))
    assert resp.status_code == 404

# ── Edit tests ───────────────────────────────────────────────────────────────


@pytest.mark.asyncio
async def test_edit_moment(client: AsyncClient, db_engine):
    """Edit updates fields and sets review_status to edited."""
    seed = await _seed_creator_and_video(db_engine)
    moment_id = await _seed_moment(db_engine, seed["video_id"], title="Original Title")

    resp = await client.put(
        _moment_url(str(moment_id)),
        json={"title": "Updated Title", "summary": "New summary"},
    )
    assert resp.status_code == 200
    data = resp.json()
    assert data["title"] == "Updated Title"
    assert data["summary"] == "New summary"
    assert data["review_status"] == "edited"


@pytest.mark.asyncio
async def test_edit_nonexistent_moment(client: AsyncClient):
    """Edit returns 404 for nonexistent moment."""
    fake_id = str(uuid.uuid4())
    resp = await client.put(
        _moment_url(fake_id),
        json={"title": "Won't Work"},
    )
    assert resp.status_code == 404

# ── Split tests ──────────────────────────────────────────────────────────────


@pytest.mark.asyncio
async def test_split_moment(client: AsyncClient, db_engine):
    """Split creates two moments with correct timestamps."""
    seed = await _seed_creator_and_video(db_engine)
    moment_id = await _seed_moment(
        db_engine, seed["video_id"],
        title="Full Moment", start_time=10.0, end_time=30.0,
    )

    resp = await client.post(
        _moment_url(str(moment_id), "split"),
        json={"split_time": 20.0},
    )
    assert resp.status_code == 200
    data = resp.json()
    assert len(data) == 2

    # First (original): [10.0, 20.0)
    assert data[0]["start_time"] == 10.0
    assert data[0]["end_time"] == 20.0

    # Second (new): [20.0, 30.0]
    assert data[1]["start_time"] == 20.0
    assert data[1]["end_time"] == 30.0
    assert "(split)" in data[1]["title"]


@pytest.mark.asyncio
async def test_split_invalid_time_below_start(client: AsyncClient, db_engine):
    """Split returns 400 when split_time is at or below start_time."""
    seed = await _seed_creator_and_video(db_engine)
    moment_id = await _seed_moment(
        db_engine, seed["video_id"], start_time=10.0, end_time=30.0,
    )

    resp = await client.post(
        _moment_url(str(moment_id), "split"),
        json={"split_time": 10.0},
    )
    assert resp.status_code == 400


@pytest.mark.asyncio
async def test_split_invalid_time_above_end(client: AsyncClient, db_engine):
    """Split returns 400 when split_time is at or above end_time."""
    seed = await _seed_creator_and_video(db_engine)
    moment_id = await _seed_moment(
        db_engine, seed["video_id"], start_time=10.0, end_time=30.0,
    )

    resp = await client.post(
        _moment_url(str(moment_id), "split"),
        json={"split_time": 30.0},
    )
    assert resp.status_code == 400


@pytest.mark.asyncio
async def test_split_nonexistent_moment(client: AsyncClient):
    """Split returns 404 for nonexistent moment."""
    fake_id = str(uuid.uuid4())
    resp = await client.post(
        _moment_url(fake_id, "split"),
        json={"split_time": 20.0},
    )
    assert resp.status_code == 404

# ── Merge tests ──────────────────────────────────────────────────────────────


@pytest.mark.asyncio
async def test_merge_moments(client: AsyncClient, db_engine):
    """Merge combines two moments: combined summary, min start, max end, target deleted."""
    seed = await _seed_creator_and_video(db_engine)
    m1_id = await _seed_moment(
        db_engine, seed["video_id"],
        title="First", summary="Summary A",
        start_time=10.0, end_time=20.0,
    )
    m2_id = await _seed_moment(
        db_engine, seed["video_id"],
        title="Second", summary="Summary B",
        start_time=25.0, end_time=35.0,
    )

    resp = await client.post(
        _moment_url(str(m1_id), "merge"),
        json={"target_moment_id": str(m2_id)},
    )
    assert resp.status_code == 200
    data = resp.json()
    assert data["start_time"] == 10.0
    assert data["end_time"] == 35.0
    assert "Summary A" in data["summary"]
    assert "Summary B" in data["summary"]

    # Target should be deleted — reject should 404
    resp2 = await client.post(_moment_url(str(m2_id), "reject"))
    assert resp2.status_code == 404


@pytest.mark.asyncio
async def test_merge_different_videos(client: AsyncClient, db_engine):
    """Merge returns 400 when moments are from different source videos."""
    seed = await _seed_creator_and_video(db_engine)
    m1_id = await _seed_moment(db_engine, seed["video_id"], title="Video 1 moment")

    other_video_id = await _seed_second_video(db_engine, seed["creator_id"])
    m2_id = await _seed_moment(db_engine, other_video_id, title="Video 2 moment")

    resp = await client.post(
        _moment_url(str(m1_id), "merge"),
        json={"target_moment_id": str(m2_id)},
    )
    assert resp.status_code == 400
    assert "different source videos" in resp.json()["detail"]


@pytest.mark.asyncio
async def test_merge_with_self(client: AsyncClient, db_engine):
    """Merge returns 400 when trying to merge a moment with itself."""
    seed = await _seed_creator_and_video(db_engine)
    m_id = await _seed_moment(db_engine, seed["video_id"])

    resp = await client.post(
        _moment_url(str(m_id), "merge"),
        json={"target_moment_id": str(m_id)},
    )
    assert resp.status_code == 400
    assert "itself" in resp.json()["detail"]


@pytest.mark.asyncio
async def test_merge_nonexistent_target(client: AsyncClient, db_engine):
    """Merge returns 404 when target moment does not exist."""
    seed = await _seed_creator_and_video(db_engine)
    m_id = await _seed_moment(db_engine, seed["video_id"])

    resp = await client.post(
        _moment_url(str(m_id), "merge"),
        json={"target_moment_id": str(uuid.uuid4())},
    )
    assert resp.status_code == 404


@pytest.mark.asyncio
async def test_merge_nonexistent_source(client: AsyncClient):
    """Merge returns 404 when source moment does not exist."""
    fake_id = str(uuid.uuid4())
    resp = await client.post(
        _moment_url(fake_id, "merge"),
        json={"target_moment_id": str(uuid.uuid4())},
    )
    assert resp.status_code == 404

# ── Mode toggle tests ───────────────────────────────────────────────────────


@pytest.mark.asyncio
async def test_get_mode_default(client: AsyncClient):
    """Get mode returns config default when Redis has no value."""
    mock_redis = AsyncMock()
    mock_redis.get = AsyncMock(return_value=None)
    mock_redis.aclose = AsyncMock()

    with patch("routers.review.get_redis", return_value=mock_redis):
        resp = await client.get(MODE_URL)
    assert resp.status_code == 200
    # Default from config is True
    assert resp.json()["review_mode"] is True


@pytest.mark.asyncio
async def test_set_mode(client: AsyncClient):
    """Set mode writes to Redis and returns the new value."""
    mock_redis = AsyncMock()
    mock_redis.set = AsyncMock()
    mock_redis.aclose = AsyncMock()

    with patch("routers.review.get_redis", return_value=mock_redis):
        resp = await client.put(MODE_URL, json={"review_mode": False})
    assert resp.status_code == 200
    assert resp.json()["review_mode"] is False
    mock_redis.set.assert_called_once_with("chrysopedia:review_mode", "False")


@pytest.mark.asyncio
async def test_get_mode_from_redis(client: AsyncClient):
    """Get mode reads the value stored in Redis."""
    mock_redis = AsyncMock()
    mock_redis.get = AsyncMock(return_value="False")
    mock_redis.aclose = AsyncMock()

    with patch("routers.review.get_redis", return_value=mock_redis):
        resp = await client.get(MODE_URL)
    assert resp.status_code == 200
    assert resp.json()["review_mode"] is False


@pytest.mark.asyncio
async def test_get_mode_redis_error_fallback(client: AsyncClient):
    """Get mode falls back to config default when Redis is unavailable."""
    with patch("routers.review.get_redis", side_effect=ConnectionError("Redis down")):
        resp = await client.get(MODE_URL)
    assert resp.status_code == 200
    # Falls back to config default (True)
    assert resp.json()["review_mode"] is True


@pytest.mark.asyncio
async def test_set_mode_redis_error(client: AsyncClient):
    """Set mode returns 503 when Redis is unavailable."""
    with patch("routers.review.get_redis", side_effect=ConnectionError("Redis down")):
        resp = await client.put(MODE_URL, json={"review_mode": False})
    assert resp.status_code == 503