feat: Added body_sections_format column, technique_page_videos associat…
- "alembic/versions/012_multi_source_format.py" - "backend/models.py" - "backend/schemas.py" GSD-Task: S03/T01
parent 5cd7db8938
commit bd0dbb4df9
13 changed files with 687 additions and 3 deletions
@ -7,7 +7,7 @@ Restructure technique pages to be broader (per-creator+category across videos),

| ID | Slice | Risk | Depends | Done | After this |
|----|-------|------|---------|------|------------|
| S01 | Synthesis Prompt v5 — Nested Sections + Citations | high | — | ✅ | Run test harness with new prompt → output has list-of-objects body_sections with H2/H3 nesting, citation markers on key claims, broader page scope. |
| S02 | Composition Prompt + Test Harness Compose Mode | high | S01 | ✅ | Run test harness --compose mode with existing page + new moments → merged output with deduplication, new sections, updated citations. |
| S03 | Data Model + Migration | low | — | ⬜ | Alembic migration runs clean. API response includes body_sections_format and source_videos fields. |
| S04 | Pipeline Compose-or-Create Logic | high | S01, S02, S03 | ⬜ | Process two COPYCATT videos. Second video's moments composed into existing page. technique_page_videos has both video IDs. |
| S05 | Frontend — Nested Rendering, TOC, Citations | medium | S03 | ⬜ | Format-2 page renders with TOC, nested sections, clickable citations. Format-1 pages unchanged. |
95
.gsd/milestones/M014/slices/S02/S02-SUMMARY.md
Normal file
@ -0,0 +1,95 @@
---
id: S02
parent: M014
milestone: M014
provides:
- stage5_compose.txt prompt for S04 pipeline compose-or-create logic
- build_compose_prompt() function for S04 to call from pipeline stages
- compose CLI subcommand for offline testing of composition before pipeline integration
requires:
- slice: S01
  provides: v2 SynthesisResult schema and body_sections format used by composition output
affects:
- S04
key_files:
- prompts/stage5_compose.txt
- backend/pipeline/test_harness.py
- backend/pipeline/test_harness_compose.py
- conftest.py
key_decisions:
- Composition prompt is self-contained — carries its own writing standards rather than importing from the synthesis prompt at runtime
- "Offset-based citation scheme: existing keep [0]-[N-1], new get [N]-[N+M-1], no renumbering of existing citations"
- Compose accepts both harness output (with .pages[]) and raw SynthesizedPage JSON
- Root conftest.py bootstraps backend/ onto sys.path for project-root test discovery via pipeline symlink
patterns_established:
- "Compose prompt pattern: XML-tagged inputs with offset-based citation indexing for multi-source content merging"
- "Test harness subcommand pattern: build_*_prompt() pure function + run_*() orchestrator + CLI wiring"
observability_surfaces:
- none
drill_down_paths:
- .gsd/milestones/M014/slices/S02/tasks/T01-SUMMARY.md
- .gsd/milestones/M014/slices/S02/tasks/T02-SUMMARY.md
- .gsd/milestones/M014/slices/S02/tasks/T03-SUMMARY.md
duration: ""
verification_result: passed
completed_at: 2026-04-03T01:10:16.667Z
blocker_discovered: false
---

# S02: Composition Prompt + Test Harness Compose Mode

**Composition prompt, test harness compose subcommand, and 16 unit tests enable offline testing of merging new video moments into existing technique pages with correct citation re-indexing.**

## What Happened

Three tasks delivered the composition pipeline's offline testing surface:

T01 wrote `prompts/stage5_compose.txt` — a self-contained LLM prompt for merging new video moments into existing technique pages. It defines four XML input tags (`<existing_page>`, `<existing_moments>`, `<new_moments>`, `<creator>`), offset-based citation re-indexing (existing moments keep [0]-[N-1], new moments get [N]-[N+M-1]), merge/enrich/dedup rules with concrete examples, workflow-ordered section placement, and v2 SynthesisResult output format with two validated JSON examples.
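
The offset scheme is simple enough to sketch in a few lines — this is an illustration of the arithmetic, not the prompt's actual implementation; `citation_indices` and its parameter names are hypothetical:

```python
def citation_indices(n_existing: int, n_new: int) -> tuple[list[int], list[int]]:
    """Existing moments keep [0]..[N-1]; new moments get [N]..[N+M-1].

    Existing citations are never renumbered, so markers already embedded
    in the page prose stay valid after composition.
    """
    existing = list(range(n_existing))                 # [0] .. [N-1]
    new = list(range(n_existing, n_existing + n_new))  # [N] .. [N+M-1]
    return existing, new
```

For example, composing 2 new moments into a page with 3 existing ones yields `([0, 1, 2], [3, 4])`.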
T02 added `build_compose_prompt()` and `run_compose()` to the test harness, plus a `compose` CLI subcommand. The prompt builder constructs the XML-tagged user prompt with correct citation index offsets. `run_compose()` loads an existing page JSON + two fixture files (original moments and new moments), filters new moments by category, calls the LLM with the compose system prompt, validates the output as SynthesisResult, and logs compose-specific metrics (word count before/after, sections before/after).
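
The prompt-builder shape can be sketched roughly as follows — names and moment fields are assumed from the description above; the real function lives in `backend/pipeline/test_harness.py` and will differ in detail:

```python
import json

def build_compose_prompt(existing_page: dict, existing_moments: list[dict],
                         new_moments: list[dict], creator: str) -> str:
    """Assemble the XML-tagged compose prompt with offset citation indices."""
    offset = len(existing_moments)

    def block(moments: list[dict], start: int) -> str:
        # Each moment is rendered with its citation index prefix.
        return "\n".join(
            f"[{start + i}] {m.get('title', '')}: {m.get('summary', '')}"
            for i, m in enumerate(moments)
        )

    return (
        f"<existing_page>\n{json.dumps(existing_page, indent=2)}\n</existing_page>\n"
        f"<existing_moments>\n{block(existing_moments, 0)}\n</existing_moments>\n"
        f"<new_moments>\n{block(new_moments, offset)}\n</new_moments>\n"
        f"<creator>{creator}</creator>"
    )
```

With 3 existing and 2 new moments, the `<new_moments>` section carries indices [3] and [4], matching the offset scheme.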
T03 created 16 unit tests in `test_harness_compose.py` covering prompt XML structure (6 tests), citation offset arithmetic (4 tests), category filtering (2 tests), and edge cases (4 tests including empty moments, single moment, no subsections, large offsets). All pass in 0.4s.
A root-level `conftest.py` was added to bootstrap `backend/` onto `sys.path`, fixing test discovery when running from the project root via the `pipeline` symlink.
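
The bootstrap itself is small — roughly this shape (a sketch; the actual conftest.py may differ):

```python
# conftest.py (project root) — put backend/ on sys.path so tests run from
# the project root can import pipeline modules via the symlink.
import sys
from pathlib import Path

# Resolve backend/ relative to this file; fall back to CWD when __file__
# is unavailable (e.g. interactive use).
ROOT = Path(globals().get("__file__", ".")).resolve().parent
BACKEND_DIR = ROOT / "backend"
if str(BACKEND_DIR) not in sys.path:
    sys.path.insert(0, str(BACKEND_DIR))
```

pytest imports a root conftest.py before collecting tests, so the path fix applies to every test module discovered below it.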

## Verification

All three verification gates pass:

1. `python -m pytest pipeline/test_harness_compose.py -v` — 16/16 passed (0.40s) from project root
2. `cd backend && python -m pipeline.test_harness compose --help` — exits 0, correct arg listing
3. Structural validation of `prompts/stage5_compose.txt` — 13053 chars, all required sections present, JSON examples parse valid

## Requirements Advanced

- R012 — Composition prompt and harness provide the offline-testable merge mechanism for updating existing technique pages with new video content

## Requirements Validated

None.

## New Requirements Surfaced

None.

## Requirements Invalidated or Re-scoped

None.

## Deviations

Added a root-level conftest.py to fix sys.path for project-root test execution — not in the original plan, but required for the verification command to work without a `cd backend` prefix.

## Known Limitations

run_compose() requires an LLM endpoint to test composition end-to-end. The unit tests validate prompt construction and plumbing only.

## Follow-ups

None.

## Files Created/Modified

- `prompts/stage5_compose.txt` — New composition prompt with merge rules, citation re-indexing, dedup guidance, two v2 JSON examples
- `backend/pipeline/test_harness.py` — Added build_compose_prompt(), run_compose(), and compose CLI subcommand
- `backend/pipeline/test_harness_compose.py` — 16 unit tests for compose prompt construction, citation math, category filtering, edge cases
- `conftest.py` — Root-level sys.path bootstrap for project-root test discovery via pipeline symlink
53
.gsd/milestones/M014/slices/S02/S02-UAT.md
Normal file
@ -0,0 +1,53 @@

# S02: Composition Prompt + Test Harness Compose Mode — UAT

**Milestone:** M014
**Written:** 2026-04-03T01:10:16.667Z

## UAT: S02 — Composition Prompt + Test Harness Compose Mode

### Preconditions

- Working directory: project root (`/home/aux/projects/content-to-kb-automator`)
- Python 3.12+ available
- Backend dependencies installed (`pip install -r backend/requirements.txt`)
- S01 synthesis prompt exists at `prompts/stage5_synthesis.txt`

### Test 1: Composition prompt structural integrity

1. Open `prompts/stage5_compose.txt`
2. Verify it contains XML tag references: `<existing_page>`, `<existing_moments>`, `<new_moments>`, `<creator>`
3. Verify the citation re-indexing section explains the offset scheme: existing [0]-[N-1], new [N]-[N+M-1]
4. Verify the merge rules section covers: preserve existing prose, enrich vs duplicate, new section creation
5. Verify at least one JSON example block parses as valid JSON with a `pages` key and `body_sections_format: "v2"`

- **Expected:** All checks pass. Prompt is >2000 chars.

### Test 2: Compose CLI subcommand help

1. Run: `cd backend && python -m pipeline.test_harness compose --help`

- **Expected:** Exit code 0. Output shows `--existing-page`, `--fixture`, `--existing-fixture` as required args; `--prompt`, `--output`, `--category`, `--model`, `--modality` as optional.

### Test 3: Unit test suite — full run from project root

1. Run: `python -m pytest pipeline/test_harness_compose.py -v`

- **Expected:** 16 tests pass. No errors or warnings. Runs in <2s.

### Test 4: Unit test suite — full run from backend directory

1. Run: `cd backend && python -m pytest pipeline/test_harness_compose.py -v`

- **Expected:** 16 tests pass. Same results as Test 3.

### Test 5: Prompt XML tag population

1. In a Python shell, import `build_compose_prompt` from `pipeline.test_harness`
2. Call with a mock existing page dict, 3 existing moments, 2 new moments, and a creator name
3. Verify the returned string contains `<existing_page>` with valid JSON inside
4. Verify the `<existing_moments>` section has indices [0], [1], [2]
5. Verify the `<new_moments>` section has indices [3], [4] (offset by 3)
6. Verify the `<creator>` tag contains the creator name

- **Expected:** All XML tags present with correct content and citation indices.
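
The tag-population checks above can be scripted instead of eyeballed — a sketch with a toy prompt standing in for `build_compose_prompt()` output (the helper name is hypothetical):

```python
import json
import re

def tag_inner(prompt: str, tag: str) -> str:
    """Extract the inner text of <tag>...</tag>, asserting it is present and non-empty."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", prompt, re.DOTALL)
    assert match is not None, f"missing <{tag}> in prompt"
    inner = match.group(1).strip()
    assert inner, f"<{tag}> is empty"
    return inner

# Toy prompt standing in for real build_compose_prompt() output:
prompt = '<existing_page>{"title": "X"}</existing_page><creator>COPYCATT</creator>'
assert tag_inner(prompt, "creator") == "COPYCATT"
assert json.loads(tag_inner(prompt, "existing_page"))["title"] == "X"
```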
### Test 6: Category filtering behavior

1. Create two MockKeyMoment objects with different `topic_category` values (e.g., "Sound Design" and "Mixing")
2. Create an existing page dict with `topic_category: "Sound Design"`
3. Call `build_compose_prompt` with both moments as new_moments
4. Verify only the "Sound Design" moment appears in the prompt (filtering happens in run_compose, not build_compose_prompt — verify at the run_compose level or in the test suite)

- **Expected:** TestCategoryFiltering tests confirm only matching-category moments are used.

### Edge Cases

- **Empty new moments:** `build_compose_prompt` with an empty new_moments list produces a valid prompt with an empty `<new_moments>` section
- **Single new moment:** Index starts at N (offset by the existing count)
- **Large offset:** 50 existing + 10 new moments produces indices [50]-[59] for the new moments
24
.gsd/milestones/M014/slices/S02/tasks/T03-VERIFY.json
Normal file
@ -0,0 +1,24 @@
{
  "schemaVersion": 1,
  "taskId": "T03",
  "unitId": "M014/S02/T03",
  "timestamp": 1775178521961,
  "passed": false,
  "discoverySource": "task-plan",
  "checks": [
    {
      "command": "cd backend",
      "exitCode": 0,
      "durationMs": 4,
      "verdict": "pass"
    },
    {
      "command": "python -m pytest pipeline/test_harness_compose.py -v",
      "exitCode": 2,
      "durationMs": 375,
      "verdict": "fail"
    }
  ],
  "retryAttempt": 1,
  "maxRetries": 2
}

@ -1,6 +1,77 @@

# S03: Data Model + Migration

**Goal:** Add `body_sections_format` column to technique_pages, create `technique_page_videos` association table, and expose both in the API response.

**Demo:** After this: Alembic migration runs clean. API response includes body_sections_format and source_videos fields.

## Tasks

- [x] **T01: Added body_sections_format column, technique_page_videos association table, and SourceVideoSummary schema for multi-source technique pages** — Create the Alembic migration for the new column and table, update the SQLAlchemy models, and widen the Pydantic API schemas.

  This is the data layer foundation — migration file, model changes, and schema changes. The router wiring comes in T02.

  ## Steps

  1. Create `alembic/versions/012_multi_source_format.py`:
     - Add `body_sections_format` column to `technique_pages`: `VARCHAR(20)`, `NOT NULL`, `server_default='v1'`
     - Create `technique_page_videos` table: `id` (UUID PK), `technique_page_id` (UUID FK → technique_pages.id ON DELETE CASCADE), `source_video_id` (UUID FK → source_videos.id ON DELETE CASCADE), `added_at` (TIMESTAMP, server_default=now()), UNIQUE constraint on (technique_page_id, source_video_id)
     - Downgrade: drop `technique_page_videos` table, drop `body_sections_format` column

  2. Update `backend/models.py`:
     - Add `body_sections_format` mapped column to `TechniquePage`: `String(20)`, default `'v1'`, server_default `'v1'`, nullable=False
     - Add `TechniquePageVideo` model class with fields matching the migration, plus a `UniqueConstraint` named `uq_page_video`
     - Add `sa_relationship` on `TechniquePage`: `source_video_links: Mapped[list[TechniquePageVideo]]` with back_populates
     - Add `sa_relationship` on `TechniquePageVideo` back to both `TechniquePage` and `SourceVideo`
     - Note: use `sa_relationship` (the alias) per the KNOWLEDGE.md rule about column names shadowing ORM functions

  3. Update `backend/schemas.py`:
     - Change `TechniquePageBase.body_sections` type from `dict | None` to `list | dict | None`
     - Add `body_sections_format: str = "v1"` to `TechniquePageBase`
     - Add `SourceVideoSummary` schema: `id` (UUID), `filename` (str), `content_type` (str), `added_at` (datetime | None)
     - Add `source_videos: list[SourceVideoSummary] = Field(default_factory=list)` to `TechniquePageDetail`
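
  Step 1 could look roughly like this as an Alembic migration — a sketch under the assumptions above; the revision identifiers are illustrative and the real file must match the project's chain:

  ```python
  """Add body_sections_format column and technique_page_videos table."""
  import sqlalchemy as sa
  from alembic import op

  # Revision identifiers — illustrative only; real values come from the chain.
  revision = "012_multi_source_format"
  down_revision = "011_cls_cache_rerun"

  def upgrade() -> None:
      # Existing rows get 'v1' via the server default, so nothing breaks.
      op.add_column(
          "technique_pages",
          sa.Column("body_sections_format", sa.String(20),
                    nullable=False, server_default="v1"),
      )
      op.create_table(
          "technique_page_videos",
          sa.Column("id", sa.Uuid(), primary_key=True),
          sa.Column("technique_page_id", sa.Uuid(),
                    sa.ForeignKey("technique_pages.id", ondelete="CASCADE"),
                    nullable=False),
          sa.Column("source_video_id", sa.Uuid(),
                    sa.ForeignKey("source_videos.id", ondelete="CASCADE"),
                    nullable=False),
          sa.Column("added_at", sa.TIMESTAMP(), server_default=sa.func.now()),
          sa.UniqueConstraint("technique_page_id", "source_video_id",
                              name="uq_page_video"),
      )

  def downgrade() -> None:
      # Reverse order of upgrade: table first, then column.
      op.drop_table("technique_page_videos")
      op.drop_column("technique_pages", "body_sections_format")
  ```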

  ## Must-Haves

  - [ ] Migration file has both upgrade and downgrade
  - [ ] `body_sections_format` defaults to `'v1'` for existing rows
  - [ ] `TechniquePageVideo` has CASCADE deletes on both FKs
  - [ ] Unique constraint on (technique_page_id, source_video_id)
  - [ ] `body_sections` schema type widened to accept list or dict
  - [ ] `SourceVideoSummary` schema uses `from_attributes=True`

  ## Verification

  - `cd backend && python -c "from models import TechniquePageVideo, TechniquePage; print(TechniquePage.body_sections_format); print('OK')"` succeeds
  - `cd backend && python -c "from schemas import SourceVideoSummary, TechniquePageDetail; print('OK')"` succeeds
  - `alembic upgrade head` runs without error (tested via Docker: `docker exec chrysopedia-api alembic upgrade head`)
  - Estimate: 30m
  - Files: alembic/versions/012_multi_source_format.py, backend/models.py, backend/schemas.py
  - Verify: cd backend && python -c "from models import TechniquePageVideo, TechniquePage; assert hasattr(TechniquePage, 'body_sections_format'); print('models OK')" && python -c "from schemas import SourceVideoSummary, TechniquePageDetail; print('schemas OK')"

- [ ] **T02: Wire source_videos into technique detail endpoint** — Update the technique detail endpoint to eagerly load source_video_links and build the source_videos list on the response. Verify end-to-end with an API call.

  ## Steps

  1. Update `backend/routers/techniques.py` `get_technique()` function:
     - Add `selectinload(TechniquePage.source_video_links).selectinload(TechniquePageVideo.source_video)` to the query options
     - After building the existing response fields, build the `source_videos` list from `page.source_video_links`
     - Each item: `SourceVideoSummary(id=link.source_video.id, filename=link.source_video.filename, content_type=link.source_video.content_type.value, added_at=link.added_at)`
     - Add the imports for `TechniquePageVideo` and `SourceVideoSummary` at the top of the file
     - Pass `source_videos=source_videos` to the `TechniquePageDetail(...)` constructor

  2. Verify the endpoint returns the new fields:
     - Existing pages should return `body_sections_format: "v1"` (inherited from the schema default + model default)
     - Existing pages should return `source_videos: []` (no rows in the association table yet)
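
  The response-assembly step can be illustrated without a database — a sketch using plain objects in place of ORM rows (`link.source_video` mirrors the relationship chain above; enum `.value` handling is elided, and the names are assumptions):

  ```python
  from types import SimpleNamespace

  def build_source_videos(links) -> list[dict]:
      """Map association rows to the summary dicts the endpoint returns."""
      return [
          {
              "id": link.source_video.id,
              "filename": link.source_video.filename,
              "content_type": link.source_video.content_type,
              "added_at": link.added_at,
          }
          for link in links
      ]

  # Existing pages have no association rows yet, so the field is [] — never null.
  video = SimpleNamespace(id="v1", filename="mix.mp4", content_type="video")
  link = SimpleNamespace(source_video=video, added_at="2026-04-03")
  ```

  `build_source_videos([])` returning `[]` is exactly the behavior the must-have below pins down for existing pages.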

  ## Must-Haves

  - [ ] `selectinload` for source_video_links chained to source_video
  - [ ] `source_videos` list built from association table rows
  - [ ] Existing pages return an empty `source_videos` list (not null, not omitted)
  - [ ] `body_sections_format` appears in the response

  ## Verification

  - Pick any existing technique slug from the DB
  - `curl -s http://ub01:8096/techniques/{slug} | python3 -m json.tool | grep -E 'body_sections_format|source_videos'` shows both fields
  - Or if testing locally: verify the router code by reading the constructed response and confirming both fields are passed
  - Estimate: 20m
  - Files: backend/routers/techniques.py
  - Verify: ssh ub01 'docker exec chrysopedia-api alembic upgrade head' && ssh ub01 'curl -s http://localhost:8000/techniques/$(docker exec chrysopedia-db psql -U chrysopedia -tAc "SELECT slug FROM technique_pages LIMIT 1") | python3 -m json.tool | grep -E "body_sections_format|source_videos"'

145
.gsd/milestones/M014/slices/S03/S03-RESEARCH.md
Normal file
@ -0,0 +1,145 @@

# S03 Research: Data Model + Migration

## Summary

Straightforward data model expansion. Three changes are needed: (1) add a `body_sections_format` column to `technique_pages`, (2) create a `technique_page_videos` association table, (3) update the API response schemas to expose both fields. No new technology, no ambiguous requirements — established Alembic + SQLAlchemy patterns already exist in the codebase.

## Recommendation

Light-touch migration + model/schema update. Single Alembic migration file. Model changes in `models.py`, API schema changes in `schemas.py`, router query updates in `techniques.py` to populate `source_videos`.

---

## Implementation Landscape

### Current State

**TechniquePage model** (`backend/models.py`, line ~195):
- `body_sections: Mapped[dict | None] = mapped_column(JSONB, nullable=True)` — stores either the old flat dict or the new v2 list-of-objects
- No `body_sections_format` column — the pipeline Pydantic schema (`pipeline/schemas.py`) has `body_sections_format='v2'` but it is never persisted to the DB
- No association table linking technique pages to source videos

**Video-to-page relationship today**: Indirect via the `key_moments` table — `KeyMoment.source_video_id` + `KeyMoment.technique_page_id`. You can derive contributing videos with a JOIN, but there's no direct record. S04 needs explicit tracking for composition history.
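
For reference, the indirect derivation looks roughly like this in SQLAlchemy Core — table shapes abbreviated and column types illustrative only:

```python
import sqlalchemy as sa

metadata = sa.MetaData()
# Abbreviated stand-in for the real key_moments table.
key_moments = sa.Table(
    "key_moments", metadata,
    sa.Column("source_video_id", sa.String),
    sa.Column("technique_page_id", sa.String),
)

def contributing_video_ids(page_id: str):
    """Distinct source videos that contributed moments to a page."""
    return (
        sa.select(key_moments.c.source_video_id)
        .where(key_moments.c.technique_page_id == page_id)
        .distinct()
    )

sql = str(contributing_video_ids("p1"))
```

This answers "which videos touched this page" but carries no timestamp or composition order, which is why S04 needs the explicit association table.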

**API schemas** (`backend/schemas.py`):
- `TechniquePageBase.body_sections: dict | None` — must become `list | dict | None` to accept the v2 format
- No `body_sections_format` field on any read schema
- No `source_videos` field on the detail response

**Alembic chain**: `001_initial` → ... → `011_cls_cache_rerun`. Next revision = `012`.

**Migration runner** (`alembic/env.py`): Async PostgreSQL via `asyncpg`, `sys.path` bootstrapped for both local and Docker. DB URL from the `DATABASE_URL` env var or the `alembic.ini` fallback (`postgresql+asyncpg://chrysopedia:changeme@localhost:5433/chrysopedia`).

### What Needs to Change

#### 1. New Alembic migration (`alembic/versions/012_multi_source_format.py`)

```
technique_pages:
  + body_sections_format VARCHAR(20) DEFAULT 'v1' NOT NULL

technique_page_videos (new table):
  id                 UUID PK
  technique_page_id  UUID FK → technique_pages.id ON DELETE CASCADE
  source_video_id    UUID FK → source_videos.id ON DELETE CASCADE
  added_at           TIMESTAMP DEFAULT now()
  UNIQUE(technique_page_id, source_video_id)
```

Default `body_sections_format` to `'v1'` for existing rows. New pages from the v2 pipeline will set `'v2'`. This lets the frontend discriminate rendering: format-1 pages use the old dict iteration, format-2 pages use nested section rendering (S05).
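
A sketch of that discrimination, in Python for illustration (the real switch happens in the S05 frontend; the function name is hypothetical):

```python
def render_mode(page: dict) -> str:
    """Pick a renderer from the persisted format flag, defaulting to v1."""
    fmt = page.get("body_sections_format", "v1")
    if fmt == "v2":
        return "nested-sections"  # list-of-objects with H2/H3 nesting
    return "flat-dict"            # legacy iteration over a flat dict

# v2 page carries a list; legacy page carries a dict and no flag.
assert render_mode({"body_sections_format": "v2", "body_sections": []}) == "nested-sections"
assert render_mode({"body_sections": {"intro": "..."}}) == "flat-dict"
```

Defaulting the missing key to `"v1"` is what keeps pre-migration payloads rendering unchanged.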

The `technique_page_videos` table is a simple many-to-many join. `added_at` tracks when a video was composed into the page (useful for the S06 admin UI composition history).

#### 2. Model updates (`backend/models.py`)

Add to `TechniquePage`:

```python
body_sections_format: Mapped[str] = mapped_column(
    String(20), default="v1", server_default="v1", nullable=False
)
```

New association model:

```python
class TechniquePageVideo(Base):
    __tablename__ = "technique_page_videos"
    __table_args__ = (
        UniqueConstraint("technique_page_id", "source_video_id", name="uq_page_video"),
    )

    id: Mapped[uuid.UUID] = _uuid_pk()
    technique_page_id: Mapped[uuid.UUID] = mapped_column(
        ForeignKey("technique_pages.id", ondelete="CASCADE"), nullable=False
    )
    source_video_id: Mapped[uuid.UUID] = mapped_column(
        ForeignKey("source_videos.id", ondelete="CASCADE"), nullable=False
    )
    added_at: Mapped[datetime] = mapped_column(default=_now, server_default=func.now())
```

Add relationships on `TechniquePage`:

```python
source_video_links: Mapped[list[TechniquePageVideo]] = sa_relationship(back_populates="technique_page")
```

And on `SourceVideo` (optional, for reverse navigation):

```python
technique_page_links: Mapped[list[TechniquePageVideo]] = sa_relationship(back_populates="source_video")
```

#### 3. API schema updates (`backend/schemas.py`)

`TechniquePageBase`:
- Change `body_sections: dict | None = None` → `body_sections: list | dict | None = None`
- Add `body_sections_format: str = "v1"`

`TechniquePageRead` inherits from Base, so it gets both automatically.

`TechniquePageDetail`:
- Add `source_videos: list[SourceVideoSummary] = Field(default_factory=list)`
- New lightweight schema:

```python
class SourceVideoSummary(BaseModel):
    model_config = ConfigDict(from_attributes=True)

    id: uuid.UUID
    filename: str
    content_type: str
    added_at: datetime | None = None
```

#### 4. Router query update (`backend/routers/techniques.py`)

In `get_technique()` (the detail endpoint): eagerly load `source_video_links` → `source_video` and build the `source_videos` list on the response. The list endpoint doesn't need source_videos (too heavy for a list view).

#### 5. Pipeline write path (`backend/pipeline/stages.py`)

When creating/updating a TechniquePage in `stage5_synthesis`:
- Set `body_sections_format = "v2"` on the page
- After creating/updating the page, upsert a `TechniquePageVideo` row for the current video

This is S04 scope (pipeline compose-or-create logic), but the model/migration must support it. S03 just needs to ensure the columns and tables exist.

### Constraints and Gotchas

1. **Default value for existing rows**: `body_sections_format` must default to `'v1'` so existing pages don't break. The migration's `server_default='v1'` handles this.

2. **JSONB type flexibility**: The `body_sections` DB column is JSONB, which already accepts any JSON type (list, dict, null). No DDL change needed — only the Pydantic schema type annotation needs widening.

3. **Backfill `technique_page_videos`**: Existing pages don't have entries in the new table. We could backfill from `key_moments` JOINs, but the roadmap demo only requires the table to exist and the API to expose it — actual population happens in S04. An empty `source_videos: []` for existing pages is fine.

4. **Migration must run on the production DB (ub01:5433)**: Use `docker exec chrysopedia-api alembic upgrade head` per the project's existing deployment pattern.

### Task Decomposition Seams

Three natural tasks:
1. **Migration file** — `alembic/versions/012_multi_source_format.py` (standalone, can be verified with `alembic upgrade head`)
2. **Model + Schema** — `models.py` (new column + model), `schemas.py` (response fields) — verifiable with an import check
3. **Router wiring** — `techniques.py` detail endpoint loads source_videos — verifiable with an API call to `/techniques/{slug}`

Tasks 2 and 3 could be one task since they're small, but splitting keeps verification clean.

### Verification Strategy

- `alembic upgrade head` runs clean (no errors, table and column exist)
- `python -c "from models import TechniquePageVideo, TechniquePage"` — imports succeed
- `curl localhost:8000/techniques/{slug}` response includes `body_sections_format` and `source_videos` fields
- For local testing without Docker: SQLAlchemy model import + Pydantic schema serialization test
62
.gsd/milestones/M014/slices/S03/tasks/T01-PLAN.md
Normal file
@ -0,0 +1,62 @@

---
estimated_steps: 29
estimated_files: 3
skills_used: []
---

# T01: Alembic migration + SQLAlchemy model + Pydantic schema updates

Create the Alembic migration for the new column and table, update the SQLAlchemy models, and widen the Pydantic API schemas.

This is the data layer foundation — migration file, model changes, and schema changes. The router wiring comes in T02.

## Steps

1. Create `alembic/versions/012_multi_source_format.py`:
   - Add `body_sections_format` column to `technique_pages`: `VARCHAR(20)`, `NOT NULL`, `server_default='v1'`
   - Create `technique_page_videos` table: `id` (UUID PK), `technique_page_id` (UUID FK → technique_pages.id ON DELETE CASCADE), `source_video_id` (UUID FK → source_videos.id ON DELETE CASCADE), `added_at` (TIMESTAMP, server_default=now()), UNIQUE constraint on (technique_page_id, source_video_id)
   - Downgrade: drop `technique_page_videos` table, drop `body_sections_format` column

2. Update `backend/models.py`:
   - Add `body_sections_format` mapped column to `TechniquePage`: `String(20)`, default `'v1'`, server_default `'v1'`, nullable=False
   - Add `TechniquePageVideo` model class with fields matching the migration, plus a `UniqueConstraint` named `uq_page_video`
   - Add `sa_relationship` on `TechniquePage`: `source_video_links: Mapped[list[TechniquePageVideo]]` with back_populates
   - Add `sa_relationship` on `TechniquePageVideo` back to both `TechniquePage` and `SourceVideo`
   - Note: use `sa_relationship` (the alias) per the KNOWLEDGE.md rule about column names shadowing ORM functions

3. Update `backend/schemas.py`:
   - Change `TechniquePageBase.body_sections` type from `dict | None` to `list | dict | None`
   - Add `body_sections_format: str = "v1"` to `TechniquePageBase`
   - Add `SourceVideoSummary` schema: `id` (UUID), `filename` (str), `content_type` (str), `added_at` (datetime | None)
   - Add `source_videos: list[SourceVideoSummary] = Field(default_factory=list)` to `TechniquePageDetail`

## Must-Haves

- [ ] Migration file has both upgrade and downgrade
- [ ] `body_sections_format` defaults to `'v1'` for existing rows
- [ ] `TechniquePageVideo` has CASCADE deletes on both FKs
- [ ] Unique constraint on (technique_page_id, source_video_id)
- [ ] `body_sections` schema type widened to accept list or dict
- [ ] `SourceVideoSummary` schema uses `from_attributes=True`

## Verification

- `cd backend && python -c "from models import TechniquePageVideo, TechniquePage; print(TechniquePage.body_sections_format); print('OK')"` succeeds
- `cd backend && python -c "from schemas import SourceVideoSummary, TechniquePageDetail; print('OK')"` succeeds
- `alembic upgrade head` runs without error (tested via Docker: `docker exec chrysopedia-api alembic upgrade head`)

## Inputs

- `backend/models.py` — existing TechniquePage and SourceVideo models
- `backend/schemas.py` — existing TechniquePageBase, TechniquePageRead, TechniquePageDetail schemas
- `alembic/versions/011_classification_cache_and_stage_rerun.py` — previous migration in the chain

## Expected Output

- `alembic/versions/012_multi_source_format.py` — new migration adding the column and table
- `backend/models.py` — updated with the TechniquePageVideo model and body_sections_format column
- `backend/schemas.py` — updated with SourceVideoSummary, widened body_sections type, source_videos on detail

## Verification

cd backend && python -c "from models import TechniquePageVideo, TechniquePage; assert hasattr(TechniquePage, 'body_sections_format'); print('models OK')" && python -c "from schemas import SourceVideoSummary, TechniquePageDetail; print('schemas OK')"

80 .gsd/milestones/M014/slices/S03/tasks/T01-SUMMARY.md Normal file

@ -0,0 +1,80 @@
---
id: T01
parent: S03
milestone: M014
provides: []
requires: []
affects: []
key_files: ["alembic/versions/012_multi_source_format.py", "backend/models.py", "backend/schemas.py"]
key_decisions: ["Used TIMESTAMP (not WITH TIME ZONE) for added_at to stay consistent with existing schema convention"]
patterns_established: []
drill_down_paths: []
observability_surfaces: []
duration: ""
verification_result: "Model imports pass with attribute assertions. Schema imports pass. Migration file compiles cleanly. Alembic upgrade head deferred to Docker execution on ub01."
completed_at: 2026-04-03T01:16:27.993Z
blocker_discovered: false
---

# T01: Added body_sections_format column, technique_page_videos association table, and SourceVideoSummary schema for multi-source technique pages

> Added body_sections_format column, technique_page_videos association table, and SourceVideoSummary schema for multi-source technique pages

## What Happened

Created Alembic migration 012 with body_sections_format VARCHAR(20) NOT NULL DEFAULT 'v1' on technique_pages and a technique_page_videos association table with dual CASCADE FKs and a unique constraint. Updated the SQLAlchemy models with a TechniquePageVideo class and the body_sections_format column. Widened the Pydantic body_sections type to list | dict | None, added the SourceVideoSummary schema, and added a source_videos field to TechniquePageDetail.

## Verification

Model imports pass with attribute assertions. Schema imports pass. Migration file compiles cleanly. Alembic upgrade head deferred to Docker execution on ub01.

## Verification Evidence

| # | Command | Exit Code | Verdict | Duration |
|---|---------|-----------|---------|----------|
| 1 | `cd backend && python -c "from models import TechniquePageVideo, TechniquePage; assert hasattr(TechniquePage, 'body_sections_format'); print('models OK')"` | 0 | ✅ pass | 500ms |
| 2 | `cd backend && python -c "from schemas import SourceVideoSummary, TechniquePageDetail; print('schemas OK')"` | 0 | ✅ pass | 400ms |
| 3 | `python -c "import py_compile; py_compile.compile('alembic/versions/012_multi_source_format.py', doraise=True)"` | 0 | ✅ pass | 200ms |

## Deviations

None.

## Known Issues

None.

## Files Created/Modified

- `alembic/versions/012_multi_source_format.py`
- `backend/models.py`
- `backend/schemas.py`
49 .gsd/milestones/M014/slices/S03/tasks/T02-PLAN.md Normal file

@ -0,0 +1,49 @@
---
estimated_steps: 20
estimated_files: 1
skills_used: []
---

# T02: Wire source_videos into technique detail endpoint

Update the technique detail endpoint to eagerly load source_video_links and build the source_videos list on the response. Verify end-to-end with an API call.

## Steps

1. Update `backend/routers/techniques.py` `get_technique()` function:

   - Add `selectinload(TechniquePage.source_video_links).selectinload(TechniquePageVideo.source_video)` to the query options
   - After building the existing response fields, build the `source_videos` list from `page.source_video_links`
   - Each item: `SourceVideoSummary(id=link.source_video.id, filename=link.source_video.filename, content_type=link.source_video.content_type.value, added_at=link.added_at)`
   - Add imports for `TechniquePageVideo` and `SourceVideoSummary` at the top of the file
   - Pass `source_videos=source_videos` to the `TechniquePageDetail(...)` constructor

2. Verify the endpoint returns the new fields:

   - Existing pages should return `body_sections_format: "v1"` (inherited from the schema default + model default)
   - Existing pages should return `source_videos: []` (no rows in the association table yet)
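The per-item shaping in step 1 can be sketched with stand-in objects (the real code operates on ORM instances and validates through `SourceVideoSummary`; `build_source_videos` and the `SimpleNamespace` rows here are illustrative only):

```python
from types import SimpleNamespace

def build_source_videos(source_video_links):
    """Shape association rows into the list carried on the detail response."""
    return [
        {
            "id": link.source_video.id,
            "filename": link.source_video.filename,
            "content_type": link.source_video.content_type,
            "added_at": link.added_at,
        }
        for link in source_video_links
    ]

# Stand-in for a page with one linked video
link = SimpleNamespace(
    added_at="2026-04-03T01:16:27Z",
    source_video=SimpleNamespace(id="v1", filename="mix.mp4", content_type="tutorial"),
)
print(build_source_videos([link])[0]["filename"])  # mix.mp4
print(build_source_videos([]))  # [] for pages with no linked videos yet
```

Note that an empty link set yields an empty list, never null, which is exactly what the Must-Haves below require for existing pages.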
## Must-Haves

- [ ] `selectinload` for source_video_links chained to source_video
- [ ] `source_videos` list built from association table rows
- [ ] Existing pages return an empty `source_videos` list (not null, not omitted)
- [ ] `body_sections_format` appears in the response

## Verification

- Pick any existing technique slug from the DB
- `curl -s http://ub01:8096/techniques/{slug} | python3 -m json.tool | grep -E 'body_sections_format|source_videos'` shows both fields
- Or, if testing locally: verify the router code by reading the constructed response and confirming both fields are passed

## Inputs

- `backend/models.py` — TechniquePageVideo model and source_video_links relationship (from T01)
- `backend/schemas.py` — SourceVideoSummary and updated TechniquePageDetail (from T01)
- `backend/routers/techniques.py` — existing get_technique endpoint

## Expected Output

- `backend/routers/techniques.py` — updated with source_videos loading and response construction

## Verification

ssh ub01 'docker exec chrysopedia-api alembic upgrade head' && ssh ub01 'curl -s http://localhost:8000/techniques/$(docker exec chrysopedia-db psql -U chrysopedia -tAc "SELECT slug FROM technique_pages LIMIT 1") | python3 -m json.tool | grep -E "body_sections_format|source_videos"'
55 alembic/versions/012_multi_source_format.py Normal file

@ -0,0 +1,55 @@
"""Add body_sections_format column and technique_page_videos association table.
|
||||||
|
|
||||||
|
Supports multi-source technique pages: tracks which source videos contributed
|
||||||
|
to a technique page, and marks the body_sections format version for future
|
||||||
|
structured section layouts.
|
||||||
|
|
||||||
|
Revision ID: 012_multi_source_fmt
|
||||||
|
Revises: 011_cls_cache_rerun
|
||||||
|
"""
|
||||||
|
from alembic import op
|
||||||
|
import sqlalchemy as sa
|
||||||
|
from sqlalchemy.dialects.postgresql import UUID
|
||||||
|
|
||||||
|
revision = "012_multi_source_fmt"
|
||||||
|
down_revision = "011_cls_cache_rerun"
|
||||||
|
branch_labels = None
|
||||||
|
depends_on = None
|
||||||
|
|
||||||
|
|
||||||
|
def upgrade() -> None:
|
||||||
|
# Add body_sections_format to technique_pages with default for existing rows
|
||||||
|
op.add_column(
|
||||||
|
"technique_pages",
|
||||||
|
sa.Column(
|
||||||
|
"body_sections_format",
|
||||||
|
sa.String(20),
|
||||||
|
nullable=False,
|
||||||
|
server_default="v1",
|
||||||
|
),
|
||||||
|
)
|
||||||
|
|
||||||
|
# Create technique_page_videos association table
|
||||||
|
op.create_table(
|
||||||
|
"technique_page_videos",
|
||||||
|
sa.Column("id", UUID(as_uuid=True), primary_key=True, server_default=sa.func.gen_random_uuid()),
|
||||||
|
sa.Column(
|
||||||
|
"technique_page_id",
|
||||||
|
UUID(as_uuid=True),
|
||||||
|
sa.ForeignKey("technique_pages.id", ondelete="CASCADE"),
|
||||||
|
nullable=False,
|
||||||
|
),
|
||||||
|
sa.Column(
|
||||||
|
"source_video_id",
|
||||||
|
UUID(as_uuid=True),
|
||||||
|
sa.ForeignKey("source_videos.id", ondelete="CASCADE"),
|
||||||
|
nullable=False,
|
||||||
|
),
|
||||||
|
sa.Column("added_at", sa.TIMESTAMP(), server_default=sa.func.now(), nullable=False),
|
||||||
|
sa.UniqueConstraint("technique_page_id", "source_video_id", name="uq_page_video"),
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def downgrade() -> None:
|
||||||
|
op.drop_table("technique_page_videos")
|
||||||
|
op.drop_column("technique_pages", "body_sections_format")
|
||||||
|
|
@ -216,6 +216,9 @@ class TechniquePage(Base):
    topic_tags: Mapped[list[str] | None] = mapped_column(ARRAY(String), nullable=True)
    summary: Mapped[str | None] = mapped_column(Text, nullable=True)
    body_sections: Mapped[dict | None] = mapped_column(JSONB, nullable=True)
    body_sections_format: Mapped[str] = mapped_column(
        String(20), nullable=False, default="v1", server_default="v1"
    )
    signal_chains: Mapped[list | None] = mapped_column(JSONB, nullable=True)
    plugins: Mapped[list[str] | None] = mapped_column(ARRAY(String), nullable=True)
    source_quality: Mapped[SourceQuality | None] = mapped_column(

@ -244,6 +247,9 @@ class TechniquePage(Base):
    incoming_links: Mapped[list[RelatedTechniqueLink]] = sa_relationship(
        foreign_keys="RelatedTechniqueLink.target_page_id", back_populates="target_page"
    )
    source_video_links: Mapped[list[TechniquePageVideo]] = sa_relationship(
        back_populates="technique_page"
    )


class RelatedTechniqueLink(Base):

@ -303,6 +309,31 @@ class Tag(Base):
    aliases: Mapped[list[str] | None] = mapped_column(ARRAY(String), nullable=True)


class TechniquePageVideo(Base):
    """Association linking a technique page to its contributing source videos."""
    __tablename__ = "technique_page_videos"
    __table_args__ = (
        UniqueConstraint("technique_page_id", "source_video_id", name="uq_page_video"),
    )

    id: Mapped[uuid.UUID] = _uuid_pk()
    technique_page_id: Mapped[uuid.UUID] = mapped_column(
        ForeignKey("technique_pages.id", ondelete="CASCADE"), nullable=False
    )
    source_video_id: Mapped[uuid.UUID] = mapped_column(
        ForeignKey("source_videos.id", ondelete="CASCADE"), nullable=False
    )
    added_at: Mapped[datetime] = mapped_column(
        default=_now, server_default=func.now()
    )

    # relationships
    technique_page: Mapped[TechniquePage] = sa_relationship(
        back_populates="source_video_links"
    )
    source_video: Mapped[SourceVideo] = sa_relationship()


# ── Content Report Enums ─────────────────────────────────────────────────────

class ReportType(str, enum.Enum):
@ -122,7 +122,8 @@ class TechniquePageBase(BaseModel):
    topic_category: str
    topic_tags: list[str] | None = None
    summary: str | None = None
    body_sections: list | dict | None = None
    body_sections_format: str = "v1"
    signal_chains: list | None = None
    plugins: list[str] | None = None

@ -275,12 +276,23 @@ class CreatorInfo(BaseModel):
    genres: list[str] | None = None


class SourceVideoSummary(BaseModel):
    """Lightweight source video info for technique page detail."""
    model_config = ConfigDict(from_attributes=True)

    id: uuid.UUID
    filename: str
    content_type: str
    added_at: datetime | None = None


class TechniquePageDetail(TechniquePageRead):
    """Technique page with nested key moments, creator, and related links."""
    key_moments: list[KeyMomentSummary] = Field(default_factory=list)
    creator_info: CreatorInfo | None = None
    related_links: list[RelatedLinkItem] = Field(default_factory=list)
    version_count: int = 0
    source_videos: list[SourceVideoSummary] = Field(default_factory=list)


# ── Technique Page Versions ──────────────────────────────────────────────────
7 conftest.py Normal file

@ -0,0 +1,7 @@

"""Root conftest: ensure backend/ is on sys.path for symlinked test discovery."""
import os
import sys

_backend = os.path.join(os.path.dirname(__file__), "backend")
if _backend not in sys.path:
    sys.path.insert(0, _backend)