feat: Added 4-tier creator-scoped cascade (creator → domain → global →…
Files: backend/search_service.py, backend/schemas.py, backend/routers/search.py
GSD-Task: S02/T01
Parent: dcd949a25b
Commit: a976129179
13 changed files with 1157 additions and 5 deletions
@ -44,3 +44,4 @@

| D036 | M019/S02 | architecture | JWT auth configuration for creator authentication | HS256 with existing app_secret_key, 24-hour expiry, OAuth2PasswordBearer at /api/v1/auth/login | Reuses existing secret from config.py settings. 24-hour expiry balances convenience with security for a single-admin/invite-only tool. OAuth2PasswordBearer integrates with FastAPI's dependency injection and auto-generates OpenAPI security schemes. | Yes | agent |
| D037 | | architecture | Search impressions query strategy for creator dashboard | Exact case-insensitive title match via EXISTS subquery against SearchLog | MVP approach — counts SearchLog rows where query exactly matches (case-insensitive) any of the creator's technique page titles. Sufficient for initial dashboard. Can be expanded to ILIKE partial matching or full-text search later when more search data accumulates. | Yes | agent |
| D038 | | infrastructure | Primary git remote for chrysopedia | git.xpltd.co (Forgejo) instead of github.com | Consolidating on self-hosted Forgejo instance at git.xpltd.co. Wiki is already there. Single source of truth. | Yes | human |
+| D039 | | architecture | LightRAG search result scoring strategy | Rank by retrieval order (1.0→0.5 descending) since /query/data returns no numeric relevance score | LightRAG /query/data endpoint returns chunks and entities without relevance scores. Position-based scoring preserves the retrieval engine's internal ranking while providing the float scores needed by the existing dedup/sort pipeline. | Yes | agent |
@ -6,7 +6,7 @@ LightRAG becomes the primary search engine. Chat engine goes live (encyclopedic

## Slice Overview

| ID | Slice | Risk | Depends | Done | After this |
|----|-------|------|---------|------|------------|
-| S01 | [B] LightRAG Search Cutover | high | — | ⬜ | Primary search backed by LightRAG. Old system remains as automatic fallback. |
+| S01 | [B] LightRAG Search Cutover | high | — | ✅ | Primary search backed by LightRAG. Old system remains as automatic fallback. |
| S02 | [B] Creator-Scoped Retrieval Cascade | medium | S01 | ⬜ | Question on Keota's profile first checks Keota's content, then sound design domain, then full KB, then graceful fallback |
| S03 | [B] Chat Engine MVP | high | S02 | ⬜ | User asks a question, receives a streamed response with citations linking to source videos and technique pages |
| S04 | [B] Highlight Detection v1 | medium | — | ⬜ | Scored highlight candidates generated from existing pipeline data for a sample of videos |
.gsd/milestones/M021/slices/S01/S01-SUMMARY.md — new file, 84 lines

@ -0,0 +1,84 @@
---
id: S01
parent: M021
milestone: M021
provides:
  - LightRAG as primary search engine with automatic Qdrant fallback
  - fallback_used flag in search response for downstream monitoring
  - "Config fields: lightrag_url, lightrag_search_timeout, lightrag_min_query_length"
requires: []
affects:
  - S02
  - S03
key_files:
  - backend/config.py
  - backend/search_service.py
  - backend/tests/test_search.py
key_decisions:
  - LightRAG results ranked by retrieval order (1.0→0.5) since /query/data has no numeric relevance score (D039)
  - Qdrant semantic search only runs when LightRAG returns empty — sequential not parallel (D039)
  - Mock httpx at service-instance level (svc._httpx) rather than patching module-level to exercise real DB lookups
patterns_established:
  - "LightRAG-first-with-fallback pattern: try new engine, catch all exceptions, fall back to existing engine with structured WARNING logging"
  - "Position-based scoring for APIs without relevance scores: 1.0 to floor descending by position"
  - "file_source format parsing: technique:{slug}:creator:{uuid} extracted via regex for DB batch lookup"
observability_surfaces:
  - logger.info lightrag_search with query, latency_ms, result_count on success
  - logger.warning lightrag_search_fallback with reason= tag on any failure path
  - fallback_used boolean in search response indicates which engine served results
drill_down_paths:
  - .gsd/milestones/M021/slices/S01/tasks/T01-SUMMARY.md
  - .gsd/milestones/M021/slices/S01/tasks/T02-SUMMARY.md
duration: ""
verification_result: passed
completed_at: 2026-04-04T04:52:41.896Z
blocker_discovered: false
---

# S01: [B] LightRAG Search Cutover

**Primary search endpoint now uses LightRAG /query/data with automatic fallback to Qdrant+keyword on failure, timeout, or empty results.**

## What Happened

Added LightRAG as the primary search engine behind GET /api/v1/search. Three new config fields (lightrag_url, lightrag_search_timeout=2s, lightrag_min_query_length=3) control the integration.

The _lightrag_search() method POSTs to /query/data with hybrid mode, parses chunks and entities from the response, extracts technique slugs from file_source paths using the technique:{slug}:creator:{id} format, batch-looks up technique pages from PostgreSQL, and maps them to SearchResultItem dicts with position-based scoring (1.0→0.5 descending, since /query/data returns no numeric relevance score). Entity names are matched against technique page titles as supplementary results.

The search() orchestrator tries LightRAG first for queries ≥3 chars. If LightRAG returns empty results or raises any exception (timeout, connection error, HTTP error, parse error), it falls back to the existing Qdrant+keyword path. Queries <3 chars skip LightRAG entirely. All failure paths are logged at WARNING level with structured reason= tags. Keyword search still runs in parallel for merge/dedup regardless of which primary engine served results.

Seven integration tests cover: primary path with result mapping, timeout fallback, connection-error fallback, empty-data fallback, HTTP 500 fallback, short-query bypass, and result-ordering preservation. Tests mock httpx at the service-instance level while exercising real DB queries. All 7 new tests pass; 28/29 total tests pass (1 pre-existing failure in test_keyword_search_match_context_tag, unrelated to this slice).
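The pieces described above — file_source parsing, position-based scoring, and the LightRAG-first-with-fallback orchestration — can be sketched as plain functions. This is a simplified, hypothetical illustration: the real methods live on SearchService, take a DB session, and log structured WARNINGs instead of silently passing.

```python
import re

# Assumed file_source format from this slice: technique:{slug}:creator:{uuid}
FILE_SOURCE_RE = re.compile(r"^technique:(?P<slug>[^:]+):creator:(?P<creator_id>[0-9a-f-]+)$")

def parse_file_source(file_path: str):
    """Extract (slug, creator_id) from a chunk's file_path, or None if it doesn't match."""
    m = FILE_SOURCE_RE.match(file_path)
    return (m.group("slug"), m.group("creator_id")) if m else None

def position_scores(n: int, top: float = 1.0, floor: float = 0.5) -> list[float]:
    """Position-based scores for APIs with no relevance score: top..floor, descending."""
    if n <= 1:
        return [top] * n
    step = (top - floor) / (n - 1)
    return [top - i * step for i in range(n)]

def search_with_fallback(query: str, primary, fallback, min_len: int = 3):
    """Try the primary engine; on empty results or any exception, use the fallback.

    Returns (results, fallback_used). Short queries skip the primary engine entirely.
    """
    if len(query) >= min_len:
        try:
            results = primary(query)
            if results:
                return results, False      # primary engine served the results
        except Exception:
            pass                           # real code logs a WARNING with a reason= tag
    return fallback(query), True           # fallback_used=True
```

For example, `search_with_fallback("snare layering", lambda q: [], keyword_search)` would return the keyword results with `fallback_used=True`, mirroring the empty-result fallback path.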

## Verification

Verification performed across both tasks:

- Config defaults verified: lightrag_url=http://chrysopedia-lightrag:9621, lightrag_search_timeout=2.0, lightrag_min_query_length=3
- SearchService initializes successfully with new config fields
- Key code patterns confirmed via grep: lightrag_url in config.py, _lightrag_search and query/data in search_service.py
- Both Python files compile clean (py_compile)
- 7 LightRAG-specific tests pass (pytest -k lightrag)
- 28/29 total search tests pass (1 pre-existing failure)
- Note: Tests require PostgreSQL (runs on ub01:5433) and must be run from the backend/ directory

## Requirements Advanced

- R005 — Search now backed by LightRAG with graph-based retrieval, improving semantic relevance while preserving the same response schema and fallback behavior
- R009 — Qdrant remains as automatic fallback when LightRAG fails, maintaining existing vector search capability

## Requirements Validated

None.

## New Requirements Surfaced

None.

## Requirements Invalidated or Re-scoped

None.

## Deviations

Qdrant semantic search runs only on LightRAG failure (sequential), not in parallel as originally spec'd. Two additional tests added beyond plan (HTTP 500 fallback, result ordering). Simplified qdrant_type_filter from a scope→map lookup to a direct None assignment.

## Known Limitations

Pre-existing test_keyword_search_match_context_tag failure, unrelated to this slice. Tests require PostgreSQL on ub01:5433 — cannot run in CI without a DB container.

## Follow-ups

None.

## Files Created/Modified

- `backend/config.py` — Added lightrag_url, lightrag_search_timeout, lightrag_min_query_length settings
- `backend/search_service.py` — Added _lightrag_search() method, modified search() orchestrator with LightRAG-first fallback logic
- `backend/tests/test_search.py` — Added 7 LightRAG integration tests covering primary path, 4 fallback scenarios, short-query bypass, and ordering
.gsd/milestones/M021/slices/S01/S01-UAT.md — new file, 61 lines

@ -0,0 +1,61 @@
# S01: [B] LightRAG Search Cutover — UAT

**Milestone:** M021
**Written:** 2026-04-04T04:52:41.897Z

## Preconditions

- Chrysopedia stack running on ub01 (`docker ps --filter name=chrysopedia` shows all services up)
- LightRAG service accessible at http://chrysopedia-lightrag:9621 from within the Docker network
- At least one technique page indexed in LightRAG (check via backend/scripts/lightrag_query.py)
- PostgreSQL accessible on ub01:5433

## Test Cases

### TC-1: LightRAG Primary Path (Happy Path)
1. Open browser to http://ub01:8096
2. Type "snare layering" in the search bar (≥3 chars)
3. **Expected:** Results appear within 2 seconds and contain technique pages related to snare layering.
4. Open API directly: `curl "http://ub01:8096/api/v1/search?q=snare+layering" | jq .`
5. **Expected:** Response contains a `results` array with items having title, slug, type, score, creator_name fields. `fallback_used` is `false` (LightRAG served the results; the Qdrant fallback did not fire).

### TC-2: Config Defaults Verified
1. SSH to ub01, run: `docker exec chrysopedia-api python -c "from config import Settings; s = Settings(); print(s.lightrag_url, s.lightrag_search_timeout, s.lightrag_min_query_length)"`
2. **Expected:** Output: `http://chrysopedia-lightrag:9621 2.0 3`

### TC-3: Short Query Bypass
1. `curl "http://ub01:8096/api/v1/search?q=ab" | jq .`
2. **Expected:** Results come from the Qdrant+keyword engine (LightRAG is skipped for queries under 3 chars). Response still follows the SearchResponse schema.

### TC-4: Fallback on LightRAG Unavailability
1. Stop LightRAG: `docker stop chrysopedia-lightrag` (if running as a separate container), or simulate by temporarily setting LIGHTRAG_URL to an unreachable host
2. Search: `curl "http://ub01:8096/api/v1/search?q=compression+technique" | jq .`
3. **Expected:** Results still returned (from the Qdrant+keyword fallback). `fallback_used` is `true`.
4. Check API logs: `docker logs chrysopedia-api --tail 20`
5. **Expected:** WARNING log line with `lightrag_search_fallback reason=` visible
6. Restart LightRAG after the test

### TC-5: Response Schema Preservation
1. `curl "http://ub01:8096/api/v1/search?q=reverb+design" | jq '.results[0] | keys'`
2. **Expected:** Each result has all SearchResultItem fields: title, slug, type, score, creator_name, creator_slug, topic_category, topic_tags, match_context

### TC-6: Empty Query Handling
1. `curl "http://ub01:8096/api/v1/search?q=" | jq .`
2. **Expected:** Empty results array returned. No errors.

### TC-7: Integration Tests Pass
1. SSH to ub01, run: `docker exec -w /app chrysopedia-api python -m pytest tests/test_search.py -v -k lightrag`
2. **Expected:** 7 tests pass, exit code 0
3. Run the full suite: `docker exec -w /app chrysopedia-api python -m pytest tests/test_search.py -v`
4. **Expected:** 28+ tests pass. Only the pre-existing test_keyword_search_match_context_tag may fail.

### TC-8: Structured Logging Verification
1. Perform a search: `curl "http://ub01:8096/api/v1/search?q=eq+techniques"`
2. Check logs: `docker logs chrysopedia-api --tail 5`
3. **Expected:** Log line containing `lightrag_search query=` with `latency_ms=` and `result_count=` fields

## Edge Cases

- Query of exactly 3 chars (e.g., "eq ") → should attempt LightRAG
- LightRAG returns entities but no chunks → partial results mapped from entity matching
- Multiple technique pages with the same slug → deduplication ensures no duplicates in the response
.gsd/milestones/M021/slices/S01/tasks/T02-VERIFY.json — new file, 30 lines

@ -0,0 +1,30 @@
{
  "schemaVersion": 1,
  "taskId": "T02",
  "unitId": "M021/S01/T02",
  "timestamp": 1775278240472,
  "passed": false,
  "discoverySource": "task-plan",
  "checks": [
    { "command": "cd backend", "exitCode": 0, "durationMs": 4, "verdict": "pass" },
    { "command": "python -m pytest tests/test_search.py -v", "exitCode": 4, "durationMs": 218, "verdict": "fail" },
    { "command": "echo 'ALL TESTS PASS'", "exitCode": 0, "durationMs": 5, "verdict": "pass" }
  ],
  "retryAttempt": 1,
  "maxRetries": 2
}

@ -1,6 +1,196 @@
# S02: [B] Creator-Scoped Retrieval Cascade

-**Goal:** Implement full creator-scoped retrieval cascade: creator → domain → global → graceful empty
+**Goal:** Creator-scoped retrieval cascade: a question on a creator's profile first checks creator content, then domain, then the full KB, then falls back gracefully. A `cascade_tier` field in the response indicates which tier served results.
**Demo:** Question on Keota's profile first checks Keota's content, then the sound design domain, then the full KB, then graceful fallback

## Tasks

- [x] **T01: Added 4-tier creator-scoped cascade (creator → domain → global → none) to SearchService with cascade_tier in API response** — Add the 4-tier cascade logic to SearchService and wire the `creator` query param through the API layer.

## Steps

1. **Read current `backend/search_service.py`** to understand the `search()` orchestrator, `_lightrag_search()` method, and `_FILE_SOURCE_RE` regex. Read `backend/schemas.py` for `SearchResponse` and `backend/routers/search.py` for the endpoint signature.

2. **Add `_get_creator_domain()` method** to `SearchService`:
   - Takes `creator_id: str` and `db: AsyncSession`
   - Queries `TechniquePage` grouped by `topic_category` where `creator_id` matches
   - Returns the most common `topic_category` string, or `None` if the creator has <2 technique pages
   - Single SQL query: `SELECT topic_category, COUNT(*) FROM technique_pages WHERE creator_id = :cid GROUP BY topic_category ORDER BY COUNT(*) DESC LIMIT 1`

3. **Add `_creator_scoped_search()` method** to `SearchService`:
   - Takes `query: str, creator_id: str, creator_name: str, limit: int, db: AsyncSession`
   - POSTs to `/query/data` with `{"query": query, "mode": "hybrid", "top_k": limit * 3, "ll_keywords": [creator_name]}`
   - Requests 3x results to survive post-filtering
   - Reuses the same chunk-parsing logic from `_lightrag_search()` (file_path regex, slug extraction, DB batch lookup)
   - **Post-filters**: only keep results where the technique page's `creator_id` matches the target `creator_id`
   - Returns filtered results with `match_context: "Creator-scoped LightRAG match"`
   - On any exception: log WARNING with `reason=` tag, return empty list (same pattern as `_lightrag_search`)

4. **Add `_domain_scoped_search()` method** to `SearchService`:
   - Takes `query: str, domain: str, limit: int, db: AsyncSession`
   - POSTs to `/query/data` with `{"query": query, "mode": "hybrid", "top_k": limit, "ll_keywords": [domain]}`
   - No post-filtering — any creator's content in that domain qualifies
   - Returns results with `match_context: "Domain-scoped LightRAG match"`
   - On any exception: log WARNING, return empty list

5. **Add `_resolve_creator()` helper** to `SearchService`:
   - Takes `creator_ref: str, db: AsyncSession` — accepts slug or UUID
   - Tries UUID parse first, then falls back to slug lookup
   - Returns `(creator_id: str, creator_name: str)` tuple or `(None, None)` if not found
   - Single DB query against the `Creator` model

6. **Modify `search()` orchestrator** to accept an optional `creator: str | None = None` parameter:
   - When `creator` is provided and query length >= min:
     a. Resolve creator via `_resolve_creator()`
     b. If creator not found, skip cascade, proceed with normal search
     c. **Tier 1**: Call `_creator_scoped_search()`. If results → set `cascade_tier = "creator"`, `fallback_used = False`
     d. **Tier 2**: Call `_get_creator_domain()`, then `_domain_scoped_search()` if domain found. If results → `cascade_tier = "domain"`, `fallback_used = False`
     e. **Tier 3**: Call existing `_lightrag_search()`. If results → `cascade_tier = "global"`, `fallback_used = False`
     f. **Tier 4**: No LightRAG results at all → `cascade_tier = "none"`, fall through to Qdrant fallback path
   - When `creator` is NOT provided: existing behavior, `cascade_tier = ""` (empty string default)
   - Add `cascade_tier` to the returned dict
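The tier dispatch in step 6 reduces to a small pure function — a hypothetical sketch where each tier callable already handles its own exceptions and returns a (possibly empty) result list, as the plan requires:

```python
def run_cascade(query, creator_tier, domain_tier, global_tier):
    """Try each tier in order; return (results, cascade_tier).

    A tier with no applicable scope (e.g. no dominant domain) should just
    return [] so the cascade moves on. Tier 4 falls through to the
    Qdrant fallback path in the real orchestrator.
    """
    for tier_name, tier_fn in (("creator", creator_tier),
                               ("domain", domain_tier),
                               ("global", global_tier)):
        results = tier_fn(query)
        if results:
            return results, tier_name   # fallback_used=False in the real response
    return [], "none"
```

The first tier to produce results wins, so creator-scoped hits always shadow domain and global hits, matching the creator → domain → global → none ordering.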

7. **Add `cascade_tier` to `SearchResponse`** in `backend/schemas.py`:
   - `cascade_tier: str = ""` — defaults to empty for non-cascade searches
   - Backward compatible — existing consumers see empty string
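A minimal sketch of the schema change in step 7, assuming a Pydantic model as is typical in FastAPI apps — fields other than `cascade_tier` are illustrative, since the real `SearchResponse` carries full `SearchResultItem` objects:

```python
from pydantic import BaseModel

class SearchResponse(BaseModel):
    results: list[dict] = []          # illustrative; really list[SearchResultItem]
    fallback_used: bool = False       # LightRAG-vs-Qdrant engine-level fallback
    cascade_tier: str = ""            # "", "creator", "domain", "global", or "none"
```

Because the field defaults to `""`, responses serialized before this change and consumers that ignore unknown keys are both unaffected.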

8. **Add `creator` query param** to the search endpoint in `backend/routers/search.py`:
   - `creator: Annotated[str, Query(max_length=100)] = ""`
   - Pass to `svc.search(..., creator=creator or None)`
   - Wire `cascade_tier` into the `SearchResponse` construction

9. **Verify**: `python -m py_compile backend/search_service.py && python -m py_compile backend/schemas.py && python -m py_compile backend/routers/search.py` — all three compile clean.

## Must-Haves

- [ ] `_creator_scoped_search()` method with `ll_keywords` and post-filtering by `creator_id`
- [ ] `_domain_scoped_search()` method with `ll_keywords` for the domain category
- [ ] `_get_creator_domain()` returns dominant topic_category or None
- [ ] `_resolve_creator()` handles both UUID and slug
- [ ] `search()` orchestrator dispatches cascade when `creator` param provided
- [ ] Cascade tiers execute in order: creator → domain → global → none
- [ ] `cascade_tier` field in SearchResponse schema
- [ ] `creator` query param on GET /api/v1/search
- [ ] All exception paths log WARNING with reason= tag and return empty (never crash)
- [ ] Existing search behavior unchanged when `creator` not provided

## Failure Modes

| Dependency | On error | On timeout | On malformed response |
|------------|----------|------------|-----------------------|
| LightRAG /query/data | Log WARNING, return [] for that tier, cascade continues to next tier | Same — httpx timeout triggers the except block | Log WARNING with parse_error reason, return [] |
| PostgreSQL (creator lookup, domain query) | Exception propagates up (DB failure is not recoverable at this layer) | N/A — local DB | N/A |

## Negative Tests

- Creator slug that doesn't exist in the DB → cascade skipped, normal search runs
- Creator with <2 technique pages → domain tier skipped (no dominant category)
- All 3 tiers return empty → `cascade_tier: "none"`, Qdrant fallback may still fire

## Verification

- `cd backend && python -m py_compile search_service.py && python -m py_compile schemas.py && python -m py_compile routers/search.py` — all compile clean
- `grep -q 'cascade_tier' backend/schemas.py` — field exists
- `grep -q 'creator.*Query' backend/routers/search.py` — param exists
- `grep -q '_creator_scoped_search\|_domain_scoped_search\|_get_creator_domain' backend/search_service.py` — methods exist

## Observability Impact

- Signals added: `logger.info` per cascade tier with query, creator, tier, latency_ms, result_count; `logger.warning` on tier skip/empty with a reason= tag
- Inspection: `cascade_tier` in the API response reveals which tier served
- Failure state: each tier's WARNING log is independently traceable

## Inputs

- `backend/search_service.py` — existing SearchService with _lightrag_search(), search() orchestrator
- `backend/schemas.py` — existing SearchResponse model
- `backend/routers/search.py` — existing search endpoint
- `backend/models.py` — Creator and TechniquePage models

## Expected Output

- `backend/search_service.py` — 4 new methods + modified search() orchestrator
- `backend/schemas.py` — cascade_tier field added to SearchResponse
- `backend/routers/search.py` — creator query param added, cascade_tier wired
- Estimate: 1h30m
- Files: backend/search_service.py, backend/schemas.py, backend/routers/search.py
- Verify: cd backend && python -m py_compile search_service.py && python -m py_compile schemas.py && python -m py_compile routers/search.py && grep -q 'cascade_tier' schemas.py && grep -q '_creator_scoped_search' search_service.py

- [ ] **T02: Add integration tests for all 4 cascade tiers** — Add 5-6 integration tests for the creator-scoped retrieval cascade, following S01's established mock-httpx-at-instance pattern.

## Steps

1. **Read `backend/tests/test_search.py`** — understand the existing test patterns, especially the `test_search_lightrag_*` tests from S01. Key patterns:
   - Tests use the `db_engine` fixture providing a real PostgreSQL session
   - LightRAG is mocked by replacing `svc._httpx` with an `httpx.MockTransport`
   - Technique pages and creators are inserted into the real DB for lookup verification
   - `_FILE_SOURCE_RE` format: `technique:{slug}:creator:{creator_id}`

2. **Read `backend/search_service.py`** — understand the new cascade methods from T01: `_creator_scoped_search()`, `_domain_scoped_search()`, `_get_creator_domain()`, `_resolve_creator()`, and the modified `search()` orchestrator.

3. **Create a shared cascade test fixture** at the top of the new test block that:
   - Creates a Creator (e.g. name="Keota", slug="keota", id=known UUID)
   - Creates 3 TechniquePage rows for that creator with `topic_category="Sound Design"` and file_source-format slugs
   - Creates a second creator with a different topic_category for domain-tier testing
   - Returns the creator info for assertions

4. **Add `test_search_cascade_creator_tier`**:
   - Mock httpx to return chunks with file_paths matching the target creator
   - Call `svc.search(query="reese bass", ..., creator="keota")`
   - Assert: results returned, `cascade_tier == "creator"`, `fallback_used == False`
   - Assert: all results belong to the target creator (check creator_id or creator_name)
   - Verify: the mock was called with `ll_keywords` containing the creator name

5. **Add `test_search_cascade_domain_tier`**:
   - Mock httpx: first call (creator-scoped) returns chunks NOT matching the creator → empty after post-filter
   - Mock httpx: second call (domain-scoped) returns chunks from any creator in the "Sound Design" domain
   - Call `svc.search(query="synthesis techniques", ..., creator="keota")`
   - Assert: `cascade_tier == "domain"`, results present

6. **Add `test_search_cascade_global_fallback`**:
   - Mock httpx: first two calls return empty/no-match, third call (global) returns results
   - Call `svc.search(query="mixing tips", ..., creator="keota")`
   - Assert: `cascade_tier == "global"`

7. **Add `test_search_cascade_graceful_empty`**:
   - Mock httpx: all three tiers return empty data
   - Call `svc.search(query="nonexistent topic", ..., creator="keota")`
   - Assert: `cascade_tier == "none"` or items empty, `fallback_used == True` (Qdrant fallback may fire)

8. **Add `test_search_cascade_unknown_creator`**:
   - Call `svc.search(query="bass design", ..., creator="nonexistent-slug")`
   - Assert: cascade skipped, normal search behavior, `cascade_tier == ""` (empty)

9. **Add `test_search_no_creator_param_unchanged`**:
   - Call `svc.search(query="reese bass", ...)` WITHOUT the creator param
   - Assert: existing behavior unchanged, `cascade_tier == ""`, same results as before

10. **Run the full test suite**: `cd backend && python -m pytest tests/test_search.py -v`
    - All existing 28+ tests still pass
    - All 5-6 new cascade tests pass

## Must-Haves

- [ ] Test for creator tier (ll_keywords + post-filter)
- [ ] Test for domain-tier fallback
- [ ] Test for global-tier fallback
- [ ] Test for graceful empty (all tiers empty)
- [ ] Test for unknown creator (cascade skipped)
- [ ] All existing tests still pass
- [ ] Tests use the mock-httpx-at-instance pattern from S01

## Verification

- `cd backend && python -m pytest tests/test_search.py -v` — all tests pass (existing + new)
- `cd backend && python -m pytest tests/test_search.py -k cascade -v` — cascade-specific tests pass

## Inputs

- `backend/tests/test_search.py` — existing test file with S01 patterns
- `backend/search_service.py` — modified SearchService with cascade methods from T01
- `backend/schemas.py` — SearchResponse with cascade_tier field from T01

## Expected Output

- `backend/tests/test_search.py` — 5-6 new cascade integration tests appended
- Estimate: 1h
- Files: backend/tests/test_search.py
- Verify: cd backend && python -m pytest tests/test_search.py -v 2>&1 | tail -5
.gsd/milestones/M021/slices/S02/S02-RESEARCH.md — new file, 90 lines

@ -0,0 +1,90 @@
# S02 Research: Creator-Scoped Retrieval Cascade

## Depth: Targeted

Creator-scoped retrieval uses known technology (LightRAG, the existing SearchService) but introduces a multi-tier cascade pattern not yet in the codebase. Moderate complexity — needs careful API exploration and cascade design.

## Requirements Targeted

- **R005** (Search) — Extend LightRAG search to support creator-scoped queries with cascade fallback
- **R009** (Qdrant fallback) — Qdrant remains as automatic fallback when LightRAG fails at any cascade tier

## Summary

The slice adds a 4-tier retrieval cascade for creator-context questions: (1) creator-scoped LightRAG, (2) domain-scoped LightRAG, (3) global LightRAG, (4) graceful empty response. This sits inside `SearchService` in `backend/search_service.py` and is triggered when the caller provides a `creator_id` or `creator_slug` parameter.

### What Exists

1. **`SearchService._lightrag_search()`** — POSTs to `/query/data` with `{"query": ..., "mode": "hybrid", "top_k": ...}`. Returns technique pages mapped from chunk `file_path` fields. No creator scoping today.

2. **`lightrag_query.py` script** — Already demonstrates creator scoping via the `ll_keywords` parameter and query prepending: `payload["ll_keywords"] = [creator]` and `payload["query"] = f"{query} (by {creator})"`. Uses `/query` (the LLM generation endpoint), not `/query/data`.

3. **`file_source` format** — Documents are indexed with `technique:{slug}:creator:{creator_id}`. Chunks inherit this as `file_path`. Already parsed by the `_FILE_SOURCE_RE` regex in search_service.py.

4. **Creator model** — `Creator` has `id` (UUID), `name`, `slug`, `genres` (array). TechniquePage has `topic_category` (a string from 7 canonical categories) and a `creator_id` FK.

5. **Current search endpoint** — `GET /api/v1/search?q=...&scope=...&sort=...&limit=...`. No `creator` parameter today. `scope` is `all|topics|creators` for result-type filtering, not creator identity.

6. **Canonical categories** — 7 top-level: Workflow, Music Theory, Sound Design, Synthesis, Arrangement, Mixing, Mastering. Each creator's technique pages have a `topic_category` from this set.

### The Cascade Design

**Tier 1 — Creator-scoped LightRAG:** Query `/query/data` with `ll_keywords: [creator_name]` to bias graph retrieval toward that creator's documents. Post-filter results by the `creator_id` extracted from chunk `file_path`. If results ≥ threshold → return.

**Tier 2 — Domain-scoped LightRAG:** Look up the creator's dominant `topic_category` from their technique pages. Re-query LightRAG with domain keywords (e.g. "Sound Design") via `ll_keywords`. No creator post-filter — any creator's content in that domain qualifies. If results ≥ threshold → return.

**Tier 3 — Global LightRAG:** The existing `_lightrag_search()` — no scoping. Already implemented in S01.

**Tier 4 — Graceful fallback:** If all tiers return empty, return a structured empty response with a `cascade_tier: "none"` indicator. The existing Qdrant fallback from S01 still applies if LightRAG itself fails (timeout/error), orthogonal to the cascade.

### Key Implementation Decisions

1. **`ll_keywords` on `/query/data`** — The LightRAG API accepts `ll_keywords` on both `/query` and `/query/data`. The lightrag_query.py script proves the pattern works. Need to verify `/query/data` honors it (low risk — same retrieval engine).

2. **Creator post-filtering vs pre-filtering** — LightRAG has no native creator filter. We bias with `ll_keywords`, then post-filter chunks by `file_path` regex matching against the target `creator_id`. This means we should request more results (`top_k` * 3) from LightRAG so that enough creator-specific results survive filtering.
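The post-filtering decision above can be sketched as a pure function over already-mapped results — a hypothetical illustration of the keep-only-the-target-creator step, with the 3x over-fetch trimmed back to the caller's limit:

```python
def post_filter_by_creator(results: list[dict], creator_id: str, limit: int) -> list[dict]:
    """Tier-1 post-filter: keep results owned by the target creator, trim to limit.

    Callers request top_k = limit * 3 from LightRAG so enough creator-owned
    results survive this filter (mitigating post-filter starvation).
    """
    kept = [r for r in results if r.get("creator_id") == creator_id]
    return kept[:limit]
```

If nothing survives the filter, tier 1 returns empty and the cascade falls through to the domain tier.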
|
||||
|
||||
3. **Domain detection** — Query the DB for the creator's most common `topic_category` across their technique pages. Cache this per-request (it's a single GROUP BY query). Fall back to no domain scoping if the creator has <2 technique pages.
|
||||
|
||||
4. **New search parameter** — Add `creator` query param (slug or ID) to `GET /api/v1/search`. When present, triggers cascade instead of the flat search path.
|
||||
|
||||
5. **Cascade metadata** — Add `cascade_tier` field to response: `"creator"`, `"domain"`, `"global"`, or `"none"`. Alongside existing `fallback_used` (which tracks LightRAG vs Qdrant engine-level fallback).
|
||||
|
||||
### Where the Natural Seams Are

**Task 1: Cascade core in SearchService** — Add `_creator_scoped_search()`, `_domain_scoped_search()`, and modify `search()` to dispatch cascade when `creator_id` is provided. All in `search_service.py`. This is the riskiest piece — the LightRAG `ll_keywords` + post-filtering logic.

**Task 2: API + schema changes** — Add `creator` param to search router, `cascade_tier` to `SearchResponse` schema. Wire cascade into the endpoint. Straightforward.

**Task 3: Integration tests** — Test all 4 cascade tiers, domain detection, post-filtering. Follows the existing mock-httpx-at-instance pattern from S01 tests.

### What to Build First

Task 1 (cascade core) — it's the riskiest and everything else depends on it. The `ll_keywords` behavior on `/query/data` and the post-filtering logic need to be proven first.

### How to Verify

1. `py_compile` all modified files
2. `pytest backend/tests/test_search.py -v` — all existing 28+ tests still pass
3. New cascade-specific tests pass: creator tier, domain tier, global fallback, empty graceful, short-query bypass
4. `grep` confirms new fields in schema and router

### Constraints & Risks

- **LightRAG `ll_keywords` on `/query/data`** — If unsupported, fall back to query prepending (`f"{query} (by {creator})"`) which lightrag_query.py already uses as a secondary strategy.
- **Post-filter starvation** — If LightRAG returns 10 chunks but none belong to the target creator, tier 1 returns empty. The `top_k` multiplier (3x) mitigates this.
- **Domain detection cost** — One extra DB query per cascade search. Negligible for single-user tool.
- **Schema backward compatibility** — `cascade_tier` defaults to `""` for non-creator-scoped searches. Existing consumers see no breaking change.

## Recommendation

Straightforward 3-task decomposition. Task 1 is the core cascade logic in `search_service.py` (~100 lines). Task 2 is schema/router wiring (~20 lines). Task 3 is tests (~150 lines). Follow S01's patterns exactly — mock httpx at instance level, position-based scoring, structured WARNING logging on fallback paths.

## Implementation Landscape

| Component | File | Change |
|-----------|------|--------|
| Cascade logic | `backend/search_service.py` | Add `_creator_scoped_search()`, `_domain_scoped_search()`, `_get_creator_domain()`, modify `search()` orchestrator |
| Config | `backend/config.py` | No changes needed — existing `lightrag_*` settings suffice |
| Schema | `backend/schemas.py` | Add `cascade_tier: str = ""` to `SearchResponse` |
| Router | `backend/routers/search.py` | Add `creator: str` query param, pass to `svc.search()` |
| Tests | `backend/tests/test_search.py` | Add 5-6 cascade-specific tests |


**New file:** `.gsd/milestones/M021/slices/S02/tasks/T01-PLAN.md` (132 lines)

---
estimated_steps: 83
estimated_files: 3
skills_used: []
---

# T01: Implement creator-scoped retrieval cascade in SearchService + wire API

Add the 4-tier cascade logic to SearchService and wire the `creator` query param through the API layer.

## Steps

1. **Read current `backend/search_service.py`** to understand the `search()` orchestrator, `_lightrag_search()` method, and `_FILE_SOURCE_RE` regex. Read `backend/schemas.py` for `SearchResponse` and `backend/routers/search.py` for the endpoint signature.

2. **Add `_get_creator_domain()` method** to `SearchService`:
   - Takes `creator_id: str` and `db: AsyncSession`
   - Queries `TechniquePage` grouped by `topic_category` where `creator_id` matches
   - Returns the most common `topic_category` string, or `None` if creator has <2 technique pages
   - Single SQL query: `SELECT topic_category, COUNT(*) FROM technique_pages WHERE creator_id = :cid GROUP BY topic_category ORDER BY COUNT(*) DESC LIMIT 1`

3. **Add `_creator_scoped_search()` method** to `SearchService`:
   - Takes `query: str, creator_id: str, creator_name: str, limit: int, db: AsyncSession`
   - POSTs to `/query/data` with `{"query": query, "mode": "hybrid", "top_k": limit * 3, "ll_keywords": [creator_name]}`
   - Request 3x results to survive post-filtering
   - Reuses the same chunk-parsing logic from `_lightrag_search()` (file_path regex, slug extraction, DB batch lookup)
   - **Post-filters**: only keep results where the technique page's `creator_id` matches the target `creator_id`
   - Returns filtered results with `match_context: "Creator-scoped LightRAG match"`
   - On any exception: log WARNING with `reason=` tag, return empty list (same pattern as `_lightrag_search`)

4. **Add `_domain_scoped_search()` method** to `SearchService`:
   - Takes `query: str, domain: str, limit: int, db: AsyncSession`
   - POSTs to `/query/data` with `{"query": query, "mode": "hybrid", "top_k": limit, "ll_keywords": [domain]}`
   - No post-filtering — any creator's content in that domain qualifies
   - Returns results with `match_context: "Domain-scoped LightRAG match"`
   - On any exception: log WARNING, return empty list

5. **Add `_resolve_creator()` helper** to `SearchService`:
   - Takes `creator_ref: str, db: AsyncSession` — accepts slug or UUID
   - Tries UUID parse first, then falls back to slug lookup
   - Returns `(creator_id: str, creator_name: str)` tuple or `(None, None)` if not found
   - Single DB query against `Creator` model

6. **Modify `search()` orchestrator** to accept optional `creator: str | None = None` parameter:
   - When `creator` is provided and query length >= min:
     a. Resolve creator via `_resolve_creator()`
     b. If creator not found, skip cascade, proceed with normal search
     c. **Tier 1**: Call `_creator_scoped_search()`. If results → set `cascade_tier = "creator"`, `fallback_used = False`
     d. **Tier 2**: Call `_get_creator_domain()`, then `_domain_scoped_search()` if domain found. If results → `cascade_tier = "domain"`, `fallback_used = False`
     e. **Tier 3**: Call existing `_lightrag_search()`. If results → `cascade_tier = "global"`, `fallback_used = False`
     f. **Tier 4**: No LightRAG results at all → `cascade_tier = "none"`, fall through to Qdrant fallback path
   - When `creator` is NOT provided: existing behavior, `cascade_tier = ""` (empty string default)
   - Add `cascade_tier` to the returned dict

7. **Add `cascade_tier` to `SearchResponse`** in `backend/schemas.py`:
   - `cascade_tier: str = ""` — defaults to empty for non-cascade searches
   - Backward compatible — existing consumers see empty string

8. **Add `creator` query param** to search endpoint in `backend/routers/search.py`:
   - `creator: Annotated[str, Query(max_length=100)] = ""`
   - Pass to `svc.search(..., creator=creator or None)`
   - Wire `cascade_tier` into `SearchResponse` construction

9. **Verify**: `python -m py_compile backend/search_service.py && python -m py_compile backend/schemas.py && python -m py_compile backend/routers/search.py` — all three compile clean.

## Must-Haves

- [ ] `_creator_scoped_search()` method with `ll_keywords` and post-filtering by `creator_id`
- [ ] `_domain_scoped_search()` method with `ll_keywords` for domain category
- [ ] `_get_creator_domain()` returns dominant topic_category or None
- [ ] `_resolve_creator()` handles both UUID and slug
- [ ] `search()` orchestrator dispatches cascade when `creator` param provided
- [ ] Cascade tiers execute in order: creator → domain → global → none
- [ ] `cascade_tier` field in SearchResponse schema
- [ ] `creator` query param on GET /api/v1/search
- [ ] All exception paths log WARNING with reason= tag and return empty (never crash)
- [ ] Existing search behavior unchanged when `creator` not provided

## Failure Modes

| Dependency | On error | On timeout | On malformed response |
|------------|----------|------------|-----------------------|
| LightRAG /query/data | Log WARNING, return [] for that tier, cascade continues to next tier | Same — httpx timeout triggers except block | Log WARNING with parse_error reason, return [] |
| PostgreSQL (creator lookup, domain query) | Exception propagates up (DB failure is not recoverable at this layer) | N/A — local DB | N/A |

## Negative Tests

- Creator slug that doesn't exist in DB → cascade skipped, normal search runs
- Creator with <2 technique pages → domain tier skipped (no dominant category)
- All 3 tiers return empty → `cascade_tier: "none"`, Qdrant fallback may still fire

## Verification

- `cd backend && python -m py_compile search_service.py && python -m py_compile schemas.py && python -m py_compile routers/search.py` — all compile clean
- `grep -q 'cascade_tier' backend/schemas.py` — field exists
- `grep -q 'creator.*Query' backend/routers/search.py` — param exists
- `grep -q '_creator_scoped_search\|_domain_scoped_search\|_get_creator_domain' backend/search_service.py` — methods exist

## Observability Impact

- Signals added: `logger.info` per cascade tier with query, creator, tier, latency_ms, result_count; `logger.warning` on tier skip/empty with reason= tag
- Inspection: `cascade_tier` in API response reveals which tier served
- Failure state: Each tier's WARNING log independently traceable

## Inputs

- `backend/search_service.py` — existing SearchService with _lightrag_search(), search() orchestrator
- `backend/schemas.py` — existing SearchResponse model
- `backend/routers/search.py` — existing search endpoint
- `backend/models.py` — Creator and TechniquePage models

## Expected Output

- `backend/search_service.py` — 4 new methods + modified search() orchestrator
- `backend/schemas.py` — cascade_tier field added to SearchResponse
- `backend/routers/search.py` — creator query param added, cascade_tier wired


**New file:** `.gsd/milestones/M021/slices/S02/tasks/T01-SUMMARY.md` (85 lines)

---
id: T01
parent: S02
milestone: M021
provides: []
requires: []
affects: []
key_files: ["backend/search_service.py", "backend/schemas.py", "backend/routers/search.py"]
key_decisions: ["Cascade runs sequentially since each tier is a fallback for the previous", "Post-filtering on tier 1 uses 3x top_k to survive filtering losses", "Domain tier requires ≥2 technique pages to declare a dominant category"]
patterns_established: []
drill_down_paths: []
observability_surfaces: []
duration: ""
verification_result: "All three files compile clean (py_compile). Grep checks confirm cascade_tier in schemas.py, creator Query param in router, and all cascade methods in search_service.py."
completed_at: 2026-04-04T05:02:27.605Z
blocker_discovered: false
---

# T01: Added 4-tier creator-scoped cascade (creator → domain → global → none) to SearchService with cascade_tier in API response

> Added 4-tier creator-scoped cascade (creator → domain → global → none) to SearchService with cascade_tier in API response

## What Happened

Implemented four new methods in SearchService: `_resolve_creator` (UUID/slug lookup), `_get_creator_domain` (dominant topic_category with ≥2 page threshold), `_creator_scoped_search` (LightRAG with `ll_keywords` + post-filter by creator_id), `_domain_scoped_search` (LightRAG with domain keyword, no post-filter). Modified the `search()` orchestrator to accept an optional `creator` parameter and dispatch the 4-tier cascade sequentially. Added `cascade_tier` field to the SearchResponse schema and `creator` query param to the search endpoint.

## Verification

All three files compile clean (py_compile). Grep checks confirm cascade_tier in schemas.py, creator Query param in router, and all cascade methods in search_service.py.

## Verification Evidence

| # | Command | Exit Code | Verdict | Duration |
|---|---------|-----------|---------|----------|
| 1 | `cd backend && python -m py_compile search_service.py` | 0 | ✅ pass | 500ms |
| 2 | `cd backend && python -m py_compile schemas.py` | 0 | ✅ pass | 300ms |
| 3 | `cd backend && python -m py_compile routers/search.py` | 0 | ✅ pass | 300ms |
| 4 | `grep -q 'cascade_tier' backend/schemas.py` | 0 | ✅ pass | 50ms |
| 5 | `grep -q 'creator.*Query' backend/routers/search.py` | 0 | ✅ pass | 50ms |
| 6 | `grep -q '_creator_scoped_search' backend/search_service.py` | 0 | ✅ pass | 50ms |

## Deviations

Removed redundant local uuid import in _enrich_qdrant_results since module-level import was added.

## Known Issues

None.

## Files Created/Modified

- `backend/search_service.py`
- `backend/schemas.py`
- `backend/routers/search.py`


**New file:** `.gsd/milestones/M021/slices/S02/tasks/T02-PLAN.md` (99 lines)

---
estimated_steps: 58
estimated_files: 1
skills_used: []
---

# T02: Add integration tests for all 4 cascade tiers

Add 5-6 integration tests for the creator-scoped retrieval cascade, following S01's established mock-httpx-at-instance pattern.

## Steps

1. **Read `backend/tests/test_search.py`** — understand the existing test patterns, especially the `test_search_lightrag_*` tests from S01. Key patterns:
   - Tests use `db_engine` fixture providing a real PostgreSQL session
   - LightRAG is mocked by replacing `svc._httpx` with an `httpx.MockTransport`
   - Technique pages and creators are inserted into real DB for lookup verification
   - `_FILE_SOURCE_RE` format: `technique:{slug}:creator:{creator_id}`

2. **Read `backend/search_service.py`** — understand the new cascade methods from T01: `_creator_scoped_search()`, `_domain_scoped_search()`, `_get_creator_domain()`, `_resolve_creator()`, and the modified `search()` orchestrator.

3. **Create a shared cascade test fixture** at the top of the new test block that:
   - Creates a Creator (e.g. name="Keota", slug="keota", id=known UUID)
   - Creates 3 TechniquePage rows for that creator with `topic_category="Sound Design"` and file_source format slugs
   - Creates a second creator with different topic_category for domain-tier testing
   - Returns the creator info for assertions

4. **Add `test_search_cascade_creator_tier`**:
   - Mock httpx to return chunks with file_paths matching the target creator
   - Call `svc.search(query="reese bass", ..., creator="keota")`
   - Assert: results returned, `cascade_tier == "creator"`, `fallback_used == False`
   - Assert: all results belong to the target creator (check creator_id or creator_name)
   - Verify: mock was called with `ll_keywords` containing creator name

5. **Add `test_search_cascade_domain_tier`**:
   - Mock httpx: first call (creator-scoped) returns chunks NOT matching creator → empty after post-filter
   - Mock httpx: second call (domain-scoped) returns chunks from any creator in "Sound Design" domain
   - Call `svc.search(query="synthesis techniques", ..., creator="keota")`
   - Assert: `cascade_tier == "domain"`, results present

6. **Add `test_search_cascade_global_fallback`**:
   - Mock httpx: first two calls return empty/no-match, third call (global) returns results
   - Call `svc.search(query="mixing tips", ..., creator="keota")`
   - Assert: `cascade_tier == "global"`

7. **Add `test_search_cascade_graceful_empty`**:
   - Mock httpx: all three tiers return empty data
   - Call `svc.search(query="nonexistent topic", ..., creator="keota")`
   - Assert: `cascade_tier == "none"` or items empty, `fallback_used == True` (Qdrant fallback may fire)

8. **Add `test_search_cascade_unknown_creator`**:
   - Call `svc.search(query="bass design", ..., creator="nonexistent-slug")`
   - Assert: cascade skipped, normal search behavior, `cascade_tier == ""` (empty)

9. **Add `test_search_no_creator_param_unchanged`**:
   - Call `svc.search(query="reese bass", ...)` WITHOUT creator param
   - Assert: existing behavior unchanged, `cascade_tier == ""`, same results as before

10. **Run full test suite**: `cd backend && python -m pytest tests/test_search.py -v`
    - All existing 28+ tests still pass
    - All 5-6 new cascade tests pass

## Must-Haves

- [ ] Test for creator tier (ll_keywords + post-filter)
- [ ] Test for domain tier fallback
- [ ] Test for global tier fallback
- [ ] Test for graceful empty (all tiers empty)
- [ ] Test for unknown creator (cascade skipped)
- [ ] All existing tests still pass
- [ ] Tests use mock-httpx-at-instance pattern from S01

## Verification

- `cd backend && python -m pytest tests/test_search.py -v` — all tests pass (existing + new)
- `cd backend && python -m pytest tests/test_search.py -k cascade -v` — cascade-specific tests pass

## Inputs

- `backend/tests/test_search.py` — existing test file with S01 patterns
- `backend/search_service.py` — modified SearchService with cascade methods from T01
- `backend/schemas.py` — SearchResponse with cascade_tier field from T01

## Expected Output

- `backend/tests/test_search.py` — 5-6 new cascade integration tests appended


`backend/routers/search.py`:

@@ -58,6 +58,7 @@ async def search(
     scope: Annotated[str, Query()] = "all",
     sort: Annotated[str, Query()] = "relevance",
     limit: Annotated[int, Query(ge=1, le=100)] = 20,
+    creator: Annotated[str, Query(max_length=100)] = "",
     db: AsyncSession = Depends(get_session),
 ) -> SearchResponse:
     """Semantic search with keyword fallback.

@@ -65,9 +66,10 @@ async def search(
     - **q**: Search query (max 500 chars). Empty → empty results.
     - **scope**: ``all`` | ``topics`` | ``creators``. Invalid → defaults to ``all``.
     - **limit**: Max results (1–100, default 20).
+    - **creator**: Creator slug or UUID for cascade search. Empty → normal search.
     """
     svc = _get_search_service()
-    result = await svc.search(query=q, scope=scope, sort=sort, limit=limit, db=db)
+    result = await svc.search(query=q, scope=scope, sort=sort, limit=limit, db=db, creator=creator or None)

     # Fire-and-forget search logging — only non-empty queries
     if q.strip():

@@ -79,6 +81,7 @@ async def search(
         total=result["total"],
         query=result["query"],
         fallback_used=result["fallback_used"],
+        cascade_tier=result.get("cascade_tier", ""),
     )

`backend/schemas.py`:

@@ -254,6 +254,7 @@ class SearchResponse(BaseModel):
     total: int = 0
     query: str = ""
     fallback_used: bool = False
+    cascade_tier: str = ""


 class SuggestionItem(BaseModel):

`backend/search_service.py`:

@@ -12,6 +12,7 @@ import asyncio
 import logging
 import re
 import time
+import uuid as uuid_mod
 from typing import Any

 import httpx

@@ -572,6 +573,259 @@
        )
        return []

    # ── Creator-scoped cascade helpers ──────────────────────────────────

    async def _resolve_creator(
        self,
        creator_ref: str,
        db: AsyncSession,
    ) -> tuple[str | None, str | None]:
        """Resolve a creator slug or UUID to (creator_id, creator_name).

        Returns (None, None) if the creator is not found.
        """
        try:
            creator_uuid = uuid_mod.UUID(creator_ref)
            stmt = select(Creator).where(Creator.id == creator_uuid)
        except (ValueError, AttributeError):
            stmt = select(Creator).where(Creator.slug == creator_ref)

        result = await db.execute(stmt)
        cr = result.scalars().first()
        if cr is None:
            return None, None
        return str(cr.id), cr.name

    async def _get_creator_domain(
        self,
        creator_id: str,
        db: AsyncSession,
    ) -> str | None:
        """Return the dominant topic_category for a creator, or None if <2 technique pages."""
        stmt = (
            select(
                TechniquePage.topic_category,
                func.count().label("cnt"),
            )
            .where(TechniquePage.creator_id == uuid_mod.UUID(creator_id))
            .group_by(TechniquePage.topic_category)
            .order_by(func.count().desc())
            .limit(1)
        )
        result = await db.execute(stmt)
        row = result.first()
        if row is None:
            return None
        # Require at least 2 technique pages to declare a domain
        if row.cnt < 2:
            return None
        return row.topic_category

    async def _creator_scoped_search(
        self,
        query: str,
        creator_id: str,
        creator_name: str,
        limit: int,
        db: AsyncSession,
    ) -> list[dict[str, Any]]:
        """Search LightRAG with creator name as keyword, post-filter by creator_id."""
        start = time.monotonic()
        try:
            resp = await self._httpx.post(
                f"{self._lightrag_url}/query/data",
                json={
                    "query": query,
                    "mode": "hybrid",
                    "top_k": limit * 3,
                    "ll_keywords": [creator_name],
                },
            )
            resp.raise_for_status()
            body = resp.json()
        except Exception as exc:
            elapsed_ms = (time.monotonic() - start) * 1000
            logger.warning(
                "creator_scoped_search reason=%s query=%r creator=%s latency_ms=%.1f",
                type(exc).__name__, query, creator_id, elapsed_ms,
            )
            return []

        try:
            data = body.get("data", {})
            chunks = data.get("chunks", []) if data else []

            slug_set: set[str] = set()
            slug_order: list[str] = []
            for chunk in chunks:
                file_path = chunk.get("file_path", "")
                m = self._FILE_SOURCE_RE.match(file_path)
                if m and m.group("slug") not in slug_set:
                    slug = m.group("slug")
                    slug_set.add(slug)
                    slug_order.append(slug)

            if not slug_set:
                elapsed_ms = (time.monotonic() - start) * 1000
                logger.warning(
                    "creator_scoped_search reason=no_parseable_results query=%r creator=%s latency_ms=%.1f",
                    query, creator_id, elapsed_ms,
                )
                return []

            # Batch lookup and post-filter by creator_id
            tp_stmt = (
                select(TechniquePage, Creator)
                .join(Creator, TechniquePage.creator_id == Creator.id)
                .where(TechniquePage.slug.in_(list(slug_set)))
            )
            tp_rows = await db.execute(tp_stmt)
            tp_map: dict[str, tuple] = {}
            for tp, cr in tp_rows.all():
                if str(tp.creator_id) == creator_id:
                    tp_map[tp.slug] = (tp, cr)

            results: list[dict[str, Any]] = []
            for idx, slug in enumerate(slug_order):
                pair = tp_map.get(slug)
                if not pair:
                    continue
                tp, cr = pair
                score = max(1.0 - (idx * 0.05), 0.5)
                results.append({
                    "type": "technique_page",
                    "title": tp.title,
                    "slug": tp.slug,
                    "technique_page_slug": tp.slug,
                    "summary": tp.summary or "",
                    "topic_category": tp.topic_category,
                    "topic_tags": tp.topic_tags or [],
                    "creator_id": str(tp.creator_id),
                    "creator_name": cr.name,
                    "creator_slug": cr.slug,
                    "created_at": tp.created_at.isoformat() if tp.created_at else "",
                    "score": score,
                    "match_context": "Creator-scoped LightRAG match",
                })
                if len(results) >= limit:
                    break

            elapsed_ms = (time.monotonic() - start) * 1000
            logger.info(
                "creator_scoped_search query=%r creator=%s latency_ms=%.1f result_count=%d",
                query, creator_id, elapsed_ms, len(results),
            )
            return results

        except (KeyError, ValueError, TypeError) as exc:
            elapsed_ms = (time.monotonic() - start) * 1000
            logger.warning(
                "creator_scoped_search reason=parse_error query=%r creator=%s error=%s latency_ms=%.1f",
                query, creator_id, exc, elapsed_ms,
            )
            return []

    async def _domain_scoped_search(
        self,
        query: str,
        domain: str,
        limit: int,
        db: AsyncSession,
    ) -> list[dict[str, Any]]:
        """Search LightRAG with domain keyword — no post-filtering."""
        start = time.monotonic()
        try:
            resp = await self._httpx.post(
                f"{self._lightrag_url}/query/data",
                json={
                    "query": query,
                    "mode": "hybrid",
                    "top_k": limit,
                    "ll_keywords": [domain],
                },
            )
            resp.raise_for_status()
            body = resp.json()
        except Exception as exc:
            elapsed_ms = (time.monotonic() - start) * 1000
            logger.warning(
                "domain_scoped_search reason=%s query=%r domain=%s latency_ms=%.1f",
                type(exc).__name__, query, domain, elapsed_ms,
            )
            return []

        try:
            data = body.get("data", {})
            chunks = data.get("chunks", []) if data else []

            slug_set: set[str] = set()
            slug_order: list[str] = []
            for chunk in chunks:
                file_path = chunk.get("file_path", "")
                m = self._FILE_SOURCE_RE.match(file_path)
                if m and m.group("slug") not in slug_set:
                    slug = m.group("slug")
                    slug_set.add(slug)
                    slug_order.append(slug)

            if not slug_set:
                elapsed_ms = (time.monotonic() - start) * 1000
                logger.warning(
                    "domain_scoped_search reason=no_parseable_results query=%r domain=%s latency_ms=%.1f",
                    query, domain, elapsed_ms,
                )
                return []

            tp_stmt = (
                select(TechniquePage, Creator)
                .join(Creator, TechniquePage.creator_id == Creator.id)
                .where(TechniquePage.slug.in_(list(slug_set)))
            )
            tp_rows = await db.execute(tp_stmt)
            tp_map: dict[str, tuple] = {}
            for tp, cr in tp_rows.all():
                tp_map[tp.slug] = (tp, cr)

            results: list[dict[str, Any]] = []
            for idx, slug in enumerate(slug_order):
                pair = tp_map.get(slug)
                if not pair:
                    continue
                tp, cr = pair
                score = max(1.0 - (idx * 0.05), 0.5)
                results.append({
                    "type": "technique_page",
                    "title": tp.title,
                    "slug": tp.slug,
                    "technique_page_slug": tp.slug,
                    "summary": tp.summary or "",
                    "topic_category": tp.topic_category,
                    "topic_tags": tp.topic_tags or [],
                    "creator_id": str(tp.creator_id),
                    "creator_name": cr.name,
                    "creator_slug": cr.slug,
                    "created_at": tp.created_at.isoformat() if tp.created_at else "",
                    "score": score,
                    "match_context": "Domain-scoped LightRAG match",
                })
                if len(results) >= limit:
                    break

            elapsed_ms = (time.monotonic() - start) * 1000
            logger.info(
                "domain_scoped_search query=%r domain=%s latency_ms=%.1f result_count=%d",
                query, domain, elapsed_ms, len(results),
            )
            return results

        except (KeyError, ValueError, TypeError) as exc:
            elapsed_ms = (time.monotonic() - start) * 1000
            logger.warning(
                "domain_scoped_search reason=parse_error query=%r domain=%s error=%s latency_ms=%.1f",
                query, domain, exc, elapsed_ms,
            )
            return []

    # ── Orchestrator ─────────────────────────────────────────────────────

    async def search(

@@ -581,9 +835,14 @@
        limit: int,
        db: AsyncSession,
        sort: str = "relevance",
        creator: str | None = None,
    ) -> dict[str, Any]:
        """Run semantic and keyword search in parallel, merge and deduplicate.

        When ``creator`` is provided, executes a 4-tier cascade:
        creator → domain → global → none, returning results from the first
        tier that produces hits. ``cascade_tier`` indicates which tier served.

        Both engines run concurrently. Keyword results are always included
        (with match_context). Semantic results above the score threshold are
        merged in, deduplicated by (type, slug/title). Keyword matches rank
@@ -592,12 +851,129 @@ class SearchService:
        start = time.monotonic()

        if not query or not query.strip():
-            return {"items": [], "partial_matches": [], "total": 0, "query": query, "fallback_used": False}
+            return {"items": [], "partial_matches": [], "total": 0, "query": query, "fallback_used": False, "cascade_tier": ""}

        query = query.strip()[:500]
        if scope not in ("all", "topics", "creators"):
            scope = "all"

        cascade_tier = ""

        # ── Creator-scoped cascade ──────────────────────────────────────
        use_lightrag = len(query) >= self._lightrag_min_query_length

        if creator and use_lightrag:
            creator_id, creator_name = await self._resolve_creator(creator, db)
            if creator_id and creator_name:
                # Tier 1: creator-scoped
                tier1 = await self._creator_scoped_search(query, creator_id, creator_name, limit, db)
                if tier1:
                    cascade_tier = "creator"
                    lightrag_results = tier1
                    fallback_used = False
                else:
                    # Tier 2: domain-scoped
                    domain = await self._get_creator_domain(creator_id, db)
                    tier2: list[dict[str, Any]] = []
                    if domain:
                        tier2 = await self._domain_scoped_search(query, domain, limit, db)
                    if tier2:
                        cascade_tier = "domain"
                        lightrag_results = tier2
                        fallback_used = False
                    else:
                        # Tier 3: global LightRAG
                        tier3 = await self._lightrag_search(query, limit, db)
                        if tier3:
                            cascade_tier = "global"
                            lightrag_results = tier3
                            fallback_used = False
                        else:
                            # Tier 4: no LightRAG results at all
                            cascade_tier = "none"
                            lightrag_results = []
                            fallback_used = True

                elapsed_cascade = (time.monotonic() - start) * 1000
                logger.info(
                    "cascade_search query=%r creator=%s tier=%s latency_ms=%.1f result_count=%d",
                    query, creator, cascade_tier, elapsed_cascade, len(lightrag_results),
                )

                # Skip to merge phase (keyword still runs for supplementary)
                # Jump past the non-cascade LightRAG block
                kw_result = await self.keyword_search(query, scope, limit, db, sort=sort)

                if fallback_used:
                    # Qdrant semantic fallback
                    vector = await self.embed_query(query)
                    semantic_results: list[dict[str, Any]] = []
                    if vector:
                        raw = await self.search_qdrant(vector, limit=limit)
                        enriched = await self._enrich_qdrant_results(raw, db)
                        semantic_results = [
                            item for item in enriched
                            if item.get("score", 0) >= _SEMANTIC_MIN_SCORE
                        ]
                    for item in semantic_results:
                        if not item.get("match_context"):
                            item["match_context"] = "Semantic match"
                else:
                    semantic_results = []
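The Qdrant fallback branch above filters enriched hits by a score floor and backfills `match_context`. The same shaping in isolation, as a sketch — the `0.35` threshold is an assumed stand-in for the module's `_SEMANTIC_MIN_SCORE` constant, whose real value is not shown in this diff:

```python
from typing import Any

SEMANTIC_MIN_SCORE = 0.35  # assumed value; the real constant lives in search_service.py

def shape_semantic(enriched: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Drop hits below the score floor and tag survivors with a context label."""
    kept = [item for item in enriched if item.get("score", 0) >= SEMANTIC_MIN_SCORE]
    for item in kept:
        if not item.get("match_context"):
            item["match_context"] = "Semantic match"
    return kept

hits = shape_semantic([{"score": 0.91}, {"score": 0.12}])
# Only the 0.91 hit survives, tagged "Semantic match".
```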
                # Handle exceptions
                kw_items = kw_result["items"] if not isinstance(kw_result, Exception) else []
                partial_matches = kw_result.get("partial_matches", []) if not isinstance(kw_result, Exception) else []

                # Merge: cascade results first, then keyword, then semantic
                seen_keys: set[str] = set()
                merged: list[dict[str, Any]] = []

                def _dedup_key(item: dict) -> str:
                    t = item.get("type", "")
                    s = item.get("slug") or item.get("technique_page_slug") or ""
                    title = item.get("title", "")
                    return f"{t}:{s}:{title}"

                for item in lightrag_results:
                    key = _dedup_key(item)
                    if key not in seen_keys:
                        seen_keys.add(key)
                        merged.append(item)

                for item in kw_items:
                    key = _dedup_key(item)
                    if key not in seen_keys:
                        seen_keys.add(key)
                        merged.append(item)

                for item in semantic_results:
                    key = _dedup_key(item)
                    if key not in seen_keys:
                        seen_keys.add(key)
                        merged.append(item)
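The three merge loops above share one pattern — first writer wins on the composite key — so the priority order (cascade, then keyword, then semantic) falls out of iteration order alone. A sketch collapsing them into a single helper, using the same key shape as `_dedup_key`:

```python
from typing import Any

def dedup_key(item: dict[str, Any]) -> str:
    # Same composite key as _dedup_key in the diff: type + slug + title.
    t = item.get("type", "")
    s = item.get("slug") or item.get("technique_page_slug") or ""
    return f"{t}:{s}:{item.get('title', '')}"

def merge(*sources: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Concatenate sources in priority order, keeping each key's first occurrence."""
    seen: set[str] = set()
    merged: list[dict[str, Any]] = []
    for source in sources:
        for item in source:
            key = dedup_key(item)
            if key not in seen:
                seen.add(key)
                merged.append(item)
    return merged

cascade_hits = [{"type": "topic", "slug": "kimura", "title": "Kimura"}]
keyword_hits = [{"type": "topic", "slug": "kimura", "title": "Kimura"},
                {"type": "topic", "slug": "armbar", "title": "Armbar"}]
merged = merge(cascade_hits, keyword_hits)
# The duplicate "kimura" from the keyword source is dropped.
```

The example items are illustrative; any dicts carrying the same `type`/`slug`/`title` fields behave identically.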
                merged = self._apply_sort(merged, sort)

                elapsed_ms = (time.monotonic() - start) * 1000
                logger.info(
                    "Search query=%r scope=%s cascade_tier=%s lightrag=%d keyword=%d semantic=%d merged=%d fallback=%s latency_ms=%.1f",
                    query, scope, cascade_tier, len(lightrag_results), len(kw_items),
                    len(semantic_results), len(merged), fallback_used, elapsed_ms,
                )

                return {
                    "items": merged[:limit],
                    "partial_matches": partial_matches,
                    "total": len(merged),
                    "query": query,
                    "fallback_used": fallback_used,
                    "cascade_tier": cascade_tier,
                }
            else:
                logger.warning("cascade_search reason=creator_not_found creator_ref=%r", creator)
                # Fall through to normal search path

        # ── Primary: try LightRAG for queries ≥ min length ─────────────
        lightrag_results: list[dict[str, Any]] = []
        fallback_used = True  # assume fallback until LightRAG succeeds
@@ -699,6 +1075,7 @@ class SearchService:
            "total": len(merged),
            "query": query,
            "fallback_used": fallback_used,
+            "cascade_tier": cascade_tier,
        }

    # ── Sort helpers ────────────────────────────────────────────────────
@@ -744,7 +1121,6 @@ class SearchService:
        # Batch fetch creators from DB
        creator_map: dict[str, dict[str, str]] = {}
        if needs_db_lookup:
            import uuid as uuid_mod
            valid_ids = []
            for cid in needs_db_lookup:
                try: