feat: Added personality_weight (0.0–1.0) to chat API; modulates system…

- "backend/routers/chat.py"
- "backend/chat_service.py"
- "backend/tests/test_chat.py"

GSD-Task: S02/T01
jlightner 2026-04-04 09:28:35 +00:00
parent 9431aa2095
commit d1efdbb3fa
13 changed files with 1010 additions and 4 deletions

@@ -362,3 +362,15 @@
**Context:** Extracting a creator personality profile from transcripts. Small creators have <5 videos, large creators have 20+. Sampling strategy must adapt to corpus size while ensuring topic diversity.
**Pattern:** 3 tiers: small (≤5 videos: use all), medium (6-15: sample ~8), large (>15: sample ~10). For medium/large, use Redis classification data to group transcripts by topic and sample proportionally, ensuring the profile captures the creator's full range rather than overrepresenting one topic area.
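The tiering logic can be sketched as a small stdlib function (names and the `(video_id, topic)` shape are illustrative, not the project's actual code):

```python
import random

def sample_transcripts(transcripts, seed=0):
    """Pick transcripts for profile extraction, adapting to corpus size.

    `transcripts` is a list of (video_id, topic) pairs; topics stand in for
    the Redis classification data. Tier targets follow the pattern above.
    """
    n = len(transcripts)
    if n <= 5:                       # small tier: use everything
        return list(transcripts)
    target = 8 if n <= 15 else 10    # medium tier ~8, large tier ~10

    # Group by topic so sampling stays proportional across the creator's range.
    by_topic = {}
    for item in transcripts:
        by_topic.setdefault(item[1], []).append(item)

    rng = random.Random(seed)
    picked = []
    for topic, items in sorted(by_topic.items()):
        share = max(1, round(target * len(items) / n))   # proportional, >=1 per topic
        picked.extend(rng.sample(items, min(share, len(items))))
    return picked[:target]
```

Proportional allocation (rather than a flat cap per topic) is what keeps a dominant topic from being overrepresented while still guaranteeing each topic at least one sample.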
## Tiptap v3 useEditor requires immediatelyRender: false for React 18
**Context:** Tiptap v3's `useEditor` hook by default attempts synchronous rendering on mount, which conflicts with React 18's concurrent features (StrictMode double-mounting, suspense boundaries). This causes hydration mismatches and "flushSync was called from inside a lifecycle method" warnings.
**Fix:** Pass `immediatelyRender: false` in the useEditor config: `useEditor({ immediatelyRender: false, extensions: [...], content: ... })`. The editor still renders on the first paint — this just defers it to be React 18 compatible. No visual difference.
## get_optional_user pattern for public-but-auth-aware endpoints
**Context:** Some API endpoints need to be publicly accessible but provide different behavior for authenticated users (e.g., showing draft posts to the owner). Using the standard `get_current_user` dependency rejects unauthenticated requests with 401.
**Fix:** Create `get_optional_user` using `OAuth2PasswordBearer(auto_error=False)`. When `auto_error=False`, missing/invalid tokens return `None` instead of raising 401. The endpoint receives `Optional[User]` and branches on whether the user is present and matches the resource owner.
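A minimal stdlib sketch of the branching this enables (hypothetical names; the real implementation wires token extraction through FastAPI's `OAuth2PasswordBearer(auto_error=False)` dependency):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id: int

# Hypothetical token table standing in for real JWT decoding.
_TOKENS = {"alice-token": User(id=1)}

def get_optional_user(token: Optional[str]) -> Optional[User]:
    """Return the user for a valid token, or None for missing/invalid tokens.

    Mirrors auto_error=False: instead of raising a 401, authentication
    failure yields None and the endpoint decides what to do.
    """
    if token is None:
        return None
    return _TOKENS.get(token)  # invalid token -> None, not an exception

def list_posts(owner_id: int, user: Optional[User]):
    """Public endpoint behavior: drafts are visible only to the owner."""
    posts = [{"title": "published", "draft": False},
             {"title": "secret", "draft": True}]
    if user is not None and user.id == owner_id:
        return posts                                  # owner sees drafts too
    return [p for p in posts if not p["draft"]]       # everyone else: published only
```

The key property is that the same endpoint serves anonymous and authenticated callers with one code path and a single `Optional[User]` branch.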

@@ -6,7 +6,7 @@ The demo MVP comes together. Chat widget wires to the intelligence layer (INT-1)
## Slice Overview
| ID | Slice | Risk | Depends | Done | After this |
|----|-------|------|---------|------|------------|
| S01 | [A] Post Editor + File Sharing | high | — | | Creator writes rich text posts with file attachments (presets, sample packs). Followers see posts in feed. Files downloadable via signed URLs. |
| S02 | [A] Chat Widget ↔ Chat Engine Wiring (INT-1) | high | — | ⬜ | Chat widget on creator profile wired to chat engine. Personality slider adjusts response style. Citations link to sources. |
| S03 | [B] Shorts Generation Pipeline v1 | medium | — | ⬜ | Shorts pipeline extracts clips from highlight boundaries in 3 format presets (vertical, square, horizontal) |
| S04 | [B] Personality Slider (Full Interpolation) | medium | — | ⬜ | Personality slider at 0.0 gives encyclopedic response. At 1.0 gives creator-voiced response with their speech patterns. |

@@ -0,0 +1,151 @@
---
id: S01
parent: M023
milestone: M023
provides:
- Post CRUD API at /api/v1/posts
- File upload/download API at /api/v1/files
- MinIO object storage integration
- PostsFeed component for creator profile pages
- PostEditor page for creator content authoring
- PostsList management page for creators
requires: []
affects:
- S05
key_files:
- docker-compose.yml
- backend/minio_client.py
- backend/routers/posts.py
- backend/routers/files.py
- backend/models.py
- backend/schemas.py
- backend/auth.py
- backend/main.py
- alembic/versions/024_add_posts_and_attachments.py
- frontend/src/pages/PostEditor.tsx
- frontend/src/pages/PostEditor.module.css
- frontend/src/api/posts.ts
- frontend/src/api/client.ts
- frontend/src/components/PostsFeed.tsx
- frontend/src/components/PostsFeed.module.css
- frontend/src/pages/PostsList.tsx
- frontend/src/pages/PostsList.module.css
- frontend/src/pages/CreatorDetail.tsx
- frontend/src/App.tsx
- frontend/src/pages/CreatorDashboard.tsx
- docker/nginx.conf
- backend/config.py
- backend/requirements.txt
key_decisions:
- MinIO internal-only (no public port) — files served via API-generated presigned URLs
- get_optional_user dependency for public endpoints with auth-aware behavior
- MinIO object deletion on post delete is best-effort (logged, non-blocking)
- Tiptap v3 with StarterKit + Link + Placeholder for rich text editing
- PostsFeed hides section entirely when creator has zero posts
- File uploads use run_in_executor for sync MinIO I/O in async handlers
patterns_established:
- MinIO integration pattern: lazy-init singleton, ensure_bucket on first write, presigned URLs for downloads
- requestMultipart API client helper for FormData uploads without Content-Type header
- get_optional_user auth dependency for public-but-auth-aware endpoints
- Tiptap JSON→HTML rendering with generateHTML + extensions for read-only display
observability_surfaces:
- File upload errors logged with MinIO error message and object_key
- Post ownership violations logged at WARNING
- MinIO bucket auto-creation logged on startup
- 503 status for MinIO failures, 403 for ownership violations, 404 for missing resources
drill_down_paths:
- .gsd/milestones/M023/slices/S01/tasks/T01-SUMMARY.md
- .gsd/milestones/M023/slices/S01/tasks/T02-SUMMARY.md
- .gsd/milestones/M023/slices/S01/tasks/T03-SUMMARY.md
- .gsd/milestones/M023/slices/S01/tasks/T04-SUMMARY.md
duration: ""
verification_result: passed
completed_at: 2026-04-04T09:19:26.082Z
blocker_discovered: false
---
# S01: [A] Post Editor + File Sharing
**Full post editor with Tiptap rich text, MinIO-backed file attachments, CRUD API, public feed rendering, and creator management page.**
## What Happened
This slice delivered the complete post system — write path, read path, and file storage.
**T01 (Data Layer):** Added MinIO Docker service (internal-only, no public port) to docker-compose.yml with healthcheck and volume mount. Created minio_client.py with lazy-init singleton, ensure_bucket, upload_file, delete_file, and presigned URL generation. Added Post and PostAttachment SQLAlchemy models with UUID PKs and cascade delete. Created Alembic migration 024. Added Pydantic schemas for all CRUD operations. Bumped nginx client_max_body_size to 100m.
**T02 (API Layer):** Built posts.py router with 5 CRUD endpoints enforcing creator ownership via auth. Introduced get_optional_user dependency (OAuth2PasswordBearer with auto_error=False) for public list endpoints that show drafts only to owners. Built files.py router with multipart upload (run_in_executor for sync MinIO I/O) and signed download URL generation. Registered both routers and added MinIO bucket auto-creation in the app lifespan handler.
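The run_in_executor pattern used in the upload handler can be sketched as follows (illustrative names; the real handler calls the synchronous MinIO SDK rather than this placeholder):

```python
import asyncio

def upload_to_minio(bucket: str, key: str, data: bytes) -> str:
    """Placeholder for the blocking MinIO SDK call (put_object is synchronous)."""
    return f"{bucket}/{key}:{len(data)}"

async def handle_upload(bucket: str, key: str, data: bytes) -> str:
    # Run the blocking client in the default thread pool so the event loop
    # (and other in-flight requests) keeps serving while the upload runs.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, upload_to_minio, bucket, key, data)

result = asyncio.run(handle_upload("posts", "pack.zip", b"abc"))
```

Passing `None` as the executor uses the loop's default `ThreadPoolExecutor`, which is sufficient for occasional blocking I/O inside async handlers.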
**T03 (Post Editor):** Installed Tiptap v3 with StarterKit, Link, and Placeholder extensions. Built PostEditor page with formatting toolbar, title input, drag-and-drop file attachment zone, publish toggle, and sequential save flow (create post → upload files). Added requestMultipart helper to the API client for FormData uploads. Lazy-loaded routes at /creator/posts/new and /creator/posts/:postId/edit.
**T04 (Public Feed + Management):** Created PostsFeed component rendering published posts with Tiptap JSON→HTML conversion and signed-URL download buttons. Integrated into CreatorDetail page (hidden when no posts). Built PostsList management page with status badges, edit/delete actions, and confirmation dialog. Added SidebarNav link and route at /creator/posts.
## Verification
All slice verification checks passed:
1. `docker compose config --quiet` — exit 0
2. All backend imports (Post, PostAttachment, PostCreate, PostRead, PostAttachmentRead, PostListResponse, get_minio_client, generate_download_url, upload_file, ensure_bucket, get_optional_user, posts router, files router) — exit 0
3. `cd frontend && npm run build` — exit 0, 179 modules, PostEditor and PostsList lazy-loaded as separate chunks
4. Route registration verified: 8 new API routes (5 posts CRUD + 2 files + bucket init), 3 frontend routes (/creator/posts, /creator/posts/new, /creator/posts/:postId/edit)
## Requirements Advanced
None.
## Requirements Validated
None.
## New Requirements Surfaced
None.
## Requirements Invalidated or Re-scoped
None.
## Deviations
- Added get_optional_user auth dependency (not in plan, needed for public list with draft visibility)
- Added delete_file() to minio_client.py (needed for post deletion cleanup)
- Tiptap v3 required immediatelyRender: false for React 18 compatibility
- SidebarNav links to Posts management page instead of direct New Post link (better UX with management page available)
## Known Limitations
- Alembic migration 024 not yet applied to live database (deploy-time)
- PostsFeed fetches all posts for a creator in one call (no pagination on public profile)
- File upload reads entire file into memory before MinIO write
## Follow-ups
- Apply migration 024 on ub01 deploy
- Add pagination to PostsFeed if post counts grow
- Consider streaming file uploads for very large files
## Files Created/Modified
- `docker-compose.yml` — Added MinIO service with healthcheck, volume, internal network
- `backend/config.py` — Added MinIO connection settings
- `backend/minio_client.py` — Created — lazy-init MinIO client, ensure_bucket, upload, delete, presigned URL
- `backend/models.py` — Added Post and PostAttachment models with FKs and cascade
- `backend/schemas.py` — Added post CRUD and attachment Pydantic schemas
- `backend/requirements.txt` — Added minio package
- `backend/auth.py` — Added get_optional_user dependency
- `backend/routers/posts.py` — Created — 5 CRUD endpoints with auth and ownership
- `backend/routers/files.py` — Created — multipart upload and signed download URL
- `backend/main.py` — Registered posts and files routers, added MinIO bucket init
- `alembic/versions/024_add_posts_and_attachments.py` — Created — posts and post_attachments tables
- `docker/nginx.conf` — Bumped client_max_body_size to 100m
- `frontend/src/api/client.ts` — Added requestMultipart helper
- `frontend/src/api/posts.ts` — Created — TypeScript types and API functions for posts
- `frontend/src/pages/PostEditor.tsx` — Created — Tiptap editor with toolbar, file attachments, save flow
- `frontend/src/pages/PostEditor.module.css` — Created — dark theme editor styles
- `frontend/src/components/PostsFeed.tsx` — Created — public feed with HTML rendering and downloads
- `frontend/src/components/PostsFeed.module.css` — Created — feed card styles
- `frontend/src/pages/PostsList.tsx` — Created — creator post management page
- `frontend/src/pages/PostsList.module.css` — Created — management page styles
- `frontend/src/pages/CreatorDetail.tsx` — Integrated PostsFeed component
- `frontend/src/App.tsx` — Added PostEditor, PostsList lazy imports and routes
- `frontend/src/pages/CreatorDashboard.tsx` — Added Posts link to SidebarNav

@@ -0,0 +1,84 @@
# S01: [A] Post Editor + File Sharing — UAT
**Milestone:** M023
**Written:** 2026-04-04T09:19:26.083Z
## UAT: S01 — Post Editor + File Sharing
### Preconditions
- Chrysopedia stack running on ub01 (docker compose up -d)
- Migration 024 applied (alembic upgrade head)
- MinIO container healthy (docker ps shows chrysopedia-minio healthy)
- At least one creator account with login credentials
- A test file (any .zip or .wav, 1-50MB)
---
### TC-01: Post Creation (Happy Path)
1. Log in as creator at /login
2. Navigate to /creator/posts via sidebar → "Posts" link
3. **Expected:** PostsList page renders, shows "New Post" button
4. Click "New Post"
5. **Expected:** PostEditor renders with title input, Tiptap editor with toolbar, file attachment area, publish toggle
6. Enter title: "Test Post — Sample Pack Release"
7. Type rich text in editor: a paragraph, then bold text, then a bullet list
8. **Expected:** Toolbar buttons (B, I, H2, H3, list, link, code) toggle formatting visually
9. Toggle "Published" on
10. Attach a file by clicking the file attachment area and selecting the test file
11. **Expected:** File appears in attachment list below editor with filename and remove button
12. Click "Save"
13. **Expected:** Post created, files uploaded, redirects to creator dashboard or posts list
### TC-02: Post Listing (Creator Management)
1. Navigate to /creator/posts
2. **Expected:** PostsList shows the post from TC-01 with title, "Published" badge, created date, Edit and Delete buttons
3. **Expected:** "New Post" button visible at top
### TC-03: Post Editing
1. From PostsList, click "Edit" on the TC-01 post
2. **Expected:** PostEditor loads with existing title, body content, and attached file
3. Change the title to "Updated — Sample Pack v2"
4. Click "Save"
5. **Expected:** Post updates successfully
### TC-04: Public Feed Display
1. Log out (or open incognito)
2. Navigate to the creator's public profile page (/creators/{slug})
3. **Expected:** "Posts" section appears below technique grid
4. **Expected:** Post card shows title, rendered rich text (bold, lists visible), timestamp
5. **Expected:** File attachment shows with download button (filename, size)
### TC-05: File Download via Signed URL
1. On the public creator profile, click the download button on the post attachment
2. **Expected:** Browser initiates file download (MinIO presigned URL)
3. **Expected:** Downloaded file matches the original upload
### TC-06: Draft Visibility
1. Log in as creator, create a new post, leave "Published" toggled OFF, save
2. Navigate to /creator/posts
3. **Expected:** New draft post shows with "Draft" badge
4. Log out, visit creator's public profile
5. **Expected:** Draft post is NOT visible in the PostsFeed
### TC-07: Post Deletion
1. Log in as creator, navigate to /creator/posts
2. Click "Delete" on a post
3. **Expected:** Confirmation dialog appears
4. Confirm deletion
5. **Expected:** Post removed from list, no longer visible on public profile
### TC-08: Ownership Enforcement
1. Log in as a different user (not the post creator)
2. Attempt PUT /api/v1/posts/{post_id} via curl with a valid auth token for the wrong user
3. **Expected:** 403 Forbidden response
4. Attempt DELETE /api/v1/posts/{post_id} with wrong user token
5. **Expected:** 403 Forbidden response
6. Attempt POST /api/v1/files/upload with wrong user's post_id
7. **Expected:** 403 Forbidden response
### TC-09: Edge Cases
1. Create a post with no attachments → save succeeds
2. Create a post with empty body → save succeeds (title-only post)
3. Upload a file with special characters in filename (e.g., "my file (v2).zip") → filename sanitized, upload succeeds
4. Visit public profile for creator with zero posts → Posts section hidden (no empty state)
5. Attempt GET /api/v1/posts/nonexistent-uuid → 404 response
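The filename sanitization exercised in TC-09 step 3 could look like this (a hypothetical helper; the project's actual sanitization rule may differ):

```python
import re
import unicodedata

def sanitize_filename(name: str) -> str:
    """Reduce a user-supplied filename to a safe object-key component."""
    name = unicodedata.normalize("NFKD", name)
    # Keep letters, digits, dot, dash, underscore; squash everything else.
    name = re.sub(r"[^A-Za-z0-9._-]+", "_", name)
    return name.strip("._") or "file"
```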

@@ -0,0 +1,16 @@
{
"schemaVersion": 1,
"taskId": "T04",
"unitId": "M023/S01/T04",
"timestamp": 1775294250736,
"passed": true,
"discoverySource": "task-plan",
"checks": [
{
"command": "cd frontend",
"exitCode": 0,
"durationMs": 6,
"verdict": "pass"
}
]
}

@@ -1,6 +1,75 @@
# S02: [A] Chat Widget ↔ Chat Engine Wiring (INT-1)
**Goal:** Wire chat widget UI to chat engine via creator-scoped router with personality system. INT-1 complete.
**Goal:** Chat widget on creator profile sends personality_weight (0.0–1.0) to the chat engine. Backend modulates the system prompt: 0.0 = encyclopedic, >0 = creator voice injected from personality_profile JSONB. Slider UI visible in chat panel header.
**Demo:** After this: Chat widget on creator profile wired to chat engine. Personality slider adjusts response style. Citations link to sources.
## Tasks
- [x] **T01: Added personality_weight (0.0–1.0) to chat API; modulates system prompt with creator voice profile and scales LLM temperature** — Thread personality_weight through the chat API and modulate the system prompt based on creator personality profile.
## Failure Modes
| Dependency | On error | On timeout | On malformed response |
|---|---|---|---|
| Creator DB query | Log warning, fall back to encyclopedic prompt | Same — DB timeout treated as missing profile | N/A — query returns model or None |
## Negative Tests
- **Malformed inputs**: personality_weight outside 0.0–1.0 → 422, personality_weight as string → 422
- **Error paths**: creator name doesn't match any DB record → encyclopedic fallback, personality_profile is null → encyclopedic fallback
- **Boundary conditions**: weight exactly 0.0 → no profile query needed, weight exactly 1.0 → full personality injection
## Steps
1. In `backend/routers/chat.py`, add `personality_weight: float = Field(default=0.0, ge=0.0, le=1.0)` to `ChatRequest`. Pass it to `service.stream_response()`.
2. In `backend/chat_service.py`, add `personality_weight: float = 0.0` param to `stream_response()`. When weight > 0 and creator is not None:
- Import `Creator` from models, `select` from sqlalchemy
- Query `select(Creator).where(Creator.name == creator)` using the existing `db` session
- If creator found and `personality_profile` is not None, build a personality injection block from the profile dict (extract signature_phrases, tone descriptors, teaching_style, energy, formality from the nested vocabulary/tone/style_markers structure)
- Append the personality block to the system prompt: 'Respond in {creator}'s voice. {profile_summary}. Use their signature phrases: {phrases}. Match their {formality} {energy} tone.'
- If creator not found or profile is null, log at DEBUG and use the standard encyclopedic prompt
3. Scale LLM temperature with weight: `temperature = 0.3 + (personality_weight * 0.2)` (range 0.3–0.5).
4. Extend the `logger.info('chat_request ...')` line to include `weight=body.personality_weight`.
5. Add tests to `backend/tests/test_chat.py`:
- Test personality_weight is accepted and forwarded (mock verifies stream_response called with weight)
- Test system prompt includes personality context when weight > 0 and profile exists (mock Creator query, capture messages sent to OpenAI)
- Test encyclopedic fallback when weight > 0 but personality_profile is null
- Test 422 for personality_weight outside [0.0, 1.0]
6. Run `cd backend && python -m pytest tests/test_chat.py -v` — all tests pass.
## Must-Haves
- [ ] ChatRequest schema has personality_weight with ge=0.0, le=1.0, default=0.0
- [ ] stream_response queries Creator.personality_profile when weight > 0
- [ ] Null profile or missing creator falls back to encyclopedic prompt silently
- [ ] Temperature scales with weight (0.3 at 0.0, up to 0.5 at 1.0)
- [ ] All existing tests still pass
- [ ] New tests for weight forwarding, prompt injection, null fallback, validation
- Estimate: 1h
- Files: backend/routers/chat.py, backend/chat_service.py, backend/tests/test_chat.py
- Verify: cd backend && python -m pytest tests/test_chat.py -v
- [ ] **T02: Frontend — personality slider UI + API wiring in ChatWidget** — Add a personality weight slider to ChatWidget and thread it through the streamChat() API call.
## Steps
1. In `frontend/src/api/chat.ts`, add `personalityWeight?: number` parameter to `streamChat()` function signature (after `conversationId`). Include `personality_weight: personalityWeight ?? 0` in the JSON body.
2. In `frontend/src/components/ChatWidget.tsx`:
- Add `const [personalityWeight, setPersonalityWeight] = useState(0);` state
- In the panel header (the `<div className={styles.header}>` block), add a slider row below the title: `<div className={styles.sliderRow}>` containing a label span ('Encyclopedic' on left, 'Creator Voice' on right), and an `<input type='range' min={0} max={1} step={0.1}>` bound to personalityWeight state
- Pass `personalityWeight` to `streamChat()` in the `sendMessage` callback (the 5th argument)
3. In `frontend/src/components/ChatWidget.module.css`, add styles:
- `.sliderRow` — flex row with gap, padding, align-items center, subtle border-bottom to separate from messages
- `.sliderLabel` — small muted text (var(--text-secondary) or similar)
- `.slider` — custom range input styling matching the dark theme (cyan accent track/thumb matching existing brand colors)
4. Run `cd frontend && npm run build` — zero errors.
## Must-Haves
- [ ] streamChat() accepts and sends personalityWeight in POST body
- [ ] ChatWidget renders a range slider in the panel header
- [ ] Slider labels show 'Encyclopedic' at 0 and 'Creator Voice' at 1
- [ ] Slider value flows through to API call
- [ ] Styles match existing dark theme
- [ ] Frontend builds with zero TypeScript/compilation errors
- Estimate: 45m
- Files: frontend/src/api/chat.ts, frontend/src/components/ChatWidget.tsx, frontend/src/components/ChatWidget.module.css
- Verify: cd frontend && npm run build

@@ -0,0 +1,89 @@
# S02 Research: Chat Widget ↔ Chat Engine Wiring (INT-1)
## Summary
Most of the chat plumbing already exists and works. The ChatWidget component (`frontend/src/components/ChatWidget.tsx`) renders a floating bubble on the CreatorDetail page, streams SSE responses via `frontend/src/api/chat.ts`, and renders citations with links. The backend (`backend/routers/chat.py` → `backend/chat_service.py`) accepts a query + optional creator name, retrieves context via SearchService, streams an LLM completion with citations, and supports multi-turn memory via Redis.
**What's missing is the personality slider.** The current system prompt is pure encyclopedic — no creator voice injection. The `personality_weight` parameter doesn't exist anywhere in the stack. S04 owns "full interpolation" (the sophisticated blending logic), but S02 needs to:
1. Add a slider UI to ChatWidget
2. Thread `personality_weight` (0.0–1.0) through the API
3. Implement basic system prompt variation: at weight 0.0 use the existing encyclopedic prompt; at weight > 0 inject the creator's personality profile data into the system prompt to flavor the response
## Recommendation
**Targeted research depth.** The tech is all familiar — React slider input, FastAPI field addition, system prompt templating. The integration seam is clean: one new field flows frontend → API → chat_service → system prompt.
## Implementation Landscape
### What Exists
| Component | File | Status |
|---|---|---|
| Chat bubble + panel UI | `frontend/src/components/ChatWidget.tsx` (491 lines) | ✅ Complete — renders on CreatorDetail, has streaming, citations, suggestions, conversation memory |
| Chat SSE client | `frontend/src/api/chat.ts` | ✅ Complete — POST with `{query, creator, conversation_id}`, parses sources/token/done/error events |
| Chat API route | `backend/routers/chat.py` | ✅ Complete — `POST /api/v1/chat` accepts `ChatRequest(query, creator, conversation_id)` |
| Chat service | `backend/chat_service.py` | ✅ Complete — search → build context → stream LLM. Has Redis-backed multi-turn. System prompt is hardcoded encyclopedic |
| Chat tests | `backend/tests/test_chat.py` | ✅ Complete — 11 tests covering SSE format, citations, creator forwarding, validation, errors, conversation memory |
| Personality profile model | `backend/models.py:134` — `Creator.personality_profile` JSONB | ✅ Extracted and stored per creator |
| Personality profile schema | `backend/schemas.py:766` — `PersonalityProfile` with `VocabularyProfile`, `ToneProfile`, `StyleMarkersProfile` | ✅ |
| Personality profile display | `frontend/src/components/PersonalityProfile.tsx` | ✅ Collapsible section on CreatorDetail |
| ChatPage (standalone) | `frontend/src/pages/ChatPage.tsx` | ✅ No creator context — no personality slider needed here |
| CSS module | `frontend/src/components/ChatWidget.module.css` (388 lines) | ✅ Dark theme, styled |
### What Needs Building
1. **Slider UI in ChatWidget** — `<input type="range">` with 0.0–1.0 range, styled label ("Encyclopedic ↔ Creator Voice"), positioned in the chat panel header or above the input row. Only shown when the widget is creator-scoped (always true for ChatWidget, since it only appears on CreatorDetail).
2. **`personality_weight` in API contract** — Add to `ChatRequest` schema (`float`, 0.0–1.0, default 0.0), `streamChat()` TS function, and the fetch body.
3. **System prompt interpolation in ChatService** — When `personality_weight > 0` and a creator is specified:
- Query `Creator.personality_profile` from DB (the `db: AsyncSession` is already passed to `stream_response`)
- Build a personality injection block from the profile (signature phrases, tone descriptors, teaching style, etc.)
- Append it to the system prompt with instructions like "Respond in {creator}'s voice: {profile_summary}. Use their signature phrases: {phrases}. Match their {formality} tone and {energy} energy."
- Scale the temperature slightly with the weight (0.3 at 0.0, up to ~0.5 at 1.0)
4. **Tests** — Add test cases for personality_weight parameter forwarding, system prompt injection when weight > 0, and behavior when personality_profile is null.
### Key Files to Modify
| File | Changes |
|---|---|
| `backend/routers/chat.py` | Add `personality_weight: float = 0.0` to `ChatRequest`, pass to service |
| `backend/chat_service.py` | Accept `personality_weight`, query Creator model for profile, build voice-injected prompt variant |
| `frontend/src/api/chat.ts` | Add `personalityWeight` param to `streamChat()`, include in POST body |
| `frontend/src/components/ChatWidget.tsx` | Add slider state, pass to `streamChat()`, render slider UI |
| `frontend/src/components/ChatWidget.module.css` | Styles for slider row |
| `backend/tests/test_chat.py` | New tests for personality_weight behavior |
### Natural Task Seams
**T01: Backend — personality_weight API + prompt interpolation**
- Add field to `ChatRequest`
- Modify `ChatService.stream_response()` to accept weight, query Creator, build voice prompt
- Add/update tests
- Verify: `pytest backend/tests/test_chat.py`
**T02: Frontend — slider UI + API wiring**
- Add slider component to ChatWidget header
- Thread personality_weight through `streamChat()`
- Style the slider
- Verify: `npm run build` (type-check + bundle)
### Constraints & Risks
- **Creator lookup by name, not ID:** The current API accepts `creator: str` (name string). To fetch the personality profile, the service needs to query `Creator` by name/slug. The `db` session is already available in `stream_response()`. Query: `select(Creator).where(Creator.name == creator)`.
- **Null personality profiles:** Not all creators have extracted profiles. When `personality_weight > 0` but `personality_profile` is null, fall back to the encyclopedic prompt silently. Don't error.
- **S04 boundary:** S02 builds the plumbing and a basic two-mode prompt (encyclopedic vs voice-injected). S04 does the sophisticated interpolation (blending between the two at continuous weight values, speech pattern matching, etc.). S02 should use a simple approach: weight == 0 → pure encyclopedic; weight > 0 → append personality context to the prompt. This is enough for the demo.
- **ChatPage (standalone) doesn't need a slider** — it has no creator context. No changes needed there.
- **Existing test pattern:** Tests use a standalone ASGI client with mocked DB session and Redis. The personality_weight tests should follow the same pattern — mock the Creator query result.
### Verification Strategy
- `cd backend && python -m pytest tests/test_chat.py -v` — all existing tests pass + new personality tests
- `cd frontend && npm run build` — zero TypeScript/build errors
- Manual: open CreatorDetail page, move slider, send chat message, observe personality-flavored response vs encyclopedic response

@@ -0,0 +1,66 @@
---
estimated_steps: 32
estimated_files: 3
skills_used: []
---
# T01: Backend — personality_weight API field + system prompt modulation + tests
Thread personality_weight through the chat API and modulate the system prompt based on creator personality profile.
## Failure Modes
| Dependency | On error | On timeout | On malformed response |
|---|---|---|---|
| Creator DB query | Log warning, fall back to encyclopedic prompt | Same — DB timeout treated as missing profile | N/A — query returns model or None |
## Negative Tests
- **Malformed inputs**: personality_weight outside 0.0–1.0 → 422, personality_weight as string → 422
- **Error paths**: creator name doesn't match any DB record → encyclopedic fallback, personality_profile is null → encyclopedic fallback
- **Boundary conditions**: weight exactly 0.0 → no profile query needed, weight exactly 1.0 → full personality injection
## Steps
1. In `backend/routers/chat.py`, add `personality_weight: float = Field(default=0.0, ge=0.0, le=1.0)` to `ChatRequest`. Pass it to `service.stream_response()`.
2. In `backend/chat_service.py`, add `personality_weight: float = 0.0` param to `stream_response()`. When weight > 0 and creator is not None:
- Import `Creator` from models, `select` from sqlalchemy
- Query `select(Creator).where(Creator.name == creator)` using the existing `db` session
- If creator found and `personality_profile` is not None, build a personality injection block from the profile dict (extract signature_phrases, tone descriptors, teaching_style, energy, formality from the nested vocabulary/tone/style_markers structure)
- Append the personality block to the system prompt: 'Respond in {creator}'s voice. {profile_summary}. Use their signature phrases: {phrases}. Match their {formality} {energy} tone.'
- If creator not found or profile is null, log at DEBUG and use the standard encyclopedic prompt
3. Scale LLM temperature with weight: `temperature = 0.3 + (personality_weight * 0.2)` (range 0.3–0.5).
4. Extend the `logger.info('chat_request ...')` line to include `weight=body.personality_weight`.
5. Add tests to `backend/tests/test_chat.py`:
- Test personality_weight is accepted and forwarded (mock verifies stream_response called with weight)
- Test system prompt includes personality context when weight > 0 and profile exists (mock Creator query, capture messages sent to OpenAI)
- Test encyclopedic fallback when weight > 0 but personality_profile is null
- Test 422 for personality_weight outside [0.0, 1.0]
6. Run `cd backend && python -m pytest tests/test_chat.py -v` — all tests pass.
## Must-Haves
- [ ] ChatRequest schema has personality_weight with ge=0.0, le=1.0, default=0.0
- [ ] stream_response queries Creator.personality_profile when weight > 0
- [ ] Null profile or missing creator falls back to encyclopedic prompt silently
- [ ] Temperature scales with weight (0.3 at 0.0, up to 0.5 at 1.0)
- [ ] All existing tests still pass
- [ ] New tests for weight forwarding, prompt injection, null fallback, validation
## Inputs
- `backend/routers/chat.py` — existing ChatRequest schema and route handler
- `backend/chat_service.py` — existing ChatService with stream_response() and _SYSTEM_PROMPT_TEMPLATE
- `backend/tests/test_chat.py` — existing 11 tests with standalone ASGI client pattern
- `backend/models.py` — Creator model with personality_profile JSONB column
- `backend/schemas.py` — PersonalityProfile, VocabularyProfile, ToneProfile, StyleMarkersProfile schemas
## Expected Output
- `backend/routers/chat.py` — ChatRequest with personality_weight field, passed to service
- `backend/chat_service.py` — stream_response accepts personality_weight, queries Creator, builds voice prompt
- `backend/tests/test_chat.py` — new tests for personality_weight behavior (forwarding, prompt injection, null fallback, validation)
## Verification
cd backend && python -m pytest tests/test_chat.py -v

@@ -0,0 +1,79 @@
---
id: T01
parent: S02
milestone: M023
provides: []
requires: []
affects: []
key_files: ["backend/routers/chat.py", "backend/chat_service.py", "backend/tests/test_chat.py"]
key_decisions: ["Personality injection intensity varies with weight: <0.4 subtle, 0.4-0.8 voice, >=0.8 full embody", "Temperature range 0.3-0.5 keeps responses grounded while allowing expressiveness"]
patterns_established: []
drill_down_paths: []
observability_surfaces: []
duration: ""
verification_result: "cd backend && python -m pytest tests/test_chat.py -v — 22 passed (13 existing + 9 new), 0 failed"
completed_at: 2026-04-04T09:28:21.555Z
blocker_discovered: false
---
# T01: Added personality_weight (0.0–1.0) to chat API; modulates system prompt with creator voice profile and scales LLM temperature
> Added personality_weight (0.0–1.0) to chat API; modulates system prompt with creator voice profile and scales LLM temperature
## What Happened
Added personality_weight field to ChatRequest with Pydantic validation (ge=0.0, le=1.0, default=0.0), forwarded to ChatService.stream_response(). When weight > 0 and creator is provided, queries Creator.personality_profile JSONB, extracts voice cues (signature phrases, tone, teaching style), and appends a personality injection block to the system prompt. Temperature scales linearly from 0.3 to 0.5. Graceful fallback to encyclopedic prompt on DB error, missing creator, or null profile. Wrote 9 new tests covering all paths.
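The two mappings described above can be sketched in a few lines (formula and thresholds taken from the key decisions; the function names here are illustrative, not the service's actual API):

```python
def scale_temperature(weight: float) -> float:
    """Linear scale from 0.3 (encyclopedic) at weight=0.0 to 0.5 at weight=1.0."""
    return 0.3 + weight * 0.2

def intensity_tier(weight: float) -> str:
    """Injection intensity tiers: <0.4 subtle, 0.4-0.8 voice, >=0.8 full embody."""
    if weight >= 0.8:
        return "full_embody"
    if weight >= 0.4:
        return "voice"
    return "subtle"

assert scale_temperature(0.0) == 0.3
assert abs(scale_temperature(0.7) - 0.44) < 1e-9
assert intensity_tier(0.9) == "full_embody"
assert intensity_tier(0.5) == "voice"
```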
## Verification
cd backend && python -m pytest tests/test_chat.py -v — 22 passed (13 existing + 9 new), 0 failed
## Verification Evidence
| # | Command | Exit Code | Verdict | Duration |
|---|---------|-----------|---------|----------|
| 1 | `cd backend && python -m pytest tests/test_chat.py -v` | 0 | ✅ pass | 1150ms |
## Deviations
Simplified test_personality_weight_accepted_and_forwarded from a wraps-based approach to a capture-kwargs approach due to an unbound-method limitation.
## Known Issues
None.
## Files Created/Modified
- `backend/routers/chat.py`
- `backend/chat_service.py`
- `backend/tests/test_chat.py`


@@ -0,0 +1,48 @@
---
estimated_steps: 19
estimated_files: 3
skills_used: []
---
# T02: Frontend — personality slider UI + API wiring in ChatWidget
Add a personality weight slider to ChatWidget and thread it through the streamChat() API call.
## Steps
1. In `frontend/src/api/chat.ts`, add `personalityWeight?: number` parameter to `streamChat()` function signature (after `conversationId`). Include `personality_weight: personalityWeight ?? 0` in the JSON body.
2. In `frontend/src/components/ChatWidget.tsx`:
- Add `const [personalityWeight, setPersonalityWeight] = useState(0);` state
- In the panel header (the `<div className={styles.header}>` block), add a slider row below the title: `<div className={styles.sliderRow}>` containing a label span ('Encyclopedic' on left, 'Creator Voice' on right), and an `<input type='range' min={0} max={1} step={0.1}>` bound to personalityWeight state
- Pass `personalityWeight` to `streamChat()` in the `sendMessage` callback (the 5th argument)
3. In `frontend/src/components/ChatWidget.module.css`, add styles:
- `.sliderRow` — flex row with gap, padding, align-items center, subtle border-bottom to separate from messages
- `.sliderLabel` — small muted text (var(--text-secondary) or similar)
- `.slider` — custom range input styling matching the dark theme (cyan accent track/thumb matching existing brand colors)
4. Run `cd frontend && npm run build` — zero errors.
## Must-Haves
- [ ] streamChat() accepts and sends personalityWeight in POST body
- [ ] ChatWidget renders a range slider in the panel header
- [ ] Slider labels show 'Encyclopedic' at 0 and 'Creator Voice' at 1
- [ ] Slider value flows through to API call
- [ ] Styles match existing dark theme
- [ ] Frontend builds with zero TypeScript/compilation errors
## Inputs
- `frontend/src/api/chat.ts` — existing streamChat() with query, callbacks, creator, conversationId params
- `frontend/src/components/ChatWidget.tsx` — existing ChatWidget with header, messages, input form
- `frontend/src/components/ChatWidget.module.css` — existing dark-theme styles
- `backend/routers/chat.py` — T01 output: ChatRequest now accepts personality_weight field
## Expected Output
- `frontend/src/api/chat.ts` — streamChat() sends personality_weight in POST body
- `frontend/src/components/ChatWidget.tsx` — slider state + UI in header + passed to streamChat()
- `frontend/src/components/ChatWidget.module.css` — slider row styles matching dark theme
## Verification
cd frontend && npm run build


@@ -21,9 +21,11 @@ import uuid
from typing import Any, AsyncIterator
import openai
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession
from config import Settings
from models import Creator
from search_service import SearchService
logger = logging.getLogger("chrysopedia.chat")
@@ -95,12 +97,43 @@ class ChatService:
except Exception:
logger.warning("chat_history_save_error cid=%s", conversation_id, exc_info=True)
async def _inject_personality(
self,
system_prompt: str,
db: AsyncSession,
creator_name: str,
weight: float,
) -> str:
"""Query creator personality_profile and append a voice block to the system prompt.
Falls back to the unmodified prompt on DB error, missing creator, or null profile.
"""
try:
result = await db.execute(
select(Creator).where(Creator.name == creator_name)
)
creator_row = result.scalars().first()
except Exception:
logger.warning("chat_personality_db_error creator=%r", creator_name, exc_info=True)
return system_prompt
if creator_row is None or creator_row.personality_profile is None:
logger.debug("chat_personality_skip creator=%r reason=%s",
creator_name,
"not_found" if creator_row is None else "null_profile")
return system_prompt
profile = creator_row.personality_profile
voice_block = _build_personality_block(creator_name, profile, weight)
return system_prompt + "\n\n" + voice_block
async def stream_response(
self,
query: str,
db: AsyncSession,
creator: str | None = None,
conversation_id: str | None = None,
personality_weight: float = 0.0,
) -> AsyncIterator[str]:
"""Yield SSE-formatted events for a chat query.
@@ -151,6 +184,15 @@ class ChatService:
# ── 3. Stream LLM completion ────────────────────────────────────
system_prompt = _SYSTEM_PROMPT_TEMPLATE.format(context_block=context_block)
# Inject creator personality voice when weight > 0
if personality_weight > 0 and creator:
system_prompt = await self._inject_personality(
system_prompt, db, creator, personality_weight,
)
# Scale temperature with personality weight: 0.3 (encyclopedic) → 0.5 (full personality)
temperature = 0.3 + (personality_weight * 0.2)
messages: list[dict[str, str]] = [
{"role": "system", "content": system_prompt},
]
@@ -165,7 +207,7 @@ class ChatService:
model=self.settings.llm_model,
messages=messages,
stream=True,
temperature=0.3,
temperature=temperature,
max_tokens=2048,
)
@@ -245,3 +287,47 @@ def _build_context_block(items: list[dict[str, Any]]) -> str:
lines.append("")
return "\n".join(lines)
def _build_personality_block(creator_name: str, profile: dict[str, Any], weight: float) -> str:
"""Build a personality voice injection block from a creator's personality_profile JSONB.
The ``weight`` (0.0–1.0) determines how strongly the personality should
come through. At low weights the instruction is softer ("subtly adopt");
at high weights it is emphatic ("fully embody").
"""
vocab = profile.get("vocabulary", {})
tone = profile.get("tone", {})
style = profile.get("style_markers", {})
phrases = vocab.get("signature_phrases", [])
descriptors = tone.get("descriptors", [])
teaching_style = tone.get("teaching_style", "")
energy = tone.get("energy", "moderate")
formality = tone.get("formality", "conversational")
parts: list[str] = []
# Intensity qualifier
if weight >= 0.8:
parts.append(f"Fully embody {creator_name}'s voice and style.")
elif weight >= 0.4:
parts.append(f"Respond in {creator_name}'s voice.")
else:
parts.append(f"Subtly adopt {creator_name}'s communication style.")
if teaching_style:
parts.append(f"Teaching style: {teaching_style}.")
if descriptors:
parts.append(f"Tone: {', '.join(descriptors[:5])}.")
if phrases:
parts.append(f"Use their signature phrases: {', '.join(phrases[:6])}.")
parts.append(f"Match their {formality} {energy} tone.")
# Style markers
if style.get("uses_analogies"):
parts.append("Use analogies when helpful.")
if style.get("audience_engagement"):
parts.append(f"Audience engagement: {style['audience_engagement']}.")
return " ".join(parts)
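As a quick usage check, a condensed replica of `_build_personality_block` (trimmed to the intensity tier and teaching style so the example stays self-contained; see the diff above for the full version) produces output like this:

```python
from typing import Any

def build_personality_block(creator: str, profile: dict[str, Any], weight: float) -> str:
    # Condensed replica: intensity tier + teaching style only.
    tone = profile.get("tone", {})
    parts: list[str] = []
    if weight >= 0.8:
        parts.append(f"Fully embody {creator}'s voice and style.")
    elif weight >= 0.4:
        parts.append(f"Respond in {creator}'s voice.")
    else:
        parts.append(f"Subtly adopt {creator}'s communication style.")
    if tone.get("teaching_style"):
        parts.append(f"Teaching style: {tone['teaching_style']}.")
    return " ".join(parts)

block = build_personality_block(
    "Keota", {"tone": {"teaching_style": "hands-on demo-driven"}}, 0.7
)
# → "Respond in Keota's voice. Teaching style: hands-on demo-driven."
```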


@@ -29,6 +29,7 @@ class ChatRequest(BaseModel):
query: str = Field(..., min_length=1, max_length=1000)
creator: str | None = None
conversation_id: str | None = None
personality_weight: float = Field(default=0.0, ge=0.0, le=1.0)
@router.post("")
@@ -45,7 +46,7 @@ async def chat(
- ``event: done`` completion metadata with cascade_tier, conversation_id
- ``event: error`` error message (on failure)
"""
logger.info("chat_request query=%r creator=%r cid=%r", body.query, body.creator, body.conversation_id)
logger.info("chat_request query=%r creator=%r cid=%r weight=%.2f", body.query, body.creator, body.conversation_id, body.personality_weight)
redis = await get_redis()
service = ChatService(settings, redis=redis)
@@ -56,6 +57,7 @@
db=db,
creator=body.creator,
conversation_id=body.conversation_id,
personality_weight=body.personality_weight,
),
media_type="text/event-stream",
headers={


@@ -563,3 +563,307 @@ async def test_single_turn_fallback_no_redis_history(chat_client, mock_redis):
assert len(captured_messages) == 2
assert captured_messages[0]["role"] == "system"
assert captured_messages[1]["role"] == "user"
# ── Personality weight tests ─────────────────────────────────────────────────
_FAKE_PERSONALITY_PROFILE = {
"vocabulary": {
"signature_phrases": ["let's gooo", "that's fire"],
"jargon_level": "mixed",
"filler_words": [],
"distinctive_terms": ["sauce", "vibes"],
"sound_descriptions": ["crispy", "punchy"],
},
"tone": {
"formality": "casual",
"energy": "high",
"humor": "occasional",
"teaching_style": "hands-on demo-driven",
"descriptors": ["enthusiastic", "direct", "encouraging"],
},
"style_markers": {
"explanation_approach": "example-first",
"uses_analogies": True,
"analogy_examples": ["like cooking a steak"],
"sound_words": ["brrr", "thwack"],
"self_references": "I always",
"audience_engagement": "asks rhetorical questions",
"pacing": "fast",
},
"summary": "High-energy producer who teaches by doing.",
}
def _mock_creator_row(name: str, profile: dict | None):
"""Build a mock Creator ORM row with just the fields personality injection needs."""
row = MagicMock()
row.name = name
row.personality_profile = profile
return row
def _mock_db_execute(creator_row):
"""Return a mock db.execute whose result supports scalars().first()."""
mock_scalars = MagicMock()
mock_scalars.first.return_value = creator_row
mock_result = MagicMock()
mock_result.scalars.return_value = mock_scalars
return AsyncMock(return_value=mock_result)
@pytest.mark.asyncio
async def test_personality_weight_accepted_and_forwarded(chat_client):
"""personality_weight is accepted in the request and forwarded to stream_response."""
search_result = _fake_search_result()
captured_kwargs = {}
mock_openai_client = MagicMock()
async def _capture_create(**kwargs):
captured_kwargs.update(kwargs)
return _mock_openai_stream(["ok"])
mock_openai_client.chat.completions.create = AsyncMock(side_effect=_capture_create)
with (
patch("chat_service.SearchService.search", new_callable=AsyncMock, return_value=search_result),
patch("chat_service.openai.AsyncOpenAI", return_value=mock_openai_client),
):
resp = await chat_client.post(
"/api/v1/chat",
json={"query": "test", "creator": "Keota", "personality_weight": 0.7},
)
assert resp.status_code == 200
events = _parse_sse(resp.text)
event_types = [e["event"] for e in events]
assert "done" in event_types
# Temperature should reflect the weight: 0.3 + 0.7*0.2 = 0.44
assert captured_kwargs.get("temperature") == pytest.approx(0.44)
@pytest.mark.asyncio
async def test_personality_prompt_injected_when_weight_and_profile(chat_client):
"""System prompt includes personality context when weight > 0 and profile exists."""
search_result = _fake_search_result()
creator_row = _mock_creator_row("Keota", _FAKE_PERSONALITY_PROFILE)
captured_messages = []
mock_openai_client = MagicMock()
async def _capture_create(**kwargs):
captured_messages.extend(kwargs.get("messages", []))
return _mock_openai_stream(["personality answer"])
mock_openai_client.chat.completions.create = AsyncMock(side_effect=_capture_create)
with (
patch("chat_service.SearchService.search", new_callable=AsyncMock, return_value=search_result),
patch("chat_service.openai.AsyncOpenAI", return_value=mock_openai_client),
):
# We need to mock db.execute inside the service — override the session
mock_session = AsyncMock()
mock_session.execute = _mock_db_execute(creator_row)
async def _mock_get_session():
yield mock_session
app.dependency_overrides[get_session] = _mock_get_session
resp = await chat_client.post(
"/api/v1/chat",
json={"query": "snare tips", "creator": "Keota", "personality_weight": 0.7},
)
assert resp.status_code == 200
assert len(captured_messages) >= 2
system_prompt = captured_messages[0]["content"]
# Personality block should be appended
assert "Keota" in system_prompt
assert "let's gooo" in system_prompt
assert "hands-on demo-driven" in system_prompt
assert "casual" in system_prompt
assert "high" in system_prompt
@pytest.mark.asyncio
async def test_personality_encyclopedic_fallback_null_profile(chat_client):
"""When weight > 0 but personality_profile is null, falls back to encyclopedic prompt."""
search_result = _fake_search_result()
creator_row = _mock_creator_row("NullCreator", None)
captured_messages = []
mock_openai_client = MagicMock()
async def _capture_create(**kwargs):
captured_messages.extend(kwargs.get("messages", []))
return _mock_openai_stream(["encyclopedic answer"])
mock_openai_client.chat.completions.create = AsyncMock(side_effect=_capture_create)
with (
patch("chat_service.SearchService.search", new_callable=AsyncMock, return_value=search_result),
patch("chat_service.openai.AsyncOpenAI", return_value=mock_openai_client),
):
mock_session = AsyncMock()
mock_session.execute = _mock_db_execute(creator_row)
async def _mock_get_session():
yield mock_session
app.dependency_overrides[get_session] = _mock_get_session
resp = await chat_client.post(
"/api/v1/chat",
json={"query": "reverb tips", "creator": "NullCreator", "personality_weight": 0.5},
)
assert resp.status_code == 200
system_prompt = captured_messages[0]["content"]
# Should be the standard encyclopedic prompt, no personality injection
assert "Chrysopedia" in system_prompt
assert "NullCreator" not in system_prompt
@pytest.mark.asyncio
async def test_personality_encyclopedic_fallback_missing_creator(chat_client):
"""When weight > 0 but creator doesn't exist in DB, falls back to encyclopedic prompt."""
search_result = _fake_search_result()
captured_messages = []
mock_openai_client = MagicMock()
async def _capture_create(**kwargs):
captured_messages.extend(kwargs.get("messages", []))
return _mock_openai_stream(["encyclopedic answer"])
mock_openai_client.chat.completions.create = AsyncMock(side_effect=_capture_create)
with (
patch("chat_service.SearchService.search", new_callable=AsyncMock, return_value=search_result),
patch("chat_service.openai.AsyncOpenAI", return_value=mock_openai_client),
):
mock_session = AsyncMock()
mock_session.execute = _mock_db_execute(None) # No creator found
async def _mock_get_session():
yield mock_session
app.dependency_overrides[get_session] = _mock_get_session
resp = await chat_client.post(
"/api/v1/chat",
json={"query": "bass tips", "creator": "GhostCreator", "personality_weight": 0.8},
)
assert resp.status_code == 200
system_prompt = captured_messages[0]["content"]
assert "Chrysopedia" in system_prompt
assert "GhostCreator" not in system_prompt
@pytest.mark.asyncio
async def test_personality_weight_zero_skips_profile_query(chat_client):
"""When weight is 0.0, no Creator query is made even if creator is set."""
search_result = _fake_search_result()
captured_kwargs = {}
mock_openai_client = MagicMock()
async def _capture_create(**kwargs):
captured_kwargs.update(kwargs)
return _mock_openai_stream(["ok"])
mock_openai_client.chat.completions.create = AsyncMock(side_effect=_capture_create)
with (
patch("chat_service.SearchService.search", new_callable=AsyncMock, return_value=search_result),
patch("chat_service.openai.AsyncOpenAI", return_value=mock_openai_client),
):
mock_session = AsyncMock()
mock_session.execute = AsyncMock() # Should NOT be called
async def _mock_get_session():
yield mock_session
app.dependency_overrides[get_session] = _mock_get_session
resp = await chat_client.post(
"/api/v1/chat",
json={"query": "test", "creator": "Keota", "personality_weight": 0.0},
)
assert resp.status_code == 200
# DB execute should not have been called for Creator lookup
mock_session.execute.assert_not_called()
# Temperature should be 0.3 (base)
assert captured_kwargs.get("temperature") == pytest.approx(0.3)
@pytest.mark.asyncio
async def test_personality_temperature_scales_with_weight(chat_client):
"""Temperature scales: 0.3 at weight=0.0, 0.5 at weight=1.0."""
search_result = _fake_search_result()
creator_row = _mock_creator_row("Keota", _FAKE_PERSONALITY_PROFILE)
captured_kwargs = {}
mock_openai_client = MagicMock()
async def _capture_create(**kwargs):
captured_kwargs.update(kwargs)
return _mock_openai_stream(["warm"])
mock_openai_client.chat.completions.create = AsyncMock(side_effect=_capture_create)
with (
patch("chat_service.SearchService.search", new_callable=AsyncMock, return_value=search_result),
patch("chat_service.openai.AsyncOpenAI", return_value=mock_openai_client),
):
mock_session = AsyncMock()
mock_session.execute = _mock_db_execute(creator_row)
async def _mock_get_session():
yield mock_session
app.dependency_overrides[get_session] = _mock_get_session
resp = await chat_client.post(
"/api/v1/chat",
json={"query": "test", "creator": "Keota", "personality_weight": 1.0},
)
assert resp.status_code == 200
assert captured_kwargs.get("temperature") == pytest.approx(0.5)
@pytest.mark.asyncio
async def test_personality_weight_above_1_returns_422(chat_client):
"""personality_weight > 1.0 fails Pydantic validation with 422."""
resp = await chat_client.post(
"/api/v1/chat",
json={"query": "test", "personality_weight": 1.5},
)
assert resp.status_code == 422
@pytest.mark.asyncio
async def test_personality_weight_below_0_returns_422(chat_client):
"""personality_weight < 0.0 fails Pydantic validation with 422."""
resp = await chat_client.post(
"/api/v1/chat",
json={"query": "test", "personality_weight": -0.1},
)
assert resp.status_code == 422
@pytest.mark.asyncio
async def test_personality_weight_string_returns_422(chat_client):
"""personality_weight as a non-numeric string fails validation with 422."""
resp = await chat_client.post(
"/api/v1/chat",
json={"query": "test", "personality_weight": "high"},
)
assert resp.status_code == 422