mirror of
https://github.com/xpltdco/media-rip.git
synced 2026-04-03 10:54:00 -06:00
Persistent admin settings + new server config fields
Settings are now persisted to SQLite (`config` table) and survive restarts.

New admin-configurable settings (migrated from env-var-only):
- Max concurrent downloads (1-10, default 3)
- Session mode (isolated/shared/open)
- Session timeout hours (1-8760, default 72)
- Admin username
- Auto-purge enabled (bool)
- Purge max age hours (1-87600, default 168)

Existing admin settings now also persist:
- Welcome message
- Default video/audio formats
- Privacy mode + retention hours

Architecture:
- New settings service (services/settings.py) handles DB read/write
- Startup loads persisted settings and applies them to AppConfig
- Admin PUT /settings validates, updates live config, and persists
- GET /admin/settings returns all configurable fields
- DownloadService.update_max_concurrent() hot-swaps the thread pool

Also:
- Fix footer GitHub URL (jlightner → xpltdco)
- Add DEPLOY-TEST-PROMPT.md for deployment testing
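The persistence model described above can be sketched as an upsert into the `config` table. This is a hedged illustration using stdlib `sqlite3` (the actual service uses `aiosqlite` and also stores an `updated_at` column; the schema here is simplified):

```python
import json
import sqlite3

def save_setting(db: sqlite3.Connection, key: str, value: object) -> None:
    # JSON-encode so bools/ints round-trip through the TEXT column;
    # the upsert means repeated writes just overwrite the row.
    db.execute(
        "INSERT INTO config (key, value) VALUES (?, ?) "
        "ON CONFLICT(key) DO UPDATE SET value = excluded.value",
        (key, json.dumps(value)),
    )
    db.commit()

def load_settings(db: sqlite3.Connection) -> dict:
    # Read everything back at startup and decode each value
    rows = db.execute("SELECT key, value FROM config").fetchall()
    return {k: json.loads(v) for k, v in rows}
```

The `ON CONFLICT(key) DO UPDATE` form requires `key` to be a primary key or unique column, which matches a key/value config table.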
This commit is contained in:
parent 5a6eb00906
commit b4fd0af8e9

8 changed files with 620 additions and 90 deletions
213  DEPLOY-TEST-PROMPT.md  Normal file

@@ -0,0 +1,213 @@
# media.rip() — Deployment Testing Prompt

Take this to a separate Claude session on a machine with Docker installed.

---

## Context

You're testing a freshly published Docker image for **media.rip()**, a self-hosted yt-dlp web frontend. The image is at `ghcr.io/xpltdco/media-rip:latest` (v1.0.1). Your job is to deploy it, exercise the features, and report back with findings.

The app is a FastAPI + Vue 3 web app that lets users paste video/audio URLs, pick quality, and download media. It has session isolation, real-time SSE progress, an admin panel, theme switching, and auto-purge.
## Step 1: Deploy (zero-config)

Create a directory and bring it up:

```bash
mkdir media-rip-test && cd media-rip-test

cat > docker-compose.yml << 'EOF'
services:
  mediarip:
    image: ghcr.io/xpltdco/media-rip:latest
    ports:
      - "8080:8000"
    volumes:
      - ./downloads:/downloads
      - mediarip-data:/data
    environment:
      - MEDIARIP__SESSION__MODE=isolated
    restart: unless-stopped

volumes:
  mediarip-data:
EOF

docker compose up -d
docker compose logs -f  # watch for startup, Ctrl+C when ready
```

Open http://localhost:8080 in a browser.
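The `MEDIARIP__SESSION__MODE` variable follows the double-underscore convention for nested config overrides (`MEDIARIP__SECTION__KEY`). A minimal sketch of how such names map onto nested settings; the section/key names come from this compose file, but the parsing itself is an assumption about the app's config layer, which likely delegates to its settings library:

```python
PREFIX = "MEDIARIP__"

def env_overrides(environ: dict) -> dict:
    """Collect MEDIARIP__SECTION__KEY vars into a nested dict (illustrative only)."""
    nested: dict = {}
    for name, value in environ.items():
        if not name.startswith(PREFIX):
            continue
        # "MEDIARIP__SESSION__MODE" -> ["session", "mode"]
        path = name[len(PREFIX):].lower().split("__")
        node = nested
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return nested
```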
## Step 2: Test the core loop

Test each of these and note what happens:

1. **Paste a URL and download** — Try a YouTube video (e.g. `https://www.youtube.com/watch?v=jNQXAC9IVRw` — "Me at the zoo", 19 seconds). Does the format picker appear? Can you select quality? Does the download start and show real-time progress?

2. **Check the download file** — Look in `./downloads/` on the host. Is the file there? Is the filename sensible?

3. **Try a non-YouTube URL** — Try a SoundCloud track, Vimeo video, or any other URL. Does format extraction work?

4. **Try a playlist** — Paste a YouTube playlist URL. Do parent/child jobs appear? Can you collapse/expand them?

5. **Queue management** — Start multiple downloads. Can you cancel one mid-download? Does the queue show correct statuses?

6. **Page refresh** — Refresh the browser mid-download. Do your downloads reappear (SSE reconnect replay)?

7. **Session isolation** — Open a second browser (or incognito window). Does it have its own empty queue? Can it see the first browser's downloads? (It shouldn't in isolated mode.)
## Step 3: Test the admin panel

Bring the container down, enable admin, bring it back up:

```bash
docker compose down

# Generate a bcrypt hash for password "testpass123"
# (bcrypt isn't preinstalled in python:3.12-slim, so install it first)
HASH=$(docker run --rm python:3.12-slim sh -c "pip install -q bcrypt > /dev/null && python -c \"import bcrypt; print(bcrypt.hashpw(b'testpass123', bcrypt.gensalt()).decode())\"")
export HASH

# Quote the heredoc delimiter so the shell doesn't mangle the $2b$... hash;
# docker compose substitutes ${HASH} from the environment at `up` time instead.
cat > docker-compose.yml << 'EOF'
services:
  mediarip:
    image: ghcr.io/xpltdco/media-rip:latest
    ports:
      - "8080:8000"
    volumes:
      - ./downloads:/downloads
      - mediarip-data:/data
    environment:
      - MEDIARIP__SESSION__MODE=isolated
      - MEDIARIP__ADMIN__ENABLED=true
      - MEDIARIP__ADMIN__USERNAME=admin
      - MEDIARIP__ADMIN__PASSWORD_HASH=${HASH}
    restart: unless-stopped

volumes:
  mediarip-data:
EOF

docker compose up -d
```

Test:

1. Does the admin panel appear in the UI? Can you log in with `admin` / `testpass123`?
2. Can you see active sessions, storage info, error logs?
3. Can you trigger a manual purge?
4. Do previous downloads (from step 2) still appear? (Data should persist across restarts via the named volume.)
## Step 4: Test persistence

```bash
docker compose restart mediarip
```

After restart:

1. Does the download history survive?
2. Does the admin login still work?
3. Are downloaded files still in `./downloads/`?

## Step 5: Test themes

1. Switch between Cyberpunk, Dark, and Light themes in the header. Do they all render correctly?
2. Check on mobile viewport (resize browser to <768px). Does the layout switch to mobile mode with bottom tabs?
## Step 6: Test auto-purge (optional)

```bash
docker compose down

# Enable purge with a very short max age for testing
cat > docker-compose.yml << 'EOF'
services:
  mediarip:
    image: ghcr.io/xpltdco/media-rip:latest
    ports:
      - "8080:8000"
    volumes:
      - ./downloads:/downloads
      - mediarip-data:/data
    environment:
      - MEDIARIP__SESSION__MODE=isolated
      - MEDIARIP__PURGE__ENABLED=true
      - MEDIARIP__PURGE__MAX_AGE_HOURS=0
      - "MEDIARIP__PURGE__CRON=* * * * *"
    restart: unless-stopped

volumes:
  mediarip-data:
EOF

docker compose up -d
docker compose logs -f  # watch for purge log messages
```

Do completed downloads get purged? Do files get removed from `./downloads/`?
## Step 7: Health check

```bash
curl http://localhost:8080/api/health | python -m json.tool
```

Does it return status, version, yt_dlp_version, uptime?
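A small helper for checking that the health payload carries the four fields named above. The key set comes from this prompt; the sample payload values are hypothetical:

```python
import json

# Keys the /api/health response is expected to contain (per this prompt)
EXPECTED_KEYS = {"status", "version", "yt_dlp_version", "uptime"}

def check_health_payload(raw: str) -> list[str]:
    """Return a sorted list of expected keys missing from the JSON payload."""
    payload = json.loads(raw)
    return sorted(EXPECTED_KEYS - payload.keys())
```

Feed it the body returned by the `curl` above; an empty list means the response has the expected shape.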
## Step 8: Container inspection

```bash
# Check image size
docker images ghcr.io/xpltdco/media-rip

# Check the container is running as non-root
docker compose exec mediarip whoami

# Check the API responds from inside the container
docker compose exec mediarip python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/api/health')"

# Check ffmpeg and deno are available
docker compose exec mediarip ffmpeg -version | head -1
docker compose exec mediarip deno --version
```
## What to report back

When you're done, bring the findings back to the original session. Structure your report as:

### Working
- List everything that worked as expected

### Broken / Bugs
- Exact steps to reproduce
- What you expected vs what happened
- Any error messages from `docker compose logs` or browser console

### UX Issues
- Anything confusing, ugly, slow, or unintuitive
- Mobile layout problems
- Theme rendering issues

### Missing / Gaps
- Features that felt absent
- Configuration that was hard to figure out
- Documentation gaps

### Container / Ops
- Image size
- Startup time
- Resource usage (`docker stats`)
- Any permission errors with volumes
- Health check behavior

### Raw logs
- Paste any interesting lines from `docker compose logs`
- Browser console errors (F12 → Console tab)
---

## Cleanup

```bash
docker compose down -v  # removes containers + named volumes
cd .. && rm -rf media-rip-test
```
```diff
@@ -64,6 +64,13 @@ async def lifespan(app: FastAPI):
     db = await init_db(config.server.db_path)
     logger.info("Database initialised at %s", config.server.db_path)
 
+    # --- Load persisted settings from DB ---
+    from app.services.settings import apply_persisted_to_config, load_persisted_settings
+
+    persisted = await load_persisted_settings(db)
+    if persisted:
+        apply_persisted_to_config(config, persisted)
+
     # --- Event loop + SSE broker ---
     loop = asyncio.get_event_loop()
     broker = SSEBroker(loop)
```
```diff
@@ -101,6 +108,10 @@ async def lifespan(app: FastAPI):
     app.state.download_service = download_service
     app.state.start_time = datetime.now(timezone.utc)
 
+    # Store format overrides from persisted settings
+    app.state._default_video_format = persisted.get("default_video_format", "auto")
+    app.state._default_audio_format = persisted.get("default_audio_format", "auto")
+
     yield
 
     # --- Teardown ---
```
```diff
@@ -1,10 +1,14 @@
-"""Admin API endpoints — protected by require_admin dependency."""
+"""Admin API endpoints — protected by require_admin dependency.
+
+Settings are persisted to SQLite and survive container restarts.
+"""
 
 from __future__ import annotations
 
 import logging
 
 from fastapi import APIRouter, Depends, Request
+from fastapi.responses import JSONResponse
 
 from app.dependencies import require_admin
 
```
```diff
@@ -137,7 +141,6 @@ async def list_unsupported_urls(
         for row in rows
     ]
 
-    # Total count
     count_cursor = await db.execute("SELECT COUNT(*) FROM unsupported_urls")
     count_row = await count_cursor.fetchone()
     total = count_row[0] if count_row else 0
```
```diff
@@ -181,9 +184,6 @@ async def manual_purge(
     config = request.app.state.config
     db = request.app.state.db
-    # Attach runtime overrides so purge service can read them
-    overrides = getattr(request.app.state, "settings_overrides", {})
-    config._runtime_overrides = overrides
     result = await run_purge(db, config, purge_all=True)
 
     # Broadcast job_removed events to all SSE clients
```
```diff
@@ -191,41 +191,71 @@ async def manual_purge(
     for job_id in result.get("deleted_job_ids", []):
         broker.publish_all({"event": "job_removed", "data": {"job_id": job_id}})
 
-    # Don't send internal field to client
     result.pop("deleted_job_ids", None)
     return result
 
 
+@router.get("/settings")
+async def get_settings(
+    request: Request,
+    _admin: str = Depends(require_admin),
+) -> dict:
+    """Return all admin-configurable settings with current values."""
+    config = request.app.state.config
+
+    return {
+        "welcome_message": config.ui.welcome_message,
+        "default_video_format": getattr(request.app.state, "_default_video_format", "auto"),
+        "default_audio_format": getattr(request.app.state, "_default_audio_format", "auto"),
+        "privacy_mode": config.purge.privacy_mode,
+        "privacy_retention_hours": config.purge.privacy_retention_hours,
+        "max_concurrent": config.downloads.max_concurrent,
+        "session_mode": config.session.mode,
+        "session_timeout_hours": config.session.timeout_hours,
+        "admin_username": config.admin.username,
+        "purge_enabled": config.purge.enabled,
+        "purge_max_age_hours": config.purge.max_age_hours,
+    }
+
+
 @router.put("/settings")
 async def update_settings(
     request: Request,
     _admin: str = Depends(require_admin),
 ) -> dict:
-    """Update runtime settings (in-memory only — resets on restart).
+    """Update and persist admin settings to SQLite.
 
-    Accepts a JSON body with optional fields:
+    Accepts a JSON body with any combination of:
     - welcome_message: str
     - default_video_format: str (auto, mp4, webm)
     - default_audio_format: str (auto, mp3, m4a, flac, wav, opus)
+    - privacy_mode: bool
+    - privacy_retention_hours: int (1-8760)
+    - max_concurrent: int (1-10)
+    - session_mode: str (isolated, shared, open)
+    - session_timeout_hours: int (1-8760)
+    - admin_username: str
+    - purge_enabled: bool
+    - purge_max_age_hours: int (1-87600)
     """
+    from app.services.settings import save_settings
+
     body = await request.json()
+    config = request.app.state.config
+    db = request.app.state.db
 
-    if not hasattr(request.app.state, "settings_overrides"):
-        request.app.state.settings_overrides = {}
+    to_persist = {}
     updated = []
 
+    # --- Validate and collect ---
     if "welcome_message" in body:
         msg = body["welcome_message"]
         if not isinstance(msg, str):
-            from fastapi.responses import JSONResponse
-            return JSONResponse(
-                status_code=422,
-                content={"detail": "welcome_message must be a string"},
-            )
-        request.app.state.settings_overrides["welcome_message"] = msg
+            return JSONResponse(status_code=422, content={"detail": "welcome_message must be a string"})
+        config.ui.welcome_message = msg
+        to_persist["welcome_message"] = msg
         updated.append("welcome_message")
-        logger.info("Admin updated welcome_message to: %s", msg[:80])
 
     valid_video_formats = {"auto", "mp4", "webm"}
     valid_audio_formats = {"auto", "mp3", "m4a", "flac", "wav", "opus"}
```
```diff
@@ -233,56 +263,85 @@ async def update_settings(
     if "default_video_format" in body:
         fmt = body["default_video_format"]
         if fmt in valid_video_formats:
-            request.app.state.settings_overrides["default_video_format"] = fmt
+            request.app.state._default_video_format = fmt
+            to_persist["default_video_format"] = fmt
             updated.append("default_video_format")
-            logger.info("Admin updated default_video_format to: %s", fmt)
 
     if "default_audio_format" in body:
         fmt = body["default_audio_format"]
         if fmt in valid_audio_formats:
-            request.app.state.settings_overrides["default_audio_format"] = fmt
+            request.app.state._default_audio_format = fmt
+            to_persist["default_audio_format"] = fmt
             updated.append("default_audio_format")
-            logger.info("Admin updated default_audio_format to: %s", fmt)
 
     if "privacy_mode" in body:
         val = body["privacy_mode"]
         if isinstance(val, bool):
-            request.app.state.settings_overrides["privacy_mode"] = val
-            # When enabling privacy mode, also enable the purge scheduler
-            config = request.app.state.config
-            if val and not config.purge.enabled:
-                config.purge.enabled = True
-                # Start the scheduler if APScheduler is available
-                try:
-                    from apscheduler.schedulers.asyncio import AsyncIOScheduler
-                    from apscheduler.triggers.cron import CronTrigger
-                    from app.services.purge import run_purge
-
-                    if not hasattr(request.app.state, "scheduler"):
-                        scheduler = AsyncIOScheduler()
-                        scheduler.add_job(
-                            run_purge,
-                            CronTrigger(minute="*/30"),  # every 30 min for privacy
-                            args=[request.app.state.db, config],
-                            id="purge_job",
-                            name="Privacy purge",
-                            replace_existing=True,
-                        )
-                        scheduler.start()
-                        request.app.state.scheduler = scheduler
-                        logger.info("Privacy mode: started purge scheduler (every 30 min)")
-                except Exception as e:
-                    logger.warning("Could not start purge scheduler: %s", e)
+            config.purge.privacy_mode = val
+            to_persist["privacy_mode"] = val
             updated.append("privacy_mode")
-            logger.info("Admin updated privacy_mode to: %s", val)
+            # Start purge scheduler if enabling privacy mode
+            if val and not getattr(request.app.state, "scheduler", None):
+                _start_purge_scheduler(request.app.state, config, db)
 
     if "privacy_retention_hours" in body:
         val = body["privacy_retention_hours"]
-        if isinstance(val, (int, float)) and 1 <= val <= 8760:  # 1 hour to 1 year
-            request.app.state.settings_overrides["privacy_retention_hours"] = int(val)
+        if isinstance(val, (int, float)) and 1 <= val <= 8760:
+            config.purge.privacy_retention_hours = int(val)
+            to_persist["privacy_retention_hours"] = int(val)
             updated.append("privacy_retention_hours")
-            logger.info("Admin updated privacy_retention_hours to: %d", int(val))
 
-        logger.info("Admin updated default_audio_format to: %s", fmt)
+    if "max_concurrent" in body:
+        val = body["max_concurrent"]
+        if isinstance(val, int) and 1 <= val <= 10:
+            config.downloads.max_concurrent = val
+            to_persist["max_concurrent"] = val
+            updated.append("max_concurrent")
+            # Update the download service's executor pool size
+            download_service = request.app.state.download_service
+            download_service.update_max_concurrent(val)
+
+    if "session_mode" in body:
+        val = body["session_mode"]
+        if val in ("isolated", "shared", "open"):
+            config.session.mode = val
+            to_persist["session_mode"] = val
+            updated.append("session_mode")
+
+    if "session_timeout_hours" in body:
+        val = body["session_timeout_hours"]
+        if isinstance(val, int) and 1 <= val <= 8760:
+            config.session.timeout_hours = val
+            to_persist["session_timeout_hours"] = val
+            updated.append("session_timeout_hours")
+
+    if "admin_username" in body:
+        val = body["admin_username"]
+        if isinstance(val, str) and len(val) >= 1:
+            config.admin.username = val
+            to_persist["admin_username"] = val
+            updated.append("admin_username")
+
+    if "purge_enabled" in body:
+        val = body["purge_enabled"]
+        if isinstance(val, bool):
+            config.purge.enabled = val
+            to_persist["purge_enabled"] = val
+            updated.append("purge_enabled")
+            if val and not getattr(request.app.state, "scheduler", None):
+                _start_purge_scheduler(request.app.state, config, db)
+
+    if "purge_max_age_hours" in body:
+        val = body["purge_max_age_hours"]
+        if isinstance(val, int) and 1 <= val <= 87600:
+            config.purge.max_age_hours = val
+            to_persist["purge_max_age_hours"] = val
+            updated.append("purge_max_age_hours")
+
+    # --- Persist to DB ---
+    if to_persist:
+        await save_settings(db, to_persist)
+        logger.info("Admin persisted settings: %s", ", ".join(updated))
 
     return {"updated": updated, "status": "ok"}
```
```diff
@@ -292,12 +351,7 @@ async def change_password(
     request: Request,
     _admin: str = Depends(require_admin),
 ) -> dict:
-    """Change admin password (in-memory only — resets on restart).
-
-    Accepts JSON body:
-    - current_password: str (required, must match current password)
-    - new_password: str (required, min 4 chars)
-    """
+    """Change admin password. Persisted in-memory only (set via env var for persistence)."""
     import bcrypt
 
     body = await request.json()
```
```diff
@@ -305,20 +359,17 @@ async def change_password(
     new_pw = body.get("new_password", "")
 
     if not current or not new_pw:
-        from fastapi.responses import JSONResponse
         return JSONResponse(
             status_code=422,
             content={"detail": "current_password and new_password are required"},
         )
 
     if len(new_pw) < 4:
-        from fastapi.responses import JSONResponse
         return JSONResponse(
             status_code=422,
             content={"detail": "New password must be at least 4 characters"},
         )
 
-    # Verify current password
     config = request.app.state.config
     try:
         valid = bcrypt.checkpw(
```
```diff
@@ -329,15 +380,36 @@ async def change_password(
         valid = False
 
     if not valid:
-        from fastapi.responses import JSONResponse
         return JSONResponse(
             status_code=403,
             content={"detail": "Current password is incorrect"},
         )
 
-    # Hash and store new password
     new_hash = bcrypt.hashpw(new_pw.encode("utf-8"), bcrypt.gensalt()).decode("utf-8")
     config.admin.password_hash = new_hash
     logger.info("Admin password changed by user '%s'", _admin)
 
     return {"status": "ok", "message": "Password changed successfully"}
+
+
+def _start_purge_scheduler(state, config, db) -> None:
+    """Start the APScheduler purge job if not already running."""
+    try:
+        from apscheduler.schedulers.asyncio import AsyncIOScheduler
+        from apscheduler.triggers.cron import CronTrigger
+        from app.services.purge import run_purge
+
+        scheduler = AsyncIOScheduler()
+        scheduler.add_job(
+            run_purge,
+            CronTrigger(minute="*/30"),
+            args=[db, config],
+            id="purge_job",
+            name="Scheduled purge",
+            replace_existing=True,
+        )
+        scheduler.start()
+        state.scheduler = scheduler
+        logger.info("Purge scheduler started")
+    except Exception as e:
+        logger.warning("Could not start purge scheduler: %s", e)
```
```diff
@@ -15,27 +15,18 @@ router = APIRouter(tags=["system"])
 async def public_config(request: Request) -> dict:
     """Return the safe subset of application config for the frontend.
 
-    Explicitly constructs the response dict from known-safe fields.
-    Does NOT serialize the full AppConfig and strip fields — that pattern
-    is fragile when new sensitive fields are added later.
+    Reads from the live AppConfig which includes persisted admin settings.
     """
     config = request.app.state.config
 
-    # Runtime overrides (set via admin settings endpoint) take precedence
-    overrides = getattr(request.app.state, "settings_overrides", {})
-
     return {
         "session_mode": config.session.mode,
         "default_theme": config.ui.default_theme,
-        "welcome_message": overrides.get(
-            "welcome_message", config.ui.welcome_message
-        ),
+        "welcome_message": config.ui.welcome_message,
         "purge_enabled": config.purge.enabled,
         "max_concurrent_downloads": config.downloads.max_concurrent,
-        "default_video_format": overrides.get("default_video_format", "auto"),
-        "default_audio_format": overrides.get("default_audio_format", "auto"),
-        "privacy_mode": overrides.get("privacy_mode", config.purge.privacy_mode),
-        "privacy_retention_hours": overrides.get(
-            "privacy_retention_hours", config.purge.privacy_retention_hours
-        ),
+        "default_video_format": getattr(request.app.state, "_default_video_format", "auto"),
+        "default_audio_format": getattr(request.app.state, "_default_audio_format", "auto"),
+        "privacy_mode": config.purge.privacy_mode,
+        "privacy_retention_hours": config.purge.privacy_retention_hours,
     }
```
```diff
@@ -71,6 +71,18 @@ class DownloadService:
         # Per-job throttle state for DB writes (only used inside worker threads)
         self._last_db_percent: dict[str, float] = {}
 
+    def update_max_concurrent(self, max_workers: int) -> None:
+        """Update the thread pool size for concurrent downloads.
+
+        Creates a new executor — existing in-flight downloads continue on the old one.
+        """
+        self._executor = ThreadPoolExecutor(
+            max_workers=max_workers,
+            thread_name_prefix="ytdl",
+        )
+        # Don't shut down the old executor — let in-flight downloads finish
+        logger.info("Updated max concurrent downloads to %d", max_workers)
+
     # ------------------------------------------------------------------
     # Public async interface
     # ------------------------------------------------------------------
```
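The hot-swap in `update_max_concurrent` can be exercised standalone: replace the executor and leave the old one to drain. A minimal sketch under the same pattern (the `Pool` class and its methods are illustrative, not the app's real `DownloadService`):

```python
from concurrent.futures import ThreadPoolExecutor

class Pool:
    """Minimal stand-in demonstrating the executor hot-swap pattern."""

    def __init__(self, max_workers: int) -> None:
        self._executor = ThreadPoolExecutor(max_workers=max_workers, thread_name_prefix="ytdl")

    def update_max_concurrent(self, max_workers: int) -> None:
        # New submissions go to the new pool; the old pool is intentionally
        # not shut down, so in-flight work keeps its threads until done.
        self._executor = ThreadPoolExecutor(max_workers=max_workers, thread_name_prefix="ytdl")

    def submit(self, fn, *args):
        return self._executor.submit(fn, *args)
```

Futures obtained before the swap still resolve, because the old executor object stays alive as long as its pending work holds references to it.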
129  backend/app/services/settings.py  Normal file

@@ -0,0 +1,129 @@
```python
"""Persistent settings service — reads/writes the `config` table in SQLite.

Settings priority (highest wins):
1. Admin writes via UI → persisted in SQLite `config` table
2. Environment variables (MEDIARIP__*)
3. config.yaml
4. Hardcoded defaults

On startup, persisted settings are loaded and applied to the AppConfig.
Admin writes go to DB immediately and update the live config.
"""

from __future__ import annotations

import json
import logging
from datetime import datetime, timezone

import aiosqlite

logger = logging.getLogger("mediarip.settings")

# Keys that can be persisted via admin UI
ADMIN_WRITABLE_KEYS = {
    "welcome_message",
    "default_video_format",
    "default_audio_format",
    "privacy_mode",
    "privacy_retention_hours",
    "max_concurrent",
    "session_mode",
    "session_timeout_hours",
    "admin_username",
    "purge_enabled",
    "purge_max_age_hours",
}


async def load_persisted_settings(db: aiosqlite.Connection) -> dict:
    """Load all persisted settings from the config table."""
    cursor = await db.execute("SELECT key, value FROM config")
    rows = await cursor.fetchall()
    settings = {}
    for row in rows:
        key = row["key"]
        raw = row["value"]
        if key in ADMIN_WRITABLE_KEYS:
            settings[key] = _deserialize(key, raw)
    return settings


async def save_setting(db: aiosqlite.Connection, key: str, value: object) -> None:
    """Persist a single setting to the config table."""
    if key not in ADMIN_WRITABLE_KEYS:
        raise ValueError(f"Setting '{key}' is not admin-writable")
    now = datetime.now(timezone.utc).isoformat()
    serialized = json.dumps(value)
    await db.execute(
        """
        INSERT INTO config (key, value, updated_at)
        VALUES (?, ?, ?)
        ON CONFLICT(key) DO UPDATE SET value = excluded.value, updated_at = excluded.updated_at
        """,
        (key, serialized, now),
    )
    await db.commit()
    logger.info("Persisted setting %s", key)


async def save_settings(db: aiosqlite.Connection, settings: dict) -> list[str]:
    """Persist multiple settings. Returns list of keys saved."""
    saved = []
    for key, value in settings.items():
        if key in ADMIN_WRITABLE_KEYS:
            await save_setting(db, key, value)
            saved.append(key)
    return saved


async def delete_setting(db: aiosqlite.Connection, key: str) -> None:
    """Remove a persisted setting (reverts to default)."""
    await db.execute("DELETE FROM config WHERE key = ?", (key,))
    await db.commit()


def apply_persisted_to_config(config, settings: dict) -> None:
    """Apply persisted settings to the live AppConfig object.

    Only applies values for keys that exist in settings dict.
    Does NOT overwrite values that were explicitly set via env vars.
    """
    if "welcome_message" in settings:
```
|
||||||
|
config.ui.welcome_message = settings["welcome_message"]
|
||||||
|
if "max_concurrent" in settings:
|
||||||
|
config.downloads.max_concurrent = settings["max_concurrent"]
|
||||||
|
if "session_mode" in settings:
|
||||||
|
config.session.mode = settings["session_mode"]
|
||||||
|
if "session_timeout_hours" in settings:
|
||||||
|
config.session.timeout_hours = settings["session_timeout_hours"]
|
||||||
|
if "admin_username" in settings:
|
||||||
|
config.admin.username = settings["admin_username"]
|
||||||
|
if "purge_enabled" in settings:
|
||||||
|
config.purge.enabled = settings["purge_enabled"]
|
||||||
|
if "purge_max_age_hours" in settings:
|
||||||
|
config.purge.max_age_hours = settings["purge_max_age_hours"]
|
||||||
|
if "privacy_mode" in settings:
|
||||||
|
config.purge.privacy_mode = settings["privacy_mode"]
|
||||||
|
if "privacy_retention_hours" in settings:
|
||||||
|
config.purge.privacy_retention_hours = settings["privacy_retention_hours"]
|
||||||
|
|
||||||
|
logger.info("Applied %d persisted settings to config", len(settings))
|
||||||
|
|
||||||
|
|
||||||
|
def _deserialize(key: str, raw: str) -> object:
|
||||||
|
"""Deserialize a config value from its JSON string."""
|
||||||
|
try:
|
||||||
|
value = json.loads(raw)
|
||||||
|
except (json.JSONDecodeError, TypeError):
|
||||||
|
return raw
|
||||||
|
|
||||||
|
# Type coercion for known keys
|
||||||
|
bool_keys = {"privacy_mode", "purge_enabled"}
|
||||||
|
int_keys = {"max_concurrent", "session_timeout_hours", "purge_max_age_hours", "privacy_retention_hours"}
|
||||||
|
|
||||||
|
if key in bool_keys:
|
||||||
|
return bool(value)
|
||||||
|
if key in int_keys:
|
||||||
|
return int(value) if value is not None else value
|
||||||
|
return value
|
||||||
|
|
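The persistence mechanism in settings.py is a single JSON-serialized upsert per key. The round-trip can be sketched with the stdlib `sqlite3` module (the service itself uses `aiosqlite`; the table name and columns match the file, but the helper names and values here are illustrative):

```python
import json
import sqlite3
from datetime import datetime, timezone

# In-memory stand-in for the app's SQLite database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE config (key TEXT PRIMARY KEY, value TEXT, updated_at TEXT)")

def save_setting(key: str, value: object) -> None:
    # Same upsert shape as the service: insert, or overwrite the row on key conflict.
    now = datetime.now(timezone.utc).isoformat()
    db.execute(
        "INSERT INTO config (key, value, updated_at) VALUES (?, ?, ?) "
        "ON CONFLICT(key) DO UPDATE SET value = excluded.value, "
        "updated_at = excluded.updated_at",
        (key, json.dumps(value), now),
    )
    db.commit()

def load_settings() -> dict:
    # Values come back as JSON strings, which is what _deserialize expects.
    return {k: json.loads(v) for k, v in db.execute("SELECT key, value FROM config")}

save_setting("max_concurrent", 3)
save_setting("max_concurrent", 5)  # second write updates in place, no duplicate row
print(load_settings())             # {'max_concurrent': 5}
```

The `ON CONFLICT(key) DO UPDATE` clause is what makes repeated admin saves idempotent: one row per key, latest value wins.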
@@ -26,6 +26,14 @@ const privacyRetentionHours = ref(24)
 const purgeConfirming = ref(false)
 let purgeConfirmTimer: ReturnType<typeof setTimeout> | null = null

+// New persisted settings
+const maxConcurrent = ref(3)
+const sessionMode = ref('isolated')
+const sessionTimeoutHours = ref(72)
+const adminUsername = ref('admin')
+const purgeEnabled = ref(false)
+const purgeMaxAgeHours = ref(168)
+
 // Change password state
 const currentPassword = ref('')
 const newPassword = ref('')
@@ -55,14 +63,25 @@ async function switchTab(tab: typeof activeTab.value) {
   if (tab === 'errors') await store.loadErrorLog()
   if (tab === 'settings') {
     try {
-      const config = await api.getPublicConfig()
-      welcomeMessage.value = config.welcome_message
-      defaultVideoFormat.value = config.default_video_format || 'auto'
-      defaultAudioFormat.value = config.default_audio_format || 'auto'
-      privacyMode.value = config.privacy_mode ?? false
-      privacyRetentionHours.value = config.privacy_retention_hours ?? 24
+      const res = await fetch('/api/admin/settings', {
+        headers: { Authorization: `Basic ${btoa(`${store.username}:${store.password}`)}` },
+      })
+      if (res.ok) {
+        const data = await res.json()
+        welcomeMessage.value = data.welcome_message ?? ''
+        defaultVideoFormat.value = data.default_video_format || 'auto'
+        defaultAudioFormat.value = data.default_audio_format || 'auto'
+        privacyMode.value = data.privacy_mode ?? false
+        privacyRetentionHours.value = data.privacy_retention_hours ?? 24
+        maxConcurrent.value = data.max_concurrent ?? 3
+        sessionMode.value = data.session_mode ?? 'isolated'
+        sessionTimeoutHours.value = data.session_timeout_hours ?? 72
+        adminUsername.value = data.admin_username ?? 'admin'
+        purgeEnabled.value = data.purge_enabled ?? false
+        purgeMaxAgeHours.value = data.purge_max_age_hours ?? 168
+      }
     } catch {
-      // Keep current value
+      // Keep current values
     }
   }
 }
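The `Authorization` header the settings fetch builds is plain HTTP Basic auth: the value is `Basic ` followed by base64 of `username:password`. A Python sketch of the same encoding (the credentials are placeholders, not the app's defaults):

```python
import base64

# HTTP Basic auth: header value is "Basic " + base64("username:password").
# "admin" / "s3cret" are made-up credentials for illustration only.
token = base64.b64encode(b"admin:s3cret").decode()
header = f"Basic {token}"
print(header)  # Basic YWRtaW46czNjcmV0
```

This is the server-side mirror of the frontend's `btoa(`${store.username}:${store.password}`)`.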
@@ -75,6 +94,12 @@ async function saveAllSettings() {
     default_audio_format: defaultAudioFormat.value,
     privacy_mode: privacyMode.value,
     privacy_retention_hours: privacyRetentionHours.value,
+    max_concurrent: maxConcurrent.value,
+    session_mode: sessionMode.value,
+    session_timeout_hours: sessionTimeoutHours.value,
+    admin_username: adminUsername.value,
+    purge_enabled: purgeEnabled.value,
+    purge_max_age_hours: purgeMaxAgeHours.value,
   })
   if (ok) {
     await configStore.loadConfig()
@@ -362,6 +387,83 @@ function formatFilesize(bytes: number | null): string {
             </div>
           </div>

+          <!-- Server settings -->
+          <div class="settings-field">
+            <label>Max Concurrent Downloads</label>
+            <p class="field-hint">How many downloads can run in parallel (1–10).</p>
+            <input
+              type="number"
+              v-model.number="maxConcurrent"
+              min="1"
+              max="10"
+              class="settings-input"
+              style="width: 80px;"
+            />
+          </div>
+
+          <div class="settings-field">
+            <label>Session Mode</label>
+            <p class="field-hint">Controls download queue visibility between browser sessions.</p>
+            <select v-model="sessionMode" class="settings-select">
+              <option value="isolated">Isolated — each browser has its own queue</option>
+              <option value="shared">Shared — all users see all downloads</option>
+              <option value="open">Open — no session tracking</option>
+            </select>
+          </div>
+
+          <div class="settings-field">
+            <label>Session Timeout</label>
+            <p class="field-hint">Hours before an inactive session cookie expires (1–8760).</p>
+            <div class="retention-input-row">
+              <input
+                type="number"
+                v-model.number="sessionTimeoutHours"
+                min="1"
+                max="8760"
+                class="settings-input retention-input"
+              />
+              <span class="retention-unit">hours</span>
+            </div>
+          </div>
+
+          <div class="settings-field">
+            <label>Admin Username</label>
+            <p class="field-hint">Username for admin panel login.</p>
+            <input
+              type="text"
+              v-model="adminUsername"
+              class="settings-input"
+              style="max-width: 200px;"
+              autocomplete="username"
+            />
+          </div>
+
+          <div class="settings-field">
+            <label class="toggle-label">
+              <span>Auto-Purge</span>
+              <label class="toggle-switch">
+                <input type="checkbox" v-model="purgeEnabled" />
+                <span class="toggle-slider"></span>
+              </label>
+            </label>
+            <p class="field-hint">
+              Automatically delete completed/failed downloads on a schedule.
+            </p>
+            <div v-if="purgeEnabled" class="retention-setting">
+              <label class="retention-label">Delete downloads older than</label>
+              <div class="retention-input-row">
+                <input
+                  type="number"
+                  v-model.number="purgeMaxAgeHours"
+                  min="1"
+                  max="87600"
+                  class="settings-input retention-input"
+                />
+                <span class="retention-unit">hours</span>
+              </div>
+            </div>
+          </div>
+
           <div class="settings-actions settings-save-row">
             <button @click="saveAllSettings" :disabled="store.isLoading" class="btn-save">
               {{ store.isLoading ? 'Saving…' : 'Save Settings' }}
@@ -369,7 +471,7 @@ function formatFilesize(bytes: number | null): string {
           <span v-if="settingsSaved" class="save-confirm">✓ Saved</span>
         </div>
         <p class="field-hint">
-          Settings are applied immediately but reset on server restart.
+          Settings are saved to the database and persist across restarts.
         </p>
       </div>

@@ -403,7 +505,7 @@ function formatFilesize(bytes: number | null): string {

       <div class="settings-field">
         <label>Change Password</label>
-        <p class="field-hint">Takes effect immediately but resets on server restart.</p>
+        <p class="field-hint">Takes effect immediately. Set via MEDIARIP__ADMIN__PASSWORD_HASH env var for initial deployment.</p>
         <div class="password-fields">
           <input
             v-model="currentPassword"
@@ -24,7 +24,7 @@ onMounted(async () => {
     <span>yt-dlp {{ ytDlpVersion }}</span>
     <span class="sep">|</span>
     <a
-      href="https://github.com/jlightner/media-rip"
+      href="https://github.com/xpltdco/media-rip"
       target="_blank"
       rel="noopener noreferrer"
     >GitHub</a>