The DEBtech AI Hub is not a standalone add-on and provides no visible functionality on its own. It is a technical foundation whose sole purpose is to supply our DEBtech AI add-ons with a unified, secure, and observable API layer.
In other words: Without at least one of the supported DEBtech add-ons, the AI Hub has no purpose. Installing it only makes sense if you are using (or plan to use) one of the following add-ons.
Add-ons this hub was built for

- DEBtech Portal AI Suite Pro — AI news, AskBox, interest categories, cover images (hard dependency)
- DEBtech Advertising Suite Pro — AI-optimized product descriptions (next release)
- Future DEBtech AI add-ons — all of them route through this central hub

What does the AI Hub do?

The AI Hub provides a single unified interface to eight different AI providers. Instead of every add-on bringing its own API keys, retry logic, and cost tracking, the hub bundles it all centrally — with budget enforcement, structured logging, and an admin dashboard.

Supported providers:

Cloud: OpenAI · OpenRouter · DeepSeek
Local: Ollama · LocalAI · vLLM · LM Studio · SD.cpp

Core Features
- Multi-provider management — All eight providers can be used simultaneously; each add-on picks its own
- Smart routing — Automatic retry on HTTP 429/500/502/503/504 with exponential backoff
- Fallback provider chain — If OpenAI goes down, OpenRouter or DeepSeek takes over automatically
- Budget tracking & enforcement — Monthly budget per provider; calls are blocked when exceeded
- Streaming support (SSE) — Live-typing output for long chat responses
- API log with filter & search — Every API call with tokens, cost, duration, and prompt excerpt
- Log export — CSV or JSON for analysis in Excel/BI tools
- Reasoning model detection — o-series, GPT-5+, DeepSeek-R1 get longer timeouts and proper parameters automatically
- Setup assistant — Detects OS, RAM, GPU, Docker, Python and recommends suitable models for your hardware
- Zero-load dashboard — Stats are pre-computed every 10 minutes via cron; dashboard loads with 0 SQL queries
- Model test tool — Test chat and image generation straight from the admin panel, including custom prompts
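The smart-routing and fallback behaviour above (retry on HTTP 429/500/502/503/504 with exponential backoff, then hand over to the next provider in the chain) can be sketched as follows. The names, delay values, and error type are illustrative assumptions for the sketch, not the hub's actual implementation:

```python
import random
import time

# Statuses the hub retries on (per the feature list above).
RETRYABLE = {429, 500, 502, 503, 504}

class ProviderError(Exception):
    """Simplified stand-in for a failed provider HTTP call."""
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def call_with_retry(send, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Retry a zero-argument call on retryable statuses, backing off exponentially."""
    for attempt in range(max_attempts):
        try:
            return send()
        except ProviderError as err:
            if err.status not in RETRYABLE or attempt == max_attempts - 1:
                raise
            # 0.5 s, 1 s, 2 s, ... plus a little jitter to avoid thundering herds
            sleep(base_delay * 2 ** attempt * (1 + 0.1 * random.random()))

def call_with_fallback(providers, prompt, sleep=time.sleep):
    """Walk the provider chain; move on when one provider exhausts its retries."""
    last_error = None
    for name, send in providers:
        try:
            return name, call_with_retry(lambda: send(prompt), sleep=sleep)
        except ProviderError as err:
            last_error = err  # chain to the next provider
    raise last_error
```

A call like `call_with_fallback([("openai", ask_openai), ("openrouter", ask_openrouter)], "Hello")` (with hypothetical `ask_*` request functions) would return the name and answer of the first provider that succeeds.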
Security

- Admin permission system — Per-admin "Manage AI Hub" permission control
- SSRF protection — Provider URLs validated against an allowlist (AWS/GCE metadata endpoints blocked)
- CSP-compliant JavaScript — No inline onclick, runs under strict Content Security Policies
- API key redaction — Bearer tokens and keys stripped from all error messages and logs
- CSV injection protection — Exported log cells hardened against Excel formula injection
- XSS hardening — All user/provider strings escaped; image URLs only via DOM API (no innerHTML)
- Input validation — Model tester with provider and model allowlist, prompt length cap
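Two of the hardening measures above, API key redaction and CSV injection protection, are simple enough to illustrate. The regular expression and function names below are assumptions made for this sketch, not the add-on's actual code:

```python
import re

# Rough pattern for bearer tokens and common "sk-..." key shapes.
# Assumption: the hub's real patterns are not published.
_KEY_PATTERN = re.compile(r"(Bearer\s+)[A-Za-z0-9._\-]+|sk-[A-Za-z0-9\-]{8,}")

def redact_secrets(message: str) -> str:
    """Strip bearer tokens and API keys from error messages and log lines."""
    return _KEY_PATTERN.sub(lambda m: (m.group(1) or "") + "[REDACTED]", message)

def harden_csv_cell(value: str) -> str:
    """Neutralize Excel formula injection: prefix =, +, -, @ with a quote."""
    if value and value[0] in "=+-@":
        return "'" + value
    return value
```

The quote prefix makes spreadsheet applications treat an exported cell such as `=1+2` as plain text instead of evaluating it as a formula.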
Admin Dashboard

- KPI overview: Requests today/week/total · Monthly cost · Avg response time · 24h error rate
- Provider status: Live check for each provider with latency
- Usage per add-on: Which component consumes how many tokens
- Budget monitor: Percentage bar per provider, warning at 80%
- Recent API calls: The last 10 calls at a glance
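The budget monitor's 80% warning and the hard block on overrun (see "Budget tracking & enforcement" under Core Features) reduce to a simple threshold check per provider. The function name and state labels here are illustrative, not taken from the add-on:

```python
def budget_status(spent: float, budget: float, warn_at: float = 0.80):
    """Return (fraction_used, state) for one provider's monthly budget."""
    if budget <= 0:
        return 0.0, "unconfigured"
    fraction = spent / budget
    if fraction >= 1.0:
        return fraction, "blocked"   # further calls are blocked when exceeded
    if fraction >= warn_at:
        return fraction, "warning"   # dashboard warns at 80%
    return fraction, "ok"
```

For example, a provider with a 100 EUR monthly budget and 85 EUR spent lands in the "warning" state, while 120 EUR spent would put it in "blocked".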