Every AI provider.
One place to configure,
route, and trust.
Valymux routes your requests across providers, tells you exactly what each model supports, and keeps your credentials isolated from your application code.
Chaos under the hood.
Every provider has its own formats, its own auth, its own quirks. Your team writes glue code instead of building product.
What if every provider just worked the same way?
One interface. Route, translate, and observe — across all of them.
One stable layer.
OpenAI: /v1/chat/completions · messages[]
Anthropic: /v1/messages · content[]
Gemini: /v1/generateContent · parts[]
        ↓
Valymux: /v1/chat/completions · messages[]

One integration. One interface. One mental model.
Route. Translate. Observe.
Smart Routing
Requests routed to the best available provider based on config, load, and fallback rules.
Universal Translation
One API format across all providers. No more adapting to each provider's quirks.
Secure Credentials
Provider keys never leave the gateway. Virtual API keys for your team, automatic rotation.
Full Observability
Every request traced. Latency, tokens, cost — unified across all providers in real time.
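Conceptually, smart routing means trying the configured primary and walking the fallback chain on failure. A minimal sketch of that idea in JavaScript (the provider objects and their uniform `call()` method are illustrative assumptions, not Valymux's actual internals):

```javascript
// Illustrative fallback routing: try each provider in order until one
// succeeds. The provider shape ({ id, call }) is hypothetical.
async function routeWithFallback(providers, request) {
  const errors = [];
  for (const provider of providers) {
    try {
      // In this sketch every provider exposes the same call(request).
      return await provider.call(request);
    } catch (err) {
      // Record the failure and fall through to the next provider.
      errors.push({ provider: provider.id, error: String(err) });
    }
  }
  throw new Error("All providers failed: " + JSON.stringify(errors));
}
```

The point of the sketch is that once translation makes every provider answer the same call shape, failover reduces to a loop; your application never sees which provider actually served the request.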
Built without compromise.
Security
Provider credentials encrypted at rest. Virtual keys displayed once on creation, never stored in recoverable form. AGPL codebase — every line auditable. Self-hostable by design.
Speed
Rust-native engine with no garbage collection overhead. Concurrent streaming across providers. Designed to never be your bottleneck.
Clarity
Every model cataloged with its exact capabilities: streaming, thinking, tools, temperature range, context window. Configure once. Copy to code. No docs tab.
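A capability catalog lets the gateway reject an incompatible request up front instead of surfacing a cryptic provider error. A sketch of what such a lookup could look like (the entry fields and the `contextWindow` value are illustrative assumptions, not Valymux's actual schema):

```javascript
// Hypothetical catalog entry; field names and values are illustrative.
const catalog = {
  "openai/gpt-5.4": {
    streaming: true,
    tools: true,
    thinking: false,
    temperature: { min: 0, max: 2 },
    contextWindow: 128000
  }
};

// Validate a request against the model's cataloged capabilities
// before it ever reaches the provider.
function checkRequest(model, req) {
  const caps = catalog[model];
  if (!caps) return "unknown model";
  if (req.stream && !caps.streaming) return "streaming unsupported";
  if (req.temperature !== undefined &&
      (req.temperature < caps.temperature.min ||
       req.temperature > caps.temperature.max)) {
    return "temperature out of range";
  }
  return null; // request is compatible
}
```

With this in place, "does this model support tools at temperature 1.5?" is answered from the catalog, not from the provider's documentation tab.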
Less glue code. More product.
// Different client for each provider
const openai = new OpenAI({ apiKey: KEY_1 })
const anthropic = new Anthropic({ apiKey: KEY_2 })
const gemini = new Gemini({ apiKey: KEY_3 })
// Different formats everywhere
if (provider === "openai") {
  res = await openai.chat.completions.create(...)
} else if (provider === "anthropic") {
  res = await anthropic.messages.create(...)
} else if (provider === "gemini") {
  res = await gemini.generateContent(...)
}
// Different streaming, tools, errors...
// 200+ lines of glue code per provider

One client. Any provider.
const res = await fetch("http://valymux/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": "Bearer vk_live_***",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    model: "primary-model",
    messages: [{ role: "user", content: "..." }]
  })
})
// That's it. Routing, failover, auth,
// streaming, tracing — all handled.

Configuration First
Swap LLMs in YAML, not in production code.
# gateway-config.yaml
providers:
  - id: primary-model
    target: openai/gpt-5.4
    fallback: anthropic/claude-sonnet-4-6
security:
  virtual_keys: true
  pii_filter: enabled
  budget_cap: $500/mo

Built in the open. Shaped by developers.
Valymux is open source from day one. We believe the infrastructure you trust with your API keys should be transparent, auditable, and under your control.
Transparent
Every line of code is public. Audit the gateway yourself.
Community
Feature requests, bug reports, and PRs welcome from day one.
Honest
We're early. We share what works, what doesn't, and what's next.
Secure
Rust-native, with no dynamic code loading. Audit the binary. Host it yourself.
Stop managing providers.
Start building product.
MVP launching Q2 2026. Join early — your feedback shapes what gets built next.
git clone https://github.com/CLoaKY233/Valymux.git

OSS • Rust-Native • Self-Hostable