Gateway Developer Portal is now in open beta — create API keys and explore the playground. Open Portal →
Arkonova Platform

One key.
Any model.

A single API endpoint for GPT-4o, Claude, Gemini, and any custom model. Policy-based routing, per-key quotas, and real-time telemetry — all in one control plane.

OpenAI
Anthropic
Google
Custom
Providers: Planned (multiple AI providers in development)
API Format: OpenAI (drop-in compatible, planned)
Latency: TBD (gateway overhead)
Streaming: Planned (SSE token streaming in development)

Everything your AI stack needs

Stop managing multiple provider SDKs, secrets, and billing dashboards. One integration point for your entire AI infrastructure.

Unified Access

One request format for GPT-4o, Claude, Gemini, and custom endpoints. Switch models with a single parameter change.

Smart Routing

Policy-based routing by latency, cost, or fallback rules. Auto-failover to backup models when a provider is unavailable.
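The policy logic could look like the following sketch. The provider table, field names, and numbers here are illustrative assumptions, not the Gateway's actual routing API:

```python
# Hypothetical sketch of policy-based routing: pick the cheapest healthy
# provider, or the lowest-latency one, depending on the active policy.
# All names and numbers are illustrative, not Arkonova's real API.

PROVIDERS = {
    "openai":    {"cost_per_1k": 0.0050, "latency_ms": 320, "healthy": True},
    "anthropic": {"cost_per_1k": 0.0030, "latency_ms": 410, "healthy": True},
    "google":    {"cost_per_1k": 0.0020, "latency_ms": 500, "healthy": False},
}

def pick_provider(policy: str) -> str:
    """Return the provider chosen by a 'cost' or 'latency' policy."""
    healthy = {name: p for name, p in PROVIDERS.items() if p["healthy"]}
    key = "cost_per_1k" if policy == "cost" else "latency_ms"
    return min(healthy, key=lambda name: healthy[name][key])
```

Unhealthy providers drop out of consideration before the policy is applied, which is what makes auto-failover fall out of the same selection step.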

Usage & Quotas

Per-key quotas, project-level rate limits, and real-time token telemetry. Know exactly what every team spends.
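A per-key quota check reduces to something like this sketch (the key names and caps are made-up placeholders, not real account data):

```python
# Illustrative per-key token quota enforcement, not the Gateway's
# real implementation. Each key carries a token cap; a request that
# would exceed the cap is rejected before it reaches a provider.
from collections import defaultdict

TOKEN_CAPS = {"ark-team-a": 1_000_000, "ark-team-b": 50_000}
usage = defaultdict(int)  # tokens consumed per key so far

def charge(key: str, tokens: int) -> bool:
    """Record usage; return False if the request would exceed the cap."""
    if usage[key] + tokens > TOKEN_CAPS.get(key, 0):
        return False
    usage[key] += tokens
    return True
```

Unknown keys get a cap of zero, so they are rejected by the same code path rather than a separate check.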

Secure by Default

Scoped API keys keep vendor credentials off the client. Rotate upstream keys without touching your app code.
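The indirection works roughly like this sketch: client-facing ark- keys bind to vendor secrets server-side, so rotating a vendor key never touches app code. The dictionaries and key strings below are assumptions for illustration:

```python
# Sketch of credential indirection. Vendor secrets live only on the
# gateway; clients hold ark- keys that map to a provider binding.
UPSTREAM_KEYS = {"openai": "sk-old"}   # vendor secrets, server-side only
CLIENT_KEYS = {"ark-app-1": "openai"}  # ark- key -> provider binding

def upstream_credential(ark_key: str) -> str:
    """Resolve the vendor secret a client key is allowed to use."""
    return UPSTREAM_KEYS[CLIENT_KEYS[ark_key]]

# Rotating the vendor key changes nothing on the client side:
UPSTREAM_KEYS["openai"] = "sk-new"
```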

Supported Providers

Connect any AI provider through a single, stable interface.

OpenAI

GPT-4o, GPT-4 Turbo, o1, o3, embeddings, DALL·E

Anthropic

Claude 4 Opus, Sonnet, Haiku, Claude 3.5 family

Google

Gemini 2.0, Gemini Flash, Gemma via Vertex AI

Custom

Any OpenAI-compatible endpoint — Ollama, Together AI, self-hosted

One API, Any Model

Point your existing OpenAI SDK at the Gateway URL. Change the model name — everything else stays the same.

gateway_example.py
import openai

client = openai.OpenAI(
    base_url="https://arkonova.network/gateway/v1",
    api_key="ark-..."          # your Arkonova key
)

# Switch providers by changing only the model name
response = client.chat.completions.create(
    model="claude-sonnet-4-6",  # or "gpt-4o"
    messages=[{"role": "user", "content": "Hello"}],
    stream=True
)
Streaming Support

Full SSE streaming for all providers. Token-by-token output with a consistent event format across every model.
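With a consistent event format, a consumer only ever needs one accumulation loop regardless of provider. The event shape below ({"delta": str}) is an assumption for illustration, not the Gateway's documented schema:

```python
# Illustrative consumer for a uniform streaming event format: each
# event carries a text delta, and concatenating the deltas rebuilds
# the full reply. The event shape is assumed, not documented.

def collect(events):
    """Join text deltas from a stream of {'delta': str} events."""
    parts = []
    for event in events:
        if event.get("delta"):  # final event may carry no delta
            parts.append(event["delta"])
    return "".join(parts)

stream = [{"delta": "Hel"}, {"delta": "lo"}, {"delta": None}]
```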

Fallback Chains

Define a priority list of models. If the primary is down or rate-limited, the gateway retries the next one automatically.
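Client-side, the same idea can be sketched in a few lines. The function names here are illustrative, not part of the Gateway API:

```python
# Sketch of a fallback chain: try models in priority order, moving to
# the next one when a call raises (provider down or rate-limited).

def complete_with_fallback(chain, call):
    """Try call(model) for each model in chain; return the first success."""
    last_error = None
    for model in chain:
        try:
            return model, call(model)
        except Exception as exc:
            last_error = exc
    raise RuntimeError("all models in the chain failed") from last_error
```

On the gateway this happens server-side, so the client sees a single request succeed even when the primary model was unavailable.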

Scoped API Keys

Issue per-app or per-team keys with model allowlists, spend caps, and IP restrictions — without exposing provider credentials.
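A scope check for such a key amounts to a few conjunctive conditions. The field names and values in this sketch are assumptions, not the real key schema:

```python
# Illustrative authorization check for a scoped key: model allowlist,
# spend cap, and IP restriction must all pass.

KEY_SCOPES = {
    "ark-app-1": {
        "models": {"gpt-4o", "claude-sonnet-4-6"},
        "spend_cap_usd": 100.0,
        "allowed_ips": {"10.0.0.5"},
    }
}

def authorize(key, model, spent_usd, ip):
    """Return True only if every scope restriction is satisfied."""
    scope = KEY_SCOPES.get(key)
    return bool(
        scope
        and model in scope["models"]
        and spent_usd < scope["spend_cap_usd"]
        and ip in scope["allowed_ips"]
    )
```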

How It Works

Get an API Key

Issue an ark- key from the dashboard with your chosen model scope and quota limits.

Point Your SDK

Set base_url to the Gateway endpoint. Works with OpenAI SDK, LangChain, LiteLLM — no rewrites.

Request is Routed

The gateway resolves the model name to a provider, applies routing policy, and forwards with the appropriate credentials.
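The resolution step can be pictured as a prefix lookup; the real routing table is server-side and configurable, so the mapping below is only an assumption:

```python
# Sketch of model-name resolution by prefix. Unknown names fall through
# to "custom", i.e. any OpenAI-compatible endpoint.

PREFIX_TO_PROVIDER = [
    ("gpt-", "openai"), ("o1", "openai"), ("o3", "openai"),
    ("claude-", "anthropic"),
    ("gemini-", "google"), ("gemma", "google"),
]

def resolve_provider(model: str) -> str:
    """Map a model name to a provider by its prefix."""
    for prefix, provider in PREFIX_TO_PROVIDER:
        if model.startswith(prefix):
            return provider
    return "custom"
```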

Track Everything

Per-request latency, token counts, cost estimates, and error rates captured in real time. Export via webhooks or dashboard.
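A per-request telemetry record boils down to tokens, latency, and a price lookup. The prices in this sketch are made-up placeholders, not real provider rates:

```python
# Illustrative telemetry record with a cost estimate derived from
# per-1K-token prices. Prices are placeholders, not real rates.

PRICE_PER_1K = {"gpt-4o": {"in": 0.0025, "out": 0.0100}}

def record(model, prompt_tokens, completion_tokens, latency_ms):
    """Build one telemetry entry for a completed request."""
    price = PRICE_PER_1K[model]
    cost = (prompt_tokens * price["in"]
            + completion_tokens * price["out"]) / 1000
    return {
        "model": model,
        "tokens": prompt_tokens + completion_tokens,
        "latency_ms": latency_ms,
        "cost_usd": round(cost, 6),
    }
```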

Under the Hood

API Protocol: OpenAI-compatible
Streaming: SSE / chunked
Auth: Bearer token
Retry Policy: 3 attempts
Telemetry: Real-time
Key prefix: ark-*
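A three-attempt retry policy like the one listed above can be sketched as follows; the backoff values and function names are illustrative assumptions:

```python
# Sketch of a bounded retry loop with exponential backoff: re-run the
# call up to `attempts` times, re-raising the last error on exhaustion.
import time

def with_retries(call, attempts=3, base_delay=0.0):
    """Invoke call(); retry on failure up to `attempts` total tries."""
    for attempt in range(1, attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Retries re-run the same request against the same target; fallback chains (above) are the separate mechanism that switches to a different model.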

Run AI through one control plane

Start with OpenAI + Anthropic and expand to any provider without changing your product architecture.

Domain migration

Primary domain: arkonova.network

We are migrating away from arkonova.ru. Please update bookmarks and links.

Open primary domain