Getting started
Five minutes from clone to a translated response. The default setup runs
against SQLite and the passthrough translator, so you can
verify routing before wiring up DeepL or an LLM translator.
Run the stack
The API, the admin UI, and a SQLite volume come up with one command. The
compose file lives under deploy/ in the repo.
# clone
git clone https://github.com/DeeJayTC/adaptiveapi.git
cd adaptiveapi/deploy
# configure
cp .env.example .env
# run
docker compose up --build
You should now have:
- API on http://localhost:8080, health at /healthz.
- Admin UI on http://localhost:8000.
- A SQLite volume mounted at /data inside the API container.
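If you want to script the "is it up yet?" check instead of eyeballing logs, a small poll against the health endpoint works. A minimal sketch using only the Python standard library; the /healthz path comes from the list above, and the retry interval is an arbitrary choice:

```python
import time
import urllib.error
import urllib.request


def wait_for_healthy(base_url: str, timeout: float = 30.0) -> bool:
    """Poll GET <base_url>/healthz until it returns 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    url = base_url.rstrip("/") + "/healthz"
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # container still starting; retry shortly
        time.sleep(1)
    return False


if __name__ == "__main__":
    print("healthy" if wait_for_healthy("http://localhost:8080") else "not healthy")
```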
To also bring up the bundled chat demo (a tiny .NET backend plus a Vue chat UI you can point an OpenAI key at), add the demo profile.
docker compose --profile demo up --build
The demo UI lands on http://localhost:8100.
Create your first route
A route is a token-gated entry point that maps an inbound request
to an upstream service plus a translation policy. Open the admin UI at
http://localhost:8000, click Routes, and
create one of:
- OpenAI. Speaks chat/completions and the Responses API.
- Anthropic. Speaks messages.
- MCP. Proxies a remote MCP server.
- Generic. Declarative adapter for any HTTP+JSON API. See Generic JSON API.
You will get back a route token like rt_yourtenant_xxxxx. The
token is the only secret you need on the client side. It is hashed with
Argon2id before storage and never logged.
Want a stable token for local development? Set
Dev__FixedRouteToken=rt_dev_LOCALDEMO in deploy/.env and the API will seed that token on first boot.
Send your first request
With the OpenAI Python SDK and the route token from above:
from openai import OpenAI
client = OpenAI(
api_key="sk-...", # your real OpenAI key
base_url="http://localhost:8080/v1/rt_dev_LOCALDEMO",
)
resp = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user",
"content": "Was ist eine Hashmap?"}],
extra_headers={"X-AdaptiveApi-Target-Lang": "de"},
)
print(resp.choices[0].message.content)
With the default passthrough translator, you will get the
English answer back unchanged. That confirms routing, auth, and streaming
all work. Wire up a real translator next.
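To see why the passthrough result is still a useful smoke test, here is roughly what a translator plug-in boils down to. This is a hypothetical interface for illustration; the actual plug-in contract lives in the repo:

```python
from typing import Protocol


class Translator(Protocol):
    """Minimal contract: take text, return it in the target language."""
    def translate(self, text: str, target_lang: str) -> str: ...


class PassthroughTranslator:
    """Default translator: returns text unchanged, so a request exercises
    routing, auth, and streaming without any translation backend."""
    def translate(self, text: str, target_lang: str) -> str:
        return text
```

Swapping in DeepL or an LLM translator changes only this one seam; everything upstream of it (tokens, routing, streaming) is already proven by the passthrough run.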
Wire up a translator
Edit deploy/.env. To use DeepL:
LLMTRANS_DEFAULT_TRANSLATOR=deepl
DEEPL_API_KEY="your-deepl-key"
DEEPL_BASE_URL=https://api.deepl.com/ # or https://api-free.deepl.com/
To use an LLM as the translator instead:
LLMTRANS_DEFAULT_TRANSLATOR=llm
LLM_TRANSLATOR_API_KEY="sk-..."
LLM_TRANSLATOR_BASE_URL=https://api.openai.com/
LLM_TRANSLATOR_MODEL=gpt-4o-mini
Restart the API container. The same request now arrives in German.
Where to next
- Routes and tokens covers per-route policy and the request-time headers.
- LLM providers, MCP servers, and Generic JSON API cover each integration shape.
- Translation pipeline explains streaming, tool calls, glossaries, and PII redaction.