Point your ownify agent at Ollama Cloud, OpenAI, Groq, or any other OpenAI-compatible endpoint. ownify never sees prompts or completions for BYO calls. Spend caps and retention are governed by your provider.
ownify still bills its standard plans (Solo, Pro, etc.) — those cover the agent runtime, memory subsystem, skills, Matrix bot, audit, portal, and storage. BYO replaces only the LLM layer.
The portal dropdown lists four options. All endpoints must be OpenAI-compatible (i.e. accept POST /v1/chat/completions with the standard OpenAI request schema).
| Provider | Endpoint | Get a key |
|---|---|---|
| Ollama Cloud | https://ollama.com/v1 | ollama.com/settings/keys ↗ |
| OpenAI | https://api.openai.com/v1 | platform.openai.com/api-keys ↗ |
| Groq | https://api.groq.com/openai/v1 | console.groq.com/keys ↗ |
| Custom | any OpenAI-compatible URL | OpenRouter, Together, self-hosted Ollama, TGI… |
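Every provider in the table speaks the same chat-completions schema, so one request shape covers all of them. A minimal sketch in Python using requests, assuming a Groq key in GROQ_API_KEY and an illustrative model id; swap in whichever base URL and model your key actually covers:

```python
import os
import requests

# Any provider from the table works here; only the base URL, key, and model id change.
BASE_URL = "https://api.groq.com/openai/v1"   # or https://api.openai.com/v1, https://ollama.com/v1, ...
API_KEY = os.environ["GROQ_API_KEY"]          # the key you paste into the portal
MODEL = "llama-3.1-8b-instant"                # illustrative model id; use one your key can access

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

The only values that change between providers are the base URL, the key, and the model id, which map directly onto the fields the portal collects.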
Anthropic’s native API isn’t directly supported in v1 (microclaw expects OpenAI-shaped requests). Use Anthropic via OpenRouter as a Custom endpoint for now.
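As a quick sanity check that an OpenRouter key answers Anthropic models through an OpenAI-shaped interface before you paste it into the Custom fields, a hedged sketch (the model slug is illustrative; confirm the exact id in OpenRouter's catalog):

```python
import os
from openai import OpenAI  # pip install openai

# OpenRouter speaks the OpenAI schema, so the standard client doubles as a Custom-endpoint test.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # the URL you would paste into the Custom field
    api_key=os.environ["OPENROUTER_API_KEY"],
)
reply = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",       # illustrative OpenRouter slug for an Anthropic model
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)
```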
In the portal: Dashboard → your agent → LLM. Pick a provider from the dropdown, paste your API key, set the default model id, and save. The agent pod restarts in ~15 seconds and begins routing every chat call to your provider.
For Ollama Cloud, the default model id is any model your subscription has access to, e.g. llama3.3:70b or qwen2.5:32b. For OpenAI it’s the model id you’d pass in any chat-completions request, e.g. gpt-4o-mini.
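If you're not sure which ids your key can reach, most OpenAI-compatible providers also expose the standard model-listing endpoint. A small sketch against Ollama Cloud, assuming the provider implements GET /v1/models (same pattern for the others):

```python
import os
import requests

# Assumes the provider implements the standard OpenAI-style GET /v1/models listing.
BASE_URL = "https://ollama.com/v1"
API_KEY = os.environ["OLLAMA_API_KEY"]   # your Ollama Cloud key

resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])                   # copy one of these ids into the portal's model field
```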
To revert to ownify-managed (Fireworks via klaw-router), pick that option from the dropdown and save. The four BYO secret keys are wiped and the pod restarts.
With BYO active, the agent pod sends every LLM request directly to your provider over HTTPS. klaw-router and LiteLLM are bypassed entirely.
| ownify still records | ownify does NOT see |
|---|---|
| Audit metadata (which channel triggered a call, when, the agent slug) | Prompt content |
| Skill execution + memory ACL events | Completion content |
| Pod health + restarts | Token counts (your provider’s dashboard has these) |
| | Spend (ownify can’t enforce caps for BYO providers) |
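Because nothing sits between the pod and your provider, token counts and rate-limit signals come back only in the provider's own responses and dashboard. A minimal sketch that reads them from an OpenAI chat-completions response (the x-ratelimit-* header names are OpenAI's; other providers use different ones):

```python
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()
# With klaw-router out of the path, usage and limits are visible only here and in the provider dashboard.
print(resp.headers.get("x-ratelimit-remaining-requests"))
print(resp.headers.get("x-ratelimit-remaining-tokens"))
print(resp.json()["usage"])   # prompt/completion token counts, straight from the provider
```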
Langfuse traces are skipped for BYO calls, so prompts and completions never land in ownify’s observability stack.
With BYO, your prompts and completions are governed by your provider’s privacy policy — review ollama.com/privacy, openai.com/policies/privacy-policy, or your custom endpoint’s terms before sending sensitive data. ownify stores audit metadata only.