Model providers let you point supported agents at a custom API base URL — a self-hosted model server, an alternative cloud provider, or an OpenAI-compatible endpoint — and supply the API key for it. You can create multiple providers and assign each one to specific agents. Navigate to Settings → Model Providers to manage providers.

What a model provider configures

Each provider stores three values:
  • Name — a label you choose to identify the provider in the UI
  • API base URL — the endpoint your agent sends requests to (e.g., http://localhost:11434/v1 for Ollama)
  • API key — the credential for that endpoint, stored securely in your machine’s OS keyring
Providers also have an agent types list — the agents that will use this provider when selected in their individual settings.
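The three values above map directly onto an OpenAI-compatible HTTP request. A minimal sketch of how a base URL and key combine into a request (illustrative values only; Codeg assembles the real request internally):

```python
# Sketch: how a provider's API base URL and API key form an
# OpenAI-compatible request. Values are illustrative, not Codeg internals.
base_url = "http://localhost:11434/v1"  # API base URL from the provider
api_key = "ollama"                      # API key, read from the OS keyring

# Standard OpenAI-compatible chat completions path under the base URL
endpoint = f"{base_url}/chat/completions"
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
```

This is why the base URL must include the version prefix (e.g. `/v1`): the agent appends only the resource path.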

Adding a provider

1. Open Model Providers

Go to Settings → Model Providers and click Add Provider.
2. Enter provider details

Fill in the following fields:
  • Name — a descriptive label, e.g., “Local Ollama” or “Azure OpenAI”
  • API base URL — the full base URL of the API endpoint
  • API key — your API key or bearer token for this endpoint
3. Select agent types

Toggle the agents that should use this provider. You can assign a provider to one or more of: Claude Code, Codex CLI, Gemini CLI, OpenClaw, OpenCode, Cline.
4. Create

Click Create. The provider appears in the list and is immediately available to assign to agents.

Editing a provider

Click the pencil icon next to a provider to open the edit dialog. You can update the name, API base URL, API key, and the agent types list. Changes take effect the next time an agent connects.

Assigning a provider to an agent

After creating a provider, go to Settings → Agents, select the agent, and choose the provider from the Model Provider dropdown. The agent will then route its API calls through the configured URL and use the stored API key.

Removing a provider

Click the trash icon next to a provider to delete it. If any agents are currently assigned to that provider, Codeg blocks the deletion and tells you which agents are affected. Reassign those agents to a different provider first, then delete.

Use cases

Point agents at a local inference server running Ollama or LM Studio. Use the server’s OpenAI-compatible endpoint as the API base URL and supply a placeholder API key — local servers typically ignore the key, but the field must be non-empty.
API base URL: http://localhost:11434/v1
API key:      (any non-empty value, e.g. "ollama")
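Before assigning the provider, you can sanity-check that the server actually exposes an OpenAI-compatible API. A stdlib-only sketch that probes the model-listing endpoint (`/models` is part of the OpenAI-compatible surface that Ollama and LM Studio implement):

```python
import json
import urllib.error
import urllib.request


def endpoint_is_reachable(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if base_url answers an OpenAI-compatible model listing."""
    try:
        with urllib.request.urlopen(f"{base_url}/models", timeout=timeout) as resp:
            # OpenAI-style list responses wrap results in a "data" array
            return "data" in json.load(resp)
    except (urllib.error.URLError, OSError, ValueError):
        return False


# Example: endpoint_is_reachable("http://localhost:11434/v1")
```

If this returns False, fix the server or the base URL before wiring the provider into an agent.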
API keys are stored in your OS keyring and are never written to Codeg’s configuration files or database. Keys are retrieved from the keyring at connection time.
Provider configuration is local to your Codeg installation. In server or Docker deployment mode, keys stored in the keyring on the server host are not accessible from other machines.