---
title: "config.yaml Reference"
description: "Comprehensive guide to the config.yaml format used by Continue.dev for building custom coding agents. Learn how to define models, context providers, rules, prompts, and more using YAML configuration."
---

## Introduction

Continue Agents are defined using the `config.yaml` specification.

**Agents** are composed of models, rules, and tools (MCP servers).

<Columns cols={2}>
  <Card title="Configuring Models, Rules, and Tools" icon="cube" href="/guides/configuring-models-rules-tools">
    Learn how to work with Continue's configuration system, including using hub models, rules, and tools, creating local configurations, and organizing your setup.
  </Card>

  <Card title="Understanding Configs" icon="robot" href="/guides/understanding-configs">
    Learn how to build and configure configs, understand their capabilities, and customize them for your development workflow.
  </Card>
</Columns>

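As a quick orientation, a minimal agent definition that combines these pieces might look like the sketch below. The model, rule, and MCP server shown are illustrative; every property is covered in detail in the sections that follow.

```yaml title="config.yaml"
name: My Agent
version: 1.0.0
schema: v1
models:
  - name: GPT-4o
    provider: openai
    model: gpt-4o
rules:
  - Give concise responses
mcpServers:
  - name: My MCP Server
    command: uvx
    args:
      - mcp-server-sqlite
      - --db-path
      - ./test.db
```
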
## Properties

Below are details for each property that can be set in `config.yaml`.

**All properties at all levels are optional unless explicitly marked as required.**

The top-level properties in the `config.yaml` configuration file are:

- [`name`](#name) (**required**)
- [`version`](#version) (**required**)
- [`schema`](#schema) (**required**)
- [`models`](#models)
- [`context`](#context)
- [`rules`](#rules)
- [`prompts`](#prompts)
- [`docs`](#docs)
- [`mcpServers`](#mcpservers)
- [`data`](#data)

---

### `name`

The `name` property specifies the name of your project or configuration.

```yaml title="config.yaml"
name: My Config
version: 1.0.0
schema: v1
```

---

### `version`

The `version` property specifies the version of your project or configuration.

### `schema`

The `schema` property specifies the schema version used for the `config.yaml`, e.g. `v1`.

---

### `models`

The `models` section defines the language models used in your configuration. Models are used for functionalities such as chat, editing, and summarizing.

**Properties:**

- `name` (**required**): A unique name to identify the model within your configuration.
- `provider` (**required**): The provider of the model (e.g., `openai`, `ollama`).
- `model` (**required**): The specific model name (e.g., `gpt-4`, `starcoder`).
- `apiBase`: Can be used to override the default API base that is specified per model.
- `roles`: An array specifying the roles this model can fulfill, such as `chat`, `autocomplete`, `embed`, `rerank`, `edit`, `apply`, `summarize`. The default value is `[chat, edit, apply, summarize]`. Note that the `summarize` role is not currently used.
- `capabilities`: Array of strings denoting model capabilities, which will overwrite Continue's autodetection based on provider and model. See the [Model Capabilities guide](/customize/deep-dives/model-capabilities) for detailed information. Supported capabilities include:
  - `tool_use`: Enables function/tool calling support (required for Agent mode)
  - `image_input`: Enables image upload and processing support

  Continue automatically detects these capabilities for most models, but you can override this when using custom deployments or if autodetection isn't working correctly.

- `maxStopWords`: Maximum number of stop words allowed, to avoid API errors with extensive lists.
- `promptTemplates`: Can be used to override the default prompt templates for different model roles. Valid values are `chat`, [`edit`](/customize/model-roles/edit#edit-prompt-templating), [`apply`](/customize/model-roles/apply#apply-prompt-templating), and [`autocomplete`](/customize/model-roles/autocomplete#autocomplete-prompt-templating). The `chat` property must be a valid template name, such as `llama3` or `anthropic`.
- `chatOptions`: If the model includes role `chat`, these settings apply for Agent and Chat mode:
  - `baseSystemMessage`: Can be used to override the default system prompt for **Chat** mode.
  - `baseAgentSystemMessage`: Can be used to override the default system prompt for **Agent** mode.
  - `basePlanSystemMessage`: Can be used to override the default system prompt for **Plan** mode.
- `embedOptions`: If the model includes role `embed`, these settings apply for embeddings:
  - `maxChunkSize`: Maximum tokens per document chunk. Minimum is 128 tokens.
  - `maxBatchSize`: Maximum number of chunks per request. Minimum is 1 chunk.
- `defaultCompletionOptions`: Default completion options for model settings.
  - `contextLength`: Maximum context length of the model, typically in tokens.
  - `maxTokens`: Maximum number of tokens to generate in a completion.
  - `temperature`: Controls the randomness of the completion. Values range from `0.0` (deterministic) to `1.0` (random).
  - `topP`: The cumulative probability for nucleus sampling.
  - `topK`: Maximum number of tokens considered at each step.
  - `stop`: An array of stop tokens that will terminate the completion.
  - `reasoning`: Boolean to enable thinking/reasoning for Anthropic Claude 3.7+ and some Ollama models.
  - `reasoningBudgetTokens`: Budget tokens for thinking/reasoning in Anthropic Claude 3.7+ models.
- `requestOptions`: HTTP request options specific to the model.
  - `timeout`: Timeout for each request to the language model.
  - `verifySsl`: Whether to verify SSL certificates for requests.
  - `caBundlePath`: Path to a custom CA bundle for HTTP requests.
  - `proxy`: Proxy URL for HTTP requests.
  - `headers`: Custom headers for HTTP requests.
  - `extraBodyProperties`: Additional properties to merge with the HTTP request body.
  - `noProxy`: List of hostnames that should bypass the specified proxy.
  - `clientCertificate`: Client certificate for HTTP requests.
    - `cert`: Path to the client certificate file.
    - `key`: Path to the client certificate key file.
    - `passphrase`: Optional passphrase for the client certificate key file.
- `autocompleteOptions`: If the model includes role `autocomplete`, these settings apply for tab autocompletion:
  - `disable`: If `true`, disables autocomplete for this model.
  - `maxPromptTokens`: Maximum number of tokens for the autocomplete prompt.
  - `debounceDelay`: Delay before triggering autocomplete in milliseconds.
  - `modelTimeout`: Model timeout for autocomplete requests in milliseconds.
  - `maxSuffixPercentage`: Maximum percentage of prompt allocated for suffix.
  - `prefixPercentage`: Percentage of input allocated for prefix.
  - `transform`: If `false`, disables trimming of multiline completions. Defaults to `true`. Useful for models that generate better multiline completions without transformations.
  - `template`: Custom template for autocomplete using Mustache syntax. You can use the `{{{ prefix }}}`, `{{{ suffix }}}`, `{{{ filename }}}`, `{{{ reponame }}}`, and `{{{ language }}}` variables.
  - `onlyMyCode`: Only includes code within the repository for context.
  - `useCache`: If `true`, enables caching for completions.
  - `useImports`: If `true`, includes imports in context.
  - `useRecentlyEdited`: If `true`, includes recently edited files in context.
  - `useRecentlyOpened`: If `true`, includes recently opened files in context.

**Example:**

```yaml title="config.yaml"
name: My Config
version: 1.0.0
schema: v1
models:
  - name: GPT-4o
    provider: openai
    model: gpt-4o
    roles:
      - chat
      - edit
      - apply
    defaultCompletionOptions:
      temperature: 0.7
      maxTokens: 1500
  - name: Codestral
    provider: mistral
    model: codestral-latest
    roles:
      - autocomplete
    autocompleteOptions:
      debounceDelay: 250
      maxPromptTokens: 1024
      onlyMyCode: true
  - name: My Model - OpenAI-Compatible
    provider: openai
    apiBase: http://my-endpoint/v1
    model: my-custom-model
    capabilities:
      - tool_use
      - image_input
    roles:
      - chat
      - edit
```

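The example above covers the most common settings. The sketch below is purely illustrative (the endpoint, model name, header, and system message are placeholders) and shows how the remaining option groups documented earlier in this section — `promptTemplates`, `chatOptions`, `embedOptions`, and `requestOptions` — attach to a model entry:

```yaml title="config.yaml"
models:
  - name: My Self-Hosted Model # illustrative entry
    provider: openai
    model: my-custom-model
    apiBase: https://llm.example.com/v1 # placeholder endpoint
    roles:
      - chat
      - embed
    promptTemplates:
      chat: llama3 # must be a valid template name, e.g. llama3 or anthropic
    chatOptions:
      baseSystemMessage: Keep answers short and prefer TypeScript examples. # placeholder system prompt
    embedOptions:
      maxChunkSize: 256 # tokens per chunk (minimum 128)
      maxBatchSize: 8 # chunks per request (minimum 1)
    requestOptions:
      verifySsl: true
      headers:
        X-Auth-Token: MY_TOKEN # placeholder header
```
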
---

### `context`

The `context` section defines context providers, which supply additional information or context to the language models. Each context provider can be configured with specific parameters.

More information about usage/params for each context provider can be found [here](/customize/deep-dives/custom-providers).

**Properties:**

- `provider` (**required**): The identifier or name of the context provider (e.g., `code`, `docs`, `web`)
- `name`: Optional name for the provider
- `params`: Optional parameters to configure the context provider's behavior.

**Example:**

```yaml title="config.yaml"
name: My Config
version: 1.0.0
schema: v1
context:
  - provider: file
  - provider: code
  - provider: diff
  - provider: http
    name: Context Server 1
    params:
      url: "https://api.example.com/server1"
  - provider: terminal
```

---

### `rules`

Rules are concatenated into the system message for all [Agent](/ide-extensions/agent/quick-start), [Chat](/ide-extensions/chat/quick-start), and [Edit](/ide-extensions/edit/quick-start) requests.

Configuration example:

```yaml title="config.yaml"
name: My Config
version: 1.0.0
schema: v1
rules:
  - uses: sanity/sanity-opinionated # rules file stored on Continue Mission Control
  - uses: file://user/Desktop/rules.md # rules file stored on local computer
```

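Rules can also be written inline as plain strings rather than referenced with `uses:`, as in the complete example at the end of this reference:

```yaml title="config.yaml"
name: My Config
version: 1.0.0
schema: v1
rules:
  - Give concise responses
  - Always assume TypeScript rather than JavaScript
```
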
Rules file example:

```md title="rules.md"
---
name: Pirate rule
---

Talk like a pirate
```

See the [rules deep dive](/customize/deep-dives/rules) for more details.

---

### `prompts`

Prompts can be invoked with a <kbd>/</kbd> command.

Configuration example:

```yaml title="config.yaml"
name: My Config
version: 1.0.0
schema: v1
prompts:
  - uses: supabase/create-functions # prompts file stored on Continue Mission Control
  - uses: file://user/Desktop/prompts.md # prompts file stored on local computer
```

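Prompts can also be defined inline with a `name`, `description`, and `prompt` body rather than referenced with `uses:`, as in the complete example at the end of this reference:

```yaml title="config.yaml"
name: My Config
version: 1.0.0
schema: v1
prompts:
  - name: test
    description: Unit test a function
    prompt: |
      Please write a complete suite of unit tests for this function. You should use the Jest testing framework.
```
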
Prompts file example:

```md title="prompts.md"
---
name: Make pirate comments
invokable: true
---

Rewrite all comments in the active file to talk like a pirate
```

See the [prompts deep dive](/customize/deep-dives/prompts) for more details.

---

### `docs`

List of documentation sites to index.

**Properties:**

- `name` (**required**): Name of the documentation site, displayed in dropdowns, etc.
- `startUrl` (**required**): Start page for crawling - usually root or intro page for docs
- `favicon`: URL for site favicon (default is `/favicon.ico` from `startUrl`).
- `useLocalCrawling`: Skip the default crawler and only crawl using a local crawler.

**Example:**

```yaml title="config.yaml"
name: My Config
version: 1.0.0
schema: v1
docs:
  - name: Continue
    startUrl: https://docs.continue.dev/intro
    favicon: https://docs.continue.dev/favicon.ico
```

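If a site should only be indexed by the local crawler, the `useLocalCrawling` flag documented above can be set on an entry; a minimal sketch reusing the site from the example above:

```yaml title="config.yaml"
docs:
  - name: Continue
    startUrl: https://docs.continue.dev/intro
    useLocalCrawling: true
```
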
---

### `mcpServers`

The [Model Context Protocol](https://modelcontextprotocol.io/introduction) is a standard proposed by Anthropic to unify prompts, context, and tool use. Continue supports any MCP server with the MCP context provider.

**Properties:**

- `name` (**required**): The name of the MCP server.
- `command` (**required**): The command used to start the server.
- `args`: An optional array of arguments for the command.
- `env`: An optional map of environment variables for the server process.
- `cwd`: An optional working directory to run the command in. Can be an absolute or relative path.
- `requestOptions`: Optional request options for `sse` and `streamable-http` servers. Same format as [model requestOptions](#models).
- `connectionTimeout`: Optional timeout for the _initial_ connection to the MCP server.

**Example:**

```yaml title="config.yaml"
name: My Config
version: 1.0.0
schema: v1
mcpServers:
  - name: My MCP Server
    command: uvx
    args:
      - mcp-server-sqlite
      - --db-path
      - ./test.db
    cwd: /Users/NAME/project
    env:
      NODE_ENV: production
```

### `data`

Destinations to which [development data](/customize/deep-dives/development-data) will be sent.

**Properties:**

- `name` (**required**): The display name of the data destination
- `destination` (**required**): The destination/endpoint that will receive the data. Can be:
  - an HTTP endpoint that will receive a POST request with a JSON blob
  - a file URL to a directory in which events will be dumped to `.jsonl` files
- `schema` (**required**): The schema version of the JSON blobs to be sent. Options include `0.1.0` and `0.2.0`
- `events`: An array of event names to include. Defaults to all events if not specified.
- `level`: A pre-defined filter for event fields. Options include `all` and `noCode`; the latter excludes data like file contents, prompts, and completions. Defaults to `all`
- `apiKey`: API key to be sent with requests (as a Bearer header)
- `requestOptions`: Options for event POST requests. Same format as [model requestOptions](#models).

**Example:**

```yaml title="config.yaml"
name: My Config
version: 1.0.0
schema: v1
data:
  - name: Local Data Bank
    destination: file:///Users/dallin/Documents/code/continuedev/continue-extras/external-data
    schema: 0.2.0
    level: all
  - name: My Private Company
    destination: https://mycompany.com/ingest
    schema: 0.2.0
    level: noCode
    events:
      - autocomplete
      - chatInteraction
```

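Where the receiving endpoint requires authentication or custom request settings, the `apiKey` and `requestOptions` properties documented above can be added to a destination; a minimal sketch (the URL, key, and header are placeholders):

```yaml title="config.yaml"
data:
  - name: My Private Company
    destination: https://mycompany.com/ingest # placeholder endpoint
    schema: 0.2.0
    apiKey: MY_INGEST_API_KEY # placeholder; sent as a Bearer header
    requestOptions:
      headers:
        X-Env: production # placeholder custom header
```
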
---

## Complete YAML Config Example

Putting it all together, here's a complete example of a `config.yaml` configuration file:

```yaml title="config.yaml"
name: My Config
version: 1.0.0
schema: v1
models:
  - uses: anthropic/claude-3.5-sonnet
    with:
      ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
    override:
      defaultCompletionOptions:
        temperature: 0.8
  - name: GPT-4
    provider: openai
    model: gpt-4
    roles:
      - chat
      - edit
    defaultCompletionOptions:
      temperature: 0.5
      maxTokens: 2000
    requestOptions:
      headers:
        Authorization: Bearer YOUR_OPENAI_API_KEY
  - name: Ollama Starcoder
    provider: ollama
    model: starcoder
    roles:
      - autocomplete
    autocompleteOptions:
      debounceDelay: 350
      maxPromptTokens: 1024
      onlyMyCode: true
    defaultCompletionOptions:
      temperature: 0.3
      stop:
        - "\n"
rules:
  - Give concise responses
  - Always assume TypeScript rather than JavaScript
prompts:
  - name: test
    description: Unit test a function
    prompt: |
      Please write a complete suite of unit tests for this function. You should use the Jest testing framework.
      The tests should cover all possible edge cases and should be as thorough as possible.
      You should also include a description of each test case.
  - uses: myprofile/my-favorite-prompt
context:
  - provider: diff
  - provider: file
  - provider: code
mcpServers:
  - name: DevServer
    command: npm
    args:
      - run
      - dev
    env:
      PORT: "3000"
data:
  - name: My Private Company
    destination: https://mycompany.com/ingest
    schema: 0.2.0
    level: noCode
    events:
      - autocomplete
      - chatInteraction
```

## Using YAML Anchors to Avoid Config Duplication

You can also use YAML node anchors to avoid duplicating properties. To do so, you must add the YAML version header `%YAML 1.1`. Here's an example of a `config.yaml` configuration file that uses anchors:

```yaml title="config.yaml"
%YAML 1.1
---
name: My Config
version: 1.0.0
schema: v1
model_defaults: &model_defaults
  provider: openai
  apiKey: my-api-key
  apiBase: https://api.example.com/llm
models:
  - name: mistral
    <<: *model_defaults
    model: mistral-7b-instruct
    roles:
      - chat
      - edit
  - name: qwen2.5-coder-7b-instruct
    <<: *model_defaults
    model: qwen2.5-coder-7b-instruct
    roles:
      - chat
      - edit
  - name: qwen2.5-coder-7b
    <<: *model_defaults
    model: qwen2.5-coder-7b
    useLegacyCompletionsEndpoint: false
    roles:
      - autocomplete
    autocompleteOptions:
      debounceDelay: 350
      maxPromptTokens: 1024
      onlyMyCode: true
```

---

## `config.json` Deprecation

`config.yaml` replaces `config.json`, which is deprecated. View the **[Migration Guide](/reference/yaml-migration)** for help transitioning from the old format.