---
title: "config.json Reference (Deprecated)"
description: "Legacy configuration reference for config.json - use config.yaml instead"
noindex: true
---

<Warning>
**This page documents deprecated functionality.** For creating slash commands and customizing Continue, please use:

- [Prompt blocks and prompt files](/customize/deep-dives/prompts) for slash commands
- [config.yaml](/reference) for configuration

See the `config.yaml` reference and migration guide [here](/reference).
</Warning>

Below are details for each property that can be set in `config.json`. The config schema code is found in [`extensions/vscode/config_schema.json`](https://github.com/continuedev/continue/blob/main/extensions/vscode/config_schema.json).

**All properties at all levels are optional unless explicitly marked required.**

## `models`

Your **chat** models are defined here, which are used for [Chat](/ide-extensions/chat/how-it-works) and [Edit](/ide-extensions/edit/how-it-works).

Each model has specific configuration options tailored to its provider and functionality, which appear as suggestions while editing the JSON.

**Properties:**

- `title` (**required**): The title to assign to your model, shown in dropdowns, etc.
- `provider` (**required**): The provider of the model, which determines the type and interaction method. Options include `openai`, `ollama`, `xAI`, etc. See IntelliJ suggestions.
- `model` (**required**): The name of the model, used for prompt template auto-detection. Use the special name `AUTODETECT` to get all available models.
- `apiKey`: API key required by providers like OpenAI, Anthropic, Cohere, and xAI.
- `apiBase`: The base URL of the LLM API.
- `contextLength`: Maximum context length of the model, typically in tokens (default: 2048).
- `maxStopWords`: Maximum number of stop words allowed, to avoid API errors with extensive lists.
- `template`: Chat template to format messages. Auto-detected for most models but can be overridden. See IntelliJ suggestions.
- `promptTemplates`: A mapping of prompt template names (e.g., `edit`) to template strings. See the [Deep Dives section](/customize/deep-dives/prompts) for customization details.
- `completionOptions`: Model-specific completion options, same format as top-level [`completionOptions`](#completionoptions), which they override.
- `systemMessage`: A system message that will precede responses from the LLM.
- `requestOptions`: Model-specific HTTP request options, same format as top-level [`requestOptions`](#requestoptions), which they override.
- `apiType`: Specifies the type of API (`openai` or `azure`).
- `apiVersion`: Azure API version (e.g., `2023-07-01-preview`).
- `engine`: Engine for Azure OpenAI requests.
- `capabilities`: Override auto-detected capabilities:
  - `uploadImage`: Boolean indicating if the model supports image uploads.
  - `tools`: Boolean indicating if the model supports tool use.

_(AWS only)_

- `profile`: AWS security profile for authorization.
- `modelArn`: AWS ARN for imported models (e.g., for the `bedrockimport` provider).
- `region`: Region where the model is hosted (e.g., `us-east-1`, `eu-central-1`).

Example:

config.json

```json
{
  "models": [
    { "title": "Ollama", "provider": "ollama", "model": "AUTODETECT" },
    {
      "model": "gpt-4o",
      "contextLength": 128000,
      "title": "GPT-4o",
      "provider": "openai",
      "apiKey": "YOUR_API_KEY"
    }
  ]
}
```

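
The Azure-specific properties (`apiType`, `apiVersion`, `engine`) can be combined in a single model entry. A minimal sketch, where the resource URL, deployment name, and API version are illustrative placeholders, not real values:

```json
{
  "models": [
    {
      "title": "Azure GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiType": "azure",
      "apiBase": "https://my-resource.openai.azure.com",
      "apiVersion": "2023-07-01-preview",
      "engine": "my-gpt4o-deployment",
      "apiKey": "YOUR_AZURE_API_KEY"
    }
  ]
}
```
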
## `tabAutocompleteModel`

Specifies the model or models for tab autocompletion, defaulting to an Ollama instance. This property uses the same format as `models`, and can be either an array of models or a single model object.

Example:

config.json

```json
{
  "tabAutocompleteModel": {
    "title": "My Starcoder",
    "provider": "ollama",
    "model": "starcoder2:3b"
  }
}
```

### `tabAutocompleteOptions`

Specifies options for tab autocompletion behavior.

<Info>
**Note**: Many of these parameters can also be configured per-model using the `autocompleteOptions` field in `config.yaml`. See the [YAML Reference](/reference#models) for more details.
</Info>

**Properties:**

- `disable`: If `true`, disables tab autocomplete (default: `false`).
- `maxPromptTokens`: Maximum number of tokens for the prompt (default: `1024`).
- `debounceDelay`: Delay (in ms) before triggering autocomplete (default: `350`).
- `maxSuffixPercentage`: Maximum percentage of the prompt reserved for the suffix (default: `0.2`).
- `prefixPercentage`: Percentage of the input used for the prefix (default: `0.3`).
- `template`: Template string for autocomplete, using Mustache templating. You can use the `{{{prefix}}}`, `{{{suffix}}}`, `{{{filename}}}`, `{{{reponame}}}`, and `{{{language}}}` variables.
- `disableInFiles`: An array of glob patterns for files in which autocomplete should be disabled.
- `onlyMyCode`: If `true`, only includes code within the repository (default: `true`).

Example:

config.json

```json
{
  "tabAutocompleteOptions": {
    "debounceDelay": 500,
    "maxPromptTokens": 1500,
    "disableInFiles": ["*.md"]
  }
}
```
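
As an illustration of the `template` option, the Mustache variables can be combined into a custom prompt string. The template below is a hypothetical example to show the syntax, not a recommended default:

```json
{
  "tabAutocompleteOptions": {
    "template": "Complete the following {{{language}}} code from {{{reponame}}}:\n{{{prefix}}}[BLANK]{{{suffix}}}"
  }
}
```
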
## `embeddingsProvider`

Embeddings model settings - the model used for @Codebase and @docs.

**Properties:**

- `provider` (**required**): Specifies the embeddings provider, with options including `transformers.js`, `ollama`, `openai`, `cohere`, `gemini`, etc.
- `model`: Model name for embeddings.
- `apiKey`: API key for the provider.
- `apiBase`: Base URL for API requests.
- `requestOptions`: Additional HTTP request settings specific to the embeddings provider.
- `maxEmbeddingChunkSize`: Maximum tokens per document chunk. Minimum is 128 tokens.
- `maxEmbeddingBatchSize`: Maximum number of chunks per request. Minimum is 1 chunk.

_(AWS only)_

- `region`: Specifies the region hosting the model.
- `profile`: AWS security profile.

Example:

config.json

```json
{
  "embeddingsProvider": {
    "provider": "openai",
    "model": "text-embedding-ada-002",
    "apiKey": "<API_KEY>",
    "maxEmbeddingChunkSize": 256,
    "maxEmbeddingBatchSize": 5
  }
}
```

## `completionOptions`

Parameters that control the behavior of text generation and completion settings. Top-level `completionOptions` apply to all models, _unless overridden at the model level_.

**Properties:**

- `stream`: Whether to stream the LLM response. Currently only respected by the `anthropic` and `ollama` providers; other providers will always stream (default: `true`).
- `temperature`: Controls the randomness of the completion. Higher values result in more diverse outputs.
- `topP`: The cumulative probability for nucleus sampling. Lower values limit responses to tokens within the top probability mass.
- `topK`: The maximum number of tokens considered at each step. Limits the generated text to tokens within this probability.
- `presencePenalty`: Discourages the model from generating tokens that have already appeared in the output.
- `frequencyPenalty`: Penalizes tokens based on their frequency in the text, reducing repetition.
- `mirostat`: Enables Mirostat sampling, which controls the perplexity during text generation. Supported by the Ollama, LM Studio, and llama.cpp providers (default: `0`, where `0` = disabled, `1` = Mirostat, and `2` = Mirostat 2.0).
- `stop`: An array of stop tokens that, when encountered, will terminate the completion. Allows specifying multiple end conditions.
- `maxTokens`: The maximum number of tokens to generate in a completion (default: `2048`).
- `numThreads`: The number of threads used during the generation process. Available only for Ollama as `num_thread`.
- `keepAlive`: For Ollama, this parameter sets the number of seconds to keep the model loaded after the last request, unloading it from memory if inactive (default: `1800` seconds, or 30 minutes).
- `numGpu`: For Ollama, this parameter overrides the number of GPU layers that will be used to load the model into VRAM.
- `useMmap`: For Ollama, this parameter allows the model to be mapped into memory. If disabled, it can enhance response time on low-end devices but will slow down the stream.
- `reasoning`: Enables thinking/reasoning for Anthropic Claude 3.7+ and some Ollama models.
- `reasoningBudgetTokens`: Sets budget tokens for thinking/reasoning in Anthropic Claude 3.7+ models.

Example:

config.json

```json
{ "completionOptions": { "stream": false, "temperature": 0.5 } }
```
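
To illustrate the override behavior, a top-level setting can be combined with a model-level one. In this sketch (the model entry is illustrative), the GPT-4o model uses `temperature: 0.1` while all other models inherit `0.5`:

```json
{
  "completionOptions": { "temperature": 0.5, "maxTokens": 1024 },
  "models": [
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "completionOptions": { "temperature": 0.1 }
    }
  ]
}
```
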
## `requestOptions`

Default HTTP request options that apply to all models and context providers, unless overridden at the model level.

**Properties:**

- `timeout`: Timeout for each request to the LLM (default: 7200 seconds).
- `verifySsl`: Whether to verify SSL certificates for requests.
- `caBundlePath`: Path to a custom CA bundle for HTTP requests - a path to a `.pem` file (or an array of paths).
- `proxy`: Proxy URL to use for HTTP requests.
- `headers`: Custom headers for HTTP requests.
- `extraBodyProperties`: Additional properties to merge with the HTTP request body.
- `noProxy`: List of hostnames that should bypass the specified proxy.
- `clientCertificate`: Client certificate for HTTP requests.
  - `cert`: Path to the client certificate file.
  - `key`: Path to the client certificate key file.
  - `passphrase`: Optional passphrase for the client certificate key file.

Example:

config.json

```json
{ "requestOptions": { "headers": { "X-Auth-Token": "xxx" } } }
```
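
Several of the properties above can be combined; in this sketch the proxy URL and file paths are illustrative placeholders:

```json
{
  "requestOptions": {
    "proxy": "http://proxy.example.com:8080",
    "noProxy": ["localhost"],
    "caBundlePath": "/etc/ssl/certs/my-ca.pem",
    "clientCertificate": {
      "cert": "/path/to/client.crt",
      "key": "/path/to/client.key",
      "passphrase": "optional-passphrase"
    }
  }
}
```
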
### `reranker`

Configuration for the reranker model used in response ranking.

**Properties:**

- `name` (**required**): Reranker name, e.g., `cohere`, `voyage`, `llm`, `huggingface-tei`, `bedrock`.
- `params`:
  - `model`: Model name.
  - `apiKey`: API key.
  - `region`: Region (for Bedrock only).

Example:

config.json

```json
{
  "reranker": {
    "name": "voyage",
    "params": { "model": "rerank-2", "apiKey": "<VOYAGE_API_KEY>" }
  }
}
```

## `docs`

List of documentation sites to index.

**Properties:**

- `title` (**required**): Title of the documentation site, displayed in dropdowns, etc.
- `startUrl` (**required**): Start page for crawling - usually the root or intro page for the docs.
- `maxDepth`: Maximum link depth for crawling (default: `4`).
- `faviconUrl`: URL for the site favicon (default is `/favicon.ico` from `startUrl`).
- `useLocalCrawling`: Skip the default crawler and only crawl using a local crawler.

Example:

config.json

```json
{
  "docs": [
    {
      "title": "Continue",
      "startUrl": "https://docs.continue.dev/intro",
      "faviconUrl": "https://docs.continue.dev/favicon.ico"
    }
  ]
}
```

## `slashCommands`

Custom commands initiated by typing "/" in the sidebar. Commands include predefined functionality or may be user-defined.

**Properties:**

- `name`: The command name. Options include "issue", "share", "cmd", "http", "commit", and "review".
- `description`: Brief description of the command.
- `step`: (Deprecated) Used for built-in commands; set the name for pre-configured options.
- `params`: Additional parameters to configure command behavior (command-specific - see the code for the command).

The following commands are built-in and can be added to `config.json` to make them visible:

### `/share`

Generate a shareable markdown transcript of your current chat history.

config.json

```json
{
  "slashCommands": [
    {
      "name": "share",
      "description": "Export the current chat session to markdown",
      "params": { "outputDir": "~/.continue/session-transcripts" }
    }
  ]
}
```

Use the `outputDir` parameter to specify where you want the markdown file to be saved.

### `/cmd`

Generate a shell command from natural language and (only in VS Code) automatically paste it into the terminal.

config.json

```json
{
  "slashCommands": [
    { "name": "cmd", "description": "Generate a shell command" }
  ]
}
```

### `/commit`

Shows the LLM your current git diff and asks it to generate a commit message.

config.json

```json
{
  "slashCommands": [
    {
      "name": "commit",
      "description": "Generate a commit message for the current changes"
    }
  ]
}
```

### `/http`

Write a custom slash command at your own HTTP endpoint. Set `url` in the params object to the endpoint you have set up. The endpoint should return a sequence of string updates, which will be streamed to the Continue sidebar. See our basic [FastAPI example](https://github.com/continuedev/continue/blob/74002369a5e435735b83278fb965e004ae38a97d/core/context/providers/context_provider_server.py#L34-L45) for reference.

config.json

```json
{
  "slashCommands": [
    {
      "name": "http",
      "description": "Does something custom",
      "params": { "url": "<my server endpoint>" }
    }
  ]
}
```

### `/issue`

Describe the issue you'd like to generate, and Continue will turn it into a well-formatted title and body, then give you a link to the draft so you can submit it. Make sure to set the URL of the repository you want to generate issues for.

config.json

```json
{
  "slashCommands": [
    {
      "name": "issue",
      "description": "Generate a link to a drafted GitHub issue",
      "params": { "repositoryUrl": "https://github.com/continuedev/continue" }
    }
  ]
}
```

### `/onboard`

The onboard slash command helps you familiarize yourself with a new project by analyzing the project structure, READMEs, and dependency files. It identifies key folders, explains their purpose, and highlights popular packages used. Additionally, it offers insights into the project's architecture.

config.json

```json
{
  "slashCommands": [
    {
      "name": "onboard",
      "description": "Familiarize yourself with the codebase"
    }
  ]
}
```

Example:

config.json

```json
{
  "slashCommands": [
    { "name": "commit", "description": "Generate a commit message" },
    { "name": "share", "description": "Export this session as markdown" },
    { "name": "cmd", "description": "Generate a shell command" }
  ]
}
```

### `customCommands`

User-defined commands for prompt shortcuts in the sidebar, allowing quick access to common actions.

**Properties:**

- `name`: The name of the custom command.
- `prompt`: Text prompt for the command.
- `description`: Brief description explaining the command's function.

Example:

config.json

```json
{
  "customCommands": [
    {
      "name": "test",
      "prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
      "description": "Write unit tests for highlighted code"
    }
  ]
}
```

### `contextProviders`

List of the pre-defined context providers that will show up as options while typing in the chat, and their customization with `params`.

**Properties:**

- `name`: Name of the context provider, e.g. `docs` or `web`.
- `params`: A context-provider-specific record of params to configure the context behavior.

Example:

config.json

```json
{
  "contextProviders": [
    { "name": "code", "params": {} },
    { "name": "docs", "params": {} },
    { "name": "diff", "params": {} },
    { "name": "open", "params": {} }
  ]
}
```

## `userToken`

An optional token that identifies the user, primarily for authenticated services.

## `systemMessage`

Defines a system message that appears before every response from the language model, providing guidance or context.

## `experimental`

Several experimental config parameters are available, as described below:

`experimental`:

- `defaultContext`: Defines the default context for the LLM. Uses the same format as `contextProviders` but includes an additional `query` property to specify custom query parameters.
- `modelRoles`:
  - `inlineEdit`: Model title for inline edits.
  - `applyCodeBlock`: Model title for applying code blocks.
  - `repoMapFileSelection`: Model title for repo map selections.
- `modelContextProtocolServers`: See [Model Context Protocol](/customize/deep-dives/mcp)

Example:

config.json

```json
{
  "experimental": {
    "modelRoles": { "inlineEdit": "Edit Model" },
    "modelContextProtocolServers": [
      {
        "transport": {
          "type": "stdio",
          "command": "uvx",
          "args": ["mcp-server-sqlite", "--db-path", "/Users/NAME/test.db"]
        }
      }
    ]
  }
}
```
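
`defaultContext` is not shown above; a minimal sketch based on its description (same format as `contextProviders` - the provider name here is an illustrative assumption):

```json
{
  "experimental": {
    "defaultContext": [{ "name": "currentFile" }]
  }
}
```
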
## Fully deprecated settings

Some deprecated `config.json` settings are no longer stored in config and have been moved to be editable through the user settings. If found in `config.json`, they will be auto-migrated to User Settings and removed from `config.json`.

- `allowAnonymousTelemetry`: This value will be migrated to the safest merged value (`false` if either is `false`).
- `promptPath`: This value will override during migration.
- `disableIndexing`: This value will be migrated to the safest merged value (`true` if either is `true`).
- `disableSessionTitles`/`ui.getChatTitles`: This value will be migrated to the safest merged value (`true` if either is `true`). `getChatTitles` takes precedence if set to `false`.
- `tabAutocompleteOptions`
  - `useCache`: This value will override during migration.
  - `disableInFiles`: This value will be migrated to the safest merged value (arrays of file matches merged/deduplicated).
  - `multilineCompletions`: This value will override during migration.
- `experimental`
  - `useChromiumForDocsCrawling`: This value will override during migration.
  - `readResponseTTS`: This value will override during migration.
- `ui` - all will override during migration
  - `codeBlockToolbarPosition`
  - `fontSize`
  - `codeWrap`
  - `displayRawMarkdown`
  - `showChatScrollbar`