---
title: Rerank Role
description: Rerank model role
keywords: [rerank, reranking, model, role]
sidebar_position: 6
---
A "reranking model" is trained to take two pieces of text (often a user question and a document) and return a relevancy score between 0 and 1 that estimates how useful the document will be in answering the question. Rerankers are typically much smaller than LLMs, which makes them extremely fast and cheap in comparison.
In Continue, rerankers are designated with the `rerank` role and used by [codebase awareness](/guides/codebase-documentation-awareness) to select the most relevant code snippets after vector search.
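To make the inputs and outputs concrete, here is a minimal sketch of what a rerank call looks like. The endpoint URL, field names, and response shape are hypothetical placeholders (each provider below differs slightly); the pattern that matters is query plus candidate documents in, per-document scores out.

```typescript
// Hypothetical rerank call: the endpoint and field names are illustrative,
// not any specific provider's API. All of the rerankers below follow this
// same pattern: a query and candidate documents in, relevance scores out.
interface RerankResult {
  index: number; // position of the document in the request
  relevanceScore: number; // 0..1, higher = more relevant to the query
}

async function rerank(query: string, documents: string[]): Promise<RerankResult[]> {
  const response = await fetch("https://example.com/v1/rerank", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.RERANK_API_KEY}`,
    },
    body: JSON.stringify({ model: "rerank-2", query, documents }),
  });
  const { results } = (await response.json()) as { results: RerankResult[] };
  // Sort so the most relevant snippets come first.
  return results.sort((a, b) => b.relevanceScore - a.relevanceScore);
}
```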
## Recommended reranking models
<Info>
For a comparison of all reranking models including open and closed options, see our [comprehensive model recommendations](/customize/models#recommended-models).
</Info>
If you have the ability to use any model, we recommend `rerank-2` by Voyage AI, which is listed below along with the rest of the reranker options.
### Voyage AI
Voyage AI offers the best reranking model for code with `rerank-2`. After obtaining an API key from [Voyage AI](https://www.voyageai.com/), you can configure a reranker as follows:
<Tabs>
<Tab title="Hub">
```yaml title="config.yaml"
name: My Config
version: 0.0.1
schema: v1
models:
  - uses: voyageai/rerank-2
```
</Tab>
<Tab title="YAML">
```yaml title="config.yaml"
name: My Config
version: 0.0.1
schema: v1
models:
  - name: My Voyage Reranker
    provider: voyage
    apiKey: <YOUR_VOYAGE_API_KEY>
    model: rerank-2
    roles:
      - rerank
```
</Tab>
</Tabs>
### Cohere
See Cohere's documentation for rerankers [here](https://docs.cohere.com/docs/rerank-2).
<Tabs>
{/* HUB_TODO block doesn't exist */}
{/* <Tab title="Hub">
[Cohere Reranker English v3](https://continue.dev/)
</Tab> */}
<Tab title="YAML">
```yaml title="config.yaml"
name: My Config
version: 0.0.1
schema: v1
models:
  - name: Cohere Reranker
    provider: cohere
    model: rerank-english-v3.0
    apiKey: <YOUR_COHERE_API_KEY>
    roles:
      - rerank
```
</Tab>
</Tabs>
### LLM
If you only have access to a single LLM, you can use it as a reranker. This is discouraged unless truly necessary, because it is much more expensive and still less accurate than any of the models above, which are trained specifically for the task. Note that this will not work with a local model, for example via Ollama, because too many parallel requests need to be made; see the sketch after the configuration below.
<Tabs>
{/* HUB_TODO block doesn't exist */}
{/* <Tab title="Hub">
[GPT-4o LLM Reranker Block](https://continue.dev/)
</Tab> */}
<Tab title="YAML">
```yaml title="config.yaml"
name: My Config
version: 0.0.1
schema: v1
models:
  - name: LLM Reranker
    provider: openai
    model: gpt-4o
    apiKey: <YOUR_OPENAI_API_KEY>
    roles:
      - rerank
```
</Tab>
</Tabs>
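To see why this is expensive and concurrency-heavy, note that each candidate snippet needs its own completion request. The sketch below is a hypothetical illustration of LLM-based relevance scoring, not Continue's actual implementation; the prompt, the score parsing, and the use of the OpenAI chat completions endpoint are assumptions made for the example.

```typescript
// Hypothetical sketch of LLM-as-reranker: one chat completion per snippet.
// Not Continue's implementation; the prompt and parsing are illustrative.
async function scoreSnippet(query: string, snippet: string): Promise<number> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [
        {
          role: "user",
          content: `On a scale from 0 to 1, how relevant is this code snippet to the question?\nQuestion: ${query}\nSnippet:\n${snippet}\nAnswer with a single number.`,
        },
      ],
    }),
  });
  const data = await response.json();
  return parseFloat(data.choices[0].message.content);
}

// Scoring N snippets means N requests in parallel, which is why local
// servers with limited concurrency (e.g. Ollama) struggle in this role.
async function rerankWithLLM(query: string, snippets: string[]) {
  const scores = await Promise.all(snippets.map((s) => scoreSnippet(query, s)));
  return snippets
    .map((snippet, i) => ({ snippet, score: scores[i] }))
    .sort((a, b) => b.score - a.score);
}
```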
### Text Embeddings Inference
[Hugging Face Text Embeddings Inference](https://huggingface.co/docs/text-embeddings-inference/en/index) enables you to host your own [reranker endpoint](https://huggingface.github.io/text-embeddings-inference/#/Text%20Embeddings%20Inference/rerank). You can configure your reranker as follows:
<Tabs>
{/* HUB_TODO */}
{/* <Tab title="Hub">
[HuggingFace TEI Reranker block](https://continue.dev/)
</Tab> */}
<Tab title="YAML">
```yaml title="config.yaml"
name: My Config
version: 0.0.1
schema: v1
models:
  - name: Huggingface-tei Reranker
    provider: huggingface-tei
    model: tei
    apiBase: http://localhost:8080
    apiKey: <YOUR_TEI_API_KEY>
    roles:
      - rerank
```
</Tab>
</Tabs>
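Once the TEI server is up, you can sanity-check it directly before pointing Continue at it. The sketch below assumes the request and response shape of the TEI rerank route linked above (`query` plus `texts` in, indexed scores out); verify the exact fields against your TEI version.

```typescript
// Quick check that a local TEI reranker is answering. The request/response
// shape is assumed from the TEI rerank endpoint linked above.
async function testTeiReranker() {
  const response = await fetch("http://localhost:8080/rerank", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: "How do I read a file in Rust?",
      texts: [
        "fn read_file(path: &str) -> std::io::Result<String> { std::fs::read_to_string(path) }",
        "SELECT * FROM users WHERE id = 1;",
      ],
    }),
  });
  const scores: { index: number; score: number }[] = await response.json();
  console.log(scores); // the Rust snippet should score higher than the SQL one
}
```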