feat(realtime): Add audio conversations (#6245)
* feat(realtime): Add audio conversations
* chore(realtime): Vendor the updated API and modify for server side
* feat(realtime): Update to the GA realtime API
* chore: Document realtime API and add docs to AGENTS.md
* feat: Filter reasoning from spoken output
* fix(realtime): Send delta and done events for tool calls and audio transcripts

  Ensure that content is sent in both delta and done events for function call arguments and audio transcripts. This fixes compatibility with clients that rely on delta events for parsing.

* fix(realtime): Improve tool call handling and error reporting

  - Refactor the Model interface to accept []types.ToolUnion and *types.ToolChoiceUnion instead of JSON strings, eliminating unnecessary marshal/unmarshal cycles
  - Fix Parameters field handling: support both map[string]any and JSON string formats
  - Add a PredictConfig() method to the Model interface for accessing model configuration
  - Add comprehensive debug logging for tool call parsing and function config
  - Add a missing return statement after a prediction error (critical bug fix)
  - Add warning logs for NoAction function argument parsing failures
  - Improve error visibility throughout the generateResponse function

💘 Generated with Crush
Assisted-by: Claude Sonnet 4.5 via Crush <crush@charm.land>
Signed-off-by: Richard Palethorpe <io@richiejp.com>
commit dd8e74a486 (parent 48e08772f3), committed by GitHub
@@ -476,7 +476,7 @@ reasoning:

## Pipeline Configuration

-Define pipelines for audio-to-audio processing:
+Define pipelines for audio-to-audio processing and the [Realtime API]({{%relref "features/openai-realtime" %}}):

| Field | Type | Description |
|-------|------|-------------|
@@ -20,6 +20,7 @@ LocalAI provides a comprehensive set of features for running AI models locally.

## Advanced Features

- **[OpenAI Functions](openai-functions/)** - Use function calling and tools API with local models
+- **[Realtime API](openai-realtime/)** - Low-latency multi-modal conversations (voice+text) over WebSocket
- **[Constrained Grammars](constrained_grammars/)** - Control model output format with BNF grammars
- **[GPU Acceleration](GPU-acceleration/)** - Optimize performance with GPU support
- **[Distributed Inference](distributed_inferencing/)** - Scale inference across multiple nodes
docs/content/features/openai-realtime.md (new file, 42 lines)
@@ -0,0 +1,42 @@
---
title: "Realtime API"
weight: 60
---

# Realtime API

LocalAI supports the [OpenAI Realtime API](https://platform.openai.com/docs/guides/realtime) which enables low-latency, multi-modal conversations (voice and text) over WebSocket.

To use the Realtime API, you need to configure a pipeline model that defines the components for Voice Activity Detection (VAD), Transcription (STT), Language Model (LLM), and Text-to-Speech (TTS).

## Configuration

Create a model configuration file (e.g., `gpt-realtime.yaml`) in your models directory. For a complete reference of configuration options, see [Model Configuration]({{%relref "advanced/model-configuration" %}}).
```yaml
name: gpt-realtime
pipeline:
  vad: silero-vad-ggml
  transcription: whisper-large-turbo
  llm: qwen3-4b
  tts: tts-1
```
This configuration links the following components:

- **vad**: The Voice Activity Detection model (e.g., `silero-vad-ggml`) to detect when the user is speaking.
- **transcription**: The Speech-to-Text model (e.g., `whisper-large-turbo`) to transcribe user audio.
- **llm**: The Large Language Model (e.g., `qwen3-4b`) to generate responses.
- **tts**: The Text-to-Speech model (e.g., `tts-1`) to synthesize the audio response.

Make sure all referenced models (`silero-vad-ggml`, `whisper-large-turbo`, `qwen3-4b`, `tts-1`) are also installed or defined in your LocalAI instance.
## Usage

Once configured, you can connect to the Realtime API endpoint via WebSocket:

```
ws://localhost:8080/v1/realtime?model=gpt-realtime
```

The API follows the OpenAI Realtime API protocol for handling sessions, audio buffers, and conversation items.
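As a concrete illustration of that protocol flow, the sketch below shows a minimal client using the Python `websockets` package. It assumes your LocalAI build implements the standard OpenAI events (`session.update`, `input_audio_buffer.append`, `input_audio_buffer.commit`, `response.create`); `question.pcm` is a hypothetical PCM16 audio file standing in for live microphone capture, and a real client would also decode and play the audio deltas it receives rather than just printing event types.

```python
# Minimal Realtime API client sketch (pip install websockets).
# Event names follow the OpenAI GA Realtime protocol that LocalAI targets;
# adjust them if your version differs.
import asyncio
import base64
import json

import websockets

URI = "ws://localhost:8080/v1/realtime?model=gpt-realtime"


async def main() -> None:
    async with websockets.connect(URI) as ws:
        # Configure the session with plain-text instructions.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {
                "instructions": "You are a helpful voice assistant.",
            },
        }))

        # Append audio to the input buffer (a local PCM16 file is used
        # here as a hypothetical stand-in for microphone capture).
        with open("question.pcm", "rb") as f:
            audio = f.read()
        await ws.send(json.dumps({
            "type": "input_audio_buffer.append",
            "audio": base64.b64encode(audio).decode(),
        }))
        await ws.send(json.dumps({"type": "input_audio_buffer.commit"}))

        # Ask the server to generate a response for the committed audio.
        await ws.send(json.dumps({"type": "response.create"}))

        # Read server events until the response completes or errors out.
        async for message in ws:
            event = json.loads(message)
            print(event.get("type"))
            if event.get("type") in ("response.done", "error"):
                break


asyncio.run(main())
```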