chore(model gallery): 🤖 add 1 new model via gallery agent (#8170)

chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Author: LocalAI [bot]
Date: 2026-01-23 08:19:39 +01:00
Committed by: GitHub
Parent: 552c62a19c
Commit: ea51567b89


@@ -1,4 +1,36 @@
---
- name: "huihui-glm-4.7-flash-abliterated-i1"
  url: "github:mudler/LocalAI/gallery/virtual.yaml@master"
  urls:
    - https://huggingface.co/mradermacher/Huihui-GLM-4.7-Flash-abliterated-i1-GGUF
  description: |
    The model is a quantized version of **huihui-ai/Huihui-GLM-4.7-Flash-abliterated**, optimized for efficiency and deployment. It uses GGUF files with various quantization levels (e.g., IQ1_M, IQ2_XXS, Q4_K_M) and is designed for tasks requiring low-resource deployment. Key features include:
    - **Base Model**: Huihui-GLM-4.7-Flash-abliterated (unmodified, original model).
    - **Quantization**: Supports IQ1_M to Q4_K_M, balancing accuracy and efficiency.
    - **Use Cases**: Suitable for applications needing lightweight inference, such as edge devices or resource-constrained environments.
    - **Downloads**: Available in GGUF format with varying quality and size (e.g., 0.2GB to 18.2GB).
    - **Tags**: Abliterated, uncensored, and optimized for specific tasks.
    This model is a modified version of the original GLM-4.7, tailored for deployment with quantized weights.
  overrides:
    parameters:
      model: llama-cpp/models/Huihui-GLM-4.7-Flash-abliterated.i1-Q4_K_M.gguf
    name: Huihui-GLM-4.7-Flash-abliterated-i1-GGUF
    backend: llama-cpp
    template:
      use_tokenizer_template: true
    known_usecases:
      - chat
    function:
      grammar:
        disable: true
    description: Imported from https://huggingface.co/mradermacher/Huihui-GLM-4.7-Flash-abliterated-i1-GGUF
    options:
      - use_jinja:true
    files:
      - filename: llama-cpp/models/Huihui-GLM-4.7-Flash-abliterated.i1-Q4_K_M.gguf
        sha256: 2ec5fcf2aa882c0c55fc67a35ea7ed50c24016bc4a8a4ceacfcea103dc2f1cb8
        uri: https://huggingface.co/mradermacher/Huihui-GLM-4.7-Flash-abliterated-i1-GGUF/resolve/main/Huihui-GLM-4.7-Flash-abliterated.i1-Q4_K_M.gguf
- name: "mox-small-1-i1"
  url: "github:mudler/LocalAI/gallery/virtual.yaml@master"
  urls:
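The `files:` entry above pairs each GGUF download with a `sha256` digest so the downloaded artifact can be checked for corruption. A minimal sketch of such a check in Python, assuming the digest and path from the entry above; the `sha256_of` helper name and the local file location are illustrative, not part of LocalAI itself:

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so multi-GB GGUF files need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Digest copied from the gallery entry; the local path is an assumption.
expected = "2ec5fcf2aa882c0c55fc67a35ea7ed50c24016bc4a8a4ceacfcea103dc2f1cb8"
model_path = Path("llama-cpp/models/Huihui-GLM-4.7-Flash-abliterated.i1-Q4_K_M.gguf")

if model_path.exists():
    status = "ok" if sha256_of(model_path) == expected else "MISMATCH"
    print(f"checksum {status}")
```

Streaming the hash rather than calling `read()` once matters here because the quantized files in this entry range up to roughly 18 GB.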