GithubMirror / LocalAI
Files in backend / cpp / llama-cpp at commit f4b0a304d7f87b99d9af7accfed239adec37a945

Latest commit: f4b0a304d7 by Ettore Di Giacinto, 2026-01-09 07:52:49 +01:00
chore(llama.cpp): propagate errors during model load (#7937)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
File            | Last commit                                                                              | Date
CMakeLists.txt  | fix: BMI2 crash on AVX-only CPUs (Intel Ivy Bridge/Sandy Bridge) (#7864)                 | 2026-01-06 00:13:48 +00:00
grpc-server.cpp | chore(llama.cpp): propagate errors during model load (#7937)                             | 2026-01-09 07:52:49 +01:00
Makefile        | chore(deps): Bump llama.cpp to '480160d47297df43b43746294963476fc0a6e10f' (#7933)        | 2026-01-09 07:52:32 +01:00
package.sh      | feat: package GPU libraries inside backend containers for unified base image (#7891)     | 2026-01-07 15:48:51 +01:00
prepare.sh      | chore: ⬆️ Update ggml-org/llama.cpp to 7f8ef50cce40e3e7e4526a3696cb45658190e69a (#7402)  | 2025-12-01 07:50:40 +01:00
run.sh          | …