LocalAI / backend / cpp / llama-cpp
Tree at commit 48e08772f3b2553a61558e53962f405846861f49
Latest commit 48e08772f3 by Ettore Di Giacinto, 2026-01-29 00:22:25 +01:00:
chore(llama.cpp): bump to 'f6b533d898ce84bae8d9fa8dfc6697ac087800bf' (#8275)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
File             Last commit                                                                             Date
CMakeLists.txt   …
grpc-server.cpp  chore(llama.cpp): bump to 'f6b533d898ce84bae8d9fa8dfc6697ac087800bf' (#8275)           2026-01-29 00:22:25 +01:00
Makefile         chore(llama.cpp): bump to 'f6b533d898ce84bae8d9fa8dfc6697ac087800bf' (#8275)           2026-01-29 00:22:25 +01:00
package.sh       feat: package GPU libraries inside backend containers for unified base image (#7891)   2026-01-07 15:48:51 +01:00
prepare.sh       …
run.sh           …