Projects with this topic
yzma
https://github.com/hybridgroup/yzma — Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.