Tags: JamePeng/llama-cpp-python
Tags from 2025-09-13, all with commit message "Simplify the code structure of test.yaml":

- v0.3.16-cu128-AVX2-win-20250913
- v0.3.16-cu128-AVX2-linux-20250913
- v0.3.16-cu126-AVX2-win-20250913
- v0.3.16-cu126-AVX2-linux-20250913
- v0.3.16-cu124-AVX2-win-20250913
- v0.3.16-cu124-AVX2-linux-20250913

Tags from 2025-08-31, all with commit message "Sync llama: use FA + max. GPU layers by default":

- v0.3.16-cu128-AVX2-win-20250831
- v0.3.16-cu128-AVX2-linux-20250831
- v0.3.16-cu126-AVX2-win-20250831
- v0.3.16-cu126-AVX2-linux-20250831
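The 2025-08-31 tags track an upstream llama.cpp change that enables flash attention (FA) and maximum GPU layer offload by default. A minimal sketch of setting the equivalent options explicitly through llama-cpp-python's Llama constructor, assuming a local GGUF model (the model path below is hypothetical):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/example.gguf",  # hypothetical path to a GGUF model
    n_gpu_layers=-1,   # -1 offloads as many layers as possible to the GPU
    flash_attn=True,   # enable flash attention (the "FA" in the commit message)
)

# Simple completion call to verify the model loads and generates.
out = llm("Q: What is the capital of France? A:", max_tokens=16)
print(out["choices"][0]["text"])
```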