Merged
27 commits
10a47fa
iq4_nl: squash commits for easier rebase
Feb 19, 2024
5691fec
Resurrecting iq3_xs
Feb 20, 2024
76aff09
Minor PPL improvement via a block scale fudge factor
Feb 20, 2024
5be4e7a
Minor improvement via 3 neighbours
Feb 20, 2024
f1255c5
iq3_xs: working scalar and AVX2 dot products
Feb 20, 2024
76214ab
iq3_xs: ARM_NEON dot product - works but extremely slow (10 t/s)
Feb 20, 2024
38aa7b1
iq3_xs: working Metal implementation
Feb 20, 2024
2ec600b
Adding IQ3_M - IQ3_XS mix with mostly Q4_K
Feb 21, 2024
d83fdda
iq3_xs: a 3.4375 bpw variant
Feb 22, 2024
eacff4a
iq3_xs: make CUDA work for new version
Feb 22, 2024
1fef4b8
iq3_xs: make scalar and AVX2 work for new version
Feb 22, 2024
1328331
iq3_s: make ARM_NEON work with new version
Feb 22, 2024
1777825
iq3_xs: make new version work on metal
Feb 22, 2024
87038fe
iq3_xs: tiny Metal speed improvement
Feb 22, 2024
4d5feeb
iq3_xs: tiny Metal speed improvement
Feb 22, 2024
b25f996
Fix stupid warning
Feb 22, 2024
272c7f7
Q3_K_XS now uses a mix of IQ3_XS and IQ3_XXS
Feb 22, 2024
2730225
iq3_xs: rename to iq3_s
Feb 22, 2024
47cf30b
iq3_s: make tests pass
Feb 22, 2024
cd6a0f0
Move Q3_K_XS mix to 3.25 bpw
Feb 23, 2024
436a146
Attempt to fix failing tests
Feb 23, 2024
303f3f3
Another attempt to fix the Windows builds
Feb 23, 2024
0d6d185
Attempt to fix ROCm
Feb 23, 2024
1d47de3
ROCm again
Feb 23, 2024
e6e61e3
iq3_s: partial fix for QK_K = 64
Feb 23, 2024
cbd950b
iq3_s: make it work on metal for QK_K = 64
Feb 23, 2024
e1b8efb
Will this fix ROCm?
Feb 23, 2024
Move Q3_K_XS mix to 3.25 bpw
Iwan Kawrakow committed Feb 23, 2024
commit cd6a0f08be014047a57e5eeeb84b8e10d729d0fe
8 changes: 6 additions & 2 deletions llama.cpp
@@ -10663,13 +10663,17 @@ static ggml_type get_k_quant_type(quantize_state_internal & qs, ggml_type new_type
             else if (ftype == LLAMA_FTYPE_MOSTLY_Q5_K_M) new_type = GGML_TYPE_Q6_K;
         }
         else if (name.find("ffn_gate") != std::string::npos) {
-            if (ftype == LLAMA_FTYPE_MOSTLY_Q3_K_XS) {
+            auto info = layer_info(qs.i_ffn_gate, qs.n_ffn_gate, name.c_str());
+            int i_layer = info.first, n_layer = info.second;
+            if (ftype == LLAMA_FTYPE_MOSTLY_Q3_K_XS && (i_layer >= n_layer/8 && i_layer < 7*n_layer/8)) {
                 new_type = GGML_TYPE_IQ3_XXS;
             }
             ++qs.i_ffn_gate;
         }
         else if (name.find("ffn_up") != std::string::npos) {
-            if (ftype == LLAMA_FTYPE_MOSTLY_Q3_K_XS) {
+            auto info = layer_info(qs.i_ffn_up, qs.n_ffn_up, name.c_str());
+            int i_layer = info.first, n_layer = info.second;
+            if (ftype == LLAMA_FTYPE_MOSTLY_Q3_K_XS && (i_layer >= n_layer/8 && i_layer < 7*n_layer/8)) {
                 new_type = GGML_TYPE_IQ3_XXS;
             }
             ++qs.i_ffn_up;