Commit 2722844

ytian218 committed
server: fix crash when batch > ubatch with embeddings (#12836)
Fixes #12836, where the server crashes with a GGML_ASSERT failure when embeddings are enabled and n_batch > n_ubatch.

Root cause: embeddings use non-causal attention, which requires all tokens to be processed within a single ubatch. When n_batch > n_ubatch, the server attempts to split processing across multiple ubatches, triggering the assertion.

Solution:
- Add parameter validation in main() after common_params_parse()
- When embeddings are enabled and n_batch > n_ubatch:
  * Log warnings explaining the issue
  * Automatically set n_batch = n_ubatch
  * Prevent the server crash

This follows the approach suggested by @ggerganov in issue #12836.

Note: this supersedes the stalled PR #12940, which attempted a runtime fix in the old examples/server/server.cpp location. This implementation validates at startup in tools/server/server.cpp (the current location).

Testing:
- Build: compiles successfully
- Validation triggers: warns when -b > -ub with --embedding
- Auto-correction works: adjusts n_batch = n_ubatch
- No false positives: valid params do not trigger warnings
- Verified on macOS M3 Pro with an embedding model
1 parent 583cb83 commit 2722844
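
For illustration, here is a minimal standalone sketch of the clamping logic this commit adds. The demo_params struct, the clamp_batch_for_embeddings name, and the fprintf logging are hypothetical stand-ins for llama.cpp's common_params and LOG_WRN; the real change lives in tools/server/server.cpp, as shown in the diff below.

#include <cstdio>

// Hypothetical stand-in for llama.cpp's common_params, reduced to
// the fields this fix touches.
struct demo_params {
    bool embedding = false;
    int  n_batch   = 2048; // logical batch size (CLI flag -b)
    int  n_ubatch  = 512;  // physical batch size (CLI flag -ub)
};

// Clamp n_batch down to n_ubatch when embeddings are enabled.
// Non-causal attention must see all tokens of a request in a single
// ubatch, so letting the server split a larger n_batch would hit a
// GGML_ASSERT at decode time instead of a clean warning at startup.
static void clamp_batch_for_embeddings(demo_params & params) {
    if (params.embedding && params.n_batch > params.n_ubatch) {
        fprintf(stderr, "warn: embeddings enabled with n_batch (%d) > n_ubatch (%d)\n",
                params.n_batch, params.n_ubatch);
        fprintf(stderr, "warn: setting n_batch = n_ubatch = %d\n", params.n_ubatch);
        params.n_batch = params.n_ubatch;
    }
}

int main() {
    demo_params params;
    params.embedding = true;            // e.g. --embedding -b 2048 -ub 512
    clamp_batch_for_embeddings(params); // emits both warnings
    printf("n_batch = %d, n_ubatch = %d\n", params.n_batch, params.n_ubatch);
    return 0;                           // prints: n_batch = 512, n_ubatch = 512
}

The design choice matches the commit message: correct the parameters once at startup rather than detecting the condition per request, so the server never reaches the failing assertion.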

File tree

1 file changed: +9 −0 lines changed

tools/server/server.cpp

Lines changed: 9 additions & 0 deletions
@@ -3657,6 +3657,15 @@ int main(int argc, char ** argv) {
         return 1;
     }
 
+    // validate batch size for embeddings
+    // embeddings require all tokens to be processed in a single ubatch
+    // see https://github.com/ggml-org/llama.cpp/issues/12836
+    if (params.embedding && params.n_batch > params.n_ubatch) {
+        LOG_WRN("%s: embeddings enabled with n_batch (%d) > n_ubatch (%d)\n", __func__, params.n_batch, params.n_ubatch);
+        LOG_WRN("%s: setting n_batch = n_ubatch = %d to avoid assertion failure\n", __func__, params.n_ubatch);
+        params.n_batch = params.n_ubatch;
+    }
+
     // TODO: should we have a separate n_parallel parameter for the server?
     // https://github.com/ggml-org/llama.cpp/pull/16736#discussion_r2483763177
     // TODO: this is a common configuration that is suitable for most local use cases
