
Comparing changes

Choose two branches to see what’s changed or to start a new pull request.

base repository: ggml-org/llama.cpp
base: master
head repository: dsd/llama.cpp
compare: master
  • 1 commit
  • 1 file changed
  • 1 contributor

Commits on Jul 1, 2023

  1. cmake: don't force -mcpu=native on aarch64

    It's currently not possible to cross-compile llama.cpp for aarch64
    because CMakeLists.txt forces -mcpu=native for that target.
    
    -mcpu=native doesn't make sense if your build host is not the
    target architecture, and clang rejects it for that reason, aborting the
    build. This can be easily reproduced using the current Android NDK to build
    for aarch64 on an x86_64 host.
    
    If there is no specific CPU-tuning target for aarch64, then -mcpu
    should be omitted completely. I think that makes sense: there is not
    enough variance in the aarch64 instruction set to warrant a fixed -mcpu
    optimization at this point. And if someone is building natively and wishes
    to enable any possible optimizations for the host device, there is
    already the LLAMA_NATIVE option available.
    
    Fixes #495.
    dsd committed Jul 1, 2023
    Full commit SHA: 2353509
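
The commit above removes a forced compiler flag from CMakeLists.txt. The following is a minimal, hypothetical CMake sketch of the kind of guard it describes; it is an illustration, not the actual llama.cpp build script. The LLAMA_NATIVE option name comes from the commit message itself, while the project name, the option's default, and the exact structure are assumptions.

```cmake
# Hypothetical standalone example: gate -mcpu=native behind an explicit
# opt-in instead of forcing it for every aarch64 build.
cmake_minimum_required(VERSION 3.12)
project(mcpu_native_example C)

# LLAMA_NATIVE is referenced in the commit message; the default here is assumed.
option(LLAMA_NATIVE "enable host-specific CPU tuning" OFF)

if (CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64|arm64)")
    message(STATUS "aarch64 detected")
    # Old behaviour: add_compile_options(-mcpu=native) unconditionally, which
    # clang rejects when cross-compiling (e.g. Android NDK on an x86_64 host).
    # New behaviour: omit -mcpu unless the builder explicitly opts in, and
    # never pass it when CMake knows this is a cross build.
    if (LLAMA_NATIVE AND NOT CMAKE_CROSSCOMPILING)
        add_compile_options(-mcpu=native)
    endif()
endif()
```

With a cross toolchain file such as the Android NDK's, CMAKE_CROSSCOMPILING is set and the flag is skipped automatically; a native aarch64 build can still opt in to host tuning with -DLLAMA_NATIVE=ON.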