Description
Hardware: MacBook Pro, i7, 16GB RAM
OS: macOS 13.3.1
Python 3.11.3, Miniconda3
Got this in the terminal:
```
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 7B
error loading model: this format is no longer supported (see ggml-org/llama.cpp#1305)
llama_init_from_file: failed to load model
Traceback (most recent call last):
  File "/Users/pchan3/Desktop/privateGPT/ingest.py", line 62, in <module>
    main()
  File "/Users/pchan3/Desktop/privateGPT/ingest.py", line 53, in main
    llama = LlamaCppEmbeddings(model_path=llama_embeddings_model, n_ctx=model_n_ctx)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCppEmbeddings
__root__
  Could not load Llama model from path: models/ggml-model-q4_0.bin. Received error  (type=value_error)
```
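For context: the llama.cpp error above means the GGML file predates the quantization-format change referenced by ggml-org/llama.cpp#1305, so the file itself has to be replaced (re-downloaded in the current format, or re-quantized from the original weights with an up-to-date llama.cpp); no change to ingest.py alone will fix it. LangChain's pydantic wrapper then surfaces that low-level failure as the generic value_error on the model path. Below is a minimal sketch of a guarded load that makes the cause explicit; it assumes the langchain and pydantic packages from the traceback, and MODEL_PATH / N_CTX are placeholders for the values ingest.py reads from its config:

```python
# Minimal sketch: load LlamaCppEmbeddings and turn the generic pydantic
# value_error into an explicit hint about an outdated GGML file.
# MODEL_PATH and N_CTX are placeholders for ingest.py's configured values.
from langchain.embeddings import LlamaCppEmbeddings
from pydantic import ValidationError

MODEL_PATH = "models/ggml-model-q4_0.bin"
N_CTX = 1000  # assumed context size; substitute your model_n_ctx

try:
    llama = LlamaCppEmbeddings(model_path=MODEL_PATH, n_ctx=N_CTX)
except ValidationError as err:
    # llama.cpp rejects the old-format file before a usable model object
    # exists, so pydantic can only report a validation error on the path.
    print(f"Could not load {MODEL_PATH}: {err}")
    print("If this is an old-format GGML quantization, re-download the model "
          "in the current format or re-quantize it from the original weights "
          "(see ggml-org/llama.cpp#1305).")
```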