
@ianscrivener
Created June 9, 2023 04:10

Test

  • try installing llama-cpp-python and llama-cpp-python[server] from pip, WITH the ggml-metal.metal file (i.e. WITH Metal GPU support) present in the Python executable directory
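The gist doesn't show how the file placement was verified; a minimal sketch, assuming the directory containing the python3 binary is what is meant by the "python executable directory":

```shell
# Sketch only: check for ggml-metal.metal next to the python3 binary.
# The path logic is an assumption; adjust for your environment.
PY_DIR="$(dirname "$(command -v python3)")"
if [ -f "$PY_DIR/ggml-metal.metal" ]; then
  echo "ggml-metal.metal present in $PY_DIR"
else
  echo "ggml-metal.metal MISSING from $PY_DIR"
fi
```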

Environment

  • same environment as the previous test

Result

  • llama-cpp-python[server] FAILS

Steps

##################################
# remove previous pip package

pip uninstall llama-cpp-python -y
pip list | grep llama
> [nothing found]



##################################
# fresh pip install - force reinstall

pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir
pip install 'llama-cpp-python[server]'


pip list | grep llama
>  llama-cpp-python  0.1.59
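To double-check what actually resolved, `pip show` gives the package name and version (0.1.59 above is from this particular run and will differ over time); a sketch:

```shell
# Hypothetical verification step; prints Name/Version lines if the
# package is installed, otherwise a fallback message.
pip show llama-cpp-python 2>/dev/null | grep -E '^(Name|Version):' \
  || echo "llama-cpp-python not installed"
```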


##################################
# TEST

python3 -m llama_cpp.server --model $MODEL
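A quick way to see whether the server came up is to probe its REST endpoint; this sketch assumes the default bind address (localhost:8000) and the OpenAI-compatible /v1/models route, both of which may need adjusting:

```shell
# Hypothetical smoke test; host, port and route are assumptions based on
# llama_cpp.server defaults.
if curl -s -o /dev/null --max-time 2 http://localhost:8000/v1/models; then
  echo "server responded"
else
  echo "server NOT reachable"
fi
```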


