sample-llm.py
Gist by @thapakazi, created November 2, 2024 07:01
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama
import logging
import sys
import os
import pickle

# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

# Get the query from command-line arguments
if len(sys.argv) < 2:
    print("Usage: python sample-llm.py <query>")
    sys.exit(1)

query_string = " ".join(sys.argv[1:])

# Define the file path for saving the index
index_file_path = "saved_index.pkl"

# Initialize the embedding and LLM settings
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-base-en-v1.5")
Settings.llm = Ollama(model="llama3", request_timeout=360.0)
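# Note: Settings sets llama_index's process-wide defaults, which the index
# and query engine below pick up. The Ollama client assumes a local server
# (http://localhost:11434 by default) with the llama3 model already pulled.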

# Check if the index file exists; load it if it does, otherwise create and save it
if os.path.exists(index_file_path):
    with open(index_file_path, "rb") as f:
        index = pickle.load(f)
else:
    documents = SimpleDirectoryReader("data").load_data()
    index = VectorStoreIndex.from_documents(documents)
    # Save the index for future runs
    with open(index_file_path, "wb") as f:
        pickle.dump(index, f)
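
# Note: pickling a VectorStoreIndex can fail in some llama_index versions,
# since parts of the index (e.g. model handles) may not be picklable. As an
# alternative, a sketch of the library's own persistence API (not wired into
# this script):
#
#   from llama_index.core import StorageContext, load_index_from_storage
#
#   PERSIST_DIR = "./storage"
#   if os.path.exists(PERSIST_DIR):
#       storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
#       index = load_index_from_storage(storage_context)
#   else:
#       documents = SimpleDirectoryReader("data").load_data()
#       index = VectorStoreIndex.from_documents(documents)
#       index.storage_context.persist(persist_dir=PERSIST_DIR)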

# Create the query engine and run the query
query_engine = index.as_query_engine()
response = query_engine.query(query_string)

# import pdb; pdb.set_trace()  # uncomment to inspect the response interactively
print(response)
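
# Example usage (assumes an Ollama server running locally with the llama3
# model pulled, and a ./data directory containing the documents to index):
#
#   python sample-llm.py "What do these documents say about X?"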