Interact with your Ollama models using LLM.

Parameters

An Ollama LLM interface can have the following parameters:

Parameter     Type    Description
temperature   float   Controls the randomness of the model's output; higher values produce more varied text.
top_p         float   The nucleus-sampling cutoff: only tokens within this cumulative probability are considered.
num_predict   int     The maximum number of tokens to generate.
top_k         int     Limits sampling to the k most likely tokens.

Usage

Here is how to set up an interface to interact with your Ollama models.

1. Create a config.yaml file in the same directory as your code.

   src/
   ├── yourPythonCode.py
   ├── yourPyNotebook.ipynb
   └── config.yaml
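If you want to confirm the layout before moving on, a small optional check (not part of the library; it assumes you run it from yourPythonCode.py) could look like this:

from pathlib import Path

# Optional sanity check: confirm config.yaml sits next to this script,
# as the layout above requires.
config_path = Path(__file__).resolve().parent / "config.yaml"
print("config.yaml found:", config_path.exists())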
2. Define your Ollama provider and models inside the config.yaml file.

providers:
    ollama:
        id: ollama
        name: Ollama
        chat: true
        embed: true
        keys:
        models:
            YOUR_MODEL:  # <- Replace with your model name
                mode: chat
                max_tokens: ...
                input_token_cost: ...
                output_token_cost: ...
If you are not sure about any of these parameters, you can leave them as 0.
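For illustration, here is a hypothetical filled-in configuration. The model name (llama3) and the max_tokens value are assumptions; use the name shown by ollama list and the context size of your own model, and leave the costs at 0 for a local model.

providers:
    ollama:
        id: ollama
        name: Ollama
        chat: true
        embed: true
        keys:
        models:
            llama3:                   # assumed model name
                mode: chat
                max_tokens: 8192      # assumed context window; adjust for your model
                input_token_cost: 0   # local models have no per-token cost
                output_token_cost: 0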
3. Create your llm instance.

llm = LLM('ollama/{YOUR_MODEL}')
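As a quick sanity check, here is a minimal sketch of sending a prompt. It assumes LLM has already been imported from the package you installed, and that the provider's chat capability (chat: true above) is exposed through a chat method; check your package's documentation for the exact import path and method name.

# Assumes LLM is imported from your installed package.
llm = LLM('ollama/{YOUR_MODEL}')  # replace {YOUR_MODEL} with the model name from config.yaml

# The chat method name is an assumption based on the provider's chat capability.
response = llm.chat("Say hello in one sentence.")
print(response)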
4. Optional: You can add your parameters as follows:

llm = LLM(
    'ollama/model',
    temperature=...,
    num_predict=...,
    top_p=...,
    top_k=...,
)
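For example, a sketch with illustrative values (every number below is an assumption; tune them for your model and use case):

# All parameter values here are illustrative assumptions, not recommendations.
llm = LLM(
    'ollama/model',
    temperature=0.7,   # higher values give more varied output
    num_predict=256,   # cap on the number of tokens to generate
    top_p=0.9,         # nucleus sampling threshold
    top_k=40,          # sample only from the 40 most likely tokens
)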
You are done setting up your Ollama LLM!

What’s next?