# Ollama

Interact with your Ollama models using LLM.
## Parameters

An Ollama LLM interface accepts the following parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| `temperature` | float | Sampling temperature for the model. |
| `top_p` | float | Top-p (nucleus sampling) parameter. |
| `num_predict` | int | Maximum number of tokens to predict. |
| `top_k` | int | Top-k sampling parameter. |
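If you want to sanity-check values before use, the four parameters can be bundled into a small helper. This is a minimal sketch; the `OllamaParams` class is illustrative, not part of LLM, and the defaults are assumed to mirror Ollama's documented defaults:

```python
from dataclasses import dataclass


@dataclass
class OllamaParams:
    """Illustrative container for the parameters in the table above."""

    # Assumed defaults, taken from Ollama's parameter documentation.
    temperature: float = 0.8
    top_p: float = 0.9
    num_predict: int = 128
    top_k: int = 40

    def validate(self) -> "OllamaParams":
        # Basic range checks before handing the values to a model.
        if self.temperature < 0.0:
            raise ValueError("temperature must be >= 0")
        if not 0.0 < self.top_p <= 1.0:
            raise ValueError("top_p must be in (0, 1]")
        if self.top_k < 1:
            raise ValueError("top_k must be >= 1")
        return self
```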
## Usage

Here is how to set up an interface to interact with your Ollama models.
### 1. Create a `config.yaml`

Create a `config.yaml` file in the same directory your code is in:

```
src/
├── yourPythonCode.py
├── yourPyNotebook.py
└── config.yaml
```
### 2. Define your Ollama provider and models

Add your Ollama provider and models to the `config.yaml` file:

```yaml
providers:
  ollama:
    id: ollama
    name: Ollama
    chat: true
    embed: true
    keys:
    models:
      YOUR_MODEL: # <- Replace with your model name
        mode: chat
        max_tokens: ...
        input_token_cost: ...
        output_token_cost: ...
```

If you are not sure about any of these values, you can leave them as `0`.
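If you prefer to generate this file from code, a minimal sketch follows. The model name `llama3` and the zero costs are placeholder values, not something the docs prescribe; substitute your own model and token costs:

```python
from pathlib import Path

# Placeholder model name -- replace with the model you run in Ollama.
model_name = "llama3"

# Build the config.yaml content from a template string, mirroring the
# structure shown above. Costs are left at 0 as permitted by the docs.
config = f"""\
providers:
  ollama:
    id: ollama
    name: Ollama
    chat: true
    embed: true
    keys:
    models:
      {model_name}:
        mode: chat
        max_tokens: 0
        input_token_cost: 0
        output_token_cost: 0
"""

Path("config.yaml").write_text(config)
```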
### 3. Create your `LLM` instance

```python
llm = LLM('ollama/{YOUR_MODEL}')
```
### 4. (Optional) Add parameters

You can pass any of the parameters above when creating the instance:

```python
llm = LLM(
    'ollama/model',
    temperature=...,
    num_predict=...,
    top_p=...,
    top_k=...,
)
```
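A convenient pattern is to keep parameter sets as dictionaries and unpack them into the constructor. The preset values below are made up for illustration, not defaults of LLM or Ollama, and the `LLM` call is commented out because it requires the library to be installed:

```python
# Hypothetical presets: low-randomness vs. exploratory sampling.
deterministic = dict(temperature=0.0, top_p=1.0, top_k=1, num_predict=256)
creative = dict(temperature=0.9, top_p=0.95, top_k=60, num_predict=512)

# Unpack a preset into the constructor shown above:
# llm = LLM('ollama/model', **deterministic)
```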
You are done setting up your Ollama LLM!