LLM
VertexAI
Interact with your VertexAI models using LLM.
Supported models
gemini-1.5-flash
gemini-1.5-pro
gemini-1.0-pro
Parameters
A VertexAI LLM interface can have the following parameters:
Parameter | Type | Description |
---|---|---|
api_key | str | The API key used to authenticate with VertexAI. |
temperature | float | Sampling temperature; higher values make the output more random. |
top_p | float | Nucleus-sampling threshold; limits generation to the smallest set of tokens whose cumulative probability exceeds top_p. |
max_tokens | int | The maximum number of tokens in the model's output. |
frequency_penalty | float | Penalizes tokens in proportion to how often they have already appeared, reducing repetition. |
presence_penalty | float | Penalizes tokens that have appeared at all, encouraging the model to introduce new topics. |
Usage
Here is how you set up an interface to interact with your VertexAI models.
1
Create a .env
file with your GOOGLE_API_KEY
Make sure you name your environment variable GOOGLE_API_KEY
GOOGLE_API_KEY="YOUR-KEY"
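Before creating the interface, you can check that the variable is actually visible to your Python process. The snippet below is a small standard-library sketch (loading the .env file itself is typically handled by a tool such as python-dotenv, which is an assumption here, not something llmstudio requires):

```python
import os

def google_api_key_present() -> bool:
    # Returns True when GOOGLE_API_KEY is set and non-empty in the
    # current environment; llmstudio reads this variable to
    # authenticate your VertexAI requests.
    return bool(os.getenv("GOOGLE_API_KEY"))
```

If this returns False, confirm that your .env file is loaded before the LLM instance is created.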
2
In your Python code, import LLM from llmstudio.
from llmstudio import LLM
3
Create your LLM instance, replacing {model} with one of the supported models.
llm = LLM('vertexai/{model}')
4
Optional: You can add your parameters as follows:
llm = LLM('vertexai/{model}',
          temperature=...,
          max_tokens=...,
          top_p=...,
          frequency_penalty=...,
          presence_penalty=...)
You are done setting up your VertexAI LLM!
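Once the instance is created, you can send it a prompt. The sketch below assumes that the LLM object exposes a `chat` method and that a valid GOOGLE_API_KEY is set in your environment (step 1); check the llmstudio API reference for the exact call signature and response shape:

```python
from llmstudio import LLM

# gemini-1.5-flash is one of the supported models listed above.
llm = LLM('vertexai/gemini-1.5-flash', temperature=0.7)

# `chat` is the assumed entry point for sending a prompt;
# the response is printed as returned by the library.
response = llm.chat("Write a one-line greeting.")
print(response)
```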