Interact with your VertexAI models using LLM.

Supported models

  1. gemini-1.5-flash
  2. gemini-1.5-pro
  3. gemini-1.0-pro

Parameters

A VertexAI LLM interface can have the following parameters:

Parameter          Type   Description
api_key            str    The API key for authentication.
temperature        float  The temperature parameter for the model.
top_p              float  The top-p parameter for the model.
max_tokens         int    The maximum number of tokens in the model's output.
frequency_penalty  float  The frequency penalty parameter for the model.
presence_penalty   float  The presence penalty parameter for the model.
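As a rough guide to the sampling parameters above, the following sketch checks values against commonly used ranges. The bounds shown are an assumption based on typical provider limits, not values confirmed by llmstudio or VertexAI; check the provider's documentation for authoritative limits.

```python
def validate_params(temperature, top_p, max_tokens,
                    frequency_penalty, presence_penalty):
    """Sanity-check sampling parameters against commonly used ranges.

    Illustrative sketch only; the exact bounds accepted by VertexAI
    models are an assumption here.
    """
    assert 0.0 <= temperature <= 2.0, "temperature is usually in [0, 2]"
    assert 0.0 <= top_p <= 1.0, "top_p is a probability mass in [0, 1]"
    assert max_tokens > 0, "max_tokens must be a positive integer"
    assert -2.0 <= frequency_penalty <= 2.0, "penalties are usually in [-2, 2]"
    assert -2.0 <= presence_penalty <= 2.0, "penalties are usually in [-2, 2]"
    return True

# Example: a conservative configuration passes the checks.
validate_params(temperature=0.7, top_p=0.95, max_tokens=256,
                frequency_penalty=0.0, presence_penalty=0.0)
```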

Usage

Here is how to set up an interface to interact with your VertexAI models.

1

Create a .env file with your GOOGLE_API_KEY

Make sure you name your environment variable GOOGLE_API_KEY.

GOOGLE_API_KEY="YOUR-KEY"
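A .env file is just plain KEY="VALUE" lines. The sketch below is not llmstudio's own loader (projects typically use python-dotenv for this); it is a minimal standard-library illustration of how such a file is read into the environment:

```python
import os
import tempfile

def load_dotenv(path):
    """Minimal .env reader: parse KEY="VALUE" lines into os.environ.

    Illustrative sketch only; real projects usually use python-dotenv.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"').strip("'")

# Example: write a sample .env and load it back.
with tempfile.TemporaryDirectory() as tmp:
    env_path = os.path.join(tmp, ".env")
    with open(env_path, "w") as f:
        f.write('GOOGLE_API_KEY="YOUR-KEY"\n')
    load_dotenv(env_path)

print(os.environ["GOOGLE_API_KEY"])  # YOUR-KEY
```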
2

In your Python code, import LLM from llmstudio.

from llmstudio import LLM
3

Create your LLM instance, replacing {model} with one of the supported model names.

llm = LLM('vertexai/{model}')
4

Optional: You can add your parameters as follows:

llm = LLM('vertexai/{model}',
          temperature=...,
          max_tokens=...,
          top_p=...,
          frequency_penalty=...,
          presence_penalty=...)
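One convenient way to manage these settings is to collect them in a dict and unpack it into the constructor. The values below are hypothetical examples, not defaults; the LLM call itself is left commented because it needs llmstudio installed and a valid GOOGLE_API_KEY.

```python
# Hypothetical parameter set; the values are illustrative, not defaults.
params = {
    "temperature": 0.7,
    "top_p": 0.95,
    "max_tokens": 256,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
}

# With llmstudio installed and GOOGLE_API_KEY set, the dict can be
# unpacked into the constructor (uncomment to run):
# from llmstudio import LLM
# llm = LLM('vertexai/gemini-1.5-flash', **params)
```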
You are done setting up your VertexAI LLM!

What’s next?