Interact with your Azure models using LLM.

Parameters

An Azure LLM interface can have the following parameters:

Parameter          Type   Description
temperature        float  Sampling temperature; higher values make the output more random.
max_tokens         int    The maximum number of tokens to generate.
top_p              float  Nucleus sampling: only tokens within the top-p probability mass are considered.
frequency_penalty  float  Penalizes tokens in proportion to how often they have appeared so far.
presence_penalty   float  Penalizes tokens that have already appeared, encouraging new topics.

Usage

Here is how you set up an interface to interact with your Azure models.

1. Create a config.yaml file in the same directory as your code.

   πŸ“ src
   β”œβ”€β”€ 🐍 PythonCode.py
   β”œβ”€β”€ 🐍 PyNotebook.ipynb
   └── πŸ“„ config.yaml
2. Define your Azure OpenAI provider and models inside the config.yaml file.

    providers:
        azure:
            id: azure
            name: Azure
            chat: true
            embed: true
            models:
                YOUR_MODEL:  # Replace with your model name
                    mode: chat
                    max_tokens: ...
                    input_token_cost: ...
                    output_token_cost: ...
If you are not sure of the right values, you can leave max_tokens, input_token_cost, and output_token_cost as 0.
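
For example, here is a hypothetical filled-in entry for a deployment named gpt-4o-mini; the deployment name, token limit, and per-token costs below are placeholders, not official Azure figures:

    providers:
        azure:
            id: azure
            name: Azure
            chat: true
            embed: true
            models:
                gpt-4o-mini:  # hypothetical deployment name
                    mode: chat
                    max_tokens: 128000
                    input_token_cost: 0.00000015
                    output_token_cost: 0.0000006
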
3. Create your llm instance.

from llmstudio import LLM  # assuming the llmstudio package provides LLM; adjust the import to your install

llm = LLM('azure/YOUR_MODEL',
          api_key=YOUR_API_KEY,
          api_endpoint=YOUR_ENDPOINT,
          api_version=YOUR_API_VERSION)
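
If you prefer not to hard-code credentials, you can read them from environment variables instead; a minimal sketch, assuming hypothetical variable names of your choosing:

import os

llm = LLM('azure/YOUR_MODEL',
          api_key=os.environ['AZURE_API_KEY'],            # hypothetical variable name
          api_endpoint=os.environ['AZURE_API_ENDPOINT'],  # hypothetical variable name
          api_version=os.environ['AZURE_API_VERSION'])    # hypothetical variable name
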
4. Optional: You can add your parameters as follows:

llm = LLM('azure/model',
          temperature=...,
          max_tokens=...,
          top_p=...,
          frequency_penalty=...,
          presence_penalty=...)
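
For reference, here is the same call with sample values filled in; these are illustrative defaults, not recommendations:

llm = LLM('azure/model',
          temperature=0.7,        # moderate randomness
          max_tokens=512,         # cap on generated tokens
          top_p=0.9,              # nucleus sampling mass
          frequency_penalty=0.0,  # no frequency-based repetition penalty
          presence_penalty=0.0)   # no presence-based penalty
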
You are done setting up your Azure LLM!
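
As a quick smoke test, you can send a message to your model. The chat call below is an assumption about the interface; check the chat documentation for the exact method:

response = llm.chat('Hello! Are you up and running?')
print(response)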

What’s next?