This guide outlines how to build a tool-calling agent using LangChain + LLMstudio.

1. Set up your tools

Start by defining the tools your agent will have access to.

from langchain.tools import tool

@tool
def buy_ticket(destination: str):
    """Use this to buy a ticket"""
    return "Bought ticket number 270924"

@tool
def get_departure(ticket_number: str):
    """Use this to fetch the departure time of a train"""
    return "8:25 AM"

2. Set up your .env

Create a .env file in the root of your project with the credentials for the providers you want to use.

OPENAI_API_KEY="YOUR_API_KEY"
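
The provider credentials are typically read from environment variables. If your setup does not load .env files automatically, you can load them yourself with python-dotenv (this assumes the python-dotenv package is installed):

from dotenv import load_dotenv

load_dotenv()  # loads OPENAI_API_KEY (and any other keys) from .env into the environment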

3. Set up your model using LLMstudio

Use LLMstudio to choose the provider and model you want to use.

from llmstudio.langchain import ChatLLMstudio  # import path may vary with your LLMstudio version

model = ChatLLMstudio(model_id='openai/gpt-4o')
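
To confirm the credentials and model id are picked up correctly, you can call the model directly before building the agent; this assumes ChatLLMstudio implements LangChain's standard chat model interface:

# Simple smoke test before building the agent
response = model.invoke("Hello! Can you hear me?")
print(response.content)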

4. Build the agent

Set up your agent and agent executor using Langchain.

from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent

# Pull a prompt template built for OpenAI tools agents
prompt = hub.pull("hwchase17/openai-tools-agent")

# Register the tools defined in step 1
tools = [buy_ticket, get_departure]

agent = create_openai_tools_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

input = "Can you buy me a ticket to madrid?"

# Using with chat history
agent_executor.invoke(
    {
        "input": input,
    }
)
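
To carry earlier turns into a follow-up question, the hwchase17/openai-tools-agent prompt includes an optional chat_history placeholder, so you can pass previous messages along with the new input. A minimal sketch, with illustrative message contents:

from langchain_core.messages import AIMessage, HumanMessage

# Follow-up question that relies on the previous turn passed as chat_history
agent_executor.invoke(
    {
        "input": "What time does my train leave?",
        "chat_history": [
            HumanMessage(content="Can you buy me a ticket to Madrid?"),
            AIMessage(content="I bought ticket number 270924 to Madrid."),
        ],
    }
)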