Sep 2, 2024

LLM Function Calling: How to Get Started

Oladimeji Sowole

Function calling is a powerful capability in large language models (LLMs) like GPT-4, allowing these models to interact seamlessly with external tools and APIs. This functionality enables LLMs to convert natural language into actionable API calls, making them more versatile and useful in real-world applications. For instance, when a user asks, “What is the weather like in Lagos?” an LLM equipped with function calling can transform this query into a call to a weather API, retrieving the current weather data for Lagos, Nigeria.

This integration is essential for building advanced conversational agents or chatbots that require real-time data or need to perform specific actions. Function calling allows developers to define various functions that the LLM can call based on the context and requirements of the conversation. These functions act as tools within your LLM application, enabling tasks such as data extraction, knowledge retrieval and API integration.

With function calling, developers can extend LLMs’ capabilities, making them more interactive and responsive to user needs. This guide walks you through implementing function calling with the OpenAI API, using a simple, practical example. The step-by-step code snippets demonstrate how to define functions and call them dynamically based on user input, using a small movie database to fetch movie details.

Prerequisites

  1. Python installed on your machine.
  2. An OpenAI API key.
  3. The python-dotenv library to manage environment variables.

Setting Up the Environment

First, let’s set up our environment and install the necessary libraries.

pip install openai python-dotenv

Next, create a .env file in your project directory and add your OpenAI API key:

OPENAI_API_KEY=your_openai_api_key

Initialize OpenAI API

Load the API key from the .env file and set it up in your script.

import os
import openai
import json
from dotenv import load_dotenv

load_dotenv()

# Set the OpenAI API key
openai.api_key = os.getenv('OPENAI_API_KEY')

Define a Function to Get Movie Details

We’ll create a function that fetches movie details from a dummy movie database.

def get_movie_details(title):
    """Get details of a movie by its title"""
    movie_database = {
        "Inception": {"title": "Inception", "director": "Christopher Nolan", "year": 2010},
        "The Matrix": {"title": "The Matrix", "director": "The Wachowskis", "year": 1999},
        "Interstellar": {"title": "Interstellar", "director": "Christopher Nolan", "year": 2014},
    }
    
    return json.dumps(movie_database.get(title, "Movie not found"))

# Example usage
print(get_movie_details("Inception"))

Define the Function Schema for the API

Now, we’ll define this function as a tool for the OpenAI API to use.

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_movie_details",
            "description": "Get details of a movie by its title",
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {
                        "type": "string",
                        "description": "The title of the movie",
                    },
                },
                "required": ["title"],
            },
        },
    }
]
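The model returns the arguments for this schema as a JSON-encoded string, so it’s worth validating them before invoking your function. Below is a minimal, self-contained sanity check; the `parameters` dict repeats the schema above so the snippet runs on its own, and `check_arguments` is an illustrative helper, not part of the OpenAI SDK:

```python
import json

# The "parameters" block from the tool definition above,
# repeated here so this example is self-contained
parameters = {
    "type": "object",
    "properties": {
        "title": {"type": "string", "description": "The title of the movie"},
    },
    "required": ["title"],
}

def check_arguments(arguments_json, parameters):
    """Parse the model's arguments string and verify that every
    required property is present before calling the function."""
    args = json.loads(arguments_json)
    missing = [key for key in parameters["required"] if key not in args]
    if missing:
        raise ValueError(f"Missing required arguments: {missing}")
    return args

print(check_arguments('{"title": "Inception"}', parameters))
```

This kind of lightweight check catches malformed or incomplete tool calls before they reach your function, which keeps the error surfaced on your side rather than deep inside the call.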

Define a Helper Function to Get Completion

This function will handle the API requests and process the responses.

def get_completion(messages, model="gpt-3.5-turbo-1106", temperature=0, max_tokens=300, tools=None, tool_choice=None):
    response = openai.chat.completions.create(
        model=model,
        messages=messages,
        temperature=temperature,
        max_tokens=max_tokens,
        tools=tools,
        tool_choice=tool_choice
    )
    return response.choices[0].message

Example: Fetching Movie Details

Let’s create a conversation where the user asks for movie details and the LLM calls our function to get the necessary information.

messages = [
    {
        "role": "user",
        "content": "Can you tell me about the movie Inception?",
    }
]

response = get_completion(messages, tools=tools)
print(response)

# Capture the function call arguments
args = json.loads(response.tool_calls[0].function.arguments)
print(get_movie_details(**args))
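In a larger application you will usually register several tools, so a common pattern is to dispatch on the tool name the model returns. The sketch below stubs the tool call arguments rather than hitting the live API, and `dispatch_tool_call` is an illustrative helper, not part of the OpenAI SDK:

```python
import json

def get_movie_details(title):
    """Stub of the movie lookup defined earlier in this guide."""
    movie_database = {
        "Inception": {"title": "Inception", "director": "Christopher Nolan", "year": 2010},
    }
    return json.dumps(movie_database.get(title, "Movie not found"))

# Map tool names (as declared in the tools schema) to local Python callables
available_tools = {"get_movie_details": get_movie_details}

def dispatch_tool_call(name, arguments):
    """Look up the requested tool by name and invoke it with the
    model's JSON-encoded arguments."""
    func = available_tools.get(name)
    if func is None:
        return json.dumps({"error": f"Unknown tool: {name}"})
    return func(**json.loads(arguments))

# Simulate the name and arguments a tool call would carry
print(dispatch_tool_call("get_movie_details", '{"title": "Inception"}'))
```

With a real response, you would pass `tool_call.function.name` and `tool_call.function.arguments` into the dispatcher instead of the hard-coded values.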

Controlling Function Calling Behavior

You can control whether the model should call a function automatically.

Automatic Function Calling

The model decides on its own whether to call a function.

get_completion(messages, tools=tools, tool_choice="auto")

No Function Calling

To prevent the model from calling any of the provided functions:

get_completion(messages, tools=tools, tool_choice="none")

Forced Function Calling

To force the model to call a specific function, pass its name in tool_choice:

get_completion(messages, tools=tools, tool_choice={"type": "function", "function": {"name": "get_movie_details"}})

Handling Multiple Function Calls

The OpenAI API supports calling multiple functions in one turn. For example, fetching details for multiple movies:

messages = [
    {
        "role": "user",
        "content": "Tell me about the movies Inception and Interstellar.",
    }
]

response = get_completion(messages, tools=tools)

# Assuming the model returns calls for both functions, you would handle each one.
for tool_call in response.tool_calls:
    args = json.loads(tool_call.function.arguments)
    print(get_movie_details(**args))

Passing Function Results Back to the Model

You might want to pass the result obtained from your API back to the model for further processing.

messages = [
    {"role": "user", "content": "Tell me about the movie Inception."}
]

assistant_message = get_completion(messages, tools=tools, tool_choice="auto")
assistant_message = json.loads(assistant_message.model_dump_json())
assistant_message["content"] = str(assistant_message["tool_calls"][0]["function"])

# Remove the legacy "function_call" field from the assistant message
del assistant_message["function_call"]

messages.append(assistant_message)

# Parse the arguments and get the movie details
args = json.loads(assistant_message["tool_calls"][0]["function"]["arguments"])
movie_details = get_movie_details(**args)

# Pass the movie details back to the model as a tool message
messages.append({
    "role": "tool",
    "tool_call_id": assistant_message["tool_calls"][0]["id"],
    "name": assistant_message["tool_calls"][0]["function"]["name"],
    "content": movie_details,
})

final_response = get_completion(messages, tools=tools)
print(final_response)
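Putting the pieces together, a common pattern is to loop until the model stops requesting tools. The sketch below substitutes a stubbed `fake_get_completion` for the real API call so it runs offline; the stub and `run_turn` are illustrative, and in a live run you would also append the assistant message carrying the tool calls before the tool results, as shown above.

```python
import json
from types import SimpleNamespace

def get_movie_details(title):
    """Stub of the movie lookup defined earlier in this guide."""
    movie_database = {
        "Inception": {"title": "Inception", "director": "Christopher Nolan", "year": 2010},
    }
    return json.dumps(movie_database.get(title, "Movie not found"))

def fake_get_completion(messages, **kwargs):
    """Stand-in for get_completion(): returns a tool call on the first
    turn and a plain answer once a tool result is in the messages."""
    if not any(m.get("role") == "tool" for m in messages):
        call = SimpleNamespace(
            id="call_1",
            function=SimpleNamespace(
                name="get_movie_details",
                arguments='{"title": "Inception"}',
            ),
        )
        return SimpleNamespace(content=None, tool_calls=[call])
    return SimpleNamespace(
        content="Inception (2010) was directed by Christopher Nolan.",
        tool_calls=None,
    )

def run_turn(messages, get_completion):
    """Keep calling the model, executing requested tools and feeding the
    results back, until it answers without requesting a tool."""
    while True:
        response = get_completion(messages, tools=None)
        if not response.tool_calls:
            return response.content
        for call in response.tool_calls:
            # With the real API, also append the assistant message that
            # contains the tool calls before appending the tool results.
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "name": call.function.name,
                "content": get_movie_details(**json.loads(call.function.arguments)),
            })

messages = [{"role": "user", "content": "Tell me about the movie Inception."}]
print(run_turn(messages, fake_get_completion))
```

Swapping `fake_get_completion` for the real `get_completion` defined earlier turns this into a working conversational loop.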

Conclusion

In this tutorial, we explored how to use function calling with the OpenAI API to dynamically fetch and use data based on user input. By defining functions and controlling their invocation, you can create more interactive and capable applications. This method can be applied to various use cases, such as querying databases, calling external APIs or performing calculations.

Feel free to extend this example to suit your specific needs and experiment with different functions and behaviors. Happy coding!

About the author: Oladimeji Sowole

Oladimeji Sowole is a member of the Andela Community. A Data Scientist and Data Analyst with more than 6 years of professional experience building data visualizations with different tools and predictive models for actionable insights, he has hands-on expertise in implementing technologies such as Python, R, and SQL to develop solutions that drive client satisfaction. A collaborative team player, he has a great passion for solving problems.
