
How function calling and metadata make all the difference for AI integration, especially in .NET

Author: Sebastian Gingter • Published: 20.12.2023 • Category: AI, Azure OpenAI, OpenAI

Generative AI provides much more than just adding semantic search capabilities to your application.
As soon as we equip our AI with tools, and the AI model can decide to use these tools to be more effective for us, we put it on the next step of the “evolutionary ladder” and elevate it from just being a helpful chat companion to what is called an “Agent”.

By leveraging function calling – as it is called with Open AI’s models – we can equip the AI model with tools and give it the possibility to execute methods within our application. The output of these functions can then be passed back to the model so that it can generate a better answer for the user. Alternatively, the execution of the method itself can be the actual goal, for example when the model triggers an action on behalf of the user.

To make the AI integration much more efficient and helpful, we can make use of a lot of features that C# and .NET offer us out of the box. This article dives into what’s necessary to level up your generative AI integration and really let your co-pilot pilot your .NET-based applications.

What is function calling?

First, we need to keep in mind that calling a Large Language Model (LLM, or just “model”) from our application is, in fact, just an API call. We pass our input to the model with the API call, it processes this input token by token and then generates an answer token by token, and we either get the response back in one piece when it is finished or we can receive the result tokens one by one as a stream. After the response has been returned, all state is lost and the model completely forgets what it just did.

Function calling allows the model to answer in a predefined way with a certain JSON structure. This response allows the code receiving it to, well, call a certain function. Function calling consists of two parts: First, when calling the model, the developer provides additional information about the methods that are available for calling, together with all of the parameters required to call them, in a defined and structured format. The request parameter used to pass these functions to the model is called “tools”. Second, the model can respond with a structured request to call one or more of these tools, which our application then evaluates.

Open AI coined the term “function calling” and you can find its specification here: https://platform.openai.com/docs/guides/function-calling.

An example of what that looks like when calling the Open AI API directly would be (curl in PowerShell):

curl 'https://api.openai.com/v1/chat/completions' `
-X POST `
-H 'host: api.openai.com' `
-H 'accept: application/json' `
-H 'authorization: Bearer {YOUR OPENAI API KEY HERE}' `
-H 'content-type: application/json' `
-d '
{
  "messages":[
    {
      "content": "What\u0027s the weather like in Karlsruhe, Hausach and Berlin?",
      "role": "user"
    }
  ],
  "model": "gpt-4-1106-preview",
  "tools":[
    {
      "function":
      {
        "name": "Functions_GetWeather",
        "description": "Gets the weather for a given location.",
        "parameters":
        {
          "type": "object",
          "properties":
          {
            "location":
            {
              "type": "string",
              "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit":
            {
              "type": "string",
              "description": "The unit of temperature to return.",
              "enum": ["Fahrenheit", "Celsius", "Kelvin"]
            }
          },
          "required": ["location"]
        }
      },
      "type": "function"
    }
  ]
}'

If the model deems that it could make good use of a function’s output – anticipating that it will be called again in a second API call with the additional information from that function – or that executing a function could help the user, it will answer with this specific JSON structure, which tells us which function should be called and which parameter values should be used for that call. Open AI fine-tuned all of their current models to produce this function calling JSON in a very consistent way, so we don’t need to worry much about malformed answers here.

This is an answer to our call from above:

{
  "id": "chatcmpl-8XpGSLC9uGO2vPxA7c5kPWEpE1ACN",
  "object": "chat.completion",
  "created": 1703072576,
  "model": "gpt-4-1106-preview",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_UU1lngrcTiTgEaOWMHRrshlq",
            "type": "function",
            "function": {
              "name": "Functions_GetWeather",
              "arguments": "{\"location\": \"Karlsruhe, Germany\"}"
            }
          },
          {
            "id": "call_0GnQoZB7zKmd2taAfzqWnKSA",
            "type": "function",
            "function": {
              "name": "Functions_GetWeather",
              "arguments": "{\"location\": \"Hausach, Germany\"}"
            }
          },
          {
            "id": "call_rT4QFHlHGXB61SjZN7lpqoHu",
            "type": "function",
            "function": {
              "name": "Functions_GetWeather",
              "arguments": "{\"location\": \"Berlin, Germany\"}"
            }
          }
        ]
      },
      "logprobs": null,
      "finish_reason": "tool_calls"
    }
  ],
  "usage": {
    "prompt_tokens": 100,
    "completion_tokens": 120,
    "total_tokens": 220
  },
  "system_fingerprint": "fp_3905aa4f79"
}

After that response, our code should call the requested methods with the provided parameters. We can then make another API call to the LLM, this time providing the results of the function calls back to the model as an “observation”. The model can then decide to generate a final answer, or to call other functions if that makes sense.
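For illustration, the second API call could look roughly like this on the wire. This is a sketch: the tool_call_id values must match the ids from the model’s response above, the weather data is made up, and the elided parts are the same as in the first request. Note that the assistant message containing the tool calls is included in the history, followed by one “tool” message per call:

{
  "messages": [
    {
      "role": "user",
      "content": "What's the weather like in Karlsruhe, Hausach and Berlin?"
    },
    {
      "role": "assistant",
      "content": null,
      "tool_calls": [ /* the three tool calls from the response above */ ]
    },
    {
      "role": "tool",
      "tool_call_id": "call_UU1lngrcTiTgEaOWMHRrshlq",
      "content": "The weather in Karlsruhe, Germany is 12 degrees Celsius."
    },
    {
      "role": "tool",
      "tool_call_id": "call_0GnQoZB7zKmd2taAfzqWnKSA",
      "content": "The weather in Hausach, Germany is 11 degrees Celsius."
    },
    {
      "role": "tool",
      "tool_call_id": "call_rT4QFHlHGXB61SjZN7lpqoHu",
      "content": "The weather in Berlin, Germany is 9 degrees Celsius."
    }
  ],
  "model": "gpt-4-1106-preview",
  "tools": [ /* the same tools array as in the first request */ ]
}

With these observations in the context, the model typically answers with a normal assistant message that summarizes the weather for all three cities.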

As you can see in the first request above, details about our functions are passed to the model in the “tools” array. An example of such a function definition in C# using the Azure Open AI SDK looks like this:

var getWeatherTool = new ChatCompletionsFunctionToolDefinition()
{
    Name = "get_current_weather",
    Description = "Get the current weather in a given location",
    Parameters = BinaryData.FromObjectAsJson(
    new
    {
        Type = "object",
        Properties = new
        {
            Location = new
            {
                Type = "string",
                Description = "The city and state, e.g. San Francisco, CA",
            },
            Unit = new
            {
                Type = "string",
                Enum = new[] { "celsius", "fahrenheit", },
            }
        },
        Required = new[] { "location", },
    },
    new JsonSerializerOptions() { PropertyNamingPolicy = JsonNamingPolicy.CamelCase, }),
};

Does it work with non-Open AI models?

The short answer is: In general, yes. But probably not as well.

The longer answer is that Open AI fine-tuned their models by training them on the specific format in which the functions are provided to the model and on the JSON structure it needs to return. This means that Open AI models know that structure “by heart” and can reliably produce these calls.

Models that are not trained specifically for this task and this format can of course still understand the provided functions, and they can also be prompted to produce a response that contains all of the information required to call the function. In my early research I made that concept work with an earlier GPT-3.5 model that wasn’t yet fine-tuned for what was later called “function calling”, but it sometimes messed up the response a bit, and I couldn’t reliably parse it when my code was supposed to call a function. This will most likely be true for any other LLM as well. It also implies that other models don’t magically adhere to the “contract” that Open AI function calling defines: you will have to nudge them by prompting to respond in a compatible way, or in a different way that you would like to parse yourself.

That said, a lot of other inference engines that can run models like Llama or Mistral provide an Open AI-compatible HTTP API, so we can consider this tools structure a de-facto standard. For the remainder of this article I therefore assume Open AI’s implementation of function calling, but with a bit of additional effort you can surely make this work with other models that use different interfaces, too.

What do we need to know about function calling?

As we just learned, function calling in itself is relatively easy: We provide the possible functions to the model and then evaluate the response to see whether we should call one of them, and with which parameter values. However, there are some things to consider here.

First, there is the data that we need to provide to the model: all of the function names, a description of what each function does, and the list of parameters for each function with all of their names and descriptions. All of this must be made known to the model when it executes the request, which in turn means that all of this information consumes input tokens and is counted against the available context size of the model.

Based on this fact, I suggest not providing all available functions to the model on each and every call. That would be inefficient, as the model needs to process all of this information on every single call. The more data you give to the model, the more time it spends on that analysis, which makes processing slower and, due to the token usage, also more expensive. To be more efficient, our application ideally should make smart decisions about when to provide which functions to the model.

Second, we need to think about how we return the results of our functions back to the model. A lot of examples pass JSON data back to the model, as Open AI’s models are also trained and fine-tuned on understanding structured data. It is, however, also possible to formulate the response to the model in natural language, which might give it a better understanding of the “observation” from the function call.
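To make that concrete, here are two ways the same observation could be phrased (a small sketch with made-up weather values):

// Structured: easy for code to produce, and Open AI models parse JSON well.
var jsonObservation = "{ \"location\": \"Karlsruhe, Germany\", \"temperature\": 12, \"unit\": \"celsius\" }";

// Natural language: sometimes easier for the model to weave into its answer.
var textObservation = "The current weather in Karlsruhe, Germany is 12 degrees Celsius.";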

Making Function Calling a Breeze in .NET

As we saw in the code example above, we need to pass the definition of our functions (or tools) to the model in a certain JSON structure. In languages like Python or JavaScript, such function definitions are often written out by hand, but luckily we can use the reflection features of .NET to gather all of the information about the functions we want to call at runtime.

Let’s see some code that generates a function definition with parameter definitions from our methods.

using System.ComponentModel;
using System.Reflection;

public record FunctionDefinition(string TypeName, string MethodName, string Description, ParameterDefinition[]? Parameters);
public record ParameterDefinition(string Name, string Description, string Type, bool Required, string[]? EnumValues);

public static class FunctionExtensions
{
    public static FunctionDefinition GetDefinition(this MethodInfo method)
    {
        var parameters = method
            .GetParameters()  
            .Select(p => new ParameterDefinition(
                p.Name ?? String.Empty,
                p.GetDescription(),
                p.ParameterType.GetParameterTypeName(),
                !p.IsOptional,
                p.GetEnumValues()
            )
        ).ToArray();

        return new FunctionDefinition(
            method.DeclaringType?.Name ?? String.Empty,
            method.Name,
            method.GetDescription(),
            parameters.Any() ? parameters : null
        );
    }

    private static string GetDescription(this ParameterInfo parameter)  
        => parameter?.GetCustomAttribute<DescriptionAttribute>()?.GetDescription() ?? parameter?.Name ?? String.Empty;    

    private static string GetDescription(this MethodInfo method)
        => method.GetCustomAttribute<DescriptionAttribute>()?.GetDescription() ?? method.Name;

    private static string? GetDescription(this DescriptionAttribute attribute)
        => attribute?.Description;

    private static string GetParameterTypeName(this Type type)
    {
        if (type.IsGenericType && type.GetGenericTypeDefinition() == typeof(Nullable<>))
            type = type.GetGenericArguments()[0];

        if (type.IsEnum)
            return "string";

        // The returned names must be valid JSON Schema types, as the function
        // parameters are described to the model in JSON Schema format.
        switch (Type.GetTypeCode(type))
        {
            case TypeCode.Boolean:
                return "boolean";
            case TypeCode.Byte:
            case TypeCode.SByte:
            case TypeCode.UInt16:
            case TypeCode.UInt32:
            case TypeCode.UInt64:
            case TypeCode.Int16:
            case TypeCode.Int32:
            case TypeCode.Int64:
                return "integer";
            case TypeCode.Decimal:
            case TypeCode.Double:
            case TypeCode.Single:
                return "number";
            case TypeCode.DateTime:
                // JSON Schema has no dedicated datetime type; dates travel as strings.
                return "string";
            default:
                return "string";
        }
    }

    private static string[]? GetEnumValues(this ParameterInfo parameter)
    {
        var type = parameter.ParameterType;
        if (type.IsGenericType && type.GetGenericTypeDefinition() == typeof(Nullable<>))
        {
            type = type.GetGenericArguments()[0];
        }

        return type.IsEnum
            ? type.GetEnumNames()
            : null;
    }
}

As you can see, we use normal reflection features to extract all required information from our code. Let’s take the common “weather” example and expose a method that returns the weather information for a certain location in a certain unit.

The usage would look like this:

public enum Units
{
    Fahrenheit,
    Celsius,
    Kelvin,
}

public class Functions
{
    [Description("Gets the weather for a given location.")]
    public string GetWeather(
        [Description("The city and state, e.g. San Francisco, CA")]
        string location,
        [Description("The unit of temperature to return.")]
        Units? unit = Units.Celsius)
    {
        return $"The weather in {location} is 42 degrees {unit.ToString()}.";
    }
}

// Build our FunctionDefinition from the method, using the extension method from above
var weatherToolDefinition = typeof(Functions).GetMethod(nameof(Functions.GetWeather))!.GetDefinition();

Here we have the function GetWeather, and as you can see we make use of the normal System.ComponentModel.DescriptionAttribute to amend our code with a textual description of what the method and its parameters are for. This is the additional information we pass to the model so that it knows more about the method and can produce better and more reliable function calls.

In the end, we have our representation of the method and of all of its parameters, including the possible values for the enumeration, in our weatherToolDefinition variable. We now only need to pass this information to our model.

Calling the functions

In this example I am using the Open AI API client library from the Azure.AI.OpenAI NuGet package in version 1.0.0-beta.12 to call the Open AI GPT models, which can also be deployed on Azure Open AI Services. This client library provides its own data type for functions, and since we have already gathered all required information, we only need to transform our data into the type provided by the library.

Again, this is an extension method:

public static ChatCompletionsFunctionToolDefinition ToToolDefinition(this FunctionDefinition definition)
{
    return new ChatCompletionsFunctionToolDefinition()
    {
        Name = $"{definition.TypeName}_{definition.MethodName}",
        Description = definition.Description,
        Parameters = BinaryData.FromObjectAsJson(
            new
            {
                Type = "object",
                Properties = definition.Parameters?.ToDictionary(
                    p => p.Name,
                    p => new
                    {
                        Type = p.Type,
                        Description = p.Description,
                        Enum = p.EnumValues,
                    }
                ),
                Required = definition.Parameters?.Where(p => p.Required).Select(p => p.Name),
            },
            new JsonSerializerOptions()
            {
                PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
                DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
            }
        )
    };
}

This tool definition can now be passed to the model as part of the chat completions options:

var client = new OpenAIClient("YOUR OPEN AI API KEY GOES HERE");

var chatCompletionsOptions = new ChatCompletionsOptions()
{
    DeploymentName = "gpt-4-1106-preview",
    Messages = { new ChatRequestUserMessage("What's the weather like in Karlsruhe, Hausach and Berlin?") },
    Tools = { weatherToolDefinition.ToToolDefinition(), },
};

Response<ChatCompletions> response = await client.GetChatCompletionsAsync(chatCompletionsOptions);
Code language: C# (cs)

Now we need to evaluate the response. If it contains calls to our tools, we need to execute them, take the results of those calls, and make another model call. Let’s also look at that code:

// Purely for convenience and clarity, this standalone local method handles tool call responses.
ChatRequestToolMessage GetToolCallResponseMessage(ChatCompletionsToolCall toolCall)
{
    var functionToolCall = toolCall as ChatCompletionsFunctionToolCall;
    if (functionToolCall?.Name == "Functions_GetWeather")
    {
        // EXAMPLE, not a direct call:
        string unvalidatedArguments = functionToolCall.Arguments;
        Console.WriteLine("Get weather called with arguments: " + unvalidatedArguments);

        // Here we would need to parse the JSON in unvalidatedArguments, create
        // the actual parameters with the corresponding .NET types, get the correct
        // instance of the "Functions" class (e.g. by using ActivatorUtilities and
        // the IServiceProvider), and then do the actual call by invoking the
        // MethodInfo with those parameters. That is a lot of code (a sketch follows
        // below), so assume we did just that:
        var functionResultData = "31 celsius";

        return new ChatRequestToolMessage(functionResultData, toolCall.Id);
    }

    throw new NotImplementedException();
}

// Get the first choice of the model from the response
ChatChoice responseChoice = response.Value.Choices[0];

// Handle tool calls if we have them
if (responseChoice.FinishReason == CompletionsFinishReason.ToolCalls)
{
    List<ChatRequestToolMessage> toolCallResolutionMessages = new();
    foreach (ChatCompletionsToolCall toolCall in responseChoice.Message.ToolCalls)
    {
        toolCallResolutionMessages.Add(GetToolCallResponseMessage(toolCall));
    }

    // Include the ToolCall message from the assistant in the conversation history, too
    var toolCallHistoryMessage = new ChatRequestAssistantMessage(responseChoice.Message.Content);
    foreach (ChatCompletionsToolCall requestedToolCall in responseChoice.Message.ToolCalls)
    {
        toolCallHistoryMessage.ToolCalls.Add(requestedToolCall);
    }

    // Now make a new request using all the messages, including the original
    chatCompletionsOptions.Messages.Add(toolCallHistoryMessage);
    foreach (ChatRequestToolMessage resolutionMessage in toolCallResolutionMessages)
    {
        chatCompletionsOptions.Messages.Add(resolutionMessage);
    }
    // pass the tool call response(s) to the model again and wait for the next response
    response = await client.GetChatCompletionsAsync(chatCompletionsOptions);
}

// write out the answer from the model
foreach (var choice in response.Value.Choices)
{
    Console.WriteLine(choice.Message.Content);
}

While we omitted the actual method invocation in this code example for brevity (and, to be honest, my code is way too prototype-ish to share with you just yet), this should give you a good insight into how easy it is to leverage normal .NET metadata, via the Description attribute, to give the model all of the information it needs to call your methods. Together with well-named methods and well-named parameters, this helps the model greatly in making well-informed decisions about when and how to call your methods.
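To give you an idea, here is a minimal sketch of that missing invocation step. It is deliberately simplified: it assumes the FunctionDefinition infrastructure from above, only handles the parameter kinds used in the GetWeather example (strings and optional enums), creates the Functions instance directly instead of resolving it through the IServiceProvider, and needs using System.Reflection; and using System.Text.Json;:

// A minimal sketch of the invocation step we skipped above: parse the JSON
// arguments from the tool call and invoke the MethodInfo with matching values.
string InvokeFunction(MethodInfo method, object instance, string argumentsJson)
{
    using var document = JsonDocument.Parse(argumentsJson);

    var arguments = method.GetParameters().Select(parameter =>
    {
        // Fall back to the declared default when the model omitted an optional parameter.
        if (!document.RootElement.TryGetProperty(parameter.Name!, out var value))
            return parameter.IsOptional ? parameter.DefaultValue : null;

        var targetType = Nullable.GetUnderlyingType(parameter.ParameterType) ?? parameter.ParameterType;

        // Enums were advertised to the model as strings, so parse them back by name.
        if (targetType.IsEnum)
            return Enum.Parse(targetType, value.GetString()!, ignoreCase: true);

        return value.Deserialize(targetType);
    }).ToArray();

    // Invoke the method and hand its result back as the tool call "observation".
    return method.Invoke(instance, arguments)?.ToString() ?? String.Empty;
}

// Hypothetical usage inside GetToolCallResponseMessage:
// var method = typeof(Functions).GetMethod(nameof(Functions.GetWeather))!;
// var result = InvokeFunction(method, new Functions(), functionToolCall.Arguments);
// return new ChatRequestToolMessage(result, toolCall.Id);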

Additional metadata

As I mentioned earlier, it might be beneficial not to provide all functions to the model in every call. Using additional metadata, like custom attributes, you can add information to your methods that gives them even more “context”. You can then create multiple lists (or dictionaries) of your functions, grouped by these additional attributes.

Depending on where the user currently is in your application, or on what functionality you want to offer the model in a certain context, you can easily select the corresponding subset of function definitions based on that additional metadata. This lets you optimize your model calls and make them faster and cheaper.
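Such a grouping could look like the following sketch. The FunctionContextAttribute and the context names are hypothetical; it assumes the GetWeather method from above were tagged with [FunctionContext("weather")], and it needs using System.Reflection;:

// A hypothetical custom attribute to tag tool methods with an application context.
[AttributeUsage(AttributeTargets.Method)]
public class FunctionContextAttribute : Attribute
{
    public FunctionContextAttribute(string context) => Context = context;
    public string Context { get; }
}

// Group all public tool methods by their context, reusing the extension methods
// from above, so the right subset can be picked per request.
var toolsByContext = typeof(Functions)
    .GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
    .GroupBy(m => m.GetCustomAttribute<FunctionContextAttribute>()?.Context ?? "general")
    .ToDictionary(
        group => group.Key,
        group => group.Select(m => m.GetDefinition().ToToolDefinition()).ToList());

// e.g. when the user is on a weather-related page:
foreach (var tool in toolsByContext["weather"])
{
    chatCompletionsOptions.Tools.Add(tool);
}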

Thinking a bit more outside of the box, you could also leverage a RAG-like approach for tool selection. For example, you could create embeddings of the descriptions of your functions. You then also create an embedding of each user input and do a similarity search of that input against your available functions. Embeddings are quite cheap, and you can also compute them with local embedding models to avoid external API calls here. With that similarity search you determine a certain number of functions that are most likely to semantically match the intent of your user, and then provide these most likely candidates to the model. This way you can strike a better balance between passing too many, too few, or possibly the wrong functions to the model.
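Here is a rough sketch of that idea, using the embeddings endpoint of the same Azure.AI.OpenAI client. The deployment name "text-embedding-ada-002" and the topK value are assumptions, and a real implementation would compute and cache the function embeddings once instead of on every request:

// A sketch: pick the functions whose descriptions are semantically closest
// to the user input, using the embeddings endpoint of the same client.
async Task<List<FunctionDefinition>> SelectRelevantFunctionsAsync(
    OpenAIClient client,
    string userInput,
    IReadOnlyList<FunctionDefinition> allFunctions,
    int topK = 3)
{
    // Embed the user input and all function descriptions in one call.
    var inputs = new List<string> { userInput };
    inputs.AddRange(allFunctions.Select(f => f.Description));

    var response = await client.GetEmbeddingsAsync(
        new EmbeddingsOptions("text-embedding-ada-002", inputs));
    var vectors = response.Value.Data.Select(d => d.Embedding.ToArray()).ToList();

    // Rank the functions by cosine similarity to the user input and keep the top ones.
    var query = vectors[0];
    return allFunctions
        .Select((function, i) => (function, score: CosineSimilarity(query, vectors[i + 1])))
        .OrderByDescending(x => x.score)
        .Take(topK)
        .Select(x => x.function)
        .ToList();
}

static float CosineSimilarity(float[] a, float[] b)
{
    float dot = 0, magA = 0, magB = 0;
    for (var i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        magA += a[i] * a[i];
        magB += b[i] * b[i];
    }
    return dot / (MathF.Sqrt(magA) * MathF.Sqrt(magB));
}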

Summary

In this article you learned about the concept of tools, how to provide them to the LLM, and how to leverage the Open AI models that are fine-tuned for “function calling” in your .NET application. This can make your models much more powerful and raise them from simple chat helpers to agents that can interact with their surrounding application.

You also learned an easy strategy that uses reflection and attributes to provide the function data to the model in a very efficient and .NET-like way, while keeping in mind that providing too many functions to the model in each API call eats into your token budget and also impacts performance.

Because of that, we also discussed possible further optimizations that select only a subset of all available functions to pass to the model, based on application context and user intent.


Sebastian Gingter

I have been a professional software developer for over two decades, and in that time I had the luck to experience a lot of technologies and to see how software development in general changed over the years. Here at Thinktecture my primary focus is on backend technologies with (ASP).NET Core, Artificial Intelligence / AI, developer productivity and tooling, as well as software quality. Since I held my first conference talk back in 2008, I love sharing my experience and learnings with you. I speak at conferences internationally and write articles here, on my personal blog, and in print.
