This package provides helper methods for interacting with the Azure OpenAI Chat Completions API, with and without your own data.
## Installation

```shell
dotnet add package Dewiride.Azure.AI.OpenAI.Helper
```

## Overview

The Azure OpenAI Helper provides utility methods to interact with Azure OpenAI endpoints for both standard API requests and streaming responses. It includes robust retry logic and detailed logging for handling errors and timeouts.
The package depends on `Newtonsoft.Json` and `Microsoft.Extensions.Logging`.

## GetChatCompletionResponseAsync

Send a standard request to the Azure OpenAI API and receive a deserialized response.
```csharp
public async Task<TResponse?> GetChatCompletionResponseAsync<TRequest, TResponse>(
    string apiEndpoint,
    string apiKey,
    TRequest dataRequest,
    int maxRetryAttempts = 10,
    int retryDelayMs = 1000
);
```

**Parameters**

- `apiEndpoint` (string): The URL of the Azure OpenAI endpoint.
- `apiKey` (string): Your Azure OpenAI API key.
- `dataRequest` (TRequest): The request payload.
- `maxRetryAttempts` (int): Maximum number of retry attempts on failure (default: 10).
- `retryDelayMs` (int): Delay between retries in milliseconds (default: 1000).

**Example**

```csharp
var helper = new AzureOpenAiHelper(logger);

var request = new
{
    model = "gpt-4",
    messages = new[]
    {
        new { role = "system", content = "You are a helpful assistant." },
        new { role = "user", content = "Hello, how are you?" }
    }
};

var response = await helper.GetChatCompletionResponseAsync<object, OpenAiDataResponse>(
    apiEndpoint: "https://<your-endpoint>.openai.azure.com/openai/deployments/<deployment-id>/chat/completions?api-version=2024-01-01",
    apiKey: "<your-api-key>",
    dataRequest: request
);

if (response != null)
{
    Console.WriteLine($"Response: {response.Choices.FirstOrDefault()?.Message?.Content}");
}
else
{
    Console.WriteLine("Failed to retrieve a response.");
}
```
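Because the method is generic over `TResponse`, you can deserialize into your own DTO rather than `OpenAiDataResponse`. Below is a minimal sketch of such a type, assuming the standard chat-completions JSON shape; the class and property names here are illustrative and not part of this package:

```csharp
using System.Collections.Generic;
using Newtonsoft.Json;

// Hypothetical DTO mirroring the chat-completions response payload.
public class ChatCompletionResponse
{
    [JsonProperty("choices")]
    public List<ChatChoice> Choices { get; set; } = new();
}

public class ChatChoice
{
    [JsonProperty("message")]
    public ChatMessage? Message { get; set; }
}

public class ChatMessage
{
    [JsonProperty("role")]
    public string? Role { get; set; }

    [JsonProperty("content")]
    public string? Content { get; set; }
}
```

You would then call `GetChatCompletionResponseAsync<object, ChatCompletionResponse>(...)` and read `response.Choices` as in the example above.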
## GetChatCompletionStreamedResponseAsync

Stream the response from Azure OpenAI, processing each chunk of data as it arrives.
```csharp
public async Task GetChatCompletionStreamedResponseAsync<TRequest>(
    string azureOpenAiEndpoint,
    string azureOpenAiKey,
    TRequest request,
    Action<string> onMessageReceived,
    int maxRetryAttempts = 5,
    int retryDelayMs = 1000
);
```

**Parameters**

- `azureOpenAiEndpoint` (string): The URL of the Azure OpenAI streaming endpoint.
- `azureOpenAiKey` (string): Your Azure OpenAI API key.
- `request` (TRequest): The request payload.
- `onMessageReceived` (Action<string>): A callback to handle streamed content.
- `maxRetryAttempts` (int): Maximum number of retry attempts on failure (default: 5).
- `retryDelayMs` (int): Delay between retries in milliseconds (default: 1000).

**Example**

```csharp
var helper = new AzureOpenAiHelper(logger);

var request = new
{
    model = "gpt-4",
    messages = new[]
    {
        new { role = "system", content = "You are a helpful assistant." },
        new { role = "user", content = "Tell me a story!" }
    },
    stream = true
};

await helper.GetChatCompletionStreamedResponseAsync(
    azureOpenAiEndpoint: "https://<your-endpoint>.openai.azure.com/openai/deployments/<deployment-id>/chat/completions?api-version=2024-01-01",
    azureOpenAiKey: "<your-api-key>",
    request: request,
    onMessageReceived: message =>
    {
        Console.WriteLine($"Streamed content: {message}");
    }
);
```

## Logging

The AzureOpenAiHelper relies on `ILogger<AzureOpenAiHelper>` for logging. To use this feature, ensure you inject an appropriate logger instance into the helper.
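Outside a host that already provides dependency injection (for example, a console app), a suitable logger can be built with `LoggerFactory`. A minimal sketch, assuming the `Microsoft.Extensions.Logging.Console` package is referenced:

```csharp
using Microsoft.Extensions.Logging;

// Build a console logger and hand it to the helper.
using var loggerFactory = LoggerFactory.Create(builder =>
{
    builder.AddConsole();                      // write log output to stdout
    builder.SetMinimumLevel(LogLevel.Warning); // surface retry warnings and errors
});

ILogger<AzureOpenAiHelper> logger = loggerFactory.CreateLogger<AzureOpenAiHelper>();
var helper = new AzureOpenAiHelper(logger);
```

In an ASP.NET Core or Generic Host application, you would instead let the container inject `ILogger<AzureOpenAiHelper>` into your consuming service.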
## Retry Configuration

Adjust `maxRetryAttempts` and `retryDelayMs` to suit your application's needs.

## Dependencies

- **Newtonsoft.Json**: For JSON serialization and deserialization.
- **Microsoft.Extensions.Logging**: For logging errors and warnings.

## License

This project is licensed under the MIT License. See LICENSE for details.