.Net wrapper for OpenAI with Dependency Injection integration, factory integration (you may inject more than one endpoint), and Azure integration (you may swap quickly and easily between the OpenAI endpoint and any Azure endpoint). You can calculate tokens and cost for each request (before sending it) and for each response, and you can access the dashboard API to retrieve your current or previous billing.
A simple C# .NET wrapper library to use with OpenAI's API.
$ dotnet add package Rystem.OpenAi
This library targets .NET 9 or above.
Check out my Rystem framework to build .Net web apps faster (easy integration with the repository pattern or CQRS for your Azure services).
Install the package Rystem.OpenAi from NuGet. Here's how via command line:
Install-Package Rystem.OpenAi
📖 Back to summary
You may register one or more integrations at the same time with Dependency Injection. Furthermore, you don't need to use the Dependency Injection pattern at all: you can use a custom setup instead.
var apiKey = configuration["Azure:ApiKey"];
services.AddOpenAi(settings =>
{
settings.ApiKey = apiKey;
//add a default model for the chat client; you can configure everything
//in this way to best prepare your client for the request
settings.DefaultRequestConfiguration.Chat = chatClient =>
{
chatClient.WithModel(configuration["OpenAi2:ModelName"]!);
};
}, "custom integration name");
var openAiApi = serviceProvider.GetRequiredService<IFactory<IOpenAi>>();
var firstInstanceOfChatClient = openAiApi.Create("custom integration name").Chat;
var openAiChatApi = serviceProvider.GetRequiredService<IFactory<IOpenAiChat>>();
var anotherInstanceOfChatClient = openAiChatApi.Create("custom integration name");
Use the following setup when you want to use the integration with Azure.
builder.Services.AddOpenAi(settings =>
{
settings.ApiKey = apiKey;
settings.Azure.ResourceName = "AzureResourceName (Name of your deployed service on Azure)";
});
See how to create an app registration here.
var resourceName = builder.Configuration["Azure:ResourceName"];
var clientId = builder.Configuration["AzureAd:ClientId"];
var clientSecret = builder.Configuration["AzureAd:ClientSecret"];
var tenantId = builder.Configuration["AzureAd:TenantId"];
builder.Services.AddOpenAi(settings =>
{
settings.Azure.ResourceName = resourceName;
settings.Azure.AppRegistration.ClientId = clientId;
settings.Azure.AppRegistration.ClientSecret = clientSecret;
settings.Azure.AppRegistration.TenantId = tenantId;
});
See how to create a managed identity here.
System Assigned Managed Identity
var resourceName = builder.Configuration["Azure:ResourceName"];
builder.Services.AddOpenAi(settings =>
{
settings.Azure.ResourceName = resourceName;
settings.Azure.ManagedIdentity.UseDefault = true;
});
See how to create a managed identity here.
User Assigned Managed Identity
var resourceName = builder.Configuration["Azure:ResourceName"];
var managedIdentityId = builder.Configuration["ManagedIdentity:ClientId"];
builder.Services.AddOpenAi(settings =>
{
settings.Azure.ResourceName = resourceName;
settings.Azure.ManagedIdentity.Id = managedIdentityId;
});
📖 Back to summary
You may configure a different API version for each endpoint.
services.AddOpenAi(settings =>
{
settings.ApiKey = azureApiKey;
//default version for all endpoints
settings.DefaultVersion = "2024-08-01-preview";
//different version for chat endpoint
settings.DefaultRequestConfiguration.Chat = chatClient =>
{
chatClient.ForceModel("gpt-4");
chatClient.WithVersion("2024-08-01-preview");
};
});
In this example we set a different version only for the chat endpoint; all the other endpoints will use the default version.
📖 Back to summary
You may install more than one OpenAi integration by using the name parameter in the configuration.
In the next example we have two different configurations: one with OpenAi and the default name, and one with Azure OpenAi and the name "Azure".
var apiKey = context.Configuration["OpenAi:ApiKey"];
services
.AddOpenAi(settings =>
{
settings.ApiKey = apiKey;
});
var azureApiKey = context.Configuration["Azure:ApiKey"];
var resourceName = context.Configuration["Azure:ResourceName"];
var clientId = context.Configuration["AzureAd:ClientId"];
var clientSecret = context.Configuration["AzureAd:ClientSecret"];
var tenantId = context.Configuration["AzureAd:TenantId"];
services.AddOpenAi(settings =>
{
settings.ApiKey = azureApiKey;
settings.DefaultRequestConfiguration.Chat = chatClient =>
{
chatClient.ForceModel("gpt-4");
chatClient.WithVersion("2024-08-01-preview");
};
settings.Azure.ResourceName = resourceName;
settings.Azure.AppRegistration.ClientId = clientId;
settings.Azure.AppRegistration.ClientSecret = clientSecret;
settings.Azure.AppRegistration.TenantId = tenantId;
}, "Azure");
You can retrieve an integration through the IFactory<> interface (from Rystem) and the name of the integration.
private readonly IFactory<IOpenAi> _openAiFactory;
public CompletionEndpointTests(IFactory<IOpenAi> openAiFactory)
{
_openAiFactory = openAiFactory;
}
public async ValueTask DoSomethingWithDefaultIntegrationAsync()
{
var openAiApi = _openAiFactory.Create();
openAiApi.Chat.........
}
public async ValueTask DoSomethingWithAzureIntegrationAsync()
{
var openAiApi = _openAiFactory.Create("Azure");
openAiApi.Chat.........
}
or get the more specific service
private readonly IFactory<IOpenAiChat> _chatFactory;
public Constructor(IFactory<IOpenAiChat> chatFactory)
{
_chatFactory = chatFactory;
}
public async ValueTask DoSomethingWithAzureIntegrationAsync()
{
var chat = _chatFactory.Create(name);
chat.ExecuteRequestAsync(....);
}
📖 Back to summary
You may configure your integration in a static constructor or during startup without the dependency injection pattern.
OpenAiServiceLocator.Configuration.AddOpenAi(settings =>
{
settings.ApiKey = apiKey;
}, "NoDI");
and you can use it through the same static class OpenAiServiceLocator and its static Create method
var openAiApi = OpenAiServiceLocator.Instance.Create(name);
openAiApi.Embedding......
or get the more specific service
var openAiEmbeddingApi = OpenAiServiceLocator.Instance.CreateEmbedding(name);
openAiEmbeddingApi.Request(....);
📖 Back to summary
List and describe the various models available in the API. You can refer to the Models documentation to understand what models are available and the differences between them.
You may find more details here, and samples from the unit tests here.
Lists the currently available models, and provides basic information about each one such as the owner and availability.
var openAiApi = _openAiFactory.Create(name);
var results = await openAiApi.Model.ListAsync();
Retrieves a model instance, providing basic information about the model such as the owner and permissioning.
var openAiApi = _openAiFactory.Create(name);
var result = await openAiApi.Model.RetrieveAsync("insert here the model name you need to retrieve");
Delete a fine-tuned model. You must have the Owner role in your organization.
var openAiApi = _openAiFactory.Create(name);
var deleteResult = await openAiApi.Model
.DeleteAsync(fineTuneModelId);
📖 Back to summary
Given a chat conversation, the model will return a chat completion response.
You may find more details here, and samples from the unit tests here.
The IOpenAiChat interface provides a robust framework for interacting with OpenAI chat models. This documentation includes method details and usage explanations, followed by distinct examples that demonstrate real-world applications.
- ExecuteAsync(CancellationToken cancellationToken = default)
- ExecuteAsStreamAsync(bool withUsage = true, CancellationToken cancellationToken = default)
- AddMessage(ChatMessageRequest message): adds a message with its Role and Content.
- AddMessages(params ChatMessageRequest[] messages)
- AddMessage(string content, ChatRole role = ChatRole.User)
- AddUserMessage(string content), AddSystemMessage(string content), AddAssistantMessage(string content)
- GetCurrentMessages()
- AddContent(ChatRole role = ChatRole.User)
- AddUserContent(), AddSystemContent(), AddAssistantContent()
- WithTemperature(double value)
- WithNucleusSampling(double value)
- WithPresencePenalty(double value)
- WithFrequencyPenalty(double value)
- SetMaxTokens(int value)
- WithNumberOfChoicesPerPrompt(int value)
- WithStopSequence(params string[] values), AddStopSequence(string value)
- WithBias(string key, int value), WithBias(Dictionary<string, int> bias)
- WithUser(string user)
- WithSeed(int? seed)
- ForceResponseFormat(FunctionTool function), ForceResponseFormat(MethodInfo function)
- ForceResponseAsJsonFormat(), ForceResponseAsText()
- AvoidCallingTools(), ForceCallTools(), CanCallTools()
- ClearTools(), ForceCallFunction(string name)

Description: A simple user message and response.
var chat = openAiApi.Chat
.AddUserMessage("Hello, how are you?")
.WithModel(ChatModelName.Gpt4_o);
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
Description: Streaming a response progressively.
await foreach (var chunk in openAiApi.Chat
.AddUserMessage("Tell me a story.")
.WithModel(ChatModelName.Gpt4_o)
.ExecuteAsStreamAsync())
{
Console.Write(chunk.Choices?.FirstOrDefault()?.Delta?.Content);
}
Description: Adjusting response randomness.
var chat = openAiApi.Chat
.AddUserMessage("What is your opinion on AI?")
.WithTemperature(0.9);
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
Description: Sending multiple messages to set context.
var chat = openAiApi.Chat
.AddSystemMessage("You are a helpful assistant.")
.AddUserMessage("Who won the soccer match yesterday?")
.AddUserMessage("What are the latest updates?");
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
Description: Limiting the response with stop sequences.
var chat = openAiApi.Chat
.AddUserMessage("Explain the theory of relativity.")
.WithStopSequence("end");
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
Description: Using functions for structured responses.
var functionTool = new FunctionTool
{
Name = "calculate_sum",
Description = "Adds two numbers",
Parameters = new FunctionToolMainProperty()
.AddPrimitive("number1", new FunctionToolPrimitiveProperty { Type = "integer" })
.AddPrimitive("number2", new FunctionToolPrimitiveProperty { Type = "integer" })
.AddRequired("number1")
.AddRequired("number2")
};
var chat = openAiApi.Chat
.AddUserMessage("Calculate the sum of 5 and 10.")
.AddFunctionTool(functionTool);
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
Description: Streaming with an enforced stop condition.
await foreach (var chunk in openAiApi.Chat
.AddUserMessage("Describe the universe.")
.WithStopSequence("stop")
.ExecuteAsStreamAsync())
{
Console.Write(chunk.Choices?.FirstOrDefault()?.Delta?.Content);
}
Description: Encouraging diverse topics in the response.
var chat = openAiApi.Chat
.AddUserMessage("Tell me something new.")
.WithPresencePenalty(1.5);
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
Description: Reducing repetitive phrases.
var chat = openAiApi.Chat
.AddUserMessage("What is recursion?")
.WithFrequencyPenalty(1.5);
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
Description: Forcing the response to be in JSON format.
var chat = openAiApi.Chat
.AddUserMessage("Summarize the book '1984'.")
.ForceResponseAsJsonFormat();
var result = await chat.ExecuteAsync();
Console.WriteLine(result.Choices?.FirstOrDefault()?.Message?.Content);
You can decorate your response model with JsonProperty-style attributes, such as [JsonPropertyName] from System.Text.Json, to control the shape of the JSON you ask the model to produce.
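For instance, a typed model for the '1984' summary request above might look like the following. This is an illustrative sketch: the class name and JSON property names are hypothetical, not part of the library.

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;
using System.Text.Json.Serialization;

// Illustrative model for a JSON-formatted chat response.
// The class and property names here are hypothetical examples.
public sealed class BookSummary
{
    [JsonPropertyName("title")]
    public string? Title { get; set; }

    [JsonPropertyName("summary")]
    public string? Summary { get; set; }

    [JsonPropertyName("themes")]
    public List<string>? Themes { get; set; }
}

public static class BookSummaryDemo
{
    // Deserializes the JSON text returned by the model into the typed object.
    public static BookSummary? Parse(string json) =>
        JsonSerializer.Deserialize<BookSummary>(json);
}
```

With ForceResponseAsJsonFormat(), the Content of the chat response can then be deserialized into such a model.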
After the configuration you can use the function framework in this way:
var openAiApi = _openAiFactory.Create(name);
var response = await openAiApi.Chat
.RequestWithUserMessage("What is the weather like in Boston?")
.WithModel(ChatModelType.Gpt35Turbo_Snapshot)
.WithFunction(WeatherFunction.NameLabel)
.ExecuteAndCalculateCostAsync(true);
var content = response.Result.Choices[0].Message.Content;
You may find the PlayFramework here
📖 Back to summary
Given a prompt and/or an input image, the model will generate a new image.
You may find more details here, and samples from the unit tests here.
The IOpenAiImage interface provides functionality for generating, editing, and varying images using OpenAI's image models. This document covers each method with explanations and includes distinct examples demonstrating their usage.
- GenerateAsync(string prompt, CancellationToken cancellationToken = default): returns an ImageResult containing the generated image's details.
- GenerateAsBase64Async(string prompt, CancellationToken cancellationToken = default): returns an ImageResultForBase64 with the image encoded as Base64.
- EditAsync(string prompt, Stream file, string fileName = "image", CancellationToken cancellationToken = default): returns an ImageResult with the edited image's details.
- EditAsBase64Async(string prompt, Stream file, string fileName = "image", CancellationToken cancellationToken = default): returns an ImageResultForBase64.
- VariateAsync(Stream file, string fileName = "image", CancellationToken cancellationToken = default): returns an ImageResult.
- VariateAsBase64Async(Stream file, string fileName = "image", CancellationToken cancellationToken = default): returns an ImageResultForBase64.
- WithMask(Stream mask, string maskName = "mask.png")
- WithNumberOfResults(int numberOfResults)
- WithSize(ImageSize size): e.g. 256x256, 512x512, 1024x1024.
- WithQuality(ImageQuality quality)
- WithStyle(ImageStyle style)
- WithUser(string user)

Creates an image given a prompt.
var openAiApi = _openAiFactory.Create(name)!;
var response = await openAiApi.Image
.WithSize(ImageSize.Large)
.GenerateAsync("Create a captive logo with ice and fire, and thunder with the word Rystem. With a desolated futuristic landscape.");
var uri = response.Data?.FirstOrDefault();
Download directly and save as stream
var openAiApi = _openAiFactory.Create(name)!;
var response = await openAiApi.Image
.WithSize(ImageSize.Large)
.GenerateAsBase64Async("Create a captive logo with ice and fire, and thunder with the word Rystem. With a desolated futuristic landscape.");
var image = response.Data?.FirstOrDefault();
var imageAsStream = image.ConvertToStream();
Creates an edited or extended image given an original image and a prompt.
var openAiApi = _openAiFactory.Create(name)!;
var location = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)!;
using var readableStream = File.OpenRead(Path.Combine(location, "Files", "otter.png"));
var editableFile = new MemoryStream();
await readableStream.CopyToAsync(editableFile);
editableFile.Position = 0;
var response = await openAiApi.Image
.WithSize(ImageSize.Small)
.WithNumberOfResults(2)
.EditAsync("A cute baby sea otter wearing a beret", editableFile, "otter.png");
var uri = response.Data?.FirstOrDefault();
Creates a variation of a given image.
var openAiApi = _openAiFactory.Create(name)!;
var location = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)!;
using var readableStream = File.OpenRead(Path.Combine(location, "Files", "otter.png"));
var editableFile = new MemoryStream();
await readableStream.CopyToAsync(editableFile);
editableFile.Position = 0;
var response = await openAiApi.Image
.WithSize(ImageSize.Small)
.WithNumberOfResults(1)
.VariateAsync(editableFile, "otter.png");
var uri = response.Data?.FirstOrDefault();
📖 Back to summary
Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
You may find more details here, and samples from the unit tests here.
The IOpenAiEmbedding interface provides methods to generate embeddings for text inputs, enabling downstream tasks such as similarity search, clustering, and machine learning model inputs. This documentation explains each method and includes usage examples.
- WithInputs(params string[] inputs)
- ClearInputs()
- AddPrompt(string input)
- WithUser(string user)
- WithDimensions(int dimensions)
- WithEncodingFormat(EncodingFormatForEmbedding encodingFormat): Base64 or Float.
- ExecuteAsync(CancellationToken cancellationToken = default)

Creates an embedding vector representing the input text.
var openAiApi = name == "NoDI" ? OpenAiServiceLocator.Instance.Create(name) : _openAiFactory.Create(name)!;
var results = await openAiApi.Embeddings
.WithInputs("A test text for embedding")
.ExecuteAsync();
var resultOfCosineSimilarity = _openAiUtility.CosineSimilarity(results.Data.First().Embedding!, results.Data.First().Embedding!);
Creates an embedding vector with custom dimensions representing the input text. Custom dimensions are only supported in text-embedding-3 and later models.
var openAiApi = name == "NoDI" ? OpenAiServiceLocator.Instance.Create(name) : _openAiFactory.Create(name)!;
var results = await openAiApi.Embeddings
.AddPrompt("A test text for embedding")
.WithModel("text-embedding-3-large")
.WithDimensions(999)
.ExecuteAsync();
For searching over many vectors quickly, we recommend using a vector database. You can find examples of working with vector databases and the OpenAI API in our Cookbook on GitHub. Vector database options include:
We recommend cosine similarity. The choice of distance function typically doesn't matter much.
OpenAI embeddings are normalized to length 1, which means that:
- Cosine similarity can be computed slightly faster using just a dot product.
- Cosine similarity and Euclidean distance will result in identical rankings.
You may use the utility service in this repository to calculate the cosine similarity distance in C#.
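The dot-product shortcut for unit-length vectors can be verified with plain C#. This is a self-contained sketch independent of the library; the vectors below are made-up stand-ins for real embeddings.

```csharp
using System;
using System.Linq;

public static class NormalizedVectorDemo
{
    // Full cosine similarity: dot(a, b) / (|a| * |b|).
    public static double Cosine(double[] a, double[] b) =>
        Dot(a, b) / (Math.Sqrt(Dot(a, a)) * Math.Sqrt(Dot(b, b)));

    public static double Dot(double[] a, double[] b) =>
        a.Zip(b, (x, y) => x * y).Sum();

    // Scales a vector to length 1, as OpenAI does for its embeddings.
    public static double[] Normalize(double[] v)
    {
        var norm = Math.Sqrt(Dot(v, v));
        return v.Select(x => x / norm).ToArray();
    }

    public static void Main()
    {
        var a = Normalize(new double[] { 1, 2, 3 });
        var b = Normalize(new double[] { 2, 0, 1 });
        // For unit-length vectors the plain dot product equals cosine similarity.
        Console.WriteLine(Math.Abs(Cosine(a, b) - Dot(a, b)) < 1e-12);
    }
}
```

The same equality is why the full cosine computation can be skipped when ranking normalized embeddings.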
📖 Back to summary
You may find more details here, and samples from the unit tests here.
The IOpenAiAudio interface provides methods to handle audio processing tasks such as transcription, translation, and customization of audio analysis. Below is a detailed breakdown of each method.
- WithFile(byte[] file, string fileName = "default"): file is a byte array representing the audio file; fileName is the name of the audio file (default: "default").
- WithStreamAsync(Stream file, string fileName = "default"): file is a stream representing the audio file; fileName is the name of the audio file (default: "default").
- TranscriptAsync(CancellationToken cancellationToken = default): returns an AudioResult containing the transcription details.
- VerboseTranscriptAsSegmentsAsync(CancellationToken cancellationToken = default): returns a VerboseSegmentAudioResult with detailed transcription data.
- VerboseTranscriptAsWordsAsync(CancellationToken cancellationToken = default): returns a VerboseWordAudioResult with detailed transcription data.
- TranslateAsync(CancellationToken cancellationToken = default): returns an AudioResult containing the translated text.
- VerboseTranslateAsSegmentsAsync(CancellationToken cancellationToken = default): returns a VerboseSegmentAudioResult with detailed translation data.
- VerboseTranslateAsWordsAsync(CancellationToken cancellationToken = default): returns a VerboseWordAudioResult with detailed translation data.
- WithPrompt(string prompt): text to provide contextual guidance or continue a previous segment.
- WithTemperature(double temperature): value for controlling randomness.
- WithLanguage(Language language): language code of the input audio.
- WithTranscriptionMinutes(int minutes): duration in minutes.
- WithTranslationMinutes(int minutes): duration in minutes.

Transcribes audio into the input language.
var openAiApi = _openAiFactory.Create(name)!;
var location = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)!;
using var readableStream = File.OpenRead(Path.Combine(location, "Files", "test.mp3"));
var editableFile = new MemoryStream();
readableStream.CopyTo(editableFile);
editableFile.Position = 0;
var results = await openAiApi.Audio
.WithFile(editableFile.ToArray(), "default.mp3")
.WithTemperature(1)
.WithLanguage(Language.Italian)
.WithPrompt("Incidente")
.TranscriptAsync();
Example of verbose transcription in segments:
var openAiApi = _openAiFactory.Create(name)!;
var location = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)!;
using var readableStream = File.OpenRead(Path.Combine(location, "Files", "test.mp3"));
var editableFile = new MemoryStream();
readableStream.CopyTo(editableFile);
editableFile.Position = 0;
var results = await openAiApi.Audio
.WithFile(editableFile.ToArray(), "default.mp3")
.WithTemperature(1)
.WithLanguage(Language.Italian)
.WithPrompt("Incidente")
.VerboseTranscriptAsSegmentsAsync();
Assert.NotNull(results);
Assert.True(results.Text?.Length > 100);
Assert.StartsWith("Incidente tra due aerei di addestramento", results.Text);
Assert.NotEmpty(results.Segments ?? []);
Translates audio into English.
var openAiApi = _openAiFactory.Create(name)!;
var location = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)!;
using var readableStream = File.OpenRead(Path.Combine(location, "Files", "test.mp3"));
var editableFile = new MemoryStream();
await readableStream.CopyToAsync(editableFile);
editableFile.Position = 0;
var results = await openAiApi.Audio
.WithTemperature(1)
.WithPrompt("sample")
.WithFile(editableFile.ToArray(), "default.mp3")
.TranslateAsync();
Example of verbose translation in segments:
var openAiApi = _openAiFactory.Create(name)!;
Assert.NotNull(openAiApi.Audio);
var location = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)!;
using var readableStream = File.OpenRead(Path.Combine(location, "Files", "test.mp3"));
var editableFile = new MemoryStream();
await readableStream.CopyToAsync(editableFile);
editableFile.Position = 0;
var results = await openAiApi.Audio
.WithFile(editableFile.ToArray(), "default.mp3")
.WithTemperature(1)
.WithPrompt("sample")
.VerboseTranslateAsSegmentsAsync();
Assert.NotNull(results);
Assert.True(results.Text?.Length > 100);
Assert.NotEmpty(results.Segments ?? []);
The IOpenAiSpeech interface enables text-to-speech synthesis by providing methods to generate audio in various formats, along with options for controlling voice style and playback speed. Below is a detailed description of each method.
- Mp3Async(string input, CancellationToken cancellationToken = default): returns a Stream containing the MP3 audio data.
- OpusAsync(string input, CancellationToken cancellationToken = default): returns a Stream containing the Opus audio data.
- AacAsync(string input, CancellationToken cancellationToken = default): returns a Stream containing the AAC audio data.
- FlacAsync(string input, CancellationToken cancellationToken = default): returns a Stream containing the FLAC audio data.
- WavAsync(string input, CancellationToken cancellationToken = default): returns a Stream containing the WAV audio data.
- PcmAsync(string input, CancellationToken cancellationToken = default): returns a Stream containing the PCM audio data.
- WithSpeed(double speed): a value between 0.25 and 4.0 (default 1.0) to control playback speed; returns IOpenAiSpeech for method chaining and throws ArgumentException if the speed is out of the valid range.
- WithVoice(AudioVoice audioVoice): the desired voice style; supported values are alloy, echo, fable, onyx, nova, and shimmer. Returns IOpenAiSpeech for method chaining.

In each synthesis method, input is the text to be synthesized into audio and cancellationToken is an optional token for cancelling the operation; every synthesis method returns a ValueTask<Stream>. This interface provides powerful capabilities for creating dynamic audio content from text in multiple high-quality formats suitable for applications such as podcasts, presentations, and accessibility tools, offering flexibility in format, speed, and voice customization.
var openAiApi = _openAiFactory.Create(name)!;
var result = await openAiApi.Speech
.WithVoice(AudioVoice.Fable)
.WithSpeed(1.3d)
.Mp3Async(text);
📖 Back to summary
Files are used to upload documents that can be used with features like Fine-tuning.
You may find more details here, and samples from the unit tests here.
The IOpenAiFile interface provides functionality for managing files within the OpenAI platform. These files are typically used for tasks such as fine-tuning models or storing custom datasets. Below is a detailed explanation of each method in the interface.
- AllAsync(CancellationToken cancellationToken = default): cancellationToken is an optional token for cancelling the operation; returns a ValueTask<FilesDataResult> containing metadata for all uploaded files. Throws HttpRequestException if the request fails.
- RetrieveAsync(string fileId, CancellationToken cancellationToken = default): fileId is the unique identifier of the file to retrieve; returns a ValueTask<FileResult> containing details about the specified file.
- RetrieveFileContentAsStringAsync(string fileId, CancellationToken cancellationToken = default): returns a Task<string> containing the file content as a string.
- RetrieveFileContentAsStreamAsync(string fileId, CancellationToken cancellationToken = default): returns a Task<Stream> containing the file content as a stream.
- DeleteAsync(string fileId, CancellationToken cancellationToken = default): fileId is the unique identifier of the file to delete; returns a ValueTask<FileResult> indicating the result of the deletion operation.
- UploadFileAsync(Stream file, string fileName, string contentType = "application/json", PurposeFileUpload purpose = PurposeFileUpload.FineTune, CancellationToken cancellationToken = default): file is a Stream representing the file to upload; fileName is its name; contentType is the MIME type (default: "application/json"); purpose is the intended purpose of the file (e.g., fine-tune). Returns a ValueTask<FileResult> containing details about the uploaded file.

This interface is essential for managing files effectively in projects requiring fine-tuning or custom dataset handling.
Returns a list of files that belong to the user's organization.
var openAiApi = _openAiFactory.Create(name);
var results = await openAiApi.File
.AllAsync();
Upload a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Please contact us if you need to increase the storage limit.
var openAiApi = _openAiFactory.Create(name);
var uploadResult = await openAiApi.File
.UploadFileAsync(editableFile, name);
Delete a file.
var openAiApi = _openAiFactory.Create(name);
var deleteResult = await openAiApi.File
.DeleteAsync(uploadResult.Id);
Returns information about a specific file.
var openAiApi = _openAiFactory.Create(name);
var retrieve = await openAiApi.File
.RetrieveAsync(uploadResult.Id);
Returns the contents of the specified file.
var openAiApi = _openAiFactory.Create(name);
var contentRetrieve = await openAiApi.File
.RetrieveFileContentAsStringAsync(uploadResult.Id);
You can upload large files by splitting them into parts. Upload a file that can be used across various endpoints. Individual files can be up to 512 MB, and the size of all files uploaded by one organization can be up to 100 GB.
- The Assistants API supports files up to 2 million tokens and of specific file types. See the Assistants Tools guide for details.
- The Fine-tuning API only supports .jsonl files. The input also has certain required formats for fine-tuning chat or completions models.
- The Batch API only supports .jsonl files up to 200 MB in size. The input also has a specific required format.
var upload = openAiApi.File
.CreateUpload(fileName)
.WithPurpose(PurposeFileUpload.FineTune)
.WithContentType("application/json")
.WithSize(editableFile.Length);
var execution = await upload.ExecuteAsync();
var partResult = await execution.AddPartAsync(editableFile);
Assert.True(partResult.Id?.Length > 7);
var completeResult = await execution.CompleteAsync();
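The .jsonl formats required by the Fine-tuning and Batch APIs hold one JSON object per line; for chat fine-tuning, each line carries a messages array. A minimal illustrative fragment (the training content here is made up):

```jsonl
{"messages":[{"role":"system","content":"You are a helpful assistant."},{"role":"user","content":"What is 2 + 2?"},{"role":"assistant","content":"4"}]}
{"messages":[{"role":"system","content":"You are a helpful assistant."},{"role":"user","content":"Capital of Italy?"},{"role":"assistant","content":"Rome"}]}
```

A file in this shape is what you would upload with purpose FineTune before creating a fine-tune job.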
📖 Back to summary
Manage fine-tuning jobs to tailor a model to your specific training data.
You may find more details here, and samples from the unit tests here.
The IOpenAiFineTune interface provides methods to manage fine-tuning operations, allowing customization of models with specific training data. Fine-tuning is useful for tailoring models to specialized tasks or datasets. Below is a detailed breakdown of the methods provided.
- WithFileId(string trainingFileId): trainingFileId is the unique identifier of the training dataset; returns IOpenAiFineTune for method chaining.
- WithValidationFile(string validationFileId): validationFileId is the unique identifier of the validation dataset.
- WithHyperParameters(Action<FineTuneHyperParameters> hyperParametersSettings): a delegate for configuring fine-tune hyperparameters.
- WithSuffix(string value): the suffix string.
- WithSeed(int seed): the seed value to ensure consistent results.
- WithSpecificWeightAndBiasesIntegration(Action<WeightsAndBiasesFineTuneIntegration> integration): a delegate for setting up Weights and Biases integration.
- ClearIntegrations()
- ExecuteAsync(CancellationToken cancellationToken = default): returns a ValueTask<FineTuneResult> representing the result of the operation.
- ListAsync(int take = 20, int skip = 0, CancellationToken cancellationToken = default): returns a ValueTask<FineTuneResults> containing a list of fine-tune jobs.
- RetrieveAsync(string fineTuneId, CancellationToken cancellationToken = default): fineTuneId is the unique identifier of the fine-tune operation; returns a ValueTask<FineTuneResult> containing the fine-tune job details.
- CancelAsync(string fineTuneId, CancellationToken cancellationToken = default): returns a ValueTask<FineTuneResult> indicating the cancellation status.
- CheckPointEventsAsync(string fineTuneId, int take = 20, int skip = 0, CancellationToken cancellationToken = default): returns a ValueTask<FineTuneCheckPointEventsResult> containing the checkpoint events.
- ListEventsAsync(string fineTuneId, int take = 20, int skip = 0, CancellationToken cancellationToken = default): returns a ValueTask<FineTuneEventsResult> containing the event details.
- ListAsStreamAsync(int take = 20, int skip = 0, CancellationToken cancellationToken = default): returns an IAsyncEnumerable<FineTuneResult> for processing results incrementally.
- ListEventsAsStreamAsync(string fineTuneId, int take = 20, int skip = 0, CancellationToken cancellationToken = default): returns an IAsyncEnumerable<FineTuneEvent> for processing events incrementally.

In the builder methods, each call returns IOpenAiFineTune for method chaining; in the list methods, take is the number of results to retrieve (default: 20), skip is the number of results to skip (default: 0), and cancellationToken is a token for cancelling the operation. The IOpenAiFineTune interface provides a comprehensive API for managing fine-tuning operations, from configuration to execution and result retrieval.

Creates a job that fine-tunes a specified model from a given dataset. The response includes details of the enqueued job, including job status and the name of the fine-tuned models once complete.
var openAiApi = _openAiFactory.Create(name);
var createResult = await openAiApi.FineTune
.Create(fileId)
.ExecuteAsync();
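Before calling ExecuteAsync you can chain the other fluent methods listed above. A sketch under stated assumptions: the file IDs, the suffix, and the seed are placeholder values, not real identifiers.

```csharp
var openAiApi = _openAiFactory.Create(name);
var createResult = await openAiApi.FineTune
    .Create(trainingFileId)                   //training dataset (placeholder ID)
    .WithValidationFile(validationFileId)     //optional validation dataset (placeholder ID)
    .WithSuffix("my-experiment")              //placeholder suffix
    .WithSeed(42)                             //fixed seed for consistent results
    .WithHyperParameters(h => { /* configure FineTuneHyperParameters here */ })
    .ExecuteAsync();
```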
List your organization's fine-tuning jobs.
var openAiApi = _openAiFactory.Create(name);
var allFineTunes = await openAiApi.FineTune
.ListAsync();
Gets info about the fine-tune job.
var openAiApi = _openAiFactory.Create(name);
var retrieveFineTune = await openAiApi.FineTune
.RetrieveAsync(fineTuneId);
Immediately cancel a fine-tune job.
var openAiApi = _openAiFactory.Create(name);
var cancelResult = await openAiApi.FineTune
.CancelAsync(fineTuneId);
Get fine-grained status updates for a fine-tune job.
var openAiApi = _openAiFactory.Create(name);
var events = await openAiApi.FineTune
.ListEventsAsync(fineTuneId);
Stream fine-grained status updates for a fine-tune job. ListEventsAsStreamAsync returns an IAsyncEnumerable, so iterate it with await foreach instead of awaiting it directly.
var openAiApi = _openAiFactory.Create(name);
await foreach (var fineTuneEvent in openAiApi.FineTune
.ListEventsAsStreamAsync(fineTuneId))
{
//handle each event as it arrives
}
Delete a fine-tuned model. You must have the Owner role in your organization.
var openAiApi = _openAiFactory.Create(name);
var deleteResult = await openAiApi.FineTune
.DeleteAsync(fineTuneModelId);
📖 Back to summary
Given an input text, the model outputs whether it classifies the text as violating OpenAI's content policy.
You may find more details here,
and samples from the unit tests here.
The IOpenAiModeration interface provides functionality for evaluating text against OpenAI's Content Policy, determining if the input violates any predefined guidelines. This interface is particularly useful for applications requiring automated content moderation to ensure safety and compliance.
ExecuteAsync(string input, CancellationToken cancellationToken = default): input is the text to be analyzed for potential policy violations, cancellationToken is an optional token for cancelling the operation. Returns a ValueTask<ModerationResult> containing the moderation outcome, with detailed classification results. The IOpenAiModeration interface is a vital tool for developers aiming to build applications that enforce content guidelines and promote a safe user environment.
Classify whether text violates OpenAI's Content Policy.
var openAiApi = _openAiFactory.Create(name)!;
var results = await openAiApi.Moderation
.WithModel("testModel") //you may pass a model name as a raw string...
.WithModel(ModerationModelName.OmniLatest) //...or use a predefined name; the last call wins
.ExecuteAsync("I want to kill them and everyone else.");
var categories = results.Results?.FirstOrDefault()?.Categories;
📖 Back to summary
Utilities for OpenAI: you can inject the IOpenAiUtility interface everywhere you need it.
In IOpenAiUtility you can find:
📖 Back to embeddings
In data analysis, cosine similarity is a measure of similarity between two non-zero vectors defined in an inner product space. Cosine similarity is the cosine of the angle between the vectors; that is, it is the dot product of the vectors divided by the product of their lengths. It follows that the cosine similarity does not depend on the magnitudes of the vectors, but only on their angle. The cosine similarity always belongs to the interval [−1,1]. For example, two proportional vectors have a cosine similarity of 1, two orthogonal vectors have a similarity of 0, and two opposite vectors have a similarity of -1. In some contexts, the component values of the vectors cannot be negative, in which case the cosine similarity is bounded in [0,1].
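The two metrics described above are easy to check in plain C#. This standalone sketch is not part of the library (IOpenAiUtility provides equivalent methods); it only illustrates the math over float arrays.

```csharp
using System;

public static class VectorMath
{
    // Cosine similarity: dot(a, b) / (|a| * |b|), always in [-1, 1].
    public static double CosineSimilarity(float[] a, float[] b)
    {
        double dot = 0, normA = 0, normB = 0;
        for (var i = 0; i < a.Length; i++)
        {
            dot += (double)a[i] * b[i];
            normA += (double)a[i] * a[i];
            normB += (double)b[i] * b[i];
        }
        return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
    }

    // Euclidean distance: sqrt(sum((a_i - b_i)^2)); 0 for identical vectors.
    public static double EuclideanDistance(float[] a, float[] b)
    {
        double sum = 0;
        for (var i = 0; i < a.Length; i++)
        {
            var d = (double)a[i] - b[i];
            sum += d * d;
        }
        return Math.Sqrt(sum);
    }
}
```

Comparing a vector with itself yields cosine similarity 1 and Euclidean distance 0, and two proportional vectors also have cosine similarity 1, since the metric depends only on the angle.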
Here's an example from a unit test.
IOpenAiUtility _openAiUtility;
//comparing an embedding with itself yields cosine similarity 1 and Euclidean distance 0
var resultOfCosineSimilarity = _openAiUtility.CosineSimilarity(results.Data.First().Embedding, results.Data.First().Embedding);
var resultOfEuclideanDistance = _openAiUtility.EuclideanDistance(results.Data.First().Embedding, results.Data.First().Embedding);
Assert.True(resultOfCosineSimilarity >= 1);
Without Dependency Injection, set up an OpenAiServiceLocator first; after that you can use:
IOpenAiUtility openAiUtility = OpenAiServiceLocator.Instance.Utility();
📖 Back to summary
You can think of tokens as pieces of words, where 1,000 tokens is about 750 words. You can calculate your request tokens with the Tokenizer service in Utility.
IOpenAiUtility _openAiUtility;
var encoded = _openAiUtility.Tokenizer
.WithChatModel(ChatModelType.Gpt4)
.Encode(value);
Assert.Equal(numberOfTokens, encoded.NumberOfTokens);
var decoded = _openAiUtility.Tokenizer.Decode(encoded.EncodedTokens);
Assert.Equal(value, decoded);
📖 Back to summary
You can think of tokens as pieces of words, where 1,000 tokens is about 750 words.
var openAiApi = _openAiFactory.Create(name)!;
var results = await openAiApi.Chat
.AddMessage(new ChatMessageRequest { Role = ChatRole.User, Content = "Hello!! How are you?" })
.WithModel(ChatModelName.Gpt4_o)
.WithTemperature(1)
.ExecuteAsync();
//calculating cost works only if you added a price table during setup
var cost = openAiApi.Chat.CalculateCost();
📖 Back to summary
During the setup of your OpenAI service you may add a custom price table via the settings.PriceBuilder property.
services.AddOpenAi(settings =>
{
//resource name for Azure
settings.Azure.ResourceName = resourceName;
//app registration configuration for Azure authentication
settings.Azure.AppRegistration.ClientId = clientId;
settings.Azure.AppRegistration.ClientSecret = clientSecret;
settings.Azure.AppRegistration.TenantId = tenantId;
//default request configuration for the chat endpoint; this delegate runs during the creation of the chat service
settings.DefaultRequestConfiguration.Chat = chatClient =>
{
chatClient.ForceModel("gpt-4");
//custom version for chat endpoint
chatClient.WithVersion("2024-08-01-preview");
};
//add a price for each kind of cost for the model you want. Here's an example with the gpt-4 model.
settings.PriceBuilder
.AddModel("gpt-4",
new OpenAiCost { Kind = KindOfCost.Input, UnitOfMeasure = UnitOfMeasure.Tokens, Units = 0.0000025m },
new OpenAiCost { Kind = KindOfCost.CachedInput, UnitOfMeasure = UnitOfMeasure.Tokens, Units = 0.00000125m },
new OpenAiCost { Kind = KindOfCost.Output, UnitOfMeasure = UnitOfMeasure.Tokens, Units = 0.00001m });
}, "Azure");
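As a sanity check on the table above, a request's cost is just its token counts multiplied by the per-token unit prices. The standalone sketch below (a hypothetical helper, not the library's own calculator) uses the same gpt-4 figures registered in the setup.

```csharp
public static class Gpt4CostEstimator
{
    // Per-token unit prices, matching the gpt-4 entries registered above.
    public const decimal InputPerToken = 0.0000025m;
    public const decimal CachedInputPerToken = 0.00000125m;
    public const decimal OutputPerToken = 0.00001m;

    // Total cost = sum over each kind of cost of (tokens * unit price).
    public static decimal Estimate(int inputTokens, int cachedInputTokens, int outputTokens)
        => inputTokens * InputPerToken
            + cachedInputTokens * CachedInputPerToken
            + outputTokens * OutputPerToken;
}
```

For example, 1,000 fresh input tokens and 500 output tokens cost 1000 * 0.0000025 + 500 * 0.00001 = 0.0025 + 0.005 = 0.0075.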
📖 Back to summary
In your OpenAI dashboard you can see billing usage, users, taxes, and similar data. This API lets you retrieve the same kind of data programmatically.
📖 Back to summary
You may use the management endpoint to retrieve your usage data. Here's an example of how to get the usage for April 2023.
var management = _openAiFactory.CreateManagement(integrationName);
var usages = await management
.Billing
.From(new DateTime(2023, 4, 1))
.To(new DateTime(2023, 4, 30))
.GetUsageAsync();
Assert.NotEmpty(usages.DailyCosts);
📖 Back to summary
Azure only: you must deploy a model before you can use it in your application. You can configure deployments during the startup of your application.
services.AddOpenAi(settings =>
{
settings.ApiKey = azureApiKey;
settings.Azure.ResourceName = resourceName;
settings.Azure.AppRegistration.ClientId = clientId;
settings.Azure.AppRegistration.ClientSecret = clientSecret;
settings.Azure.AppRegistration.TenantId = tenantId;
settings.DefaultRequestConfiguration.Chat = chatClient =>
{
chatClient.ForceModel("gpt-4");
//custom version for chat endpoint
chatClient.WithVersion("2024-08-01-preview");
};
settings.Price
.SetFineTuneForAda(0.0004M, 0.0016M)
.SetAudioForTranslation(0.006M);
}, "Azure");
You can also perform this step without the Dependency Injection integration.
Create a new deployment:
var createResponse = await openAiApi.Management.Deployment
.Create(deploymentId)
.WithCapacity(2)
.WithDeploymentTextModel("ada", TextModelType.AdaText)
.WithScaling(Management.DeploymentScaleType.Standard)
.ExecuteAsync();
Get a deployment by Id
var deploymentResult = await openAiApi.Management.Deployment.RetrieveAsync(createResponse.Id);
List of all deployments by status
var listResponse = await openAiApi.Management.Deployment.ListAsync();
Update a deployment
var updateResponse = await openAiApi.Management.Deployment
.Update(createResponse.Id)
.WithCapacity(1)
.WithDeploymentTextModel("ada", TextModelType.AdaText)
.WithScaling(Management.DeploymentScaleType.Standard)
.ExecuteAsync();
Delete a deployment by Id
var deleteResponse = await openAiApi.Management.Deployment
.DeleteAsync(createResponse.Id);
The OpenAI Assistant is a conversational agent powered by OpenAI's GPT models (e.g., GPT-4). Here's a breakdown of its key concepts:
An assistant is configured with a model (e.g., gpt-4 or gpt-3.5) and a temperature (e.g., 0.7 for more creative answers, 0.2 for deterministic answers). The assistant interacts with threads and runs:
- Assistants: create and manage AI assistants with configurable instructions, temperature, and capabilities (e.g., file search, code interpretation).
- Threads: manage conversations by creating threads and exchanging messages with context.
- Runs: execute tasks asynchronously, allowing for step-by-step operations and status monitoring.
- Vector stores: store and manage vectorized data or files for advanced AI integrations, such as semantic search.
Define an assistant with specific instructions and model:
var assistant = openAiApi.Assistant;
var created = await assistant
.WithTemperature(0.7)
.WithInstructions("You are a personal assistant. Respond professionally to all queries.")
.WithModel("gpt-4")
.CreateAsync();
Console.WriteLine($"Assistant created with ID: {created.Id}");
Retrieve the assistant details:
var retrievedAssistant = await assistant.RetrieveAsync(created.Id);
Console.WriteLine($"Retrieved Assistant ID: {retrievedAssistant.Id}");
Enable the assistant to write and execute Python code:
var assistant = openAiApi.Assistant;
var created = await assistant
.WithInstructions("You are a Python code interpreter. Solve math problems by running Python code.")
.WithCodeInterpreter()
.WithModel("gpt-4")
.CreateAsync();
Console.WriteLine($"Assistant created for code interpretation with ID: {created.Id}");
Start a conversation thread:
var threadClient = openAiApi.Thread;
var response = await threadClient
.WithMessage()
.AddText(Chat.ChatRole.User, "What is the capital of France?")
.CreateAsync();
Console.WriteLine($"Thread created with ID: {response.Id}");
Add more messages to the thread:
var responseMessages = await threadClient.WithId(response.Id)
.WithMessage()
.AddText(Chat.ChatRole.Assistant, "Please explain the Nexus.")
.AddMessagesAsync()
.ToListAsync();
Start a run and retrieve its status:
var runClient = openAiApi.Run;
var runResponse = await runClient
.WithThread(threadId)
.AddText(Chat.ChatRole.Assistant, "Let me calculate that for you.")
.StartAsync(assistantId);
Console.WriteLine($"Run started with ID: {runResponse.Id}");
var steps = await runClient.ListStepsAsync(runResponse.Id);
foreach (var step in steps.Data)
{
Console.WriteLine($"Step: {step.Content}");
}
Stream responses for real-time feedback:
var runClient = openAiApi.Run;
string? runResponseId = null;
var message = new StringBuilder();
await foreach (var value in runClient
.WithThread(response.Id)
.AddText(Chat.ChatRole.Assistant, "Please explain the Nexus.")
.StreamAsync(created.Id))
{
//each streamed value is a union type: check what it holds before reading it
if (value.Is<RunResult>())
runResponseId = value.AsT0?.Id;
else if (value.Is<ThreadChunkMessageResponse>())
{
//message chunks carry their incremental content in Delta
var content = value.AsT2?.Delta?.Content;
if (content != null)
{
//the content itself may be a plain string or a list of typed parts
if (content.Is<string>())
message.Append(content.AsT0);
else
foreach (var c in content.CastT1)
{
if (c.Text != null)
message.Append(c.Text?.Value);
}
}
}
}
Console.WriteLine($"Streamed Response: {message}");
Upload files to a vector store for advanced integrations:
var fileApi = openAiApi.File;
var fileId = await fileApi.UploadFileAsync(fileBytes, "document.txt", "application/text");
var vectorStore = await openAiApi.VectorStore
.WithName("KnowledgeBase")
.AddFiles(new[] { fileId })
.AddMetadata("Category", "Documentation")
.CreateAsync();
Console.WriteLine($"VectorStore created with ID: {vectorStore.Id}");
Retrieve stored vectors:
var retrievedStore = await openAiApi.VectorStore.WithId(vectorStore.Id).RetrieveAsync();
Console.WriteLine($"Metadata: {retrievedStore.Metadata["Category"]}");