This lightweight package provides easy access to the OpenAI RESTful API. This package provides access to the Edits endpoints.
$ dotnet add package SoulCalc.OpenAI.API.Edits

This project is a .NET/C# project that provides a lightweight interface for working with OpenAI's API.
The project is organized according to each category of API that OpenAI provides; it has been designed this way so that one can pick the specific endpoints needed and simply discard the ones that are not.
We developed this project to use as a starting/reference point for other projects that require or want to use OpenAI's API.
* Please note that this project and its developers are NOT affiliated with OpenAI in any way.
You can find the source code and documentation for this project on GitHub.
Here is a quick peek at sending a ChatCompletion request.
// initialize with the API key before anything else; this only needs to be called once until DeInit().
ApiConfig.Initialize(<api-key>);
// Create a reusable request handler
ChatCompletionRequestHandler requestHandler = new ChatCompletionRequestHandler();
// create and configure the request data
var requestData = CreateChatCompletionRequestData.Create(model, messages);
// ...
// send the request in async/await fashion
var response = await requestHandler.CreateChatCompletionAsync(requestData);
// or in callback fashion
requestHandler.CreateChatCompletion(requestData, (response) => {...});
* Details of this can be seen in the Examples section.
Clone the repository:
Open the solution file in Visual Studio (or other IDE).
Build the solution. This will restore the required NuGet packages and build all the projects.
If you don't need access to all of OpenAI's API endpoints, building only the desired projects under API/ is enough.
The project is structured with this requirement in mind, to keep the footprint as small as possible.
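For example, a consuming project that only needs chat completions might reference just that project plus Core. The paths below are assumptions for illustration, not the repository's actual layout:

```xml
<ItemGroup>
  <!-- Hypothetical paths; adjust to the actual project locations in the solution. -->
  <ProjectReference Include="API\Core\Core.csproj" />
  <ProjectReference Include="API\ChatCompletion\ChatCompletion.csproj" />
</ItemGroup>
```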
Create a new file named testsettings.json in the root directory of the project, and paste in the following JSON code:
{
"OpenAI": {
"ApiKey": "<api-key>"
}
}
Note that you should replace <api-key> with your actual OpenAI API key.
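As a sketch of how such a file could be consumed (this is not necessarily how the Tests project reads it), the key can be extracted with System.Text.Json from the .NET base library:

```csharp
using System.IO;
using System.Text.Json;

// Read testsettings.json and extract the API key.
string json = File.ReadAllText("testsettings.json");
using JsonDocument doc = JsonDocument.Parse(json);
string apiKey = doc.RootElement
    .GetProperty("OpenAI")
    .GetProperty("ApiKey")
    .GetString();
```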
The main part of the project is organized into the following projects:
API/Core: Base project for all projects under API/.
API/<others>: Request handling against OpenAI's endpoints.
Tests: Contains unit tests.
Most classes and methods are documented with XML in the code.
A hosted web version of the doc will be added here when available.
To use the project, simply reference the appropriate project for the OpenAI model you want to use, and call the relevant methods to interact with the API.
Before using anything, make sure to call the static method ApiConfig.Initialize(<openai-api-key>) once.
This creates a default ApiConfig for all requests.
Note that you need to replace <openai-api-key> with your actual API key.
For every project that interacts directly with the API, a RequestHandler is exposed.
For example, ChatCompletionRequestHandler is used to interact with OpenAI's "ChatCompletion" endpoints.
Simply create an instance of the handler.
The handler provides methods to interact with all of OpenAI's endpoints, named exactly after the endpoints themselves
(with descriptive words in the method name, such as Async), for instance CreateChatCompletionAsync.
Each of these methods has two versions for retrieving the response:
one uses async/await, and the other uses callbacks.
Some endpoints also support streamed data. Currently, streamed endpoints can only be used with callbacks. See the Limitations section.
The response data is wrapped in an IApiResponse<T>, where T is the type of the response data, depending on the endpoint.
The IApiResponse has a few properties for checking whether the result is good to use.
It has a property RawResponse which contains the raw content of the response.
It also provides IsSuccess and StatusCode properties, which reflect the raw HTTP status that came back with the API request.
The Result property, of type T, is the deserialized object of the response data, provided that the request succeeded.
In case of a failed response, the Error property contains the resulting information.
tl;dr:
Except for Core, every project under API in the solution file is for interaction with some OpenAI API. They all follow the same pattern:
Create an instance of a reusable request handler
-> use either the async/await or the callback methods to interact with the API -> get a response that's deserialized into an IApiResponse<T>.
CreateChatCompletion request data

The request data can be created either by using new CreateChatCompletionRequestData() and setting each property,
or by using the factory method that comes with all request data classes (except fine-tunes).
Using the provided Create factory methods makes the data handling easier, but they are currently meant for quickly creating request data, i.e., not all properties of the data class can be set via the factory method.
Using new:
var requestData = new CreateChatCompletionRequestData()
{
Model = model, // model is the desired model to use.
Messages = new List<Message>()
{
Message.Create(MessageAuthorRole.User, "Who won the world series in 2020?"),
Message.Create(MessageAuthorRole.Assistant, "The Los Angeles Dodgers won the World Series in 2020."),
Message.Create(MessageAuthorRole.User, "Where was it played?"),
},
N = 1,
MaxTokens = 100,
};
Or the factory method:
var requestData = CreateChatCompletionRequestData.Create(
model: model,
messages: new List<Message>() {
Message.Create(MessageAuthorRole.User, "Who won the world series in 2020?"),
Message.Create(MessageAuthorRole.Assistant, "The Los Angeles Dodgers won the World Series in 2020."),
Message.Create(MessageAuthorRole.User, "Where was it played?"),
},
n: 1,
maxTokens: 100
);
where model is the desired model to use.
A list of relevant models is set as constants in each project, for example:
OpenAIChatCompletionsModel.GPT_3_5_TURBO.
As seen in the source:
public static class OpenAIChatCompletionsModel
{
public const string GPT_3_5_TURBO = "gpt-3.5-turbo";
public const string GPT_3_5_TURBO_0301 = "gpt-3.5-turbo-0301";
public const string GPT_4 = "gpt-4";
public const string GPT_4_0314 = "gpt-4-0314";
...
}
If you have a fine-tuned model, simply set the model to yours.
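For instance, a sketch of pointing the request at a fine-tuned model (the model ID below is a made-up placeholder, not a real fine-tune):

```csharp
var requestData = new CreateChatCompletionRequestData()
{
    // Hypothetical fine-tuned model ID; replace with your own.
    Model = "ft:gpt-3.5-turbo:my-org:custom-name:abc123",
    Messages = messages, // messages as in the earlier examples
};
```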
Here's an example usage of the CreateChatCompletionAsync method to generate chat responses using the OpenAI API:
First, create the request data as in the previous example.
var requestData = CreateChatCompletionRequestData.Create(
// ... set desired options for the request data ...
);
// ... more request configuration ...
Second, create a ChatCompletionRequestHandler request handler.
// make sure ApiConfig.Initialize(<api-key>) is called before this,
// or an alternative ApiConfig instance is ready for the constructor to take.
ChatCompletionRequestHandler completionRequestHandler = new ChatCompletionRequestHandler();
* The request handler can be reused.
Finally, send the request.
This can be done in two ways: with async/await, or with a callback.
Using the async/await:
IApiResponse<CreateChatCompletionResponseData> response = await completionRequestHandler.CreateChatCompletionAsync(requestData);
Or, using callback:
completionRequestHandler.CreateChatCompletion(
requestData,
onResponse: response =>
{
// the response is of type IApiResponse<CreateChatCompletionResponseData>
});
Check the result:
Regardless of whether you use async/await or a callback, the resulting response is of type IApiResponse<T>,
where T is the resulting data type depending on the endpoint used.
It can be read as follows:
// check if the response is successful by response.IsSuccess
if (response.IsSuccess) {
// request is a success
// read the Result property from the IApiResponse<T>.
CreateChatCompletionResponseData data = response.Result;
// ... read the response data ...
}
else {
// request failed.
// read the Error from the IApiResponse<T>
IApiError error = response.Error;
// ... read the error ...
// the child property, ErrorContent, contains the deserialized errors from the API.
// if an exception occurred while handling the result,
// the message will contain the exception message;
// however, RawResponse would still contain the original response from the API, as a string.
Console.WriteLine(error.ErrorContent.Message);
}
Some API endpoints allow a streamed option, using two callbacks: one called upon receiving each block, and one called on completion as a signal to stop, regardless of success.
completionRequestHandler.CreateChatCompletionStreamed(
requestData,
onBlockResponse: response =>
{
// this is called whenever a new line of data comes in.
// it can be handled the same way as in the examples above
},
onComplete: isSuccess => {
// marks an end of the stream
}
);
The onComplete callback is called upon completion of the entire data stream and marks its end.
The isSuccess parameter that comes with the callback indicates whether the stream stopped successfully (when true),
or ended without a formal completion (when false).
It is only considered a success if [DONE] was received from OpenAI's API.
A few endpoints in the API accept certain values as either a single string or an array of strings.
Namely, stop for CreateChatCompletion and CreateCompletion, and prompt for CreateCompletion.
These are handled using the type object rather than strongly typed properties.
However, setting them without the factory methods, or changing them after creation,
must go through the methods designed for them, as they cannot be set directly from outside,
such as:
CreateCompletionRequestData requestData = CreateCompletionRequestData.Create(...);
requestData.WithPrompt(newPrompt);
// and
requestData.WithStop(newStop);
These methods can be chained: requestData.WithPrompt(prompt).WithStop(stops);
They provide two overloads: one with params string[] and one with List<string>.
When used with List<string>, the list passed in as the parameter is assigned by reference,
hence you can hold on to the original reference to the list and change its contents.
The factory methods are provided in a similar fashion.
However, keep in mind that the reference is replaced when calling these methods again.
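A small sketch of the by-reference behavior described above (the Create arguments are placeholders):

```csharp
var stops = new List<string> { "\n" };
CreateCompletionRequestData requestData =
    CreateCompletionRequestData.Create(/* model, prompt, ... */).WithStop(stops);

// Because the list is held by reference, mutating the original list
// later also changes what the request data will send:
stops.Add("END");
```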
For Unity projects: although Unity started supporting C# 8.0 at some point,
its documentation says that the specific feature commonly used for
streamed data from the API, namely IAsyncEnumerable<T>, is not supported.
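In Unity, this means the callback-based streamed methods shown earlier should be used instead. A rough sketch, where the MonoBehaviour wiring is an assumption for illustration and not part of this project:

```csharp
using UnityEngine;

public class ChatStreamExample : MonoBehaviour
{
    void Start()
    {
        // Assumes ApiConfig.Initialize(<api-key>) was already called elsewhere,
        // and model/messages are set up as in the earlier examples.
        var handler = new ChatCompletionRequestHandler();
        var requestData = CreateChatCompletionRequestData.Create(model, messages);
        handler.CreateChatCompletionStreamed(
            requestData,
            onBlockResponse: response => Debug.Log(response.RawResponse),
            onComplete: isSuccess => Debug.Log($"Stream ended, success: {isSuccess}")
        );
    }
}
```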
We welcome contributions of code to the OpenAI API Integration Project.
Please make sure that your PR includes a clear description of the problem you are trying to solve, and the changes you have made to the code.
You can find the source code and documentation for this project on GitHub (in case you are not already here).
If you encounter a bug or have a feature request, please feel free to reach out to us via GitHub Issues.
Please include as much detail as possible about the problem or feature, including any relevant error messages or screenshots.
You can also help improve the OpenAI API Integration Project by:
The project is licensed under the MIT License. See the LICENSE file for more information.