Class library for interacting with the Anthropic REST API.
$ dotnet add package Universal.Anthropic.Client

Universal.Anthropic.Client is a C# library for interacting with the Anthropic API. It provides a simple, efficient way to create messages and handle streaming responses from Anthropic's AI models.
var client = new AnthropicClient("YOUR_API_KEY");
var request = new MessageRequest
{
Model = Models.ClaudeSonnet45,
Messages = new List<Message>
{
new Message(Roles.User, "Hello, how are you?")
}
};
var response = await client.CreateMessageAsync(request);
Console.WriteLine(response.Content[0].Text);
var streamingRequest = new MessageRequest
{
Model = "claude-sonnet-4", // Custom model string, if so desired
Messages = new List<Message>
{
new Message(Roles.User, "Tell me a story about a brave knight.")
},
Stream = true
};
var streamingResponse = client.CreateMessageStreamingAsync(streamingRequest);
streamingResponse.Updated += (sender, args) =>
{
Console.WriteLine("Response updated: " + streamingResponse.Value.Content);
};
await streamingResponse.Task; // Wait for completion
The Token Count API allows you to count the number of tokens in a message, including tools, images, and documents, without actually creating the message. This is useful for estimating costs, validating message size limits, or optimizing your requests:
// Create a token counting request
var countRequest = new CountMessageTokensRequest
{
Model = Models.ClaudeOpus4,
Messages = new List<Message>
{
new Message(Roles.User, "What is the square root of 841, and how did you determine it?")
},
System = "You are an assistant for Red Marble AI. Show your detailed reasoning process when solving problems.",
Thinking = new Thinking
{
Type = ThinkingTypes.Enabled,
BudgetTokens = 2048
}
};
// Count the tokens
var tokenCountResponse = await client.CountMessageTokensAsync(countRequest);
Console.WriteLine($"Input tokens: {tokenCountResponse.InputTokens}");
// You can now use this information to make decisions about your request
var maxAllowedTokens = 100_000; // example limit; choose one appropriate for your model
if (tokenCountResponse.InputTokens > maxAllowedTokens)
{
Console.WriteLine("Message exceeds token limit, please reduce content.");
}
else
{
// Proceed with creating the actual message
var messageRequest = new MessageRequest
{
Model = countRequest.Model,
Messages = countRequest.Messages,
System = countRequest.System,
Thinking = countRequest.Thinking,
MaxTokens = 8192
};
var response = await client.CreateMessageAsync(messageRequest);
}
The CountMessageTokensResponse includes:
InputTokens: The number of input tokens in your request
CountMessageTokensAsync supports the same parameters as CreateMessageAsync, including tools, images, documents, system prompts, and thinking configuration.
The library provides a way to retrieve all available models from the Anthropic API:
// Get all available models
var modelsResponse = await client.ListModelsAsync();
// Display available models
foreach (var model in modelsResponse.Data)
{
Console.WriteLine($"ID: {model.Id}");
Console.WriteLine($"Name: {model.DisplayName}");
Console.WriteLine($"Created: {model.CreatedAt}");
Console.WriteLine();
}
This functionality allows you to programmatically determine which models are available for use with the API. Models are returned with the most recently released ones listed first.
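Because the newest models are listed first, one simple way to select the most recent model programmatically is to take the first entry. This is a sketch built on the ListModelsAsync call shown above; adapt the selection logic to your own criteria:

```csharp
// Pick the most recently released model, relying on the documented ordering
var modelsResponse = await client.ListModelsAsync();
var newestModel = modelsResponse.Data.First();

var request = new MessageRequest
{
    Model = newestModel.Id,
    Messages = new List<Message>
    {
        new Message(Roles.User, "Hello!")
    }
};
```

If you need a specific model family rather than the absolute newest, filter on the Id or DisplayName before taking the first match.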
The Anthropic API supports the use of tools, allowing the AI to perform specific actions or retrieve information. Here's an example of how to use a tool for weather checking:
// Define a weather tool
var weatherTool = new Tool
{
Name = "get_weather",
Description = "Get the current weather in a given location",
InputSchema = new JsonSchema
{
Type = "object",
Properties = new Dictionary<string, JsonSchema>()
{
["location"] = new JsonSchema()
{
Type = "string",
Description = "The city and state, e.g. San Francisco, CA"
}
},
Required = new[] { "location" }
}
};
// Create a message request with the tool
var request = new MessageRequest()
{
Model = Models.Claude35Sonnet,
Messages = new List<Message>
{
new Message(Roles.User, "What's the weather like in San Francisco?")
},
System = "You are an assistant for Red Marble AI. When asked about weather, always use the get_weather tool to provide accurate information.",
MaxTokens = 8192,
Tools = new List<Tool> { weatherTool },
ToolChoice = new AutoToolChoice()
};
// Send the request
var response = await client.CreateMessageAsync(request);
// Check if the tool was used
var toolUseBlock = response.Content.OfType<ToolUseContentBlock>().FirstOrDefault();
if (toolUseBlock != null)
{
Console.WriteLine($"Tool used: {toolUseBlock.Name}");
Console.WriteLine($"Tool input: {toolUseBlock.Input}");
}
In this example, we define a get_weather tool with an input schema for location. We then include this tool in our message request, along with a system message instructing the AI to use the weather tool for weather-related questions. The ToolChoice is set to AutoToolChoice, allowing the model to decide when to use the tool.
After receiving the response, you can check if the tool was used by looking for a ToolUseContentBlock in the response content.
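To complete the tool loop, you typically execute the tool yourself and send the result back in a follow-up request so the model can compose its final answer. The sketch below assumes a ToolResultContentBlock type and a Message constructor that accepts content blocks; these names are assumptions about this library's API, so check the class reference for the actual types. GetWeatherFromYourService is a hypothetical helper standing in for your own weather lookup:

```csharp
if (toolUseBlock != null)
{
    // Run your own weather lookup (hypothetical helper) with the tool's input
    var weatherResult = GetWeatherFromYourService(toolUseBlock.Input);

    // Append the assistant's tool-use turn, then a user turn carrying the result.
    // ToolResultContentBlock and this Message overload are assumed names.
    request.Messages.Add(new Message(Roles.Assistant, response.Content));
    request.Messages.Add(new Message(Roles.User, new List<ContentBlock>
    {
        new ToolResultContentBlock
        {
            ToolUseId = toolUseBlock.Id,
            Content = weatherResult
        }
    }));

    // The model now answers using the tool result
    var finalResponse = await client.CreateMessageAsync(request);
}
```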
Claude 3.7 Sonnet and newer models support an extended thinking feature that reveals Claude's detailed reasoning process. This feature helps you understand how Claude arrives at its answers, especially for complex problems:
// Create a request with extended thinking enabled
var request = new MessageRequest
{
Model = Models.Claude37Sonnet,
Messages = new List<Message>
{
new Message(Roles.User, "What's the square root of 841, and how did you determine it?")
},
MaxTokens = 8192,
Thinking = new Thinking
{
Type = ThinkingTypes.Enabled,
BudgetTokens = 2048
}
};
var response = await client.CreateMessageAsync(request);
// Process the response
foreach (var block in response.Content)
{
if (block is ThinkingContentBlock thinkingBlock)
{
Console.WriteLine("Thinking Process:");
Console.WriteLine(thinkingBlock.Thinking);
Console.WriteLine($"Signature: {thinkingBlock.Signature}");
}
else if (block is TextContentBlock textBlock)
{
Console.WriteLine("Final Answer:");
Console.WriteLine(textBlock.Text);
}
}
AnthropicClient: The main client for interacting with the Anthropic API.
MessageRequest: Represents a request to create a message.
MessageResponse: Represents the response from creating a message.
CountMessageTokensRequest: Represents a request to count tokens in a message.
CountMessageTokensResponse: Represents the response containing token count information.
StreamingMessageResponse: Represents a streaming response for real-time updates.
ListResponse<T>: Generic response for list operations, used with Model for listing available models.

The client throws HttpException for non-successful status codes. Make sure to handle these exceptions in your code.
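A minimal error-handling sketch; only the exception's Message is used here, since the other members HttpException exposes (such as a status code) are not documented above and would be assumptions:

```csharp
try
{
    var response = await client.CreateMessageAsync(request);
    Console.WriteLine(response.Content[0].Text);
}
catch (HttpException ex)
{
    // Non-successful status codes (e.g. an invalid API key or rate limiting)
    // surface here rather than as a null or partial response
    Console.Error.WriteLine($"Anthropic API call failed: {ex.Message}");
}
```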
You can customize the API version used by setting the AnthropicVersion property on the client:
client.AnthropicVersion = "2023-06-01";
You can also opt into betas by setting the AnthropicBetas property on the client:
client.AnthropicBetas = ["beta-version-1"];
For more detailed information about the Anthropic API, please refer to the official Anthropic documentation.