Drop-in HttpClient wrapper for .NET 8+ with Polly resilience, response caching, and OpenTelemetry. One-line setup eliminates boilerplate for retries, circuit breakers, correlation IDs, and structured logging. Perfect for microservices, APIs, and web scrapers. Includes authentication providers (Bearer, Basic, API Key), concurrent requests, fire-and-forget operations, and streaming support. MIT licensed, 252+ tests. For web crawling features, see the WebSpark.HttpClientUtility.Crawler package.
A production-ready HttpClient wrapper for .NET 8+ that makes HTTP calls simple, resilient, and observable.
Stop writing boilerplate HTTP code. Get built-in resilience, caching, telemetry, and structured logging out of the box.
Starting with v2.0, the library is split into two packages:
| Package | Purpose | Size | Use When |
|---|---|---|---|
| WebSpark.HttpClientUtility | Core HTTP features | 163 KB | You need HTTP client utilities (authentication, caching, resilience, telemetry) |
| WebSpark.HttpClientUtility.Crawler | Web crawling extension | 75 KB | You need web crawling, robots.txt parsing, sitemap generation |
Upgrading from v1.x? Most users need no code changes! See the Migration Guide below.
See the complete documentation site for full details.
## Install

```bash
dotnet add package WebSpark.HttpClientUtility
```

## 5-Minute Example
```csharp
// Program.cs - Register services (ONE LINE!)
builder.Services.AddHttpClientUtility();
```

```csharp
// YourService.cs - Make requests
public class WeatherService
{
    private readonly IHttpRequestResultService _httpService;

    public WeatherService(IHttpRequestResultService httpService) => _httpService = httpService;

    public async Task<WeatherData?> GetWeatherAsync(string city)
    {
        var request = new HttpRequestResult<WeatherData>
        {
            RequestPath = $"https://api.weather.com/forecast?city={city}",
            RequestMethod = HttpMethod.Get
        };

        var result = await _httpService.HttpSendRequestResultAsync(request);
        return result.IsSuccessStatusCode ? result.ResponseResults : null;
    }
}
```

That's it! You now have built-in resilience, caching, correlation IDs, and structured logging out of the box.
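For contrast, here is roughly what the same call looks like with a bare `HttpClient` and `System.Net.Http.Json`, with none of the wrapper's retries, caching, or correlation IDs layered on (standard .NET APIs, not part of this library):

```csharp
using System.Net.Http.Json;

// Plain-HttpClient equivalent of the WeatherService above, with none of
// the wrapper's resilience, caching, or logging behavior.
public class RawWeatherService
{
    private readonly HttpClient _client;

    public RawWeatherService(HttpClient client) => _client = client;

    public async Task<WeatherData?> GetWeatherAsync(string city)
    {
        using var response = await _client.GetAsync(
            $"https://api.weather.com/forecast?city={Uri.EscapeDataString(city)}");

        if (!response.IsSuccessStatusCode)
            return null;

        return await response.Content.ReadFromJsonAsync<WeatherData>();
    }
}
```

Even this minimal version omits the retry, circuit-breaker, and telemetry concerns the wrapper handles for you.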
## Install Both Packages

```bash
dotnet add package WebSpark.HttpClientUtility
dotnet add package WebSpark.HttpClientUtility.Crawler
```

## Register Services

```csharp
// Program.cs
builder.Services.AddHttpClientUtility();
builder.Services.AddHttpClientCrawler(); // Adds crawler features
```

## Use Crawler
```csharp
public class SiteAnalyzer
{
    private readonly ISiteCrawler _crawler;

    public SiteAnalyzer(ISiteCrawler crawler) => _crawler = crawler;

    public async Task<CrawlResult> AnalyzeSiteAsync(string url)
    {
        var options = new CrawlerOptions
        {
            MaxDepth = 3,
            MaxPages = 100,
            RespectRobotsTxt = true
        };

        return await _crawler.CrawlAsync(url, options);
    }
}
```

| Challenge | Solution |
|---|---|
| Boilerplate Code | One-line service registration replaces 50+ lines of manual setup |
| Transient Failures | Built-in Polly integration for retries and circuit breakers |
| Repeated API Calls | Automatic response caching with customizable duration |
| Observability | Correlation IDs, structured logging, and OpenTelemetry support |
| Testing | All services are interface-based for easy mocking |
| Package Size | Modular design - install only what you need |
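The "one-line registration" claim in the table above is relative to wiring a resilient client by hand. A rough sketch of the manual equivalent using standard Microsoft.Extensions.Http.Polly extensions (not this library's API; the policy values here are illustrative, not defaults):

```csharp
using Polly;
using Polly.Extensions.Http;
using Microsoft.Extensions.DependencyInjection;

// Manual wiring of the kind AddHttpClientUtility() is meant to replace:
// a named client with retry and circuit-breaker policies attached.
var services = new ServiceCollection();

services.AddHttpClient("weather")
    .AddPolicyHandler(HttpPolicyExtensions
        .HandleTransientHttpError()                       // 5xx, 408, HttpRequestException
        .WaitAndRetryAsync(3, attempt =>
            TimeSpan.FromSeconds(Math.Pow(2, attempt))))  // exponential backoff
    .AddPolicyHandler(HttpPolicyExtensions
        .HandleTransientHttpError()
        .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30)));

// ...and that is before adding caching, correlation IDs,
// or structured logging handlers on top.
```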
```csharp
builder.Services.AddHttpClientUtility(options =>
{
    options.EnableCaching = true;
});

// In your service
var request = new HttpRequestResult<Product>
{
    RequestPath = "https://api.example.com/products/123",
    RequestMethod = HttpMethod.Get,
    CacheDurationMinutes = 10 // Cache for 10 minutes
};
```

```csharp
builder.Services.AddHttpClientUtility(options =>
{
    options.EnableResilience = true;
    options.ResilienceOptions.MaxRetryAttempts = 3;
    options.ResilienceOptions.RetryDelay = TimeSpan.FromSeconds(2);
});
```

```csharp
builder.Services.AddHttpClientUtilityWithAllFeatures();
```

No code changes required! Simply upgrade:
```bash
dotnet add package WebSpark.HttpClientUtility --version 2.0.0
```

Your existing code continues to work exactly as before. All core HTTP features (authentication, caching, resilience, telemetry, etc.) are still in the base package with the same API.
Three simple steps to migrate:
**Step 1: Install the crawler package**

```bash
dotnet add package WebSpark.HttpClientUtility.Crawler --version 2.0.0
```

**Step 2: Add using directive**

```csharp
using WebSpark.HttpClientUtility.Crawler;
```

**Step 3: Update service registration**

```csharp
// v1.x (old)
services.AddHttpClientUtility();

// v2.0 (new)
services.AddHttpClientUtility();
services.AddHttpClientCrawler(); // Add this line
```

That's it! Your crawler code (`ISiteCrawler`, `SiteCrawler`, `SimpleSiteCrawler`, etc.) works identically after these changes.
Need Help? See the detailed migration guide or open an issue.
Explore working examples in the samples directory.
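The package description also mentions concurrent requests. The exact helper APIs are not shown in this README, but a straightforward fan-out using the request type from the examples above might look like this (a sketch, assuming the same `IHttpRequestResultService` overload used earlier):

```csharp
using System.Linq;

// Fan out several requests concurrently and collect the successful results.
// Reuses HttpRequestResult<T> / IHttpRequestResultService from the examples;
// the library's own concurrency helpers may be preferable where available.
public async Task<List<WeatherData>> GetWeatherForCitiesAsync(IEnumerable<string> cities)
{
    var tasks = cities.Select(city =>
        _httpService.HttpSendRequestResultAsync(new HttpRequestResult<WeatherData>
        {
            RequestPath = $"https://api.weather.com/forecast?city={city}",
            RequestMethod = HttpMethod.Get
        }));

    var results = await Task.WhenAll(tasks);

    return results
        .Where(r => r.IsSuccessStatusCode && r.ResponseResults is not null)
        .Select(r => r.ResponseResults!)
        .ToList();
}
```

Because each call goes through the same service, each fan-out request still benefits from the configured resilience and caching behavior.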
| Feature | WebSpark.HttpClientUtility | Raw HttpClient | RestSharp | Refit |
|---|---|---|---|---|
| Setup Complexity | ⭐ One line | ⭐⭐⭐ Manual | ⭐⭐ Low | ⭐⭐ Low |
| Built-in Caching | ✅ Yes | ❌ Manual | ❌ Manual | ⚠️ Plugin |
| Built-in Resilience | ✅ Yes | ❌ Manual | ❌ Manual | ❌ Manual |
| Telemetry | ✅ Built-in | ⚠️ Manual | ⚠️ Manual | ⚠️ Manual |
| Type Safety | ✅ Yes | ⚠️ Partial | ✅ Yes | ✅ Yes |
| Web Crawling | ✅ Yes | ❌ No | ❌ No | ❌ No |
| .NET 8 LTS Support | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
Contributions are welcome! See our Contributing Guide for details.
This project is licensed under the MIT License - see the LICENSE file for details.
Questions or Issues? Open an issue or start a discussion!