Drop-in HttpClient wrapper for .NET 8-10+ with Polly resilience, response caching, and OpenTelemetry. One-line setup eliminates boilerplate for retries, circuit breakers, correlation IDs, and structured logging. Perfect for microservices, APIs, and web scrapers. Includes authentication providers (Bearer, Basic, API Key), concurrent requests, fire-and-forget operations, and streaming support. MIT licensed, 252+ tests. For web crawling features, see WebSpark.HttpClientUtility.Crawler package.
```shell
dotnet add package WebSpark.HttpClientUtility
```

Drop-in HttpClient wrapper with Polly resilience, response caching, and OpenTelemetry for .NET 8-10+ APIs, configured in one line.
Stop writing 50+ lines of HttpClient setup. Get enterprise-grade resilience (retries, circuit breakers), intelligent caching, structured logging with correlation IDs, and OpenTelemetry tracing in a single AddHttpClientUtility() call. Perfect for microservices, background workers, and web scrapers.
Your HTTP setup in 1 line vs. 50+
| Feature | WebSpark.HttpClientUtility | Raw HttpClient | RestSharp | Refit |
|---|---|---|---|---|
| Setup Complexity | ⭐ One line | ⭐⭐⭐ 50+ lines manual | ⭐⭐ Low | ⭐⭐ Low |
| Built-in Retry/Circuit Breaker | ✅ Polly integrated | ❌ Manual Polly setup | ❌ Manual | ❌ Manual |
| Response Caching | ✅ Configurable, in-memory | ❌ Manual | ❌ Manual | ❌ Manual |
| Correlation IDs | ✅ Automatic | ❌ Manual middleware | ❌ Manual | ❌ Manual |
| OpenTelemetry | ✅ Built-in | ❌ Manual ActivitySource | ❌ Manual | ❌ Manual |
| Structured Logging | ✅ Rich context | ❌ Manual ILogger | ⭐⭐ Basic | ⭐⭐ Basic |
| Web Crawling | ✅ Separate package | ❌ No | ❌ No | ❌ No |
| Production Trust | ✅ 252+ tests, LTS support | ✅ Microsoft-backed | ✅ Popular (7M+ downloads) | ✅ Popular (10M+ downloads) |
When to use WebSpark:
When NOT to use WebSpark:
Battle-Tested & Production-Ready
The library builds with `TreatWarningsAsErrors=true`.

Support & Maintenance
Breaking Change Commitment
We follow semantic versioning strictly:
Starting with v2.0, the library is split into two packages:
| Package | Purpose | Size | Use When |
|---|---|---|---|
| WebSpark.HttpClientUtility | Core HTTP features | 163 KB | You need HTTP client utilities (authentication, caching, resilience, telemetry) |
| WebSpark.HttpClientUtility.Crawler | Web crawling extension | 75 KB | You need web crawling, robots.txt parsing, sitemap generation |
Upgrading from v1.x? Most users need no code changes! See Migration Guide.
The complete documentation site includes:
Install
```shell
dotnet add package WebSpark.HttpClientUtility
```
Minimal Example
```csharp
// Program.cs
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClientUtility();
var app = builder.Build();

app.MapGet("/weather", async (IHttpRequestResultService http) =>
{
    var request = new HttpRequestResult<WeatherData>
    {
        RequestPath = "https://api.weather.com/forecast?city=Seattle",
        RequestMethod = HttpMethod.Get
    };
    var result = await http.HttpSendRequestResultAsync(request);
    return result.IsSuccessStatusCode ? Results.Ok(result.ResponseResults) : Results.Problem();
});

app.Run();

record WeatherData(string City, int Temp);
```
That's it! You now have:
```csharp
// Program.cs
builder.Services.AddHttpClientUtility(options =>
{
    options.EnableCaching = true;    // Cache responses
    options.EnableResilience = true; // Retry on failure
});

// WeatherService.cs
public class WeatherService
{
    private readonly IHttpRequestResultService _http;
    private readonly ILogger<WeatherService> _logger;

    public WeatherService(
        IHttpRequestResultService http,
        ILogger<WeatherService> logger)
    {
        _http = http;
        _logger = logger;
    }

    public async Task<WeatherData?> GetWeatherAsync(string city)
    {
        var request = new HttpRequestResult<WeatherData>
        {
            RequestPath = $"https://api.weather.com/forecast?city={city}",
            RequestMethod = HttpMethod.Get,
            CacheDurationMinutes = 10 // Cache for 10 minutes
        };

        var result = await _http.HttpSendRequestResultAsync(request);

        if (!result.IsSuccessStatusCode)
        {
            _logger.LogError(
                "Weather API failed: {StatusCode} - {Error}",
                result.StatusCode,
                result.ErrorDetails
            );
            return null;
        }

        return result.ResponseResults;
    }
}
```
```csharp
// Program.cs - Advanced configuration
builder.Services.AddHttpClientUtility(options =>
{
    options.EnableCaching = true;
    options.EnableResilience = true;
    options.ResilienceOptions.MaxRetryAttempts = 3;
    options.ResilienceOptions.RetryDelay = TimeSpan.FromSeconds(2);
    options.DefaultTimeout = TimeSpan.FromSeconds(30);
});

// WeatherService.cs - Advanced usage
public async Task<WeatherData?> GetWeatherWithAuthAsync(string city, string apiKey)
{
    var request = new HttpRequestResult<WeatherData>
    {
        RequestPath = $"https://api.weather.com/forecast?city={city}",
        RequestMethod = HttpMethod.Get,
        CacheDurationMinutes = 10,
        Headers = new Dictionary<string, string>
        {
            ["X-API-Key"] = apiKey,
            ["Accept"] = "application/json"
        }
    };

    var result = await _http.HttpSendRequestResultAsync(request);

    // Correlation ID is automatically logged and propagated
    _logger.LogInformation(
        "Weather request completed in {Duration}ms with correlation {CorrelationId}",
        result.RequestDuration,
        result.CorrelationId
    );

    return result.IsSuccessStatusCode ? result.ResponseResults : null;
}
```
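The package description also advertises concurrent requests. Since `HttpSendRequestResultAsync` returns a `Task`, lookups can be fanned out with `Task.WhenAll` using only the types shown above. A sketch (assuming the same `_http` field and `WeatherData` record from the examples in this README):

```csharp
// Fan out several weather lookups concurrently; each request still gets
// its own retry policy, cache entry, and correlation ID.
public async Task<IReadOnlyList<WeatherData>> GetWeatherForCitiesAsync(IEnumerable<string> cities)
{
    var tasks = cities.Select(city =>
    {
        var request = new HttpRequestResult<WeatherData>
        {
            RequestPath = $"https://api.weather.com/forecast?city={Uri.EscapeDataString(city)}",
            RequestMethod = HttpMethod.Get,
            CacheDurationMinutes = 10
        };
        return _http.HttpSendRequestResultAsync(request);
    }).ToList();

    var results = await Task.WhenAll(tasks);

    // Keep only successful responses; failures were already logged with their correlation IDs.
    return results
        .Where(r => r.IsSuccessStatusCode)
        .Select(r => r.ResponseResults)
        .ToList();
}
```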
Install Both Packages
```shell
dotnet add package WebSpark.HttpClientUtility
dotnet add package WebSpark.HttpClientUtility.Crawler
```
Register Services
```csharp
// Program.cs
builder.Services.AddHttpClientUtility();
builder.Services.AddHttpClientCrawler(); // Adds crawler features
```
Use Crawler
```csharp
public class SiteAnalyzer
{
    private readonly ISiteCrawler _crawler;

    public SiteAnalyzer(ISiteCrawler crawler) => _crawler = crawler;

    public async Task<CrawlResult> AnalyzeSiteAsync(string url)
    {
        var options = new CrawlerOptions
        {
            MaxDepth = 3,
            MaxPages = 100,
            RespectRobotsTxt = true
        };
        return await _crawler.CrawlAsync(url, options);
    }
}
```
```csharp
builder.Services.AddHttpClientUtility(options =>
{
    options.EnableCaching = true;
});

// In your service
var request = new HttpRequestResult<Product>
{
    RequestPath = "https://api.example.com/products/123",
    RequestMethod = HttpMethod.Get,
    CacheDurationMinutes = 10 // Cache for 10 minutes
};
```
```csharp
builder.Services.AddHttpClientUtility(options =>
{
    options.EnableResilience = true;
    options.ResilienceOptions.MaxRetryAttempts = 3;
    options.ResilienceOptions.RetryDelay = TimeSpan.FromSeconds(2);
});
```
```csharp
builder.Services.AddHttpClientUtilityWithAllFeatures();
```
No code changes required! Simply upgrade:
```shell
dotnet add package WebSpark.HttpClientUtility --version 2.0.0
```
Your existing code continues to work exactly as before. All core HTTP features (authentication, caching, resilience, telemetry, etc.) are still in the base package with the same API.
Three simple steps to migrate:
Step 1: Install the crawler package
```shell
dotnet add package WebSpark.HttpClientUtility.Crawler --version 2.0.0
```
Step 2: Add using directive
```csharp
using WebSpark.HttpClientUtility.Crawler;
```
Step 3: Update service registration
```csharp
// v1.x (old)
services.AddHttpClientUtility();

// v2.0 (new)
services.AddHttpClientUtility();
services.AddHttpClientCrawler(); // Add this line
```
That's it! Your crawler code (ISiteCrawler, SiteCrawler, SimpleSiteCrawler, etc.) works identically after these changes.
Need Help? See the detailed migration guide or open an issue.
Explore working examples in the samples directory:
Contributions are welcome! See our Contributing Guide for details.
| Package | Purpose | Status |
|---|---|---|
| WebSpark.HttpClientUtility.Testing | Test helpers & fakes for unit testing | ✅ Available (v2.1.0+) |
Testing Package Features:
Fluent request stubbing via `ForRequest().RespondWith()`.

```shell
dotnet add package WebSpark.HttpClientUtility.Testing
```
See the Testing documentation for examples.
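To illustrate how the fluent `ForRequest().RespondWith()` API might be used, here is a hypothetical unit test against the `WeatherService` from earlier in this README. The fake's type name (`FakeHttpRequestResultService`) is illustrative only, not the package's confirmed API; consult the Testing documentation for the actual types.

```csharp
// Hypothetical sketch: FakeHttpRequestResultService is an assumed name for
// the Testing package's fake; ForRequest()/RespondWith() are the fluent
// stubbing methods the package advertises.
[Fact]
public async Task GetWeatherAsync_ReturnsData_WhenApiSucceeds()
{
    var fakeHttp = new FakeHttpRequestResultService();
    fakeHttp.ForRequest("https://api.weather.com/forecast?city=Seattle")
            .RespondWith(new WeatherData("Seattle", 55));

    var service = new WeatherService(fakeHttp, NullLogger<WeatherService>.Instance);

    var weather = await service.GetWeatherAsync("Seattle");

    Assert.Equal("Seattle", weather?.City);
}
```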
This project is licensed under the MIT License - see the LICENSE file for details.
Questions or Issues? Open an issue or start a discussion!