Useful utilities for .NET development, including a generic host implementation for console applications.
```sh
dotnet add package Kagamine.Extensions
```

This repository contains a suite of libraries that provide facilities commonly needed when creating production-ready applications (as Microsoft puts it). Human-coded, as with all of my work.
Tailors the Generic Host framework for console apps the way WebApplication does for ASP.NET Core. Using IHost is desirable for its dependency injection, logging, and configuration setup, as well as for consistency with web apps (not to mention that the EF Core migrations tooling uses it to discover the DbContext), but the out-of-box experience is mainly designed for background workers, which leads to some frustrations when trying to use it in a regular executable.
Example Program.cs:
```csharp
using Kagamine.Extensions.Hosting;
using Microsoft.Extensions.DependencyInjection;

var builder = ConsoleApplication.CreateBuilder();

builder.Services.AddDbContext<FooContext>();
builder.Services.AddScoped<IFooService, FooService>();

// May optionally be async and/or return an exit code
builder.Run((IFooService fooService, CancellationToken cancellationToken) =>
{
    fooService.DoStuff(cancellationToken);
});
```
Compared to repurposing IHostedService or BackgroundService to run a console app:
- SIGINT, SIGQUIT, and SIGTERM produce the correct exit codes (in background services, a Ctrl+C is supposed to trigger a "graceful" shutdown and exit with zero, which only makes sense for a long-lived worker or server)

Several real-world examples of this being used can be found in Serifu.org's projects.
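As noted in the comment in the Program.cs example above, the delegate passed to `Run` may also be async and return an exit code. A sketch of what that might look like (the `TryDoStuffAsync` method is hypothetical, and the exact overload shapes may differ):

```csharp
// Sketch only: per the comment above, the delegate may be async and/or
// return an int to be used as the process exit code.
builder.Run(async (IFooService fooService, CancellationToken cancellationToken) =>
{
    bool ok = await fooService.TryDoStuffAsync(cancellationToken);
    return ok ? 0 : 1;
});
```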
> [!NOTE]
> ASP.NET Core projects include a launchSettings.json by default which sets the environment to "Development" in dev, but you have to create this file yourself for a console app. The easiest way in Visual Studio is to open Debug > {Project Name} Debug Properties and under Environment Variables add `DOTNET_ENVIRONMENT=Development`. Note that the `ASPNETCORE_` prefix won't work here, as that's specific to WebApplication.
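For reference, a minimal Properties/launchSettings.json achieving the same thing might look like this (the profile name is just an example):

```json
{
  "profiles": {
    "MyConsoleApp": {
      "commandName": "Project",
      "environmentVariables": {
        "DOTNET_ENVIRONMENT": "Development"
      }
    }
  }
}
```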
There's currently no solution in .NET for putting a collection in a record while maintaining both immutability and value semantics. It's also sometimes necessary to have access to the underlying array for interop with APIs that do not support spans (especially for byte arrays, where copying can have a significant performance impact).
To solve this, I've created a ValueArray<T> type which represents a read-only array with value type semantics suitable for use in immutable records:
| Type | Immutable | Value equality | To/from array w/o copying |
|---|---|---|---|
| `T[]` | ❌ | ❌ | ✅ |
| `List<T>` | ❌ | ❌ | ❌ |
| `ReadOnlyCollection<T>` | ❌ (1) | ❌ | ❌ |
| `IReadOnlyList<T>` | ❌ (1) | ❌ | ❌ |
| `ImmutableArray<T>` (2) | ✅ (3,4) | ❌ | ✅ (3) |
| `ReadOnlyMemory<T>` | ✅ (4,5) | ❌ | ⚠️ (5) |
| `ValueArray<T>` | ✅ (4,6) | ✅ | ✅ (6) |
1. `ReadOnlyCollection<T>` is merely a read-only view of a `List<T>`, and `IReadOnlyList<T>` is usually the `List<T>` itself.
2. Has a bug caused by misuse of the null suppression operator that can cause a null reference exception, which won't be caught by static analysis, if any code returns its `default`. (`ValueArray<T>` fixes this by treating a null array as empty, as it is also a struct.)
3. `ImmutableCollectionsMarshal` can be used to access the underlying array or create an instance backed by an existing array.
4. Can be modified inadvertently if a reference is held to the array used to construct it, or if the underlying buffer is accessed and passed to a method that does not treat it as read-only.
5. Depending on how the `ReadOnlyMemory<T>` was created, it may be possible to access the buffer using `MemoryMarshal`, but there's no guarantee the instance is backed by an actual array, or it may represent a slice of an array (like `Span<T>`).
6. Supports implicit conversion from `T[]`, and the underlying array can be accessed via explicit cast to `T[]`.
ValueArray<T> supports both collection expressions and array initializers (via implicit cast):
```csharp
record Song(string Title, ValueArray<string> Artists);

Song song = new("Promise", ["samfree", "Kagamine Rin", "Hatsune Miku"]);
Song song2 = song with { Artists = [.. song.Artists] /* Clone the array */ };

// These would fail if Artists were List<T>, despite the contents being identical
Assert.True(song == song2);
Assert.True(song.Artists == song2.Artists);

ValueArray<Song> songs = new[] { song, song2 };
```
It's interoperable with spans as well as APIs requiring arrays such as Entity Framework. Using a value converter, a ValueArray<byte> can be cast to its underlying byte[] to use as a BLOB column without the overhead of copying an array:
```csharp
entity.Property<ValueArray<byte>>(x => x.Data)
    .HasColumnName("data")
    .HasConversion(model => (byte[])model, column => column);
```
When T is an unmanaged type, ValueArray<T> can also be marshaled to and from ReadOnlySpan<byte>. This could be used, for instance, to store an array of structs in a database as an opaque blob using their binary representation:
```csharp
readonly record struct Alignment(ushort FromStart, ushort FromEnd, ushort ToStart, ushort ToEnd);

entity.Property<ValueArray<Alignment>>(q => q.AlignmentData)
    .HasConversion(
        model => ValueArray.ToByteArray(model), // Equivalent to model.AsBytes().ToArray()
        column => ValueArray.FromBytes<Alignment>(column));
```
Incidentally, this is how Serifu.org stores word alignment data in a local SQLite DB. I've also created a JsonConverter that uses the same technique to efficiently serialize a ValueArray<T> of structs in JSON as a base64 string (which it uses in production for storing the alignment data in Elasticsearch):
```csharp
ValueArray<DateTime> dates = [DateTime.Parse("2007-08-31"), DateTime.Parse("2007-12-27")];

var options = new JsonSerializerOptions() { Converters = { new JsonBase64ValueArrayConverter() } };
var json = JsonSerializer.Serialize(dates, options); // "AIAeAnm5yQgAAN2OMhbKCA=="
```
Without a converter, a `ValueArray<T>` will serialize as a regular JSON array. To deserialize a JSON array as `ValueArray<T>` (as System.Text.Json cannot natively deserialize to a custom read-only collection), use the `JsonValueArrayConverter`. Both converters have generic versions to mix and match for specific `T`s.
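For instance, deserializing a plain JSON array might look like this (a sketch; the JSON payload is illustrative):

```csharp
var options = new JsonSerializerOptions { Converters = { new JsonValueArrayConverter() } };

ValueArray<string> artists = JsonSerializer.Deserialize<ValueArray<string>>(
    """["samfree", "Kagamine Rin"]""", options);
```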
Provides a number of advantages for working with temp files over Path.GetTempFileName():
- Unlike `GetTempFileName()`, it's possible to specify a file extension or suffix, which may be necessary when passing the file path to certain programs (and unlike common solutions on Stack Overflow, it guarantees that the file name is unique and avoids race conditions)
- Can be wrapped in a `using`, which will automatically clean up the temp file when disposed

```csharp
public async Task<Stream> ConvertToOpus(Stream inputStream, CancellationToken cancellationToken)
{
    using TemporaryFile inputFile = tempFileProvider.Create();
    await inputFile.CopyFromAsync(inputStream);

    using TemporaryFile outputFile = tempFileProvider.Create(".opus");

    await FFMpegArguments
        .FromFileInput(inputFile.Path)
        .OutputToFile(outputFile.Path, overwrite: true, options => options
            .WithAudioBitrate(Bitrate))
        .CancellableThrough(cancellationToken)
        .ProcessAsynchronously();

    // If ffmpeg throws, both temp files will be deleted.
    //
    // If it succeeds, the input file is deleted, but the output file remains on
    // disk until the returned stream is closed, at which point the remaining
    // temp file will be cleaned up automatically.
    return outputFile.OpenRead();
}
```
ITemporaryFileProvider is added to the service container like so:
```csharp
services.AddTemporaryFileProvider();
```
Or you can construct a `TemporaryFileProvider` yourself if not using DI. The temp directory and base filename format (by default a GUID) can be changed via the options (see its overloads).
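A sketch of direct construction without DI (the exact constructor/options overloads may differ; `Create` with a suffix is shown in the example above):

```csharp
// Sketch only: constructing the provider directly rather than via DI
var tempFileProvider = new TemporaryFileProvider();

using TemporaryFile tempFile = tempFileProvider.Create(".txt");
File.WriteAllText(tempFile.Path, "scratch data");

// The temp file is deleted when tempFile is disposed at the end of the scope
```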
A DelegatingHandler that uses System.Threading.RateLimiting to force requests to the same host to wait for a configured period of time since the last request completed before sending a new request:
```csharp
// Add the rate limiter to all HttpClients
builder.Services.ConfigureHttpClientDefaults(builder => builder.AddRateLimiting());

// In libraries, consider adding it only to your own named or typed client; the
// rate limit won't stack even if the top-level project adds it to all clients
builder.Services.AddHttpClient("foo").AddRateLimiting();

// Alternatively, if not using DI
using RateLimitingHttpHandlerFactory rateLimiterFactory = new();
RateLimitingHttpHandler rateLimiter = rateLimiterFactory.CreateHandler();
rateLimiter.InnerHandler = new HttpClientHandler();
HttpClient client = new(rateLimiter);
```
When using DI, the per-host rate limit is shared across all named clients. This avoids accidentally hitting a host more frequently than intended simply because the code happens to use multiple clients.
To change the default time between requests or set different rate limits per host:
```csharp
builder.Services.Configure<RateLimitingHttpHandlerOptions>(options =>
{
    // Setting it to null disables rate limiting by default; can also leave rate
    // limiting on by default and disable it for specific hosts instead.
    // Libraries that need to enforce a particular rate limit for their APIs
    // should avoid relying on the global TimeBetweenRequests.
    options.TimeBetweenRequests = null;
    options.TimeBetweenRequestsByHost.Add("example.com", TimeSpan.FromSeconds(5));
});
```
Note that the timer starts after the response has been received and returned to the caller, not before sending the request. Otherwise, slow responses and network latency could result in requests exhibiting effectively no rate limit.
Run the sample ConsoleApp for a demo.
A small extension method inspired by SerilogMetrics, which I've used on a number of projects in the past:
```csharp
using (logger.BeginTimedOperation(nameof(DoStuff)))
{
    logger.Debug("Doing stuff...");
}

// [12:00:00 INF] DoStuff: Starting
// [12:00:00 DBG] Doing stuff...
// [12:00:01 INF] DoStuff: Completed in 39 ms
```
Sends ANSI escape codes to display a progress bar in the terminal and clear it automatically when disposed:
```csharp
using var progress = new TerminalProgressBar();

for (int i = 0; i < foos.Count; i++)
{
    logger.Information("Foo {Foo} of {TotalFoos}", i + 1, foos.Count);
    progress.SetProgress(i, foos.Count);

    await fooService.DoStuff(foos[i]);
}
```
Allows for replacing an existing entity with a new instance, correctly transferring both regular property values and navigation properties. This is needed because EF will throw if you try to pass a detached entity to Update() while another instance with the same primary key is tracked (e.g. by another query performed elsewhere):
```csharp
var existingEntities = await db.Foos.ToDictionaryAsync(f => f.Id);

foreach (var entity in entities)
{
    if (existingEntities.Remove(entity.Id, out var existingEntity))
    {
        db.Foos.Update(existingEntity, entity);
    }
    else
    {
        db.Foos.Add(entity);
    }
}

db.Foos.RemoveRange(existingEntities.Values);
await db.SaveChangesAsync();
```
> [!TIP]
> If it makes sense for your application, consider using IDbContextFactory instead and creating a new context for each unit of work. Doing so can avoid the sort of "spooky action at a distance" (other parts of the code affecting the state of the change tracker unpredictably) that makes doing something like this necessary.
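A minimal sketch of that factory approach, using standard EF Core APIs (`FooImporter` and its method are illustrative names, not part of this library):

```csharp
services.AddDbContextFactory<FooContext>();

class FooImporter(IDbContextFactory<FooContext> contextFactory)
{
    public async Task ImportAsync(IEnumerable<Foo> entities, CancellationToken cancellationToken)
    {
        // Each unit of work gets a fresh context, and thus a fresh change
        // tracker unaffected by queries performed elsewhere
        await using var db = await contextFactory.CreateDbContextAsync(cancellationToken);
        db.Foos.AddRange(entities);
        await db.SaveChangesAsync(cancellationToken);
    }
}
```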
⚠️ Removed in v2.0.0, as this was made official in EF 9. Older projects can copy the method from here.
Mirrors `ToArrayAsync` and `ToListAsync`. Implemented using `await foreach`, like the other two, making it slightly more performant than calling `ToListAsync` followed by `ToHashSet`:
```csharp
HashSet<string> referencedFiles = await db.Foos
    .Select(f => f.FilePath)
    .ToHashSetAsync(StringComparer.OrdinalIgnoreCase);

foreach (var file in Directory.EnumerateFiles(dir))
{
    if (!referencedFiles.Contains(file))
    {
        logger.Warning("Deleting orphaned file {Path}", file);
        File.Delete(file);
    }
}
```